Deep learning based natural language processing (NLP) has become the mainstream of research in recent years and significantly outperforms conventional methods. However, deep learning models are notorious for being data and computation hungry. These downsides limit the deployment of such models to new domains, languages, countries, or styles, since collecting in-genre data and training models from scratch are costly. The long-tail nature of human language makes these challenges even more significant.
Meta-learning, or ‘Learning to Learn’, aims to learn better learning algorithms, including better parameter initializations, optimization strategies, network architectures, distance metrics, and beyond. Meta-learning has been shown to enable faster fine-tuning, converge to better performance, and achieve outstanding results for few-shot learning in many applications. It is one of the most important recent developments in machine learning, but it has mainly been investigated with applications in computer vision. Meta-learning holds excellent potential for NLP as well, and some works have already achieved notable results on several relevant problems, e.g., relation extraction, machine translation, and dialogue generation and state tracking. However, it has not attracted the same level of attention as in the image processing community.
This workshop (Meta Learning and Its Applications to Natural Language Processing Workshop, or MetaNLP) will bring focused discussion of meta-learning to the field of NLP via several invited talks, oral and poster sessions with high-quality papers, and a panel of leading researchers from industry and academia. Alongside research on new meta-learning methods, data, applications, and results, this workshop calls for novel work on understanding, analyzing, and comparing different meta-learning approaches for NLP. The workshop aims to:
The MetaNLP workshop invites submissions that investigate the theoretical and experimental nature of meta-learning methodologies and their applications to NLP. Relevant research directions include, but are not limited to:
Popular meta-learning topics include, but are not limited to:
We welcome three categories of papers: regular workshop papers, cross-submissions, and extended abstracts. Only regular workshop papers will be included in the proceedings; extended abstracts and cross-submissions will simply be hosted on our website. Submissions should be made via softconf.
University of Cambridge
Stanford University
Carnegie Mellon University
University of Illinois Urbana-Champaign
McGill University
Columbia University
TBD
Hung-Yi Lee
Associate Professor, National Taiwan University
Mitra Mohtarami
Research Scientist, Massachusetts Institute of Technology
Shang-Wen Li
Senior Applied Scientist, Amazon Web Services AI
Di Jin
Applied Scientist, Amazon Alexa AI
Mandy Korpusik
Assistant Professor, Loyola Marymount University
Annie Dong
Applied Scientist, Amazon Alexa AI
Ngoc Thang Vu
Professor, University of Stuttgart
Dilek Hakkani-Tur
Senior Principal Scientist, Amazon Alexa AI
Meta-learning is one of the fastest growing research areas in deep learning. However, there is no standard definition of meta-learning. Usually, the main goal is to design models that can learn new tasks rapidly from a few in-domain training examples by having the models pre-learn from many training tasks, relevant or not, in a way that lets them generalize easily to new tasks. To better delineate the scope of meta-learning, we provide several online courses and papers describing work in the area. These works are for illustration only; we definitely encourage submissions of research that is not covered here but shares the same goal.
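To make this "pre-learn from many tasks, then adapt quickly to a new one" recipe concrete, below is a minimal, self-contained sketch of one popular first-order instantiation in the spirit of Reptile (Nichol et al., 2018) on toy linear-regression tasks. The task setup, model, and hyperparameters are illustrative assumptions made for this page, not a method prescribed by the workshop:

```python
# Minimal sketch of first-order meta-learning (Reptile-style) on toy
# 1-D linear-regression "tasks". The task family, model, and
# hyperparameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is y = a*x + b with task-specific (a, b)."""
    a, b = rng.uniform(-2, 2, size=2)
    return a, b

def task_batch(a, b, n=10):
    x = rng.uniform(-1, 1, size=n)
    return x, a * x + b

def sgd_steps(w, x, y, lr=0.1, steps=5):
    """Inner loop: a few SGD steps on one task (model: y_hat = w[0]*x + w[1])."""
    for _ in range(steps):
        err = w[0] * x + w[1] - y
        grad = np.array([np.mean(err * x), np.mean(err)])  # grad of 0.5*MSE
        w = w - lr * grad
    return w

# Outer loop: nudge the shared initialization toward each task's
# adapted weights (the Reptile update), rather than toward any
# single task's optimum.
meta_w = np.zeros(2)
meta_lr = 0.1
for _ in range(2000):
    a, b = sample_task()
    x, y = task_batch(a, b)
    adapted = sgd_steps(meta_w.copy(), x, y)
    meta_w += meta_lr * (adapted - meta_w)

# At test time, the learned initialization adapts to a NEW task
# from only a few examples.
a, b = sample_task()
x, y = task_batch(a, b, n=5)                 # few-shot support set
w_new = sgd_steps(meta_w.copy(), x, y, steps=25)
print("true (a, b):", (round(a, 2), round(b, 2)))
print("adapted w  :", np.round(w_new, 2))
```

The key design choice is that the outer loop never optimizes for any single task; it optimizes the shared initialization so that a handful of inner-loop gradient steps suffice on a task the model has never seen, which is exactly the few-shot behavior described above.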