Deep learning based natural language processing (NLP) has become the mainstream of research in recent years and significantly outperforms conventional methods. However, deep learning models are notorious for being data- and computation-hungry. These downsides limit the deployment of such models to new domains, languages, countries, or styles, since collecting in-genre data and training models from scratch are costly. The long-tail nature of human language makes these challenges even more significant.
Meta-learning, or ‘Learning to Learn’, aims to learn better learning algorithms, including better parameter initializations, optimization strategies, network architectures, distance metrics, and beyond. Meta-learning has been shown to enable faster fine-tuning, convergence to better performance, and outstanding results for few-shot learning in many applications. It is one of the most important techniques to emerge in machine learning in recent years, but it has been investigated mainly for computer vision applications. Meta-learning is believed to have excellent potential in NLP, and some works with notable achievements have been proposed for several relevant problems, e.g., relation extraction, machine translation, and dialogue generation and state tracking. However, it has not attracted the same level of attention as in the computer vision community.
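As a concrete example of the "better parameter initialization" branch, the sketch below implements a first-order MAML-style inner/outer training loop in PyTorch. It is a minimal sketch under toy assumptions: sample_task, the two-layer model, and all hyperparameters (inner learning rate, step counts) are hypothetical stand-ins for a real few-shot NLP setup, not the method of any paper at this workshop.

```python
# Minimal first-order MAML-style sketch (toy data; all hyperparameters are
# illustrative assumptions, not tuned values from any paper).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_task(n_classes=2, n_support=4, n_query=4, dim=32):
    # Hypothetical task generator: each "task" is a 2-way classification
    # problem whose classes are Gaussian clusters in a toy feature space.
    means = torch.randn(n_classes, dim)
    def split(n_per_class):
        x = torch.cat([means[c] + 0.1 * torch.randn(n_per_class, dim)
                       for c in range(n_classes)])
        y = torch.arange(n_classes).repeat_interleave(n_per_class)
        return x, y
    return split(n_support), split(n_query)

meta_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)

for step in range(1000):                    # outer loop over meta-training tasks
    (xs, ys), (xq, yq) = sample_task()
    learner = copy.deepcopy(meta_model)     # task-specific "fast" copy
    inner_opt = torch.optim.SGD(learner.parameters(), lr=0.1)
    for _ in range(3):                      # inner loop: adapt on the support set
        inner_opt.zero_grad()
        F.cross_entropy(learner(xs), ys).backward()
        inner_opt.step()
    learner.zero_grad()
    F.cross_entropy(learner(xq), yq).backward()  # evaluate the adapted copy on queries
    meta_opt.zero_grad()
    for p, lp in zip(meta_model.parameters(), learner.parameters()):
        p.grad = lp.grad.clone()            # first-order approximation of the meta-gradient
    meta_opt.step()                         # update the shared initialization
```

Full MAML differentiates through the inner-loop updates; the first-order variant above simply reuses the adapted copy's query-set gradients as the meta-gradient, which is cheaper and often performs comparably.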
This workshop (the Meta Learning and Its Applications to Natural Language Processing Workshop, or MetaNLP) will bring concentrated discussion of meta-learning for NLP via several invited talks, oral and poster sessions with high-quality papers, and a panel of leading researchers from industry and academia. Alongside research work on new meta-learning methods, data, applications, and results, this workshop calls for novel work on understanding, analyzing, and comparing different meta-learning approaches for NLP.
The MetaNLP workshop invites submissions that investigate the theoretical and experimental nature of meta-learning methodologies and their applications to NLP. Relevant research directions include, but are not limited to:
Popular meta-learning topics include, but are not limited to:
We welcome three categories of papers: regular workshop papers, cross-submissions, and extended abstracts. Only regular workshop papers will be included in the proceedings; extended abstracts and cross-submissions will simply be hosted on our website. Submissions should be made through Softconf.
Invited Speakers
Andreas Vlachos, University of Cambridge
Chelsea Finn, Stanford University
Eric Xing, Carnegie Mellon University
Heng Ji, University of Illinois Urbana-Champaign
Zhou Yu, Columbia University
Schedule (EDT)
6:00-6:15 Opening remarks
6:15-7:00 Invited talk - Meta-Learning for few-shot learning in NLP - Andreas Vlachos
7:00-7:20 Contributed talk - Don't Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification
7:20-7:40 Contributed talk - Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling
7:40-8:00 Contributed talk - Meta-Reinforcement Learning for Mastering Multiple Skills and Generalizing across Environments in Text-based Games
8:00-8:20 Contributed talk - Few-Shot Event Detection with Prototypical Amortized Conditional Random Field
8:20-8:40 Contributed talk - Meta-Learning for Improving Rare Word Recognition in end-to-end ASR
8:40-9:00 Contributed talk - Minimax and Neyman–Pearson Meta-Learning for Outlier Languages
9:00-9:15 Coffee break
9:15-10:00 Invited talk - Meta-learning for dialog systems - Zhou Yu
10:00-10:45 Invited talk - Learning-to-learn through Model-based Optimization: HPO, NAS, and Distributed Systems - Eric Xing
10:45-11:00 Coffee break
11:00-11:20 Contributed talk - Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer
11:20-11:40 Contributed talk - Zero-Shot Compositional Concept Learning
11:40-12:00 Contributed talk - Few Shot Dialogue State Tracking using Meta-learning
12:00-13:00 Poster session
13:00-13:15 Coffee break
13:15-14:00 Invited talk - Few-Shot Learning to Give Feedback in the Real World - Chelsea Finn
14:00-14:45 Invited talk - Learning from Annotation Guideline: A case study on Event Extraction - Heng Ji
14:45-15:00 Closing remarks
Accepted Papers - Talk
Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer
Weijia Xu, Batool Haider, Jason Krone and Saab Mansour
Meta-Reinforcement Learning for Mastering Multiple Skills and Generalizing across Environments in Text-based Games
Zhenjie Zhao, Mingfei Sun and Xiaojuan Ma
Zero-Shot Compositional Concept Learning
Guangyue Xu, Parisa Kordjamshidi and Joyce Chai
Cross-submissions/presentations - Talk
Meta-Learning for Improving Rare Word Recognition in end-to-end ASR [ICASSP 2021]
Florian Lux and Ngoc Thang Vu
Few Shot Dialogue State Tracking using Meta-learning [EACL 2021]
Saket Dingliwal, Shuyang Gao, Sanchit Agarwal, Chien-Wei Lin, Tagyoung Chung and Dilek Hakkani-Tur
Few-Shot Event Detection with Prototypical Amortized Conditional Random Field [ACL 2021 findings]
Xin Cong, Shiyao Cui, Bowen Yu, Tingwen Liu, Wang Yubin, and Bin Wang
Minimax and Neyman–Pearson Meta-Learning for Outlier Languages [ACL 2021 findings]
Edoardo Maria Ponti, Rahul Aralikatte, Disha Shrivastava, Siva Reddy and Anders Søgaard
Don't Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification [ACL 2021 findings]
Qiaoyang Luo, Lingqiao Liu, Yuhao Lin, and Wei Emma Zhang
Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling [ACL 2021 findings]
Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, and Ting Liu
Accepted Papers - Posters
Multi-Pair Text Style Transfer for Unbalanced Data via Task-Adaptive Meta-Learning
Xing Han and Jessica Lundin
On the cross-lingual transferability of multilingual prototypical models across NLU tasks
Oralie Cattan, Sophie Rosset and Christophe Servan
Meta-Learning for Few-Shot Named Entity Recognition
Cyprien de Lichy, Hadrien Glaude and William Campbell
Multi-accent Speech Separation with One Shot Learning
Kuan Po Huang, Yuan-Kuei Wu and Hung-yi Lee
Semi-supervised Meta-learning for Cross-domain Few-shot Intent Classification
Yue Li and Jiong Zhang
Meta-learning for Classifying Previously Unseen Data Source into Previously Unseen Emotional Categories
Gaël Guibon, Matthieu Labeau, Hélène Flamein, Luce Lefeuvre, and Chloé Clavel
Accepted Extended Abstracts - Posters
Meta-learning for Task-oriented Household Text Games
Zhenjie Zhao and Xiaojuan Ma
Meta-learning for downstream aware and agnostic pretraining
Hongyin Luo, Shuyan Dong, Yung-Sung Chuang and Shang-Wen Li
Hung-Yi Lee
Associate Professor, National Taiwan University
Mitra Mohtarami
Research Scientist, Massachusetts Institute of Technology
Shang-Wen Li
Senior Applied Scientist, Amazon Web Services AI
Di Jin
Applied Scientist, Amazon Alexa AI
Mandy Korpusik
Assistant Professor, Loyola Marymount University
Annie Dong
Applied Scientist, Amazon Alexa AI
Ngoc Thang Vu
Professor, University of Stuttgart
Dilek Hakkani-Tur
Senior Principal Scientist, Amazon Alexa AI
Meta-learning is one of the fastest-growing research areas in deep learning. However, there is no standard definition of meta-learning. Usually, the main goal is to design models that can learn new tasks rapidly from few in-domain training examples by pre-learning from many training tasks, related or not, such that the resulting models generalize easily to new tasks. To better delineate the scope of meta-learning, we provide several online courses and papers describing works that fall into this area. These works are listed only for illustration, and we definitely encourage researchers whose work is not covered here but shares the same goal to submit.
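For readers newer to the area, the sketch below shows another popular flavor of meta-learning, the metric-based one, as a minimal prototypical-network-style episodic loop in PyTorch. Everything here is an illustrative assumption (the random sample_episode generator, the small encoder, and the way/shot settings); a real NLP setup would sample episodes of labeled text and embed it with a sentence encoder.

```python
# Minimal prototypical-network-style episodic training (toy setup; the
# encoder, dimensions, and way/shot numbers are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAY, K_SHOT, N_QUERY, DIM = 3, 5, 5, 32

def sample_episode():
    # Hypothetical episode sampler: each class is a cluster in a toy
    # "sentence embedding" space; real setups would sample labeled text.
    means = torch.randn(N_WAY, DIM)
    def split(n_per_class):
        x = torch.cat([means[c] + 0.2 * torch.randn(n_per_class, DIM)
                       for c in range(N_WAY)])
        y = torch.arange(N_WAY).repeat_interleave(n_per_class)
        return x, y
    return split(K_SHOT), split(N_QUERY)

encoder = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 64))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for episode in range(2000):                 # meta-train across many tasks
    (xs, ys), (xq, yq) = sample_episode()
    zs, zq = encoder(xs), encoder(xq)
    # class prototype = mean support embedding of that class
    protos = torch.stack([zs[ys == c].mean(dim=0) for c in range(N_WAY)])
    # classify each query by negative squared distance to the prototypes
    logits = -torch.cdist(zq, protos) ** 2
    loss = F.cross_entropy(logits, yq)
    opt.zero_grad(); loss.backward(); opt.step()
```

After meta-training, the encoder classifies examples from an unseen task using only that task's few support examples and no gradient updates at all, which is what makes metric-based approaches appealing for rapid adaptation.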