Meta Learning for NLP
Meta Learning and Its Applications to Natural Language Processing
Workshop at ACL 2021


Description

Deep learning based natural language processing (NLP) has become the mainstream of research in recent years and significantly outperforms conventional methods. However, deep learning models are notorious for being data and computation hungry. These downsides limit the deployment of such models to new domains, languages, countries, or styles, since collecting in-genre data and training models from scratch are costly. The long-tail nature of human language makes these challenges even more significant.

Meta-learning, or ‘Learning to Learn’, aims to learn better learning algorithms, including better parameter initializations, optimization strategies, network architectures, distance metrics, and beyond. Meta-learning has been shown to enable faster fine-tuning, converge to better performance, and achieve outstanding results for few-shot learning in many applications. It is one of the most important recent developments in machine learning, but the method has mainly been investigated for applications in computer vision. Meta-learning has excellent potential for NLP, and some work has already achieved notable results on relevant problems, e.g., relation extraction, machine translation, and dialogue generation and state tracking. However, it has not attracted the same level of attention as in the computer vision community.
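To make the "better parameter initialization" flavor of meta-learning concrete, below is a minimal sketch of first-order MAML on toy one-dimensional regression tasks. The tasks, the linear model, and all hyperparameters are illustrative assumptions rather than anything specified by the workshop; in an NLP setting the linear model would be a neural network and the tasks would be, e.g., few-shot text classification episodes.

```python
# A minimal sketch of first-order MAML (model-agnostic meta-learning),
# assuming toy 1-D linear-regression tasks; all names and hyperparameters
# here are illustrative, not from the workshop.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task is a random linear function y = a*x + b."""
    a, b = rng.uniform(-2, 2, size=2)
    def batch(n):
        x = rng.uniform(-1, 1, size=n)
        return x, a * x + b
    return batch

def mse_and_grad(theta, x, y):
    """Mean squared error and its gradient for y_hat = theta[0]*x + theta[1]."""
    err = theta[0] * x + theta[1] - y
    return np.mean(err ** 2), np.array([np.mean(2 * err * x), np.mean(2 * err)])

theta = np.zeros(2)               # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01    # adaptation vs. meta-update step sizes

for step in range(2000):
    batch = sample_task()
    x_s, y_s = batch(5)           # support set: the few-shot adaptation data
    x_q, y_q = batch(20)          # query set: evaluates the adapted model
    _, g = mse_and_grad(theta, x_s, y_s)
    theta_task = theta - inner_lr * g               # one inner-loop step
    _, g_outer = mse_and_grad(theta_task, x_q, y_q)
    theta -= outer_lr * g_outer   # first-order outer update of the init
```

Full MAML would backpropagate through the inner-loop step; the first-order variant sketched here simply reuses the gradient at the adapted parameters, which is cheaper and often performs comparably.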

This workshop (Meta Learning and Its Applications to Natural Language Processing Workshop, or MetaNLP) will bring concentrated discussions on meta-learning for NLP via several invited talks, oral and poster sessions with high-quality papers, and a panel of leading researchers from industry and academia. Alongside research on new meta-learning methods, data, applications, and results, this workshop calls for novel work on understanding, analyzing, and comparing different meta-learning approaches for NLP.

Call for Papers

The MetaNLP workshop invites submissions that investigate the theoretical and experimental nature of meta-learning methodologies and their applications to NLP. Relevant research directions include, but are not limited to:

Popular meta-learning topics include, but are not limited to:

We welcome three categories of papers: regular workshop papers, cross-submissions, and extended abstracts. Only regular workshop papers will be included in the proceedings; extended abstracts and cross-submissions will simply be hosted on our website. Submissions should be made through SoftConf.

Important Dates

  • Paper Submissions Due: May 7, 2021 (AoE) (extended from April 26)
  • Notification of Acceptance: May 31, 2021 (AoE) (extended from May 28)
  • Camera-ready Paper Due: June 7, 2021 (AoE)
  • Workshop Date: August 5, 2021

Invited Speakers

Andreas Vlachos

University of Cambridge

Chelsea Finn

Stanford University

Eric Xing

Carnegie Mellon University

Heng Ji

University of Illinois Urbana-Champaign

Zhou Yu

Columbia University

Program

Schedule (EDT)

6:00-6:15         Opening remarks

6:15-7:00         Invited talk - Meta-Learning for few-shot learning in NLP - Andreas Vlachos

7:00-7:20         Contributed talk - Don't Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification

7:20-7:40         Contributed talk - Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling

7:40-8:00         Contributed talk - Meta-Reinforcement Learning for Mastering Multiple Skills and Generalizing across Environments in Text-based Games

8:00-8:20         Contributed talk - Few-Shot Event Detection with Prototypical Amortized Conditional Random Field

8:20-8:40         Contributed talk - Meta-Learning for Improving Rare Word Recognition in end-to-end ASR

8:40-9:00         Contributed talk - Minimax and Neyman–Pearson Meta-Learning for Outlier Languages

9:00-9:15         Coffee break

9:15-10:00       Invited talk - Meta-learning for dialog systems - Zhou Yu

10:00-10:45     Invited talk - Learning-to-learn through Model-based Optimization: HPO, NAS, and Distributed Systems - Eric Xing

10:45-11:00     Coffee break

11:00-11:20     Contributed talk - Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer

11:20-11:40     Contributed talk - Zero-Shot Compositional Concept Learning

11:40-12:00     Contributed talk - Few Shot Dialogue State Tracking using Meta-learning

12:00-13:00     Poster session

13:00-13:15     Coffee break

13:15-14:00     Invited talk - Few-Shot Learning to Give Feedback in the Real World - Chelsea Finn

14:00-14:45     Invited talk - Learning from Annotation Guideline: A case study on Event Extraction - Heng Ji

14:45-15:00     Closing remarks


Accepted Papers - Talk

    Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer

    Weijia Xu, Batool Haider, Jason Krone and Saab Mansour

    Meta-Reinforcement Learning for Mastering Multiple Skills and Generalizing across Environments in Text-based Games

    Zhenjie Zhao, Mingfei Sun and Xiaojuan Ma

    Zero-Shot Compositional Concept Learning

    Guangyue Xu, Parisa Kordjamshidi and Joyce Chai


Cross submissions/presentations - Talk

    Meta-Learning for Improving Rare Word Recognition in end-to-end ASR [ICASSP 2021]

    Florian Lux and Ngoc Thang Vu

    Few Shot Dialogue State Tracking using Meta-learning [EACL 2021]

    Saket Dingliwal, Shuyang Gao, Sanchit Agarwal, Chien-Wei Lin, Tagyoung Chung and Dilek Hakkani-Tur

    Few-Shot Event Detection with Prototypical Amortized Conditional Random Field [ACL 2021 findings]

    Xin Cong, Shiyao Cui, Bowen Yu, Tingwen Liu, Wang Yubin, and Bin Wang

    Minimax and Neyman–Pearson Meta-Learning for Outlier Languages [ACL 2021 findings]

    Edoardo Maria Ponti, Rahul Aralikatte, Disha Shrivastava, Siva Reddy and Anders Søgaard

    Don't Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification [ACL 2021 findings]

    Qiaoyang Luo, Lingqiao Liu, Yuhao Lin, and Wei Emma Zhang

    Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling [ACL 2021 findings]

    Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, and Ting Liu


Accepted Papers - Posters

    Multi-Pair Text Style Transfer for Unbalanced Data via Task-Adaptive Meta-Learning

    Xing Han and Jessica Lundin

    On the cross-lingual transferability of multilingual prototypical models across NLU tasks

    Oralie Cattan, Sophie Rosset and Christophe Servan

    Meta-Learning for Few-Shot Named Entity Recognition

    Cyprien de Lichy, Hadrien Glaude and William Campbell

    Multi-accent Speech Separation with One Shot Learning

    Kuan Po Huang, Yuan-Kuei Wu and Hung-yi Lee

    Semi-supervised Meta-learning for Cross-domain Few-shot Intent Classification

    Yue Li and Jiong Zhang

    Meta-learning for Classifying Previously Unseen Data Source into Previously Unseen Emotional Categories

    Gaël Guibon, Matthieu Labeau, Hélène Flamein, Luce Lefeuvre, and Chloé Clavel


Accepted Extended Abstracts - Posters

    Meta-learning for Task-oriented Household Text Games

    Zhenjie Zhao and Xiaojuan Ma

    Meta-learning for downstream aware and agnostic pretraining

    Hongyin Luo, Shuyan Dong, Yung-Sung Chuang and Shang-Wen Li

Organizers

Hung-yi Lee received the M.S. and Ph.D. degrees from National Taiwan University (NTU), Taipei, Taiwan, in 2010 and 2012, respectively. From September 2012 to August 2013, he was a postdoctoral fellow at the Research Center for Information Technology Innovation, Academia Sinica. From September 2013 to July 2014, he was a visiting scientist at the Spoken Language Systems Group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is currently an associate professor in the Department of Electrical Engineering at National Taiwan University, with a joint appointment in the university's Department of Computer Science & Information Engineering. His research focuses on machine learning (especially deep learning), spoken language understanding, and speech recognition. He runs a YouTube channel teaching deep learning in Mandarin (more than 4M total views and 48k subscribers).

Hung-Yi Lee

Associate Professor, National Taiwan University

Mitra Mohtarami is a Research Scientist in the Computer Science and Artificial Intelligence Laboratory at MIT. She is a member of MIT's Spoken Language Systems Group, directed by Dr. James Glass. She received her PhD in Computer Science from the National University of Singapore in December 2013, worked as a Research Scientist at the Institute for Infocomm Research (I2R) from November 2013 to August 2014, and joined MIT as a Postdoctoral Associate in September 2014. She has received several awards, including the Dean's Graduate Research Excellence Award (2013), the Outstanding Research Achievement Award (2012), and the NUS Scholarship Award (2009-2013). Mitra's primary research centers on natural language processing; some of her recent projects include fact checking, open-ended question answering, and sentiment and emotion analysis.

Mitra Mohtarami

Research Scientist, Massachusetts Institute of Technology

Shang-Wen Li is a Senior Applied Scientist at Amazon AI. His research focuses on spoken language understanding, dialog management, and natural language generation. His recent interest is transfer learning for low-resourced conversational bots. He earned his PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in 2016 and received his M.S. and B.S. from National Taiwan University. Before joining Amazon, he worked at Apple Siri researching conversational AI. He was a co-organizer of the workshop "Self-Supervised Learning for Speech and Audio Processing" at NeurIPS (2020) and one of the tutorial speakers for "Meta Learning and its application to Human Language Processing" at Interspeech (2020).

Shang-Wen Li

Senior Applied Scientist, Amazon Web Services AI

Di Jin is an Applied Scientist at Amazon Alexa AI. His research focuses on conversational AI, adversarial robustness, transfer learning (esp. meta-learning for domain adaptation), question answering, and conditional text generation. He earned his PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in September 2020, supervised by Prof. Peter Szolovits. He received his B.S. from Tsinghua University in China.

Di Jin

Applied Scientist, Amazon Alexa AI

Mandy Korpusik is an Assistant Professor in the Department of Computer Science at Loyola Marymount University. She completed her B.S. in Electrical and Computer Engineering at Franklin W. Olin College of Engineering in May 2013 and received her S.M. and PhD in Computer Science from MIT, where she worked for six years in the Spoken Language Systems group in the Computer Science and Artificial Intelligence Laboratory. Her primary research interests include natural language processing and spoken language understanding for dialogue systems. Mandy used deep learning models to build the Coco Nutritionist application for iOS, which allows obesity patients to track their diet more easily through natural language. Her long-term research goal is to deploy a collection of AI-based conversational agents that improve people's health, well-being, and productivity.

Mandy Korpusik

Assistant Professor, Loyola Marymount University

Annie Dong is an Applied Scientist at Amazon Alexa AI working on data-efficient and robustness strategies for Natural Language Understanding and Entity Resolution systems. Prior to joining Amazon, she had a couple of short stints devising modeling solutions for healthcare tech, education tech, and finance applications. Annie received her M.S. from the University of California, Santa Barbara.

Annie Dong

Applied Scientist, Amazon Alexa AI

Ngoc Thang Vu received his Diploma (2009) and PhD (2014) degrees in computer science from the Karlsruhe Institute of Technology, Germany. From 2014 to 2015, he worked at Nuance Communications as a senior research scientist and at Ludwig-Maximilian University Munich as an acting professor of computational linguistics. In 2015, he was appointed assistant professor at the University of Stuttgart, Germany. Since 2018, he has been a full professor at the Institute for Natural Language Processing in Stuttgart. His main research interests are natural language processing (esp. speech recognition and dialog systems) and machine learning (esp. deep learning) for low-resource settings. He is one of the tutorial speakers for "Meta Learning and its application to Human Language Processing" at Interspeech (2020).

Ngoc Thang Vu

Professor, University of Stuttgart

Dilek Hakkani-Tur is a Senior Principal Scientist at Amazon Alexa AI focusing on enabling natural dialogues with machines. Prior to joining Amazon, she led the dialogue research group at Google Research and was a principal researcher at Microsoft Research, the International Computer Science Institute (ICSI), and AT&T Labs-Research. She received her PhD from the Department of Computer Engineering at Bilkent University in 2000. Her research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing. She holds over 80 granted patents and has co-authored more than 300 papers in natural language and speech processing. She has received three best paper awards for her work on active learning for dialogue systems, from the IEEE Signal Processing Society, ISCA, and EURASIP. She has served as an associate editor of IEEE Transactions on Audio, Speech and Language Processing, a member of the IEEE Speech and Language Technical Committee, an area editor for speech and language processing for Elsevier's Digital Signal Processing Journal and IEEE Signal Processing Letters, and a member of the ISCA Advisory Council. She is currently the Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech and Language Processing and was a program co-chair for NAACL-HLT 2021. She is a Fellow of the IEEE (2014) and of ISCA (2014).

Dilek Hakkani-Tur

Senior Principal Scientist, Amazon Alexa AI

Program committee


Reading

Meta learning is one of the fastest-growing research areas in deep learning. However, there is no standard definition of meta learning. Usually the main goal is to design models that can learn new tasks rapidly from few in-domain training examples, by having the models pre-learn from many training tasks, relevant or not, in a way that generalizes easily to new tasks. To better delineate the scope of meta learning, we provide several online courses and papers describing work in the area. These works are only for showcasing; we definitely encourage people whose research is not covered here but shares the same goal to submit.
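As a concrete illustration of the "learn new tasks rapidly from few examples" setting described above, the sketch below shows how few-shot episodes are commonly sampled for meta-learning in text classification: each episode is an N-way, K-shot task drawn from a pool of labeled classes. The class names and text snippets are hypothetical placeholders, not data from the workshop.

```python
# A minimal sketch of episodic sampling for few-shot meta-learning:
# each episode is an N-way, K-shot classification task drawn from a
# pool of labeled classes. The pool below is a hypothetical placeholder.
import random

random.seed(0)

pool = {
    "sports":   ["match report ...", "final score ...", "transfer rumor ..."],
    "finance":  ["stocks fell ...", "quarterly earnings ...", "bond yields ..."],
    "weather":  ["storm warning ...", "sunny skies ...", "heavy rain ..."],
    "politics": ["election results ...", "new bill ...", "press briefing ..."],
}

def sample_episode(pool, n_way=2, k_shot=1, n_query=1):
    """Return (support, query) sets for one N-way, K-shot task."""
    classes = random.sample(sorted(pool), n_way)
    support, query = [], []
    for label, name in enumerate(classes):
        texts = random.sample(pool[name], k_shot + n_query)
        support += [(t, label) for t in texts[:k_shot]]
        query   += [(t, label) for t in texts[k_shot:]]
    return support, query

# A meta-learner trains on many such episodes so that adapting to a
# brand-new episode (with unseen classes) needs only the small support set.
support, query = sample_episode(pool)
```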

Online Courses

Papers

Meta Learning Technology
Applications to Natural Language Understanding:

CONTACT

Feel free to send us messages at: