Meta Learning for NLP
Meta Learning and Its Applications to Natural Language Processing
Workshop at ACL 2021

Description

Deep learning based natural language processing (NLP) has become the mainstream of research in recent years and significantly outperforms conventional methods. However, deep learning models are notorious for being data and computation hungry. These drawbacks make it costly to deploy such models to new domains, languages, countries, or styles, since collecting in-genre data and training models from scratch are expensive. The long-tail nature of human language makes these challenges even more significant.

Meta-learning, or ‘Learning to Learn’, aims to learn better learning algorithms, including better parameter initializations, optimization strategies, network architectures, distance metrics, and beyond. Meta-learning has been shown to enable faster fine-tuning, converge to better performance, and achieve outstanding results for few-shot learning in many applications. It is one of the most important recent developments in machine learning, but so far it has been investigated mainly for applications in computer vision. Meta-learning holds excellent potential for NLP as well, and some work has already achieved notable results on several relevant problems, e.g., relation extraction, machine translation, and dialogue generation and state tracking. However, it has not yet attracted the same level of attention as in the computer vision community.
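As a concrete illustration of one of these ideas, learning a better parameter initialization, below is a minimal sketch of a MAML-style meta-update in PyTorch. The toy linear model, task format, and learning rates are assumptions chosen for brevity, not a definitive implementation of any published method.

# A minimal sketch of MAML-style meta-learning (learning a better parameter
# initialization). The tiny linear model, task format, and learning rates
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def forward(params, x):
    # Toy linear model run from an explicit parameter list.
    w, b = params
    return x @ w + b

def maml_step(params, tasks, inner_lr=0.01, meta_lr=0.001):
    # One meta-update: adapt to each task on its support set, evaluate the
    # adapted parameters on its query set, and update the shared
    # initialization from the summed query losses.
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: one gradient step on the support set. create_graph=True
        # keeps the graph so the meta-gradient can flow through this step.
        loss = F.mse_loss(forward(params, support_x), support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: loss of the *adapted* parameters on held-out queries.
        meta_loss = meta_loss + F.mse_loss(forward(adapted, query_x), query_y)
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        params = [(p - meta_lr * g).requires_grad_()
                  for p, g in zip(params, meta_grads)]
    return params, meta_loss.item()

# Usage with random toy data: 4 tasks, each with 5 support and 5 query points.
params = [torch.randn(8, 1, requires_grad=True), torch.zeros(1, requires_grad=True)]
tasks = [tuple(torch.randn(5, d) for d in (8, 1, 8, 1)) for _ in range(4)]
params, loss = maml_step(params, tasks)

Each task contributes a support set for inner-loop adaptation and a query set for the outer objective; differentiating the query loss through the adaptation step is what pushes the shared initialization toward parameters that fine-tune well in a few steps.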

This workshop (Meta Learning and Its Applications to Natural Language Processing Workshop, or MetaNLP) will bring together concentrated discussions on meta-learning for NLP via several invited talks, oral and poster sessions with high-quality papers, and a panel of leading researchers from industry and academia. Alongside research on new meta-learning methods, data, applications, and results, the workshop calls for novel work on understanding, analyzing, and comparing different meta-learning approaches for NLP.

Call for Papers

The MetaNLP workshop invites submissions that investigate the theoretical and experimental nature of meta-learning methodologies and their applications to NLP. Relevant research directions include, but are not limited to:

Popular meta-learning topics include, but are not limited to:

We welcome three categories of papers: regular workshop papers, cross-submissions, and extended abstracts. Only regular workshop papers will be included in the proceedings; extended abstracts and cross-submissions will simply be hosted on our website. The submission portal will be announced later.

Important Dates

  • Paper Submissions Due: April 26, 2021 (AoE)
  • Notification of Acceptance: May 28, 2021 (AoE)
  • Camera-ready Paper Due: June 7, 2021 (AoE)
  • Workshop Date: August 5 or 6, 2021

Invited Speakers

Andreas Vlachos

University of Cambridge

Chelsea Finn

Stanford University

Eric Xing

Carnegie Mellon University

Heng Ji

University of Illinois Urbana-Champaign

William L. Hamilton

McGill University

Zhou Yu

Columbia University

Program

TBD

Organizers

Hung-yi Lee received the M.S. and Ph.D. degrees from National Taiwan University (NTU), Taipei, Taiwan, in 2010 and 2012, respectively. From September 2012 to August 2013, he was a postdoctoral fellow at the Research Center for Information Technology Innovation, Academia Sinica. From September 2013 to July 2014, he was a visiting scientist at the Spoken Language Systems Group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is currently an associate professor in the Department of Electrical Engineering at National Taiwan University, with a joint appointment in the university's Department of Computer Science & Information Engineering. His research focuses on machine learning (especially deep learning), spoken language understanding, and speech recognition. He runs a YouTube channel teaching deep learning in Mandarin (more than 4M total views and 48k subscribers).

Hung-Yi Lee

Associate Professor, National Taiwan University

Mitra Mohtarami is a Research Scientist in the Computer Science and Artificial Intelligence Laboratory at MIT. She is a member of MIT's Spoken Language Systems Group, directed by Dr. James Glass. She received her PhD in Computer Science from the National University of Singapore in December 2013, worked as a Research Scientist at the Institute for Infocomm Research (I2R) from November 2013 to August 2014, and joined MIT as a Postdoctoral Associate in September 2014. She has been the recipient of several awards, including the Dean's Graduate Research Excellence Award (2013), the Outstanding Research Achievement Award (2012), the NUS Scholarship Award (2009-2013), and others. Mitra's primary research centers on natural language processing; some of her recent projects include fact checking, open-ended question answering, and sentiment and emotion analysis.

Mitra Mohtarami

Research Scientist, Massachusetts Institute of Technology

Shang-Wen Li is a Senior Applied Scientist at Amazon AI. His research focuses on spoken language understanding, dialog management, and natural language generation. His recent interest is transfer learning for low-resource conversational bots. He earned his PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in 2016 and received his M.S. and B.S. from National Taiwan University. Before joining Amazon, he also worked at Apple Siri researching conversational AI. He was a co-organizer of the workshop "Self-Supervised Learning for Speech and Audio Processing" at NeurIPS (2020) and one of the tutorial speakers for "Meta Learning and its application to Human Language Processing" at Interspeech (2020).

Shang-Wen Li

Senior Applied Scientist, Amazon Web Services AI

Di Jin is an Applied Scientist at Amazon Alexa AI. His research focuses on conversational AI, adversarial robustness, transfer learning (esp. meta-learning for domain adaptation), question answering, and conditional text generation. He earned his PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in September 2020, supervised by Prof. Peter Szolovits. He received his B.S. from Tsinghua University in China.

Di Jin

Applied Scientist, Amazon Alexa AI

Mandy Korpusik is an Assistant Professor in the Department of Computer Science at Loyola Marymount University. She completed her B.S. in Electrical and Computer Engineering at Franklin W. Olin College of Engineering in May 2013 and received her S.M. and PhD in Computer Science from MIT, where she worked for six years in the Spoken Language Systems group in the Computer Science and Artificial Intelligence Laboratory. Her primary research interests include natural language processing and spoken language understanding for dialogue systems. Mandy used deep learning models to build the Coco Nutritionist application for iOS, which allows obesity patients to track their diet more easily through natural language. Her long-term research goal is to deploy a collection of AI-based conversational agents that improve people's health, well-being, and productivity.

Mandy Korpusik

Assistant Professor, Loyola Marymount University

Annie Dong is an Applied Scientist at Amazon Alexa AI working on data-efficiency and robustness strategies for natural language understanding and entity resolution systems. Prior to joining Amazon, she had a couple of short stints devising modeling solutions for healthcare tech, education tech, and finance applications. Annie received her M.S. from the University of California, Santa Barbara.

Annie Dong

Applied Scientist, Amazon Alexa AI

Ngoc Thang Vu received his Diploma (2009) and PhD (2014) degrees in computer science from the Karlsruhe Institute of Technology, Germany. From 2014 to 2015, he worked at Nuance Communications as a senior research scientist and at Ludwig-Maximilian University Munich as an acting professor in computational linguistics. In 2015, he was appointed assistant professor at the University of Stuttgart, Germany. Since 2018, he has been a full professor at the Institute for Natural Language Processing in Stuttgart. His main research interests are natural language processing (esp. speech recognition and dialog systems) and machine learning (esp. deep learning) for low-resource settings. He was one of the tutorial speakers for "Meta Learning and its application to Human Language Processing" at Interspeech (2020).

Ngoc Thang Vu

Professor, University of Stuttgart

Dilek Hakkani-Tur is a senior principal scientist at Amazon Alexa AI focusing on enabling natural dialogues with machines. Prior to joining Amazon, she led the dialogue research group at Google Research and was a principal researcher at Microsoft Research, the International Computer Science Institute (ICSI), and AT&T Labs-Research. She received her PhD from the Department of Computer Engineering at Bilkent University in 2000. Her research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing. She holds over 80 granted patents and has co-authored more than 300 papers in natural language and speech processing. She is the recipient of three best paper awards for her work on active learning for dialogue systems, from the IEEE Signal Processing Society, ISCA, and EURASIP. She has served as an associate editor of IEEE Transactions on Audio, Speech and Language Processing, a member of the IEEE Speech and Language Technical Committee, an area editor for speech and language processing for Elsevier's Digital Signal Processing Journal and IEEE Signal Processing Letters, and a member of the ISCA Advisory Council. She is currently the Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech and Language Processing and a program co-chair for NAACL-HLT 2020. She is a fellow of the IEEE (2014) and ISCA (2014).

Dilek Hakkani-Tur

Senior Principal Scientist, Amazon Alexa AI


Reading

Meta learning is one of the fastest growing research areas in deep learning. However, there is no standard definition of meta learning. Usually, the main goal is to design models that can learn new tasks rapidly with few in-domain training examples, by having models pre-learn from many training tasks, relevant or not, in a way that makes them easy to generalize to new tasks. To clarify the scope of meta learning, we provide several online courses and papers describing work in the area. These works are only for showcasing; we definitely encourage people whose research is not covered here but shares the same goal to submit.
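To make the notion of "many training tasks" concrete, below is a minimal sketch of the episodic N-way K-shot sampling commonly used to build such tasks from an ordinary labeled text dataset. The data_by_label input format and the intent-classification examples are hypothetical, chosen only for illustration.

# A minimal sketch of episodic N-way K-shot task sampling. The
# `data_by_label` dict (label -> list of texts) is an assumed input
# format, not a fixed API.
import random

def sample_episode(data_by_label, n_way=3, k_shot=1, q_queries=2):
    # Build one few-shot task: draw n_way classes, then k_shot support
    # and q_queries query examples per class, relabeled 0..n_way-1.
    labels = random.sample(sorted(data_by_label), n_way)
    support, query = [], []
    for new_label, label in enumerate(labels):
        texts = random.sample(data_by_label[label], k_shot + q_queries)
        support += [(t, new_label) for t in texts[:k_shot]]
        query += [(t, new_label) for t in texts[k_shot:]]
    # A meta-learner adapts on `support` and is evaluated on `query`.
    return support, query

# Usage with hypothetical intent-classification data.
data_by_label = {
    "weather": ["will it rain today", "forecast for tomorrow", "is it sunny"],
    "music": ["play some jazz", "next song please", "turn up the volume"],
    "alarm": ["wake me at 7", "set a timer", "cancel my alarm"],
    "news": ["latest headlines", "what happened today", "read me the news"],
}
support, query = sample_episode(data_by_label)

A meta-learner is trained across many such episodes so that, at test time, it can adapt to a brand-new episode whose classes were never seen during training.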

Online Courses

Papers

Meta Learning Technology
Applications to Natural Language Understanding

CONTACT

Feel free to send us messages at: