Natural Language Processing

Recent Breakthroughs and Uphill Battles in Modern NLP

Description

Deep learning has changed the face of Natural Language Processing (NLP). Across NLP tasks, generic neural architectures surpass the performance of systems designed around domain knowledge, and in some cases even perform on par with humans. The recent advent of pre-trained language models (such as Google's BERT), trained on massive amounts of text, has further boosted performance and reduced development time. Their ease of use has made NLP accessible to non-experts. But looking beyond popular media reports and performance metrics, is NLP anywhere near being solved? In this talk I will present some of the remaining challenges in NLP. I will discuss current blind spots that limit the real-world applicability of models, such as their limited ability to generalize beyond the training domain, their vast data requirements, and their lack of commonsense knowledge. We will also review broader consequences of these models, including environmental and ethical issues.
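
To give a sense of the ease of use mentioned above, the following is a minimal sketch of querying a pre-trained BERT model in a few lines of Python. It assumes the Hugging Face transformers library; the library, model, and prompt are illustrative choices, not material from the talk.

    # A minimal sketch, assuming the Hugging Face "transformers" library
    # is installed (pip install transformers). Library and model choice
    # are illustrative assumptions, not part of the talk.
    from transformers import pipeline

    # Download a pre-trained BERT model and use it to fill in a masked word.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # Print the model's top predictions for the masked token with scores.
    for prediction in unmasker("The goal of NLP is to [MASK] human language."):
        print(prediction["token_str"], round(prediction["score"], 3))

A few lines like these replace what once required task-specific feature engineering and training from scratch, which is precisely why such models have made NLP accessible to non-experts.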

Vered Shwartz

Postdoctoral researcher at the Allen Institute for AI