Seminar in Computational Linguistics
- Date: –14:30
- Location: Engelska parken 9-3042
- Lecturer: Joakim Nivre
- Contact person: Miryam de Lhoneux
Is the End of Supervised Parsing in Sight? – Twelve Years Later
At ACL in Prague in 2007, Rens Bod asked whether supervised parsing models would soon be a thing of the past, giving way to superior unsupervised models. For the next decade or so, the answer seemed to be negative, as supervised approaches to syntactic parsing continued to outperform their unsupervised counterparts by large margins. However, recent developments in our field have made the question relevant again, in at least two different ways. First, there is the question of whether the end of all parsing (supervised or otherwise) is in sight, simply because systems that are trained end-to-end for real applications have no room (or need) for traditional linguistic representations of the kind that parsers produce; this question I will not directly discuss in this talk. There is also, however, the question of whether we need traditional supervised parsers even if we want to compute discrete syntactic representations of natural language sentences. To put this question into perspective, I will survey developments in dependency parsing since 2007, showing that most of the advanced parsing models proposed during the first half of this period have been made obsolete by advances in deep learning during the second half. Moreover, the advent of deep contextualized word embeddings appears to have eliminated the last remaining differences between algorithmic approaches, suggesting that specialized parsing algorithms are largely superfluous in the state-of-the-art systems of today.