Lectures on Natural Language Processing
- Date: –09.55
- Location: Engelska parken (English Park Campus), Room 22-0031
- Lecturer: Roger Levy
- Contact person: Ahmed Ruby
Syntactic Processing in Humans and Neural Language Models
Large language models (LLMs) generate fluent and sophisticated text in ways that are remarkably responsive to linguistic context. But to what extent does their behavior reflect human-like grammatical knowledge, in both of the classic senses of linguistic competence and performance? In this talk I describe recent work in our research group addressing this question. I argue that LLMs' next-word predictive distributions capture sophisticated features of human grammatical competence, as exemplified by an in-depth study of how LLMs capture filler–gap dependencies and their "island" restrictions. However, LLMs do not recapitulate the performance constraints that characterize human language processing: to account for those, we need to posit additional cognitive apparatus that sheds light on the architecture of human language comprehension. This body of work informs classic learnability debates in cognitive science and deepens our understanding of language processing in the human mind and brain.