Seminar in Computational Linguistics
- Date: –15:00
- Location: Engelska parken 9-3042
- Lecturer: James Henderson
- Contact person: Miryam de Lhoneux
- Seminar
Learning Vector Representations of Abstraction with Entailment-Based Distributional Semantics
Representation learning for natural language has made great progress using the rich notion of similarity provided by a vector space. In particular, models of the distribution of contexts where a word occurs (distributional semantics) have learned vector representations of words (word embeddings) which capture a widely useful notion of semantic similarity. But for many tasks we want abstraction, not similarity.
This talk presents distributional semantic models that use a vector space for abstraction instead of similarity. These entailment vectors represent how much is known in each dimension, and thereby capture information inclusion between vectors, known as entailment or abstraction. Operators for measuring entailment between these vectors, and methods for inferring these vectors from entailment relations, are used to define an entailment-based model of the semantic relationship between a word and its context. This model forms the basis of distributional semantic models for learning entailment-based representations of words, which give state-of-the-art results on unsupervised detection of lexical entailment (hyponymy). We argue that the entailment vectors framework has wider applicability in natural language semantics, and show that entailment vector representations of sentences also perform well in supervised modelling of textual entailment between sentences.
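To make the idea of an entailment operator concrete, the sketch below works under assumptions the abstract suggests but does not spell out: each vector dimension holds the log-odds that a feature is "known", and x entails y when every feature known in y is also known in x. The function name, the factorized formula, and the toy vectors are illustrative stand-ins, not necessarily the operator presented in the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_prob_entails(x, y):
    """Log-probability that x entails y under a factorized model (a sketch).

    x, y: log-odds vectors, so sigmoid(x[i]) is the probability that
    feature i is "known" in x. Entailment fails in dimension i exactly
    when feature i is known in y but unknown in x, giving
        P(x => y) = prod_i (1 - P(y_i known) * P(x_i unknown)).
    """
    p_y_known = sigmoid(y)
    p_x_unknown = sigmoid(-x)  # 1 - sigmoid(x)
    return np.sum(np.log1p(-p_y_known * p_x_unknown))

# Hypothetical toy vectors: 'dog' carries more information than 'animal',
# so 'dog' should entail (be less abstract than) 'animal'.
dog = np.array([3.0, 2.5, 2.0])       # several features confidently known
animal = np.array([3.0, -2.0, -2.0])  # only the shared feature known

print(log_prob_entails(dog, animal))  # near 0: entailment is likely
print(log_prob_entails(animal, dog))  # strongly negative: unlikely
```

Under this model, the more abstract vector is the one with less information known, which is what lets the same operator score both lexical hyponymy (word level) and textual entailment (sentence level).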