Seminar in Computational Linguistics
- Date: –15:00
- Location: Engelska parken 9-3042
- Lecturer: Gong Zhengxian
- Contact person: Miryam de Lhoneux
- Seminar
Research on Some Key Aspects of Document-level Neural Machine Translation
Document-level information has proven useful for Neural Machine Translation (NMT). In the popular encoder-decoder framework of NMT, this wider context (document-level information) is usually exploited by introducing a context-attention mechanism or a special component such as a memory or cache. However, previous sentences do not always bear a close relation to the current sentence, especially in a large context such as a document. Irrelevant context not only consumes computational resources but also degrades the attention mechanism. To address this problem, this research pursues two approaches. The first is to slim down the context, keeping only its key parts, for example co-reference links. The second is to reduce the negative impact of irrelevant context and improve the representation of the current sentence with a jointly learned classifier. In this talk, I first give a brief introduction to NMT; next, I present and analyze the key aspects of current research on document-level NMT; then, I describe the two approaches mentioned above in detail; finally, I present a dataset labelled with thematic information, which is being designed and developed to support document-level natural language processing in future work.
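To make the context-attention idea concrete, below is a minimal sketch in PyTorch of one common way to inject document context into an encoder-decoder model: the current-sentence encodings attend over the encoded context sentences, and a learned sigmoid gate decides how much of the context summary to mix in, so that irrelevant context can be suppressed. This is an illustrative assumption, not the speaker's exact architecture; the class name ContextAttention, the gating scheme, and all dimensions are hypothetical.

import torch
import torch.nn as nn


class ContextAttention(nn.Module):
    """Attend from current-sentence states over encoded context sentences,
    then gate the result so irrelevant context can be down-weighted."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # The gate outputs a value in (0, 1) per position: 0 means
        # "ignore the context entirely" (hypothetical design choice).
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, sent_states, ctx_states, ctx_pad_mask=None):
        # sent_states: (batch, src_len, d_model) current-sentence encodings
        # ctx_states:  (batch, ctx_len, d_model) document-context encodings
        ctx_summary, _ = self.attn(
            query=sent_states, key=ctx_states, value=ctx_states,
            key_padding_mask=ctx_pad_mask,
        )
        g = self.gate(torch.cat([sent_states, ctx_summary], dim=-1))
        # Convex combination: as g -> 0 this falls back to the
        # context-free sentence encoding.
        return (1 - g) * sent_states + g * ctx_summary


if __name__ == "__main__":
    layer = ContextAttention(d_model=512)
    sent = torch.randn(2, 20, 512)  # current source sentence
    ctx = torch.randn(2, 80, 512)   # preceding document sentences
    print(layer(sent, ctx).shape)   # torch.Size([2, 20, 512])

The gate here loosely mirrors the second approach in the abstract (reducing the impact of irrelevant context); the talk's jointly learned classifier would make that relevance decision explicitly rather than through an implicit gate.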