Journal
NATURAL LANGUAGE ENGINEERING
Volume 25, Issue 4, Pages 467-482
Publisher
CAMBRIDGE UNIV PRESS
DOI: 10.1017/S1351324919000202
Keywords
natural language inference; sentence representations; representation learning
Funding
- Academy of Finland through the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence [314062]
- European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme [771113]
- GPU grant
- Academy of Finland [270354/273457/313478]
Sentence-level representations are necessary for various natural language processing tasks. Recurrent neural networks have proven to be very effective in learning distributed representations and can be trained efficiently on natural language inference tasks. We build on top of one such model and propose a hierarchy of bidirectional LSTM and max pooling layers that implements an iterative refinement strategy and yields state-of-the-art results on the SciTail dataset as well as strong results for the Stanford Natural Language Inference and Multi-Genre Natural Language Inference datasets. We show that the sentence embeddings learned in this way can be utilized in a wide variety of transfer learning tasks, outperforming InferSent on 7 out of 10 and SkipThought on 8 out of 9 SentEval sentence embedding evaluation tasks. Furthermore, our model beats the InferSent model in 8 out of 10 recently published SentEval probing tasks designed to evaluate sentence embeddings' ability to capture some of the important linguistic properties of sentences.
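The architecture the abstract describes can be sketched in a few lines: stack bidirectional recurrent layers, feed each layer the refined token states of the previous one, max-pool each layer over time, and concatenate the pooled vectors into the sentence embedding. The toy sketch below uses a simple elementwise tanh recurrence as a stand-in for a real BiLSTM cell; all names, dimensions, and the exact layer-to-layer wiring are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a hierarchy of bidirectional recurrent layers with max
# pooling over time, loosely following the abstract. A tanh elementwise
# recurrence stands in for a BiLSTM cell; dimensions are toy-sized.
import math
import random

random.seed(0)

DIM = 4  # toy word-embedding size


def recurrent_pass(vectors, reverse=False):
    """One direction of a toy recurrent layer: h_t = tanh(x_t + h_{t-1})."""
    seq = list(reversed(vectors)) if reverse else vectors
    h = [0.0] * len(seq[0])
    out = []
    for x in seq:
        h = [math.tanh(xi + hi) for xi, hi in zip(x, h)]
        out.append(h)
    return list(reversed(out)) if reverse else out


def bidirectional_layer(vectors):
    """Concatenate forward and backward states per token (doubles the width)."""
    fwd = recurrent_pass(vectors)
    bwd = recurrent_pass(vectors, reverse=True)
    return [f + b for f, b in zip(fwd, bwd)]


def max_pool(states):
    """Element-wise max over time -> one fixed-size vector per sentence."""
    return [max(col) for col in zip(*states)]


def hierarchical_embedding(vectors, num_layers=3):
    """Iterative refinement: each layer re-reads the previous layer's token
    states; the sentence embedding concatenates every layer's max pool."""
    pooled, inputs = [], vectors
    for _ in range(num_layers):
        states = bidirectional_layer(inputs)
        pooled.append(max_pool(states))
        inputs = states  # refined token representations feed the next layer
    return [v for p in pooled for v in p]


# Toy "sentence": 5 random word vectors of size DIM.
sentence = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(5)]
emb = hierarchical_embedding(sentence)
# Widths double per layer: pooled sizes 2*DIM + 4*DIM + 8*DIM = 14*DIM.
print(len(emb))  # 56
```

Because every layer's pooled vector is kept in the final concatenation, the embedding retains both early, word-level features and later, refined representations, which is one plausible reason such hierarchies transfer well to downstream tasks.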