Article

Transfer learning for Turkish named entity recognition on noisy text

Journal

NATURAL LANGUAGE ENGINEERING
Volume 27, Issue 1, Pages 35-64

Publisher

CAMBRIDGE UNIV PRESS
DOI: 10.1017/S1351324919000627

Keywords

Named entity recognition; Transfer learning; Recurrent neural networks; Low-resource language; Noisy text

Abstract
In this article, we investigate using deep neural networks with different word representation techniques for named entity recognition (NER) on Turkish noisy text. We argue that valuable latent features for NER can, in fact, be learned without using any hand-crafted features and/or domain-specific resources such as gazetteers and lexicons. In this regard, we utilize character-level, character n-gram-level, morpheme-level, and orthographic character-level word representations. Since noisy data with NER annotation are scarce for Turkish, we introduce a transfer learning model in order to learn infrequent entity types as an extension to the Bi-LSTM-CRF architecture by incorporating an additional conditional random field (CRF) layer that is trained on a larger (but formal) text and a noisy text simultaneously. This allows us to learn from both formal and informal/noisy text, thus further improving the performance of our model for rarely seen entity types. We experimented on Turkish as a morphologically rich language and English as a relatively morphologically poor language. We obtained an entity-level F1 score of 67.39% on Turkish noisy data and 45.30% on English noisy data, which outperforms the current state-of-the-art models on noisy text. The English scores are lower than the Turkish scores because of the intense sparsity introduced into the data by user writing styles. The results show that using subword information significantly contributes to learning latent features for morphologically rich languages.
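At inference time, a CRF layer such as the ones described in the abstract decodes the best tag sequence from per-token emission scores (here, what the Bi-LSTM would produce) plus learned tag-transition scores, using the Viterbi algorithm. The sketch below is a minimal, self-contained illustration of that decoding step; the tag set, scores, and function names are illustrative stand-ins, not the authors' trained model.

```python
# Minimal Viterbi decoder for a linear-chain CRF layer.
# Emission and transition scores are hand-picked for illustration.

def viterbi(emissions, transitions, tags):
    """Return the highest-scoring tag sequence.

    emissions:   list of dicts, one per token, mapping tag -> score
    transitions: dict mapping (prev_tag, tag) -> score
    tags:        list of possible tags
    """
    # Initialize with the first token's emission scores.
    scores = {t: emissions[0][t] for t in tags}
    backpointers = []

    for emit in emissions[1:]:
        new_scores, bp = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: scores[p] + transitions[(p, t)])
            new_scores[t] = scores[best_prev] + transitions[(best_prev, t)] + emit[t]
            bp[t] = best_prev
        scores = new_scores
        backpointers.append(bp)

    # Backtrack from the best-scoring final tag.
    best = max(tags, key=lambda t: scores[t])
    path = [best]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return list(reversed(path))

tags = ["O", "B-PER", "I-PER"]
emissions = [
    {"O": 0.1, "B-PER": 2.0, "I-PER": 0.0},   # e.g. "Ahmet"
    {"O": 0.2, "B-PER": 0.1, "I-PER": 1.5},   # e.g. "Yilmaz"
    {"O": 1.8, "B-PER": 0.0, "I-PER": 0.1},   # e.g. "geldi"
]
transitions = {(p, t): (1.0 if (p, t) == ("B-PER", "I-PER") else 0.0)
               for p in tags for t in tags}
# Penalize I-PER that does not follow a PER tag.
transitions[("O", "I-PER")] = -10.0

print(viterbi(emissions, transitions, tags))  # → ['B-PER', 'I-PER', 'O']
```

The transition scores are what let the CRF enforce valid tag sequences (e.g. no `I-PER` without a preceding `B-PER`), which is why a CRF output layer is preferred over independent per-token softmax predictions in NER.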

