Article

Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

Journal

ISPRS Journal of Photogrammetry and Remote Sensing

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2018.01.023

Keywords

Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis

Funding

  1. Spanish project [TIN2016-79717-R]
  2. CHISTERA project M2CR [PCIN2015-251]
  3. SSF through a grant for the project SymbiCloud
  4. VR starting grant through the Strategic Area for ICT research ELLIIT [2016-05543]
  5. CENIIT grant [18.14]
  6. project AIROBEST [317387, 317388]
  7. Academy of Finland
  8. project MegaMrt2
  9. Electronic Component Systems for European Leadership (ECSEL) Joint Undertaking of the Horizon 2020 European Union funding programme [737494]

Abstract

Designing powerful, discriminative texture features that are robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense, orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late-fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to a consistent improvement over the state of the art for remote sensing scene classification. (C) 2018 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
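The two ideas summarized in the abstract, feeding LBP-coded ("mapped coded") images to a texture CNN stream and fusing it with an RGB stream late in the network, can be illustrated with the minimal sketch below. This is not the authors' implementation: the tiny backbone, the class name TwoStreamLateFusion, and the uniform-LBP settings (P=8, R=1) are illustrative assumptions; the paper's actual TEX-Nets use full deep backbones trained on large-scale labeled data.

# Minimal sketch (assumed, not the paper's code) of LBP-coded input images
# and a late-fusion two-stream classifier.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

def lbp_coded_image(gray, points=8, radius=1):
    """Map a grayscale image to an LBP code map (uniform patterns)."""
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    # Scale codes to [0, 1] so they can be fed to a CNN like ordinary pixels.
    return (codes / codes.max()).astype(np.float32)

def small_cnn(in_channels):
    """Tiny stand-in for a deep backbone (the paper uses full CNNs)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamLateFusion(nn.Module):
    """Late fusion: separate RGB and LBP streams, features concatenated before the classifier."""
    def __init__(self, num_classes=30):      # e.g. 30 aerial scene types in AID
        super().__init__()
        self.rgb_stream = small_cnn(3)        # standard RGB network
        self.lbp_stream = small_cnn(1)        # texture (LBP-coded) network
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, lbp):
        fused = torch.cat([self.rgb_stream(rgb), self.lbp_stream(lbp)], dim=1)
        return self.classifier(fused)

# Usage on dummy data: one 224x224 RGB patch and its LBP-coded counterpart.
rgb = torch.rand(1, 3, 224, 224)
gray = rgb.mean(dim=1).squeeze(0).numpy()
lbp = torch.from_numpy(lbp_coded_image(gray)).unsqueeze(0).unsqueeze(0)
logits = TwoStreamLateFusion()(rgb, lbp)
print(logits.shape)   # torch.Size([1, 30])

Concatenating the pooled features of the two streams before a single classifier corresponds to what the abstract calls late fusion; early fusion would instead stack the LBP-coded map with the RGB channels at the network input.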
