4.7 Article

A Biologically Interpretable Two-Stage Deep Neural Network (BIT-DNN) for Vegetation Recognition From Hyperspectral Imagery

Journal

IEEE Transactions on Geoscience and Remote Sensing
Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TGRS.2021.3058782

Keywords

Feature extraction; Biological system modeling; Deep learning; Vegetation mapping; Data models; Biology; Neural networks; Classification; Hyperspectral images (HSIs); Interpretability

Funding

  1. Biotechnology and Biological Sciences Research Council (BBSRC) [BB/R019983/1, BB/S020969/1]
  2. Newton Fund Institutional Links through the Newton-Ungku Omar Fund partnership - U.K. Department for Business, Energy and Industrial Strategy (BEIS)
  3. Malaysian Industry-Government Group for High Technology
  4. British Council [332438911]

This study proposes a novel interpretable deep learning model capable of achieving both high accuracy and interpretability in hyperspectral image classification. The model extracts and encapsulates the biophysical and biochemical attributes of target entities through prior knowledge and spectral–spatial feature transformation, resulting in improved classification performance and reduced computational complexity.
Spectral–spatial deep learning models have recently proven effective in hyperspectral image (HSI) classification for various earth-monitoring applications such as land cover classification and agricultural monitoring. However, owing to the "black-box" nature of their learned representations, how to explain and interpret the learning process and the model decisions, especially for vegetation classification, remains an open challenge. This study proposes a novel interpretable deep learning model, a biologically interpretable two-stage deep neural network (BIT-DNN), which incorporates a prior-knowledge-based spectral–spatial feature transformation (i.e., one grounded in the biophysical and biochemical attributes of target entities and their hierarchical structures) into the proposed framework, and is capable of achieving both high accuracy and interpretability on HSI-based classification tasks. The proposed model introduces a two-stage feature learning process: in the first stage, an enhanced interpretable feature block extracts the low-level spectral features associated with the biophysical and biochemical attributes of target entities; in the second stage, an interpretable capsule block extracts and encapsulates the high-level joint spectral–spatial features representing the hierarchical structure of these biophysical and biochemical attributes, which gives the model improved classification performance and intrinsic interpretability with reduced computational complexity. We have tested and evaluated the model on four real HSI data sets for four separate tasks (i.e., plant species classification, land cover classification, urban scene recognition, and crop disease recognition). The proposed model has been compared with five state-of-the-art deep learning models. The results demonstrate that the proposed model has competitive advantages in terms of both classification accuracy and model interpretability, especially for vegetation classification.
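The abstract describes the two-stage design only at a high level. The snippet below is a minimal, hypothetical PyTorch-style sketch of that idea, not the authors' BIT-DNN implementation: the class names (SpectralAttributeBlock, CapsuleBlock, TwoStageNet), the layer sizes, the number of attribute channels, and the simplified capsule pooling (no dynamic routing) are all illustrative assumptions.

```python
# Hypothetical sketch of a two-stage "attribute -> capsule" classifier for HSI patches.
# This is NOT the published BIT-DNN; all sizes and layer choices are assumptions.
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    """Standard capsule squashing nonlinearity: keeps direction, bounds length to [0, 1)."""
    norm_sq = (s * s).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class SpectralAttributeBlock(nn.Module):
    """Stage 1 (assumed form): 1x1 convolutions project raw bands onto a small set
    of channels intended to track biophysical/biochemical attributes."""
    def __init__(self, n_bands, n_attributes=16):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, n_attributes, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):          # x: (B, n_bands, H, W)
        return self.project(x)     # (B, n_attributes, H, W)


class CapsuleBlock(nn.Module):
    """Stage 2 (assumed form): joint spectral-spatial convolution, then group the
    feature map into one capsule vector per class; capsule length acts as a score."""
    def __init__(self, n_attributes, n_classes, caps_dim=8):
        super().__init__()
        self.n_classes = n_classes
        self.caps_dim = caps_dim
        self.spatial = nn.Conv2d(n_attributes, n_classes * caps_dim,
                                 kernel_size=3, padding=1)

    def forward(self, x):                              # (B, n_attributes, H, W)
        u = self.spatial(x)                            # (B, n_classes*caps_dim, H, W)
        b, _, h, w = u.shape
        u = u.view(b, self.n_classes, self.caps_dim, h * w).mean(dim=-1)
        v = squash(u)                                  # (B, n_classes, caps_dim)
        return v.norm(dim=-1)                          # capsule length = class score


class TwoStageNet(nn.Module):
    def __init__(self, n_bands, n_classes):
        super().__init__()
        self.stage1 = SpectralAttributeBlock(n_bands)
        self.stage2 = CapsuleBlock(n_attributes=16, n_classes=n_classes)

    def forward(self, x):
        return self.stage2(self.stage1(x))


if __name__ == "__main__":
    # Example: 9x9 hyperspectral patches with 200 bands, 5 hypothetical classes.
    model = TwoStageNet(n_bands=200, n_classes=5)
    scores = model(torch.randn(2, 200, 9, 9))
    print(scores.shape)  # torch.Size([2, 5])
```

Using the capsule-vector length as the class score follows the usual capsule-network convention; in the paper itself, the interpretable blocks are tied to domain-specific vegetation attributes rather than to generic learned channels as in this sketch.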

