Article

A general generative adversarial capsule network for hyperspectral image spectral-spatial classification

Journal

REMOTE SENSING LETTERS
Volume 11, Issue 1, Pages 19-28

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/2150704X.2019.1681598

Keywords

-

A novel generative adversarial capsule network (Caps-GAN) model for hyperspectral image spectral-spatial classification is proposed in this Letter, which can effectively address the scarcity of annotated samples and improve classification performance. In the proposed method, a series of deconvolutional layers is used to generate fake samples that are as realistic as the training samples, conditioned on additional label information, and a 3D capsule network (CapsNet) is designed to discriminate the inputs; by modelling spatial relationships in images, it can achieve higher classification performance than convolutional neural networks (CNNs). Furthermore, the generated labelled samples and the training samples are fed into the discriminator for joint training, so that the trained discriminator can determine both the authenticity and the class label of an input sample. This auxiliary conditional generative adversarial training strategy effectively improves the generalization capability of the capsule network when labelled samples are limited. The Pavia University and Indian Pines images are used to evaluate classification performance, and the overall accuracies of the proposed method for these two datasets achieve and , respectively. The comparative experimental results show that the proposed model improves classification accuracy and provides competitive results compared with state-of-the-art methods, especially when few annotated samples are available.
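The abstract describes an auxiliary conditional (AC-GAN-style) training scheme: a label-conditioned deconvolutional generator and a capsule-network discriminator that jointly predicts authenticity and class. As a rough illustration only, the sketch below implements such a loop in PyTorch with hypothetical 2D layers, patch sizes, and band counts; the paper's actual model uses 3D capsules, and its exact architecture, routing, and hyperparameters are not reproduced here.

# Minimal AC-GAN-style sketch of a Caps-GAN for hyperspectral patches.
# All layer sizes and patch/band dimensions are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 9        # e.g. Pavia University has 9 labelled classes
NOISE_DIM = 64
BANDS = 103            # spectral bands of Pavia University
PATCH = 8              # assumed spatial patch size

class Generator(nn.Module):
    """Deconvolutional generator conditioned on class labels."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NOISE_DIM)
        self.fc = nn.Linear(NOISE_DIM * 2, 128 * 2 * 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 2x2 -> 4x4
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, BANDS, 4, stride=2, padding=1), # 4x4 -> 8x8
            nn.Tanh(),
        )
    def forward(self, z, y):
        h = torch.cat([z, self.embed(y)], dim=1)
        h = self.fc(h).view(-1, 128, 2, 2)
        return self.deconv(h)           # fake patch: (B, BANDS, PATCH, PATCH)

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity (Sabour et al. 2017)."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

class CapsDiscriminator(nn.Module):
    """Capsule-style discriminator with real/fake and class outputs."""
    def __init__(self, caps_dim=8):
        super().__init__()
        self.conv = nn.Conv2d(BANDS, 64, 3, padding=1)
        self.primary = nn.Conv2d(64, 32 * caps_dim, 3, padding=1)
        self.caps_dim = caps_dim
        feat = 32 * PATCH * PATCH       # number of primary capsules
        self.adv_head = nn.Linear(feat * caps_dim, 1)             # real vs fake
        self.cls_head = nn.Linear(feat * caps_dim, NUM_CLASSES)   # class label
    def forward(self, x):
        h = F.relu(self.conv(x))
        caps = self.primary(h).view(x.size(0), -1, self.caps_dim)
        caps = squash(caps).flatten(1)
        return self.adv_head(caps), self.cls_head(caps)

# One joint training step: the discriminator learns authenticity and class together.
G, D = Generator(), CapsDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

real_x = torch.randn(16, BANDS, PATCH, PATCH)       # stand-in labelled patches
real_y = torch.randint(0, NUM_CLASSES, (16,))

# Discriminator step: real samples plus detached fake samples with their labels
z = torch.randn(16, NOISE_DIM)
fake_y = torch.randint(0, NUM_CLASSES, (16,))
fake_x = G(z, fake_y).detach()
adv_r, cls_r = D(real_x)
adv_f, cls_f = D(fake_x)
d_loss = (bce(adv_r, torch.ones_like(adv_r)) + bce(adv_f, torch.zeros_like(adv_f))
          + ce(cls_r, real_y) + ce(cls_f, fake_y))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the adversarial head while matching the conditioned labels
adv_f, cls_f = D(G(z, fake_y))
g_loss = bce(adv_f, torch.ones_like(adv_f)) + ce(cls_f, fake_y)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The two discriminator heads mirror the auxiliary conditional strategy described in the abstract: the adversarial head scores authenticity while the class head forces the capsule features to remain discriminative, which is what lets the generated labelled samples act as extra training data when annotations are scarce.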
