Journal
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
Volume 17, Issue 9, Pages 1573-1577
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LGRS.2019.2949745
Keywords
Feature extraction; Generative adversarial networks; Gallium nitride; Training; Generators; Task analysis; Decoding; Deep learning; generative adversarial network (GAN); multispectral (MS) image; panchromatic (PAN); pansharpening (PNN)
Funding
- National Natural Science Foundation of China [61671312, 61922029]
- Sichuan Science and Technology Program [2018HH0070]
Abstract
Due to the limitations of satellite sensors, it is difficult to acquire a high-resolution multispectral (HRMS) image directly. The aim of pansharpening (PNN) is to fuse the spatial information in a panchromatic (PAN) image with the spectral information in a multispectral (MS) image. Recently, deep learning has drawn much attention, and in the field of remote sensing, several pioneering attempts have been made related to PNN. However, the large volume of remote sensing data yields abundant training samples, which call for a deeper neural network. Most current networks are relatively shallow, raising the risk of detail loss. In this letter, we propose a residual encoder-decoder conditional generative adversarial network (RED-cGAN) for PNN to produce sharpened images with more details. The proposed method combines the idea of an autoencoder with a generative adversarial network (GAN), which can effectively preserve the spatial and spectral information of the PAN and MS images simultaneously. First, a residual encoder-decoder module is adopted to extract multiscale features for yielding pansharpened images and to relieve the training difficulty caused by deepening the network layers. Second, to further enhance the ability of the generator to preserve spatial information, a conditional discriminator network taking the PAN and MS images as input is proposed to encourage the estimated MS images to share the same distribution as the reference HRMS images. Experiments conducted on WorldView-2 (WV2) and WorldView-3 (WV3) images demonstrate that the proposed method provides better results than several state-of-the-art PNN methods.
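The abstract describes two architectural ideas: a generator conditioned on both PAN and MS inputs, with residual (skip) connections to ease the training of a deeper network, and a discriminator that judges a candidate HRMS image jointly with the PAN/MS condition. The shape-level sketch below illustrates those two ideas only; all array sizes, the `upsample` helper, and the `detail` stand-in function are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

# Hypothetical patch shapes: a 4-band MS patch at quarter resolution and a
# single-band PAN patch at full resolution (sizes are illustrative only).
rng = np.random.default_rng(0)
ms = rng.random((4, 16, 16))    # (bands, H/4, W/4)
pan = rng.random((1, 64, 64))   # (1, H, W)

def upsample(x, factor):
    """Nearest-neighbour upsampling along the two spatial axes."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

# Generator condition: upsampled MS stacked with PAN along the band axis,
# so spectral (MS) and spatial (PAN) information enter the network together.
gen_input = np.concatenate([upsample(ms, 4), pan], axis=0)  # (5, 64, 64)

def residual_block(x, f):
    """Residual connection y = x + f(x): the block only has to learn the
    residual detail, which relieves training as layers are deepened."""
    return x + f(x)

# Stand-in for a learned encoder-decoder mapping (shape-preserving here).
detail = lambda x: 0.1 * np.tanh(x)
features = residual_block(gen_input, detail)

# Conditional discriminator input: the candidate HRMS bands stacked with
# the PAN/MS condition, so realism is judged relative to the inputs.
fake_hrms = features[:4]  # pretend pansharpened output (first 4 bands)
disc_input = np.concatenate([fake_hrms, gen_input], axis=0)
print(gen_input.shape, disc_input.shape)  # (5, 64, 64) (9, 64, 64)
```

The concatenation is what makes the discriminator "conditional": it cannot reward an output that looks realistic in isolation but is inconsistent with the PAN and MS inputs it was generated from.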