Article

Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery

Journal

ISPRS Journal of Photogrammetry and Remote Sensing

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2019.08.018

Keywords

Deep learning; CNN; Multi-label; Multi-modal; Indexing; Scene classification; PlanetScope; Sentinel-2; Remote sensing; Ensemble; Sen2Cor; MACCS

Abstract

With the increasing availability of high-resolution satellite imagery, it is important to improve the efficiency and accuracy of satellite image indexing, retrieval and classification. Furthermore, there is a need to utilize all available satellite imagery to identify general land cover types and monitor their changes through time, irrespective of their spatial, spectral, temporal and radiometric resolutions. Therefore, in this study, we developed deep learning models able to efficiently and accurately classify cloud, shadow and land cover scenes in different high-resolution (<10 m) satellite imagery. Specifically, we trained deep convolutional neural network (CNN) models to perform multi-label classification of multi-modal, high-resolution satellite imagery at the scene level. Multi-label classification at the scene level (also known as image indexing), as opposed to the pixel level, allows for faster performance, higher accuracy (although at the cost of detail) and higher generalizability. We investigated the generalization ability (i.e. cross-dataset and geographic independence) of individual and ensemble CNN models trained on multi-modal satellite imagery (i.e. PlanetScope and Sentinel-2). The models trained on PlanetScope imagery collected over the Amazon performed well when applied to PlanetScope and Sentinel-2 imagery collected over the Wet Tropics of Australia, with F-2 scores of 0.72 and 0.69, respectively. Similarly, PlanetScope-based CNN models trained on imagery collected over the Wet Tropics of Australia performed well when applied to Sentinel-2 imagery, with an F-2 score of 0.76, and the reverse scenario yielded the same F-2 score of 0.76. This suggests that our CNN models have high cross-dataset generalization ability and are suitable for classifying cloud, shadow and land cover classes in satellite imagery with resolutions from 3 m (PlanetScope) to 10 m (Sentinel-2). The performance of our CNN models was also comparable to that of state-of-the-art methods (i.e. Sen2Cor and MACCS) developed specifically for classifying cloud and shadow classes in Sentinel-2 imagery. Finally, we show the potential of our CNN models to mask cloud- and shadow-contaminated areas from PlanetScope- and Sentinel-2-derived NDVI time-series.
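The abstract describes scene-level multi-label CNN classification evaluated with the F-2 score, followed by masking of cloud- and shadow-contaminated NDVI observations. The sketch below is not the authors' implementation; it only illustrates how such a pipeline could be set up in PyTorch. The class list, ResNet-18 backbone, 224 × 224 chip size, 0.5 decision threshold and the dummy batch are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code) of multi-label scene
# classification with a CNN, scene-level F-2 evaluation and NDVI masking.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.metrics import fbeta_score

CLASSES = ["cloud", "shadow", "forest", "water", "bare_ground"]  # assumed label set

class SceneClassifier(nn.Module):
    """CNN that outputs one independent logit per scene label."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # any CNN backbone would do
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_classes)

    def forward(self, x):
        return self.backbone(x)  # raw logits; sigmoid applied at loss/prediction time

model = SceneClassifier(len(CLASSES))
criterion = nn.BCEWithLogitsLoss()  # multi-label: sigmoid + binary cross-entropy per class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: 4 RGB scene chips (e.g. resampled PlanetScope or Sentinel-2 tiles)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, len(CLASSES))).float()  # multi-hot scene labels

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# Scene-level F-2 score (recall-weighted), the metric reported in the abstract
preds = (torch.sigmoid(logits) > 0.5).int().numpy()
f2 = fbeta_score(labels.int().numpy(), preds, beta=2,
                 average="samples", zero_division=0)
print(f"F-2 score: {f2:.2f}")

# Masking cloud/shadow-contaminated scenes in an NDVI series (illustrative)
ndvi = torch.rand(4).numpy()  # one NDVI value per scene
contaminated = (preds[:, CLASSES.index("cloud")] == 1) | \
               (preds[:, CLASSES.index("shadow")] == 1)
ndvi[contaminated] = float("nan")  # drop contaminated observations from the time-series
```

In this sketch, each scene chip receives independent per-class probabilities, so a single image can be labelled as both "cloud" and "forest"; scenes flagged as cloud or shadow are then excluded from the NDVI time-series, mirroring the masking use case described in the abstract.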
