Article

Deep Learning From Multiple Crowds: A Case Study of Humanitarian Mapping

Journal

IEEE Transactions on Geoscience and Remote Sensing
Volume 57, Issue 3, Pages 1713-1722

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TGRS.2018.2868748

Keywords

Active learning; deep learning; humanitarian mapping; satellite image; volunteered geographic information (VGI)

Funding

  1. Klaus Tschira Foundation, Heidelberg
Satellite images are widely used in humanitarian mapping, in which buildings, roads, and other ground objects are labeled to support humanitarian aid and economic development. However, this labeling is currently done mostly by volunteers. In this paper, we use deep learning to solve the humanitarian mapping tasks of a mobile application named MapSwipe. Current deep learning techniques, e.g., the convolutional neural network (CNN), can recognize ground objects in satellite images but rely on large numbers of training labels for each specific task. We address this problem by fusing multiple freely accessible sources of crowdsourced geographic data and propose an active-learning-based CNN training framework, named MC-CNN, to handle the quality issues of the labels extracted from these data, including incompleteness (e.g., some kinds of objects are not labeled) and heterogeneity (e.g., different spatial granularities). The method is evaluated on building mapping in South Malawi and road mapping in Guinea, using level-18 satellite images provided by Bing Maps and volunteered geographic information from OpenStreetMap, MapSwipe, and OsmAnd. The results on multiple metrics, including Precision, Recall, F1 Score, and area under the receiver operating characteristic curve, show that MC-CNN can fuse the crowdsourced labels for higher prediction performance and can be successfully applied in MapSwipe for humanitarian mapping, saving 85% of the labeling effort while achieving an overall accuracy of 0.86.
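The active-learning idea summarized in the abstract can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' MC-CNN implementation: the CNN is replaced by a tiny logistic-regression stand-in, the "tiles" are synthetic two-feature points, and the seed size, round count, and query batch size are invented for the demo. The loop structure (train on the labeled pool, query the most uncertain unlabeled tiles for volunteer labeling, retrain) is the generic uncertainty-sampling pattern the paper builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, epochs=200, lr=0.5):
    """Train a logistic-regression 'model' by gradient descent (CNN stand-in)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

# Synthetic "satellite tiles": two features, class = sign of the first feature.
X_pool = rng.normal(size=(500, 2))
y_pool = (X_pool[:, 0] > 0).astype(float)

# Start from a small labeled seed, as if only a few crowd labels existed.
labeled = list(range(10))
for _ in range(5):                          # five active-learning rounds
    w = fit(X_pool[labeled], y_pool[labeled])
    p = predict_proba(X_pool, w)
    # Uncertainty sampling: query the tiles the model is least sure about.
    uncertainty = np.abs(p - 0.5)
    uncertainty[labeled] = np.inf           # never re-query labeled tiles
    query = np.argsort(uncertainty)[:20]
    labeled.extend(query.tolist())          # "volunteers" label the queries

w = fit(X_pool[labeled], y_pool[labeled])   # final model on all queried labels
acc = ((predict_proba(X_pool, w) > 0.5) == y_pool).mean()
print(f"labeled {len(labeled)} of {len(X_pool)} tiles, accuracy {acc:.2f}")
```

Only 110 of the 500 tiles ever need a human label, which mirrors the labor-saving effect the paper reports (at a much larger scale, with real CNNs and real crowdsourced labels).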
