Article

A text and image analysis workflow using citizen science data to extract relevant social media records: Combining red kite observations from Flickr, eBird and iNaturalist

Journal

ECOLOGICAL INFORMATICS
Volume 71

Publisher

ELSEVIER
DOI: 10.1016/j.ecoinf.2022.101782

Keywords

User-generated content; Volunteered geographic information; Data integration; Image content analysis; Convolutional neural networks

Funding

  1. Swiss National Science Foundation project EVA-VGI 2 [186389]
  2. German Research Foundation [424985896, 424966858, 314671965]

There is an urgent need to develop new methods to monitor the state of the environment. One potential approach is to use new data sources, such as User-Generated Content, to augment existing approaches. However, to date, studies typically focus on a single data source and modality. We take a new approach, using citizen science records of sightings of red kites (Milvus milvus) to train and validate a Convolutional Neural Network (CNN) capable of identifying images containing red kites. This CNN is integrated into a sequential workflow which also uses an off-the-shelf bird classifier and text metadata to retrieve observations of red kites in the Chilterns, England. Our workflow reduces an initial set of more than 600,000 images to just 3065 candidate images. Manual inspection of these images shows that our approach has a precision of 0.658. A workflow using only text identifies 14% fewer images than one including image content analysis, and by combining image and text classifiers we achieve almost perfect precision of 0.992. Images retrieved from social media records complement those recorded by citizen scientists spatially and temporally, and our workflow is sufficiently generic that it can easily be transferred to other species.
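The sequential combination of text and image classifiers described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the record structure, the term list, the `image_score` field (a stand-in for the CNN's output probability), and the 0.5 threshold are all assumptions.

```python
# Hypothetical sketch of a two-stage text + image filtering workflow for
# retrieving red kite records. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Record:
    tags: List[str]     # text metadata, e.g. Flickr tags or titles
    image_score: float  # stand-in for a CNN's estimated P(red kite) for the image


# Illustrative species terms; a real workflow would use a curated list.
RED_KITE_TERMS = {"red kite", "milvus milvus", "redkite"}


def text_match(record: Record) -> bool:
    """Stage 1: keep records whose metadata mentions the species."""
    return any(term in tag.lower() for tag in record.tags for term in RED_KITE_TERMS)


def image_match(record: Record, threshold: float = 0.5) -> bool:
    """Stage 2: keep records the (hypothetical) image classifier scores highly."""
    return record.image_score >= threshold


def candidates(records: List[Record]) -> List[Record]:
    """Union of both signals: either modality is enough to retrieve a candidate."""
    return [r for r in records if text_match(r) or image_match(r)]


def high_precision(records: List[Record]) -> List[Record]:
    """Intersection: requiring text AND image agreement trades recall for precision."""
    return [r for r in records if text_match(r) and image_match(r)]
```

The intersection step mirrors the abstract's observation that combining the two classifiers lifts precision from 0.658 to near-perfect, at the cost of discarding records supported by only one modality.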

