Article

Perspectives on crowdsourcing annotations for natural language processing

Journal

Language Resources and Evaluation
Volume 47, Issue 1, Pages 9-31

Publisher

Springer
DOI: 10.1007/s10579-012-9176-1

Keywords

Human computation; Crowdsourcing; NLP; Wikipedia; Mechanical Turk; Games with a purpose; Annotation

Funding

  1. CSIDM from the National Research Foundation (NRF) [CSIDM-200805]


Crowdsourcing has emerged as a new method for obtaining annotations for training models for machine learning. While many variants of this process exist, they largely differ in their methods of motivating subjects to contribute and the scale of their applications. To date, there has yet to be a study that helps the practitioner to decide what form an annotation application should take to best reach its objectives within the constraints of a project. To fill this gap, we provide a faceted analysis of crowdsourcing from a practitioner's perspective, and show how our facets apply to existing published crowdsourced annotation applications. We then summarize how the major crowdsourcing genres fill different parts of this multi-dimensional space, which leads to our recommendations on the potential opportunities crowdsourcing offers to future annotation efforts.

