Article

Detecting Careless Responding in Survey Data Using Stochastic Gradient Boosting

Journal

EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT
Volume 82, Issue 1, Pages 29-56

Publisher

SAGE PUBLICATIONS INC
DOI: 10.1177/00131644211004708

Keywords

careless responding; gradient boosted trees; data cleaning; response times; outlier detection


This study examines careless responding in survey data, a bias that threatens the reliability and validity of psychological measurements. Gradient boosted trees, introduced here as a machine learning technique for identifying careless respondents, outperformed traditional detection mechanisms in a simulation study, but this advantage did not carry over to the empirical study. Existing detection methods may not be robust enough to handle the erratic nature of responses in real-world settings, prompting further research in the field.
Careless responding is a bias in survey responses that disregards the actual item content, threatening the factor structure, reliability, and validity of psychological measurements. Several approaches have been proposed to detect such aberrant responses: probing questions that directly assess test-taking behavior (e.g., bogus items), auxiliary or paradata (e.g., response times), and data-driven statistical techniques (e.g., Mahalanobis distance). In the present study, gradient boosted trees, a state-of-the-art machine learning technique, are introduced to identify careless respondents. The performance of this approach was compared with established techniques from the literature (e.g., statistical outlier methods, consistency analyses, and response pattern functions) using both simulated data and empirical data from a web-based study in which diligent versus careless response behavior was experimentally induced. In the simulation study, gradient boosting machines outperformed traditional detection mechanisms in flagging aberrant responses. However, this advantage did not transfer to the empirical study: both the traditional and the novel detection mechanisms yielded unsatisfactory precision, even though the latter incorporated response times as additional information. Comparing the simulation results with the online study showed that responses in real-world settings are much more erratic than the simulations would suggest. We critically discuss the generalizability of currently available detection methods and provide an outlook on future research on the detection of aberrant response patterns in survey research.
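The approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the simulated data, the fabricated response times, and the feature choices are all assumptions. It uses scikit-learn's `GradientBoostingClassifier` to flag simulated careless respondents, combining raw item responses with a Mahalanobis-distance feature and response-time paradata, mirroring the categories of detection evidence the abstract mentions.

```python
# Hypothetical sketch: detecting simulated careless respondents with
# gradient boosted trees. Data generation and features are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_diligent, n_careless, n_items = 400, 100, 20

# Diligent answers cluster around a latent trait on a 1-5 Likert scale;
# careless answers are uniform random, ignoring item content.
trait = rng.normal(0.0, 1.0, size=(n_diligent, 1))
diligent = np.clip(
    np.round(3 + trait + rng.normal(0.0, 0.7, (n_diligent, n_items))), 1, 5
)
careless = rng.integers(1, 6, size=(n_careless, n_items)).astype(float)
X_items = np.vstack([diligent, careless])
y = np.concatenate([np.zeros(n_diligent), np.ones(n_careless)])

# Mahalanobis distance of each response vector from the sample centroid,
# a classic data-driven screening statistic.
mu = X_items.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X_items, rowvar=False))
d = X_items - mu
mahal = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Fabricated paradata: careless respondents answer faster on average.
resp_time = np.where(
    y == 1, rng.normal(2.0, 0.5, y.size), rng.normal(5.0, 1.5, y.size)
)

X = np.column_stack([X_items, mahal, resp_time])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the simulated careless responses here are cleanly separable (uniform answers plus short response times), the classifier performs well; the paper's point is precisely that real-world responses are far messier than such simulations.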

