Empirical Study on Effects of Self-Correction in Crowdsourced Microtasks

Authors

  • Masaki Kobayashi, University of Tsukuba
  • Hiromi Morita, University of Tsukuba
  • Masaki Matsubara, University of Tsukuba
  • Nobuyuki Shimizu, Yahoo! Japan
  • Atsuyuki Morishima, University of Tsukuba

DOI:

https://doi.org/10.15346/hc.v8i1.1

Keywords:

crowdsourcing, microtasks, worker behavior

Abstract

Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update their results according to the review. Self-correction was proposed as a complementary approach to statistical algorithms, in which workers independently perform the same task. It can provide higher-quality results at low additional cost. However, thus far, its effects have only been demonstrated in simulations, and empirical evaluations are required. In addition, because self-correction provides feedback to workers, an interesting question arises: whether perceptual learning is observed in self-correction tasks. This paper reports our experimental results on self-correction with a real-world crowdsourcing service. We found that: (1) Self-correction is effective for making workers reconsider their judgments. (2) Self-correction is more effective if workers are shown the task results of higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases; self-correction can provide feedback that shows workers how to provide high-quality answers in future tasks. (4) A perceptual learning effect is observed particularly among workers who moderately change their answers in the second stage, which suggests that the learning potential of workers can be measured. These findings imply that requesters and crowdsourcing services can construct a positive loop for improved task results through the self-correction approach. However, (5) no long-term effects of the self-correction task transferred to other similar tasks in two different settings.

Published

2021-03-01

How to Cite

Kobayashi, M., Morita, H., Matsubara, M., Shimizu, N., & Morishima, A. (2021). Empirical Study on Effects of Self-Correction in Crowdsourced Microtasks. Human Computation, 8(1). https://doi.org/10.15346/hc.v8i1.1

Section

Research