Motivational feedback in crowdsourcing: A case study in speech transcription

G. Riccardi, A. Ghosh, S. A. Chowdhury, A. O. Bayer

Research output: Contribution to journal › Conference article › Peer-review

5 Citations (Scopus)

Abstract

Feedback is a widely used strategy for enhancing both human and machine performance. In this paper we investigate the effect of live motivational feedback on motivating crowds and on improving the performance of the crowdsourcing computational model. The feedback allows workers to react in real time and to review past actions (e.g. word deletions), and thus to improve their performance on the current and future (sub)tasks. The feedback signal can be controlled via clean (e.g. expert) supervision or noisy supervision, trading off cost against the target performance of the crowdsourced task. The feedback signal is designed to enable crowd workers to improve their performance at the (sub)task level. The type and performance of the feedback signal are evaluated in the context of a speech transcription task. The Amazon Mechanical Turk (AMT) platform is used to transcribe speech utterances from different corpora. We show that under both clean (expert) and noisy (worker/turker) real-time feedback conditions, crowd workers provide significantly more accurate transcriptions in a shorter time.
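To illustrate the kind of signal the abstract describes, the sketch below (an assumption for illustration, not the paper's implementation) computes word-level feedback for a worker's transcript by aligning it against a supervision transcript — an expert reference in the "clean" condition, or another worker's output in the "noisy" condition — and reports missed, extra, and mistyped words.

```python
# Hypothetical sketch: word-level transcription feedback via
# Levenshtein alignment (dynamic programming) against a supervision
# transcript. Function and message names are illustrative assumptions.

def align_counts(reference, hypothesis):
    """Count (substitutions, deletions, insertions) between two
    transcripts using word-level edit-distance alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion (missed word)
                          d[i][j - 1] + 1,          # insertion (extra word)
                          d[i - 1][j - 1] + cost)   # match / substitution
    # Backtrace to classify each error
    subs = dels = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])):
            subs += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return subs, dels, ins

def feedback_message(reference, hypothesis):
    """Turn alignment counts into a short live-feedback string."""
    s, d, i = align_counts(reference, hypothesis)
    if s + d + i == 0:
        return "Perfect transcription!"
    return f"{d} word(s) missing, {i} extra, {s} mistyped - please review."
```

In a live setting, such a message could be shown after each (sub)task; in the noisy condition, the "reference" would itself be a turker transcript, making the supervision cheaper but less reliable.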

Original language: English
Pages (from-to): 1111-1115
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2013
Externally published: Yes
Event: 14th Annual Conference of the International Speech Communication Association, INTERSPEECH 2013 - Lyon, France
Duration: 25 Aug 2013 - 29 Aug 2013

Keywords

  • Crowdsourcing
  • Feedback systems
  • Speech transcription
