Crowdsourcing for book search evaluation: Impact of HIT design on comparative system ranking

Gabriella Kazai*, Jaap Kamps, Marijn Koolen, Natasa Milic-Frayling

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

120 Citations (Scopus)

Abstract

The evaluation of information retrieval (IR) systems over special collections, such as large book repositories, is out of reach of traditional methods that rely upon editorial relevance judgments. Increasingly, the use of crowdsourcing to collect relevance labels has been regarded as a viable alternative that scales with modest costs. However, crowdsourcing suffers from undesirable worker practices and low-quality contributions. In this paper we investigate the design and implementation of effective crowdsourcing tasks in the context of book search evaluation. We observe the impact of aspects of the Human Intelligence Task (HIT) design on the quality of relevance labels provided by the crowd. We assess the output in terms of label agreement with a gold standard data set and observe the effect of the crowdsourced relevance judgments on the resulting system rankings. This enables us to observe the effect of crowdsourcing on the entire IR evaluation process. Using the test set and experimental runs from the INEX 2010 Book Track, we find that varying the HIT design and the pooling and document ordering strategies leads to considerable differences in agreement with the gold set labels. We then observe the impact of the crowdsourced relevance label sets on the relative system rankings using four IR performance metrics. System rankings based on MAP and Bpref remain less affected by different label sets, while Precision@10 and nDCG@10 lead to dramatically different system rankings, especially for labels acquired from HITs with weaker quality controls. Overall, we find that crowdsourcing can be an effective tool for the evaluation of IR systems, provided that care is taken when designing the HITs.
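
As a rough illustration of the evaluation pipeline described in the abstract, the Python sketch below scores systems with Precision@10 and nDCG@10 under two label sets (a gold standard and a crowdsourced set), measures simple label agreement, and compares the resulting system rankings with Kendall's tau. This is a minimal sketch under stated assumptions, not the authors' code: the toy runs, the binary labels, and the function names (precision_at_k, ndcg_at_k, label_agreement, rank_systems, kendall_tau) are all illustrative.

```python
"""Minimal sketch: compare system rankings under gold vs. crowdsourced labels."""

from math import log2


def precision_at_k(ranked_docs, rels, k=10):
    """Fraction of the top-k retrieved documents judged relevant."""
    top = ranked_docs[:k]
    return sum(rels.get(d, 0) > 0 for d in top) / k


def ndcg_at_k(ranked_docs, rels, k=10):
    """nDCG@k with binary gains and log2 rank discounting."""
    gains = [rels.get(d, 0) for d in ranked_docs[:k]]
    dcg = sum(g / log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(rels.values(), reverse=True)[:k]
    idcg = sum(g / log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0


def label_agreement(crowd, gold):
    """Proportion of documents judged by both sets that received the same label."""
    shared = set(crowd) & set(gold)
    if not shared:
        return 0.0
    return sum(crowd[d] == gold[d] for d in shared) / len(shared)


def rank_systems(runs, rels, metric):
    """Order systems by their mean metric score over topics (best first)."""
    scores = {
        sys: sum(metric(run[topic], rels[topic]) for topic in run) / len(run)
        for sys, run in runs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)


def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings of the same set of systems."""
    pos_b = {s: i for i, s in enumerate(rank_b)}
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            if pos_b[rank_a[i]] < pos_b[rank_a[j]]:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)


# Hypothetical toy data: runs[system][topic] = ranked doc ids,
# labels[topic][doc] = binary relevance judgment.
runs = {
    "sysA": {"t1": ["d1", "d2", "d3"]},
    "sysB": {"t1": ["d3", "d2", "d1"]},
}
gold = {"t1": {"d1": 1, "d3": 0}}
crowd = {"t1": {"d1": 1, "d3": 1}}

rank_gold = rank_systems(runs, gold, ndcg_at_k)
rank_crowd = rank_systems(runs, crowd, ndcg_at_k)
print("label agreement:", label_agreement(crowd["t1"], gold["t1"]))
print("rank correlation (tau):", kendall_tau(rank_gold, rank_crowd))
```

A ranking-level comparison like this is what makes metric sensitivity visible: shallow metrics such as Precision@10 and nDCG@10 depend heavily on which individual documents are labeled relevant, whereas MAP and Bpref average over more of the ranking and absorb more label noise.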

Original language: English
Title of host publication: SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: Association for Computing Machinery
Pages: 205-214
Number of pages: 10
ISBN (Print): 9781450309349
DOIs
Publication status: Published - 2011
Externally published: Yes
Event: 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2011 - Beijing, China
Duration: 24 Jul 2011 - 28 Jul 2011

Publication series

Name: SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval

Conference

Conference: 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2011
Country/Territory: China
City: Beijing
Period: 24/07/11 - 28/07/11

Keywords

  • Book search
  • Crowdsourcing quality
  • Prove it
