TY - GEN
T1 - Crowdsourcing software evaluation
AU - Sherief, Nada
AU - Jiang, Nan
AU - Hosseini, Mahmood
AU - Phalp, Keith
AU - Ali, Raian
PY - 2014
Y1 - 2014
N2 - Crowdsourcing is an emerging online paradigm for problem solving which involves a large number of people, often recruited on a voluntary basis and rewarded with tangible or intangible incentives. It harnesses the power of the crowd to minimize costs and to solve problems which inherently require a large, decentralized and diverse crowd. In this paper, we advocate the potential of crowdsourcing for software evaluation, especially in the case of complex and highly variable software systems which operate in diverse, even unpredictable, contexts. Through their iterative feedback, the crowd can enrich the developers' knowledge about software evaluation and keep it up to date. Although this seems promising, crowdsourcing evaluation introduces a new range of challenges, mainly concerning how to organize the crowd and provide the right platforms to obtain and process their input. We focus on the activity of obtaining evaluation feedback from the crowd and conduct two focus groups to understand the various aspects of such an activity. We finally report a set of challenges to address in order to realize correct and efficient crowdsourcing mechanisms for software evaluation.
KW - Crowdsourcing
KW - Software evaluation
KW - User feedback
UR - http://www.scopus.com/inward/record.url?scp=84905456476&partnerID=8YFLogxK
U2 - 10.1145/2601248.2601300
DO - 10.1145/2601248.2601300
M3 - Conference contribution
AN - SCOPUS:84905456476
SN - 9781450324762
T3 - ACM International Conference Proceeding Series
BT - 18th International Conference on Evaluation and Assessment in Software Engineering, EASE 2014
PB - Association for Computing Machinery
T2 - 18th International Conference on Evaluation and Assessment in Software Engineering, EASE 2014
Y2 - 12 May 2014 through 14 May 2014
ER -