TY - GEN
T1 - Overview of the CLEF-2019 CheckThat! Lab
T2 - 10th International Conference of the CLEF Association, CLEF 2019
AU - Elsayed, Tamer
AU - Nakov, Preslav
AU - Barrón-Cedeño, Alberto
AU - Hasanain, Maram
AU - Suwaileh, Reem
AU - Da San Martino, Giovanni
AU - Atanasova, Pepa
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019
Y1 - 2019
AB - We present an overview of the second edition of the CheckThat! Lab at CLEF 2019. The lab featured two tasks in two different languages: English and Arabic. Task 1 (English) challenged the participating systems to predict which claims in a political debate or speech should be prioritized for fact-checking. Task 2 (Arabic) asked participants to (A) rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) identify useful passages from these pages, and (D) use the useful pages to predict the claim’s factuality. CheckThat! provided a full evaluation framework, consisting of data in English (derived from fact-checking sources) and Arabic (gathered and annotated from scratch), and evaluation based on mean average precision (MAP) and normalized discounted cumulative gain (nDCG) for ranking, and F$$_1$$ for classification. A total of 47 teams registered to participate in this lab, and 14 of them actually submitted runs (compared to 9 last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. As for Task 2, learning-to-rank was used by the highest-scoring runs for subtask A, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and automatic claim verification.
KW - Check-worthiness estimation
KW - Computational journalism
KW - Evidence-based verification
KW - Fact-checking
KW - Fake news detection
KW - Veracity
UR - http://www.scopus.com/inward/record.url?scp=85072834528&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-28577-7_25
DO - 10.1007/978-3-030-28577-7_25
M3 - Conference contribution
AN - SCOPUS:85072834528
SN - 9783030285760
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 301
EP - 321
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction - 10th International Conference of the CLEF Association, CLEF 2019, Proceedings
A2 - Crestani, Fabio
A2 - Braschler, Martin
A2 - Savoy, Jacques
A2 - Rauber, Andreas
A2 - Müller, Henning
A2 - Losada, David E.
A2 - Heinatz Bürki, Gundula
A2 - Cappellato, Linda
A2 - Ferro, Nicola
PB - Springer Verlag
Y2 - 9 September 2019 through 12 September 2019
ER -