TY - GEN
T1 - Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media
T2 - 11th Conference and Labs of the Evaluation Forum, CLEF 2020
AU - Barrón-Cedeño, Alberto
AU - Elsayed, Tamer
AU - Nakov, Preslav
AU - Da San Martino, Giovanni
AU - Hasanain, Maram
AU - Suwaileh, Reem
AU - Haouari, Fatima
AU - Babulkov, Nikolay
AU - Hamdan, Bayan
AU - Nikolov, Alex
AU - Shaar, Shaden
AU - Ali, Zien Sheikh
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in two languages: English and Arabic. The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification. The lab is rounded out by Task 5 on check-worthiness estimation in political debates and speeches. A total of 67 teams registered to participate in the lab (up from 47 at CLEF 2019), and 23 of them actually submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over the baselines on all tasks. Here we describe the setup of the tasks, present the evaluation results, summarize the approaches used by the participants, and discuss some lessons learned. Last but not least, we release to the research community all datasets from the lab, as well as the evaluation scripts, which should enable further research on the important tasks of check-worthiness estimation and automatic claim verification.
KW - Check-worthiness estimation
KW - Computational journalism
KW - Detecting previously fact-checked claims
KW - Evidence-based verification
KW - Fact-checking
KW - Social media verification
KW - Veracity
UR - http://www.scopus.com/inward/record.url?scp=85092207715&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58219-7_17
DO - 10.1007/978-3-030-58219-7_17
M3 - Conference contribution
AN - SCOPUS:85092207715
SN - 9783030582180
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 215
EP - 236
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction - 11th International Conference of the CLEF Association, CLEF 2020, Proceedings
A2 - Arampatzis, Avi
A2 - Kanoulas, Evangelos
A2 - Tsikrika, Theodora
A2 - Vrochidis, Stefanos
A2 - Joho, Hideo
A2 - Lioma, Christina
A2 - Eickhoff, Carsten
A2 - Névéol, Aurélie
A2 - Cappellato, Linda
A2 - Ferro, Nicola
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 22 September 2020 through 25 September 2020
ER -