TY - JOUR
T1 - Overview of the CLEF-2022 CheckThat! Lab Task 1 on Identifying Relevant Claims in Tweets
AU - Nakov, Preslav
AU - Barrón-Cedeño, Alberto
AU - Da San Martino, Giovanni
AU - Alam, Firoj
AU - Míguez, Rubén
AU - Caselli, Tommaso
AU - Kutlu, Mucahid
AU - Zaghouani, Wajdi
AU - Li, Chengkai
AU - Shaar, Shaden
AU - Mubarak, Hamdy
AU - Nikolov, Alex
AU - Kartal, Yavuz Selim
N1 - Publisher Copyright:
© 2022 Copyright for this paper by its authors.
PY - 2022
Y1 - 2022
AB - We present an overview of CheckThat! lab 2022 Task 1, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). Task 1 asked participants to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics in six languages: Arabic, Bulgarian, Dutch, English, Spanish, and Turkish. A total of 19 teams participated, and most submissions managed to achieve sizable improvements over the baselines using Transformer-based models such as BERT and GPT-3. Across the four subtasks, approaches that targeted multiple languages (be it individually or in conjunction) in general obtained the best performance. We describe the dataset and the task setup, including the evaluation settings, and we give a brief overview of the participating systems. As usual in the CheckThat! lab, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research on finding relevant tweets that can help different stakeholders such as fact-checkers, journalists, and policymakers.
KW - COVID-19
KW - Check-Worthiness Estimation
KW - Computational Journalism
KW - Fact-Checking
KW - Social Media Verification
KW - Veracity
UR - http://www.scopus.com/inward/record.url?scp=85136928741&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85136928741
SN - 1613-0073
VL - 3180
SP - 368
EP - 392
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2022 Conference and Labs of the Evaluation Forum, CLEF 2022
Y2 - 5 September 2022 through 8 September 2022
ER -