TY - JOUR
T1 - Overview of the CLEF-2021 CheckThat! Lab Task 1 on check-worthiness estimation in tweets and political debates
AU - Shaar, Shaden
AU - Hasanain, Maram
AU - Hamdan, Bayan
AU - Ali, Zien Sheikh
AU - Haouari, Fatima
AU - Nikolov, Alex
AU - Kutlu, Mucahid
AU - Kartal, Yavuz Selim
AU - Alam, Firoj
AU - da San Martino, Giovanni
AU - Barrón-Cedeño, Alberto
AU - Míguez, Rubén
AU - Beltrán, Javier
AU - Elsayed, Tamer
AU - Nakov, Preslav
N1 - Publisher Copyright:
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2021
Y1 - 2021
N2 - We present an overview of Task 1 of the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The task asks to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics in five languages: Arabic, Bulgarian, English, Spanish, and Turkish. A total of 15 teams participated in this task and most submissions managed to achieve sizable improvements over the baselines using Transformer-based models such as BERT and RoBERTa. Here, we describe the process of data collection and the task setup, including the evaluation measures, and we give a brief overview of the participating systems. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in check-worthiness estimation for tweets and political debates.
KW - COVID-19
KW - Check-worthiness estimation
KW - Computational journalism
KW - Detecting previously fact-checked claims
KW - Fact-checking
KW - Social media verification
KW - Veracity
KW - Verified claims retrieval
UR - http://www.scopus.com/inward/record.url?scp=85113440372&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85113440372
SN - 1613-0073
VL - 2936
SP - 369
EP - 392
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021
Y2 - 21 September 2021 through 24 September 2021
ER -