TY - GEN
T1 - Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
AU - Nakov, Preslav
AU - Da San Martino, Giovanni
AU - Elsayed, Tamer
AU - Barrón-Cedeño, Alberto
AU - Míguez, Rubén
AU - Shaar, Shaden
AU - Alam, Firoj
AU - Haouari, Fatima
AU - Hasanain, Maram
AU - Mansour, Watheq
AU - Hamdan, Bayan
AU - Ali, Zien Sheikh
AU - Babulkov, Nikolay
AU - Nikolov, Alex
AU - Shahi, Gautam Kishore
AU - Struß, Julia Maria
AU - Mandl, Thomas
AU - Kutlu, Mucahid
AU - Kartal, Yavuz Selim
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality and covers Arabic, Bulgarian, English, Spanish, and Turkish. Task 1 asks systems to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics (in all five languages). Task 2 asks systems to determine whether a claim in a tweet can be verified using a set of previously fact-checked claims (in Arabic and English). Task 3 asks systems to predict the veracity of a news article and its topical domain (in English). The evaluation is based on mean average precision or precision at rank k for the ranking tasks, and on macro-F1 for the classification tasks. This was the most popular CLEF-2021 lab in terms of team registrations, with 132 registered teams. Nearly one-third of them participated: 15, 5, and 25 teams submitted official runs for Tasks 1, 2, and 3, respectively.
KW - COVID-19
KW - Check-worthiness estimation
KW - Disinformation
KW - Fact-checking
KW - Fake news detection
KW - Misinformation
KW - Verified claim retrieval
UR - http://www.scopus.com/inward/record.url?scp=85115877178&partnerID=8YFLogxK
DO - 10.1007/978-3-030-85251-1_19
M3 - Conference contribution
AN - SCOPUS:85115877178
SN - 9783030852504
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 264
EP - 291
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction - 12th International Conference of the CLEF Association, CLEF 2021, Proceedings
A2 - Candan, K. Selçuk
A2 - Ionescu, Bogdan
A2 - Goeuriot, Lorraine
A2 - Larsen, Birger
A2 - Müller, Henning
A2 - Joly, Alexis
A2 - Maistro, Maria
A2 - Piroi, Florina
A2 - Faggioli, Guglielmo
A2 - Ferro, Nicola
PB - Springer Science and Business Media Deutschland GmbH
T2 - 12th International Conference of the CLEF Association, CLEF 2021
Y2 - 21 September 2021 through 24 September 2021
ER -