TY - GEN
T1 - Overview of the CLEF–2023 CheckThat! Lab on Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority of News Articles and Their Source
AU - Barrón-Cedeño, Alberto
AU - Alam, Firoj
AU - Galassi, Andrea
AU - Da San Martino, Giovanni
AU - Nakov, Preslav
AU - Elsayed, Tamer
AU - Azizov, Dilshod
AU - Caselli, Tommaso
AU - Cheema, Gullal S.
AU - Haouari, Fatima
AU - Hasanain, Maram
AU - Kutlu, Mucahid
AU - Li, Chengkai
AU - Ruggeri, Federico
AU - Struß, Julia Maria
AU - Zaghouani, Wajdi
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023/9/11
Y1 - 2023/9/11
N2 - We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom into some new problems and for the first time we offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item —text or text plus image— is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of the news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting of news media. Finally, Task 5 looks at identifying authorities in Twitter that could help verify a given target claim. For a second year, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations: 127 teams. About one-third of them (a total of 37) actually participated.
AB - We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom into some new problems and for the first time we offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item —text or text plus image— is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of the news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting of news media. Finally, Task 5 looks at identifying authorities in Twitter that could help verify a given target claim. For a second year, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations: 127 teams. About one-third of them (a total of 37) actually participated.
KW - Authority Finding
KW - Check-Worthiness
KW - Fact Checking
KW - Factuality of Reporting
KW - Political Bias
KW - Subjectivity
UR - http://www.scopus.com/inward/record.url?scp=85165624423&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-42448-9_20
DO - 10.1007/978-3-031-42448-9_20
M3 - Conference contribution
AN - SCOPUS:85165624423
SN - 9783031424472
VL - 14163
T3 - Lecture Notes in Computer Science
SP - 251
EP - 275
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction, CLEF 2023
A2 - Arampatzis, A
A2 - Kanoulas, E
A2 - Tsikrika, T
A2 - Vrochidis, S
A2 - Giachanou, A
A2 - Li, D
A2 - Aliannejadi, M
A2 - Vlachos, M
A2 - Faggioli, G
A2 - Ferro, N
PB - Springer Science and Business Media Deutschland GmbH
T2 - Proceedings of the 14th International Conference of the Cross-Language Evaluation Forum for European Languages, CLEF 2023
Y2 - 18 September 2023 through 21 September 2023
ER -