TY - GEN
T1 - SemEval-2016 Task 3: Community Question Answering
T2 - 10th International Workshop on Semantic Evaluation, SemEval 2016
AU - Nakov, Preslav
AU - Màrquez, Lluís
AU - Moschitti, Alessandro
AU - Magdy, Walid
AU - Mubarak, Hamdy
AU - Freihat, Abed Alhakim
AU - Glass, James
AU - Randeree, Bilal
N1 - Publisher Copyright:
© 2016 Association for Computational Linguistics.
PY - 2016
Y1 - 2016
N2 - This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question-Comment Similarity (subtask A), Question-Question Similarity (B), and Question-External Comment Similarity (C). For Arabic, we had another subtask: Rerank the correct answers for a new question (D). Eighteen teams participated in the task, submitting a total of 95 runs (38 primary and 57 contrastive) for the four subtasks. A variety of approaches and features were used by the participating systems to address the different subtasks, which are summarized in this paper. The best systems achieved an official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and D, respectively. These scores are significantly better than those for the baselines that we provided. For subtask A, the best system improved over the 2015 winner by 3 points absolute in terms of Accuracy.
AB - This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question-Comment Similarity (subtask A), Question-Question Similarity (B), and Question-External Comment Similarity (C). For Arabic, we had another subtask: Rerank the correct answers for a new question (D). Eighteen teams participated in the task, submitting a total of 95 runs (38 primary and 57 contrastive) for the four subtasks. A variety of approaches and features were used by the participating systems to address the different subtasks, which are summarized in this paper. The best systems achieved an official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and D, respectively. These scores are significantly better than those for the baselines that we provided. For subtask A, the best system improved over the 2015 winner by 3 points absolute in terms of Accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85032305015&partnerID=8YFLogxK
U2 - 10.18653/v1/s16-1083
DO - 10.18653/v1/s16-1083
M3 - Conference contribution
AN - SCOPUS:85032305015
T3 - SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
SP - 525
EP - 545
BT - SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
PB - Association for Computational Linguistics (ACL)
Y2 - 16 June 2016 through 17 June 2016
ER -