TY - GEN
T1 - AIMA at SemEval-2024 Task 3
T2 - 18th International Workshop on Semantic Evaluation, SemEval 2024, co-located with the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL 2024
AU - Kure, Alireza Ghahramani
AU - Dehghani, Mahshid
AU - Abootorabi, Mohammad Mahdi
AU - Ghazizadeh, Nona
AU - Dalili, Seyed Arshan
AU - Asgari, Ehsaneddin
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024/6
Y1 - 2024/6
N2 - SemEval-2024 Task 3 presents two subtasks focusing on emotion-cause pair extraction in conversational contexts. Subtask 1 concerns the extraction of textual emotion-cause pairs, where causes are defined and annotated as textual spans within the conversation. Subtask 2 extends the analysis to multimodal cues, including language, audio, and vision, acknowledging instances where causes are not exclusively represented in the textual data. Nevertheless, our model addresses Subtask 2 with the same architecture as Subtask 1, relying solely on textual and linguistic cues. Our architecture is organized into three main segments: (i) embedding extraction, (ii) cause-pair extraction and emotion classification, and (iii) post-pair-extraction cause analysis using question answering (QA). Leveraging advanced techniques and task-specific fine-tuning, our approach unravels complex conversational dynamics and identifies causality in emotions. Our team, AIMA (MotoMoto on the leaderboard), performed strongly in the SemEval-2024 Task 3 competition, ranking 10th in Subtask 1 and 6th in Subtask 2 out of 23 teams. The code for our model implementation is available at https://github.com/language-ml/SemEval2024-Task3.
AB - SemEval-2024 Task 3 presents two subtasks focusing on emotion-cause pair extraction in conversational contexts. Subtask 1 concerns the extraction of textual emotion-cause pairs, where causes are defined and annotated as textual spans within the conversation. Subtask 2 extends the analysis to multimodal cues, including language, audio, and vision, acknowledging instances where causes are not exclusively represented in the textual data. Nevertheless, our model addresses Subtask 2 with the same architecture as Subtask 1, relying solely on textual and linguistic cues. Our architecture is organized into three main segments: (i) embedding extraction, (ii) cause-pair extraction and emotion classification, and (iii) post-pair-extraction cause analysis using question answering (QA). Leveraging advanced techniques and task-specific fine-tuning, our approach unravels complex conversational dynamics and identifies causality in emotions. Our team, AIMA (MotoMoto on the leaderboard), performed strongly in the SemEval-2024 Task 3 competition, ranking 10th in Subtask 1 and 6th in Subtask 2 out of 23 teams. The code for our model implementation is available at https://github.com/language-ml/SemEval2024-Task3.
UR - http://www.scopus.com/inward/record.url?scp=85201939875&partnerID=8YFLogxK
M3 - Conference contribution
T3 - SemEval 2024 - 18th International Workshop on Semantic Evaluation, Proceedings of the Workshop
SP - 1698
EP - 1703
BT - SemEval 2024 - 18th International Workshop on Semantic Evaluation, Proceedings of the Workshop
A2 - Ojha, Atul Kr.
A2 - Doğruöz, A. Seza
A2 - Madabushi, Harish Tayyar
A2 - Da San Martino, Giovanni
A2 - Rosenthal, Sara
A2 - Rosá, Aiala
PB - Association for Computational Linguistics (ACL)
Y2 - 20 June 2024 through 21 June 2024
ER -