TY - JOUR
T1 - Rumour verification through recurring information and an inner-attention mechanism
AU - Aker, Ahmet
AU - Sliwa, Alfred
AU - Dalvi, Fahim
AU - Bontcheva, Kalina
N1 - Publisher Copyright:
© 2019 Elsevier B.V.
PY - 2019/9
Y1 - 2019/9
N2 - Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone, without any additional information such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on a benchmark dataset (RumourEval2017) by 3% accuracy and 6% F1, reaching 60.7% accuracy and 61.6% F1. We also compare our attention-based method to two similar models that do not make use of recurring terms. The attention-based method guided by frequent recurring terms outperforms these baselines on the same dataset, indicating that the recurring terms injected by the attention mechanism have a high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is highly competitive with the baselines on the newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.
AB - Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone, without any additional information such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on a benchmark dataset (RumourEval2017) by 3% accuracy and 6% F1, reaching 60.7% accuracy and 61.6% F1. We also compare our attention-based method to two similar models that do not make use of recurring terms. The attention-based method guided by frequent recurring terms outperforms these baselines on the same dataset, indicating that the recurring terms injected by the attention mechanism have a high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is highly competitive with the baselines on the newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.
KW - Inner Attention Model
KW - Recurring Terms in Rumours
KW - Rumour Verification
UR - http://www.scopus.com/inward/record.url?scp=85071501184&partnerID=8YFLogxK
U2 - 10.1016/j.osnem.2019.07.001
DO - 10.1016/j.osnem.2019.07.001
M3 - Article
AN - SCOPUS:85071501184
SN - 2468-6964
VL - 13
JO - Online Social Networks and Media
JF - Online Social Networks and Media
M1 - 100045
ER -