Rumour verification through recurring information and an inner-attention mechanism

Ahmet Aker*, Alfred Sliwa, Fahim Dalvi, Kalina Bontcheva

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone, without any additional information such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on the benchmark dataset (RumourEval2017) by 3% in accuracy and 6% in F1, reaching 60.7% accuracy and 61.6% F1. We also compare our attention-based method to two similar models that do not make use of recurring terms. The attention-based method guided by frequent recurring terms outperforms these baselines on the same dataset, indicating that the recurring terms injected by the attention mechanism have a high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is highly competitive with the baselines on the newly released RumourEval2019 dataset, and it also achieves the best performance on classifying fake and legitimate news headlines.
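To illustrate the idea described in the abstract, the sketch below shows one plausible way an inner-attention classifier over the source rumour could be biased towards a lexicon of frequent recurring terms from past rumours. It is not the authors' released code: the encoder choice (a bidirectional GRU), the dimensions, the fixed bias value and the helper names are illustrative assumptions.

```python
# Hedged sketch: inner attention over the source rumour, with attention scores
# biased towards tokens that appear in a recurring-term lexicon mined from
# past rumours. All hyperparameters below are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurringTermAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.attn_scorer = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, recurring_mask):
        # token_ids:      (batch, seq_len) word indices of the source rumour
        # recurring_mask: (batch, seq_len) 1.0 where the token is one of the
        #                 frequent recurring terms from past rumours, else 0.0
        embedded = self.embedding(token_ids)
        hidden, _ = self.encoder(embedded)             # (batch, seq, 2*hidden)

        scores = self.attn_scorer(hidden).squeeze(-1)  # (batch, seq)
        # Inner attention guided by recurring terms: add a bias to the raw
        # scores of tokens matching the lexicon (bias value is an assumption).
        scores = scores + 2.0 * recurring_mask
        weights = F.softmax(scores, dim=-1)            # (batch, seq)

        # Attention-weighted sum of hidden states -> rumour representation
        context = torch.bmm(weights.unsqueeze(1), hidden).squeeze(1)
        return self.classifier(context)                # true / false / unverified


if __name__ == "__main__":
    model = RecurringTermAttentionClassifier(vocab_size=5000)
    tokens = torch.randint(1, 5000, (2, 20))           # toy batch of rumours
    mask = (torch.rand(2, 20) > 0.8).float()           # toy recurring-term mask
    logits = model(tokens, mask)
    print(logits.shape)                                # torch.Size([2, 3])
```

Because the mask is the only input derived from past rumours, this sketch mirrors the abstract's claim that classification uses the source rumour text alone, without replies or extra feature engineering.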

Original language: English
Article number: 100045
Journal: Online Social Networks and Media
Volume: 13
DOIs
Publication status: Published - Sept 2019

Keywords

  • Inner Attention Model
  • Recurring Terms in Rumours
  • Rumour Verification

