Detecting toxicity triggers in online discussions

Hind Almerekhi, Bernard J. Jansen, Haewoon Kwak, Joni Salminen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

29 Citations (Scopus)

Abstract

Despite the considerable interest in the detection of toxic comments, there has been little research investigating the causes, i.e., the triggers, of toxicity. In this work, we first propose a formal definition of triggers of toxicity in online communities. We then build an LSTM neural network model using textual features of comments and, based on a comprehensive review of previous literature, incorporate topical and sentiment shifts in interactions as additional features. Our model achieves an average accuracy of 82.5% in detecting toxicity triggers across diverse Reddit communities.
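The abstract describes an LSTM classifier over comment text combined with topical and sentiment shift features. The paper does not include code; the following is a minimal illustrative sketch in Keras, assuming a two-input architecture (a token sequence plus a small hand-crafted shift feature vector) and a binary trigger/non-trigger label. All names, dimensions, and feature choices here are assumptions, not the authors' reported configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed maximum comment length in tokens
AUX_DIM = 2          # assumed auxiliary features, e.g., sentiment shift and topic shift scores

# Text branch: token ids -> embedding -> LSTM
text_in = layers.Input(shape=(MAX_LEN,), name="comment_tokens")
x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
x = layers.LSTM(64)(x)

# Auxiliary branch: shift features computed for the parent-reply exchange
aux_in = layers.Input(shape=(AUX_DIM,), name="shift_features")

# Combine both branches and classify: 1 = comment triggers toxic replies, 0 = it does not
h = layers.Concatenate()([x, aux_in])
h = layers.Dense(32, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="is_trigger")(h)

model = Model(inputs=[text_in, aux_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy arrays only to illustrate the expected input shapes (not real data)
tokens = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
shifts = np.random.rand(8, AUX_DIM).astype("float32")
labels = np.random.randint(0, 2, size=(8, 1))
model.fit([tokens, shifts], labels, epochs=1, verbose=0)
```

This sketch only mirrors the general idea of combining learned textual representations with interaction-level shift features; the actual feature extraction, hyperparameters, and training setup are described in the paper itself.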

Original language: English
Title of host publication: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media
Publisher: Association for Computing Machinery, Inc
Pages: 291-292
Number of pages: 2
ISBN (Electronic): 9781450368858
DOIs
Publication status: Published - 12 Sept 2019
Event: 30th ACM Conference on Hypertext and Social Media, HT 2019 - Hof, Germany
Duration: 17 Sept 2019 - 20 Sept 2019

Publication series

Name: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media

Conference

Conference: 30th ACM Conference on Hypertext and Social Media, HT 2019
Country/Territory: Germany
City: Hof
Period: 17/09/19 - 20/09/19

Keywords

  • Neural networks
  • Reddit
  • Social media
  • Toxicity
  • Trigger detection
