Adversarial NLP for Social Network Applications: Attacks, Defenses, and Research Directions

Izzat Alsmadi, Kashif Ahmad*, Mahmoud Nazzal, Firoj Alam, Ala Al-Fuqaha, Abdallah Khreishah, Abdulelah Algosaibi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

The growing use of social media has led to the development of several machine learning (ML) and natural language processing (NLP) tools to process the unprecedented amount of social media content and make actionable decisions. However, these ML and NLP algorithms have been widely shown to be vulnerable to adversarial attacks. These vulnerabilities allow adversaries to launch a diversified set of adversarial attacks on these algorithms across different applications of social media text processing. In this article, we provide a comprehensive review of the main approaches to adversarial attacks and defenses in the context of social media applications, with a particular focus on key challenges and future research directions. In detail, we cover the literature on six key applications: 1) rumor detection; 2) satire detection; 3) clickbait and spam identification; 4) hate speech detection; 5) misinformation detection; and 6) sentiment analysis. We then highlight current and anticipated research questions and provide recommendations and directions for future work.

Original language: English
Pages (from-to): 3089-3108
Number of pages: 20
Journal: IEEE Transactions on Computational Social Systems
Volume: 10
Issue number: 6
DOIs
Publication status: Published - 1 Dec 2023

Keywords

  • Adversarial machine learning (AML)
  • Computational modeling
  • Fake news
  • Hate speech
  • Linguistics
  • Natural languages
  • Security
  • Social networking (online)
  • Taxonomy
  • Text analysis
  • machine learning (ML)
  • natural language processing (NLP)

