Adversarial Machine Learning Attack on Modulation Classification

Muhammad Usama, Muhammad Asim, Junaid Qadir, Ala Al-Fuqaha, Muhammad Ali Imran

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    14 Citations (Scopus)

    Abstract

    Modulation classification is an important component of cognitive self-driving networks. Recently, many ML-based modulation classification methods have been proposed. We evaluated the robustness of nine ML-based modulation classifiers against the powerful Carlini-Wagner (C-W) attack and showed that current ML-based modulation classifiers provide no deterrence against adversarial ML examples. To the best of our knowledge, we are the first to report results of applying the C-W attack to create adversarial examples against various ML models for modulation classification.
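
    The C-W attack mentioned in the abstract crafts an adversarial example by minimizing a perturbation norm plus a misclassification loss, f(x') = max(max_{i≠t} Z(x')_i − Z(x')_t, −κ), over the perturbation. The sketch below illustrates that optimization on a toy linear "modulation classifier"; the classifier, its dimensions, and all hyperparameters (c, lr, kappa) are illustrative assumptions, not the paper's actual models or settings.

    ```python
    import numpy as np

    # Toy stand-in for an ML modulation classifier: logits = W @ x.
    # (Assumed setup for illustration, not the paper's classifiers.)
    rng = np.random.default_rng(0)
    n_features, n_classes = 16, 4            # e.g. flattened I/Q samples, 4 modulation types
    W = rng.normal(size=(n_classes, n_features))

    def logits(x):
        return W @ x

    def cw_attack(x, target, c=1.0, lr=0.01, steps=500, kappa=0.5):
        """Simplified C-W L2 attack: minimize ||delta||^2 + c * f(x + delta),
        where f(x') = max(max_{i != target} Z(x')_i - Z(x')_target, -kappa)."""
        delta = np.zeros_like(x)
        for _ in range(steps):
            z = logits(x + delta)
            z_other = np.delete(z, target)
            f = max(np.max(z_other) - z[target], -kappa)
            grad = 2 * delta                 # gradient of the L2 term
            if f > -kappa:                   # subgradient of the hinge term is active
                i = int(np.argmax(z_other))
                i = i if i < target else i + 1   # undo the index shift from np.delete
                grad = grad + c * (W[i] - W[target])
            delta -= lr * grad
        return x + delta

    x = rng.normal(size=n_features)
    orig = int(np.argmax(logits(x)))
    target = (orig + 1) % n_classes          # targeted attack toward the next class
    x_adv = cw_attack(x, target)
    print(int(np.argmax(logits(x_adv))))     # prediction after the attack
    print(float(np.linalg.norm(x_adv - x)))  # size of the perturbation
    ```

    The full C-W attack additionally searches over the trade-off constant c and uses a change-of-variables to keep inputs in range; this sketch keeps only the core objective to show how a small perturbation can flip the predicted class.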

    Original language: English
    Title of host publication: 2019 UK/China Emerging Technologies, UCET 2019
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9781728127972
    Publication status: Published - Aug 2019
    Event: 2019 UK/China Emerging Technologies, UCET 2019 - Glasgow, Scotland, United Kingdom
    Duration: 21 Aug 2019 - 22 Aug 2019

    Publication series

    Name: 2019 UK/China Emerging Technologies, UCET 2019

    Conference

    Conference: 2019 UK/China Emerging Technologies, UCET 2019
    Country/Territory: United Kingdom
    City: Glasgow, Scotland
    Period: 21/08/19 - 22/08/19

    Keywords

    • Adversarial machine learning
    • Modulation classification
