TY - GEN
T1 - Defending Emotional Privacy with Adversarial Machine Learning for Social Good
AU - Al-Maliki, Shawqi
AU - Abdallah, Mohamed
AU - Qadir, Junaid
AU - Al-Fuqaha, Ala
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Protecting the privacy of personal information, including emotions, is essential, and organizations must comply with the relevant regulations. Unfortunately, some organizations do not respect these regulations or lack transparency, leaving human privacy at risk. These privacy violations often occur when unauthorized organizations misuse machine learning (ML) technology, such as facial expression recognition (FER) systems. Researchers and practitioners must therefore take action and use ML technology for social good to protect human privacy. One emerging research area that can help address such privacy violations is the use of adversarial ML for social good. Evasion attacks, which are normally used to fool ML systems, can be repurposed to prevent misused ML technology, such as ML-based FER, from recognizing true emotions. By leveraging adversarial ML for social good, we can prevent organizations from misusing ML technology, particularly FER systems, to violate human privacy, and thereby protect individuals' personal and emotional privacy. In this work, we propose an approach called Chaining of Adversarial ML Attacks (CAA) to create a robust attack that fools the misused technology and prevents it from detecting true emotions. To validate the proposed approach, we conduct extensive experiments using various evaluation metrics and baselines. Our results show that CAA contributes significantly to emotional privacy preservation, with the fool rate increasing in proportion to the chaining length. In our experiments, the fool rate increases by 48% at each subsequent stage of the chaining of targeted attacks (CTA) while keeping the perturbations imperceptible (ϵ = 0.0001).
AB - Protecting the privacy of personal information, including emotions, is essential, and organizations must comply with the relevant regulations. Unfortunately, some organizations do not respect these regulations or lack transparency, leaving human privacy at risk. These privacy violations often occur when unauthorized organizations misuse machine learning (ML) technology, such as facial expression recognition (FER) systems. Researchers and practitioners must therefore take action and use ML technology for social good to protect human privacy. One emerging research area that can help address such privacy violations is the use of adversarial ML for social good. Evasion attacks, which are normally used to fool ML systems, can be repurposed to prevent misused ML technology, such as ML-based FER, from recognizing true emotions. By leveraging adversarial ML for social good, we can prevent organizations from misusing ML technology, particularly FER systems, to violate human privacy, and thereby protect individuals' personal and emotional privacy. In this work, we propose an approach called Chaining of Adversarial ML Attacks (CAA) to create a robust attack that fools the misused technology and prevents it from detecting true emotions. To validate the proposed approach, we conduct extensive experiments using various evaluation metrics and baselines. Our results show that CAA contributes significantly to emotional privacy preservation, with the fool rate increasing in proportion to the chaining length. In our experiments, the fool rate increases by 48% at each subsequent stage of the chaining of targeted attacks (CTA) while keeping the perturbations imperceptible (ϵ = 0.0001).
KW - Emotional-Privacy Preservation
KW - Evasion Attacks for Good
KW - Robust Adversarial ML attacks
UR - http://www.scopus.com/inward/record.url?scp=85167659403&partnerID=8YFLogxK
U2 - 10.1109/IWCMC58020.2023.10182780
DO - 10.1109/IWCMC58020.2023.10182780
M3 - Conference contribution
AN - SCOPUS:85167659403
T3 - 2023 International Wireless Communications and Mobile Computing, IWCMC 2023
SP - 345
EP - 350
BT - 2023 International Wireless Communications and Mobile Computing, IWCMC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE International Wireless Communications and Mobile Computing Conference, IWCMC 2023
Y2 - 19 June 2023 through 23 June 2023
ER -
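Note: the abstract above describes chaining targeted evasion attacks against an ML-based FER classifier. The sketch below is only a minimal illustration of that general technique, assuming a PyTorch setup: it composes targeted FGSM-style steps in sequence with a small ϵ. The stand-in model, the target emotion labels, and the chain length are placeholders, and the code does not reproduce the authors' actual CAA/CTA implementation.

    # Illustrative sketch only: chains several targeted FGSM-style evasion steps
    # against a stand-in FER classifier. NOT the paper's CAA/CTA code; the model,
    # target labels, epsilon, and chain length below are assumptions.
    import torch
    import torch.nn as nn

    def targeted_fgsm_step(model, x, target, epsilon):
        """One targeted FGSM step: nudge x toward the target class."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), target)
        loss.backward()
        # Subtract the gradient sign to decrease the loss w.r.t. the target class.
        return (x - epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    def chain_targeted_attacks(model, x, targets, epsilon=1e-4):
        """Apply targeted steps in sequence, one per target label in the chain."""
        for t in targets:
            x = targeted_fgsm_step(model, x, t, epsilon)
        return x

    if __name__ == "__main__":
        # Stand-in 7-class "FER" classifier on 48x48 grayscale faces (assumption).
        model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 7))
        x = torch.rand(1, 1, 48, 48)                    # placeholder face image
        chain = [torch.tensor([i]) for i in (3, 5, 1)]  # hypothetical target emotions
        x_adv = chain_targeted_attacks(model, x, chain)
        print(model(x_adv).argmax(dim=1))               # predicted (fooled) emotion

In this sketch, lengthening the `chain` list corresponds to adding chaining stages, which is the quantity the abstract reports as driving the fool rate; the reported 48% per-stage gain and ϵ = 0.0001 come from the paper's own experiments, not from this illustration.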