Towards secure private and trustworthy human-centric embedded machine learning: An emotion-aware facial recognition case study

Muhammad Atif Butt, Adnan Qayyum, Hassan Ali, Ala Al-Fuqaha, Junaid Qadir*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

The use of artificial intelligence (AI) at the edge is transforming every aspect of the lives of human beings, from scheduling daily activities to personalized shopping recommendations. Since the success of AI is ultimately to be measured in terms of how it benefits human beings, and since the data driving deep learning-based edge AI algorithms are intricately and intimately tied to humans, it is important to look at these AI technologies through a human-centric lens. However, despite the significant impact of AI design on human interests, the security and trustworthiness of edge AI applications are neither foolproof nor ethical; moreover, social norms are often ignored in the design, implementation, and deployment of edge AI systems. In this paper, we make the following two contributions. Firstly, we analyze the application of edge AI through a human-centric perspective. More specifically, we present a pipeline to develop human-centric embedded machine learning (HC-EML) applications leveraging a generic human-centric AI (HCAI) framework. Alongside, we also discuss the privacy, trustworthiness, robustness, and security aspects of HC-EML applications, with an insider look at their challenges and possible solutions along the way. Secondly, to illustrate the gravity of these issues, we present a case study on the task of human facial emotion recognition (FER) based on the AffectNet dataset, where we analyze the effects of widely used input quantization on the security, robustness, fairness, and trustworthiness of an EML model. We find that input quantization partially degrades the efficacy of adversarial and backdoor attacks at the cost of a slight decrease in accuracy over clean inputs. By analyzing the explanations generated by SHAP, we identify that the decision of a FER model is largely influenced by features such as the eyes, alar crease, lips, and jaws. Additionally, we note that input quantization is notably biased against dark-skinned faces, and we hypothesize that the low-contrast features of dark-skinned faces may be responsible for the observed trends. We conclude with precautionary remarks and guidelines for future researchers.

(c) 2022 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
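For readers who want a concrete picture of the intervention examined in the case study, the sketch below shows input quantization implemented as a simple bit-depth reduction applied to a face image before it reaches the FER model. The 4-bit setting, image shape, and helper name are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def quantize_input(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Reduce an image normalized to [0, 1] to 2**bits intensity levels per channel.

    Hypothetical helper: the paper's exact quantization settings may differ.
    """
    levels = (2 ** bits) - 1
    return np.round(image * levels) / levels

# Stand-in for a normalized RGB face crop (the shape is an assumption, not from the paper).
face = np.random.rand(224, 224, 3).astype(np.float32)
face_4bit = quantize_input(face, bits=4)

# face_4bit would replace the raw input at inference time; the case study reports that
# this kind of preprocessing partially blunts adversarial and backdoor attacks at the
# cost of a slight drop in accuracy on clean inputs.
```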
Original language: English
Article number: 103058
Number of pages: 17
Journal: Computers and Security
Volume: 125
DOIs
Publication status: Published - Feb 2023

Keywords

  • Adversarial machine learning
  • Embedded machine learning
  • Human-centered artificial intelligence
  • Privacy-awareness
  • Robustness
  • Security
  • Tiny machine learning
  • Trustworthiness

Projects
  • EX-QNRF-NPRPS-51: Development of Human-Centric Robust ML-Driven IoT Smart Services

    Ghaly, M. (Principal Investigator), Al Fuqaha, A. (Lead Principal Investigator), Assistant-1, R. (Research Assistant), Assistant-2, R. (Research Assistant), Assistant-3, R. (Research Assistant), Associate-1, R. (Research Associate), Bou-Harb, D. E. (Principal Investigator), Zubair, D. M. (Principal Investigator), Filali, P. F. (Principal Investigator) & Qadir, P. J. (Principal Investigator)

    15/03/21 - 15/09/23

    Project: Applied Research
