AI, Explainability, and Safeguarding Patient Safety in Europe

Barry Solaiman, Mark G. Bloom

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Wearables are becoming more user-friendly, computationally powerful, and readily available. For regulators, the challenge is to ensure that the ethical boundaries of their use as AI diagnostic tools are respected while not stifling innovation. The laws and regulations in this field are in their infancy. This has led the EU to develop “Ethics Guidelines for Trustworthy Artificial Intelligence.” This chapter contends that the principles relied upon are idealistic and perforated by key flaws concerning implementation in the health sector. There are limitations surrounding the concepts of human agency and oversight, the responsibility and accountability for harms caused by AI, and the lack of a centralized system of health data. More detailed analyses are required, recognizing that even experts within the AI field struggle to understand certain processes. It is critical to develop specific regulations dealing with the scientific and technical challenges of AI. These changes aim to create more robust guidelines relevant to the use of AI in wearables for health management and delivery in the EU and beyond.
Original language: English
Title of host publication: The Future of Medical Device Regulation
Number of pages: 12
Publication status: Published - 31 Mar 2022
Externally published: Yes
