Generative artificial intelligence (GenAI) and decision-making: Legal & ethical hurdles for implementation in mental health

Barry Solaiman*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This article argues that the use of generative artificial intelligence (GenAI) in mental health carries significant risks that should be assessed urgently. It recommends that guidelines for using GenAI in mental health care be established promptly. Currently, clinicians using chatbots without appropriate approval risk undermining legal protections for patients. This could harm patients and undermine the standards of the profession, eroding trust in an area where human involvement in decision-making is critical. To explore these concerns, the paper is divided into three parts. First, it examines the needs of patients in mental health care. Second, it explores the potential benefits of GenAI in mental health and highlights the risks its use poses to those patient needs. Third, it notes the ethical and legal concerns around data use and medical liability that require careful attention. The impact of the European Union's (EU) Artificial Intelligence Act (AI Act) is also considered. It will be seen that these laws are insufficient in the context of mental health. As such, the paper recommends that guidelines be developed to help resolve the existing legal gaps until codified rules are established.

Original language: English
Article number: 102028
Journal: International Journal of Law and Psychiatry
Volume: 97
Publication status: Published - 1 Nov 2024
Externally published: Yes

Keywords

  • Ethics
  • Generative artificial intelligence (GenAI)
  • Healthcare
  • Law
  • Mental health
  • Psychiatry
