Explainable Recommendations and Calibrated Trust: Two Systematic User Errors

Mohammad Naiseh, Deniz Cemiloglu, Dena Al Thani, Nan Jiang, Raian Ali

Research output: Contribution to specialist publication › Article

Abstract

The increasing adoption of collaborative human-artificial intelligence decision-making tools has created a need to explain their recommendations so that collaboration is safe and effective. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.

Original language: English
Pages: 28-37
Number of pages: 10
Volume: 54
No.: 10
Specialist publication: Computer
Publication status: Published - Oct 2021
