A pragmatic perspective on AI transparency at workplace

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, artificial intelligence (AI) systems have been widely used in different contexts and professions. However, as these systems have developed and become more complex, they have turned into black boxes that are difficult to interpret and explain. Therefore, spurred by wide media coverage of negative incidents involving AI, many scholars and practitioners have called for AI systems to be transparent and explainable. In this study, we examine transparency in AI-augmented settings, such as workplaces, and perform a novel analysis of the different jobs and tasks that can be augmented by AI. Using more than 1000 job descriptions and 20,000 tasks from the O*NET database, we analyze the level of transparency required when these tasks are augmented by AI. Our findings indicate that transparency requirements differ depending on the augmentation score and perceived risk category of each task. Furthermore, they suggest that it is important to be pragmatic about transparency, and they support the growing view that full transparency is impractical.
Original language: English
Pages (from-to): 189-200
Number of pages: 12
Journal: AI and Ethics
Publication status: Published - 30 Jan 2023
