
Telecommunications System & Management


Trust in Artificial Intelligence: What do we know and why is it important?

Abstract

Dr Steve Lockey

The rise of Artificial Intelligence (AI) in our society is becoming ubiquitous and undoubtedly holds much promise. However, AI has also been implicated in high-profile breaches of trust or ethical standards, and concerns have been raised over the use of AI in initiatives and technologies that could be inimical to society. Public trust and perceptions of AI trustworthiness underpin AI systems' social licence to operate, and a myriad of company, industry, governmental and intergovernmental reports have set out principles for ethical and trustworthy AI. To guide the responsible stewardship of AI into our society, a firm foundation of research on trust in AI is required to enable evidence-based policy and practice. However, to inform and guide future research, it is imperative to first take stock and understand what is already known about human trust in AI. As such, we undertake a review of 100 papers examining the relationship between trust and AI. We found a fragmented, disjointed and siloed literature with an empirical emphasis on experimentation and surveys relating to specific AI technologies. While findings suggest some convergence on the importance of explainability as a determinant of trust in AI technologies, there are still gaps between conceptual arguments and what has been examined empirically. We urge future research to take a more holistic approach and investigate how trust in different referents impacts attitudinal and behavioural intentions. Doing so will facilitate a more nuanced understanding of what it means to develop trustworthy AI.
