Transparency and the Black Box Problem: Why We Do Not Trust AI

By: W.J. von Eschenbach

Appeared In: Philosophy & Technology

Publication Date: September 2021

How can we trust an unsupervised intelligent system to analyze data, or even to make decisions on our behalf, when its decision-making process remains opaque or unintelligible to us? This paper examines what has been called the “black box problem” in AI and asks whether such systems can satisfy commonly held criteria for trustworthiness. It analyzes attempts to develop “explainable AI” (XAI) that would make these systems more transparent, and concludes by suggesting that our focus should be on making the socio-technical context surrounding AI, rather than the artifact alone, worthy of trust.

von Eschenbach, W.J. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 34, 1607–1622 (2021). DOI: 10.1007/s13347-021-00477-0
