Algorithmic Bias and Corporate Responsibility: How companies hide behind the false veil of the technological imperative

By: Kirsten Martin

Appeared In: Ethics of Data and Analytics

Publication Date: 2022

In this chapter from Ethics of Data and Analytics (Taylor & Francis, 2022), Martin argues that acknowledging the value-laden biases inscribed in algorithms' design allows us to identify the associated responsibility of the corporations that design, develop, and deploy them. Put another way, claiming that algorithms are neutral, or that the design decisions of computer scientists are neutral, obscures the morally important decisions that computer and data scientists make.

Martin focuses on the implications of technological imperative arguments: framing algorithms as evolving under their own inertia, as delivering more efficient and accurate decisions, and as lying outside the realm of critical examination or moral evaluation. She argues specifically that judging AI solely on efficiency, and pretending algorithms are inscrutable, produces a veil of the technological imperative that shields corporations from accountability for the value-laden decisions made in the design, development, and deployment of algorithms. Yet while there is always more to be researched and understood, we already know quite a lot about testing algorithms. Martin then outlines how the development of algorithms should be critically examined to elucidate the value-laden biases encoded in design and development; this moral examination of AI pierces the false veil of the technological imperative.

Martin, K. 2022. "Algorithmic Bias and Corporate Responsibility: How companies hide behind the false veil of the technological imperative." In Ethics of Data and Analytics. Taylor & Francis.
