Review of “Algorithmic bias: on the implicit biases of social technology”

Article Author: Gabbrielle M. Johnson, New York University
Appeared In: Synthese
Read It: “Algorithmic bias: on the implicit biases of social technology”

TLDR

Gabbrielle Johnson’s recent article on the value-ladenness of algorithms makes a rigorous, compelling, and clear argument against the fabled “objectivity” of computer science.

Summary

Gabbrielle Johnson argues that computer scientists, in their role as computer scientists, make value judgments. When developers make decisions about the goal of an algorithm, the criteria for success, what counts as a mistake, what data is important, and so on, they “establish the value-laden nature of automated decision making programs.” Johnson’s argument directly challenges the notion that machine learning, algorithm development, and AI are value-free, neutral, “just math,” objective, or capable of delivering answers from a view from nowhere. In particular, just as scientific explanations require scientists to make non-evidential assumptions to accomplish the aims of science, so too algorithms can never be value-free, because the use of one data analysis method over another depends on the aims of the program and the goals of the programmer, which are value-laden judgments. Those aims and goals in turn depend on the social and political context of use and practice, and above all on the negative consequences of “getting things wrong.”
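To make this concrete, here is a minimal Python sketch (our illustration, not from the article; the score and costs are hypothetical) of how “what counts as a mistake” gets written into code: the threshold at which a program acts is fixed by the relative costs its developers assign to the two kinds of error, and assigning those costs is a value judgment.

```python
# Minimal sketch: a classifier's decision threshold is determined by
# developer-assigned costs for the two kinds of error. All numbers are
# hypothetical, chosen only to illustrate the point.

def decide(score, cost_false_positive, cost_false_negative):
    # Act on the positive label when the expected cost of a miss exceeds
    # the expected cost of a false alarm: the standard cost-sensitive
    # threshold for a probability estimate.
    threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
    return score >= threshold

score = 0.35  # the model's estimated probability for one individual

# Treating both errors as equally bad gives a 0.5 threshold: no action.
print(decide(score, cost_false_positive=1, cost_false_negative=1))  # False

# Weighting misses five times heavier drops the threshold to ~0.17:
# the very same score now triggers action.
print(decide(score, cost_false_positive=1, cost_false_negative=5))  # True
```

Nothing in the data settles which cost assignment is correct; that is settled by the aims of the program and its context of use.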

Key contributions

Johnson illuminates her key ideas by analyzing a well-known and much-discussed case of algorithmic bias: COMPAS, a risk-assessment algorithm used by judges across the United States to produce recidivism risk scores, whose biases against Black defendants and in favor of white defendants were first exposed by ProPublica. She argues that it is a mistake to expect a unified set of conditions that all risk-assessment algorithms must satisfy, because “there can be no algorithm for building algorithms” (p. 10). Equally important, which fairness criterion we adopt in applying the algorithm (i.e., whether we emphasize accuracy, equality of false-positive error rates, or equality of false-negative error rates) depends on the goals and aims of the particular context and of our criminal justice system.
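To see how these criteria come apart, consider a minimal Python sketch with hypothetical confusion-matrix counts for two groups (the numbers are invented for illustration and are not ProPublica’s COMPAS figures):

```python
# Hypothetical confusion-matrix counts for two groups; invented numbers,
# not COMPAS data. tp/fp/tn/fn = true/false positives and negatives.
groups = {
    "A": {"tp": 45, "fp": 25, "tn": 20, "fn": 10},
    "B": {"tp": 20, "fp": 10, "tn": 55, "fn": 15},
}

for name, c in groups.items():
    total = sum(c.values())
    accuracy = (c["tp"] + c["tn"]) / total
    # False-positive rate: flagged high-risk among those who did not reoffend.
    fpr = c["fp"] / (c["fp"] + c["tn"])
    # False-negative rate: rated low-risk among those who did reoffend.
    fnr = c["fn"] / (c["fn"] + c["tp"])
    print(f"group {name}: accuracy={accuracy:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

On these made-up numbers, group A is wrongly flagged far more often (FPR 0.56 vs. 0.15) while group B’s reoffenders are missed more often (FNR 0.43 vs. 0.18), and overall accuracy differs as well. Deciding which of these disparities matters most is exactly the value judgment Johnson describes, and when base rates differ across groups they generally cannot all be equalized at once.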

Johnson also rightly names the goals of science (and of programs), such as accuracy, simplicity, and efficiency, as values, and offers equally defensible, competing superordinate values (novelty, applicability to human needs, complexity of interaction). She provides compelling, approachable examples where our “usual” values, including simplicity and efficiency, have led us astray, such as doctors basing dosage recommendations for the sleep medicine Ambien on studies involving only men, thereby neglecting the different, adverse impact those doses would have on women. One is reminded of recent articles in computer science examining the values encoded in machine learning research and datasets (Scheuerman, Hanna, and Denton 2021; Birhane et al. 2021).
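The Ambien case also illustrates a pattern that a few lines of Python make vivid (all values hypothetical, invented for illustration): a recommendation calibrated on an unrepresentative sample quietly encodes a decision about whose data counts as evidence.

```python
import statistics

# Hypothetical drug-clearance measurements in arbitrary units; the
# numbers are invented to show the pattern, not real pharmacology.
men = [10.2, 9.8, 10.5, 10.1, 9.9]
women = [6.1, 5.8, 6.4, 6.0, 5.9]  # left unmeasured by the original studies

study_sample = men  # the value-laden choice: whose data counts as evidence
print(f"dose scale calibrated to the study sample: {statistics.mean(study_sample):.1f}")
print(f"scale that would fit the women's data:     {statistics.mean(women):.1f}")
```

No line of this code expresses any intent to harm; the bias enters through an apparently innocuous sampling decision, which is precisely Johnson’s point about implicit bias in technology.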

What makes this article unique/impactful?

  1. Johnson leverages the forceful idea of “Dragnet objectivity” to capture the recurring claim that machine learning, and AI in general, is “just the facts.”
  2. She applies feminist philosophers of science’s critiques of scientific objectivity to current discussions of algorithmic bias.
  3. Her discussion of COMPAS highlights the necessity of making value judgments even when aiming for accuracy: whose accuracy, which types of error are preferred, and what are the moral implications of differential accuracy and errors?
  4. She suggests that, because we cannot expect a universal set of standards determining which values algorithmic decision procedures should maximize, we need to be transparent about the values that are chosen and should consider “the implementation of institutional mechanisms for the ethical oversight for machine learning programs akin to Institutional Review Boards (IRB)” (p. 26). This approach is in keeping with recent work in computer science that takes a critical approach to values in research; a sketch of what such an explicit values declaration might look like follows this list.
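Johnson’s paper does not prescribe a format for this transparency. As a purely hypothetical sketch (our construction; the class and field names are invented, in the spirit of model-card documentation rather than anything proposed in the article), a values declaration shipped alongside a model might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a machine-readable "values declaration" that
# names the value judgments behind a deployed model. Nothing here is
# from Johnson's paper; the fields are illustrative assumptions.
@dataclass
class ValuesDeclaration:
    system: str
    goal: str
    fairness_criterion: str  # the chosen value, made explicit
    rejected_criteria: list[str] = field(default_factory=list)
    known_tradeoffs: list[str] = field(default_factory=list)

declaration = ValuesDeclaration(
    system="recidivism risk scorer (hypothetical)",
    goal="inform pretrial supervision decisions",
    fairness_criterion="equalized false-positive rates across groups",
    rejected_criteria=["calibration within groups"],
    known_tradeoffs=["lower aggregate accuracy than the unconstrained model"],
)
print(declaration)
```

An IRB-style oversight body of the kind Johnson gestures at could then review and sign off on precisely these fields before deployment.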

References

Scheuerman, Morgan Klaus, Alex Hanna, and Emily Denton. "Do datasets have politics? Disciplinary values in computer vision dataset development." Proceedings of the ACM on Human-Computer Interaction 5, no. CSCW2 (2021): 1-37.

Birhane, Abeba, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. "The values encoded in machine learning research." arXiv preprint arXiv:2106.15590 (2021).
