Two faculty affiliates of the Notre Dame Technology Ethics Center (ND TEC) recently responded to a request for input from the White House Office of Science and Technology Policy (OSTP) concerning the “Public and Private Sector Uses of Biometric Technologies.” The request for information, or RFI, is part of OSTP’s broader efforts to develop a Bill of Rights for an Automated Society.
The response was coauthored by Elizabeth M. Renieris, director of policy and an associate professor of the practice at ND TEC, and Yong Suk Lee, assistant professor of technology, economy, and global affairs at Notre Dame’s Keough School of Global Affairs.
OSTP sought information on AI-enabled technologies that gather biometric data—people’s fingerprints, eye movements, voices, the way they walk, etc.—with the goal of understanding how these technologies are deployed for identity verification, identification of individuals, and inference of attributes, including mental and emotional states.
“The already widespread and rapidly proliferating use of biometric technologies across the public and private sectors raises a wide array of ethical concerns and challenges,” Renieris and Lee wrote in the introduction to their response. “As such, we are encouraged by the OSTP’s efforts to consider policies that can equitably harness the benefits of these technologies while providing effective and iterative safeguards against their anticipated abuses and harms.”
Respondents to the RFI were invited to provide information in any or all of six main areas. Renieris and Lee focused on two:
- Examples of how biometric technologies are currently used. In the United States, this includes tools for identity verification, contactless payments/checkout, employee monitoring/tracking, security access/control, and law enforcement applications, among others. Renieris and Lee also discussed how in China, “wearable devices such as ‘smart’ helmets, ‘smart’ bands, and ‘smart’ uniforms are increasingly being used by organizations in an attempt to detect individuals’ movements and whereabouts, as well as changes in their emotional states.”
- Exhibited and potential harms of biometric technologies. Here, Renieris and Lee covered the ethics of biometric identity verification systems—including the implications for the people whose personal data is used to train the technology in the first place—along with the shaky scientific foundations for many biometric use cases, data privacy and cybersecurity concerns, the risks of government surveillance, and the financial incentives to create identity verification systems in situations where none may be needed.
“While [facial recognition technology] has been a primary focal point of the conversation,” Renieris and Lee noted in the conclusion, “a wide array of other physical and behavioral biometric modalities present similar concerns … that risk commercializing all of our interactions, whether as citizens, employees, or consumers.”
Benjamin Larsen, a Ph.D. fellow at the Copenhagen Business School and the Chinese Academy of Sciences in Beijing, contributed research on use cases to Renieris and Lee’s submission.