Rome Ethics of AI Workshop

University of Notre Dame Rome Global Gateway
Via Ostilia, 15
October 20 and 21, 2022

The European and North American Workshop on the Ethics of Artificial Intelligence is being convened by the Notre Dame Technology Ethics Center at the University of Notre Dame and the Peace Research Institute Oslo.

This day-and-a-half event, hosted at Notre Dame’s Rome Global Gateway facility, will bring together a small group of leading scholars and experts working on issues related to AI from various disciplines and perspectives. Discussions will explore the conceptual and ethical foundations as well as applications related to artificial intelligence and related technologies, including autonomous weapon systems.

The workshop will consist of several panels of experts, each of whom will give a short, informal presentation of about 30 minutes, followed by discussion among participants. The panels will focus on foundational issues, ethics of AI, ethics in AI, and AI military applications.

A full schedule along with more information about the participants is included below.

Accommodations

Hotel Capo D’Africa
Via Capo D’Africa, 54

Check in any time after 3:00 p.m. on Wednesday, Oct. 19

Thursday, October 20

All events will take place in Notre Dame’s Rome Global Gateway facility unless otherwise specified.

Speakers are asked to keep their remarks to a maximum of 30 minutes, which will be followed by 15 minutes of discussion.

9:00–9:30 AM: Registration/Coffee and Tea

In the Matthew and Joyce Walsh Aula

9:30–9:45 AM: Welcome Remarks

Warren von Eschenbach, Associate Director for Academic Affairs, Notre Dame Technology Ethics Center

9:45–11:15 AM: Session 1

Chair: Greg Reichberg, Research Professor, Peace Research Institute Oslo

“Ghosts in the Machine: The Relevance of Folk Conceptions of AI”
Ophelia Deroy (Ludwig Maximilian University Munich)

Abstract: Much of the ethics of AI rests on evidence and anticipation that citizens and users will hold autonomous AI systems responsible, or will eventually come to wonder about their rights and moral status. But what shapes people’s conceptions of AI, and how do those conceptions matter and relate to the ontological categories needed to ground ethical and legal frameworks? This talk will explore these questions and warn that folk conceptions of AI are fundamentally non-rational.

“Algorithm Risk Assessment and AI Auditing”
Jovana Davidovic (University of Iowa)

Abstract: Scholars, policymakers, and businesses are increasingly aware of the importance of AI governance. In this talk, I address why AI governance matters to society and why businesses need it. I start by defining AI governance and examining some of the key initiatives and organizational structures that have been built with an eye towards meaningful AI governance. I focus on the role of ethical risk assessments and impact assessments, as well as bias audits, in robust AI ethical risk management and AI governance. I also share the results of our team's most recent research on what works when it comes to AI governance and supplement those findings with a case study discussing the main lessons our team has drawn from performing ethical risk assessments and bias audits.

11:15–11:30 AM: Break

11:30 AM–1:00 PM: Session 2

Chair: Warren von Eschenbach, Associate Director for Academic Affairs, Notre Dame Technology Ethics Center

“Human-AI Interaction in Security-Related Decision-Making: Empirical Findings and Normative Consequences”
Markus Christen (University of Zurich)

Abstract: Decision-making in security-related contexts (armed conflict, law enforcement, and rescue) often involves limited information, stress, and time pressure for the human decision-maker involved. It is foreseeable that AI systems, both as decision-support systems and as (embodied) decision-making systems such as robots, will become increasingly important in such settings to overcome those limitations. However, given that human life is at stake in such situations, there is still consensus that moral responsibility for such decisions should remain with humans, which translates into the idea of “meaningful human control.” In this contribution, results of various studies from our research group will be presented to better understand human-AI interaction in security-related decision-making. We present results from survey studies (including a representative sample of the Swiss population) on the attribution of responsibility to human or AI decision-makers, and from behavioral experiments in which humans interact with human and/or AI systems that either advise a human decision-maker (in-the-loop and on-the-loop settings) or where humans have to predict the behavior of human-piloted or AI-piloted systems. In light of those results, the concept of “meaningful human control” of AI systems will be discussed from a normative point of view and a potential alternative—“controlling” humans through AI systems—will be sketched.

“Give Killer Robots a Chance?”
Ed Barrett (United States Naval Academy)

1:00–2:30 PM: Lunch and Keynote

Chair: Greg Reichberg, Research Professor, Peace Research Institute Oslo

Keynote
Paolo Benanti, Extraordinary Professor of Moral Theology, Bioethics, Neuroethics, and Ethics of Technologies, Pontifical Gregorian University

Abstract: Artificial intelligences permeate many aspects of daily living: doing an internet search, applying for a loan, looking for a job, and even getting to know a person through a platform are all activities that occur through the action of various artificial intelligence algorithms. These technologies, precisely because they are present in the background of existence, become almost invisible, and their true nature remains unknown to us. The goal of this colloquium is to make the action of these ubiquitous tools visible and understandable, and to ask how we can manage them without ousting humans from the decision-making process. It addresses the challenge of keeping humanity capable of control in an age when the machine becomes capable of standing in for human decisions. What can the machine do without human control? What decisions can it make? How can we manage the possible nefarious outcomes of this delegation? Most importantly, how can we ensure that the person always remains at the center of those processes vital to the survival of our species and to peaceful social coexistence?

2:30–4:00 PM: Session 3

Chair: Warren von Eschenbach, Associate Director for Academic Affairs, Notre Dame Technology Ethics Center

“In Conversation with AI: Aligning Language Models with Human Values”
Atoosa Kasirzadeh (University of Edinburgh)

Abstract: Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case of these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions, including: What does it mean to align conversational agents with human values? Which values should they be aligned with? And how can this be done? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can serve as mechanisms governing linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by examining some practical implications of our proposal for future research into the creation of aligned conversational agents.

“AI Bias in Practice: How to Manage It and How It Can Play a Key Role Towards the Common Good”
Georgina Curto Rex (University of Notre Dame)

Abstract: An important body of work has been dedicated to the mitigation of AI bias, in line with the Trustworthy AI principle of justice and fairness. However, an increasing number of voices highlight the need for recommendations on how to translate this ethical principle into practice. Based on a social sciences framework, I will present a methodology to assist multidisciplinary AI teams in managing fairness decision-making throughout the design of AI systems, with the active participation of the legitimate stakeholders. AI can also help us identify, track, and raise awareness of societal biases. In that respect, I will present the first set of empirical data evidencing bias against the poor, which constitutes an aggravating factor for gender, race, or age discrimination. Finally, AI can also be a useful tool to inform policymaking. Using AI simulation systems, I will present a project in which we aim to obtain evidence on whether lower levels of bias against the poor would contribute to achieving lower levels of poverty, in line with UN Sustainable Development Goal 1. Our ultimate goal is to provide recommendations for a new generation of poverty-reduction policies, which would focus not only on the redistribution of wealth but also on the mitigation of bias against the poor.

4:00–4:15 PM: Break

4:15–5:45 PM: Session 4

Chair: Greg Reichberg, Research Professor, Peace Research Institute Oslo

“‘Justice removed’? Assessing Prospects for Just War Reasoning with AI on the Battlefield”
Esther Reed (University of Exeter)

Abstract: This paper assesses prospects for (an ethic of) just war reasoning with AI on the battlefield. Mindful of the lack of progress at the CCW, it considers recent Holy See statements that prioritize the presumption against war over the presumption against injustice, in part due to AI-enabled technologies, before asking what justice on the battlefield requires. The paper considers the claim that it is impossible today to speak of a “just war” and/or of justice on the battlefield against the question of whether military force must yet play a part in the search for justice and peace, because the practical demand for peace requires judgment against wrongdoing. The argument is advanced that (i) those in political authority with the capacity to effect what “may yet be sufficiently necessary for the common good to be required for peace” (ST II-II, q.40, a.1) are required so to do; and (ii) that while new threat environments and weapons technologies variously intensify the moral challenges of restoring justice to just war reasoning—not least because of the realities of “persistent competition below the threshold of war”—the tradition is conceptually equipped for this challenge.

“Concatenating Artificial Intelligence with Other Military Technologies”
George Lucas (United States Naval Academy)

Abstract: A site visit with the CEO and engineers of a leading international armaments manufacturer reveals the emergent and wholly unanticipated legal and moral challenges that can arise when combining distinctive individual technologies to inadvertently create an entirely new class of weapon (in this instance, an “intelligent” and lethally armed autonomous anti-tank weapon for use in Ukraine) out of seemingly innocuous components. The resultant “slippery slope” underscores a healthy shared anxiety among many defense contractors and engineers who genuinely desire to assure themselves that their chosen activities are morally justifiable and specifically do not violate the extant provisions of international law or otherwise represent reckless or irresponsibly risky behavior. Efforts at governance of weapons development are impossible without the willing participation of such experts, whether working directly for national governments, their militaries, or private defense contractors. I apply the distinction between “black-letter” (hard) law and voluntary compliance (“soft law”) to the current legal debates regarding robotics, cyber, and other new areas of governance wherein the “canons of good governance” tend to militate against prospects for substantive “hard law” revisions, making voluntary consent and compliance a more reasonable path to success.

6:00 PM: Reception

TBD

7:30 PM: Dinner

At Hotel Capo D’Africa

Friday, October 21

All events will take place in Notre Dame’s Rome Global Gateway facility unless otherwise specified.

Speakers are asked to keep their remarks to a maximum of 30 minutes, which will be followed by 15 minutes of discussion.

8:15–8:45 AM: Coffee and Tea

8:45–11:00 AM: Session 5

Chair: Greg Reichberg, Research Professor, Peace Research Institute Oslo

“Dawn of the Moral Dead: Artificial Intelligence and the Case for Moral Zombies”
Sigurd Hovd (Peace Research Institute Oslo)

Abstract: A philosophical zombie is a physical duplicate of a conscious subject, functionally identical to this subject (endowed with identical behavioral capacities) but lacking any characteristics associated with qualitative consciousness or sentience. The logical conceivability of such beings, and the metaphysical position that may plausibly be inferred from it, has been, and remains, a topic of much contention in contemporary analytic philosophy of mind. Recently, this creature of philosophical thought experiments has also entered debates on artificial moral agency. For if we realize that to think of algorithms as potential artificial moral agents is to think of a kind of moral zombie, we should also realize that a true artificial moral agent is a fundamentally incoherent notion. So Carissa Véliz has recently argued, pointing to the fact that the concepts of moral autonomy and accountability, arguably essential features of what we think it means to be a moral agent, rely centrally on our conception of sentience. Responding to this argument, I will point to what I see as an asymmetric set of theoretical motivations guiding accounts of artificial moral agency versus accounts of original human moral agency. Corresponding to these different theoretical motivations are different sets of theoretical constraints, which, I will argue, should make the notion of a moral zombie far more conceivable than it may first appear.

“Can AI Systems Behave Themselves? Towards a Genuine Machine Ethics”
Thomas Powers (University of Delaware)

Abstract: Discussions of AI ethics have lately drawn journalistic and hence popular interest. Popular media have highlighted the dangers—some real, some imagined—of increasingly autonomous and intelligent machines that might escape human control. These dangers point to allegedly unresolved (or unresolvable) moral dilemmas for autonomous vehicles, displaced human agency and the loss of human control, and even the weaponization of many theaters of war by means of AI computers. While journalists, technologists, and even some governmental bodies have become very interested in the ethics of AI, researchers have made scant progress in putting ethics into AI. The time is ripe to do some foundational work on the concept of ethical automation and the logic of control architectures. I will argue that our best chance to address the risks of AI systems is by controlling them with programmed ethical routines that are rule-based. Such forms of programmed control are represented in an older paradigm of “symbolic” AI and expert systems and do not presuppose anything like consciousness or judgment in the AI system. The rival paradigm of “sub-symbolic” AI relies on analyzing large data sets and applying machine learning techniques. While machine learning has shown tremendous technological advances, such as in facial recognition (Facebook) and natural language processing (Siri), and these advances can support ethics in AI, I will argue that they are unlikely to produce satisfactory results in the area of ethical control.

11:00–11:15 AM: Break

11:15 AM–12:45 PM: Session 6

Chair: Warren von Eschenbach, Associate Director for Academic Affairs, Notre Dame Technology Ethics Center

TBD
Annette Zimmermann (University of Wisconsin)

“Moral Robots? How to Make AI Agents Fit for Human Societies”
Matthias Scheutz (Tufts University)

Abstract: Artificial intelligence is different from other technologies in that it enables and creates machines that can perceive the world and act on it autonomously. We are, for the first time in human history, on the verge of creating sentient machines that could significantly improve our lives and better human societies. Yet AI also poses dangers that are ours to mitigate as we develop ever more complex AI systems. In this presentation, I will argue that AI-enabled systems, in particular autonomous robots, must have moral competence: they need to be aware of human social and moral norms, be able to follow these norms, and justify their decisions and actions with recourse to normative principles in ways that humans understand. Throughout the presentation I will give examples from our work on AI robots and human-robot interaction to demonstrate how we envision ethical autonomous robots.

12:45–1:00 PM: Break

1:00–1:30 PM: Plenary Wrap-Up

Warren von Eschenbach, Associate Director for Academic Affairs, Notre Dame Technology Ethics Center

Greg Reichberg, Research Professor, Peace Research Institute Oslo

1:30 PM: Box Lunch

Saturday, October 22

Noon: Hotel Checkout