TEC Talks Episode Transcripts

TEC Talks podcast transcripts are published to this page as episodes are released. Transcripts of older episodes are accessible via the link below. You can find more information about all the episodes on the podcast’s homepage.

While we make every effort to produce highly reliable transcripts, if you want to quote from one of our episodes, particularly the words of our guests, please listen to the audio whenever possible. Thank you.

Transcripts for Episodes 1–10

14. Our Data Privacy and the Issue With Inferences (November 30, 2022)

Guest: Ignacio Cofone (McGill University)

Transcript

 

Kirsten Martin  0:01 
(voiceover) Hey, it's Kirsten. Just wanted to let you know that this is our last episode of 2022, but we'll be back early next year with more TEC Talks and more great guests. Thank you for listening, and happy holidays. (end voiceover)

Kirsten Martin  0:15
Welcome to TEC Talks, a podcast about the impact of technology on humanity. I'm Kirsten Martin, the director of the Notre Dame Technology Ethics Center, or what we like to call ND TEC. In these discussions, we talk about an important idea, paper, article, discovery in tech ethics, and today, I'm so happy to be joined by Ignacio Cofone. Ignacio is an assistant professor and Canada Research Chair in Artificial Intelligence Law and Data Governance at McGill University's Faculty of Law. His research focuses on privacy harms and on algorithmic decision-making to explore how the law should adapt to technological and social change. He was previously a research fellow at the NYU Information Law Institute, a resident fellow at the Yale Law School Information Society Project, and a legal adviser to the city of Buenos Aires. He has a joint Ph.D. from Erasmus University in Rotterdam and Hamburg University, and a J.S.D. from Yale Law School, which is a doctor of science of law, which I did not realize. But that was very on-point.

So today, we're gonna take a deeper dive into your Illinois Law Review article "Privacy Standing." And I thought we could start with this idea--so what I liked about this article was you really kind of slow down what we're talking about when we talk about a loss of privacy. So we sometimes just say, That's a privacy loss. And we just kind of flippantly talk about this, and we try to disentangle what this means. And you kind of ask us to slow down as to, like, each of the piece parts about what's going on with a loss of privacy, privacy harms, and privacy injuries. And what I thought would be helpful is to find out what--usually there's a problem that you saw, either an incident that occurred or a problem within the theory that you were trying to solve. Like, what was the problem that you saw going on that you were like, Okay, I need to figure out how to try to solve this.

Ignacio Cofone  2:00  
Yeah, that's actually a good question. Well, first of all, thank you so much for having me here; I'm very excited to talk about the article with you. So I think the problem that I was trying to solve with this article is a consistent line of terrible Supreme Court jurisprudence (Kirsten laughs) regarding privacy harms. And how can one navigate it cohesively, and how can one comply with especially federal courts with constitutional requirements, and with Supreme Court requirements of dubious constitutional value, and still give some recognition to people's privacy injuries?

Kirsten Martin  2:34
So you're saying, like, the people, the courts were really struggling with how to identify or make decisions around privacy when they were trying to identify a privacy loss. Was a harm required? You know, what type of injury counts to say that you have standing? Maybe it would help to kind of back up and say, What--because the article is actually called "Privacy Standing," which requires harm. But maybe it would be worth backing up and saying, Why is privacy standing important?

Ignacio Cofone  3:02
Privacy standing is important because it determines to which extent people will get compensated for the harms that they suffer regarding their privacy, and regarding a bunch of other consequential harms that are linked to their privacy, such as their reputations, such as being discriminated [against], such as financial harm. And it is important because the extent to which we allow it also will determine the disincentives that corporations have to create those harms or to not have care to prevent those harms in absence of regulatory enforcement. And regulatory agencies are very important in privacy, but they have limited powers, and they have limited resources and limited attention. So giving people the right to sue when they were harmed is important for protecting our data rights.

Kirsten Martin  3:47
And so to be able to say that you have standing to sue--to say, You hurt me, I want something back--you have to actually prove, show that you had an injury, that the organization caused that injury, and that it's fixable in some way, that there's some redressability. Is that right?

Ignacio Cofone  4:04
Yeah, that's right. If we get into legalese--

Kirsten Martin  4:06
No, yeah, yeah.

Ignacio Cofone  4:07 
No, no, but if we get into legalese, we usually would call [it] "standing" only when it's for statutory privacy, and when it's tort law, we don't usually call it "standing."

Kirsten Martin  4:15
Okay.

Ignacio Cofone  4:16
But the principle of being, of importance of the right to sue applies to both, and the requirement of harm that you're bringing up, that's a really insightful point because courts have been really adamant about it in statutory law. But in tort law, they recognize that if someone invaded your privacy, then you must have suffered a privacy harm and the evidentiary requirements are different. And there's a lot of room to learn from tort law as to how courts apply statutory law.

Kirsten Martin  4:42
Okay, okay. So for the layman, because like the--and this gets a little bit ahead when you talk about the different types of, like, injuries that you can feel from privacy and the different types of harms--but the idea is that sometimes you can just have a harm just from the loss of privacy; you don't have to say I had a reputational harm, you don't have to say any of the kind of later-on harms that you get from the loss of privacy. It's the loss of privacy itself that can be considered a harm. Is that fair?

Ignacio Cofone  5:11 
That's fair. Or at least that's what the law should do.

Kirsten Martin  5:14
That's what the law should do. (laughs) Fair enough, fair enough, fair enough. So one of the first things you define is a loss of privacy, which seems very subtle when you define it, but it is actually super important. And so how, can you talk about how you got to the idea of, like, what constitutes a loss of privacy or a diminished--it's on a continuum, but, like, having less privacy?

Ignacio Cofone  5:35
Yeah, I got to this definition of loss of privacy by thinking about inferences. So many harms that people go through happened not because of collected information or because of information that they shared, but because of information that was inferred about them. And most of the binary definitions of privacy, such as privacy as secrecy, don't capture that. And they don't capture the fact that privacy may be lost in different gradations and that if I share some information with one person, that doesn't mean that it is not, that it is not private anymore because that's to keep expectations of privacy towards other people. And say that I tell you something, and then you tell everyone [at] Notre Dame what I told you, it is not the case that I didn't lose any privacy because the information wasn't private anymore; it may very well be the case that I lost privacy, at least descriptively. Then we can get into a conversation about whether it harmed me. But that thing that seems minor is important to avoid dismissing people's injuries.

Kirsten Martin  6:34
Right. And so the casual way that people have talked about loss of privacy is actually an idea around secrecy, which you mentioned, or sometimes control. But the same idea goes towards, if you hand over privacy to someone or an organization, disclose it in some way--either typing it into a box on their webpage or their app, telling it to somebody, it could be all sorts of ways--that that's actually where the privacy loss occurred. You know, in, like, handing over your data. And you're saying that actually, that what they do with the data can actually constitute a new privacy loss, you know, that when they hand over the data, there's still a privacy expectation about how that will be used or shared, and the loss of privacy depends on whether somebody else actually gains "probabilistic information about the observed." Which gets away from this idea of whether they actually collected no data about you. So you could have not shared any additional data to me, but I figured out more information about you, and that can still constitute a privacy loss. Is that--

Ignacio Cofone  7:37
Exactly. Yeah, that's exactly right. And so many privacy harms happen not because of information that is collected from me, but collected from people similar to me, or people--or information that is inferred about me from other data that I disclosed.

Kirsten Martin  7:51
Right, yeah, right. And so I think this is where attempts to fix the handoff of information--whether it's, you know, giving me rights to the data, you know, having me own my data, you know, in some sort of like web 3.0 idea of making it harder and harder for you to get my data--doesn't actually address in many ways the majority of the privacy issues that are going on right now with the way that data is aggregated and inferences are drawn, in that I don't have to share any data for a company to figure out a lot of information about me. Is that--

Ignacio Cofone  8:28
Yeah. Data ownership would even make the problem worse because privacy law already focuses too much on the moment of collection and not enough on inferences and processing. And data ownership would shift all the regulatory burden onto the moment of collection, which worsens all the problems that we know in privacy law, such as asymmetry of power, asymmetry of information, the importance of inferences, and the wide scope that corporations have to do whatever they want with our data after they obtain it.

Kirsten Martin  8:58
Oh, good, good, good. Yes, right. So not only is it not solving all of the problem, but it can be a distraction and get companies to only focus on this one thing, which they're too focused on anyway, which is this handoff of information. And it could mistakenly make people think that their privacy is being respected when it's not. You know what I mean? Like, so they could actually have this quote "solution," a privacy solution, that really does absolutely nothing to stopping companies from knowing a lot of inferential data about you, information about you, that doesn't rely on you sharing any of that information. That's a great point, you know, that it's not only a distraction, it's not only [not] addressing the problem, it's actually--it can be making it worse by focusing on it.

Ignacio Cofone  9:40
And that's why corporations are always so happy to say, Yes, you should own your data (Kirsten laughs), you should have control over your data, and politicians funded by them are always very happy to say, Yes, we should totally give people control over their data and have them own their data. Because it won't really help them.

Kirsten Martin  9:53
Right. Yeah, and this is where, this is your own words, so I'm just going to quote back to you, but you said--the cases that you go through, and you go through cases that are great, and you say "highlight a major policy aspect of privacy harm decisions: it is rare that a piece of disclosed personal information is the information that produces the harm to the person." Like, so it's often not the issue of what was disclosed, it's what they figured out about you based on that or other information that was there. Okay, so then, so we have this idea of the loss of privacy that's a little bit different. It's a slight shift, but it's kind of known in the literature that this occurs. And so then you move from privacy loss to how to figure out a privacy harm--like, pulling those two concepts apart and defining one and defining separately. So can you talk a little bit about what you talk about with privacy harms?

Ignacio Cofone  10:41
I think it is worth separating privacy loss from privacy harm so that we have a descriptive element that correctly captures inferences that may be over-inclusive as to what we want to protect, and then we narrow it down with the values that we believe that privacy protects, such as autonomy or expectations. So I think the main difference between a privacy loss and a privacy harm revolve[s] around social expectations and social norms. So a privacy loss that is a privacy harm is one that happens contrary to our social rules about what is acceptable and what is not acceptable to collect or to share about other people and that interferes with the different values that the literature has discussed privacy protects, such as intimacy, autonomy, and people's well-being.

Kirsten Martin  11:27
Mm hmm. And you also make a move to separate out privacy versus consequential harms.

Ignacio Cofone  11:33
Yeah, I think that is really important, and it is an aspect that courts often don't see. So oftentimes, when courts require some type of harm to provide standing, what they mean is that they require a harm that is not privacy. So they ask, like, Has your reputation been harmed? Have your finances been harmed? Have you been physically hurt? Have you been discriminated [against]? And a lot of those things are really important, and all those are values that protecting our personal information does feed into, and they are interests that our privacy protects, but it may very well be the case that someone breached your privacy, and none of that happened. And the reason why it is practically important to distinguish that is that oftentimes when those things happen, they happen really far down the road. And once they happen, we cannot point to a causal link to sue. So you might think, for example, a data breach, a large data breach, like the one Equifax had. If you're a victim of the Equifax breach, and then you sue, then the judge will tell you--if you get one of the judges that require harm--the judge will tell you, Well, have you suffered identity theft or credit card fraud? And then you say actually, No, not yet. But that's something that can happen to you in five or 10 years. And when it happens, you won't be able to trace it back to the Equifax hacks. So now you get no remedy because it didn't happen yet, and then you get no remedy because you cannot establish the link.

Kirsten Martin  12:56
So you're stuck. And I think this was a great move--it does a lot. So moving, saying that there's a privacy harm, which is the harm from the loss of privacy, and that can be around dignity, autonomy, all the reasons why, the normative reasons why we think privacy is important. And we justify those in lots of different ways for individual[s] in the society and why it's important. And moving the word "privacy harm" off of the consequential harm. So these are the downstream effects from a loss of privacy. And so this, this could be, I'm unable to get a job, you know, my reputation has been harmed in this way, I was denied a mortgage--like, things that we can put in a spreadsheet, and we can sum up, and we can say, This is actually the value of these consequential harms. And the move is wise because we previously, in calling them privacy harms, we actually placed too high of a burden to be able to quantify the harm of actual privacy harms, if that makes sense. So we tended, when we called them privacy harms for the last 20 years or more--I mean, like, for a long, long time, we would call them privacy harms, and people did work like Ryan Calo about objective versus subjective privacy harms and trying to tease out what was going on under the general rubric of privacy harms. But what this move does is it allows us to talk about consequential harms as a different category with a different name, and so we're less likely to hold them up against one another to say, Well, you have a lot of these funny things that really aren't quantifiable in the courts, or to an organization to say, Well, what harm is there? What's the privacy harm? When some were quantified and some were not, it was a tough battle. (laughs) And in calling them consequential harms, it really is helpful for organizations, too, to be able to say, like, Look, there's a privacy harm in and of itself; we can also talk about consequential harms. But this is considered, you know, harmful, and we don't even have to put a number on it.

Ignacio Cofone  14:49
Yeah, and some institutions like the Knight Institute, for example, have done really productive litigation with mixed results by telling judges, because all these consequential harms materialize later on and then we can't prove the costs, what we should take into account is the risk of those consequential harms. And that's way better than the status quo. However, it has two limitations. The first limitation is that it's a larger departure from the existing legal requirements than just recognizing that privacy harm also exists and is different from those, because then we'd have an actual harm to tell federal courts that they need to recognize. And second, as you were saying, oftentimes proving the risk is really difficult.

Kirsten Martin  15:30
Right.

Ignacio Cofone  15:30
And if we have a recognition of the privacy harm, then we could remedy that, and then remedy the actual consequential harms downstream.

Kirsten Martin  15:37
I just think that, it's a move that you make within the article, and I will use it a lot because I think that it's really helpful to kind of cordon them off as separate and consequential, but not exactly--but leave privacy harms for the label of actually the harm from the privacy loss. Okay. And then you make a third move. (Ignacio laughs) So it was the privacy loss move, the privacy harm move, and you make a third move from privacy harm to actionable privacy injury. So can you talk about that kind of subtle difference?

Ignacio Cofone  16:05
Yeah. So that's mainly to satisfy the law that federal courts have to follow from the Supreme Court.

Kirsten Martin  16:11
Okay.

Ignacio Cofone  16:11
So if we didn't have Article III constitutional requirements, as delineated by the Supreme Court, so if we were at a state court, for example, then possibly we could say that privacy loss plus privacy harm is enough to sue. But we do have a requirement of showing, in most legal contexts, that whatever privacy harm accrued has to be the consequence of a legal wrong to be actionable.

Kirsten Martin  16:37
Got it.

Ignacio Cofone  16:37
There are different ways to define that legal wrong. One way to define that legal wrong is the breach of a statute. But judges could get more creative in satisfying that legal wrong by interpreting a tort more expansively, particularly if they're state courts.

Kirsten Martin  16:53
Okay. Got it. I think that the issue of privacy harm, I don't think a lot of people realize how important it is to movement around regulations or taking cases to kind of show some sort of harm that went along with the privacy loss, you know, previously. That it's really difficult, especially with before, when they were just focused on contracts and not really open to talking about torts and harms, and would really, really overly focus on this handoff of information and whether or not there was some sort of breach of what was promised. You know, versus understanding that, like, harms can come regardless of that handoff of information when no information was [inaudible] gathered about us.

Ignacio Cofone  17:36
Yeah, there is some historical, and particularly in the US, notice and choice has such--

Kirsten Martin  17:42
Yeah.

Ignacio Cofone  17:43
Has had such a prevalence in the way that we think about privacy. Now, we think it's almost silly to think about privacy in terms of, Well, companies have to be really clear about what they do, and as long as they're clear, they can do whatever they want. And there are movements to depart from that. They're really important principles, particularly in state legislation that have been enacted. But I think even the most robust versions of the legislation that we have, such as the CCPA, have room for improvement if they move further away from recognizing that people kind of waive things through consent and have a larger focus on actually reducing the harms that happen.

Kirsten Martin  18:22
Well, and I think your point about ownership of data as being a distraction, I mean, notice and choice in many ways is the initial distraction (Ignacio laughs), like where it actually did a lot of damage by--I mean, I always jokingly say, you know, we told them to just focus on consent and notices, and businesses did it for decades. I mean, that's what we told them to do, and then, and we told them, you know, whatever you do, don't make a falsehood in that notice. And so they just made no claims whatsoever in their notices. And so, in some ways, they were extremely responsive, you know, to the regulators in the United States, in that they did whatever the FTC and others wanted, you know, in that realm.

Ignacio Cofone  19:02
There are some amazing cases, like the Snapchat cases, that got sanctioned by the FTC for over-promising, and then as a consequence, had actually minimal changes in the way to handle the data and large changes in how they described that they were handling the data.

Kirsten Martin  19:15
(laughing) Right, right, exactly. That's what I mean, like they'll do, they're very responsive. In some ways, it's why I always say, like, whatever we decide as a society is important to regulate or court cases to bring and adjudicate, businesses will respond. And in some ways, like, this idea that they're gonna go under if we ever do X, I always am like, Oh, they will figure this out. They figured this out in the past, and they actually are extremely responsive. And in some ways, that's a great example of like, Oh, the notice was too clear? (laughs)

Ignacio Cofone  19:52
(laughs) Yeah, exactly, yeah, yeah, yeah.

Kirsten Martin  19:53
I can fix that, I can actually make it obscure and actually, like, lessen the amount that I'm promising you that I would ever provide in terms of privacy protection.

Ignacio Cofone  20:03
Yeah, particularly when the regulations address not the business model itself but aim at reducing the undesirable consequences from the business model such as privacy harm, then businesses can adjust. It would be more difficult to tell a business, Oh, it should just all stop collating data and making inferences because then it would be [inaudible] the data. But if you say something like, Well, you may be liable more often for the harms that you produce to people when you're making money, then that is the way that the law deals with so many other areas where companies are making money and producing unintended harms.

Kirsten Martin  20:36 
Right, right. And in some ways, and this is where businesses are, you know, just not, what we were talking about beforehand, just not keeping the collected data and making inferences about it, and we don't have a method of dealing with that right now. And even though there's a loss of privacy, they have new knowledge about us that we did not give them, you know, [n]or did we think about them having, and there could be a harm from it--or there is a harm from a privacy loss, and there could be actually an injury as well. And we are not thinking about--or we're thinking about it, but we don't, we're not used to thinking about how to do that in the law. You know what I mean? Like, so we're struggling.

Ignacio Cofone  21:13
And the ways that we have to think about it, like the purpose limitation principle and the data minimization principle, are so onerous that then we end up not taking them very seriously. So the purpose limitation principle, for example, would be great to address some of that, to say that, Well, they have to have it for a certain purpose, and then they have to acquire consent again, whatever that means, and it's not that difficult to have a different purpose. But then, because companies often don't know what purpose they're going to give the data later on, we end up accepting very wide purposes, like "marketing." So we end [up] in this weird dichotomy [inaudible]. If we took the purpose limitation principle, which is actually protected, very seriously, then the cost would be enormous. So we take it not seriously at all, and then it's almost [inaudible] having it, while other solutions that try not to address the business goal directly but then discern consequences from it, like a more robust liability system, would be better in that sense.

Kirsten Martin  22:05
Yeah, that's a great point. I remember there's, if you ever read back into like the 1960s, you know, economics of information stuff that all this is built upon--like, so all the notice and choice is kind of built upon these, like, three people in Chicago, you know, in the '60s. (Ignacio laughs) But they assumed that we would never buy and sell data, that it would always be--they literally assumed it would always be cheaper to always just ask Kirsten again for the data.

Ignacio Cofone  22:30
Yeah.

Kirsten Martin  22:31
And they never imagined a data market. They literally said, like, Well, no one would ever buy and sell it, it would always be better just to go back to the individual and ask them for that data again. Because they thought that the only place that you could actually hold onto that much data is the government. So this idea of, like, asking permission and that being, like, the way that you can make sure that you trust whoever has your data, and that's the fix, was literally premised on an old marketplace, you know, that just doesn't exist anymore.

Ignacio Cofone  23:02
Yeah, absolutely. Even when I speak with economists today, sometimes they don't immediately see how is this different from a normal standard form contract, like the one that you may have with your internet service provider where you don't read the contract. And then it's like, Well, you know, the harms are unknown, the set of potential negative consequences to you are unknown, the risk is unknown. And it's different than when you know exactly how much you're paying for internet, and then you may not read the small print and other clauses.

Kirsten Martin  23:27 
Well, when you have a hammer, everything's a nail, so (both laugh)--everything comes back to a contract and transaction costs. So I don't know. But even within those, you can show that it's not working, like, if you actually look and show, like, how the market works. And I, what I think is the future, and you're really hammering it here, and I know you have a book coming out is this "problem"--and I'm putting that in air quotes, even though it's not visual--the problem of inferences drawn about us. And I just think that that, like, talking about that is the issue. You know what I mean? Like, and getting in front of it and figuring out what shifts we need to do in the way that we even think about knowledge about us is kind of the point. And I'll just use a quote from, this is from you, so I'm not making this up. But what you said is, "Most often, information acquires its harmful characteristic through the process of aggregating different pieces of personal information and inferring new information out of them. In other words, harmful information is rarely collected information and is frequently inferred information produced by aggregating different pieces of seemingly inoffensive collected information." And I think that's key, is that a lot of times the data that is drawn seems inconsequential, you know, it's this exhaust that it's hard to point to the collection as actually the problem.

Ignacio Cofone  24:39
Sometimes even the inferences made about you are not on the basis of only the information collected from you, but information collected from others. So consent is absolutely irrelevant.

Kirsten Martin  24:48
Right, right. In fact, they might only need a few pieces of inconsequential information that I think they have and they should have, like, let's just say my age and my gender and where I went to school or something like that. But they know a ton about a million people that are just like me, and so then they're able to draw inferences about me. You go on to say, "When we share something about ourselves, we simply do not know what other information is out there for malicious actors to aggregate and use." And then, "Thus, if no remedy is provided for each illegally disclosed piece of information that contributes to the aggregation, but then the aggregation that can produce harm is invisible to the law, no remedy will ever be provided." And this is like this sticking point that we're at, where the collection was, what we're going to call unproblematic. You know what I mean? Like, or maybe it didn't even occur, there was no collection of data. And yet the aggregation of that data is literally invisible to the law because we don't see it as something that we should care about. The entire process, there's no remedy. Because it doesn't see it as a problem. And yet that, as you point out earlier, that's actually where the majority of the harms are occurring, is from this knowledge that's generated about us. So I think, I really appreciated the article just in kind of very clearly articulating and really walking through, like, why this is a problem right now and our lack of a remedy for it. Which is important because sometimes you can say, Well, we have a solution for that, this already exists. But that's just not the case right now.

Ignacio Cofone  26:15
Yeah. Even laws that are coming out recently, and there's some recognition of inferences in California, but other than California right now, even laws that are coming out now like the proposed bill in Canada that are quite robust and have quite a robust set of data rights, don't explicitly recognize inferences. Huge gap.

Kirsten Martin  26:34
Hopefully people will start doing more studies, also, like, writing about it because I really think that it like is an all-hands-on-deck type of situation. But before--I won't take any more of your time, and I would love to have you back when your book is out because I know it'll cover how to address inferences and stuff like that.

Ignacio Cofone  26:49
I'd be very happy to, thank you.

Kirsten Martin  26:50
Is there anyone--we always ask if there's anybody that we should be paying attention to, someone that you're like, Oh, I can't wait to see what they write next, young scholars in law or outside, any discipline?

Ignacio Cofone  27:02
Yeah, that's a tough question because there's so many good ones.

Kirsten Martin  27:04
I know.

Ignacio Cofone  27:05
So if I think about people in law, which are the people that I know better, I really like--well, first of all, Salome Viljoen's work, you already spoke with her. And two other scholars that I always pay attention to what they write are Rebecca Wexler and Margot Kaminski.

Kirsten Martin  27:20
Oh, that's great. Yeah, those are great suggestions. Yeah, it's so funny, especially, I always say it's a young person's game. But like, there's always, there are a lot of young scholars in the area just doing such interesting work from all different areas. And I always love reading a new thing, and I'm like, Oh, I never thought about that, that's so interesting. (Ignacio laughs) Like, they just think of, like, such different takes on things, and it's a great community, I have to say, in that respect, in that there's so much kind of work to do that it seems very collaborative, in that people just want more good ideas out, and they're not trying to scrape and, you know, no one feels impinged upon in their ideas, you know, when a new idea comes out.

Ignacio Cofone  27:59
Yeah, we're really lucky. Both the junior and the senior people working in this field have been so generous in feedback and in helping out with ideas.

Kirsten Martin  28:07
Oh, I agree. I mean, when I first started going to the Privacy Law Scholars Conference, I just, I was kind of like, it was an eye-opener to me how you could be really constructive and helpful and come out with new ideas all the time. And you'd have these people asking such interesting questions to kind of push your thinking, and it was just very collaborative and constructive.

Ignacio Cofone  28:24
Well, I remember the first PLSC that I went [to], I think it was 2014, I had a really productive conversation with you about the paper that I was trying to write. (both laugh)

Kirsten Martin  28:31
Oh, good. Yeah. Yeah, yeah, well, that's how we met, at PLSC. I mean, I always joke they should have like a, you know, people that come out of PLSC and, like, start writing together and do things together because, like, there's all these things that come out of it. They could, like, have a tree that shows all the people that come out of it, especially across disciplines. Well, thank you so much. I really appreciate you coming on and talking and stuff like that. So I know it's super busy.

Ignacio Cofone  28:57
Thanks. This was fun.

Kirsten Martin  28:59
(voiceover) TEC Talks is a production of the Notre Dame Technology Ethics Center. For more, visit techethics.nd.edu.

13. AI, Anti-Discrimination Law, and Your (Artificial) Immutability (November 16, 2022)

Guest: Sandra Wachter (University of Oxford)

Transcript

 

Kirsten Martin  0:03  
Welcome to TEC Talks, a podcast about the impact of technology on humanity. I'm Kirsten Martin, the director of the Notre Dame Technology Ethics Center, or what we like to call ND TEC. In these discussions, we discuss an important idea, paper, article, discovery in tech ethics, and today, I'm so happy to be joined by Sandra Wachter. Sandra is a professor of technology and regulation at the Oxford Internet Institute at the University of Oxford, where she researches the legal and ethical implications of AI, big data, and robotics as well as internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness as well as governmental surveillance, predictive policing, human rights online, and health and medical law. She studied law at the University of Vienna and has a Ph.D. in technology, intellectual property, and democracy. So we take this, the idea of this series is to take one idea or case and examine the larger implications for the field of technology ethics, and today, we're gonna do a deeper dive into your article in the Tulane Law Review called "The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-discrimination Law." So I thought we could start with kind of the, I'm putting this in air quotes, the "problem" that you were trying to address. And when you--that's usually how these things start is, you see a problem, and then you're trying to figure out how to address it. So what was that problem that you saw that you're trying to address with this paper?

Sandra Wachter  1:24
Yes, so I'm very much interested in the question of algorithmic accountability and how we can make those systems work for us rather than against us. And I think most of us will be aware that one of the issues that always comes up with AI is that they can be discriminatory. And I think many, many fantastic people have written on this. One of the reasons why this is the case is because, you know, it's collecting historical data, and therefore it's transporting inequalities from the past into the future. And so very often you have data that links back to protected attributes, such as ethnicity or gender, sexual orientation, religion, age, those kinds of things. And so there's very interesting research going on, and I have done my fair share in this area as well. But then I came to realize that another thing is happening as well, that algorithms are not just grouping us similar to protected attributes; they're also grouping us according to groups that fall outside of non-discrimination law. So for example, you could be applying for a job, and if you were to use a browser such as Internet Explorer or Safari, you're more likely to get rejected than if you used Chrome or Firefox, for example.

Kirsten Martin  2:44
Mm hmm.

Sandra Wachter  2:45
And so that's quite interesting because obviously, browser usage or Safari user group is not something that is protected under the law. But nonetheless, it's holding you back in the same way. And so I started to realize that AI is grouping us in all those different types of groups, making very important decisions about us, yet the group that I'm part of has no protection under the law. And so I got interested in this topic.

Kirsten Martin  3:13
That's great. Right. And I think, because sometimes, and especially in the more technical work, we see people really focusing on protected classes as a shorthand for lots of things--like, if we could fix this protected class, but the problem is always that there's a lot of things going on around grouping. And you mentioned two groups that are, that you see, these algorithmic groups, and I didn't know if you could explain--I thought that was interesting how you categorized these two types of algorithmic groups that you were worried about, or that we should be thinking about, and if you could explain those. I think one is, like, non-protected, and then the other, incomprehensible.

Sandra Wachter  3:48
Yes, exactly. It makes sense to categorize them because it just shows how deep the problem actually goes. So there are algorithmic groups that are created by an algorithm that fall outside non-discrimination law because they're not seen as a protected attribute. This could be, I'm not getting a loan because I have a dog, dog ownership as a group.

Kirsten Martin  4:11
Right.

Sandra Wachter  4:11
It could be Safari user, it could be fast-scrollers, it could be football players, sad teenagers, video gamers. Those are all groups that are already being used to make decisions about people, but those groups don't have any protection. And the other side, there are algorithmically created groups where we don't even have human understanding or human language to describe what's going on, yet they are being used to make decisions about you. So it's, you know, electronic signals that you send out by your computer where we don't have a human concept to describe what that actually is. And so protection for that is even harder because I don't even have a social concept to describe what's going on.

Kirsten Martin  4:57
Right. And so I think, and this is where I thought, what I really liked about the paper--so it was kind of, you could see this process of, there seems to be something wrong that we're using these groupings to make a decision about someone. They're not fitting into these classes that we've decided are protected, and so before jumping to saying, Why should we protect them? You take the step and say, What were we trying to do with discrimination law and protected classes? Like, what was the point, like, what work was that doing for us, and could that actually apply in that situation? And so it just--anyone who's interested, I would say, there's a great explainer in some ways, like, a small literature review of, What is the theory behind and the reason why we have discrimination law? That really allows you to come away and say, Okay, this is the work that discrimination law is doing for us. And there are places that this might actually apply aside--is that a fair point?--aside from protected classes.

Sandra Wachter  5:56
Yes, absolutely. I think you have actually just summarized it better than I did in the paper. (both laugh) That's absolutely correct, yes. I did try to find out what makes a group worthy of protection and came to the conclusion that those either nonsensical groups or non-protected groups just don't really fit into that concept. And I started thinking about, What's the underlying purpose? What does the law actually want for us? What would society look like if the law got its way? And so at the very, very basic level, right, the idea is that the law wants you to be independent and self-sustaining. The law wants you to be your own person, to steer your path in life and make your luck, basically. 

Kirsten Martin  6:40
Mm hmm.

Sandra Wachter  6:40
And that includes various things. That means you should have the ability to get an education. That means you should have the ability to train and learn for a profession, get a job at some point. It means that you're entitled to health care. It means that you should have enough food and shelter. And so if you think about those sectors, those are really needed to be an independent, self-sustaining person. And so the law has decided that the protected attributes are protected because we humans have used them as reasons to pull people back from getting education, from getting health care, from realizing their life goals, right? And it just so happened that those were the criteria that we decided upon, why you're holding people back, because we have biased beliefs about gender and ethnicity. Well, algorithms [are] also holding us back, but just using different criteria that we would never think of. We would never think of holding somebody back because of their browser usage. But the effect is the same, right? I'm being held back based on an attribute that I actually really don't have any play in acquiring; it is something that is assigned to me without me being aware of it.

Kirsten Martin  7:52
Right. And this is where I think it has, like, a general application. So it has--it's definitely for the law, and kinda saying like, We need to start thinking about this a little bit differently. And I think we sometimes talk, like in business or just in general, about, Well, is this discriminatory? Is this a discriminatory behavior? And it immediately goes to protected classes: Well, it's not discrimination if it's not protected classes. As if they have to go together. And what you do is say, Hold on, like, what was the end goal? What was the law envisioning for us when it was written? What world did it want to create? And is there another way that we're actually undermining that world in some way that we need to actually shore this up in some way? And I like, this is a good quote is when you say--this is not my words, these are yours: "All [of] these views have one thing in common: discriminatory behaviors carry with them an assumption of moral superiority." This means that discriminatory behavior "demeans an individual, considers them of lower moral value, promotes negative stereotypes and prejudice, treats them with disrespect, and has a negative impact on their deliberate freedom, autonomy, dignity, or life choices or benefits." And so, this is the idea of why discriminatory behavior is wrong; you know, so this is, like, the wrongness of discriminatory behavior. And it doesn't ever mention protected classes in that. I mean, there's no, you know, only according to religion, national origin, ethnicity, right? It doesn't say anything about that. It just has a general vision of, like, behavior that would be discriminatory, and then--and why people deserve protection based on, you're gonna say is arbitrary groupings that we might have.

So I would say, in general, that was a great summary of kind of why we have discrimination law, and then why we intended to do it for protected classes. And I thought maybe you could take a moment to talk about kind of your step from discrimination--this is what it does, this is why we avoid it, this is the vision for the future--and then why it should apply to these groupings. Or just, not discrimination law, they don't have to be protected classes because we couldn't enumerate them all. But why we need to start thinking about them as needing protection.

Sandra Wachter  10:02
Yes. So I actually borrowed a little bit from the literature in terms of terminology. So I chose the word "artificial immutability." And so that's borrowed from one idea that certain protected attributes are protected because you had no hand in acquiring them, and you wouldn't be able to change them. That includes, for example, age. That includes ethnicity. And so the law always thinks you should only be [judged] based on actions that you do rather than things that you have no control over. And so I borrowed that because, you know, we would usually think about immutable characteristics as something that is--has somewhat of a quote-unquote "natural source," maybe, right, that has been given to you. And so I thought about, it's artificially created immutability.

Kirsten Martin  10:55
Right.

Sandra Wachter  10:56
So it's not in the sense that I was maybe born with it, but it was assigned to me. And there are various ways of making it de facto immutable. So for example, if I don't know what criteria are being used to make hiring decisions, then they are immutable, de facto immutable, because I can't actually control any of those things, right? If you're using facial recognition software, right, to decide if I should get a loan, that doesn't mean I can move my retina differently, even though that's a decision criteria, right? Or the sweat that I have on my face. And so it means that the criteria that are being used are assigned to me, but they're immutable to me; I cannot actually change it. And traditionally, that's a problem. Because you know, if you say you have to have good grades to get into university, that means, you know, you're gonna study, you're gonna prepare well, and then you're gonna have good grades, and then you go into a good college or get a good job, right? There's control over that criteria. With moving your retina, that's never going to be possible, I'm never going to be able to control my heartbeat in the same way. And so algorithms just create a new type of immutability that has the same effect, as in I cannot control it, but it's just artificially assigned to me rather than given by birth.

Kirsten Martin  12:16
And that was, like, a kind of "Aha" in the paper. I mean, there was an "Aha" around discrimination. But I liked the idea of, we all know that protected classes, the reason why we don't like people to be discriminated against for protected classes, it might be historical reasons, but it's also because you can't change it. It's immutable. It's the idea that it's an attribute of you, and it shouldn't be, and we should be choosing you based on something else that you had some control over. But what your point was, and you do a great job of, like, identifying also why something should be, is considered immutable. Like, if we don't know about it, we can't change it, if something is kind of sticky in some way. And why would that be any different in the law or how we think about things that we shouldn't be discriminated against or treated differently based on something that we haven't been given a chance to change if we could?

Sandra Wachter  13:04
Yes, exactly. 

Kirsten Martin  13:05
And so could you say a little bit more about what makes something immutable?

Sandra Wachter  13:09
Yes. So I came up with a couple of criteria that make something, in my opinion, immutable. So it could either have to do with opacity, as in, I don't really know what the decision criteria are therefore I have no control over. It might be that the decision criteria are too vague. So I could tell you, you know, Your friends on Facebook have an impact on whether you get a loan; that doesn't really give me the ability to know who's a good friend. So again, I have really no control. How can I put my best foot forward if I don't know what's a good Facebook friend? Stability, right? If the criteria are constantly changing, I have no control over that process. I know good grades get me into university, but what if that changes, and at some point, it's my dog, and the year after it's my browser, and the year after it's my retina. So I can't actually prepare and have control over the path in my life if that's constantly changing. Involuntariness is another one. Again, face recognition software that measures how your retina moves, that I can't control the sweat on your face, your heartbeat. And the last one is if there's no social concept for the words. So that comes back to those two groups, like, the dog owners where there's a word for it, but it's not protected, and then those human un-understandable groups where there is no social concept. So if I don't even have a human understanding of what that group actually means, how can I put a good loan application--

Kirsten Martin  14:32
Yeah, right, yeah.

Sandra Wachter  14:33
--[based on] electronic signals. And so those are very different types of immutability than we would usually think when we think about age, for example. But that's why I'm saying they're artificially created because they're in effect the same, they're de facto immutable, because I have no control over them.

Kirsten Martin  14:48
And it also goes towards, like, if you don't, if you can't--I don't want to use the word "explain"--the idea of not having a word for "why." So if we either, if we don't know why you were denied a job, it's as fuzzy as anything, and so how are we--there's some immutable attribute of you that we have now made a decision based on. But we can't explain it, so how would you, by definition, ever be able to change it in the future or get better at it if we're not able to explain it to you? And I think that the other thing that's so interesting is, these are constructed, right? Like you're saying, they're artificially created, they're constructed by the organization that either designs, develops, or deploys the AI system. And so in some ways, they're creating their own problems. I mean, they create their own problems with regular old discrimination, too. But you can almost see each of those being like, How could we possibly try to make better decisions not on immutable attributes of people? How could we not create an immutable attribute in our design of AI? So in one way, it speaks to regulators to say whether or not--should this be something that we're looking at for regulation if people are making, organizations are making decisions based on immutable attributes? Because we've said in the past that we don't like that. But then you could ask organizations, Why are you creating an immutable attribute? You know what I mean? And this is how you're making that immutable attribute. Is that fair? Like it could have design implications and regulatory implications?

Sandra Wachter  16:13
Yes, I think it could have both. And I think it will be in the interest of both sides to actually dig deeper. Because I think, you know, figuring out why an immutable characteristic is relevant to a decision at hand is just normatively something that is valuable, but also just something that could be very interesting for a company to know. You know, is dog ownership really a good predictor for repaying a loan or not? (Kirsten laughs) It would be a good thing to know more about that, right? And unfortunately, very little is done at the moment to find out what the causal relationship between the data points is. Because very often, it's just good enough to rely on correlation rather than causation. This is not to say that immutable characteristics are always problematic.

Kirsten Martin  16:55
Right, right.

Sandra Wachter  16:56
I would just apply a similar idea to how we deal with immutable characteristics in traditional settings. So for example, on the face, prima facie, yes, immutable characteristics is always going to be a problem because you cannot control it. But there are exemptions if you can justify it. For example, age is an immutable characteristic. But we have laws against child labor that are based on age. And that's a good thing, right?

Kirsten Martin  17:22
Right, right.

Sandra Wachter  17:23
Other characteristics, we have, for example, that you have to have perfect eyesight when you're a pilot--immutable characteristic, but there is a reason that this is acceptable, for example, right? We have schools for gifted children on a particular IQ that you can't change, right? And so there are situations in society where it is acceptable to use immutable characteristics, but you need to explain why it is acceptable to use a characteristic that you cannot change. And when you can do that, then I'm happy for you to use an artificial immutable characteristic if you can tell me why it's acceptable. I cannot change it. (laughs)

Kirsten Martin  18:03
Right. Yeah, no, I think it's, I think it's brilliant, really. I mean, I just think it--it's a great example of trying to say, you know, Why do we care about these protected classes or this one category of things? And even though it looks differently, it looks different right now--it seems like, Oh, it's not a protected class--well, is it doing the same work as a protected class? And should our same thinking about the law around protected classes apply to the way we think about artificial immutable traits? And so that's what I really liked about it, is that the decomposing, you know, what we're doing with discrimination law, and then kind of reapplying it to another area to say, Look, this is also similar to these protected classes in certain ways. And then by doing it this way, I have to say, just by--especially enumerating the ways it's immutable--it's super helpful just the way that you said it because it would allow someone to say, How am I contributing to this being immutable? And, Can I justify any one of these types of immutability that I'm making an attribute? And so I think that it's really in that way prescriptive, as well--not only by saying, you know, This is what we should be doing, but saying, If you don't want immutable attributes, don't make them. You know what I mean? Or justify it, you know? Or justify why, and maybe there's a good business case. And then the last bit is, of course, if you can't articulate why you're making a decision, you know, it falls into the fifth group, you know, which is, it's immutable. You know, for all intents and purposes, and so that's not where you want to be. And so it gives organizations an incentive to be able to articulate why they're making a decision, which I always like. (laughs) Like so that's, that's a good thing.

And I'll end with, you have a great quote, I'll just read it from, it's towards the end of the paper, and you say: "Algorithms, as opposed to humans, do not make decisions based on prejudices or the idea of inferior worth, but in the same way they prevent people from accessing goods and services. They do this by creating groups that effectively act as immutable characteristics that individuals are unable to change and have no control over. As a result, individuals lose the ability to exercise their rights and freedoms and gain access to goods and services. Therefore, the harm is the same as that originally imagined in an anti-discrimination law, only the mode and process of bringing about [that harm] are different." And I just thought that's a great encapsulation of the article and kind of why it's important because a lot of times these algorithms are actually being used on rights and freedoms and taking away access to goods and services and important decisions like employment, credit, getting into schools. So I loved it.

Sandra Wachter  20:40
Thank you so much. I'm very glad you liked it.

Kirsten Martin  20:43
Oh, yeah, yeah, no, and it's, the thing is, I think, you know, it's in Tulane Law Review, so it's obviously in a law review, and it's based on anti-discrimination law and discrimination law. But I really think that the implications are much broader, the way that people need to start thinking about the groupings that they're creating. And that's where I think it has, like, kind of a general application to tech ethics. And speaking of which, are there, we always like to end with a wrap-up of, is there anyone kind of in the area of tech ethics broadly--I mean, a lot of us read across disciplines--so is there anyone in the area of tech ethics that you think, Oh, I can't wait for them to write again or see what they're presenting, and anyone we should keep an eye on?

Sandra Wachter  21:21
Yes, I would highly recommend to keep an eye on Dr. Amy Orben, who is now at Cambridge. She's a psychologist, and she's interested in the questions of online harm--online harms broadly, but particularly on social media on young people and mental health issues. And so she is doing extremely interesting, insightful, peer-reviewed empirical work to actually figure out what the harm is that people experience when they are experiencing life online.

Kirsten Martin  21:54
Oh, that's great. That's great. And super important because I don't think they're--they're obviously not capturing, "they" being platforms, aren't capturing all the costs that they create when they're making money online. So it's helpful to have someone enumerate exactly what those harms might be. That's a great recommendation. Well, gosh, Sandra, thank you so much. I really appreciate you coming on and talking just briefly about your paper. Thanks so much for taking the time.

Sandra Wachter  22:14
Yeah, thank you so much for the invitation. Such a pleasure to be here. Anytime again.

Kirsten Martin  22:18
Oh, I will take you up on it.

(voiceover) TEC Talks is a production of the Notre Dame Technology Ethics Center. For more, visit techethics.nd.edu.

12. Algorithmic Fairness is More Than a Math Problem (October 19, 2022)

Guest: Ben Green (University of Michigan)

Transcript

 

Kirsten Martin  0:03
Welcome to TEC Talks, a podcast about the impact of technology on humanity. I'm Kirsten Martin, the director of the Notre Dame Technology Ethics Center, or what we like to call ND TEC. In these discussions, we talk about an important idea, paper, article, discovery in tech ethics, and today, I'm so happy to be joined by Ben Green. Ben is an assistant professor in the Ford School at the University of Michigan and a postdoctoral scholar in the University of Michigan Society of Fellows. He has a Ph.D. in applied mathematics with a secondary field in science, technology, and society from Harvard. He studies social and political impacts of government algorithms with a focus on algorithmic fairness, smart cities, and the criminal justice system. So today, we're going to take a deeper dive into this article in Philosophy and Technology, which is forthcoming, called "Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness." And I'll just say, I think my favorite part is Figure 2--but we'll get to that in a little bit--because I think it really is a nice visual as to the role of formal fairness. But, well, I don't want to skip too far ahead, but it's great. And so I thought this is, in general, talking about this move from formal fairness or mathematical fairness to something more substantive, and so I thought maybe you could start with the problems that you identify rightly about the dominant approaches to fairness that rel[y] on mathematical modeling, and that gets us into this impossibility idea of fairness, where we see ideas of fairness as incommensurable, before kind of going to your solution.

Ben Green  1:26
Yeah. Well, thanks so much for inviting me on. And I think that's a great place to start. I'm excited to have this conversation about this paper. Yeah, you know, I think this paper really springs out of trying to figure out, where does the field of algorithmic fairness go? What is a positive agenda for this field? It [ha]s often been caught between sort of two camps. On the one hand, you have the computer scientists, who say, Fairness is a problem, we can formalize fairness, we can create a definition for fairness. And then you have critical scholars in philosophy and STS and other fields that say, Of course you can't do that, of course you can't, you know, characterize this very complex notion of fairness into a single mathematical metric, here are all of these problems that arise when you try to do that. And so there's this sort of deep tension between, what do we do, right? On the one hand, there's these attempts to formalize the problem; on the other hand, there are critical scholars saying that that approach doesn't actually get us that far.

And one of the major issues that has come up through this formalization approach is what's been called the impossibility of fairness. And this is a mathematical result that sort of shows the incompatibility of different mathematical notions of fairness. And in particular, there's sort of the two major notions of fairness that often are talked about, the first being an idea of calibration, which is that people who are predicted to have an outcome at similar rates should be treated similarly. And the other notion would be something along the lines of equal false-positive and false-negative rates for different groups, saying that, you know, if two groups are equally, if someone is not going to commit a certain outcome, then they should be treated similarly; there should not be more false positives or false negatives across different groups. And this debate came up most notably when it was really sparked by the ProPublica article about the COMPAS algorithm, their machine bias report that showed that this algorithm that was used in pre-trial and sentencing made false-positive predictions about Black defendants, saying that--falsely predicting that Black defendants were going to recidivate at a much higher rate than they did, than the algorithm did for white defendants. And so it sparked this question of, you know, how do you navigate between this tension of wanting to, you know, get rid of these different false-positive rates, but then that's in tension with, you know, treating people the same based on their prediction. So that's really this impossibility result that has, I think, stymied the field's progress.
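
[Editor's note: the following is a small, hypothetical Python sketch added for illustration; it is not part of the episode. It shows the tension described above under invented numbers: with two made-up groups that have different base rates, a classifier that is equally "calibrated" for both groups (measured here crudely as positive predictive value) ends up with unequal false-positive rates, and equalizing the false-positive rates would break the calibration.]

def ppv_and_fpr(outcomes, flagged):
    """Positive predictive value (a rough calibration proxy) and false-positive rate."""
    tp = sum(1 for o, f in zip(outcomes, flagged) if o == 1 and f == 1)
    fp = sum(1 for o, f in zip(outcomes, flagged) if o == 0 and f == 1)
    negatives = sum(1 for o in outcomes if o == 0)
    return tp / (tp + fp), fp / negatives

# Group A: 12 of 20 people actually have the predicted outcome (base rate 0.6).
outcomes_a = [1] * 12 + [0] * 8
flagged_a = [1] * 8 + [0] * 4 + [1] * 2 + [0] * 6    # 8 true positives, 2 false positives

# Group B: 4 of 20 people actually have the predicted outcome (base rate 0.2).
outcomes_b = [1] * 4 + [0] * 16
flagged_b = [1] * 5 + [0] * 15                       # 4 true positives, 1 false positive

print(ppv_and_fpr(outcomes_a, flagged_a))   # (0.8, 0.25)   -> calibrated at 0.8, FPR of 0.25
print(ppv_and_fpr(outcomes_b, flagged_b))   # (0.8, 0.0625) -> same calibration, FPR four times lower
# Equalizing the false-positive rates here would force the two groups' calibration apart,
# which is the incompatibility between the two fairness notions discussed above.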

Kirsten Martin  4:06
And I think it also has real implications. Like, I feel it when I'm talking to people in the business school or anyone that's kind of looking at more applied data science, and they use this as a kind of excuse as to, like, We're doing the best we can, and if I just kind of, I just have to choose one of these random mathematical definitions of fairness, and therefore that's what fairness is, you know? And so I see that from the titles of "the impossibility of fairness," people glom onto that and say, Well, it's impossible, you know that, right? It's impossible, we can't do this. And so it's kind of had this downstream effect into applied work from the computer scientists noting that if we say that we want to have the same types of mistakes for the same type--that all people have error rates about the same versus good predictions about the same, that it's hard mathematically to get that to work. But that doesn't mean that we stop trying, or in your case to say, Maybe those aren't the right definitions of fairness, and that's not all there is.

Ben Green  5:01
Yeah, and that's a great point. And I think that was part of what motivated me to really dig into this impossibility result more fully was having a lot of conversations like what you're describing, where computer scientists or even policymakers are saying, Well, it's just a matter of trading one versus the other, we have to pick one, or we have to figure out the right balance, and that's really the best that we can do, this sort of shrug-your-shoulders, accept-your-fate approach. Which on one level is mathematically correct but I think felt to me like it was really limiting our ability to think creatively about how we could try to promote equality with algorithms.

Kirsten Martin  5:39
Right. Yeah. And you do a nice job of saying, you know, just that the problems with the formal, that one, we're ignoring these philosophical meanings, but we're also just naturally limited by the tools of the field, which I like your quote from Dewey, which I always misquote by saying, A problem defined is a problem half-solved, which is, like, the much shorter version of it. But the longer version, which you nicely quote, is, "The way in which a problem is conceived decides what specific suggestions are entertained and which are dismissed." And so we automatically by defining a problem as a mathematical problem that we need to solve kind of go to it with only those tools that we have, and then the answer we get, it's incommensurable, is the answer that we have, and we just stop looking, we kind of dismiss these other ideas. And your great point is, which [is] leading to myopic suggestions for how to promote fairness in practice, which I think we see the results of.

Ben Green  6:31
Yeah, and I think, you know, a lot of this is--I don't get too much into, you know, the Dewey of it all in this paper (Kirsten laughs), but I think there's more that I've been thinking about and hope to write on sort of bringing in Dewey's pragmatism into thinking about the role of algorithms and how we design algorithms, really shifting away from trying to focus on these abstract conceptions of ends in and of themselves. When we--you know, from computer scientists trying to define fairness, then the conversation centers around the definitions of fairness, these mathematical conceptions, and even our arguments, you know, sort of focus on that level to say, Are algorithms inherently biased? Can we make fair algorithms? And it's very much focused on the algorithm itself [inaudible] because of the way that the problem is conceived. And what I am pushing with this article and some of my other work is to say that we really shouldn't be worrying about that mathematical construct so much, except to say, How does this actually affect the real world? How does this work in practice? Rather than asking is this algorithm inherently fair or biased, we should be thinking more in terms of, to what extent do algorithms improve or hurt efforts to promote equality? Right? Sort of shifting from a debate of, This represents fairness versus, No it doesn't, to: How can we have algorithms that play more productive roles in promoting equality? And whatever we're doing on the formalization side is instrumental to that downstream goal.

Kirsten Martin  8:03
To that goal. Right, right, right, right. To kind of shift what the goal is versus a technical outcome is to say, What do we want to have in this decision space? Like, so what is, what is our goal for this type of decision? And then have that kind of, you know, filter downstream as to whatever, and it would almost become an engineering input of, like, the assumptions that you need to make.

Ben Green  8:22
Yeah, because there's a huge gap between the, you know, these sorts of notions that we put into algorithms and the real-world impacts. And that's part of what critiques of algorithmic fairness, including my own here, are pointing to is that, an algorithm can satisfy a notion of fairness that's pretty standard, and yet actually, by many accounts, be re-entrenching inequality. So if that's the case, clearly we can't just be relying on these definitions, we need to find other approaches.

Kirsten Martin  8:50
Right, I thought that was the--I thought that was the more powerful part of the piece was to say, not just that they're, not just that they're not helpful, but they could, by using them, be doing more harm. Like, by relying on them, if we're kind of generating disparities in social and material resources, you know, it's kind of reinforcing systematic problems that we have in society, then they're actually--not that they're ever neutral, but it's not just that they're not that helpful, they're actually harmful in some ways, like, by relying on them too much. So I thought, I didn't know if you could talk a little bit about the move, which you pull on from formal versus substantive equality, as to how you see that difference and how you kind of take that conversation in formal and substantive equality and then bring it over to the fairness in algorithms discussion.

Ben Green  9:35
Yeah, so what I'm looking at with--yeah, so sort of thinking about, Where are the problems in how algorithmic fairness is conceived? And it's really two intertwined issues, the first being the issue around really trying to make this a mathematical problem and define fairness mathematically, the second piece being the limiting scope of treating fairness really as just a matter of inputs and outputs within a specific decision-making process. And that really maps onto ideas of formal equality, which are striving to sort of, yeah, create--you know, sometimes this is quite similar to procedural equality--to create, you know, fair procedures, decision-making processes that say, you know, Two people who are alike in terms of their input attributes for this decision should be treated similarly within the bounds of this process, and everything upstream and downstream of that decision-making process is ignored. And I think that that sort of maps onto a lot of the issues that we can see within the formal approaches. And part of how I bring in substantive equality is by looking at how some philosophers and legal scholars have responded to similar types of tensions within efforts to promote equality that actually look a lot like the impossibility of fairness. Where within a more formal frame of analysis, it looks like we're caught between these two completely incompatible ways of trying to promote equality, but actually, if we take a zoomed-out approach, if we take a more substantive approach that looks at social hierarchies and downstream impacts and sort of the real-world complexity of the situation and thinks about reforms beyond just the scope of decision-making procedures, we can actually see that what looked like a fundamental tension is not so intractable after all. So I'm bringing in ideas from those scholars to say, Here are some ways to think about how we might approach reform in these settings where it looks like we're stuck with a deeply intractable zero-sum dilemma.

Kirsten Martin  11:42
So almost, like, as a visual--which I think this actually goes towards, like, other things with algorithms as to, like, efficiency claims and stuff like that--is that a lot of times, we're looking in a very, very small box around just, like, the data that we've received and then the model, and then maybe an outcome. And we're trying to--I'm using my hands, and no one can see me except for Ben--but with very, very small, and we're trying to optimize or figure out the fairness within that. So this is like the error rates being the same or the true positives being similar for similar groups or something along those lines. But when all of a sudden, that's all the outcome and how we treat people, how the datasets were actually created is actually embedded within this larger system that we need, if we kind of expand out to see what's going on--instead of just taking that as, like, a null, like a given--if we expand out to see what's going on, one, we're not gonna use mathematical models for that, but also the problems that we're trying to solve might be different. And the issues that we might be facing with the data might be different, as well. So our solutions and the problems are going to be different if we expand out. And that's similar to issues around equality, where we're trying to--I don't know, university admissions, where we're trying to, like, micromanage little tiny things about someone's application versus saying, Well, who's even taking the SAT? And which schools are we even--like, kind of broadening out further to say, What else is going on in society that we need to understand how, instead of looking only at SAT scores, looking, going more broadly, and trying to understand what else is going on in the world to create the imbalances that we're seeing in SATs or GPAs or whatever that might be. Is that a fair characterization of kind of broadening it out?

Ben Green  13:20
Yeah, no, I think that that captures it really well. And I think, with respect to algorithms, gets to this point around, you know, are we trying to define these sort of mathematical formalisms about algorithms as somehow a property of algorithm, which maybe makes sense in some internal world, or are we actually trying to figure out how we can use algorithms as tools for improving society? And if we're in the first world, then sure, we can define fairness, we can treat inputs and outputs and think about fairness as this mathematical construct. But if we're actually interested in real-world impact, we have to look at what's happening in practice, we have to take this broader scope, especially because just relying on these mathematical notions of fairness, it doesn't actually lead to deeper equality when implemented in the real world. So we really have to think in this deeper way, and I think yeah, there are a lot of lessons from ideas about equality and ideas about political reform and theories of change to draw on and say, Why do certain types of reforms, you know, lead in one direction and certain types of reforms lead in another direction? I think we can really pull on that, you know, especially when we're talking about algorithms like pre-trial risk assessments or predictive policing algorithms or welfare algorithms, where these are directly operating on these high-stakes decisions and on people's lives. But also for the developers a major motivation is that they are contributing to improving society. The goal isn't just to make a fancy mathematical tool for, you know, theoretical computer science. They're working on an applied problem with an implicit or explicit normative goal behind that when they're developing these systems. So I think, you know, it's important to carry that normative lens all the way through not just to a high-level motivation but also to how do we evaluate what these tools are doing and whether or not they're actually working.

Kirsten Martin  15:18
That's a great point. Because I think, like, the idea of, like, what it means to actually work in the real world is one of those things that we just don't pay enough attention to, and that we need to pay more attention to. What I always say is--which is extremely odd. Because in other places in an organization--like when we automate a manufacturing line--we have to take into consideration all these laws and norms and, like, actually what's going on within the context and make sure that it's not creating problems in the manufacturing line, and that it's not hurting employees. I'm not saying that businesses do this all the time, but they should. (laughs) And so I think that it's just one of those things that we looked at whether it optimized or was efficient in such a narrow, small little box and not really understanding in what way there are downstream effects, or even, like, what are the harms of additional true positives to someone who's not committing or people think is committing fraud in welfare? And especially since you look in the government context, it's always someone that's taking away someone's rights, right? So we have someone, like, so it's like a higher impact on someone, and it's not really clear how we're measuring whether or not that AI is actually successful, or what success even means in that case. I don't know, do you know Apryl Williams? She does stuff on reparative--

Ben Green  16:28
Mm hmm.

Kirsten Martin  16:28
Yeah, so she has a workshop coming up this weekend on reparative algorithms with that same kind of idea of looking at some sort of larger goal and not just trying to optimize on something kind of very narrow and small, but looking at something that's more of a social justice approach, repairing what's out there versus just trying to make whatever is going on faster, which is kind of where we are right now. So I don't know, could you speak a little bit about your alternative, which is, you know, focusing on substantive? I recommend everyone look at Figure 2, which shows the role of--we don't have to go through Figure 2, but what I liked about Figure 2 is that I think it's a great, like, actual visual reminder of all the work that needs to be done around substantive ideas of fairness. Like, there's three times the boxes in Figure 2 around substantive fairness, and formal is really like an offshoot of a discussion. And so it really shows everything that we're missing by ignoring questions about substantive fairness and our kind of preoccupation with this one lane of formal.

Ben Green  17:31
So yeah, the approach that I describe in the paper is called substantive algorithmic fairness. And I characterize it into two different types of responses. There's the relational response and the structural response. And we can think about this in terms of a more substantive diagnosis of the problem of even the impossibility of fairness, which comes up, you know, most first in Figure 1 as a sort of prelude to the second figure, which is more of a how-might-you-implement-this-method-in-practice. But when we're looking at something like the impossibility of fairness, we can see sort of upstream and downstream issues that create this tension. Upstream of a decision-making process, we have what I call a relational harm. We have social hierarchies that say, you know, there are deep inequalities in terms of educational outcomes or how likely different communities are to be arrested or to commit crimes in the future. And, you know, it's important to note that these are not just issues of the data being biased in the sense of misrepresentation, but also that the data is often accurately picking up on real-world enduring inequalities. And I think that's an important point of this relational harm, where often we talk about the data's bias just to mean, well, the data's not--you know, is misrepresenting the reality. But often the reality is deeply unequal, and the data actually accurately captures that.

Kirsten Martin  19:01
Right. 

Ben Green  19:01
So we can't just have the issue be a concern about misrepresentation. So that's really this issue of the relational harm. And then downstream of a decision-making process, we have what I call the structural harm of how--what are the implications of the decision that the algorithm is informing? And typically, as you already hinted at, often these decisions are punishing individuals or restricting benefits from individuals who are judged negatively. So essentially here, downstream we're re-entrenching the inequality that exists by saying, Oh, you didn't, you know, you have lower SAT score likely because you have all, you know, because your social group was oppressed, that also now means that you, rather than getting support, you are now not able to get into college, you're not able to get this job or something like that. So we have the downstream harm that is further magnifying the upstream inequality. You know, sort of if you think about this larger chain of a process, when we're just looking at the decision-making process itself, which is the case in formal equality or formal algorithmic fairness, we're not able to account for either of these concerns. And the goal of substantive algorithmic fairness is to bring both of these types of concerns into view.
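
[Editor's note: another small, hypothetical sketch added for illustration; it is not part of the episode. It shows, with invented base rates, the structural harm just described: even a perfectly accurate predictor, applied to data that faithfully reflects unequal base rates, issues more punitive decisions to the already-disadvantaged group, magnifying the upstream inequality.]

base_rates = {"group_a": 0.6, "group_b": 0.2}   # hypothetical "risk" base rates per group
population = 1000
for group, rate in base_rates.items():
    denied = int(population * rate)   # a perfectly accurate predictor flags exactly the true positives
    print(f"{group}: {denied} of {population} people denied the benefit or given the harsher outcome")
# The predictions are "correct," yet the decision they inform re-entrenches the upstream
# inequality rather than repairing it -- the relational harm feeding the structural harm.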

Kirsten Martin  20:21
I have to say, I was really impressed by how early you got to your answer, and how much time you spent with it. Because the vast majority of papers get to, like, here's everything that's wrong with what you all are doing--and then implications and discussion. You know what I mean? And so I want to applaud you for sticking with your solution as long and as detailed as you do. Because every time you do that, you are, like, laying yourself out to be then later critiqued. (both laugh) Even though we're academics, and we invite that on the regular--like, that's just kind of our job--it's still, it is not an easy thing. And I even looked at the number of, like, the pages, and I was like, Oh, you got there, like, by page 15 (Ben laughs), I'm already talking about your solution. And I was like, That's impressive. Because I always tell people, Get to the halfway point, talk about what you're going to do differently. And you did it, and you spend a lot of time with it, and very detailed in applying it. So I mean, it's one thing to propose, like, here's a way we can think about this--we should be thinking about this differently, even stronger--we should be thinking about this differently. And by the way, and this is how you would do it. Like, this is, I'm going to walk you through what that would mean in a very specific case and kind of what the repercussions of doing that would be. So taking the COMPAS idea, the sentencing algorithm, and then saying like, What would that mean in this situation? What would it mean in other situations? And I think--I really recommend that people read it because even if you're not interested in it, just to see, like, how you can actually put forward an idea and then defend it, you know, for a lengthy amount of time. (both laugh) I don't know what the reviewer said, but--

Ben Green  21:53
Yeah, it's funny--I appreciate that you picked up on that and enjoyed that. Because I would say in successive drafts of this paper, and it went through many drafts, the positive agenda expanded and got larger and larger and started earlier and earlier, and the critique element got shorter and shorter. Because yeah, increasingly, it felt like--if the paper is more, there's a lot of papers that, yeah, helpfully point out things wrong and then, right, have a very brief section of, Here are some high-level steps forward. I've written papers like that, too.

Kirsten Martin  22:27
Right, same.

Ben Green  22:28
They don't give a clear, positive agenda. And so yeah, my goal was much more to provide that, which then means, oh, you have to keep adding more and more layers to the positive agenda (Kirsten laughs) so that it feels fully fleshed out. It's always easier to do critique--

Kirsten Martin  22:43
Oh yeah. (laughs)

Ben Green  22:43
Than I think to make proposals. But yeah, I'm hoping that this can be, you know--hopefully now as opposed to earlier drafts, it has enough meat on its bones, so to speak, that it's both, yeah, more to think about how it could be implemented in practice. Even Figure 2 is something I added quite late in the process to really try to distill it down to say, What are the actual steps that someone would follow here? As opposed to just sort of a higher level conceptual frame? 

Kirsten Martin  23:10
Yeah, I mean, it's not often you see figures in a philosophy paper. So I--kudos to that one, as well. Because I used to joke, they wouldn't even know how to process it normally. (both laugh) They probably were like, What? What is this JPEG? I don't even understand what this is.

Ben Green  23:23
We'll see what happens when I go through the proofs. (laughs) I'm sure they'll have no problems.

Kirsten Martin  23:28
No, I'm sure, I'm sure. You know the other, what it's a good reminder of, though, is that in some ways--there's this great philosopher of science, Richard Rudner, who talks about the value-ladenness of science, and people have applied it to technology. And the idea that he says is that, Look, you're making these value-laden decisions throughout; by ignoring them--so by ignoring your more substantive definitions of fairness, of algorithmic fairness--it doesn't mean that you're not actually taking a stand on them. You're just doing it, in Rudner's words, unmanageably. So you're not actually being thoughtful about this because you're inheriting them no matter what, and you're contributing to them, or you're interacting with them no matter what. It's a matter of whether you're kind of managing that interaction in the way that you deal with it on the front end, or, to your point, are you contributing to it later on downstream as well? And the visual, it really helps with that. So I will just say that I think this kind of summarizes your paper: "Substantive algorithmic fairness presents a new direction for algorithmic fairness: away from formal mathematical models of 'fair' decision-making and toward substantive evaluations of how algorithms can and cannot promote justice." Which happens early in the paper, but I thought it was a great summary of kind of what the point is and also really focuses on the main point of the paper, which is to offer this new vision or different vision of how we think about algorithmic fairness, and also the role of mathematical fairness as a piece of the overall understanding of fairness.

So we usually end with anyone that you know in the field--you know, broadly construed--that we should be paying attention to. And this can be from any discipline whatsoever. I'm very agnostic to discipline. So if there's anyone that you really, when you think, Oh, I can't wait to see what they write about next. Is there anyone we should be paying attention to?

Ben Green  25:18
Absolutely. I think there's, you know, a number of people that I think are really doing interesting work at the intersections of AI, ethics, race, and the real-world social impact. So Rashida Richardson is one who's a legal scholar at Northeastern, and she writes really great articles about sort of the role of race in how AI is used in police departments and government bodies generally. Anna Lauren Hoffmann is at University of Washington writing some of the critical articles on fairness that I mention, refer to, and cite in the article, but really pointing to the gap between these sort of abstract ideal conceptions and some of the real-world issues of inequality. Lily Hu is a friend who is a philosopher at Yale writing about issues of race and social science and AI in terms of how we construct race and issues of causality that I always really enjoy reading. And then Rodrigo Ochigame, who is a professor at the University of Leiden writing about, similarly to many of the others, about the limits of AI ethics and AI fairness and trying to bring in more of a historical approach, an STS approach, an anthropological approach, to understanding, where do these conceptions come from? What sorts of impacts do they have when deployed by political actors and engineers into practice?

Kirsten Martin  26:51
Oh, that's great. Well that's, you covered your bases. I saw when you cited Anna Lauren Hoffmann, I love that paper of hers from a number of years ago where she really just takes it to task on the focus on discrimination, and it was just so pointed and on point that I was--it's so refreshing to have someone just kind of tackle that straight on. But thank you very much, that's really helpful. Well, thanks so much for being here. I really appreciate you taking the time out of your day. I know it's super busy, so thank you very much.

Ben Green  27:17
Yeah. Thank you so much. This was a great conversation. Thanks for having me on.

Kirsten Martin  27:23
(voiceover) TEC Talks is a production of the Notre Dame Technology Ethics Center. For more visit techethics.nd.edu.

11. Provoking Alternative Visions of Technology (October 5, 2022)

Guest: Daniel Susser (Penn State University)

Transcript

 

Kirsten Martin  0:03
Welcome to TEC Talks, a podcast about the impact of technology on humanity. I'm Kirsten Martin, the director of the Notre Dame Technology Ethics Center, or what we like to call ND TEC. In these discussions, we discuss an important idea, paper, article, or discovery in tech ethics, and today, I'm so happy to be joined by Daniel Susser. Daniel is an assistant professor in the College of Information Sciences and Technology [at Penn State]. A philosopher by training, he works at the intersection of technology, ethics, and policy. His research aims to highlight normative issues in the design, development, and use of digital technologies, and he's currently focused on questions about privacy, online influence, and automated decisionmaking. Today, we're going to take a deeper dive into your article in Surveillance & Society that's just recent--super short, but I really liked it--on "Data and the Good?" And I-- well, I like the fact that, we can get into the meat of it, but pretty much what I took you to be saying is that privacy law scholars and surveillance studies scholars, which don't always operate, other people might not realize but they kind of build on different areas and their goals are different. Even though we might think of them as similar from the outside, they actually are very distinct. But they both talk about the broader implications of surveillance for individuals and society but don't focus enough on what should be happening. So focusing on what shouldn't be happening, but not so much on what should be happening, and I didn't know if you could speak a little bit about that, both where they're focused now and where you think they should be focused.

Daniel Susser  1:26
Yeah, thanks. So I mean, one, thanks for having me, Kirsten. I'm really excited to be here, and I'm glad you enjoyed the piece. You know, just sort of as context, the short essay was part of a, like a dialogue series in Surveillance & Society. And so the idea was for a few pieces to sort of speak together on some kind of theme--really short, sort of like provocation pieces. So that's the spirit in which I wrote it. And the prompt was the intersection of privacy law scholarship and surveillance studies scholarship. So that's where that focus came from. And you know, Scott Skinner-Thompson at the University of Colorado invited me to participate and so had me just sort of thinking about where these two different fields--which, as you say, sort of speak to similar issues often but generally usually coming from very different theoretical backgrounds and disciplinary perspectives--sort of what happens at the intersection of these two spaces. And like you said, I was just really struck as I was thinking about it by how much really amazing, trenchant, I think super important work has been done and is being done to point out the kinds of harms that data collection and data-driven technologies can produce, and to level, I think, really meaningful and, like, devastating normative critiques against these kinds of harms. But much less--I mean, there is some work, which I try to point to in the paper and I'm sure we'll talk about, but just much less work that really tries to put forward an alternative vision for what these technologies can do for us. And I have to say, I mean, this is--I do not mean this piece as an attempt to cast stones. I think my own work reflects this same kind of bias, where I have been mostly focused on critique and less on putting forward a kind of substantive vision. And so in part, this is kind of like a mea culpa. (Kirsten laughs) I want to be, in my own work, be thinking about sort of how we could do this more.

Kirsten Martin  3:21
I think for a lot of us, and I would say the same about my own work--you know, we both have written about manipulation, we both write about privacy. I think a lot of it is, you see something wrong going on, and you're like, Oh, I want to enumerate why this is wrong and help out and explain, these are the things that are wrong. And then you get to the end of the paper, and then that's just it. (laughs) Or maybe the target is a different type of journal outlet, where you're supposed to be designing perhaps, you know, regulation or policy that might be different. And what I thought was also interesting is you do a great job of identifying, like, why the fields steer clear. You know, so why--like, this is a commonality of surveillance studies and privacy law that we steer clear of enumerating what a good technology would look like given the theories that I just used to explain why this went wrong. And some of it's like, it's just not the job of law to do that. These law professors were not brought up to identify, and the law scholarship is not there to identify, procedural norms that should occur within the law and not a design decision, which is really left up to me in a business school or you in an information school. Right?

Daniel Susser  4:30
Yeah, a hundred percent. I mean, I think both of the things that you just pointed to were really, like, important as I was thinking through this piece. On the one hand, right, like I think the motivation for really focusing on diagnosing harms is completely understandable and I think justified in a lot of cases because there are so many of them, and they are so incredibly worrying that, you know, it's perfectly understandable why so many of us, I think, really home in on that and think like, Let's point out what these potential harms are and think about how we can mitigate them. And then yeah, the sort of middle part of the piece, where it's like a bit speculative.

Kirsten Martin  5:06
Yeah, right right.

Daniel Susser  5:07
But I identify what I imagine are some sources for this hesitation to sort of put forward a substantive, positive vision. Which yeah, in the context of legal scholarship, I think the sort of liberal sort of foundations of American legal scholarship and most Western legal scholarship does exactly as you said, which is sort of assume that the law's role is really to provide a set of rules within which otherwise autonomous people can kind of live out their life in whichever way they see fit. And there's a kind of implicit injunction, sometimes an explicit injunction, against law sort of intervening in our lives in a more substantive way, which again, I don't, I don't reject that. I think that's, like, a perfectly--there's a lot of wisdom to that perspective. And likewise, I think, you know, some people who work not in a liberal tradition but in a more radical tradition are really worried about, you know, if we put forward new substantive conceptions of how these technologies should be operating, are we going to maybe, you know, inadvertently reinscribe racist or colonial or imperialist or other kinds of oppressive values into these technologies? And so there's a real worry about, I think, normative prescription at all coming from some of those traditions, especially in the surveillance studies literature. And I think those worries are perfectly understandable, too. But, you know, for reasons I sort of lay out in the second part of the paper, I think we should be emboldened, and we should sort of, even in the face of those kinds of real concerns, work to put forward a substantive vision. Because the alternative, as I suggest in the paper, is that we are just beholden to the technology industry's vision for the future. And so I think if we don't put forward our own alternative conceptions, the best we can hope for is a kind of harm reduction. And I think that we can do better than that.

Kirsten Martin  7:00
Right, and I like what you used, [Langdon] Winner, to identify that "particular technologies," I'll just quote, "generate specific 'forms of life.' By 'provid[ing] structure for human activities,'" they "'reshape that activity and its meaning.'" And "'As they become woven into the texture of everyday existence, the devices, techniques, and systems we adopt shed their tool-like qualities'" and "'become a part of our very humanity.'" And so I--which is beautifully written, I have to say--but it also goes to, like, how important it is to be speaking of an alternative vision of, like, where you think the future should be or what world are you trying to create with the design of this technology? That that's--I like to say, That's where the magic happens is in design and development. And in some ways, kind of harm reduction or mitigation at the end is unsatisfying because it is--it's important, and we should still do it, but the main idea is, like, How would we have done this differently? Because that's actually where the important decisions, building on Winner, that's where the important decisions are being made.

Daniel Susser  8:00
Yeah. Isn't that quote so good?

Kirsten Martin  8:02
Yes. 

Daniel Susser  8:02
He's such a good, good writer. Yeah, I think there is a, I mean, I totally agree with everything you just said. I mean, I think there's a vein of scholarship in political theory and political philosophy that has of late really sort of focused on the sort of diagnosis of harms and mitigation of harms as sort of a more tractable set of problems that we can deal with, and sort of move away from what has traditionally been called sort of ideal theory in politics that tries to put forward these kinds of more substantive, positive visions. And I think in certain domains, that makes a lot of sense. Because what we--like, the urgent problem is just to, you know, create less-racist institutions or to create a politics that is less polarizing or something like that.

But the thing I take away from Langdon Winner's work, from the work of Phil Agre, and other people that I cite in the paper--sort of really classical texts in science and technology studies, philosophy of technology--is that technologies are, you know, we use them in this way that feels like they are just sort of enhancing our ability to realize our own ends. But what the scholarship really teaches us--and we have, you know, decades now of work that really tries to make this argument, and I think makes it really powerfully--is that that's never all that's going on. Technologies are always world-building in this way, and I think the language, you know, Langdon Winner takes this language from Wittgenstein, of forms of life, that technologies provide a kind of form of life. And what he means is that they structure our activity and the meaning that we imbue that activity with in various different ways that really impacts how we live our lives and experience our lives individually, how we organize socially and collectively. It impacts the kinds of political institutions that we're able to create and maintain. And so because technologies are always already doing that kind of positive, like, constructive work in the background, I think it just raises the question for us: If technologies are world-building, like, what are the worlds that we want our technologies to build? And I think that for most of us, while it's true that we would like them to be less-unjust worlds than the ones that we're currently experiencing, I think we want more than that. And, you know, I think we want a more democratic world, I think we want a more egalitarian world. And I think that that requires thinking not just about sort of removing opposition to democratic movements or removing obstacles to egalitarian social relations but actually, like, positively building technologies in ways that advance those goals.

Kirsten Martin  10:50
That's great. Yeah, I think your paper is a great call to not only identify what's going on and also why it's wrong--so what exactly is going on when we get this instinct that something seems off--and then you identify some scholars, like you said a minute ago, like, you identify some scholars that are kind of moving in this direction to say, like, What's this alternative vision of what we actually want the technologies to support? Versus saying, Stop doing this, modify this.

Daniel Susser  11:20
Yeah, absolutely. There--you know, I mention a number of folks in the paper who I think are moving in this direction. I think, you know, there's amazing work by Ruha Benjamin. She has an amazing book called Race After Technology, where she advances this vision of tech abolition, which sounds like a purely negative project, but in Benjamin's view, you know, she says something to the effect of, Abolition is never just about sort of destroying the oppressive system. It's also about envisioning a new one. And I think that that's really crucial. I'm really excited about her new, forthcoming book--I think it's coming out, like, next month or the month after--called Viral Justice, where it looks like she's going to be engaging in that kind of work, as well. I'm not sure if this is the point where you want me to sort of name other folks that I mention.

Kirsten Martin  12:05
Oh, yeah, we should. That's a, that's a good time, that's a good place to do it. Yeah. Especially because you name some of them in your paper.

Daniel Susser  12:11
Yeah, for sure. So, you know, another person I mention in the paper, Salomé Viljoen, who is a legal scholar at the University of Michigan, I think is doing really amazing work that really reflects, I think, exactly this kind of perspective that we've been talking about. It is on the one hand, you know, it produces a critique of the existing order, but then also really tries to push us in the direction of imagining what a new sort of legal order would look like. Salomé's focus is on thinking about data not just as sort of about individual people, but rather as a medium for producing social relationships. And in particular, she wants us to think about what it would mean for data to produce more democratic, egalitarian social relationships. And she offers some really useful concepts for thinking that through.

Another book that I just have to give a sort of shout-out to, James Muldoon has a new book called Platform Socialism, which does a really incredible job of thinking about how we can sort of, like, leverage the kinds of tools and techniques that we have in order to produce a radically different political economic order. And you know, not everyone is going to be on board with platform socialism. (Kirsten laughs) That's sort of a contentious set of political views. 

Kirsten Martin  13:27
Right right right.

Daniel Susser  13:28
But even for those people who maybe don't want to take on board all of that politics, as an example of work that can provide just a completely different set of conceptual tools for helping us think through these kinds of questions to sort of build what I call in the paper, I take the language from Sheila Jasanoff, of sociotechnical imaginaries. And I say that, you know, we need folks to help us develop new sociotechnical imaginaries so that we're not kind of beholden to those of Silicon Valley. And so James Muldoon's book I think really offers some amazing conceptual tools that helps us think in different ways from the ways that we're used to. I should also just say because you brought him up, I mean, I just cannot recommend enough to people who might be listening to this some of the classics--

Kirsten Martin  14:15
I know.

Daniel Susser  14:15
--in philosophy of technology and science and technology studies. So you know, Langdon Winner's book, The Whale and the Reactor, I teach it every semester, and every time I go through it, I, like, find new stuff that makes me think really hard even though it was written, you know, 40 years ago. Phil Agre's work. And there's also a really great, a really great blog, like, Substack, if people are interested in such things, by Michael Sacasas called the Convivial Society, where he takes a lot of--he's a really wonderful reader of these kinds of classic philosophy of technology writers, and he does really amazing work sort of translating their insights for contemporary problems. So, highly recommend that.

Kirsten Martin  14:53
That's great. That's a great list of people to look for, both very current and then, like, the classics. I know I'm going up--Apryl Williams' work on reparative algorithms, which is the idea of not just attempting to get to whatever status quo is and not doing further harm, but actually trying to empower people at the margins through the design of your AI. And so there is work--now she's not in surveillance studies or PLSC, or privacy law scholars. Which maybe is to the point that there's, you know, interesting reparative work that's going on where you see a vision of the future, and, like, how can we actually design our technologies to enact that vision of the future? Because we're, I think the power of Winner and others like Winner, Langdon Winner, is that, what he would say probably is that we're doing it anyway. And so whether you're thoughtful about it or not, and why don't we just be more thoughtful and manage it in a better way, which is kind of your call to arms for this entire endeavor, which I really liked. I'll just say this is your words, not Langdon Winner's I think, from your paper, but-- "I'm suggesting that we contemplate new goals. In addition to diagnosing and mitigating the risks of data-driven technologies, privacy and surveillance scholars ought to contest the technology industry's vision of the technological future we are striving to achieve by offering competing visions of our own." And I really think that that was a great summary of, like, the call of saying kind of, you know, at the end, kind of push ourselves to come up with what's an alternative vision or what would be the steps of identifying an alternative vision. And so I really--I really thank you for writing it. It was a great piece.

Daniel Susser  16:24
Oh, thanks so much. I really appreciate you taking the time to sort of, like, think through it with me and to highlight these pieces. I hope it is--it was meant as a provocation, so I hope it provokes people. I'm excited to see what it yields.

Kirsten Martin  16:36
Right. Yeah, well, we might as well amplify the provocation. (Daniel laughs) I always love a good provocation. So thank you very much for coming.

Daniel Susser  16:41
Oh, thanks so much, Kirsten. 

Kirsten Martin  16:44
(voiceover) TEC Talks is a production of the Notre Dame Technology Ethics Center. For more visit techethics.nd.edu.