TEC Talks

TEC Talks was a podcast featuring conversations on a broad range of topics in technology ethics.

Each episode took one article, idea, case, or discovery and examined its larger implications for the field of tech ethics, with the goal of making this work accessible to a wide audience. Episodes ran 15–30 minutes.

The podcast was hosted by Kirsten Martin, a nationally recognized expert in privacy, technology, and business ethics, during her tenure as director of the Notre Dame Technology Ethics Center (ND TEC).

*Note: Before becoming a podcast, TEC Talks was produced as a virtual live event series in partnership with the Notre Dame-IBM Tech Ethics Lab and ThinkND. You can access videos of these talks through the links on this page.

Subscribe to the Podcast

You can also join the ND TEC email list to get episodes delivered right to your inbox.

Season 1 Episodes

14. Our Data Privacy and the Issue With Inferences (November 30, 2022)

Guest: Ignacio Cofone (McGill University)

Episode and Notes

How much would “owning” your data actually protect your privacy?

Host Kirsten Martin is joined by Ignacio Cofone, an assistant professor and Canada Research Chair in Artificial Intelligence Law & Data Governance at McGill University’s Faculty of Law. His research focuses on privacy harms and on algorithmic decision-making, with his current projects examining how to evaluate standing and compensation in privacy class actions and how to prevent algorithmic discrimination.

Ignacio came on the show to talk about his paper “Privacy Standing,” which appeared in the University of Illinois Law Review.

Providing courts with guidance on how to assess privacy injuries and advocating for people’s rights to seek compensation for them (i.e., legal standing), Ignacio’s paper distinguishes between what constitutes a privacy loss, a privacy harm, and an actionable privacy injury. He also seeks to define downstream, consequential harms as something distinct from privacy harms so that the latter can be recognized as harmful on their own and not dismissed simply because they haven’t (yet) led to something more tangible like identity theft or a financial loss.

As for where privacy harms originate, Ignacio emphasizes how frequently they arise not from the moment our data is collected but rather from the inferences later made about us from that data—or even from the data of others who just happen to be similar to us. That means the prevalent approach of giving people notice and choice—which Kirsten traces back to the economics of information literature of the 1960s—and its focus on asking users for permission to collect their data is in many ways inadequate when it comes to protecting our privacy.
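
To make the inference point concrete, here is a minimal, hypothetical sketch (not from Ignacio's paper) of how an attribute you never shared can be predicted from the data of similar people who did share it. The users, numbers, and the infer_income helper below are all invented for illustration:

```python
# Toy illustration with invented data: inferring an attribute a user
# never shared, using only data from similar users who did share it.
from statistics import mean

# Hypothetical users who shared both a behavioral feature (reduced
# here to a single number) and their income.
shared_profiles = [
    {"feature": 0.9, "income": 95_000},
    {"feature": 0.8, "income": 88_000},
    {"feature": 0.3, "income": 45_000},
    {"feature": 0.2, "income": 41_000},
]

def infer_income(feature: float, k: int = 2) -> float:
    """Estimate income from the k most similar users who shared theirs."""
    nearest = sorted(shared_profiles, key=lambda u: abs(u["feature"] - feature))[:k]
    return mean(u["income"] for u in nearest)

# A user who opted out of sharing income still gets one inferred,
# purely from resembling people who did share.
print(infer_income(feature=0.85))  # 91500.0
```

Even a perfectly honored opt-out over your own data leaves this kind of inference untouched, which is why asking users for permission to collect their data only goes so far.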

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ignacio highlighted three fellow law professors who also study privacy, among other issues:

*Salomé was also the guest for episode 10 of TEC Talks, “Moving Data Governance to the Forest From the Trees.”

13. AI, Anti-Discrimination Law, and Your (Artificial) Immutability (November 16, 2022)

Guest: Sandra Wachter (University of Oxford)

Episode and Notes

How could a personal characteristic like eye movement affect, say, whether you get a loan?

Host Kirsten Martin is joined by Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute (OII) at the University of Oxford. She founded and leads OII’s Governance of Emerging Technologies (GET) Research Programme, which investigates the legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.

Sandra came on the show to talk about her paper “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law,” which is forthcoming in the Tulane Law Review.

Most people are familiar with the idea of anti-discrimination law and its focus on protected-class attributes—e.g., race, national origin, or age—that represent something immutable about who we are as individuals and that, as Sandra explains, have historically been used as criteria to hold people back.

She says that with algorithms, we’re now being placed in other groups that are also largely beyond our control but that can nevertheless impact our access to goods and services and things like whether we get hired for a job. These groups fall into two main categories: people who share non-protected attributes—say, the type of internet browser they use, how their retinas move, or whether they own a dog—and people who share characteristics that are significant to computers (e.g., clicking behavior) but for which we as humans have no social concept.

This leads to what Sandra calls “artificial immutability” in the attributes used to describe us, or the idea that there are things about ourselves we can’t change, not because they were given at birth but because we’re unaware they’ve been assigned to us by an algorithm. She offers a definition of what constitutes an immutable trait and notes that there can be legitimate uses of such traits in decision-making, but that in those cases organizations need to be able to explain why they’re relevant.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Sandra highlighted University of Cambridge psychologist Amy Orben and her research on online harms, particularly in the context of young people’s use of social media.

12. Algorithmic Fairness is More Than a Math Problem (October 19, 2022)

Guest: Ben Green (University of Michigan)

Episode and Notes

Host Kirsten Martin is joined by Ben Green, an assistant professor at the Gerald R. Ford School of Public Policy and a postdoctoral scholar in the Michigan Society of Fellows at the University of Michigan. Specializing in the social and political impacts of government algorithms, with a focus on algorithmic fairness, smart cities, and the criminal justice system, Ben is also an affiliate of the Berkman Klein Center for Internet & Society at Harvard University and a fellow of the Center for Democracy & Technology.

He came on the show to talk about his paper “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness,” which recently appeared in Philosophy & Technology.

Ben begins by explaining the aforementioned “impossibility of fairness,” the result that different mathematical notions of what makes a system fair are mutually incompatible, so that in general no algorithm can satisfy them all at once. By focusing on meeting one of these formal definitions of fairness, an algorithm that is mathematically “fair” can nevertheless yield decisions that re-entrench real-world injustices, including those it may have been designed to counter.
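
To see why these definitions conflict, consider a minimal numeric sketch of a standard result from this literature (the identity below is due to Chouldechova; the numbers are invented and not taken from Ben's paper). Because FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR), two groups with different base rates p cannot simultaneously have equal calibration (PPV) and equal error rates:

```python
# Minimal numeric sketch of the "impossibility of fairness": with
# unequal base rates, holding calibration (PPV) and the false negative
# rate equal across groups forces unequal false positive rates.
# Illustrative numbers only; not from the paper.

def implied_fpr(ppv: float, fnr: float, prevalence: float) -> float:
    """FPR implied by the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    p = prevalence
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Same calibration and false negative rate for both groups...
ppv, fnr = 0.8, 0.3

# ...but different base rates force the false positive rates apart.
print(f"Group A FPR: {implied_fpr(ppv, fnr, prevalence=0.4):.3f}")  # 0.117
print(f"Group B FPR: {implied_fpr(ppv, fnr, prevalence=0.2):.3f}")  # 0.044
```

No amount of additional data or retraining escapes this arithmetic, which is why choosing among the formal definitions is unavoidable and why, as discussed below, Ben treats them as only one part of a broader question about what an algorithm is for.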

Asking whether the ultimate purpose of an algorithm is to satisfy a mathematical formalism or rather to improve society, Ben puts forward an alternative notion of what he calls substantive algorithmic fairness—his detailed diagram of which, labeled Figure 2 in the paper, made a lasting impression on Kirsten. His approach still envisions a role for mathematical conceptions of fairness, but it repositions them as one consideration in a broader process whose primary concern is accounting for and mitigating both the upstream inequalities that exist before an algorithm is deployed and the downstream harms present afterward.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ben highlighted four he says are working at the intersections of AI, ethics, race, and real-world social impact:

11. Provoking Alternative Visions of Technology (October 5, 2022)

Guest: Daniel Susser (Penn State University)

Episode and Notes

Host Kirsten Martin is joined by Daniel Susser, an assistant professor in the College of Information Sciences and Technology and a research associate in the Rock Ethics Institute at Penn State University. A philosopher by training, he works at the intersection of technology, ethics, and policy, with his research currently focused on questions about privacy, online influence, and automated decision-making.

Daniel came on the show to talk about his short essay “Data and the Good?”, which recently appeared in Surveillance & Society.

Considering the intersection of scholarship in privacy law and surveillance studies, he notes how research in these fields tends to focus on critiques of existing technologies and their potential harms. While he and Kirsten are quick to emphasize how necessary this kind of work is, Daniel describes his paper as a provocation meant to push researchers, himself included, to at the same time put forward substantive alternatives for how technology could or should be used. He says there are understandable reasons why this doesn’t happen more often, but that absent competing visions for our technological future, we are beholden to those crafted by the technology industry.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about.

In addition to citing classic texts in science and technology studies by Langdon Winner and Phil Agre as well as The Convivial Society blog, which applies classic writing in philosophy of technology to contemporary problems, Daniel highlighted three people working to advance alternative visions of technology: 

*Salomé was also the guest for episode 10 of TEC Talks, “Moving Data Governance to the Forest From the Trees.”

10. Moving Data Governance to the Forest From the Trees (September 21, 2022)

Guest: Salomé Viljoen (University of Michigan)

Episode and Notes

Host Kirsten Martin is joined by Salomé Viljoen, an assistant professor of law at the University of Michigan Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She studies the information economy, particularly data about people and the automated systems it trains, and is interested in how information law structures inequality and how alternative legal arrangements might address that inequality.

Salomé came on the show to talk about her paper “A Relational Theory of Data Governance,” which appeared in The Yale Law Journal.

The paper proposes a new framework for thinking about how we govern the use of people’s data, so she and Kirsten begin by discussing the traditional approach, which focuses on the privacy of individual transactions and the degree to which we consent to share our own information. Salomé then explains what this approach misses: in the digital economy, data isn’t collected to make decisions about any one person. Instead, it’s used to understand populations of people with similar interests, backgrounds, and so on, and then to predict things about them, such that opting out of sharing your own data doesn’t change the inferences being made about you.

Based on Salomé’s argument, Kirsten compares putting all our attention on the handoff of our data rather than on what happens with it afterwards to the old adage about missing the forest for the trees. Salomé then details what she means by moving toward a relational theory of data governance, one that accounts for population-level impacts of big data, recognizes both its potential benefits and harms, and prioritizes the scrutiny of data flows most likely to affect vulnerable communities in disproportionately negative ways (e.g., facial recognition data).

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Salomé highlighted four:

9. It’s AI, Not a Personality Detector (Part 2) (September 7, 2022)

Guests: Luke Stark (Western University), Jevan Hutson (Hintze Law)

Episode and Notes

In this second of a two-part episode, host Kirsten Martin continues her conversation with Luke Stark, an assistant professor in the Faculty of Information and Media Studies at Western University in London, Ontario, and Jevan Hutson, an associate at Hintze Law PLLC. Luke researches the historical, social, and ethical impacts of computing and artificial intelligence technologies, and Jevan’s practice focuses on the intersection of privacy, security, and data ethics.

They came on the show to talk about a paper they coauthored titled “Physiognomic Artificial Intelligence,” which appeared in the Fordham Intellectual Property, Media and Entertainment Law Journal.

In part 1, Luke recounted the troubling history of physiognomy and phrenology. These two pseudosciences were widely discredited in the early 20th century, but their notion that people’s external appearances can be a way to access internal truths about them has made a comeback in the form of AI systems that purport to be able to perform this type of analysis. Jevan also discussed some of the troubling commercial applications in areas like hiring, education, and criminal justice where we’re already seeing this “physiognomic AI” deployed.

Part 2 picks up with Kirsten asking Jevan about the menu of regulatory options he and Luke propose in the paper to remedy the fundamental problems with these systems. Jevan describes why they think physiognomic AI should be barred completely and the existing legal frameworks through which that might happen. Kirsten adds that the gap between AI ethicists and other technologists is larger in this area than just about any other, and Luke suggests computer vision isn’t the only field of study where physiognomic impulses can still be found.

Episode Links

At the end of each episode, Kirsten asks about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Luke and Jevan highlighted three:

8. It’s AI, Not a Personality Detector (Part 1) (August 24, 2022)

Guests: Luke Stark (Western University), Jevan Hutson (Hintze Law)

Episode and Notes

It’s a TEC Talks first: two guests! Host Kirsten Martin is joined by Luke Stark, an assistant professor in the Faculty of Information and Media Studies at Western University in London, Ontario, and Jevan Hutson, an associate at Hintze Law PLLC. Luke researches the historical, social, and ethical impacts of computing and artificial intelligence technologies, and Jevan’s practice focuses on the intersection of privacy, security, and data ethics.

They came on the show to talk about a paper they coauthored titled “Physiognomic Artificial Intelligence,” which appeared in the Fordham Intellectual Property, Media and Entertainment Law Journal.

And with two guests, the conversation went a little longer than usual, so we’ve decided to break it into two parts.

In part 1, Luke starts with a quick overview of physiognomy and phrenology, two pseudosciences with racialized and gendered histories that claim people’s inner traits can be discerned from their physical/behavioral characteristics and the shapes of their skulls, respectively. Although physiognomy and phrenology were widely discredited in the early 20th century, the notion that external appearances can be a way to access internal truths has made a comeback in the form of AI systems that purport to be able to perform this type of analysis.

Jevan discusses some of the troubling commercial applications in areas like hiring, education, and criminal justice where we’re already seeing this “physiognomic AI” deployed. Luke also addresses why one human being making inferences about another—something we all do all the time and, as he points out, with very mixed results—is fundamentally different from a computer trying to do the same. He says that this is simply beyond the capabilities of artificial intelligence, with Kirsten noting that because the flaw is in the concept of physiognomic AI itself, no amount of additional data will fix the problem.

Episode Links

At the end of each episode, Kirsten asks about another scholar in tech ethics (or several) whose work our guest is particularly excited about. However, because we split this conversation into two parts, you’ll have to come back on September 7 for part 2 to get Luke’s and Jevan’s recommendations. Stay tuned. :)

7. When Privacy is a Facade for Data Extraction (August 10, 2022)

Guest: Ari Waldman (Northeastern University)

Episode and Notes

Host Kirsten Martin is joined by Ari Waldman, professor of law and computer science at Northeastern University, where he is the director of the Center for Law, Information, and Creativity. A leading authority on law, technology, and society, he studies how law and technology affect marginalized populations, with particular focus on privacy, misinformation, and the LGBTQ community.

Ari came on the show to talk about his book Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power, published in 2021 by Cambridge University Press.

Intended for both a general audience of technology practitioners and more research-focused tech scholars, the book begins with interviews meant to construct a “day in the life” of people working at tech companies—which in one instance included something called “the bro meeting”—and their thoughts on privacy. Ari says his two biggest takeaways from a sociological perspective were the limits to these employees’ conceptions of what constitutes “privacy” and a false consciousness of what their companies were actually doing (or not doing) on that front.

He and Kirsten talk about how compliance is routinely used as a way to advance the goals of industry rather than the rights of users, with the corporate idea of privacy even shaping the regulatory approach of laws like Europe’s General Data Protection Regulation (GDPR), such that companies don’t have to change their underlying models of data extraction.

Kirsten and Ari also cover parallels between privacy and diversity compliance, the problems with notice and consent, how privacy shouldn’t be confused with encryption and security, the way siloed teams hinder information flow and negatively impact the product design process, and what it would take to shift the culture around privacy.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ari highlighted seven whose scholarship relates in one way or another to the issues he tackles in Industry Unbound:

6. Lost in Translation: When Machines Learn Language (July 27, 2022)

Guest: Amandalynne Paullada (University of Washington)

Episode and Notes

Host Kirsten Martin is joined by Amandalynne Paullada, a postdoctoral fellow at the University of Washington’s Department of Biomedical Informatics and Medical Education. Amandalynne recently earned her Ph.D. in computational linguistics from Washington, where her dissertation examined the social impact of natural language processing, or NLP, the field in which computers are programmed to process human language.

She came on the show to talk about her article in The Gradient titled “Machine Translation Shifts Power,” which was a runner-up for the inaugural Gradient Prize.

Amandalynne and Kirsten begin by discussing the use of platforms like Google Translate to analyze the social media feeds of people seeking to enter the United States, a task for which those tools were not designed and one where mistranslations can have significant negative repercussions for the individuals being vetted. They then talk about translation being a means to exert power since well before the advent of machine learning and the questions raised by using technology to translate a language you don’t otherwise understand.

Their conversation also covers the implications of tech giants having access to massive amounts of natural language data, minimizing the role of trained interpreters; the notion of “paranoid reading” with respect to translated texts; and how NLP and human translators can work together to produce translations that are not just technically but also contextually accurate.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Amandalynne highlighted three:

5. Creative Speculation: Computer Science Taps Science Fiction (July 13, 2022)

Guest: Casey Fiesler (University of Colorado Boulder)

Episode and Notes

Host Kirsten Martin is joined by Casey Fiesler, an assistant professor in the Department of Information Science (and Computer Science, by courtesy) at the University of Colorado Boulder. Her research currently focuses on big data research ethics, ethics education, ethical speculation in technology design, technology empowerment for marginalized communities, and broadening participation in computing, with much of this work supported by the National Science Foundation, Mozilla, and Omidyar.

Casey came on the show to talk about a paper she authored in the Colorado Technology Law Journal titled “Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination.”

Kirsten and Casey begin with the notion of unanticipated consequences in the development of new technology and Casey’s efforts, drawing on both legal education and science fiction, to get computer science and information science students thinking creatively about problems that could arise after a design has been deployed. They also discuss why critiquing technology is not the same thing as being against it, with Casey pointing to her love of tech as the reason she’s so invested in ways to make it better.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Casey highlighted two as well as an educational initiative in computer science:

4. An Evolutionary Case for Better Privacy Regulations (June 22, 2022)

Guest: Laura Brandimarte (University of Arizona)

Episode and Notes

Host Kirsten Martin is joined by Laura Brandimarte, an assistant professor of management information systems at the University of Arizona’s Eller College of Management. Holding a Ph.D. in public policy and management from Carnegie Mellon University, she specializes in privacy and behavioral economics, including the psychology of self-disclosure and the social dynamics of privacy decision-making and information-sharing.

Laura came on the show to talk about a paper she coauthored with Alessandro Acquisti (Carnegie Mellon University) and Jeff Hancock (Stanford University) titled “How privacy’s past may shape its future,” which appeared in Science in January 2022.

Referencing work that points to the notion of privacy being present throughout human history, Laura explains that privacy management is about our ability to moderate what we share and with whom, not never sharing anything. But she notes that the strategies humans have developed evolutionarily to manage our privacy—e.g., having a conversation in hushed tones so no one but the person we’re speaking to hears—often don’t have an online equivalent and thus aren’t helpful in that context.

Laura also discusses why an overreliance on the “notice and consent” approach to privacy—typified by a website presenting users with a long set of terms and conditions when they go to use it—makes it difficult, if not impossible, for people to arrive at the best privacy decisions for themselves. Drawing on an analogy from the automotive industry and citing a lack of incentives for data holders to change how they handle that data, she and her coauthors argue for regulations that move beyond notice and consent and shift responsibility for sound privacy practices to those gathering our data in the first place.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Laura highlighted Joy Buolamwini, founder of the Algorithmic Justice League, an organization devoted to equitable and accountable AI.

3. Not the (Speech) Chilling Effect We Think (June 1, 2022)

Guest: Suneal Bedi (Indiana University)

Episode and Notes

Host Kirsten Martin is joined by Suneal Bedi, an assistant professor of business law and ethics at Indiana University’s Kelley School of Business. Suneal’s areas of expertise include intellectual property, marketing law and ethics, brand strategy, and the First Amendment. Holding a joint Ph.D. in marketing and business ethics from the Wharton School of the University of Pennsylvania as well as a J.D. from Harvard Law School, he employs multiple methods in his research to answer business-relevant questions that sit at the intersection of law, marketing, and public policy.

Suneal came on the show to talk about a paper he recently published in the Harvard Journal of Law & Technology titled “The Myth of the Chilling Effect.”

He and Kirsten started by talking about how the First Amendment and Section 230 of the Communications Decency Act—the provision that protects tech companies from being liable for user content posted on their platforms—are routinely misapplied in debates about content moderation on social media and elsewhere.

Suneal then explained the study he conducted where he asked participants to write negative reviews of dining experiences to test whether putting restrictions on what people can post online does in fact have what’s known as a “chilling effect,” or the consequence of deterring speech in unintended ways. He did find evidence of this effect, but not in terms of the substance of what people were saying; rather, it tended to make their tone slightly more positive. He and Kirsten also discussed how a lack of content moderation can have its own type of chilling effect by excluding marginalized groups who may not feel comfortable on the platform.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar (or several) in tech ethics whose work our guest is particularly excited about. In addition to saying he’s interested in seeing what happens with Twitter in light of the Elon Musk news—a topic Kirsten has been quoted on widely in recent weeks, including in a story from CNN—Suneal cited the work of George Washington University’s Vikram Bhargava, the guest for the first episode of TEC Talks (episode available below).

2. Don’t Take the Data and Run (May 11, 2022)

Guest: Katie Shilton (University of Maryland, College Park)

Episode and Notes

Host Kirsten Martin is joined by Katie Shilton, an associate professor in the College of Information Studies at the University of Maryland, College Park, where she leads the Ethics and Values in Design (EViD) Lab. Her research focuses on ethics and policy for the design of information technologies, systems, and collections, and she is a co-principal investigator of the PERVADE project, a multi-campus collaboration focused on big data research ethics funded by the National Science Foundation.

Katie came on the show to talk about a paper she recently coauthored with the members of the PERVADE team titled “Excavating awareness and power in data science: A manifesto for trustworthy pervasive data research,” which appeared in Big Data & Society.

PERVADE was created to tackle unanswered empirical questions facing researchers working with big data—such as that gathered from social media platforms—and this paper in particular was a first attempt at making recommendations based on input from three main stakeholder groups: the researchers themselves, institutional review boards (IRBs) and other regulators, and social media users.

Katie and Kirsten talked about how the ethical challenges of working with big data aren’t actually due to its bigness; rather, they arise because of how pervasive data collection has become. Katie explained how the traditional lab-based model for conducting ethical research doesn’t translate well to the big data space, discussing what researchers might instead learn from anthropologists, specifically ethnographers. Kirsten then brought up the applicability of this mindset not only in academia but also in corporate research environments.

Oh, and if you listen closely, you’ll catch a cameo from one of Kirsten’s dogs, who was determined to play tug during the interview.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar (or several) in tech ethics whose work our guest is particularly excited about. Katie highlighted three:

1. Social Media Addiction: Adding Insult to Injury (April 20, 2022)

Guest: Vikram Bhargava (George Washington University)

Episode and Notes

Host Kirsten Martin is joined by Vikram Bhargava, an assistant professor of strategic management and public policy at the George Washington University School of Business. His research focuses on technology addiction, mass social media outrage, autonomous vehicles, artificial intelligence, the future of work, and other topics related to digital technology policy.

Vik came on the show to talk about a paper he recently coauthored with Manuel Velasquez of Santa Clara University titled “Ethics of the Attention Economy: The Problem of Social Media Addiction,” which appeared in Business Ethics Quarterly. In it, they “argue that addicting users to social media is impermissible because it unjustifiably harms users in a way that is both demeaning and objectionably exploitative.”

Vik talked with Kirsten about how social media addiction raises ethical issues we haven’t seen before with other types of addictive products, using his morning cup of coffee to illustrate the distinction, and explained what he and Velasquez call in the paper the “adding insult to injury” argument. Vik also discussed how the picture is further complicated by the fact that a social media account is routinely the most straightforward way to access certain social goods—e.g., job search websites—and his ideas on possible ways forward given that social media does provide benefits to society as well.

Episode Links

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Vik highlighted Dartmouth’s Sonu Bedi, specifically his research on race-based filters in dating app algorithms.

Privacy Statement

We use the podcast hosting service Buzzsprout to publish TEC Talks. The Buzzsprout privacy policy includes information on how it approaches podcast listener privacy.

The platform/app through which you choose to listen to this or any other podcast may also make certain listener data available to podcast publishers. For more information, consult the terms of service of the app you use.