*Note: This story was published by the Notre Dame Deloitte Center for Ethical Leadership and features ND TEC faculty affiliates Corey Angst and Nicholas Berente.*
Not a day passes without a headline on artificial intelligence (AI) and its promise, or one reflecting fears that we may one day be replaced by our own artificial creations. Questions have also come from many sectors–from business leaders to scientists to the clergy–about what makes us…well, us. What makes human beings unique?
Producing creative, thoughtful work–in the form of a report submitted by a manager, an essay turned in for a university assignment, or an article of scholarly research–is critical to an informed society. So what is our role in a world where computer programs are doing our thinking for us?
Much of Western philosophy is based on the dictum, "I think, therefore I am."
New AI technology is showing us that what is intrinsically human is not our ability to think, but our ability to act, and by acting to create a better world.
Notre Dame faculty in the Mendoza College of Business offered clarity and balance to the fraught questions AI chatbots and similar technologies have raised, and their answers are reassuring. Rather than fearing that AI is going to replace us, faculty members offered an optimistic picture of a technology that can not only relieve humans of some tedious tasks but also help business leaders rethink the value of humans in the workplace and society.
It is not by thinking alone that people create a better organization or society. It is our ability to act in the world that makes us unique.
To get a better understanding of this new technology, we had conversations with three Mendoza College of Business faculty members. Professor Corey Angst is the Jack and Joan McGraw Family Collegiate Professor of IT, Analytics, and Operations (ITAO).
Professor Nicholas Berente is a Professor in ITAO who studies digital innovation.
Professor Timothy Hubbard is an Assistant Professor of Strategic Management and the Co-Director of Notre Dame’s Virtual Reality Lab.
Each professor offered an optimistic view of the possibilities AI offers business leaders based on their research and their own use of the technology in the classroom.
What is Chat AI?
First, as Professor Angst said, it is important to demystify this technology and talk about what it actually is, what it can actually do, and how it could be used in the future. The Oxford English Dictionary defines artificial intelligence as “the theory and development of computer systems able to perform tasks that normally require human intelligence.” Chat AI, the technology bringing new possibilities and anxieties front and center, is built on a Large Language Model, or LLM.
We are already familiar with LLMs. They are the foundation for technology like predictive text in our messages and emails. An LLM scans massive amounts of data to find patterns, then uses those patterns to answer questions or write messages. It can respond to emails, analyze large data sets, and write code. It can be prompted, with limited success, to write song lyrics and poetry. Fast food restaurants are considering it for taking orders, and the companies that power our search engines are using it to return more relevant results. Large Language Models are good at consuming data at a rate humans could never match. This ability to consume and organize information can be helpful for businesses, and it could free individuals to pursue more creative and meaningful activities.
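The core idea of pattern-based prediction can be illustrated with a deliberately tiny sketch. This is not how a real LLM works internally (real models use neural networks trained on billions of words); it is a toy bigram model, included only to show what "finding patterns in text and using them to predict the next word" means at its simplest:

```python
from collections import Counter, defaultdict

# Toy illustration: scan a (tiny) corpus for which word follows which,
# then predict the most common continuation. Real LLMs do something
# vastly more sophisticated, but the predictive principle is similar.
corpus = "the cat sat on the mat the cat ran on the grass".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # count each observed word pair

def predict_next(word):
    # Return the word most frequently seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Scaled up from a twelve-word corpus to a large fraction of the written internet, and from simple counts to learned statistical representations, this predict-the-next-word mechanic is what lets chat systems produce fluent emails, code, and answers.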
There are areas of obvious interest to business:
- Searching datasets
- Searching academic and industry papers
- Writing messages and emails
- Drafting press releases
- Writing code
- Creating technical manuals
But there are limitations to the technology that should not be overlooked:
- It can provide incorrect information that seems legitimate
- It can create faulty code with ‘bugs’
- It cannot fully analyze data sets, so peculiarities in data can be dismissed, obscuring the picture
- It can provide information riddled with misogyny and racism
While there are other types of AI, both real and imagined, the professors we interviewed confined their discussion to Chat AI because the technology is new, real, and set to impact how we do business.
Chat AI for Business
We asked these researchers to discuss the uses, and dangers, of Chat AI for business and how they personally use the technology now. While each tackled the questions differently, there was an interesting continuity to what they said. Chat AI is a technology, and like any technology, it can be used for good or ill. Its impact on society will come down to the skill, vision, and values of the human beings using it.
The value of Chat AI will only be as great as the human wielding it; humans therefore have an important role in bringing out the value of AI chat technologies.
In looking at the search capacities of Chat AI, Professor Berente points out two main issues: the validity of the answer and the source of the answer. Traditional web searches allow users to evaluate where information comes from and to pull information from multiple sources to test knowledge claims. These technologies don’t allow for that, so, as Professor Berente says, “It is up to humans to verify the quality of the output.” He believes there are significant business uses for Chat AI, but “it is always up to humans to validate the answers they get” for the end result to have value. Professor Berente adds, “AI chat tools are simply tools. As with all tools, it is the human that must act and it is the human that ensures the appropriate use of the system.”
If skill-level and human engagement are important, what can businesses do to ensure they are getting the best out of AI technology and supporting their workforce?
Professor Berente believes it is imperative that managers educate themselves on the technology and create best practices for their businesses. He said that, supported in part by IBM, Mendoza College of Business is working on “a variety of materials to support the ethical development and deployment of these AI technologies in organizations.” Professors Berente and Angst are also studying how Chat AI technologies can be applied in manufacturing settings. Their project focuses on ways Chat AI can help skilled workers—such as welders and inspectors—share knowledge and learn just by talking to the system. Professors Berente, Angst, and Hubbard are all using Chat AI in the classroom in order to equip students—the next generation of ethical leaders—to understand its uses, limitations, and drawbacks. As numerous news outlets have shown, Chat AI technology has incredible power to relieve drudgery and increase worker productivity. But it will be up to us to ensure those benefits are equitably distributed throughout society.
This will require business leaders to use the new technology in line with their values.
Perhaps it is feeling, not thinking, that actually separates human beings from our machine creations.
Several articles have made the case that chat AI technologies cannot make value judgments. They can’t make ethical decisions based on reason. They can’t think about the real-world impact of actions on human beings, other creatures, and the environment.
It is our job, as humans, to make these calculations–and to do this well will require strong values and ethical leadership.
Professor Angst points out the dangers that come with this technology. He calls it a “double-edged sword.” He notes the fears around the technology, though sometimes overblown, are based in legitimate concerns about privacy, protection, and job security.
At its best, Professor Angst says, AI will “allow people to move out of menial, repetitive tasks in favor of more gratifying work.” But human progress is not inevitable, and human history is littered with examples of powerful technology that enhanced productivity while also deepening human suffering and drudgery. For the benefits of AI technology to be equally shared, business leaders must make ethical decisions based on values of equity, inclusion, and shared benefit.
To harness the power of AI, we must have a clear vision of the society we want to create.
“The goal of all technology,” says Professor Hubbard, “is to make the world a better place.”
What kind of world do we want to create? What makes the world better? As an experiment, to explore Professor Hubbard’s remarks, I asked ChatGPT both of these questions. I got bland, pleasant answers filled with buzzwords. Empathy. Kindness. Community. Equality. Nothing wrong with those words. But they won’t come into being because they were fed into a computer and came out the other end. They will have to be created not only through thoughts and words but through deeds and actions. It will take human hands, acting beyond internet buzzwords, to decide on and create a better world.
René Descartes taught us to believe, “I think, therefore I am.” But we have now learned that machines can think. They can process. They can examine. They might even do it better than us. But machines can’t act. To use technology to create a better world will require human action–action based on shared skills, values, and vision. Action that lives up to the promise of using technology to make the world a better place for all, not for some.
Perhaps in the 21st century people will say, “I act, therefore I am.”
Looking to the Future
All of these professors are excited about the future uses of AI. The lifting of human drudgery, the opportunity for people to engage in more meaningful tasks, and the use of technology to enhance business while uplifting workers are all exciting prospects.
Professor Hubbard said the number one feeling around the technology is excitement. This is something new. Something that can create change. We’re on the cusp of uncharted territory. And humans love to explore.
Despite legitimate concerns over the technology, there is the possibility that AI will help us reconnect to our own humanity. Professor Angst had students ask the AI chatbot philosophical questions about life. They were “flabbergasted” by the detailed responses they received. But many agreed, “There was something lacking from the response.” Professor Angst described it as the “element of the human touch that is absent.” It is interesting that Professor Angst used the word touch. Something uniquely embodied. Something that our most advanced AI cannot do. It cannot reach out. It cannot touch. No matter how much knowledge it has, it can’t take action. As humans, we can. And we should.
For further reading on AI:
- NDDCEL Faculty Fellow Chris Adkins and a team from Deloitte describe AI ethics as "a business imperative for boards and C-suites."
- ND experts discuss the opportunities, concerns, and impact of AI
- What makes employees trust vs. second guess AI? (Harvard Working Knowledge)
- Also check out the Notre Dame Technology Ethics Center, directed by Mendoza College of Business Professor Kirsten Martin
Consider these tech ethics case studies for use in the classroom or the workplace:
The Giving Voice to Values (GVV) collection of case studies, pioneered by NDDCEL Advisory Board member Dr. Mary Gentile, is a cutting-edge curriculum used in over 1,400 educational and business settings on all seven continents. GVV focuses on ethical implementation and asks: “What if I were going to act on my values? What would I say and do? How could I be most effective?”
Eight new tech ethics cases have been added to the collection, including one written with NDDCEL Faculty Director Jessica McManus Warnell: "Toxic for Teens: Navigating a Career in the Social Media Industry."
Selected faculty publications:
Angst, Corey. "How IT Saved Higher Education During the Pandemic." (With Yoon Seock Son, Jiyong Park), LSE Business Review, 2023.
Berente, Nicholas. "Rethinking Project Escalation: An Institutional Perspective on the Persistence of Failing Large-Scale Information System Projects." (With Carolina Salge, Venkata Mallampalli, Ken Park), Journal of Management Information Systems, 39, 2022.
Hubbard, Timothy. "How to Cross the Uncanny Valley: Developing Management Laboratory Studies Using Virtual Reality." (With Michael Villano), Research Methodology in Strategy and Management, in press.
Originally published by ethicalleadership.nd.edu on June 22, 2023.