The transcript has been formatted and lightly edited for clarity and readability.
Jenna Liberto, Host:
We're in Hesburgh Library, and we're going to talk to students about AI.
Students:
I’m Diego, and I’m a graduate student.
Shang Ma, and I'm currently a fourth-year Ph.D. student.
I’m Anna, and I’m a senior.
My name is Lauren Eglite, and I am a sophomore.
Drew, and I’m a senior.
I’m Carson Giesting. I’m a sophomore.
Jenna Liberto:
What would be your biggest concern when it comes to AI?
Drew:
The impact that it has on art and creative expression.
Carson Giesting:
It taking people’s jobs is probably a big one.
Diego:
I don’t think I have any concerns, particularly.
Lauren Eglite:
Getting too dependent on AI can be bad because, you know, it takes away the process of really thinking and putting together ideas in your head.
Anna:
You just become slower when you’re on your own or when you don’t have it.
Shang Ma:
And my research is all about AI.
Jenna Liberto:
Really?
Shang Ma:
Scammers are using AI to generate new scams, and we are thinking of whether AI can actually help to defend against the scammers.
Jenna Liberto:
I mean, that’s the good and the bad, right?
Shang Ma:
Yeah, two sides of the coin. Yeah.
Diego:
It helps when professors allow you to use it but give you guidelines on how to use it. You avoid kind of an ethical dilemma. But I could see how in other scenarios, maybe, if there are no parameters on how to use it, then yeah, it could be a little concerning.
Introduction:
Welcome to Notre Dame Stories, the official podcast of the University of Notre Dame, where we push the boundaries of discovery, embracing the unknown for a deeper understanding of our world.
Jenna Liberto:
Depending on who you ask, it’s either the biggest threat or the biggest opportunity of our time. AI. How should we use it? How does it fit into everyday life? Is there an ethical line we should be aware of?
Philosopher Meghan Sullivan has guided approaches to these kinds of questions for years. Students clamor to take her wildly popular course, God and the Good Life. Now, some high-profile stakeholders are asking Meghan to help the tech industry navigate the ethical complexities of AI.
So, how would a philosopher approach AI? To start, she’s developed a framework that puts humans at the center. And despite the concerns shared by many, Meghan remains hopeful about the future of AI.
Meghan Sullivan, the Wilsey Family College Professor of Philosophy; Director, Notre Dame Ethics Initiative:
For the past 14 months, Notre Dame has been getting very, very serious about what we have to offer conversations about ethics and AI, and how we can root our framework in our Catholic mission—our Catholic identity—while still putting ideas out there in the world that are going to be accessible to everyone.
The DELTA framework is the University of Notre Dame’s work to try to develop and shape that conversation. DELTA is an acronym: Dignity, Embodiment, Love, Transcendence, and finally, Agency. We have the power to make moral choices. That is one of the most fundamental moral virtues that we have. As AI makes more and more decisions for us, we have to protect the space for the kinds of decisions that only a human conscience should make.
Jenna Liberto:
So when you talk about these developments, these trends, the technology, there are certainly some things that are concerning, even frightening. Yet the way you talk about it has such resolve and optimism. What gives you that outlook as you look at your work in this broader topic?
Meghan Sullivan:
We've navigated huge transitions before as a human race. I mean, you think about—think about the 1940s. Think about 1945. On August 5, 1945, the vast majority of people had no idea that we had the power to harness atomic energy or to build atomic bombs. You know, that was a huge discovery.
When that bomb dropped on Hiroshima on August 6, we suddenly realized, Oh my gosh, we have this incredible technology at our fingertips now, and it is capable of really terrible things. But it can also be harnessed for the good. I am so moved when I go back and read the writings of Catholic thinkers in the 1940s and the 1950s who were living through that atomic transition and realize that there were people of faith who were right there in the middle of the debate, thinking, Look, as with all of these opportunities, technology is morally neutral. It's how we use it. It's how our consciences develop that determines what's going to happen next—and hope comes from people who have that kind of ethical vision.
I think AI is similar to the discovery and harnessing of atomic energy in that respect: it's going to give us incredible power to do things. And it's up to us to decide, ethically, how we're going to harness that power. I really think that communities like Notre Dame—like the Catholic world—can play a vital role in helping the rest of society ask, OK, what's the vision of the good society and the good life that we could build with it?
Jenna Liberto:
You’re touching on something I want to dig a bit deeper into, and that is Notre Dame’s role as a convenor in these conversations. And your work in this area has garnered some high-profile interest already. Can you tell us who else is involved in this conversation? And then, how are we at the University partnering with them?
Meghan Sullivan:
We had an absolutely amazing gathering back in September, where we hosted 200 people on Notre Dame's campus, coming from three different groups. First, we gathered leaders from the Vatican, Catholic leaders, and Protestant and Evangelical Christians who are really interested in exploring the particular contours of a faith-based approach to AI ethics.
Second, we invited educators, from college professors at top universities to people who are working with kindergartners on reading using AI tools. And finally, and this meant a lot to me, there were leaders from major tech firms who came to join us and, most importantly, to listen: to listen to these people from faith communities and to these educators who have visions about what technology could do positively for their lives, but who also have some serious concerns about the kinds of ethical questions this new AI is going to raise.
Cristos Goodrow, Vice President of Engineering at YouTube:
My name is Cristos Goodrow. I'm a vice president of engineering at YouTube. I think it's great that Notre Dame would be involved in this, not just from a convening perspective but a leadership one. Faith is a little bit of a trickier thing because everyone brings their own faith tradition to this. But I do think that, at least in the conversations I've had with people and the work that we do, we have a lot of respect for the fact that people come with faith traditions, and that these should inform our decisions.
Paul Taylor, Co-Founder and President, Bay Area Center for Faith, Work, and Tech:
Paul Taylor. I'm the co-founder and president of the Bay Area Center for Faith, Work, and Tech. I mean, we just feel really passionately that the world is at a point where big decisions are being made and big changes are happening. And we think that Christians ought to have a voice in that conversation, and that we ought to bring our understanding of what it means to be human and what it means to steward well the resources God has given us. If we fail to bring those pieces to the table in the conversation, then society's going to be the worse for it.
Paul Scherz, the Our Lady of Guadalupe Professor of Theology:
Paul Scherz, Our Lady of Guadalupe Professor of Theology. This is one of the major problems that is confronting society and the Church, right? So, if we’re not there, who else is going to take up those mantles? And we really have a unique set of abilities and skills and scholars who are here who can serve as a nucleus, and then draw other groups around us.
Linda Rand:
Linda Rand. I actually just retired from Google. So I think when you're thinking about interacting with generative AI, you have to recognize that there are going to be different versions of principles. Which ones do you want to interact with, and what filters do you want to put on that? And I think the DELTA framework can be an important filter for the development of the technology moving forward.
Meghan Sullivan:
One of the things that I also really appreciated about this meeting was that we had some really high-level talks. I mean, we had discussions from engineers on machine learning. We had discussions about new advances in AI. And those were balanced out with opportunities for prayer and for meals together in person.
We even went and saw The Tempest, this incredible Shakespearean play about what happens when a person harnesses the power of magic and then realizes that they're misusing it and it's ruining their life. Everyone left that week of the Notre Dame Faith, Flourishing, and AI Summit with the feeling that not only was there hope for how we could develop AI ethics, especially hope in this DELTA framework, but also that they had friends and access to people who were going to help them do the work across industries and across groups.
Jenna Liberto:
What do you see as Notre Dame’s most important role in that conversation?
Meghan Sullivan:
I think one of the central questions that AI is raising for many people is: What is it to be a human being? What makes humans special? If you were kind of lazy in the last century, maybe you thought that what makes human beings special is that we have the ability to solve problems, or the capacity to be creative.
Well, oh my gosh. Now we have apps on our phones and on our computers, these large models that can solve problems faster than any human mind can. They can be creative in ways that astonish and surprise us. If we thought that was what was really special about us, we have a big competitor right now in the form of this software.
But of course, at a Catholic university like Notre Dame, we have never believed that what makes a human person special is just their ability to solve problems or do work or be productive or earn money. What makes someone special is something far deeper than that.
And I think a place like the University of Notre Dame—a Catholic university—has a real opportunity to enter this discussion where a lot of people are suddenly wondering, “What is the rock-bottom essence of a good human life?” And say, “You know what, we’ve got 2,000 years of tradition of thinking about this question that we can now offer to the conversation.”
Right before we came into this interview, my sister-in-law texted me a picture of my soon-to-be 4-year-old niece, whose birthday is next week, and it's her just sticking her tongue out with this completely crazy face. But I think one of the most important things that makes us human, and you see this when you watch the lights come on in little kids' minds, is not only that we have the capacity to make cool things, but also that we have this spark, this beauty inside of us: the image of God that's silly and lovable and just calls out for love.
Oftentimes, we have a hard time seeing that spark in everyone. But when I think about what makes us human, I think about these people that I love so dearly and the things that I love about them. And so, there's something about being human that's a matter of being weird in a way that AI is never going to touch.
Jenna Liberto:
Do you think about her? Certainly, I’m sure you think about your students when you do this work, but do you think about her—the youngest generation that’s coming up with this technology?
Meghan Sullivan:
Oh my gosh, absolutely. So, this is my little niece. Her birthday, again, is in a week. I was talking with her on the phone last week, and I asked her what she wanted for her birthday. And she told me: a red or yellow robot. At first I was like, "Oh, no." I am going to keep you away from AI and from robots as long as I possibly can, because little kids need opportunities. They need what my friend Andy Crouch calls "magic-free spaces" to grow up: spaces where they can try, where things can move slowly, and where they can learn how to relate.
So I was like, I'm not buying you a robot. Then her mom clarified that there was this particular little toy that has no AI in it—it's just a little yellow piece of plastic on wheels that rolls around. That's what she really wants. And so then I was very happy to buy it for her. I got her the yellow one, Notre Dame colors, not the red one.
Jenna Liberto:
Oh, that’s great.