Read the transcript
The transcript has been formatted and lightly edited for clarity and readability.
Nitesh Chawla:
We can be worried or scared about, “Oh my goodness, is it going to take over the world?” and everything else, or we can ask ourselves, “How can we use this gift of technology to make an impact on society?” And that’s what we’ve been doing.
Jenna Liberto:
Nitesh Chawla is founding director of the Lucy Family Institute for Data & Society. At a time when AI is dominating the national conversation, Nitesh is adamant that this emerging technology can not only be used for good, but the human impact is where the conversation should begin.
Introduction:
Welcome to Notre Dame Stories, the official podcast of the University of Notre Dame. Here we take you along the journey where curiosity becomes a breakthrough for people using knowledge as a means for good in the world.
Jenna Liberto:
Well, Nitesh, it’s so great to be here. Thank you for hosting us and thank you for the conversation we’re about to have.
Nitesh Chawla:
Thank you. Thank you for inviting me to be a part of the conversation; I’m looking forward to it.
Jenna Liberto:
So our series is called Notre Dame Stories, so we do like to start by asking our guests to share a little bit about their Notre Dame story, especially what may be most meaningful to you as you made your journey here.
Nitesh Chawla:
My Notre Dame story could be summarized in one word: “gratitude.” And the reason being, before I came to Notre Dame, I was in banking, and I had this romantic itch about academia, and I needed a place that would allow me to explore whether that romance was real. And Kevin Bowyer, who was the chair of our department, allowed me to come here for a year, right, on a sabbatical, and for that I’m grateful. And that allowed me to get to know this beautiful place that we call home and experience the gift of this University. And then I started my tenure-track position here in 2007, so, since then, I’ve just been grateful to be here.
So, if that year of exploration hadn’t happened, then it would have been a romance left unfulfilled, so . . .
Jenna Liberto:
But instead, it became a love, it seems.
Nitesh Chawla:
It became a love. It is. So “gratitude” and “love,” maybe.
Jenna Liberto:
Good. Well, we’re glad you’re here. And you are the founding director of the Lucy Family Institute, and we’ll get a little bit deeper into the work that you do here and what the Lucy Family Institute is about, but I’d like to start with a conversation on AI because it’s captivating all corners of our society.
Here at Notre Dame, it’s what everyone’s talking about, and there’s a healthy bit of skepticism, I would say. But you seem very positive, very optimistic, about the potential of AI. So where does that come from?
Nitesh Chawla:
So, in general, I’m an optimistic person to begin with. And what we are looking at with AI—we’ve been doing machine learning and AI-related or adjacent things for a long time, and whatever AI we’ve experienced today, even ChatGPT, at its core is machine learning, right? When I graduated in 2002, my Ph.D. dissertation was learning from extremes of data—size and imbalance—and then in banking, we were doing machine learning. So we’ve been doing and experiencing the fruits of this technology for a couple of decades now.
But now, with what we are seeing with the euphoria around AI, I think it’s fascinating because now it allows us to scale what we do in terms of development of the algorithms or the technology that we do as computer scientists, but also think about how we can become more inclusive in the reach of these technologies. Yes, we can be worried or scared about, “Oh my goodness, is it going to take over the world” and everything else, or we can ask ourselves, “How can we use this gift of technology to make an impact on society?” And that’s what we’ve been doing.
So that’s what my reason of optimism is: Shut your ears to the noise, focus on the good that can come, because it’s here to stay. It’s not going anywhere, right? Folks like McKinsey talk about a $6 trillion industry. The investments have been made, so we could be worried and everything else, or we could ask ourselves, as a University, where should we invest and where should we partner to ensure that we are being as inclusive, as safe, as responsible as possible with this technology that we have.
Jenna Liberto:
So a healthy skepticism maybe has its place, or, at least, some caution?
Nitesh Chawla:
I think, yes, there is caution because we may not get it right if we don’t actually take leadership and, you know, create a space where we are being more inclusive in our dialogue and thinking. Like yesterday morning, I was—at 6:45 a.m.—speaking on a panel for Global AI Data Science in Kenya. I joined in virtually with a couple of colleagues from London, UK, and they had four of us on the panel. And one of the concerns, even in the panelists’ questions coming to me, was, “What about us?” right? Then someone shared the example that in Swahili, they said, a woman could say that a child is not “in play,” but that means the child is at risk, whereas ChatGPT responded, “If the child is not playing outside, do this.” So even if you’re interacting in English, the context of the local language or the way it’s said has to be captured. So I feel those are the places where, if we don’t capture that, if we don’t include those voices and challenges, we could then, you know, create some risks.
So that’s where I feel there has to be a healthy dose of skepticism. We can’t just be all gung ho. As much as I’d like to think computer science and AI is going to bring world peace, it’s not. But how do we be more deliberative, inclusive, responsible in our thinking and work?
Jenna Liberto:
It sounds like you look at AI as a tool, and just like any tool, there are many applications for how you use it, right? So a year ago, I know you were involved in a research project that used AI to track chemotherapy complications. Can you talk about that specific example—using AI for a situation we can all put ourselves in, right? A family member going through cancer treatment—chemotherapy—and AI being used to enhance that experience.
Nitesh Chawla:
Absolutely. So there was a project that we started several years ago. It’s with a hospital, the Hospital Infantil de México Federico Gómez, and it was also a project that arose from serendipity, right? We were there for a meeting and I came across these challenges, and I couldn’t get the challenges that those children and families were facing out of my head—that, you know, we had to do something, if we could. And we then dove in deeper and we realized that there was no way of risk-triaging the children who arrive at the hospital using clinical data—a gift we have in the United States and other countries, where we can risk-triage using electronic medical records. And one of the reasons was that they didn’t have any electronic data; everything was sitting in a paper file.
So there was no way for them to understand, after a treatment, what’s the risk of septic shock? What’s the risk of bacteremia? What’s the risk of death? And the outcomes were not very good if you compare them with the outcomes that we may experience here, for example. So that’s when we sort of worked with the physicians and the oncologists and infectious disease experts at the hospital in Mexico to first curate a digital version of the data—everything was in paper files, right?
So we worked with the medical interns. We did that, then we developed risk-based models, and then we could predict, based on certain clinical conditions a child may present with, what the outcome risks are. And if we could predict that, then the hospital and the clinical partners could take appropriate clinical actions. So that’s a great example of AI where there was a need, and we could have ignored the need, but, you know, we’re Notre Dame. How could we step away from a challenge like that? And then: first, how do we get the data? Make sure the data is proper. Second, how do we build the algorithms? Make sure we build the right models. And then, we created our AI model and made predictions, and we analyzed those and published in a top journal.
So my student Jen Schnur—while I’m sitting here talking to you about it, it was her Ph.D.’s work, right? It was her commitment that “this sounds like something I can commit myself to, even though it’ll take me longer than anyone else to get this work out because we have to go the whole nine yards.” . . . So that’s what the project was about.
Jenna Liberto:
And you just got back from visiting that same hospital, right?
Nitesh Chawla:
Yes.
Jenna Liberto:
You talk about seeing a problem that Notre Dame can’t walk away from. There’s this very human element: yes, the tool is AI, but the impact you’re talking about is these very human things that Notre Dame embraces.
Nitesh Chawla:
Absolutely. And, in fact, I was in Mexico City yesterday—I came back last night. It was a quick day-and-a-half visit, but we were visiting the same hospital. And to do these works, right, it’s not just the AI and science; we have to earn the trust of the partners. They have to believe in us, that we are there for the right reasons and we won’t just start something and leave it halfway. So we have actually digitized all their records, the paper files. As of yesterday, 62,500 documents that were sitting in their basement are now sitting as digital records.
We have built an application where they can now electronically enroll patients, which they could not do before, and we have also now started asking questions about social determinants of health—what are some of the challenges these families may be facing? Let’s say we clinically give them the right treatment but, if there’s malnourishment, if there’s violence that the child experiences, if there are other risks that the child and the mother experience, can we be aware? If there’s a decision now that the hospital has to take, the decision is both clinical and socioeconomic, right? So now we have about 384 families. Yesterday we got a commitment to enroll about 500 additional families by January.
It’s been four years in the making, but now, as we think about it, you know, the research that we do is making an impact, and now that impact is informing our research. It would be the longest study of its kind—forget Latin America, even in the US or other places, nowhere has anyone captured all these kinds of data and observed a child and their health over a period of time like this. So now we will have not only the publication impact but the societal impact.
And the beauty of this is, we led with the societal impact and backed into the scholarship impact. And if you think about when, you know, [University President] Father Bob or [Provost] John McGreevy talk about the University mission—a global Catholic university, distinct from or on par with other private universities—the societal impact is us being a global Catholic university. The paper that we will publish is us being distinct from or on par with the best private universities in the world. That’s what our mission calls us to do.
Jenna Liberto:
And research, bridging the two, which is so powerful.
Nitesh Chawla:
Bridging disciplines is in his inaugural address—Father Bob said “bridging disciplines”—so this is where we are not only bridging the disciplines of social sciences, computer science, and medicine, but also bridging cultures and people. You know, that’s a path to accessible, affordable education, right? Where it may not be students, but accessible, affordable access to the knowledge that we are creating as a University.
Jenna Liberto:
What you just summarized for us, how do you see that impacting our students here on campus and the graduate students that are part of this work?
Nitesh Chawla:
These are the leaders of tomorrow. So at Notre Dame, we talk about the whole person, holistic education. It’s not just something they’re going to be taught; they’re going to be learning by doing and thinking, right? They’re going to be using their hands, their minds, and their hearts to tackle these issues. That’s the kind of student experience we’re going to give.
Yesterday morning, I was speaking to an undergraduate student joining a research program—she’s going to work on this project and she was thrilled. Her parents came from Mexico. She’s a senior now in economics and she wants to do some data and AI work, and she was excited—she’s like, “I can bridge the cultures and it’s meaningful to me”—as she lit up with the joy of this project, right? So I think it shows our students that they can get an amazing education, but also be socially responsible and engaged as part of that education. Whether it’s a graduate student or an undergraduate student—and, in my opinion, for graduate students, this becomes a recruiting toolkit.
Like, if I could tell a student, “Hey, you have amazing potential for research; you could go to place A or place B. Come to Notre Dame because you not only will do amazing research, publish great papers, [create] amazing scholarship, but also leave an impact. Are you ready for that challenge?” And when they come in with that mindset, you know, I can just sit in my office and do nothing.
Jenna Liberto:
But you’re certainly not doing nothing, and that’s what I want to talk about in the next part of our conversation. Let’s talk about the Lucy Family Institute for Data & Society. You gave us a beautiful example already, but how would you summarize the work that happens here?
Nitesh Chawla:
I think the work that happens at the institute—we’ve been very deliberate that, first, we have to be guided by the mission of our University. Second, we have to be domain-informed and data-driven. And then, we do data and AI convergence research to tackle these challenges. It’s a mouthful, of course, because we academics don’t know how to simplify things, so I’ll try to break it down a bit.
So when we say domain-informed: if you’re addressing, say, a grand challenge, whether it’s in health care—the Mexico example I gave—or pick any other, or in the sciences, in chemistry, it’s foolish of me, as a computer scientist, to assume I understand that domain, right? So I have to be able to listen to what that domain is telling me, what the real challenges are, and then I have to break it down in my computational mind and go, “OK, this means I need this. This means I need to build this technology or algorithm,” etc. And by data and AI convergence research, I mean that we will only be able to address some of the vexing, wicked problems in society if these disciplines come together. Otherwise, we are simply doing incremental research.
For us to take on challenges that . . . Imagine if two disciplines become one. [The challenge] could be, say, in social sciences or in computer science or in physics, and they go, “We have no idea how to solve it.” But imagine if we can. Imagine if we get there. Oh, what an impact it will be. That requires courage. That requires conviction. That requires us to be adventurously collaborative, and that’s what Lucy is about.
Jenna Liberto:
And that’s a broad range of opportunities, so where do you start? That’s my next question. It seems like you’ve been very intentional and strategic as you do your work across disciplines. As far as where to start, using the strategic framework of the University as a guide—you know, we talk about ethics, democracy, poverty—is that your strategy? And can you talk about that formula for success, starting with the strategic framework?
Nitesh Chawla:
The strategic framework challenges us to think about taking on some of the biggest challenges of our generation, whether it’s poverty, democracy, health and well-being, or thinking about technology from an ethical perspective. And . . . at the same time we also, as a University, have to think institutionally.
So, as we think about what it means to think institutionally: how do we capitalize collectively on the investments we are making as a University? Because these are important problems to be solved, right? And how do we figure out how to work together? Are there data and AI innovations that we can take to poverty applications? Are there things that we can do to understand governance in society? Because data is ubiquitous. Data is in our stories. Data is—you know, if you’re doing this interview, you’ll create a transcript. That’s data that we can capture, and a year later, the next version of ChatGPT will have our website incorporated as content. So data is ubiquitous. How do we figure out where we need to be? Where we can be complementary to each other? Where we can be collaborative to have a bigger impact?
So it was—it is—a strategic decision for the institute, but it’s also the right decision.
Jenna Liberto:
Can we drill down even more into . . . what data looks like when we’re talking about an issue like poverty? So, that could be numbers that tell impact. That could be demographic information, right? Can you give us some tangible examples? When I think of poverty, I might not go straight to data, so how do you start with that as your foundation?
Nitesh Chawla:
So we have to think about poverty as multidimensional, right? So, take a dimension that I have personally worked on—this could be about child welfare and child malnourishment. In that project we realized that there was data being captured in electronic records—biometrics of a child and everything else, what kind of food interventions could be provided, etc.—and it’s related to both health and poverty. But then, what we realized is that the social workers were capturing stories about the children and their interactions in their notebooks. That [data] wasn’t anywhere. And we were sitting in a room with these social workers and they were very emotional when they were telling these stories, and then I looked around and I said, “But this is not in the data that we are seeing.”
The data was all about their economic conditions, their household information, their clinical information, and then the food interventions were given based on those factors. And understanding the social worker data led us to identify about 77 different psychological markers. If a social worker showed more empathy or was more connected, addressed the social anxiety of the child first, understood the dynamic between the parent or guardian and the child . . . There were a lot of these factors that came into play, and capturing that data made [the interventions] much more impactful—hey, not just this food intervention, but spend some time talking to that child, spend some time understanding them, through the power of social workers. And that led to an amazing publication for us, as well. But that’s an example of what we may do in terms of data.
There are others in the institute who are collaborating with the Poverty Initiative on building this massive ARCOS database of all the opioid prescriptions that have been given. But again, the institute built up the data platform. The economists, the social scientists are asking the domain questions there, as well.
And then, some of my colleagues are working with the Poverty Initiative or others on agent-based simulation: what if we were to implement certain policies? What dynamics would change at a larger scale if we could create almost like a digital twin of a city? Could we say, “OK, these are the parameters—what if we shock certain things based on certain economic conditions?” What would be the impact of that? So those are the kinds of things that folks are looking at.
Jenna Liberto:
I think that’s such a beautiful example, talking about capturing these human emotions of the social workers and having that be data that’s used for good. I’m curious . . . can you give us some similar examples when it comes to ethics? If you think of data as very formulaic and tangible, ethics seems kind of the opposite.
Nitesh Chawla:
Yes, so I think . . . it’s not; it’s part of the fabric of what we do, right? What we often think about at the institute is, we have to be responsible, inclusive, safe, and ethical, right? By “responsible” . . . are we making sure, for the tools and algorithms that we develop—at the beginning, we were talking about what bad could come out of them, right?—what guardrails can we put up? Are we understanding the biases that may be in the data?
By “inclusive,” what we mean is, are we making sure that everyone is represented in this journey? By “safe”: the systems that we’re developing, are they going to behave? Are they behaving as we expect them to? We’re doing a lot of work in chemistry, etc., where we think about dual use of our technology—where, if it falls into the wrong hands in the biotech sector, folks could turn their garage into a chemistry lab. So it’s something that NSF has charged us to think about: hey, can we figure these things out?
And then “ethics,” that sort of grounds us: how are we approaching these problems? There are normative elements to it. What I often like to think about is the theory of beneficence. What the theory of beneficence says is, can we “do no evil, do no harm”? But at the same time, can we weigh the maleficence and the benefit to society? The negative and the positive for society? And there will be a decision that will need to be taken.
At what point do the positives outweigh the side effects, right? When the FDA releases drugs or vaccines or what have you, you know, there’s always this thing they say: “Oh, take this magical drug, it’s going to change your life.” And then the fast-talking voice comes on, “blah blah blah blah blah,” right?
Jenna Liberto:
Disclaimers.
Nitesh Chawla:
The disclaimer. Because there’s an acceptable risk . . . and that’s the notion of beneficence, right? What’s the acceptable risk? We have to at least think about it. . . . We could be completely normative about some principles or we could be a bit pragmatic about those principles. And I think that’s where the dialogue with the Ethics Initiative becomes very interesting.
Jenna Liberto:
You alluded to, earlier, this challenge that we’ve been given by our University leadership: opportunity to think as an institution. And certainly, one part of that is collaboration. I’m wondering what your experience with collaboration has been like at the University and how it influences how you do your day-to-day work?
Nitesh Chawla:
So when I left banking to come to academia—as I said, you know, the story of gratitude. Gratitude to the University, and super grateful to my wife for believing in that “hey, let’s go leave everything” [idea]. We were looking for a bit of excitement in our lives, so we said, let’s leave Toronto and go to South Bend, the more happening city of the two.
Jenna Liberto:
You found it.
Nitesh Chawla:
We found it, right? Of course, there’s not a better city than this. So we came in 2007 and, you know, I was trained in more traditional, theoretical machine learning in my Ph.D. And I realized on day one when I came to the University that I wanted to be courageous. I wanted to take risks as an academic, think about the next frontier of challenges. And I realized we could only do that by building networks of collaboration.
So a year later, in 2008—much against the advice of my chair and my dean at that time—I decided that network science was going to be a thing. Learning on graphs and connections. I had not done any prior work in that, but there was a feeling that it was going to be huge. And then health care, as well. So we added two new directions to my research. And we created this—where we are seated here—it was the Interdisciplinary Center for Network Science and Applications back in 2008. And, in fact, if you look at the carpet, we have these . . . these circles are the nodes. Every room has a node.
Jenna Liberto:
OK.
Nitesh Chawla:
It’s like a network. And then every edge goes out, so you’re connecting every room of scholarship. This is circa 2008, and [I] worked with physicist Zoltan Toroczkai, sociologist David Hachen, and applied mathematician Mark Alber, and we said, “Let’s create this center.” And then we convinced the University to give us some support, and we said, “OK, we’re going to write an NSF grant. If it works, we have arrived. If it doesn’t work, well, we tried.” And that was, literally, the interdisciplinary journey.
And it was . . . I didn’t know Zoltan or David, but I would just go and talk to them. “Hey, what do you think? What do you think?” I was curious. And it was possible here. It wasn’t a place where I couldn’t just reach out. So I think the size of the place makes building these networks easier. And I used to sit in Fitzpatrick [Hall]. Now, I don’t even have an office there—they kicked me out—so I sit here in Nieuwland [Hall], but it was great. We used to hang out together; our students used to be mixed together.
So back in 2008–09, we used to have biologists, physicists, sociologists, computer scientists—Ph.D. students hanging out together. We were sitting in an interdisciplinary space where faculty from different disciplines sat together. So that’s how we started the Interdisciplinary Center for Network Science, which doesn’t exist now because Lucy has taken on bigger challenges and aspirations from that perspective.
Jenna Liberto:
So maybe that spirit of collaboration existed a bit from day one for you. But have you seen it grow?
Nitesh Chawla:
Oh, it has grown exponentially. It is unbelievable what this place is now, right? Back then, the individual had to take the initiative, right? It took almost a force of nature to do those things, but now it’s in the nature of the place to do those things. It’s natural. And that’s what the strategic framework is telling us to do. When Father Bob says “building bridges” and “building communities,” that’s what he’s asking us to do. So I think that’s what is going to define the best universities in the world: the universities that say, “We are going to take on challenges that are at the interface of disciplines.”
Jenna Liberto:
We started talking about AI and how we embrace the potential of this tool, and then we talked about this interdisciplinary approach to research and what you do. If we bring it all together, I’m curious what your hope is. What do you ultimately hope your work achieves for society?
Nitesh Chawla:
That’s a great question. All the questions were great, but, I think . . . the hope here is that we can measure our impact on two things: One is, can we ask ourselves tomorrow, “Did we make a difference to somebody through our work yesterday?” And the second is, in doing so, did we push new frontiers of knowledge and scholarship? If we can point to both: doing the latter, pushing the frontiers of scholarship and knowledge, is easier. The former, where the work is actually translated to the benefit of society, is almost like a last-mile challenge after we have the publication.
As a grad student, postdoc, or faculty member, that’s great, that’s a line item on your CV, and if it’s in a great venue, you cite the impact factor. But then, to say, “OK, I am not done until I have translated it for someone to use”—that journey is often more difficult. Because now you have to take what you’ve done in research, find the partners, do the innovation in that space, learn the lessons where the technology or the methods that we have developed have to be adapted, structure the interventions that we thought would work, and do it in the right, responsible, inclusive way—that is a challenge. But if we can achieve both, it’ll be a job well done.
Jenna Liberto:
And why is Notre Dame the right place to do that?
Nitesh Chawla:
Tell me which other place? These are hard problems, and this is the mission of the University, as I mentioned earlier. It asks us: you want to be the best global Catholic university, distinct from or on par with the other private universities? The scholarship is distinct from but on par with other private universities.
Being the global Catholic university asks us—challenges us—to show that we can go beyond our labs and our rooms and into society. And what is society? I mean it not just in the broad sense of “we’ll work with marginalized, well-enabled, and underserved populations,” but also the innovations in what we are doing in AI for science, AI for chemistry, AI for biology . . . those things have an impact. So this is what this University’s calling is, in my opinion. And this is the place to do it.
I can’t imagine being as comfortable addressing these challenges anywhere else. Here, it’s OK to go out and spend two days in communities, to pretend to be a social scientist for two days while you’re still a computer scientist, right? And—or, back in 2008—to talk to three strangers from three different disciplines and say, “Could we do something?” I was just a, you know, silly assistant professor starting out in my job at that time. But I think, if not us, then the question is: who?
Jenna Liberto:
I really appreciate your perspective. Fascinating conversation, Nitesh. You said you’re a risk-taker.
Nitesh Chawla:
Mhm.
Jenna Liberto:
OK, so we’d like to try out something new with you if you’re up for it. It’s our lightning round.
Nitesh Chawla:
All right.
Jenna Liberto:
Three questions; don’t think too hard. OK. Are you up for it? This isn’t too big of a risk, I’ll be honest.
Nitesh Chawla:
Can I call a friend if I get stuck or something?
Jenna Liberto:
I’ll allow you to call in a friend. I think you’ll know the answers. So the first question is: top three AI tools you use personally?
Nitesh Chawla:
All right, you know, now for search engines, [I] use Google . . . AI, ChatGPT. I don’t know what the third tool would be, right? I mean, all the methods and algorithms and software that we use, I guess, are tools for productivity, but the first two are, you know . . .
Jenna Liberto:
I think that’s a great answer. Those are accessible and we all use them, too. All right, second question: most surprising data insight that you’ve uncovered?
Nitesh Chawla:
I think the most surprising data insight would be the example I shared earlier about the social worker stories in the narratives. The reason being, our original hypothesis—this was the work we did in São Paulo, Brazil, by the way—was that once we collected all the data, we would be able to show that food interventions have a positive impact and that it’s good to keep doing them. And then we didn’t see that.
Then, we dug into the social worker stories, where we went back to São Paulo, sat in a circle like this—folks were talking in Portuguese; I don’t understand Portuguese, but for emotions you don’t have to know a language—and we all cried together. So, that was the insight: oh my goodness, if we could make sure that these psychological factors are affected positively, and then food is provided, it has a better impact.
And the reason it was the most surprising insight is our hypothesis was very different. We had to double-click into it and look at data in a different way before we could find answers.
Jenna Liberto:
Good. Last question: research you’re working on right now that people would be surprised to know is happening at Notre Dame?
Nitesh Chawla:
Wow. That’s a tough one. You know, the research that we are working on—the interdisciplinary initiatives—as I said earlier, may not be happening at other places but should be happening at Notre Dame, right? And the foundational research that we do should be happening at the best places in the world, so . . . But I think it’s important for the world to know that this place has an amazing research enterprise, amazing potential. I was talking to Lou [Nanni, vice president for University relations] earlier and he mentioned this thing about—you know, what did he say?—“Our reality is different than our reputation,” kind of a thing. I think that’s where our reality is. We are a research enterprise—an amazing research enterprise—doing some very powerful things, so I don’t think there would be a surprise.
Jenna Liberto:
Well, we’ll tell them about it.
Nitesh Chawla:
I’m not giving you an answer.
Jenna Liberto:
No, that’s fair.
Nitesh Chawla:
I’m not giving . . . it was a lightning round and I don’t have a lightning answer to that. I am sorry, Jenna.
Jenna Liberto:
No, we appreciate the conversation. You said your story, your journey is one of gratitude, and we are certainly thankful you’re here at Notre Dame. Thank you for talking with me.
Nitesh Chawla:
Thank you so much.
Jenna Liberto:
Appreciate it. Thanks, Nitesh.
Thanks for joining us for Notre Dame Stories, the official podcast of the University of Notre Dame. Find us on stories.nd.edu and subscribe wherever you get your podcasts.
The executive producer of Notre Dame Stories is Andy Fuller, with producers Jenna Liberto, Josh Long, and Staci Stickovich. Videography and editing is by Zach Dudka, Tony Fuller, Josh Long, and Michael Wiens. Our music is by Alex Mansour, and I’m your host, Jenna Liberto.