
An interview podcast with guests of the Stanford Center for the Study of the Novel

CSN Café

Ian Watt Lecture: Wai Chee Dimock, “A Long History of Pandemics” - 3/2/2023

For full episode transcript, read below or download here

Leah Chase: [00:00:00]

Welcome, and thanks for joining us on another installment of the Center for the Study of the Novel's Podcast Café. In this episode, our host, Margaret Cohen, is joined by Wai Chee Dimock, William Lampson Professor Emeritus of American Studies and English at Yale University, and John Robichaux, Director of Education at Stanford's Institute for Human-Centered Artificial Intelligence. Wai Chee visited the Center on March 2nd to deliver her Ian Watt lecture, "A Long History of Pandemics." This conversation was recorded directly before that lecture, and we're thrilled to now be sharing it with you. Thank you for listening in on another of our warm and informal exchanges, as we scholars have a friendly chat among ourselves.

Margaret Cohen:

Wai Chee and John, it’s my great pleasure to welcome

[00:01:00]

you to another episode of our Center for the Study of the Novel podcast. Wai Chee will be speaking this afternoon at our annual Ian Watt lecture. But today we’re doing something a little different, and we’ve invited John Robichaux from Stanford’s new Institute for Human-Centered Artificial Intelligence. And John and Wai Chee are gonna have a conversation about an exciting collaboration that they’ve just launched, and I’m just going to be a fly on the wall and every so often buzz a little bit. So thank you so much for coming, Wai Chee. I know you’ve come a long way. How was your trip?

Wai Chee Dimock:

It was great. I was picked up at the airport – and I’m going to mention this at the lecture as well – I was picked up at the airport by Alex Sherman who brought water and tangerines. And we had a wonderful conversation on the way about the passive and neutral voice in scientific writing in the 18th century and all the way to the present moment. So it was a great story,

[00:02:00]

deeply learned on the part of Alex because that’s what he’s writing his dissertation on. And it was really fascinating. I think it’s only here that one can get into a conversation like that right from the airport.

Margaret Cohen:

Well, I’m so glad that you’ve come to join us. And John, where have you come from?

John Robichaux:

I’m afraid I’m right here on campus here at Stanford. I did not have anything as exciting as passive voice and tangerines on my way in, but what a privilege and real honor to be with you here today. I’m grateful to Wai Chee for the invitation to join in this conversation, and to you, Margaret, and the Center for your generosity in allowing me and HAI to be here with you. We hope we can be useful, and we’re looking forward to an exciting conversation.

Wai Chee Dimock:

Yeah, no, I mean I’m just so grateful for the support for this, you know, collaboration that is called AI for Climate Resilience,

[00:03:00]

and we can definitely talk about that when it comes up again. But, you know, I’ve reached out to various universities, including the University of Washington, about this, and they haven’t been able to give the kind of very tangible support that Stanford has been able to give us. So I’m just completely thrilled and just incredibly grateful and fired up by this collaboration.

John Robichaux:

It’s terrific. Well, it’s an exciting moment in Stanford’s history, where both interests in artificial intelligence and climate are core to the University’s activities right now and our strategic plan. So I know we’ve been grateful for your interest in our work. Also the connections that you’ve been able to make with Harvard and Yale and other colleges globally.

Wai Chee Dimock:

Yeah, yes.

John Robichaux:

So looking forward to talking more.

Wai Chee Dimock:

Yeah, no, this is definitely something to build on. And Margaret, and also the Center for the Study of the Novel – we were just saying that, you know, AI ocean data,

[00:04:00]

it’s just so crucial for the health of the planet, for the wellbeing of the Indigenous communities and the ocean – so many of them are island nations. So the ocean really has a key part to play, in all kinds of ways. I mean, you know, I think the novel form also has something to contribute to the conceptualization and design of AI, which we can definitely talk about more. But in any case, this is also a moment when humanists can really jump in in a big way.

Margaret Cohen:

Yeah, I think it is an interesting moment. I mean, and I’ll just use the podcast to do an advertisement for a grant I just received with Fio Micheli through the Public Humanities Seed Grant Program – Fio Micheli is the chair of the Oceans Department – to think about what’s called public knowledge infrastructures in the ocean beyond the two cultures, to think about how that old divide between, you know, science, tech, and the humanities can be bridged, superseded, with the urgency of climate change

[00:05:00]

and the need to collaborate on different ways of knowing. So it’s another facet to bridging the STEM-Humanities divide that I think is also really important to Human-Centered Artificial Intelligence.

John Robichaux:

Yeah, it’s one of the core missions of the founders of HAI to be thinking about AI across disciplines, and feeling like Stanford has a unique opportunity as a tech leader with that well-rounded curriculum that many of our tech peers don’t have at their disposal. And of course, being located in the valley as well. So really, this is a moment to think across disciplines and about the contributions that humanists can be making to AI, to this moment in AI’s history, that I know we’ll talk more about today. It’s central to our mission. It’s an exciting moment, and you’ll hear me say over and over again that I think it’s also a moment where we bear a great amount of responsibility, those of us who are living through this, to get these moments right.

Wai Chee Dimock:

Absolutely, to make sure that we know what we’re doing. At least, we have an

[00:06:00]

input in there to shape the development of AI. Because that’s not just for the future of AI, but also for the future of us, right?

John Robichaux:

Correct.

Wai Chee Dimock:

Because it’s really a case of, you know, building up two futures – for the non-human world and the human world. So this couldn’t be more important, and I think that all of us really have some contributions to make.

John Robichaux:

Well, I know regular listeners of the podcast will be familiar with, you know, the range of topics and interesting conversations that we’re privileged to be a part of and hear from those who come. But I’m wondering, Wai Chee, if you could maybe kick us off by saying a few words about why you wanted to dive into the AI subject in earnest, especially given the Ian Watt lecture’s connection to the Center for the Study of the Novel. What is it about AI that [indecipherable] you can bring?

Wai Chee Dimock:

Well, you know, actually, my interest in AI preceded the current interest in Chat-GPT, right? So everyone is interested in that, you know, plus all the big corporations

[00:07:00]

like Google and Microsoft. But my interest actually preceded that, and that’s because, in my conception, there are two things about AI’s relation to the humanities and the novel form. A) the novel has been very good at making listening an important part of its undertaking. And, in fact, AI can listen very well. I mean, its ability to analyze sound data is second to none, and that includes listening to the health of coral reefs, listening to the health of seals. The Allen Institute for AI really specializes in that, so, you know, they were there at this important conference that was co-organized by the US State Department – John Kerry was there. And it’s all about AI helping to monitor

[00:08:00]

illegal fishing and then monitoring all this other ocean auditory data. Because sound has different properties from visual data, and AI is, again, second to none in its ability to analyze sound data. And that’s something that actually humanists have done very well. You know, listening has been an important part of our training, really. I mean, it’s really central to our training. So there’s that. But I also think that the novel form has been very interesting in thinking about agency from unexpected sources, right? So the Gothic novels, you know – it’s not just humans who have an input or an important role to play. Various forms of deviation from the human norm, including non-human forces, are very important. And of course, recently, Kazuo Ishiguro has been really important. I mean, all his novels have really been about

[00:09:00]

non-human intelligence and how humans can learn to think about the non-human in a way that makes the most of them, that will allow them to have a voice that would make a difference to our own voice. And so, I mean, I love Ishiguro. He’s been doing this for 20 years – again, long before the current interest stoked by Chat-GPT. So I think that, you know, literature professors, humanities professors really have a lot to bring to the table, and we just have to make a decision, you know, make a conscious decision that we have to learn more about AI. Because this is also a moment – I mean, AI is not that hard to come to a general understanding of. But at least we have to take steps toward AI literacy,

[00:10:00]

and I don’t think that most of us in the humanities are doing that. So this is really a good moment to make a pitch for AI literacy as the kind of very loose foundation from which we can make other contributions.

John Robichaux:

I have to say that the commitment to AI literacy and understanding AI within the context of citizenship, self-fulfillment, one’s community, one’s life interests is also an essential part of our work at HAI. So I’m thrilled to hear you emphasize that, as it’s really what I’m doing day-to-day or trying to do in our programming as well.

Wai Chee Dimock:

Yeah. Right, I mean, for some time, I’ve kind of felt a little bit of a disconnect between my kind of interest as a citizen, you know, just making sure that we’re not making disastrous climate decisions, that kind of stuff, and the things I write about. You know, it seems that the two are kind of disconnected. And oddly, this new turn to AI has enabled me to bring them together in a kind of frontal way.

[00:11:00]

So it’s totally unexpected, but for me, it’s been really helpful – because there’s so much to learn. I mean, every day I try to learn about what’s happening with AI, and every day there’s something new. I find myself getting up at 5 in the morning just to follow this train that’s moving so fast. And I’m sure that it would have the same effect on other people – it can really energize humanities scholars. It’s very funny, because we know so little, but on the other hand, there’s so much interesting stuff to learn.

John Robichaux:

Yeah. Well, that idea of unexpected connections is critical to those of us who love the humanities, and yes, unexpected findings are one of the great gifts that AI can help us bring to the technologies as well. I know we’re gonna talk a little bit more about climate and the ocean piece in a moment, but because you mentioned Chat-GPT,

[00:12:00]

and because I don’t think we can walk away from this conversation without it, given the moment we’re living through right now, where Chat-GPT and other new models of generative AI have really taken the public’s imagination by storm in an unprecedented way – or so we think. And so I wonder – you know, we’ve got on the one hand a lot of really positive opportunities that folks are emphasizing around these large language models, around generative AI, both with text-driven models like Chat-GPT and with visual-based models like Stable Diffusion –

Wai Chee Dimock:

Like DALL-E.

John Robichaux:

– and DALL-E and the like. Again, at the end of 2022 and the beginning of 2023, when we’re recording here, the public has in their minds a real sea change. At the same time, we just saw a piece recently from the New York Times –

Wai Chee Dimock:

By Kevin Roose.

John Robichaux:

– by Kevin Roose, exactly, who underscored

[00:13:00]

what many of us in the AI world often are reminding folks: these models have their limits.

Wai Chee Dimock:

Oh, yeah, no. I mean, for one thing, I think that generative AI is kind of the broader category for both DALL-E and Stable Diffusion and Chat-GPT. I mean, the way the algorithms are trained is by feeding them billions of web pages. The Internet is the primary medium of education for training the AI. And so, I mean, they are just so exposed to the misogyny, racism, everything else, hate speech. Everything that is on the Internet, they are just assimilating, and sooner or later that stuff is going to come up. You know, there’s somebody at Stanford, Erik…Brynjolf…?

John Robichaux:

Brynjolfsson?

Wai Chee Dimock:

Brynjolfsson, yeah. I don’t know how to say his name. But anyway, he’s a very important thinker, and he wrote a piece called “The Turing Trap”

[00:14:00]

that he put on his website – that’s how I found it. And he talks about how mimicry – how generative AI built on mimicry – is actually quite dangerous. And I completely agree with him.

John Robichaux:

I agree. So Erik – I’m glad you mentioned Erik. Erik Brynjolfsson is HAI’s first senior fellow, and he directs the Digital Economy Lab, which is one of our signature labs. So he is tied to our work day-to-day, and he had this Turing Trap idea. For those who don’t know, the famous AI test is called the Turing Test.

Wai Chee Dimock:

The Imitation Game, as in the movie.

John Robichaux:

Exactly. So Alan Turing said that you will know AI has reached a general intelligence when you can’t tell it apart from a human interlocutor if you didn’t know who you were interacting with. And many people think today that Chat-GPT may have crossed that line

[00:15:00]

for the first time. That’s at least one of the conversations we’re having. But Erik is lifting up, I think, one of the dangers in thinking about AI as replacing humans, as opposed to AI more likely being a technology that works with humans.

Wai Chee Dimock:

Complementing us.

John Robichaux:

Complements, augments us is the language we often use.

Wai Chee Dimock:

Yeah, that’s the word he uses, augmenting. Yeah.

John Robichaux:

And so that’s where the gift is gonna be, and that’s where most of the AI is gonna end up functioning for us. So the question isn’t where AI might replace humans but rather where humans who are working with AI are probably gonna have the advantage over those who aren’t working with AI. Now, that could be a controversial question alone.

Wai Chee Dimock:

That could be, yeah. But I think that, first of all, Turing is the one who actually brought up this danger. In fact, he was amazingly prescient. I mean,

[00:16:00]

he said in his two seminal papers on machine intelligence – on computable numbers and on intelligent machinery – that quite often the programmers would not know quite what is going on inside the algorithms, that they would be just completely opaque, and they would take us completely by surprise. And that is exactly what happened in that conversation with Kevin Roose. So, I mean, Turing saw it from the very beginning. But I think that we definitely see it in terms of the projected job losses, you know. It’s not just people in Amazon warehouses; it’s going to be people doing biomedical research. And journalists – I mean, they are the next ones to go. Maybe at the same time, I don’t know. Computer programmers – which you can speak to.

[00:17:00]

The CEO of IBM said that 90% of data processing and software engineering will be taken over by AI, because AIs can write their own code. So it’s just terrifying, and it’s something that, you know – I think we kind of see the handwriting on the wall, but we don’t respond to it, right? I mean, is it just gonna happen just like that? So we really have to make sure we don’t go down that very, very dangerous path.

Margaret Cohen:

Can I ask a question, then, about pedagogy? As someone who’s designing undergraduate courses for the Spring Quarter, I’m wondering what kind of assignments I should be giving my students so that I don’t spend a lot of time wondering, “Was this written by AI?” Do you think the research paper is dead?

Wai Chee Dimock:

No!

Margaret Cohen:

Like, where are we going?

Wai Chee Dimock:

No, I don’t think so. You know, what’s really funny is that a lot of high school teachers have also weighed in. And what they say about

[00:18:00]

Chat-GPT – or maybe GPT, which actually does some of the same stuff – is not irrelevant to what we do. Because they say, “If you do an assignment on ‘What is the importance of the green light in The Great Gatsby?’, of course there will be people who will find an AI that can write the paper for them.” You just have to be more creative in the assignments that you give to your students, and if we are creative about our assignments, there’s no way – you know, if the AI hasn’t seen it before, it can’t go there. AI is really stupid in one sense. It has absolutely no knowledge unless it’s seen millions of iterations of something on the Internet. So if you can make the assignments so interesting and so unusual that only your students can write about them, there’s no fear.

John Robichaux:

This is a great question, and I think it’s one that – we’re finding, I’m hearing, working with professors and with teachers in primary and secondary school –

[00:19:00]

is a moment we’re all working through together right now: the challenges we’re facing and how to deal with them. I think a couple of things are on my mind here. Firstly, I love this idea that you might try to out-create or be more creative than the AI might possibly be, but where I begin to see the boundaries, or worry that we need to be playing toward where the ball is going rather than where it is today, is in acknowledging that the generative AI models, including the text-based ones, are going to improve.

Wai Chee Dimock:

Yes.

John Robichaux:

And so, I worry that I may not be able to be creative enough to out-push that.

Wai Chee Dimock:

Yeah, I do worry about that.

John Robichaux:

One. But two, it’s not the case that we don’t currently have similar types of problems. I think the problem might be more at scale at this stage. So unfortunately, I do have a

[00:20:00]

former classmate of mine who went into the business of grade paper writing, where students who have enough income could pay him or his people – his staff – to write papers for them to submit on behalf of the student. We have plagiarism. We have these opportunities already. We have, really, with the rise of Internet search in our lifetimes, seen the transformation of the information that’s available. So I’m not sure if it’s really a difference in kind as much as a difference in scale, in degree. One. Two, I do think, though – and I think a lot of the teachers and professors that I’m talking to are recognizing this – that there is a change of pedagogy and a skillset that must come. Now, some have described it as like a calculator,

[00:21:00]

where the calculator made basic mathematical calculations moot, or at least meant that you learn them earlier but then transition to higher-order operations and thinking earlier in your studies. So my daughter is using a calculator in fourth grade – or sixth grade – right now, at a time when I wasn’t permitted one before high school. So she’s a few years ahead of where even I was a generation ago. And she’s then able to learn math concepts at a different level. So there’s one line of thinking we’re seeing, which is that these text-based generative AIs in particular will allow us to move away from one type of skillset – writing; by the way, you all should definitely weigh in on the benefits of still being able to write – and push more towards that editorial, that conceptual, that refinement of thinking that you would still expect of the student who is taking a paper from somebody and probably needing to refine it.

[00:22:00]

Let me put it this way. If you were a wise student, you would not take a third party’s work and simply submit it as your own. But there’s a definite different skillset there. And then I think the third thing I want to point out as an educator is – and Wai Chee pointed this out earlier regarding the displacement of workers right now – that we’re hearing from our students here at Stanford and elsewhere that they’re seeing this moment in AI’s development as a real crisis for them. Should I study code? Much less, should I write the next great American novel? So, you know, for us as teachers and educators, one of the questions we’re having to wrestle with is how to respond to students who are sitting in front of us in tears, questioning their entire course of study and life plan to this point, much less those who are later in their careers and so on. So I’m wondering

[00:23:00]

if you can maybe help us think a little bit, with those who are listening, about, “What’s the difference between a Chat-GPT that can generate text versus the process of writing, and what does that really give us that AI may not yet be able to replicate?”

Wai Chee Dimock:

Yeah, I mean, I think that, you know – again, this is something that came up in one of the conversations that high school teachers had about AI, and I think those conversations should be really illuminating – but one of them said that maybe one skillset that should be developed more is the ability to edit other people’s writing. Because if we take something generated by Chat-GPT and ask students to edit it, or maybe have several different versions and ask them to compare them – which one is better than the other? – make them articulate

[00:24:00]

their own criteria in judging whether a piece is good or bad or whatever, and how it can be improved – that is a very important skillset. And that’s a skill that would be useful not just in school but later in life, if they become a lawyer, say. I mean, they have to look at different testimonies and try to find ways to integrate them, or maybe eliminate some of them as not being on the same wavelength as the others. And so that’s a really important skill that could be developed at all levels, in all disciplines. The other is the ability of AI to listen to very granular auditory data. It can listen to the sounds made by coral reefs. It can listen to the sounds made by whales, by basically all marine ecosystems, and be able to identify important trends. Listening is also an art

[00:25:00]

that is essential to the humanities. Listening is about as integral to our training as humanists as anything. So this is really interesting for the overlap between what AI can do and what humans have historically been doing, and doing rather well. So this is again a really important area where people coming from outside science and technology can make a huge difference. We shouldn’t be too concentrated on thinking of generative AI as the only future for AI. And I think that there’s just so much work for people in computer science to do in thinking about different forms of AI. I mean, it’s definitely not a foregone conclusion that generative AI is the only future. In fact, there’s a piece in WIRED magazine that talks about the dirty little secret of generative AI and how environmentally unfriendly it is, because in order to train those models,

[00:26:00]

those large language models, it takes an enormous amount of energy. Training a single one is equivalent to the energy used by 60,000 households. I mean, can you imagine? This is just totally crazy. So for me, it’s definitely not a sustainable future for AI, and definitely not for humans. So there’s just so much interesting work in developing cheaper, low-power, less computation-dependent AI. The computation dependency also means that you are dependent on a cloud server – everyone dependent on either Microsoft or Google – and, again, those data centers are enormously energy-inefficient at this moment. Although, to their credit, all those companies have made a big point of wanting to make them more energy-efficient,

[00:27:00]

but at the moment it’s not a sustainable use of energy. So developing more low-cost, low-power, less computation-dependent AI, such as is done in edge computing – there’s a very interesting type of ML that is a small subset of edge computing. Edge computing is done on a device – it’s on-device analysis as opposed to cloud-server analysis. And that is just so important for communities that are more resource-constrained, for them to be able to be active users of AI.

John Robichaux:

I’m so glad you emphasized that. This has been a subject that HAI has been talking about for many years now, so to see it picked up in the public press is great. And in fact, some of our graduate student fellows are working on precisely this question of bringing down the computational and environmental impact of large-model AIs and working at the edge.

Margaret Cohen: [00:28:00]

Wow, that's exciting!

John Robichaux:

So, you’re speaking our language.

Wai Chee Dimock:

Yeah. I mean, from the standpoint of somebody interested in Indigenous communities – that’s what’s been driving my interest in AI in many ways – they can definitely see the importance of emphasizing this low-cost, low-connectivity, low-power kind of AI. But right now, even though edge computing is actually used significantly by large corporations – all those voice assistants use on-device computing, so they’re already doing it – they’re just not doing it for specific communities, you know, African-American communities. We don’t actually see anything catering to African Americans. We see some catering to Indigenous communities, which is interesting, because I think the tribal colleges like Navajo Tech have done one of those workshops

[00:29:00]

with the Harvard School of Engineering, just to train the high school teachers and students to recognize the importance of artificial intelligence, but on their own terms. You know, not these high-power, energy-intensive kinds of AI, but AI designed specifically for communities in more remote regions who don’t always have connectivity, who don’t always have access to cloud servers. So the work is cut out for people who want to do something other than what the big AI companies do.

John Robichaux:

Indeed, and in fact, on the Indigenous side, in the non-North American or non-European context, for emerging communities, HAI has identified this as a key area along with Stanford. So look, we’re launching this year – Stanford was just awarded a National Resource Center designation from the US Department of Education to work in the

[00:30:00]

Global Studies area. HAI’s contribution to this is exactly on the question of marginalized communities and emerging countries, and where AI can be developed with the interests of those communities, those stakeholders, at the center – and what it would mean for us, both as technologists developing AI and, for an educator like me, what it would mean for us to educate the next generation about AI, if we were to take seriously the needs, the interests, the unique dimensions of Indigenous communities, of marginalized communities outside of Europe and North America.

Wai Chee Dimock:

And in fact, it goes beyond Indigenous communities, because, you know, we were just talking a little bit about, I mean, just generally the importance of going outside the US frame of reference, right? You know, thinking about AI as kind of a global phenomenon. And what is really interesting is that Africa has been a really important player,

[00:31:00]

A) because AI can bring trillions of dollars, you know, to the African economy in general, and because a lot of countries – you know, not just the usual suspects like South Africa, but lots of African countries that are not known for being high-tech countries – actually have very interesting national AI strategy plans. I think that Mauritius is the first country with a national AI strategy plan, and it’s just such a surprise to me, you know, to find that the US doesn’t have a national AI strategy plan. I mean, so here’s Mauritius ahead of us. Likewise, there are some countries – again, you know, I just want to emphasize it’s not South Africa, which wouldn’t have been so surprising – it’s Tunisia that has

[00:32:00]

the highest-funded AI startup: the first-round funding for this Tunisia-based AI startup called InstaDeep. It was, in fact, co-led by a woman, and it partners with all the major big corporations now because it’s so successful. So Tunisia has that to its credit. And so there’s a lot of interest – and, you know, the gender dynamics, in terms of InstaDeep – I mean, I don’t know everything about it, but a lot of the stuff that they do – actually, I think we’ll be talking a bit later about protein folding and the importance of AI to understanding molecular biology, very important drugs and so on. But InstaDeep,

[00:33:00]

as far as I can see, really specializes in that. And so it can predict the three-dimensional protein structure of various, you know, drugs based on the one-dimensional amino acid sequence. I mean, it’s just incredible, and it would have taken humans years and years and years to come up with what AI can come up with in a matter of weeks. But I just want to emphasize, I mean, it’s actually African countries who are taking the lead.

John Robichaux:

Indeed. And I know we’ve been talking outside the US, but I know you’re very committed to thinking about how AI is impacting Indigenous communities in the US, communities of color, other marginalized communities here – very much in line with HAI’s work around race and technology, race and AI, over the past few years as well. All of which folks can find out more about,

[00:34:00]

of course, on our website. But I wonder, as you're thinking about communities within the US, if there's anything you'd want to lift up for us.

Wai Chee Dimock:

Yeah. I mean, I think that in the US, certainly I’ve been following, you know, what’s happening with the Navajo Nation. And I’ll actually be talking a little bit about the Cherokee Nation as well this afternoon. But in terms of the Navajo Nation, it’s been quite surprising. I mean, they have hydrologists who are highly at home in the world of technology. And they – you know, I’m just giving away a little bit of what I’m going to talk about – they collaborated with federal agencies like NASA to develop a drought monitoring tool, which is important, I mean, just as a kind of basic necessity for the Navajo Nation, but especially in the context of pandemics,

[00:35:00]

when you need to wash your hands. I mean, if you don’t have running water in your home, it’s just, you know, beside the point to talk about washing your hands. So I mean, the Navajo really understand what is coming to all of us. I mean, drought – especially in California, even though with the strange weather pattern, you know, it’s easing up a little bit – nonetheless, it’s a long-term problem. And the Navajo are feeling it right now. And they have a lot to tell us about, you know, how to optimize water distribution and usage. I mean, you know, right now, some of the distribution policies are based on really kind of unsustainable models. Like, you know, if you are assigned a certain amount of water and you don’t use it up, you lose it. And that’s a crazy way to conserve water. I mean, you know, all kinds of communities have access to water that they don’t need, that they could have conserved,

[00:36:00]

that they're just using up. So, definitely, AI would change all that. Plus, you know, it would pinpoint – if you have a relatively large area, as the Navajo Nation does – I mean, it's 27,000 square miles – there is such tremendous variation across the whole nation in terms of who is experiencing drought and who is not. And they didn't have that, until NASA came along and helped them develop – co-developed, and NASA is very, very emphatic about that, to its credit – co-developed with the Navajo Nation a drought tool that enables them to pinpoint exactly which areas have experienced the most severe drought, so that the water can go there. It just makes such a huge difference, and it's not just for the Navajo Nation. I mean, the rest of the US is going to need that. So I think that Indigenous communities really are in a very interesting position. They really can be pioneers, you know, in the tech field, both in terms of developing

[00:37:00]

low-cost AI and in identifying all those areas where, you know, AI would need to intervene. I mean, they can really play a role that nobody else can at this moment.
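[Editor's note: the allocation logic described here – directing scarce water to the areas experiencing the most severe drought, rather than on a use-it-or-lose-it basis – can be sketched in a few lines. The region names and severity scores below are invented for illustration; this is not the actual NASA–Navajo Nation tool.]

```python
# Sketch: split a fixed water budget across regions in proportion to each
# region's measured drought severity, so the hardest-hit areas receive the
# most water. Region names and severity indices are hypothetical.

def allocate_water(budget, severity_by_region):
    """Split `budget` across regions, weighted by drought severity."""
    total = sum(severity_by_region.values())
    if total == 0:
        # No drought anywhere: split the budget evenly.
        share = budget / len(severity_by_region)
        return {region: share for region in severity_by_region}
    return {region: budget * s / total
            for region, s in severity_by_region.items()}

# Hypothetical severity scores (0 = no drought, higher = worse).
severity = {"northwest": 4.0, "central": 1.0, "southeast": 3.0}
print(allocate_water(800.0, severity))
# The northwest, with the most severe drought, receives half the budget.
```

The point of the sketch is the policy contrast: allocation tracks current need, instead of a fixed quota that communities are pushed to use up.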

John Robichaux:

Well, you did a great pivot there, a seamless transition right into, I think, the next topic that we want to talk about, and that's AI and climate. So your example of the Navajo Nation working with NASA, co-developing models that look at water availability, is one of those great examples that I think about as we're entering this new moment in artificial intelligence's development, where we've got computer vision and satellites able to help us monitor water at community, regional, and national levels. We have a number of researchers, here at Stanford but also several around the world, working on AI conservation efforts, or opportunities with smart buildings,

[00:38:00]

with smart cities, infrastructure. One of our HAI-affiliated faculty talks about how the old algorithms that we used to gauge when to let water out of a dam are, today, given the changing weather patterns, actually leading to more flooding. And so artificial intelligence is going to allow us to mitigate local flooding, conserve more water, and really help cities and counties at those levels control what are real changes that are coming about because of climate change, A, and, B, ensure that there's less devastation, less harm along the way. And then, C, secure things like water more long-term, and then apply that out, of course, to food and other places that I know we're interested in. Margaret, since you're here – I know you've taken a turn recently to thinking about oceans.

[00:39:00]

And I know the three of us have talked before about AI and ocean health as being a really interesting and exciting moment that we're living through as well. I wonder, Margaret, what it is about oceans that you're, that's got your interest right now.

Wai Chee Dimock:

It’s a long-standing interest – long-standing, like 20 years now.

John Robichaux:

Yeah, exactly. So maybe what’s happening now that you think is most interesting with it?

Margaret Cohen:

Yeah, yeah. Thank you for the question. So, as Wai Chee said, I've been researching the imagination and literature of the oceans, particularly narrative, going back to the beginnings of European trans-oceanic voyaging and its impact on the novel. I think maybe it's partially being at Stanford, but I got very interested in technology as giving us access to realms of the planet – and specifically oceanic realms – that we don't have access to without it. And so, then,

[00:40:00]

I just recently finished a book on the history of film shot underwater and the way in which –

Wai Chee Dimock:

Yeah, wow. That’s, that’s great. Nobody has done this, right?

Margaret Cohen:

No, it’s kind of offbeat for film scholars. They don’t think that it would ever be more vibrant.

Wai Chee Dimock:

Because it’s so technology dependent.

Margaret Cohen:

Yeah, yeah. The whole thing is technologically mediated, and some of the most brilliant things are probably in your popular imagination already, like from Blue Planet II. You know, remote vehicles, for example, enable us to access the deep ocean, and they enable us to monitor, also from the surface, whole areas of the world that we don't have access to. So, yeah, I've kind of moved from thinking about the imagination of the oceans in a more, I'd say, creator-oriented way to thinking about – and this is, you know, under the umbrella of the climate crisis – how to diffuse knowledge about ocean environments that are so remote and yet so much part of our,

[00:41:00]

you know, planetary health. And obviously film and TV have played a big role in that and in the popular imagination. So I think there are a lot of ways into that question. I'll just give you an image from a class – I took my class on a whale watching trip two weeks ago. And we were out in Monterey Bay watching gray whales, and the bay was filled with mylar balloons from people's birthday parties, you know. And so the whale watching ship would go by and gaff these balloons. But, you know, there's so much detritus out there in the ocean that needs to be located and then cleaned up, for the health of the ocean and for our health. I mean, the Pacific garbage patch – they call it a gyre – is another example. I could give you lots of different examples. But the ability to access these environments that are fundamentally toxic and hostile to humans,

[00:42:00]

yet also that sustain life on land is something that AI has a huge potential to impact.

Wai Chee Dimock:

Yeah, yeah. And I just want to add – this has been my view for a long time; I was nine years [indecipherable]. So the bulk of, you know – that's what I wrote about in my first book. And there is just so much knowledge in it. I mean, it really is, you know, like a lot of 18th-century, 19th-century novels – encyclopedic, with a wealth of knowledge emerging. Likewise Thoreau, about the natural world. I mean, BU scientists – Boston University scientists – are actually going back to Thoreau's notebooks and learning about the New England ecosystems and the variety of species back in the 19th century and comparing that with what we see now. That's really invaluable. So, you know, these are scientists. They're not literature professors, they're [?] professors.

[00:43:00]

They're interested in Thoreau in this fun way. Now, I mean, bringing this back slightly to the question of food and agriculture – the ocean is a very important source of food, right? So, especially with ocean acidification, a lot of the seagrass and seaweed, you know, there's a marked decline. All across the Pacific coast of the Western United States – I mean, Washington, Oregon, California – it's, you know, like 90%. Also in Australia, an 80, 90% decline in the seaweed population. And, you know, AI can really do a lot in monitoring that decline, and in, you know, suggesting remedies. So, for instance, using less fertilizer would be a very important

[00:44:00]

remedy for ocean acidification. And AI is one of the most important means by which chemical fertilizers, especially nitrogen-based fertilizers, can be reduced. In fact, I think that there was just one study about the Chesapeake Bay lessening its use of fertilizers and the seaweed making a comeback. In order to scale that up, though, AI is absolutely crucial. So this is another way in which, you know, it's not just natural ecosystems but human food ecosystems that are definitely impacted by the phenomenon of ocean acidification. And it's not a foregone conclusion that it's going to proceed in the way that it has been doing now. I mean, this is something that can be reversed.

Margaret Cohen: [00:45:00]

Yeah, I just want to go back to the Thoreau comment and the role of AI in enabling us to understand all the documentation that we have from centuries and centuries of environmental practice. Because outside the great works of literature, there is a lot of data – for example, from overseas voyages – that has, you know, accumulated. And I was at this conference – I think, John, you and I were talking about it – Harnessing Data and Tech for Ocean Health, put on by the Oceans Department in November. And one of the speakers was talking about the way in which the Smithsonian now has access to 200 or 240 years of Navy data in the logbooks, and the way in which they're using AI to go through the logbooks to get climate data – because every ship's logbook will tell you what the weather was on every day of that voyage – and then accumulate that and be able to come up with a model for what the climate has looked like globally, because these ships were sailing all around the world.
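[Editor's note: the logbook pipeline described here – one weather observation per ship per day, pooled across many voyages into a global climate picture – amounts, at its core, to a grouping-and-averaging step. This sketch uses invented log entries; the real Smithsonian work also involves handwriting recognition and far messier data.]

```python
from collections import defaultdict

# Sketch: pool daily weather observations from many ships' logbooks into
# an average reading per (ocean region, month). The entries below are
# invented for illustration.
entries = [
    # (year, month, region, air temperature in degrees F)
    (1843, 7, "North Atlantic", 64.0),
    (1843, 7, "North Atlantic", 66.0),
    (1843, 7, "South Pacific", 51.0),
    (1844, 1, "North Atlantic", 38.0),
]

def monthly_means(observations):
    """Average the observed temperatures for each (region, month) pair."""
    sums = defaultdict(lambda: [0.0, 0])  # (region, month) -> [total, count]
    for year, month, region, temp in observations:
        acc = sums[(region, month)]
        acc[0] += temp
        acc[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

print(monthly_means(entries))
# {('North Atlantic', 7): 65.0, ('South Pacific', 7): 51.0, ('North Atlantic', 1): 38.0}
```

Each voyage contributes only a thin track of observations; it is the pooling across hundreds of ships and decades that turns the logbooks into a climate record.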

Wai Chee Dimock: [00:46:00]

Absolutely. Because that’s a big database, you know, that, yeah, AI can definitely help us with.

John Robichaux:

I’m so glad you mentioned this, because to my mind, on this humanities podcast – I know ChatGPT and the art and music applications are getting a lot of attention in the public imagination. But the ability to scour, as you say, you know, decades, centuries worth of text across modalities has also been really central. So in fact, just in February – just a couple of weeks ago, February 23 – our weekly research seminar featured a historian who was looking at malaria outbreaks on islands using naval data, using burial records, using, in many cases, handwritten and merely scanned budgetary records from governments and the like.

[00:47:00]

And what he's attempting to do – or what his team is attempting to do, across disciplines – is, A, a historical reckoning of a malaria outbreak on the island, which is very famous among those who study pandemics. So it's a historical humanities project, with a history of colonialism attached to it – an intersection. And today he's trying to use that example of how the ocean waters change, deploying slices of coral, which act like tree rings in terms of capturing ocean history data, in order to help predict current or future malaria or pandemic outbreaks based on those historic models. Without AI, he would never have been able to put all of that data together and produce models that are helping us at least, yeah, grab the right correlations

[00:48:00]

from history that might influence things going forward. And that's just one example that crosses, again, history, sociology, narrative medicine and health, colonialism. Across all of our disciplines, we really have an opportunity in that data mining that five years ago would have been unimaginable, even for those who've been thinking about the digital humanities for a very long time. This is really another edge that's been opened.
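[Editor's note: the modeling step described here – lining up an environmental proxy record, such as coral-derived ocean temperatures, against historical outbreak records and looking for correlations that might predict future outbreaks – can be illustrated with a simple least-squares fit. All numbers are invented; the actual project's data and methods are far richer.]

```python
# Sketch: fit a straight line relating a coral-derived ocean-temperature
# anomaly to historical malaria case counts, then use it to project cases
# for a new anomaly. All numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical paired records: temperature anomaly (deg C) vs. case counts.
anomaly = [0.0, 0.5, 1.0, 1.5]
cases = [100, 150, 200, 250]

slope, intercept = fit_line(anomaly, cases)
projected = slope * 2.0 + intercept  # project cases at a 2.0 deg C anomaly
print(round(projected))  # 300
```

A correlation like this is only a starting point, of course – the historical reckoning, colonial context, and messy archival records are what make the real project more than curve-fitting.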

Wai Chee Dimock:

And the novel form, actually, has been – you know, I’m just thinking of Amitav Ghosh, The Calcutta Chromosome. I mean, that’s about malaria and, you know, colonialism. I mean, so is Ireland and India. I mean, those are the two, and they have great writers, you know, who can definitely help us see – I mean, predict the future as well as understand the past. And Ghosh is really archive-driven. You know, he's done so much research, just going back to the British Library and looking at all those colonial records. So this is a

[00:49:00]

great way in which, you know, the humanities once again can really draw on the historical record of humanity. I mean, that's for us to use right now, with the help of AI.

John Robichaux:

Yeah, tremendous. Let me ask: Wai Chee, you know, from your position at Yale and working with the Jackson School of Global Affairs, you reached out to Stanford, and now you currently, I think, have a position at Harvard, where they’re also giving you a platform to connect researchers across this AI and climate resilience interest. So I wonder – we’ve done a good job of gesturing towards some areas where AI might be helping us think about climate, think about food, think about communities. I wonder if you’d say a word more about what you’re hoping to do with your AI and climate resilience project and network.

Wai Chee Dimock:

Yeah, yeah. I mean that’s definitely some ways into the, you know,

[00:50:00]

into the future. But, you know, once again, this question came up when I was listening to those conversations among high school teachers about ChatGPT. I mean, there’s such an overlap, you know, across different educational levels. Those high school teachers have been able to pinpoint some really important questions that college professors should be paying attention to. So my hope in this AI for Climate Resilience project is really to remake education – or at least to use AI, to use the integration of AI into education in general, to make education serve the educated, serve the students more, so that it can help them, you know, live lives that would enable them to integrate knowledge and the actual jobs that they’ll be taking.

[00:51:00]

And once again, I mean, I’ve been struck by how it’s not necessarily the elite colleges but the community colleges. I mean, the Cal State system actually has been very proactive in terms of introducing AI into its programs. Likewise, Bunker Hill Community College near Boston has both an environmental focus – I mean, they have made that the foundation for the entire curriculum – and AI programs as well, because they see that this is where community college students can get jobs. But also that it’s going to give them jobs that will be more satisfying in the long run. You know, if you really do something that you believe in, it makes all the difference in the world. So, I mean, there are some really old-fashioned questions about, you know, what kind of job could be fulfilling. It’s really basic, old-fashioned, but with a new meaning right now, you know,

[00:52:00]

with the way in which AI can support, you know, kind of mission-based, or at least purpose-based, you know, forms of work. And yeah, so, I mean, I think that the outreach to elite colleges can definitely do a lot, but the community colleges can do a lot as well.

John Robichaux:

I’m so glad you said that. This is definitely one of the areas that I work in day-to-day with our partners here at the Stanford Graduate School of Education, their new Accelerator for Learning, the Stanford Design School, Stanford Digital Education – a number of projects that have launched really in the past six months around AI and education. So, one where we brought in K-12 teachers to rethink how AI might be transforming this moment, which we alluded to earlier with the ChatGPT conversation; and we’re training community college instructors. And really this is an extraordinary need in the country right now, given the lack of computer science talent, the lack of AI talent,

[00:53:00]

or the way that talent, because it's so rare, is being siphoned into Big Tech, and sometimes elite universities – though even we’re suffering versus Big Tech, where the pay can be better, et cetera. And so one of the goals that, you know, those of us at Stanford are thinking about right now is: how can we help those community college instructors, those secondary education teachers, and the faculty doing workforce training within the higher education space as well? What tools do they need, even if they’re not fully AI or computer science literate, so that they don’t need to be the technologist, but can still help students get up to speed on, you know, the skill set – maybe some of the soft skills in some cases, and in many cases, yes, the technical skills? But do that either through digital and online means or through upskilling of the community college instructors themselves. So I’m glad you see this. This is something that we see as really critical

[00:54:00]

at this moment in our country's history, but I think if you put that globally, you know, the scarcity of talent is going to be even greater. And if generative AI can solve its error problems – which, again, I think we're headed towards – yeah, we're going to be in a world where students may not need to do most of the coding themselves. But again, we'll be thinking about the problem-solving communities, the meaningful purpose that you're describing, that will be behind the work.

Wai Chee Dimock:

Right. I mean, low code is a really important movement in, you know, AI development. So, you know, along with low cost and so on, low code is – I mean, AlphaCode is doing that, right? I mean, we didn’t even talk about AlphaFold, which is related.

John Robichaux:

Copilot, I think is what you – yeah.

Wai Chee Dimock:

Yeah, Copilot, exactly.

John Robichaux:

Yeah, so 2022 in many ways was an extraordinary year in AI, in that you had GitHub Copilot really take off, right, which was able to take a lot

[00:55:00]

of boilerplate code that coders would have to write and offload it to automation so that the coder’s time was freed up to do the higher level, higher order things.

Wai Chee Dimock:

Just to think about this conceptually – you know, what do we want from AI? I mean, I think that people like us certainly have lots of ideas, but I have no technical expertise to implement them. So somebody has to write the code so that the ideas can take shape.

John Robichaux:

Yeah, exactly. And so for HAI, this is actually a big part of our mission, to say: a lot of times the technologists will tell us they can solve the technical problems. The questions are the ones that the humanists would add, that the policymakers would add, that the stakeholders in the community would add, that the community leaders would be adding. Help us get to those decisions, and then we can write the technical solutions to get there. So – go ahead.

Margaret Cohen:

No, I’m realizing, Wai Chee,

[00:56:00]

that you have lunch for the graduate students at 12.

Wai Chee Dimock:

Yeah, but they’re just downstairs.

Margaret Cohen:

Yeah, so I just want to make sure that we give everybody time. But I just wanted to sort of bounce off that – and I don't know if this is a way to start to wrap up – but I think the idea of collaboration is very exciting in a humanistic context, because certainly in our field, you know, it's the single-authored monograph, it's single authors or single scholars who are kind of the focus. And yet so many of the problems confronting us – intellectually, in policy, and more generally as humans – need collaboration, need different skill sets. And so one of the things I find so exciting about this conversation is the emphasis upon collaboration.

Wai Chee Dimock:

Yeah, yeah. Especially because, you know, I think that we can see something that people who are trained as programmers might not see, right? Because we can definitely see what the different levels of education need, you know,

[00:57:00]

or how different subsets of the education sector can have different AI needs, right? I don't think that they have been thinking about those questions. So we have something to bring, but they definitely have the expertise to implement it. So in terms of the division of labor between people who have the visions, or a sense of, you know, the various kinds of purpose that we could bring to bear on AI, and the people who can actually implement them – I think that collaboration is increasingly important. And that's something that could even be bipartisan. Because I don't think the Republicans are really against that, you know? I think that this is one area – I mean, just as they are now united, supposedly, against China, right, because of the AI race with China – I can see them actually potentially collaborating on how to make AI available

[00:58:00]

to the more general public.

John Robichaux:

I am so glad that you both are emphasizing the collaborative nature, or opportunity, we have here with tools – with the tools and humans, but also among the different conversations that might happen among those of us who are in policy, those of us who are in leadership, those of us who are in different academic disciplines, like this conversation. That’s been a centerpiece of our work at HAI. It was part of the founding vision. And in fact, Margaret, the way you described collaboration – you know, thinking about how people work together with the AI tools – goes back to, I think, Erik Brynjolfsson’s comment, and others who have emphasized that the future of AI is as likely to be those of us who are working with AI – once we, you know, understand where there are limits and dangers that we have to overcome –

[00:59:00]

that that’s going to be where we get a lot of power out of this tool set. Not always in the replacement – there will inevitably be some replacements – but in the tools we work with. Much like, generations ago, my grandparents’ generation moving from pre-tractor to post-tractor: those farmers that worked with tractors were still able to be farmers. They were also able to work more efficiently, generate more food, et cetera. So there’s some displacement, but then there’s also this real sense of collaboration. Margaret, I think that’s what I’m hearing from you. Wai Chee, I think, you know, there’s this other dimension here – that it’s also an opportunity, like this podcast, bringing together folks who are thinking about the technologies but the big questions as well. And so when we come together around how we want to organize the next tool set that we're coding out for climate, we need to have the right stakeholders in the room, populations, et cetera.

[01:00:00]

And that's the other dimension of collaboration here, which is absolutely essential. Both of those, I think, are essential to what Stanford has sought to do in putting a lot of investment in the Institute for Human-Centered Artificial Intelligence, one. And two, I'm going to say – and maybe this is a place to close – that what I often remind folks is: we, this generation today, have a unique moment in all of human history. There's no generation that's ever going to be able to shape the future of artificial intelligence more than our generation is right now, at this early stage. That comes with an enormous amount of responsibility, not just to those of us living today, but for generations to come. And so I'm grateful, Wai Chee, for you

[01:01:00]

bringing us together for these types of conversations, to hopefully help nudge the arc of history more towards that broadly shared benefit of the tools and the technologies, and of having the conversations that are cross-disciplinary and really at the heart of what it is we want to be. Because we get those choices today in a way that maybe nobody else will going forward. And finally, to recognize that it’s a work in progress and we're gonna have fits and starts. It's not going to be a singular positive trajectory. There are going to be grave failings, and they're impactful at the level of scale that AI can unleash. So thank you.

Wai Chee Dimock:

No, but yeah, no, I mean, I think that there’s just so much that depends on us. I mean, you know, not because we are especially imaginative, but just because we happen to be at a juncture where the future development of AI is not carved in stone.

[01:02:00]

I mean, it’s just so adaptive, and it could be – I mean, it’s not necessarily so now – but it could be very responsive to human needs, and also to kind of the dangers that are facing us. So if we could just make AI, push AI – or not push, but certainly point it in that direction – that is one of the possible directions for AI to develop that could make a tremendous difference to future generations.

Margaret Cohen:

Well, thank you both for joining us. It’s been really an immense pleasure to get to chat and to think so much. And Wai Chee, I’m so looking forward to your lecture, and I hope we’ll continue to be in touch, John. It’s really great.

John Robichaux:

Indeed. Thank you. Thank you, Margaret, as well.

Wai Chee Dimock:

Likewise. Our work is going to continue for a while, so yeah.

John Robichaux:

Looking forward to it. Thank you all.