Library Talks: Building the World We Want, Artificial Intelligence and Global Governance

By NYPL Staff
October 3, 2023


In this episode of Library Talks, acclaimed scholar and writer Alondra Nelson leads a discussion on the transnational impacts of artificial intelligence and the need for global collaboration.

Featuring:

  • Karen Kornbluh, Distinguished Fellow for Technology and Competitiveness, German Marshall Fund of the U.S. and former U.S. Ambassador to the OECD
  • Maria Ressa, CEO and President of Rappler
  • Olatunbosun Tijani, the Honorable Minister of Communications, Innovation and Digital Economy for the Republic of Nigeria
  • Tim Wu, Julius Silver Professor of Law, Science and Technology, Columbia University
  • Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study and Distinguished Senior Fellow, Center for American Progress

This panel of experts examines what forms of global collaboration are needed to address issues such as inequality and climate change, privacy protections, and human rights. From shared regulation and the alignment of democratic laws, to the equitable distribution of benefits, to the prevention of dangerous AI proliferation, to the consolidation of power, and more, the evolving nature of AI, combined with its global impact, requires the untangling of complex dynamics to provide a clear picture for effective future policymaking and to engender a more just and safer world.

Five speakers on stage at a Live from NYPL event

Transcript

[Music]

[Aidan Flax-Clark:] Hi. And thanks for listening to Library Talks, a podcast from the New York Public Library. My name is Aidan Flax-Clark. I'm the Director of Live from NYPL, which is the library's premier cultural programming series. And I'm also the host of the show. Today, strap in, folks, we're about to get seriously wonky. But it's about a super important issue, one that I'm sure all of us are reading about in the news all the time: artificial intelligence. And specifically today, you're going to hear a conversation that happened at the library just a couple of weeks ago, that was led by Doctor Alondra Nelson. If you're not familiar with Alondra Nelson, I'm going to encourage you to look her up. She is a serious writer and scholar. Currently, she is the Harold F. Linder Professor at the Institute for Advanced Study. You know, that place that you might remember from Oppenheimer. And there she's part of the AI Policy and Governance Working Group, which convened this conversation. Before coming to the institute, she was a Deputy Assistant to President Biden, and served as the Acting Director of the White House Office of Science and Technology Policy. And while she was there, among a lot of other work, she led the development of the White House's AI Bill of Rights, which lays the groundwork to safeguard people's rights and access to opportunities, as AI reaches further into all of our lives. The AI Working Group represents a mix of sectors, disciplines, perspectives, and approaches. And its goal is to ensure that AI is developed and used responsibly by researchers, industry leaders, policymakers, and the public. Part of their work is convening public conversations, like the one you're about to hear. And for it, Doctor Nelson brought together a group of people dealing with AI around the world, to talk about the transnational impacts of artificial intelligence and the need for global collaboration and policy. They examined what forms of collaboration are needed around AI, to address issues like inequality and climate change, privacy protections, and human rights. It's a complex subject, and a heavy one, but it's totally fascinating and it's already impacting all of us. On hand to talk with Doctor Nelson were Karen Kornbluh, Doctor Olatunbosun Tijani, Tim Wu, and Maria Ressa. It's a big group, which can be tricky on audio, I know. But Alondra speaks first and addresses each person pretty clearly by name. So, you should be able to follow along. Here's the conversation.

[Applause]

[Alondra Nelson:] Good evening everyone, and thank you for being here both in person and online. It's still, I'm still getting used to sort of being together in space, given the last two years that have happened to us. So, we've been having a kind of small conversation, of a community of thinkers and doers we call the AI Policy and Governance Working Group. We've been meeting for two days, here in New York City, thinking about international governance, international collaboration in AI governance. Together, and part of our commitment as a working group is to always engage the public in our work. And what better place to do it than the New York Public Library, that's so committed to engaging the public. So, some of our colleagues are here. So, I just want to acknowledge and welcome them. So, this working group is, I think, you know, like the broader public, an attempt to sort of bring people together across different perspectives, experiences, skill sets, methodologies, to think together. And often, we've not thought together, this particular group, around the challenge of AI and of AI governance in particular. And I think we all share in this group a commitment to really attending to and thinking better and harder together, and actually doing, applying our work to solutions around current-day harms and risks, and future potential harms and risks of AI. And over the last few days, we've been thinking about global governance in particular. So, I think given the breadth of AI tools and systems, I think many people agree that some form of multinational, transnational, you know, global governance, more than one state, more than one community, is needed. But the agreements really end there, right. There are lots of disagreements. There are a lot of competing kinds of models about what this might look like, how we might do it. It's also, I think, you know, the multilateral system that's been built out since the middle of the Second World War didn't have large corporations at the center of it. It was really about the nation state. And so, we're thinking about different actors in the space of this as well. So, in this moment, what kind of global governance do we want? What is the world that we want to build? Is it laws? Is it rules? Do we need norms, regulations, standards, research networks, more? What do we need to harness AI? And what kinds of global collaboration are needed to address these issues? Issues that range from the equitable distribution of benefits that might accrue through the use of AI. The prevention of illicit uses of AI tools and systems. The consolidation of power by the tech industry. And the just constantly evolving nature of AI. So, it's very complex dynamics. And it means that our work as policymakers, people who think about the policy space and about the public good, has gotten increasingly harder. So, it's a big assignment that's going to require a lot of us working together. And some of our best thinkers, and most visionary thinkers. And so, I'm so delighted to have this extraordinary panel of brilliant people. You have their bios in your programs, so I'll just introduce them briefly. Ambassador Karen Kornbluh is a distinguished fellow for technology and competitiveness at the German Marshall Fund of the US. And she is the former US Ambassador to the OECD, which we'll say a little bit more about. Maria Ressa is a camerawoman.

[laughter] And the CEO and President of Rappler. And the 2021 Nobel Peace Prize laureate. And we're so delighted to have her. Olatunbosun Tijani is the Honorable Minister of Communications, Innovation and Digital Economy for the Republic of Nigeria. Welcome, Minister Tijani. And Tim Wu is the Julius Silver Professor of Law, Science and Technology at Columbia University. And until recently, served as the Technology and Competition Advisor to President Joe Biden. So, thank you all for being here. Okay. So, the first question, I think, just to get us started is: what to you is the most pressing or urgent issue with regards to AI governance? And that could be a beneficial thing, or it could be a dangerous thing, that we need to think about with regards to AI tools and governance. And do you think they can be addressed by international governance, by global governance of AI? Karen, why don't we start with you?

[Karen Kornbluh:] I guess, you know, for me, what I'm, what I'm interested in is a little bit meta. It's what we were talking about a little bit backstage, which is that I feel that a lot of our confidence in governance has been hollowed out. And a lot of our capability in governance has been hollowed out. And I see AI as the kind of challenge that could really break democracy, if we don't get ahead and figure it out. And by that I don't mean we need to control AI. I mean that we need to make sure that there is, you know, some of the issues that you were talking about, equitable access, equitable gains from it. You know, people aren't impoverished. But also that we get the stuff that's been promised on the upside. So, are we, how on earth are we going to get cures to rare illnesses, if there's not a commercial upside to that, if governments don't get involved? So, I guess my meta answer to that is: are we going to have the state capacity to make sure that the rights, protections, social goals that we've all agreed to in the past can be honored in the future, both on the upside and the downside?

[Alondra Nelson:] Thank you for that. Minister Tijani?

[Olatunbosun Tijani:] Thank you. This is an interesting conversation actually. Because for, for a public leader from a country like Nigeria, I think the first question is: how do you join conversations around governing a phenomenon that you've actually not participated in creating? Right. I think that's the starting point of the conversation. Artificial intelligence is presenting us with tremendous opportunity to reimagine how we do things, and also be more productive as humans. And while we've not participated deeply in shaping the knowledge, and building up the knowledge, that is leading us to where it is today, it's been forced on us. It's a reality. It's almost a wicked problem. It's an opportunity for us to do things better, even in countries like Nigeria and so many other countries in Africa. But I think the biggest challenge for us is not just even from a governance perspective, I think it's how do we understand the opportunities that exist for us to apply AI in creating value and in providing opportunities for our people. So, that's the conversation we're dealing with on the African continent. But our biggest challenge with that is, we also know that inclusivity is a big problem with AI. So, if we don't focus on ensuring that this phenomenon can actually support, and is taking into consideration, our reality from Africa and the global south, you know, how do we productively be part of the conversation on how to govern it? So, I think the biggest challenge for me, as Minister of Communications and Innovation in a country like Nigeria, is actually ramping up investment, in ensuring that we can understand the implication of this phenomenon, on our economy, on our people. But also perhaps provide some sort of leadership on how we think about inclusion in AI. When I say inclusion, it's about the data set itself. We do have a significant challenge in ensuring that we can actually contribute to the quality of what's been built. A lot of the data sets that we have are not currently connected, you know, so what do we need to do to accelerate how we bring them on board and get them connected? To improve the quality of what we're referring to as this phenomenon. I think until we do that, we're going to struggle to actually have a proper conversation and be at the table. And I get the sense that even the conversation on governance is being led from the west. And the construct itself is being dumped on the global south to accept and forced to constructively do that. I think we also need to come to the table and find a way to enrich that conversation around governance.

[Alondra Nelson:] Thank you. We're going to come back to that. Thank you for that. Professor Wu.

[Tim Wu:] Thank you. It's good to see you again.

[Alondra Nelson:] Good to see you.

[Tim Wu:] Here. You know, I have two thoughts about this, which I'll speak on. The first, and you know, on the concern side, they're both on the concern side, but I, as a person who has studied a lot of technological revolutions over the last 200 years, one thing that strikes me about AI that feels very different is I've never seen, with the possible exception of nuclear power and nuclear weapons, a technology that its inventors seem less excited about and more concerned about. And you know, sometimes I wonder why they devoted their life to something they seem so afraid of. And I think, I take that concern very seriously. And that moves me, you know, in the direction, where the people who are closest to it seem so concerned, you know, of wanting to have something like a, you know, more nuclear approach to this. Is this really, or parts of it at the frontier, so dangerous that we need the kind of strong state apparatuses that you were talking about, Karen? So, that's one set of thoughts I have. The primary thought I have, or concern, with AI, and I think the challenge for the world and nation states individually, is trying to ensure that it does not make current imbalances in economic power even worse. In other words, that it doesn't make the rich richer and the poor poorer. Cause I think there is every sign that things could go in that direction. You don't have to be a genius or an economist to realize that there are only a few companies so far who have been able to play in this space. That the economic consequences of automating huge amounts of human conduct are far reaching. And you know, the risk of things getting worse instead of better is serious. I noticed the last time there were really epic, truly dramatic changes in technology, about 100 years ago with electrification, 150 years ago, the most dramatic technological changes of that period were electrification, then automobiles, industrial internal combustion engines. You know, we had two world wars, revolutions all around the world. Another serious problem. So, I think we need to do better than we did in the early 20th century, when we had a catastrophe following very serious technological change. And I'm worried that the imbalances we're creating could be very dangerous to our civilization.

[Alondra Nelson:] Thank you for that. Maria.

[Maria Ressa:] I'll pull bits and pieces of what, what the Minister said. You know, we're in the global south. How do we be part of the discussion? Well, in 2016 I was in Mountain View, and I just said, you know, these 90 hate messages per hour, they are coming, they're with us right now, but they're coming for you. I think what is happening to us has now happened to you. It has worsened. So, that's the first step. So, our voice at the table is: this is what's going to happen to you. It actually already has. So, that's the first. I think the second thing is, in terms of the imbalance, that's already also happened, right. Are we going to make it better? The two, the biggest problems today: this morning, Canada and the Netherlands launched the Global Declaration on Information Integrity Online. I'll use those two words. 27 nations signed up. Information integrity. I mean, in December of 2021 at the Nobel lecture, I kept saying three sentences over and over. That, you know, if you have no facts, and we have no facts, because as early as 2018, the social media platforms, and this is only the first iteration of AI, right, machine learning into AI that pulls up algorithms, and patterns and trends on social media: lies spread 6 times faster than facts. That's an MIT study from 2018. With Elon Musk, we know that that's gotten worse. But I'd love to see the study. So, if lies spread faster than facts, without facts you can't have truth. Without truth, you can't have trust. Without these three, we have no shared reality. We can't have democracy. And then if you have no facts, how do you have integrity of elections? And I think the last part of this is will, individual will and agency. And that is the exploitation, and this is still the first impact of AI, the exploitation of what was once used for advertising and marketing, by geopolitical power. And that is all before generative AI was rolled out in November of 2022. So, wow, talk about bleak. It isn't all bleak.

[Alondra Nelson:] It's not all bleak.

[Maria Ressa:] Cause of course, your work shows it's not all bleak. But those, I think, are the greatest dangers. If we have no facts, lies spread faster. Do you tell your kids, lie all the time and I'm going to keep rewarding you? That's our incentive structure today.

[Alondra Nelson:] Thank you for that. So, you know, it isn't, we hope it's not all bleak. Who knows? But I think some, you know, glimmers of possibility and hope, and things that people have addressed, have been organizations like the OECD. Which is the Organisation for Economic Co-operation and Development. It's an [inaudible], a UN body that has 38 member countries. That in 2019 released principles around AI that were later endorsed by the G7. Later endorsed by the G20. So, there's a kind of growing conversation, but still not significantly inclusive, around, you know, what we need to do and think about here. So, Karen, I would love your reflections, as the former US Ambassador to the OECD. I mean, is this the kind of cornerstone of a global governance regime for us? You know, and you work in tech policy more generally, do you feel optimistic about the OECD in this space, or do you suggest other things for us?

[Karen Kornbluh:] Oh, optimism. There's a big word. Let me come back to that, and also to Tim, Tim's point. So, the OECD is really interesting. It's actually not a UN agency, which is what makes it so interesting. Because if you're a UN agency, you include everybody. And I think the OECD fell out of favor for a while, as the G20 became the hot game in town, because it was just the biggest. The G20 was the biggest. And I think as this contest between authoritarian countries and democracies has taken hold, the US and others have turned more towards the OECD as a place where likeminded countries gather. Now, it's also the rich man's club. And so that's, the upside is also the downside. It's likeminded countries, but it's a little too exclusive. Now, the OECD is working to change that. It's now 38 countries. It was 34 when I was there. It includes a bunch of South American countries. There are 6 new countries now in process to join the OECD, including Peru, Argentina, Brazil. Really interesting. And now Indonesia has expressed interest in joining, which is fascinating. Now, it still doesn't have an India. You know, so, it can never be the solution. But it's a really interesting place to start to build some scaffolding. And that's how I would think about it. So, when the oil embargo hit, Henry Kissinger actually turned to the treaty organization of the OECD to create an organization under it, called the International Energy Agency. And the International Energy Agency is where countries, likeminded countries, come to share data about forecasts of energy. It's become a great source of information about the energy transition. It's where countries coordinate releases of the strategic petroleum reserve. The OECD was also the scaffolding for FATF, the Financial Action Task Force, where we go to combat money laundering. So, I think the idea is, if the OECD can incubate a bunch of ideas, as you said they incubated the principles, the definitions, the taxonomy that are really behind the risk framework of NIST, the National Institute of Standards and Technology, which is the US standard-setting organization. And their risk framework has been praised by everybody. That comes a little bit out of what the OECD had done. In a way, the EU's risk framework is a child of that approach. The G7, which is the 7, you know, big democratic countries coming out of World War II, those countries now have a process led by Japan called the Hiroshima Process. Which is supposed to produce some kind of code by the end of the year. It looks like they're looking towards the OECD to be the ones that sort of own that and keep it going. And so, you could see, as you were sort of suggesting, a process where it's at the OECD and then it goes to the G20. And there's another organization at the OECD called the Global Partnership on AI, that has many members, including African members. And so you could see, the term that's used is socialized. But it's that a global governance approach gets to be socialized. And then what I would hope is that out of that comes some kind of more structured, more permanent organization. More akin to something like the International Energy Agency, but much, much, much more inclusive. And I just want to take a step back. Because whatever happens in a global setting, or in a multilateral setting, that's not going to be much more than principles. You can, I mean at the OECD, you can say, well you can only join if. But after they join, there's no enforcement mechanism.
There's what they call name and shame. You know, if it's a really top rule, they can embarrass you, if you're not following it. So, really, these are principles, recommendations that countries have to implement on their own. And that's where I get back to this issue of, do our countries have the capacity to do that? And you know, I'm fighting with myself every day to be optimistic, to use your term. And I think back to the dawn of the digital age. So, I was at the Federal Communications Commission. And there were a lot of things that were done then that I don't think of as big government. I think of some of them as smart government. So, mobile telephony was not a thing until regulators decided to free up spectrum. And I was looking at an early consulting report that said by the year 2000, there'll be 900,000 users of mobile phones. That was 1% of what we saw instead. And we created, I was just telling you about this, being in a library makes me think of it, we created this program called the E-Rate. Where we took some funding, $2 billion a year, and put it towards connecting schools and libraries. So, it's that kind of, to equitably, you know, there used to not be telephones in classrooms. There was one telephone in a school. And now, there's internet in classrooms. So, I think those kinds of creative approaches have to be, they require state capacity. And so, at a multilateral organization, we can hope that we get some agreement on definitions, on principles, on what's okay, what's not okay. The enforcement is going to be really hard. And I think, another piece of it, sorry, but another piece of the enforcement may have to eventually be getting at what Tim was suggesting. For some of the really outside-the-box frightening risks, the kinds of things that we think of as akin to atomic risks, we're going to need things like inspections. And that's a whole other scaffolding that we're going to need to put in place. So, there's a lot to unpack there. But I think the OECD could be a starting place.

[Alondra Nelson:] Thank you for that. So, Tim, I'm going to come to you next. Karen mentioned sort of the creativity of government. I mean, you've served in two administrations. And most recently, in the Biden administration, you led the development of the Declaration for the Future of the Internet. Which was a more kind of proactive vision for what the internet could be. You had 67 countries sign on. To Karen's point, it didn't have enforcement. But you worked really hard on it. And you clearly thought that you were doing something that was important for international collaboration in the world. And I would love for you to say a bit about that, if you don't mind.

[Tim Wu:] Yeah, sure, thanks. Yeah. No, thanks for bringing that up. So, the idea of that effort was rooted in the sense that there'd been a lot of backsliding in internet freedom and internet rights. The growth of censorship, the growth of spying. A growth of, you know, really aggressive state action, that was contrary to some of what I think were the founding principles back in the 90's. And the idea was for likeminded countries, mostly, almost all democracies, to sort of say, you know, these are thou shall nots when it comes to the internet. You shall not, you know, shut down the internet during an election because it's inconvenient to have, you know, dissidents talk. You shall not spy on, you know, your political opponents and use their information. You shall not, like, ban all journalists. You know, basic, sort of very basic, almost human rights kind of stuff. And we wanted to put the foot down, make it clear. Now, was there enforcement? All along, they didn't really say this too much in the White House, but there was always a sense that there wasn't going to necessarily be enforcement by the organization, if there was one. But it didn't mean that we couldn't treat countries differently who broke the rules. You know what I mean, in terms of aid, in terms of how we dealt with them, in terms of everything. You know, we wanted to set out some rules and say, listen, if this is your approach, that has violated all sorts of fundamental human rights, you know, maybe that affects, for example, how we treat your applications, domestic policy wise. So, there's sort of a trade side to it too. You know, one of the things we noticed, this is a little more [inaudible], but China had, you know, blocked almost every US application that had any kind of political meaning. And we were like, why is it that every Chinese application has guaranteed access to American citizens? So, that was part of it. So, there were no formal sort of teeth inside the declaration, but teeth inside our own enforcement. Now, how does that translate to AI, or how does that mechanism translate to AI? I mean, I think what we're in the process of doing now is trying to figure out the basic principles. And I think we're early in this conversation. You know, you've made a big contribution to this with the AI Bill of Rights. But we are figuring out right from wrong. And we need to have it be something that comes with consequences. You know, I'm not saying you have some kind of international police force. But I am saying there have to be consequences. You know, right now, just by default, those consequences are going to be in the nation state. But we have to have them, and there has to be right, and there needs to be wrong.

[Alondra Nelson:] Thank you for that. Minister Tijani, there was a great deal of excitement about your appointment. You were appointed last month. In part because you had done extraordinary work as the founder and former CEO of CcHub, the co-creation hub, which is this Pan-African technology and innovation center. That includes not only Nigeria, but Kenya, Rwanda, Namibia. And so, I wanted to, with CcHub in mind, and also thinking about, I know the African Union is working right now on an AI strategy, I wanted to get your thinking on a regional approach to AI governance. Like, should we be thinking not only about global, kind of multilateral approaches, but is there a particular approach that you think, from your experience at CcHub, and what the African Union is trying to do, of regional governance that's important here as well?

[Olatunbosun Tijani:] No, absolutely. I think there's tremendous value in taking a regional approach. Thinking through collectively, you know, how we might create the right principles for how we exploit the, you know, the technology. But I think the danger with that, it touches on some of the things you were referring to, is the fact that we shouldn't look at the deep issues around AI as just being misuse and abuse. It's obviously one of the biggest issues. I think for countries like Nigeria, and the global south, the biggest issue is actually what are we going to do with this amazing technology. Are we going to get the opportunity to use it to uplift our people and our economies equally, at almost the same pace as the west? I think that's something we need to think about. And because a lot of these nations don't necessarily have the deep resources to go deep into understanding this technology, how to appropriate it, how to leverage it for economic prosperity, there's a danger in also mainstreaming a collective approach. Because the implication of that is our nations will not build the ecosystem locally that is important for them to exploit it. Of course, we live in a connected world, where, you know, Nigeria can easily leverage knowledge from all over the world. One of the things I'm passionate about is, while the research community in Nigeria may not be as deep as what you have in the US, there's a fantastic Nigerian researcher in almost every research institute that you look at in the US, or anywhere in the world. So, we do have access to these people. And part of our agenda is how do we connect them back to help us think through what we can be doing with it. But we don't want to be lazy about the fact that there's an African approach to artificial intelligence, which means the Nigerian government or Nigerian people are also not paying attention to the fact that Nigeria is not South Africa. And what do we need to be doing? How do we strengthen connectivity in Nigeria, for instance, to ensure everyone can have equal access to it? How do we improve the quality of our data sets, so that we can actually contribute to enriching the quality of outcomes that we're getting from this technology as well? What's going to be our contribution to this phenomenon? I think we can only do that if we don't get too caught up in the fact that an African Union approach is going to solve everything. And because we do have other urgent issues, you know, poverty is always a wicked problem. Because we're battling with that, then there's something called artificial intelligence. I'm being attacked on Twitter every day by my people: how is AI a priority for Nigeria? But I look at my people and I say, you know, if we don't do something about it, by the time we're done solving our problems today, we'll have another problem to deal with. The requirement of building an inclusive society is constantly shifting. And phenomenons like this are making it a lot more difficult. So, I'm for a collective approach, but I think collective approaches should also encourage that nations are actually looking inward, and building the ecosystem to allow the technology to thrive appropriately in tandem.

[Alondra Nelson:] Right. Thank you for that. So, we're going to go to questions shortly, to Q&A shortly. But Maria, you're a journalist, a freedom fighter. You have closely followed and experienced the impacts of algorithmically amplified news. You spoke about it at the beginning. You've argued that we're at a dangerous inflection point in our public discourse, in our democratic discourse. And that we're going to need more than our kind of conventional, typical ways of thinking about policy or thinking about activism. So, do these kinds of heightened stakes that you laid out earlier in the conversation make you more optimistic about, there's the word again, Karen, global policy mechanisms or about global advocacy, less so? What other approaches should we consider?

[Maria Ressa:] I don't know how to begin with that one. No. I think, after generative AI was rolled out in November 2022, and we looked at the, you know, we still haven't solved any of the harms of the first generation, right. It's still there. And what we saw is that the companies that did it changed their names. And then, Twitter to X, Facebook to Meta. You know, TikTok is TikTok, but they came in. Generative AI, and look at this, right, like the Screen Actors Guild goes on strike. But news organizations, we keep going. But all of our, everything we've published, everything you've published, has already been sucked up. If it is, what happens? There's, I guess, I have to be optimistic cause we're living now. And what we do today will matter, will make it better. But two things, one is transparency. Demanding greater transparency on the data. So, for generative AI, let's move there, right. We know as far back as GPT-2 that they were at 1.5 billion parameters. Meaning, you know, the power is in word for word, 1.5 billion. And then GPT-3, it's 175 billion. So, 1.5 to 175. Generative AI is exponential, exponential. So, if we don't stop it this second, it's going to be far worse than the first time we worked with AI, humanity touched AI. And then GPT-4, so if we're at 175 billion parameters, our brains can't do that. GPT-4, OpenAI didn't want to actually tell you what was in it, or even the parameters. It's anywhere from 1 trillion to 100 trillion. What that means is that any of the harms now will go off the scale before we get to it. So, the two words are transparency for the old AI, machine learning AI algorithms. You know, transparency of the social harms of the data flow, which the EU's Digital Services Act promises. Real time data. So, we look at patterns, which is exactly what generative AI pulls up. But it hallucinates. That's the tech word. It lies. You don't know whether it's true or not, right. So, let me, last thing here is that if the first-generation AI weaponized our fear, our anger, our hate, our division, us against them, the second generation, large language models, looks set to weaponize loneliness. So, does that make me optimistic or pessimistic? It's neither. This is the time, right. Because, and I will say it is the United States, since both of you were with the government, why did the United States make the same mistake with the first one? Because it isn't necessarily with governance. We are feeling the impact of this. It is with putting a social, a safety net protecting us, creating a Better Business Bureau, so that we're not insidiously manipulated. I have to be optimistic. I have no choice. And you know, look, it's been 6 years now, since President Duterte. My company should have been shut down. I should have gone to jail. But I'm still here, right. So, I'm optimistic.

[Applause]

[Alondra Nelson:] We need you. We need you. We're glad you're still here. So, we have a few, we've got quite a few questions here. Just jump in, whoever wants to open, to answer them. So, there's a wide chasm between AI engineers and everyone else. Policymakers, academics, historians, et cetera. If I were to probably interview an engineer, they likely wouldn't be thinking about the issues discussed here. How can we close the chasm?

[Maria Ressa:] Bring the engineers to the table. I think this is one of the biggest problems, right.

[Speaker:] Or hire BIPOC engineers.

[Alondra Nelson:] BIPOC engineers. Okay.

[Maria Ressa:] BIPOC. Well, yes. But the problem is that here we are, right. This is actually an engineering problem, right. Like, look at the building code for where we are, at the New York Public Library; this was built in 1901. There were building codes. There are no building codes for software. There are no ethics for software. The gatekeepers to this right now, right, like you have two in the United States, two government agencies. But they don't necessarily look at what the software does to us. And then when Apple, if you build an app and you send it to Apple, they're a gatekeeper. It takes normally two weeks before they approve the app. But they don't look at it for safety. Where are those, Google is the other one, and Android, right. Where is that? And that's, the engineers should be at the table. One of the best ways we can, we can curtail it is by making them accountable, bringing them standards and ethics. Yes?

[Tim Wu:] I should say, one comment, I don't know if the gap is necessarily that great. I mean, I think engineers are also a profession. They have ethics. I think engineers have expressed some of the most profound concerns in this area. So, I give them more credit. I don't think, like, engineers are a bunch of soulless evil nerds, who are just, like, trying to build something cause they can do it. I think they care. I think, I think the problem is the incentives are bad. You know what I mean? I don't think engineers want to build things that will decimate the human race. I think they want to build good products that people can rely on. And what I think is, the professionalism that should be in there, many of our professions, law, medicine and so forth, have been corrupted, and have lost their sense of self identity. And frankly, it should be engineers, and in some ways, it is engineers, who are leading the way to make sure that AI is not dangerous.

[Karen Kornbluh:] Can I, I just want to add one thing. And picking up on what the Minister is saying. There are a lot of great uses of AI. And I think a lot of engineers are excited about what it can do, not just for productivity but in healthcare, and in agriculture. And those aren't always going to be the most profitable, profit-making endeavors.

[Alondra Nelson:] Yes.

[Karen Kornbluh:] And so, I think there's another role in governance, in sponsoring. You know, just like in vaccine development, thank goodness that we had the scientific know-how that we did. But did we do as good a job as we could have in sharing with the entire globe? I think we need to think about you know, global funding of R&D, in some of these important sectors. And making sure that, that it's really inclusive.

[Maria Ressa:] But the incentive structure right now for private tech companies is profit. Attention economy. And that's part of the reason the engineers who may want to fix it, I mean, this is in Frances Haugen's, the 10,000-plus documents she released. They knew certain things were wrong. But if you take it away, it brings profits down.

[Alondra Nelson:] Alright. So, here's another question that proposes that the incentive is fear. So, most companies, large and small, working on AI are largely driven by FOMO, Fear of Missing Out. Moreover, like the atomic bomb, they feel that they must be first or they will lose their business, apparently. Is there anything that can be done to stop advancement driven by fear?

[Tim Wu:] That is a great question. It's a deep question. I, whoever wrote that is thinking hard about corporate. Cause sometimes it is even worse than just wanting to profit. It's just this horrendous fear you're going to be left behind, and therefore you're doing, you know, things that are unethical or wrong, or, you know, but you feel this sort of pressure. You got to do it. You know, like doctors who spend 10 minutes with a patient instead of an hour, cause they're afraid they're going to, you know, get in trouble from their private equity owners. So, I think that's getting, I think that's getting deeply into the problem here. And I think, you know, there is a, somehow a, like, it comes back to the beginning, where I said it's extraordinary how many people are, you know, less excited about what the prospects for AI are than I would think. You know, slightly in contradiction to what we said earlier. And I go back to the idea that some of these very sensitive technologies need stronger professional norms, where we're not just driven by a race to get ahead, or this idea a company is going to be lost. I agree. Google has done all kinds of trying to incorporate everything into every product, and it's a very unhealthy dynamic. I guess I'm agreeing with the question, as opposed to solving it. Yeah.

[Alondra Nelson:] That's good.

[Tim Wu:] Yeah.

[Maria Ressa:] Could, on that one is, if you make the companies or the engineer accountable for what they build, right. In the same way, you treat, again, the old: do you treat them as publishers or as tech? Do you reform or revoke Section 230? Do you, what do you do for large language models and copyright, right? These are all things, the releases came out, but there's no accountability for the harms. And I think the minute the impunity stops, you'll see things getting better quickly.

[Alondra Nelson:] Alright. A couple of questions on China. So, let's see if folks want to... Any effective AI global governance regime will have to involve China. What prospects do you see for involving China, given the deteriorating US-China relationship? And a second: how does AI figure into our national defense? And how are we positioned with respect to China and Russia? Anyone want to take those?

[Laughter]

[Alondra Nelson:] No one's jumping up to the mic?

[Karen Kornbluh:] I'll say, I'll say, I'll say a little bit about that. I mean, I think, I think a lot of folks in the US who are very gloom and doom about our technological competitive stance are pleased that the US is ahead in AI. And I think a lot of the export controls and other measures that we've seen come out of the Biden Administration are intent on keeping that lead. Because we see a more aggressive China, from the US point of view. I think it's really important, I think there's this balancing act, obviously, between competing but not getting into a standoff, that we can't have another Cold War. And so, some of the things that we're alluding to, in terms of the upsides of AI, I think we need to work with China on figuring out what those are, and working together on them. But at the same time, we are using it in our military uses. They are using it in military uses. And so, you know, much as we want to think about the rosiest scenarios, you know, we're obviously also going to have to deal with making sure that we have a man in the middle, in military uses, so two drones talking to their robot masters don't start attacking each other and lead us into extinction. And I think that's where global governance has a role too. It's on the transparency and ethics side. But we have to think about some. And then, I think, the three buckets that I see us talking about are sort of the ethics incentives, transparency, research, the upside, productivity. And then a third one has got to be in this worst-case scenario, doom and gloom: how do we keep each other from the most horrible extinctive uses of it? Extinctive, is that a word?

[Alondra Nelson:] Sure.

[Tim Wu:] It's a good one.

[Karen Kornbluh:] It's in the library. I'll invent a new word.

[Maria Ressa:] The tech has been part of information warfare.

[Alondra Nelson:] Yeah.

[Maria Ressa:] Right. And we've certainly seen this in our countries. And in our case, we've seen information operations from China. We've seen information operations from Russia. You certainly have seen them from Russia. So, so that's, that's part of, I guess, we talked about that. Geopolitical power is using information warfare. And that's helping determine the state of balance of the world. Russia invading Ukraine was preceded in 2014 with metanarratives that were seeded, that were used to annex Crimea. And the same metanarrative was used 8 years later for Putin to invade Ukraine itself. In the Philippines, I mean, let me just tweak the stuff cause there's so many and we don't have that much time. But look, I sound so negative about tech. But I love tech.

[Alondra Nelson:] Can I, one thing I think that's so interesting about your trajectory, is that you come to the place you are now by really engaging technology. I mean, that you were there at the foundation, the founding of Rappler. You were an early kind of adopter of social media technology. And so, you had a kind of process of disenchantment actually.

[Maria Ressa:] Well, it's not. I know, so, yes. Absolutely. Definitely. Cause I had the data. This is part of the way we were able to stand up to our government, right. Cause the data showed we were being insidiously manipulated. But beyond that, look, if you pull Rappler up now, you'll see a black button on every article that will pull up GPT-4, that will summarize it in three bullet points. And we limited it so that it is there. Rappler is one of ten companies that OpenAI is working with, to look at how to make LLMs, its GPT, safe for democracy. You have to be part of the discussion. And we don't have to be part of the global north in order to be part of it, but we have to embrace the tech. But we also need to call out the tech. It is back to accountability, right. Lots of money. There's lots of money. We need to make impunity stop.

[Alondra Nelson:] Okay.

[Applause]

[Alondra Nelson:] Okay. So, last question. Given the sort of commitments that many of us have to engaging the public in this conversation, and this is about how to engage the public further. How do you see social, local, or grassroots movements playing a role in shaping global rules, norms, and accountability around AI?

[Tim Wu:] I want to take that question in a slightly different direction. And slightly react to some of what's been said, in the sense that I don't know why we should be asked whether we're pro tech or anti tech. You know, I'm in favor of good tech and against bad tech. And the tech that I think is good empowers humans, and also makes possible economic decentralization, and gives a lot of economic power. The tech I think is bad disempowers people and concentrates wealth, and concentrates power in unaccountable forms. So, you know, I don't know why, good or bad tech. And I guess that's what people should be demanding of AI. I don't have a particular prescription for this. But I think that they are naturally resistant. And I think a lot of the resistance to AI comes from the sense that it is, as it's coming out, a technology that seems kind of human disempowering, that seems to be leaving us on the sidelines for a lot of stuff, at least in what we see. Now maybe that will change. And also, it seems to be wealth and power concentrating, as opposed to decentralizing. So, you know, in terms of what the public wants, it needs to figure out what it wants. And I think it wants technologies that empower more of us. And you know, maybe like you were talking about, the E-Rate, or the telephone, or, you know, the technologies that have been great technologies, books, the printing press, and so forth. And, you know, maybe get these great engineers and talents to put their energies and resources towards something that works for more people.

[Alondra Nelson:] Thank you for that. So, Minister Tijani or Ambassador Kornbluh, that question or last remarks from either of you.

[Olatunbosun Tijani:] For what I represent, I think the priority is really mainstreaming the conversation. And ensuring that we can articulate how we're going to be engaged in this. And there's urgency around that. We don't even have 12 months. The pace of development in this space means we're going to be in deep trouble if we're not participating. And so that means mainstreaming it. I've just been in this position for 4 weeks. And my priority is actually, the first thing I did was to co-create. So, we designed a model that identified, you know, the top 1,000 Nigerians all over the world that are participating in AI research. And we're looking for ways to engage them, to help shape how we think about it. So, we're taking a co-creation approach. And that's the urgent thing that I believe we need to be doing.

[Alondra Nelson:] Thank you.

[Karen Kornbluh:] Yeah, I really like what Tim just said, about we don't want to be pro tech or anti tech. It's tech for what. Tech for humans, and human flourishing, and human individual rights. And so I think, if we can help, I think we as maybe experts talking to the public, if we can move away from this binary that, you know, tech is bad or tech is good, but instead say that we can get more of the good and less of the bad. But to do that, we need to be willing to work together, which is a really big ask. And so, I think we need to, I think, I want to go back to what we haven't talked about enough, which is the AI Bill of Rights that you led on. Because I just thought that was such a great example of the government opening up a conversation. And saying, let's really dream. What would we want, to make more of the good and less of the bad? And let's have a conversation. And let's think about how we can implement this in the private sector and civil society, through regulatory agencies. How should we go about it? And so, I hope we're in the middle of that, and that we can all keep doing it.

[Alondra Nelson:] Yeah. Thank you for that. So, let's build the world we want, yeah? Thank you, Ambassador Kornbluh, Minister Tijani, Professor Wu, the great Nobel laureate Maria Ressa. Thank you for joining me.

[Applause]

[Aidan Flax-Clark:] Okay. That was, again, Doctor Alondra Nelson, speaking with Karen Kornbluh, Doctor Olatunbosun Tijani, Tim Wu, and Maria Ressa. You can learn more about the Institute for Advanced Study's AI Policy Working Group by going to ias.edu/aipolicy. That was very alphabety. And if you want to read more about AI, well, grab your library card. Head over to nypl.org or your nearest branch, and see what we've got there that piques your curiosity. I'm sure you'll find something. And in the meantime, thanks for listening to Library Talks, which is produced by Christine Farrell with editorial support from me. Our theme music was composed by Allison Layton-Brown.

[Music]