
Is AI an existential threat? Yann LeCun, Max Tegmark, Melanie Mitchell, and Yoshua Bengio make their case

Podcast & Video

The following is a Hub exclusive: a series of interviews with Yann LeCun, Max Tegmark, Melanie Mitchell, and Yoshua Bengio, professors and scientists who recently participated in the Munk Debate contesting the following resolution: “Be it resolved, AI research and development poses an existential threat.” These interviews were conducted by The Hub’s executive director, Rudyard Griffiths, and were recorded ahead of the debate, which was held on June 22nd, 2023.

You can listen to this episode of Hub Dialogues on Acast, Amazon, Apple, Google, and Spotify. The episodes are generously supported by The Ira Gluskin And Maxine Granovsky Gluskin Charitable Foundation and The Linda Frum & Howard Sokolowski Charitable Foundation.

RUDYARD GRIFFITHS: Yann LeCun, welcome.

YANN LECUN: Thanks for having me.

RUDYARD GRIFFITHS: I want to begin our conversation by asking you to give our listeners a sense of the key argument that you want to make tonight. If there’s one point that you want audience members to leave with impressed upon their minds as we debate this motion, “Be it resolved: AI research and development poses an existential threat,” what is that thing?

YANN LECUN: The main point I will try to convince the audience of is that this is just another engineering problem. Making AI systems safe is similar to making turbojets safe. And at least some of us may have difficulty imagining how we can make AI systems safe today because we have not yet invented the architecture for AI systems capable of human-level intelligence. So how can we make it safe if we haven’t invented it yet? It’s like asking for turbojets to be safe in 1930.

RUDYARD GRIFFITHS: Is there a chicken and egg problem here? As this technology develops, I sense from your writing and some public comments that you might be concerned if there was something that emerged, which was an AGI, a general intelligence, that might make you more worried about potential existential risks. How do we get to that point without crossing over it, without the controls that we need to avoid that risk?

YANN LECUN: Okay, so there’s a fallacy behind the statement, which is that AGI is not going to just emerge. For this to exist, we’ll have to build it, and we’ll have to build it quite explicitly for it to have superior intelligence. So this is not something that is just going to happen; we have to accept that we are building it, we have agency in building it, and of course we can build it to be safe. If it turns out we can’t build it to be safe, we’re just not going to build it. And it’s not going to be an event. It’s not going to be like we’re going to turn on a machine one day and it’s going to be super intelligent and then go way beyond human intelligence instantly. That’s science fiction. What’s going to happen is that we’re going to make systems that have similar capabilities but are considerably less intelligent than humans. And they’re going to be in a sandbox on a computer, where we can run the program or turn it off, and then we’ll progressively build them in such a way that they can get smarter and smarter.

RUDYARD GRIFFITHS: It is fascinating to have this conversation with you because, as a layperson, there’s a perception that this technology is somehow emergent. That it has the ability to iterate and reiterate itself somehow independently of us. So, therefore, something like AGI could be a phenomenon that could emerge inherently from the system. You’re telling me that that is sci-fi, and I should be leaving that on a bookshelf at home?

YANN LECUN: Exactly. It’s good sci-fi, perhaps. But it’s not the way things work in engineering. It’s hard enough to get the current top AI system to run without crashing for an hour. So it’s not like they’re going to take over the world and replicate themselves, and become more and more intelligent, at least not anytime soon. And then, when we build systems that have the potential capability of having superhuman intelligence, they will not have the type of drives that humans have. Humans have a drive to interact with other humans, perhaps to have relationships of dominance or submission, because we are a social species with a hierarchical organization. Evolution built us this way, the same as baboons and chimpanzees, but not the same as orangutans. Orangutans don’t have any desire to dominate anybody. So we’re going to build machines to be subservient to us and have no desire for domination whatsoever.

RUDYARD GRIFFITHS: Another argument we’re going to hear in this debate, no doubt, is that these machines, these intelligences, however you want to characterize that, could empower bad actors. So there are lots of people in the world who want to do horrible things, and having really helpful, potentially powerful intelligences to allow them to further their goals creates an existential risk. It diffuses this power out to actors who may have interests which are very antithetical to yours and mine. Is that a legitimate concern?

YANN LECUN: It’s been a legitimate concern since the invention of technology, probably since the first time a proto-human took a piece of bone in their hand or something. It’s a famous scene from 2001: A Space Odyssey. So yeah, sure, that’s a danger. But we’ve had countermeasures for such bad actors since day one. And then it’s a question of: if the bad actors have powerful AI, the good people, who are considerably more numerous and better funded, will have more powerful AI. An interesting thing to note at the moment is that the best way, on the internet and social media, for example, to take down dangerous content, violent content, terrorist propaganda, hate speech, this kind of stuff, makes massive use of AI. In that case, AI is the solution, not the problem.

RUDYARD GRIFFITHS: Fascinating stuff. If we think about where we’re at now and how quickly it seems, again, to a layperson, that this technology has evolved—you’ve been at the coalface, I know, for a couple of decades now, working on this, so I want your perception. Have you been surprised over the last 18 to 24 months at the sophistication of the systems that we’re now seeing, like GPT-4? To what extent do you feel that we’re going to plateau in a place that has AIs of the current power that we’re seeing? I guess what I’m trying to get a sense of from you, Yann, with all your knowledge and expertise, is this: are we going to see a hockey stick, a graph ramping up and taking off into the future? Or should we be maybe a little more modest and less hyped about these technologies and their inherent power?

YANN LECUN: Well, we’re going to see what we’ve seen so far in technology and computer technology, which is continuous exponential growth until we reach the physical limits, for example, of fabrication technology, in which case the exponential starts levelling off and turns itself into a sigmoid. And every process in the real world, even if it initially looks like an exponential, eventually saturates. So when is that going to happen? I don’t know. It seems to be going on for longer than we expected initially. That said, in terms of—I can understand why the public has been surprised by some of the progress because, to the public, you have singular events of a product being made available.

For people like us in research, those surprises are considerably less prominent, if you want. And they happened two or three years ago. We’ve seen enormous progress in natural language understanding due to transformer architectures, a particular type of neural net, and self-supervised learning, just the basic idea of training a system to predict the next word, which is really what those dialogue systems do at the moment. And then there’s the fine-tuning to get them to answer questions. There have been systems of this type for a long time. So it’s a bit of a continuous progress, and frankly, not particularly surprising for us from the conceptual point of view, even if in practice, when we train them to be very large, there are properties that are somewhat unexpected but not entirely surprising.
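For readers who want to see what “training a system to predict the next word” looks like in practice, here is a minimal, illustrative sketch in PyTorch. It is not the architecture of any production system LeCun describes; the model size, vocabulary, and random data are toy assumptions chosen only to show the self-supervised next-token objective.

```python
# Minimal sketch of the self-supervised next-token objective (illustrative only;
# sizes, data, and model are toy assumptions, not any production system).
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, CONTEXT_LEN = 1000, 64, 32  # deliberately tiny

class TinyAutoregressiveLM(nn.Module):
    """A very small causal transformer language model."""
    def __init__(self):
        super().__init__()
        self.tok_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.pos_embed = nn.Embedding(CONTEXT_LEN, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.tok_embed(tokens) + self.pos_embed(positions)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.encoder(x, mask=mask)
        return self.head(hidden)  # logits over the vocabulary at every position

model = TinyAutoregressiveLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy batch of random token ids; real systems train on text from a huge corpus.
batch = torch.randint(0, VOCAB_SIZE, (8, CONTEXT_LEN + 1))
inputs, targets = batch[:, :-1], batch[:, 1:]  # shift by one: predict the next token

logits = model(inputs)                                    # (batch, seq, vocab)
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()
```

The fine-tuning LeCun mentions, getting the model to answer questions, typically reuses this same objective on curated question-and-answer text, with additional feedback steps on top.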

RUDYARD GRIFFITHS: So that leads me to a line of questioning that will emerge, no doubt, in this debate about the future. And again, it’s very hard, we shouldn’t predict the future, but would there be things-

YANN LECUN: You should build a future.

RUDYARD GRIFFITHS: Let’s build it. But if you were building that future and you—what things would you have to see for you to become concerned that the technology was doing something that, for instance, your fellow Turing Award winner, Geoffrey Hinton, seems to see now? He seems to see something going on now, which has made him concerned enough to speak out in a very, very public way about the need for immediate restrictions on research and development. What would you need to see to become concerned?

YANN LECUN: So Geoff is an old friend. I worked with him in Toronto, actually, many years ago, in the late ’80s. And I think he went through some sort of epiphany about two or three months ago where he realized that the progress was faster than he thought and human-level intelligence was closer than he thought. He thought it might be 50 or 100 years in the future, and now he thinks it’s maybe 20 years or something like that. So all of a sudden, he started thinking about consequences that he wasn’t worried about before. But some of us have been thinking about this for a long time and don’t have the same opinion as to whether intelligence necessarily comes with a desire for dominance, whether a system that is intelligent will necessarily dominate. I do not believe in this concept at all. I think it’s not even true of humans. It’s not the smartest among us who want to be the leaders, generally. We have plenty of examples on the international political scene. So I disagree with him. I’ve disagreed with him on a number of different things. I also disagree with my friend Yoshua Bengio, who is really more concerned about more immediate threats than about a superintelligent system taking over the world.

RUDYARD GRIFFITHS: We’ll leave that to Max Tegmark, your other debating opponent tonight. It sounds, talking to you, Yann, as a scientist, that you have a lot of confidence that your fellow scientists are going to develop this technology responsibly. There are, though, other episodes in the development of dual-use technologies where scientists want to do things, they want to innovate, they want to discover, and that often leads to doing something maybe when you’re not supposed to do it. There are all kinds of interesting examples from the Manhattan Project where they took, in some ways, incredible risks. Luckily, those risks didn’t lead to the consequences they feared, but there were small probabilities, or at least they thought there were small probabilities, that there could be existential risks from the early experiments on the atomic bomb. To what extent do you feel the scientific community going forward is going to be responsible about existential threats? Is there a danger here that we just completely rule them out? We say that’s not part of this technology, and therefore we’re not thinking about it as we’re developing the technology.

YANN LECUN: I mean, obviously, we have to develop technology in ways that make it safe. And in fact, a good part of the effort in developing technology that gets deployed goes into actually making it safe. I was using the example of turbojets before. An enormous amount of resources has been put into making turbojets incredibly safe and reliable, which is why we can fly with complete peace of mind. So I think it’s going to be the same. It’s going to be a difficult, arduous engineering project to make machines that are helpful. If they’re dangerous, people will not want them. And so there is no incentive to build dangerous machines, except if you have bad intentions. But this is not going to happen by default, just because we are careless or anything like that, right?

We generally want to develop things for the benefit of humanity. And if we realize it doesn’t go that way, we just stop. Let me take a very simple example. In the 1950s, people seriously thought about building nuclear-powered cars and nuclear-powered rockets. There was actually a big project funded by the U.S. government for that, called Project Orion. The promises were incredible, but all those projects were stopped because it’s just too dangerous to have nuclear energy going around everywhere, both for radiation risk and for various proliferation questions. So it never happened. There are many examples of this, of technology that was initially promising and basically was not deployed for safety reasons. So perhaps this is what will happen with AI. I’m pretty confident this is not the way it’s going to happen, that there will be a way to make it safe. In fact, I’ll talk about this, and how you can do that.

RUDYARD GRIFFITHS: Great. Final question. Which, if any, of your opponents’ hypothetical arguments would you give the most credence to? Is there one that makes you think again or think twice, or where, in the middle of the night, you wake up and reach for a pen and paper when the issue comes to mind?

YANN LECUN: Well, I mean, certainly, the question of how you design AI for safety is one that I’ve given a lot of thought to, and in fact, I came to a potential solution, which we haven’t built, so we don’t know if it works. But certainly, if you take current AI systems such as the ChatGPTs of the world, autoregressive LLMs, they are intrinsically unsafe. So if you have the belief that by just scaling them up you’re going to reach human-level intelligence, and I don’t believe that’s the case, I think we’re missing essential components for that, then those systems will be intrinsically unsafe. And I think, at some point, we’ll abandon them. My bet is that within five years, those autoregressive LLMs will disappear because they’ll be replaced by things that are more controllable, more steerable, better; perhaps they can reason, they can plan, which the current systems aren’t capable of. So yes, it occupies my mind, but I have a solution. So I’m not worried.

RUDYARD GRIFFITHS: Amazing. Well, Yann LeCun, thank you so much for coming to Toronto to be part of this really important conversation. We really appreciate you accepting our invitation.

YANN LECUN: A real pleasure.


RUDYARD GRIFFITHS: Thanks for listening to these conversations that I’m having ahead of the June 22nd Munk Debate with all four of our main stage presenters. Up next is Max Tegmark, who is arguing for the motion, “Be it resolved: AI research and development poses an existential threat.” Max is a world-renowned professor at MIT, where he currently studies physics-based techniques to better understand biological and artificial intelligences. His impressive body of research and bestselling books have really set him apart as one of the leading scientific minds of his generation. You’ve possibly seen his name in the news over the last few months. He led, with Elon Musk, a public call for a moratorium on AI research and development, supported by leading researchers and companies working in the field. Again, the next voice you’ll hear is mine in conversation with Max Tegmark.

Max Tegmark, welcome.

MAX TEGMARK: Thank you.

RUDYARD GRIFFITHS: Thank you so much for accepting our invitation. You were first into the pool here in Canada. The water’s often chilly. It takes a brave man to dive off the high board, but we’ve got a bunch of other great thinkers joining you for this important conversation on AI.

MAX TEGMARK: Thank you. Well, I’m originally from Sweden, so I consider Canada to be warm and balmy.

RUDYARD GRIFFITHS: I want to begin by giving our listeners a sense of how you’ve come around to believe that there is an existential risk associated with the development of artificial intelligence. Was this a moment of insight that came to you? Is this an accumulation of a series of studies or inquiries that you’re involved in? Tell us that story.

MAX TEGMARK: It was always pretty obvious to me that if we ever did build AI that was vastly smarter than us, that we could, in principle, lose control over it and get wiped out. That’s not very profound. What gradually really got to me, though, and unfortunately surprised me negatively, was that we did so little as a species to prevent this and decided to just go full steam in the wrong direction. The idea that intelligence gives power and that we could lose power to other entities if they’re way smarter than us is so old that even Alan Turing himself, one of the founding fathers of AI, wrote in 1951 that that’s the most likely outcome: that we will gradually lose control. And for that reason, many thinkers have been, for years and years saying, “We need to proceed with caution so that we can actually, despite that, keep control over this tech. Don’t connect powerful systems to the internet. Don’t teach them how to code. Don’t teach them how to manipulate humans, et cetera.”

Then what’s happened more recently is two things: One, it turned out to be easier than we thought to build AI that can pass the Turing test and get very close to exceeding us. And second, the commercial pressures have just thrown all our wisdom out the window. We’ve already connected these to the internet, “Hey, let’s build a chatbot.” And we’ve taught AI how to manipulate people by letting them read everything we’ve written on the internet and figure out how to manipulate us into clicking more on social media. And we’ve taught GPT-4, for example, to code really, really well. So this idea of self-improving AIs feels a lot less abstract now. And there’s even a risk denialism I honestly wasn’t expecting when I wrote my book Life 3.0 some years ago.

I have a story in there about how some people take over the world with AI. And they do it in a very sneaky, clandestine way because I couldn’t, in my wildest dreams, imagine that society would just let companies openly say, “We are going to build superhuman AI,” and sit back and watch these companies do it. But that’s exactly what’s happened. And I also hadn’t, in my wildest dreams, thought that there would be so much denialism, like in the movie Don’t Look Up, where people are like, “Oh no, it’s going to be fine.” We have no idea how these systems work, but we shouldn’t worry. We have no idea how soon we’re going to get superhuman AI, but I’m so sure it’s far away that we shouldn’t worry. And even actively snarky dismissal of people who warn about it. I didn’t see that coming, honestly.

RUDYARD GRIFFITHS: So let’s build on that with your debating partner tonight, Yann LeCun. He has said publicly that he thinks the current AI systems have the equivalent of a rat brain. And he does not feel that AGI, artificial general intelligence, and certainly some breakout to superintelligence, is anywhere within the scope of a reasonable timeframe that we can establish through scientific inquiry or through a sense of the development of the field itself. You feel differently. Why is that?

MAX TEGMARK: Well, first of all, I think Yann LeCun has shortened his timelines a bit, but he should speak for himself. And rats are very cute and very smart, but they cannot translate English into Chinese or take your favourite lullaby and turn it into a sonnet the way GPT-4 can. Rats have not mastered language. There are plenty of ways in which AI systems already exceed not just the capability of rats but of humans as well in a lot of domains. And most AI researchers thought even three years ago that mastering language and passing the Turing test was decades away, and it turned out they were all wrong because it’s already happened, right? I feel that we really need to win this race between the growing power of the AI and the growing wisdom with which we manage it. And what’s been disappointing to me is that the race has gone poorly for humanity in both ways. The power has grown faster than we thought, and the wisdom has grown slower than we thought. That’s why I’m one of many who have called for a pause so we can catch up with regulation and things like this.

RUDYARD GRIFFITHS: Paint for us a picture, because at the core of this debate is the idea of an existential risk. And it’s a big word, but it generally encompasses something that either results in the end of human civilization or the inability of human civilization to recover from a setback and return to its previous course or trajectory of development. Do you have a feeling of what that existential moment could look like when it comes to AI?

MAX TEGMARK: Yeah, I think there are three basic ways in which we could get wiped out by AI. The one that’s talked about the most, even in science fiction, is rogue AI. So you have an AI system that has a goal that it relentlessly optimizes, which just turns out not to be well aligned with our goals. When we humans have driven other species extinct in the past, it’s usually been this: we wanted to make more money, so we chopped down the rainforest, and more as a by-product, we drove extinct some species that lived there. A second way is malicious use. Even if we figure out how to make AI safe in the sense that it will always obey its owner completely, there are a lot of people who actually want to do harm. This might sound very strange to you as a Canadian because Canadians are so nice, but in the U.S., we have a lot of people who commit mass shootings where they literally want to kill as many people as possible. And if you gave someone like that a super-intelligent AI, many of them would probably tell it to just kill as many people as possible, and it can end very badly. So that’s not the AI being unaligned; it’s the human being unaligned.

A third way, which is remarkably little discussed, even though it’s the most obvious, is that we just get outcompeted, because, by definition, superhuman AI is better than us at doing all jobs and other economic activity, right? So companies that choose not to replace their workers with AI will be outcompeted by those that do. And it’s not just that you replace the jobs but also the decision-making. So companies that choose not to have an AI CEO get outcompeted by those that do. Militaries that choose not to have AI generals get outcompeted by militaries that do.

RUDYARD GRIFFITHS: Or AI missiles.

MAX TEGMARK: Or AI missiles. And countries that choose to not have AI governments get outcompeted by countries that do. And we end up in a future where, although there was a lot of stuff happening, a lot of economic activity, it’s not our future anymore because we lost control. And that’s the first step for a species to go extinct: to lose control. And moreover, we’ve then also ended up being economically useless because they, by definition, didn’t need us for anything. And if we’re not needed and not in control, it’s pretty obvious that that could end badly.

RUDYARD GRIFFITHS: Yeah. That point about agency is the one that I think about the most: how does this technology potentially displace human agency at an individual level and at a societal level? If you have powerful, efficient decision-making machines that are optimized and are proving themselves to generate better outcomes, why would you ever make a choice? Why would you ever choose A, B, or C? You would just choose the machine.

MAX TEGMARK: Yeah. And the creepy thing about this is we can already see this beginning to play out in our society, right? Where ever more decisions are delegated from humans to machines.

RUDYARD GRIFFITHS: Another argument you’re going to hear in this conversation, no doubt, is that we’ve developed other existential technologies: atomic weapons, biological weapons, and we’ve now got this great thing called CRISPR that can do gene editing in small labs with minimal amounts of training, and people aren’t having the same conversations about those technologies that we’re having about AI right now. Why is that?

MAX TEGMARK: Some people actually are. Some scientists I know are quite freaked out about both of those, but they’re not listened to as much. That’s certainly true. The basic challenge we face as a species is the same: when the power of the tech outpaces the wisdom with which we manage it, things don’t go well. I think biologists, with their CRISPR and so on, have so far been the ones, out of the different disciplines, that have handled their challenges the best. They decided to get together and ban biological weapons in the ’70s. They decided there are some things you could make a lot of money on, like making clones of you, that they just said, “That’s too risky for our species. We might lose control, so let’s not do it.” And in biology also, companies aren’t allowed to just come up with some new medicine and start selling it in supermarkets. You have to first persuade experts at the FDA or the Canadian government’s equivalent agency that it is safe before you can do it. And I think that’s why biotech is really thriving as an industry, and people think of it mainly as bringing positive things to the world. Whereas, in AI, it’s just total wild west right now. The three—

RUDYARD GRIFFITHS: Could that change, though? Because the argument from the industry is like, “We’re in the early days here; we’re committed to safety.” A lot of these companies have these safety units or committees, or boards, and they’re saying, “Look, we’re going to get there, trust us. We can develop this using a lot of the insights from genetics or these other technological threats where the threat has been mitigated.”

MAX TEGMARK: That’s exactly what biotech companies said in the ’50s, also. And then there were so many scandals where they nonetheless killed a lot of people with some medicine that turned out to be very dangerous that policymakers said, “Okay, enough of this self-regulation, we are creating the FDA, done.” And tobacco companies—all companies—are always going to say, “Trust us. Let us self-regulate.” It’s also complete nonsense to say these are early days, because there are many people who think we’ll have superhuman intelligence in two years, or maybe even next year.

So this is very imminent stuff. We can’t afford to futz around for 10 more years of the companies doing whatever they want. These different powerful technologies are also linked, because an AI can, in nine hours, do one year’s worth of research. And if someone tasks it with developing a new bioweapon that is going to kill all humans, it can figure out how to do that in a way that would take way, way longer for humans to do. Basically, for any other technology that can be discovered, the fastest way to discover it is going to be using AI. So if we can’t control the AI, we can’t control any other tech either.

RUDYARD GRIFFITHS: Right. Final question I’m asking all the debaters: is there one argument on the other side of this debate that you would give the most credence to that would cause you to think about your own assessment of AI as an existential risk? Is it solving for the alignment problem? I guess you’re not too optimistic about self-regulation, but is there a piece of the other side of this debate that you think could be built on to avoid that existential outcome?

MAX TEGMARK: Good question. Well, my pessimism now is largely because society and many researchers have been so flippant and dismissive of the risk. This, I think, is actually changing in a really encouraging way, right? You recently had this statement with lots of famous researchers calling for a pause, and even more recently, a who’s who in AI saying that AI could cause extinction, signed not just by people like Geoff Hinton and Yoshua Bengio from the academic side but also by the CEOs: Demis Hassabis from Google DeepMind, Sam Altman from OpenAI. And this, I think, is extremely encouraging because, for the first time now, I think it’s likely we’re going to see a lot of the wisdom development happening, which makes me more hopeful. And I don’t want to come across to you as some sort of gloomer who thinks all is lost either. The reason why I’m so adamant about talking about this is because I think there still is hope to have a really amazing future, not just for our kids but for all future generations, with advanced AI that we control, right? AI built by humanity for humanity. And the reason I’m so motivated to work on this is because I don’t want to squander all this upside.

RUDYARD GRIFFITHS: Yeah. Well, there’s a lot of upside to having you as part of this debate, Max. Thank you again so much for coming to Toronto to be part of the conversation.


You’re listening to a series of conversations that I had, as your executive director, with the debaters appearing at the Munk Debate on artificial intelligence. Up next is Melanie Mitchell. She’s a full professor at the Santa Fe Institute in Santa Fe, a world-leading research centre for complex systems science. Her fields of research include artificial intelligence and cognitive science. You may have caught some of her recent bestselling books. They’ve been instrumental to me and others trying to understand AI from a layperson’s perspective. Her most recent bestseller is Artificial Intelligence: A Guide for Thinking Humans. She was joining the Munk Debate on AI to argue against the motion, “Be it resolved: AI research and development poses an existential threat.” The next voice you’ll hear is mine in conversation with Melanie Mitchell. Melanie Mitchell, welcome to the Munk Debates.

MELANIE MITCHELL: Oh, well, I’m excited to be here.

RUDYARD GRIFFITHS: Thank you for making the trip today to Toronto for this important conversation. So much going on right now in terms of, frankly, a lot of media hype. I think a lot of people are inundated with news and information about AI, trying to sort through what’s fact and what’s fiction. What do you think we’re currently missing in the conversation as it’s being presented publicly?

MELANIE MITCHELL: I think we’re missing a lot of the nuances. We hear stories about what some of these large AI systems can do, but often they’re told in a way that doesn’t tell the whole story. So recently, in the New York Times, we saw a report on existential risk that said GPT-4 had hired a human worker to help it solve a CAPTCHA task.

RUDYARD GRIFFITHS: Right. It said it was, I guess, visually impaired.

MELANIE MITCHELL: And lied to the human worker. That was what was reported. But if you actually dig into what happened, and I did for my Substack, it turns out that’s not at all what happened. GPT-4 was being guided via prompts from a human. It couldn’t hire anyone. It couldn’t even access the web.

RUDYARD GRIFFITHS: Right, yeah. How would that happen?

MELANIE MITCHELL: And the human was doing everything—typing everything in. And the human said, “What if you want to solve a CAPTCHA? How would you do that?” And then it gave some response, and the human said, “What about using a TaskRabbit worker?” And so I think these things are being reported in a way that emphasizes the hype and doesn’t really tell the real story of how much humans are involved in what AI can do.

RUDYARD GRIFFITHS: And where do you think this comes from, Melanie? Is it just too many decades of science fiction and bad movies? I mean, it does seem as if we’ve jumped to some conclusions here about this technology, about its impact, about just how transformative it is.

MELANIE MITCHELL: I think there are a lot of things going on. Science fiction is part of the way that we frame what we expect from AI. But also, I think now we have systems that can communicate with us in natural human language. It’s really hard not to see them as thinking. We humans are just programmed to project intelligence, agency, goals, and whatever it is that we project onto other humans; we project it onto these systems even if it’s not really there.

RUDYARD GRIFFITHS: Fascinating. Help our audience understand, because that’s a really interesting point, that we’re imbuing these machines with a lot of human characteristics that maybe demonstrably just aren’t there. What are the differences between human intelligence, which we understand imperfectly at best, and what machine intelligence is? Because I think we’re often conflating the two, and we’re assuming somehow that we can know what it is and it can know what we are.

MELANIE MITCHELL: Yeah, exactly. I mean, we humans have bodies. We interact with the world. We interact with each other. We’ve actively intervened on the world from the time we were babies to try and see how the world works. And so we have a very rich, deep understanding of basic things in the world and basic things about other people that often are not very well expressed in language. They’re not on Wikipedia or on the other sites that large language models are trained on. So there’s a lot of knowledge about the world that’s not in language, that humans have experienced directly. And the language models don’t have those kinds of experiences. They don’t experience the world; they are only passively trained on language. So I don’t think they can have the kind of knowledge that we have.

RUDYARD GRIFFITHS: The key part of this debate is the contention that AI research and development poses an existential risk. We purposely did that; it’s a high bar, an existential risk. I think you can look at different definitions, but generally, it’s either the end of human civilization as we know it, or human civilization so degraded and knocked off course that it can never regain its previous trajectory of progress. I sense that you’re in some ways in this debate, and interested in this debate, because you feel that that contention is just wrong.

MELANIE MITCHELL: Yeah, I mean, you never know what’s going to happen far in the future. So if someone asked me, “What’s AI going to be like 500 years from now?” There’s no way I could possibly answer. Imagine 500 years ago even talking about that kind of thing. But right now, we know that these systems have no agency of their own. They have no desires. They don’t want anything. They’re machines. They’re tools that we use. I think we have to talk about us using technology in harmful ways, and that happens all the time with humans. But talking about an existential risk—something that’s going to essentially kill off all of humanity—is such an extreme that we have to be really careful in putting forth that scenario, because I think that can really be harmful to the way people think about AI and could even wipe out some of its potential benefits for humanity.

RUDYARD GRIFFITHS: One of the contentions about why this conversation has emerged is that AI scientists, the community itself, like a lot of different communities, believes that it’s doing something pretty exceptional. And exceptionalism sometimes leads to dark places: a view that what you’re involved with is so important that it could literally change the fate and future of humanity. Do you think maybe the underlying story beneath all of this is something very, very human, that a certain group of people are imbuing what they’re doing with an essentialism that just isn’t frankly warranted?

MELANIE MITCHELL: I do agree with that. I think that that’s something that people want to believe, that their work is impacting the world. And in AI, there’s no question AI is impacting the world, but I think some people can take that to such an extreme that they can believe their own speculations about what might happen too strongly. And it’s also great marketing to say that your AI system is powerful enough to destroy humanity.

RUDYARD GRIFFITHS: Well, it’s certainly helped the stock prices of a lot of these companies over the last six months. At the Santa Fe Institute, a big part of what you study is complex systems. People in that field—we can think of Nick Bostrom and others—have argued that complex systems are dangerous, that they are fragile, that they have all kinds of unintended consequences that we cannot anticipate or figure out. And in fact, in that context, AI has been singled out as an exemplar, alongside atomic weapons or the risk of CRISPR and genetic engineering, as another complexity that we’re introducing into an already overly complex, fragile, and taxed world. Would you buy into the sense that AI could potentially have a negative amplifying effect on all the other stressors that are confronting us now, and therefore it could be existential, the proverbial straw that breaks the camel’s back?

MELANIE MITCHELL: I think it could magnify disinformation, for example, in an already polarized society. I think it can have a lot of harms in magnifying biases and perhaps even disrupting economies. But I don’t think any of those rise to the level of existential. And I think it’s really important to point that out because we have to be very realistic and understand what the risks are. The other side of that, the other side of what complex systems tells us, is that complex systems are fragile, but they’re also resilient. Our society, our institutions, our technologies give us these layers of complexity that protect us in some way against any sudden shock. And I don’t think, for that reason, that we’re going to see AI as an existential threat.

RUDYARD GRIFFITHS: Right. Like the Mayan civilization, which goes through repeated crop failures, and their entire way of life basically falls apart as a result of climate change. We’ve got more robustness and redundancy built into the system. Final question, about agency. Some people have argued that one of the existential dangers of AI isn’t some doomsday scenario of robots attacking us, the Terminator movies. It’s rather a subtle but relentless loss of human agency, in that AI systems will simply produce better outcomes. They’ll produce better outcomes for corporations, for individuals, for governments. And people will increasingly delegate what would have been human decision-making, human thought, and human agency in action to a machine, a program, a series of zeros and ones. And as that process accelerates, and as those individuals, governments, or corporations that adopt these technologies perform better than the ones that don’t, we end up in a world where everything is decided for us but not by us.

MELANIE MITCHELL: I don’t believe that scenario will come to pass because, as I said, humans are such a big part of what AI is. And I don’t think in any near future that that’s going to stop. That AI is going to become capable enough and autonomous enough to replace every aspect of what we do in our work life, in our entertainment, and any other aspect of our lives. So I just don’t see any evidence for that.

RUDYARD GRIFFITHS: It’s not going to make us extinct, or, I mean, not extinct physically, but extinct as creative selves, producing a post-human world?

MELANIE MITCHELL: It’s not going to be a post-human world. I don’t believe that.

RUDYARD GRIFFITHS: The singularity is not going to happen. Okay. Ray Kurzweil, if you’re listening, we wish you the best. Final question we’re asking all of our presenters today: which argument on the other side would you give the most credence to? If you had to think of the various cases that will be put forward for an existential risk, which one do you worry about? Is it the bad actor who adopts this technology and then uses it to amplify their ability to cause harm? Is it a miscalculation, and that AGI, artificial general intelligence, suddenly comes upon us faster than we expected?

MELANIE MITCHELL: I think the bad actor scenario is the only plausible one, in my opinion. And we certainly have seen bad actors use technology to do very harmful things. The question is, does AI have some kind of unique and special threat associated with it that isn’t already in our technologies? Is AI something so new and so powerful that that’s going to give humans some new, incredibly powerful way to destroy humanity if they want to? And I don’t believe it. And it’s partially because of this thing we talked about, about the resilience of society, of all the things that—all the layers of complexity such a thing would have to get through.

RUDYARD GRIFFITHS: Yeah, I often think of CRISPR and genetic engineering; that’s a similarly, increasingly distributed technology that could do really, really bad things if people knew how to use it. And now smaller and smaller groups and labs with less and less sophistication can access that technology. But we’ve found ways, seemingly, up to now, to regulate it and to take an approach towards that technology which hasn’t turned it into the threat that many people thought it was when it first emerged. Would you agree with that analysis?

MELANIE MITCHELL: Yeah, I agree with that, and I think we will do the same with AI. I’m very optimistic about our governments trying to regulate AI and now thinking very deeply about what are the best ways to do that.

RUDYARD GRIFFITHS: And corporations themselves actively advocating for regulation, which is interesting and—

MELANIE MITCHELL: Advocating for it, but sometimes on the other side, lobbying against it.

RUDYARD GRIFFITHS: That’s always true. Well, Melanie Mitchell, thank you so much for coming to the Munk Debates. We really appreciate your analysis and insights, and it’s just terrific to have you as part of the conversation.

MELANIE MITCHELL: Thanks so much for having me.


RUDYARD GRIFFITHS: Thanks for tuning into these conversations with some of the world’s leading thinkers on AI. They were conducted by me, your executive director, Rudyard Griffiths, just before the June 22nd Munk Debate on AI. Our final speaker in this four-part series is Yoshua Bengio. He is considered one of the world’s leading experts in artificial intelligence, known for his pioneering work on deep learning. Like Yann LeCun, he has won the prestigious Turing Award for his contributions to computer science. He’s a full professor at the Université de Montréal, the founder and scientific director of Mila, Quebec’s AI institute, and a driving force behind the Montreal Declaration for the responsible development of artificial intelligence. He was appearing at the debate to argue in favour of the motion, “Be it resolved: AI research and development poses an existential threat.” The next voice is mine in conversation with Yoshua Bengio.

Yoshua Bengio, welcome to the Munk Debates.

YOSHUA BENGIO: Thanks for having me.

RUDYARD GRIFFITHS: Well, again, great to have a Canadian on stage here with our international panel, and we really appreciate you making the trip from Montreal. Let me begin with challenging you to explain to our listeners the one thing that you think that they should be taking away from this debate. What is the insight to help them understand the existential dangers, the existential risks of AI?

YOSHUA BENGIO: The danger is that once we have technology that is easily accessible to a lot of people and that is very powerful, AIs that would be smarter than us, something that many experts, including Yann LeCun and Geoff Hinton, and I, think could come in just a few years, it’s almost certain that there will be people with malicious intentions or a misguided understanding of AI who will, intentionally or not, instruct those machines in ways that could lead to major catastrophes. That’s the most important thing. And then there are the subtleties of how these scenarios could unfold; there are many possibilities that people can agree or disagree on. But the main thing is that we are bringing very powerful entities into this world that could be misused or that we could lose control of, and that could be very dangerous for humanity.

RUDYARD GRIFFITHS: People in your own field are pushing back against the proposition that there is even an ability to approximate AGI, a kind of human general intelligence. They’re saying that this stuff could still be decades in the future. What makes you less sanguine about the extent to which this technology is accelerating, and why do you feel that the potential for AGI, and possibly a superintelligence breakthrough soon after, is more a part of the near future than the distant future?

YOSHUA BENGIO: Well, I don’t really know when it’s going to happen. And even if it was 20 years into the future, I would be worried, because it’s going to take time. Think about how many decades we’ve needed to do not very much against climate change. So either way, whether it’s three years or 30 years, I think we need to start working on it. Now, I changed my mind about the danger of superhuman AI because of the recent advances, which actually were not even scientific advances; they were just due to scaling up computing and data size. And also because of the work that I know is going on around the world to bridge the gap. In other words, the current systems indeed, I think, lack some ingredients, which I call System 2, and lots of people are working on this. It could be just a few years before we find the missing ingredients, or maybe it’s going to be decades, but can we take that chance?

RUDYARD GRIFFITHS: A layperson, myself included, is often confused trying to think through why AI systems would want to harm us. I mean, they don’t have intentionality like we do. It’s not like, “I don’t like the way you looked at me across a bar, so I’m going to come over and sort you out.” How do we get to that? How do we even get to that point? It just seems incomprehensible.

YOSHUA BENGIO: Yeah, it’s complicated. And this is called the alignment problem. What happens is that for almost any goal that an entity, an agent, would have, a very useful subgoal is self-preservation. If you want to achieve anything, you need to survive ’til then. And once you have self-preservation as a goal, you might have other goals like, “Well, in order to survive and in order to achieve my goal, I need to control my environment.” So it means the AI needs to control us, maybe needs to please us, to fool us, in order to achieve whatever we thought we asked it to do. Also, there’s a lot of evidence, starting from economics—the most recent Nobel Prize—and more than a decade of work in reinforcement learning and AI safety, suggesting that it’s very hard to ask the machine to do what we intend. We can write something, but there’s going to be a mismatch. And that mismatch can be amplified, can lead to self-preservation, and can lead to actions that maybe the AI doesn’t realize are really bad for us. So there are lots of scenarios that we don’t fully understand. So what I’m saying is, let’s not ignore those possibilities. Let’s not deny those possibilities. Let’s make sure we study them and invest to protect ourselves.
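As a purely illustrative aside (this is not Bengio’s own formulation), the mismatch he describes between what we write down and what we intend can be shown in a few lines of code: an optimizer pointed at a proxy metric happily picks the action that scores highest on the proxy, even when it is worst by the measure we actually care about. All names and numbers below are invented for the sketch.

```python
# Toy illustration of objective mismatch: the system maximizes the proxy we
# specified (clicks), not the goal we intended (user well-being).
# Everything here is invented for illustration.

ACTIONS = {
    # action: proxy reward we told the machine to maximize, true value to users
    "balanced_news": {"proxy_reward": 1.0, "true_value": 1.0},
    "clickbait":     {"proxy_reward": 3.0, "true_value": -0.5},
    "outrage_posts": {"proxy_reward": 4.0, "true_value": -2.0},
}

def greedy_policy(metric: str) -> str:
    """Pick the action that maximizes the given metric."""
    return max(ACTIONS, key=lambda action: ACTIONS[action][metric])

chosen = greedy_policy("proxy_reward")
print(f"Optimizing the proxy selects: {chosen}")
print(f"Proxy reward: {ACTIONS[chosen]['proxy_reward']}, "
      f"true value to users: {ACTIONS[chosen]['true_value']}")
# The optimizer picks 'outrage_posts', the highest-click option, even though
# its true value is negative. Scaled up, this is the kind of amplified
# mismatch between specification and intent that Bengio is pointing to.
```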

RUDYARD GRIFFITHS: Do you think you can do that if you have a machine that is getting smarter, and, as Geoff Hinton and you and others have explained, once one machine learns something, all the machines learn it instantaneously? It’s not like humans, who have to write books or deliver lectures, or appear in debates, to share with each other in a slow, messy biological way. What’s the potential here? The risk that somehow this technology is emergent, that it is self-improving? Is that what you see right now? Or is that again a known unknown that could exist in a nearer-term future?

YOSHUA BENGIO: I do not think that the current state of the art, like GPT-4, is dangerous by itself, that it would become sentient or something like this, which is a term that is not well defined anyway. But what I believe is that once we figure out the principles that give us our main cognitive abilities, and maybe we are not far from that, then the computers, because they’re digital, and just as Geoff Hinton argued, will have an extra advantage: they can learn faster because many computers can share information in a way we can’t, and things like that, and they have access to memory, and so on. For example, being able to read the whole internet very quickly, which obviously we can’t. Your brain is a machine; it’s a biological machine. It’s almost sure we’ll get there, and once we get there, there will be machines that are smarter than us.

RUDYARD GRIFFITHS: We anthropomorphize a lot of this conversation. So we call it AI, artificial intelligence, and we assume that intelligence is human intelligence. What are we getting wrong there, and what could machine intelligence actually look like? I’ve struggled with this. Some people have said that it could be incredibly alien to us, very incomprehensible, very unknowable. What’s your sense of what machine intelligence could actually look like?

YOSHUA BENGIO: I agree. I think it’s very likely that the forms of intelligence that we will be building will be quite different from human intelligence. Evolution has put all sorts of mechanisms in us that work well for humanity, and it’s actually hard for us to decipher all of them and put them into machines. And so it’s very likely that, as we make progress on the more intellectual abilities but not necessarily on all of the guardrails that evolution has put in us, we will build systems that think differently from us. And that’s a danger, because it will be hard for us to predict how these systems will think, how they will potentially see us, what kinds of decisions they will take that we will not anticipate, and so on.

RUDYARD GRIFFITHS: Yeah. So this is a really interesting point. What you’re saying is the alignment problem is much bigger than just, “I’ve set up an AI with a goal to get you to click on this website as much as possible.” You’re saying that the alignment problem could be so big, it actually gets to the essence of the intelligence that we’re creating. And historically, intelligence has equalled power. We’ve seen that in the relationships between species on this planet, or between different groups of human beings at different levels of civilizational development: there’s this correlation between intelligence and raw power.

YOSHUA BENGIO: Yes. And that is something personally I feel like I should have been paying more attention to. And it’s really only in the last few months that I’ve been thinking through this because of GPT-4 and ChatGPT. The reason we are building AI is because we are seeking power, and we are building tools to give us power, but now we’re building these tools that may have more power than us. And in a way, it’s very different from all previous technologies. All previous technologies were, by construction, subservient to humanity. They couldn’t think by themselves. But now we’re building machines that are thinking, and we’re seeing the very early forms of that now.

RUDYARD GRIFFITHS: So just explain that when you say the machines are thinking, what do you see? I know it’s hard because you’re deep into maths here and other things which we’re not going to be able to communicate in the context of a podcast. But what is it that makes you use the word thinking?

YOSHUA BENGIO: Well, with the current ones, the way they’re thinking that is analogous to how we think is like our intuition. What they’re missing is the form of thinking where we deliberate in our mind, where we think things through before we act. So it’s like we have these very impulsive machines that can blurt out things that generally are pretty good and sometimes completely wrong. Whereas humans, especially if they have developed that skill, can look at their own thoughts and realize, “Well, maybe I should qualify this, I’m not really sure,” for example, or, “Maybe it’s not the right context to say this.” So the current ones think in this very immediate, input-to-output way that probably many animals do as well. They’re missing a lot of things. They don’t have a body; they can’t control a body and things like that, but they can perceive images and they can imagine things, they can generate images, they can generate text. They can understand a lot of aspects of our world, but they’re still missing some of the aspects that we have of thinking, of reasoning, of planning, and things like that.

RUDYARD GRIFFITHS: Is it that we say that they’re thinking because what they’re producing is what we would consider the product of thought? I know I’m getting a bit philosophical here, but I’m just trying to understand whether they actually, computationally, are mapping neural networks, or whatever the analogy one uses to try to wrap your head around this. Are they actually doing something themselves, or are they simply producing results that we, as biological sentient beings, understand to be the product of thought, without our actually knowing that there is thought occurring inside of them?

YOSHUA BENGIO: Well, I don’t know if thought is occurring inside of you.

RUDYARD GRIFFITHS: That is a question that’s been asked before.

YOSHUA BENGIO: I think the objective truth is they can solve all kinds of puzzles that we would think require thinking, including ones that they haven’t been trained on. Of course, if you train them on something like playing Go or chess, then they get superhuman. Now, they also can do badly on some of those things, and the people who are studying this can see signs of what is lacking, the things I’ve been working on. But I can also see all the things that they’re doing well, which makes me think that we’ve figured out part of the equation.

RUDYARD GRIFFITHS: So the proof is in the pudding.

YOSHUA BENGIO: Well, that’s the only objective thing we can say, really. Now, it turns out I’ve been working on understanding human consciousness and the neural mechanisms of that. And there are some theories, including some that we worked on, that suggest that our sense of subjective experience, which is the central ingredient of consciousness, may not be as magical as we tend to think. We have a very strong sense that it is something special, right? But it might just be a side effect of a particular form of computation that is useful for thinking—in fact, for the more reasoned kind of thinking.

RUDYARD GRIFFITHS: Right. Yeah. We’ve had Lisa Feldman Barrett on this podcast, a neuroscientist who’s talked a lot about just that: we’re looking for lions in the bushes, and we’re good at that, again, for reasons of evolutionary biology. Let me just end on a question that I’ve asked all of our presenters this evening. What is the one argument on the other side of this debate that you would give the most credence to? Is it that you do think that there could be a path toward regulation that could head off this risk? Is it that our own sense of self-preservation is already asserting itself—maybe in the very fact that we’re having this debate? This has now become a big issue. Heads of state are meeting with heads of companies—heads of these big, powerful, new emerging companies like OpenAI—and talking about the existential risk of the very technologies that they’re developing.

YOSHUA BENGIO: Well, I’m trying to be neither an optimist nor a pessimist here, but I really believe that nothing is completely hopeless; in any situation there’s something we can do to obtain more favourable outcomes. So think about the climate activists. They could be really discouraged, and we should have been acting 20 years ago or more, but we keep going and try to do our best to reduce the damage. In the case of existential risk or other large-scale harms that could happen with AI, we can reduce the probability of bad outcomes. And regulation is a huge part of that. So we need to move quickly. We need to invest in the research. We need to understand better what the possibly bad scenarios are so that we can create the countermeasures to minimize those risks.

RUDYARD GRIFFITHS: And do you think we’ll do that? I mean, if you think of climate change, boy, we’re failing that test.

YOSHUA BENGIO: I have to try my best, and that’s why I’m having this debate.

RUDYARD GRIFFITHS: Well, we appreciate your time, your attention, and lending your knowledge and expertise to this conversation. So Yoshua, thank you so much for coming to Toronto for the Munk Debate on AI. Thank you.
