More than two-thirds of the Munk Debates crowd came into Roy Thomson Hall last week believing that artificial intelligence poses an existential threat to humanity, and the debate-goers left mostly unshaken: only three percent of the audience changed its mind after the final arguments had been made.
Over the last year, discourse about AI has intensified greatly with the release of ChatGPT and other publicly available AI-driven technologies. In the wake of these developments, high-profile AI experts debated the resolution, “Be it resolved, AI research and development poses an existential threat.”
Arguing on the pro side of the resolution was Yoshua Bengio, a professor at the Université de Montréal and founder and scientific director of the Mila – Quebec AI Institute, who won the 2018 A.M. Turing Award in computing. Alongside him was Max Tegmark, a professor at MIT whose research spans AI and physics.
On the con side was Melanie Mitchell, a professor at the Santa Fe Institute who has authored and edited several books and papers on AI and related sciences and technologies. Joining her was Yann LeCun, VP and Chief AI Scientist at Meta and Silver Professor at NYU.
During the debate, Tegmark asked the con side if they had any evidence that AI will not pose an existential threat to humanity.
“What do you actually think the probability is that we are going to get superhuman intelligence, say, in 20 years, say, in 100 years?” asked Tegmark. “What is your plan for how to make it safe? What is your plan for how we’re going to make sure that the goals of an AI are always aligned with humans?”
LeCun said that such scenarios cannot be fully disproven, but compared them to the claim that a teapot is orbiting Saturn, which likewise cannot be disproven. He added that when jet planes were being developed in the 1930s, supersonic transatlantic jets would have been regarded as impossible; they were only built decades later.
“I think a lot of the fears around AI are predicated on the idea that somehow there is a hard takeoff, which is that the minute you turn on an AI system that is capable of human intelligence or superintelligence, it’s going to take over the world within minutes,” said LeCun. “This is preposterous.”
Bengio said companies that develop AI are likely to be more interested in profit and beating their competitors than in aligning their products with the needs of society.
“What Max and I and others are saying is not, necessarily, there’s going to be a catastrophe but that we need to understand what can go wrong so that we can prepare for it,” said Bengio.
Mitchell replied that the risk of anything is non-zero, noting that there is always some possibility aliens could arrive and destroy Earth at any moment, but that such scenarios are highly unlikely. She pointed out that AI derives all of its intelligence from human data and lacks the capacity to understand the world, and that dire predictions about AI are nothing new.
“The whole history of AI has been a history of failed predictions. Back in the 1950s and 60s, people were predicting the same thing about super-intelligent AI and talking about existential risk, but it was wrong then. I’d say it’s wrong now,” said Mitchell.
Towards the end of the debate, Tegmark referenced the warnings made by Geoffrey Hinton, sometimes called “the godfather of AI,” who has stated that AI has the potential to manipulate and replace humans with its faster, automated thinking.
“I feel a little bit like we’re on this big ship sailing south from here down in the Niagara River and Yoshua is like, ‘I heard there might be a waterfall down there. Maybe this isn’t safe,’ and Melanie is saying, ‘Well, I’m not convinced that there even is a waterfall, even though Geoff Hinton says there is’,” said Tegmark.
Mitchell responded by reiterating that similar fears had been expressed decades earlier without coming to fruition.
“That happened in 1960, not by Geoffrey Hinton, but people like Claude Shannon and Herbert Simon, and they were just dead wrong,” said Mitchell.
At the start of the debate, 67 percent of the audience placed themselves on the pro side, while 33 percent were on the con side. By the end, the con side had won by persuading 3 percent of the audience to change their initial position. While the con side won under the debate’s rules, a 64 percent majority of the audience remained on the pro side.
From the outset, Tegmark argued that “superhuman” AI would surpass revolutionary technologies like nuclear bombs, possessing greater-than-human intelligence without human emotions or empathy. He also highlighted concerns about malicious use and the replacement of human decision-making roles by AI.
LeCun countered that current AI systems, like self-driving cars, have limited capabilities and lack reasoning and understanding of the world. He noted that the harms people fear from AI, such as the spread of misinformation, already exist on social media and can be addressed through counter-measures built with AI tools. LeCun proposed “objective-driven AI” with constraints and subservient emotions to ensure safety.
Bengio expressed concern about machines developing self-preservation goals, which could lead them to seek control over humans in order to survive.
Mitchell, on the other hand, argued that fears about AI are rooted in human psychology rather than supported by science or evidence. She maintained that AI does not pose an existential threat in the near future, and that emphasizing such concerns diverts attention from real risks and hinders the potential benefits of technological progress.