How existential risk became the biggest meme in AI

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names who have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test kind of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We have been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterous.” Aidan Gomez, CEO of AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is co-founder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious; it’s really exciting and stimulating to be afraid.”

“It’s also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”

An old fear

Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 

But at the center of such concerns is the question of control: how do humans stay on top if (or when) machines get smarter? In a paper called “How Does Artificial Intelligence Pose an Existential Risk?” published in 2017, Karina Vold, a philosopher of artificial intelligence at the University of Toronto (who signed the CAIS statement), lays out the basic argument behind the fears.

There are three key premises. One, it’s possible that humans will build a superintelligent machine that can outsmart all other intelligences. Two, it’s possible that we will not be able to control a superintelligence that can outsmart us. And three, it’s possible that a superintelligence will do things that we don’t want it to.

Putting all that together, it is possible to build a machine that will do things we don’t want it to, up to and including wiping us out, and we will not be able to stop it.

There are different flavors of this scenario. When Hinton raised his concerns about AI in May, he gave the example of robots rerouting the power grid to give themselves more power. But superintelligence (or AGI) isn’t necessarily required. Dumb machines, given too much leeway, could be disastrous too. Many scenarios involve thoughtless or malicious deployment rather than self-interested bots.

In a paper posted online last week, Stuart Russell and Andrew Critch, AI researchers at the University of California, Berkeley (who also both signed the CAIS statement), give a taxonomy of existential risks. These range from a viral advice-giving chatbot telling millions of people to drop out of college, to autonomous industries that pursue their own harmful economic ends, to nation-states building AI-powered superweapons.

In many imagined cases, a theoretical model fulfills its human-given goal but does so in a way that works against us. For Hendrycks, who studied how deep learning models can sometimes behave in unexpected and undesirable ways when given inputs not seen in their training data, an AI system could be disastrous because it is broken rather than all-powerful. “If you give it a goal and it finds alien solutions to it, it’s going to take us for a weird ride,” he says.

The problem with these possible futures is that they rest on a string of what-ifs, which makes them sound like science fiction. Vold acknowledges this herself. “Because events that constitute or precipitate an [existential risk] are unprecedented, arguments to the effect that they pose such a threat must be theoretical in nature,” she writes. “Their rarity also makes it such that any speculations about how or when such events might occur are subjective and not empirically verifiable.”

So why are more people taking these ideas at face value than ever before? “Different people talk about risk for different reasons, and they may mean different things by it,” says François Chollet, an AI researcher at Google. But it is also a narrative that’s hard to resist: “Existential risk has always been a good story.”

“There is a kind of mythological, almost religious element to this that cannot be discounted,” says Whittaker. “I think we need to recognize that what is being described, given that it has no basis in evidence, is far closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.”

The doom contagion

When deep learning researchers first began to rack up a series of successes (think of Hinton and his colleagues’ record-breaking image-recognition scores in the ImageNet competition in 2012 and DeepMind’s first wins against human champions with AlphaGo in 2015), the hype soon turned to doom then, too. Celebrity scientists, such as Stephen Hawking and fellow cosmologist Martin Rees, as well as celebrity tech leaders like Elon Musk, raised the alarm about existential risk. But these figures weren’t AI experts.

Eight years ago, AI pioneer Andrew Ng, who was chief scientist at Baidu at the time, stood on a stage in San Jose and laughed off the whole idea.

“There could be a race of killer robots in the far future,” Ng told the audience at Nvidia’s GPU Technology Conference in 2015. “But I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” (Ng’s words were reported at the time by the tech news website The Register.)

Ng, who cofounded Google’s AI lab in 2011 and is now CEO of Landing AI, has repeated the line in interviews since. But now he’s on the fence. “I’m keeping an open mind and am speaking with a few people to learn more,” he tells me. “The rapid pace of development has led scientists to rethink the risks.”

Like many, Ng is worried by the rapid progress of generative AI and its potential for misuse. He notes that a widely shared AI-generated image of an explosion at the Pentagon spooked people enough last month that the stock market dropped.

“With AI being so powerful, unfortunately it seems likely that it will also lead to massive problems,” says Ng. But he still stops short of killer robots: “Right now, I still struggle to see how AI can lead to our extinction.”

Something else that’s new is the widespread awareness of what AI can do. Earlier this year, ChatGPT brought this technology to the public. “AI is a popular topic in the mainstream all of a sudden,” says Chollet. “People are taking AI seriously because they see a sudden jump in capabilities as a harbinger of more future jumps.”

The experience of conversing with a chatbot can also be unnerving. Conversation is typically understood as something people do with other people. “It added a kind of plausibility to the idea that AI was human-like or a sentient interlocutor,” says Whittaker. “I think it gave some purchase to the idea that if AI can simulate human communication, it could also do XYZ.”

“That’s the opening that I see the existential risk conversation kind of fitting into, extrapolating without evidence,” she says.

There’s reason to be cynical too. With regulators catching up with the tech industry, the issue on the table is what kinds of activity should and shouldn’t get constrained. Highlighting long-term risks rather than short-term harms (such as discriminatory hiring or misinformation) refocuses regulators’ attention on hypothetical problems down the road.

“I think the specter of real regulatory constraints has pushed people to take a position,” says Burrell. Talking about existential risks may validate regulators’ concerns without undermining business opportunities. “Superintelligent AI that turns on humanity sounds terrifying, but it’s also clearly not something that has happened yet,” she says.

Inflating fears about existential risk is good for business in other ways too. Chollet points out that top AI firms need us to think that AGI is coming, and that they are the ones building it. “If you want people to think what you’re working on is powerful, it’s a good idea to make them fear it,” he says.

Whittaker takes a similar view. “It’s a big thing, to cast yourself as the creator of an entity that could be more powerful than human beings,” she says.

None of this would matter much if it were simply about marketing or hype. But deciding what the risks are, and what they are not, has consequences. In a world where budgets and attention spans are limited, harms less extreme than nuclear war may get overlooked because we’ve decided they aren’t the priority.

“It’s an important question, especially with the growing focus on safety and security as the narrow frame for policy intervention,” says Sarah Myers West, managing director of the AI Now Institute.

When UK Prime Minister Rishi Sunak met with the heads of AI firms, including Sam Altman and Demis Hassabis, in May, his government issued a statement saying: “The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats.”

The week before, Altman told the US Senate that his worst fears were that the AI industry would cause significant harm to the world. Altman’s testimony helped spark calls for a new kind of agency to address such unprecedented harm.

With the Overton window shifted, is the damage done? “If we’re talking about the far future, if we’re talking about mythological risks, then we are completely reframing the problem to be a problem that exists in a fantasy world, and its solutions can exist in a fantasy world too,” says Whittaker.

But Whittaker points out that policy discussions around AI have been going on for years, longer than this recent buzz of fear. “I don’t believe in inevitability,” she says. “We will see a beating back of this hype. It will subside.”
