Opinion
Fueling fear is a dangerous game
Some weeks ago, I published my pro and con arguments for signing that very well-known open letter by the Future of Life Institute — in the end, I signed it, though with some caveats. A few radio and TV hosts interviewed me to explain what all the fuss was about.
More recently, I got another email from the Future of Life Institute (FLI in what follows) asking me to sign a declaration: this time, it was a brief statement by the Center for AI Safety (CAIS) focused on the existential threats posed by recent AI developments.
The statement goes as follows:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Very concise indeed; how could there be a problem with this?
If the previous FLI statement had weaknesses, this one doubles down on them instead of correcting them, making it impossible for me to support it.
Specifically, I have the following four objections, which of course will be a bit longer than the declaration itself:
The new statement is essentially a call to panic about AI, and not to panic about the natural consequences of it that we can see right now, but rather about hypothetical risks raised by random individuals who give very vague risk estimates like “10 percent risk of human extinction.”
Really? A 10% risk of human extinction? Based on what? The survey respondents weren’t asked to justify or explain their estimates, but I suspect many were thinking of “Terminator-like” scenarios. You know, horror movies are meant to scare you so that you go to the theater. But translating that message to reality isn’t sound reasoning.
The supposed threat to humanity assumes a capability to destroy us that hasn’t been explained, and an agency: the willingness to erase humankind. Why would a machine want to kill us when machines have no feelings, good or bad? Machines don’t “want” anything.
The real dangers of AI we see playing out right now are very different. One of them is the capability of generative AI to produce fake voices, pictures, and videos. Can you imagine what you would do if you received a phone call with your daughter’s voice (impersonated by a fake voice) asking you to rescue her?
Another is public misinformation backed by fake evidence, such as counterfeit videos. The one with the fake Pope was relatively innocent, but soon Twitter will be flooded with false declarations, images of events that never occurred, and so on. By the way, have you considered that the US elections are approaching?
Then there is the exploitation of human-made content that AI algorithms mine all over the web to produce their “original” images and text: human work is taken without any financial compensation. In some cases, the reference to human work is explicit, as in “make this image in the style of X.”
If the FLI letter of a month ago hinted at a “man vs. machine” mindset, this time it is made very explicit. “Extinction from AI,” they call it, nothing less.
In the real world where we live, not in apocalyptic Hollywood movies, it’s not the machines that harm us or threaten our existence: it’s rather that some humans (coincidentally, the powerful and wealthy ones, the owners of big corporations) leverage powerful new technology to increase their fortunes, often at the expense of the powerless. We have already seen how the availability of computer-generated graphics has shrunk the small businesses of graphic artists in places like Fiverr.
Furthermore, the assumption that advanced machine intelligence would try to dethrone humans should be questioned; as Steven Pinker wrote:
“AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.”
Yann LeCun—the famous head of AI research at Meta—declared:
“Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct… Those drives are programmed into our brain, but there is absolutely no reason to build robots that have the same kind of drives.”
No, machines gone rogue won’t become our overlords or exterminate us: other humans, who are currently our overlords, will increase their domination by leveraging the economic and technological means at their disposal, including AI if it suits them.
I get that the FLI mentioned pandemics to relate their statement to something we just lived through, and which left an emotional scar on many of us, but it’s not a sound comparison. Leaving aside conspiracy theories, the pandemic we emerged from was not a technology; the vaccines were. How does the FLI assume catastrophic AI would spread? By contagion?
Of course, nuclear bombs are a technological development, but in the case of a nuclear war we know precisely how and why the bomb would destroy us: it’s not speculation, as it is in the case of “rogue AI.”
One last thing that drew my attention was the list of people signing the statement, starting with Sam Altman. He’s the leader of OpenAI, the company that, with ChatGPT in November 2022, set in motion the frantic AI race we are living through. Even the mighty Google struggled to keep pace in this race; didn’t Microsoft’s Satya Nadella say he wanted to “make Google dance”? He got his wish at the cost of accelerating the AI race.
It doesn’t make sense to me that people at the helm of the very corporations fueling this AI race are also signing this statement. Altman can say that he’s very worried about AI developments, but when we see his company charging ahead at full speed, his concern looks meaningless and incongruous. I don’t intend to moralize about Altman’s declarations, but accepting his support at face value undermines the statement’s validity, all the more so when we consider that, for Altman’s company, leading the race is essential to its financial bottom line.
It’s not that machines are going rogue. It’s the use that capitalist monopolies and despotic governments make of AI tools that could harm us. And not in a dystopian Hollywood future, but in the real world where we are today.
I won’t endorse a fear-fueled vision of machines that is ultimately hypocritical, since it is promoted by the very corporations trying to distract us from their profit-seeking ways of operating. That’s why I’m not signing this new statement endorsed by the FLI.
Furthermore, I believe that wealthy and influential leaders can afford to worry about imaginary threats because they don’t worry about more “mundane” real threats, like the income loss of a freelance graphic artist: they know very well that they will never struggle to make ends meet at the end of the month, nor will their children or grandchildren.