What’s changed since the “pause AI” letter six months ago?

Last Friday marked six months since the Future of Life Institute (FLI), a nonprofit focused on existential risks surrounding artificial intelligence, shared an open letter signed by prominent figures such as Elon Musk, Steve Wozniak, and Yoshua Bengio. The letter called for tech companies to “pause” the development of AI language models more powerful than OpenAI’s GPT-4 for six months.

Well, that didn’t happen, obviously. 

I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation. 

On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was an enormous amount of anxiety about the existential risk AI poses, but nobody felt they could talk about it openly “for fear of being ridiculed as Luddite scaremongerers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”

But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still don’t have any meaningful regulation in America. It seems like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”

Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for governments to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.” 

So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game.”

Why he thinks tech CEOs have the goodness of humanity in their hearts: “What makes me think that they actually want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.” 

Response to critics who say focusing on existential risk distracts from current harms: “It’s very important that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about those things very much. If people engage in this sort of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”

Three mistakes we should avoid now, according to Tegmark: 1. Letting the tech companies write the laws. 2. Turning this into a geopolitical contest of the West versus China. 3. Focusing only on existential threats or only on current harms. We have to realize they’re all part of the same threat of human disempowerment. We all need to unite against these threats. 

Deeper Learning

These latest tools could make AI vision systems less biased

Computer vision systems are everywhere. They help classify and tag images on social media feeds, detect objects and faces in pictures and videos, and highlight relevant elements of an image. However, they’re riddled with biases, and they’re less accurate when the images show Black or brown people and women. 

And there’s another problem: the current ways researchers find biases in these systems are themselves biased, sorting people into broad categories that don’t properly account for the complexity that exists among human beings. 

New tools could help: Sony has a tool—shared exclusively with MIT Technology Review—that expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow). Meta has built a fairness evaluation system called FACET that takes geographic location and a number of other personal characteristics into account, and it’s making its data set freely available. Read more from me here.

Bits and Bytes

Now you can chat with ChatGPT using your voice
The new feature is part of a round of updates for OpenAI’s app, including the ability to answer questions about images. You can also pick from one of five lifelike synthetic voices and have a conversation with the chatbot as if you were making a call, getting responses to your spoken questions in real time. (MIT Technology Review)

Getty Images promises its new AI contains no copyrighted art
Just as authors including George R.R. Martin have filed yet another copyright lawsuit against AI companies, Getty Images promises that its new AI system contains no copyrighted art and that it will pay legal fees if its customers end up in any lawsuits over it. (MIT Technology Review) 

A Disney director tried—and failed—to use an AI Hans Zimmer to create a soundtrack
When Gareth Edwards, the director of Rogue One: A Star Wars Story, was thinking about the soundtrack for his upcoming movie about artificial intelligence, The Creator, he decided to try composing it with AI—and got “pretty damn good” results. Spoiler alert: The human Hans Zimmer won in the end. (MIT Technology Review) 

How AI can help us understand how cells work—and help cure diseases
A virtual cell modeling system, powered by AI, will lead to breakthroughs in our understanding of diseases, argue Priscilla Chan and Mark Zuckerberg. (MIT Technology Review)

DeepMind is using AI to pinpoint the causes of genetic disease
Google DeepMind says it’s trained an artificial-intelligence system that can predict which DNA variations in our genomes are likely to cause disease—predictions that could speed diagnosis of rare disorders and possibly yield clues for drug development. (MIT Technology Review)

Deepfakes of Chinese influencers are livestreaming 24/7
Since last year, a swarm of Chinese startups and major tech companies have been creating deepfake avatars for e-commerce livestreaming. With just a few minutes of sample video and $1,000 in costs, brands can clone a human streamer to work around the clock. (MIT Technology Review)

AI-generated images of naked children shock the Spanish town of Almendralejo
A completely horrifying example of real-life harm posed by generative AI. In Spain, AI-generated images of children have been circulating on social media. The images were created using clothed pictures of the girls taken from their social media. Depressingly, at the moment there’s very little we can do about it. (BBC)

How the UN plans to shape the future of AI
There’s been a lot of chatter about the need to set up a global organization to govern AI. The UN seems like the obvious choice, and the organization’s leadership wants to step up to the challenge. This is a nice piece looking at what the UN has cooking, and the challenges that lie ahead. (Time)

Amazon 🤝 Anthropic
Amazon is investing up to $4 billion in the AI safety startup, according to this announcement. The move will give Amazon access to Anthropic’s powerful AI language model Claude 2, which should help it keep up with competitors Google, Meta, and Microsoft. 
