Many people in AI will be familiar with the story of the Mechanical Turk. It was a chess-playing machine built in 1770, and it was so good its opponents were tricked into believing it was supernaturally powerful. In reality, the machine had space for a human to hide inside and control it. The hoax went on for 84 years. That’s three generations!
History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as “magic.” But this very human desire to believe in machine consciousness has never matched up with reality.
Creating consciousness in artificial intelligence systems is the dream of many technologists. Large language models are the latest example of our quest for intelligent machines, and some people (contentiously) claim to have seen glimmers of consciousness in conversations with them. The point is: machine consciousness is a hotly debated topic. Plenty of experts say it is doomed to remain science fiction forever, but others argue it’s right around the corner.
For the latest issue of MIT Technology Review, neuroscientist Grace Huckins explores what consciousness research in humans can teach us about AI, and the ethical problems that AI consciousness would raise. Read more here.
We don’t fully understand human consciousness, but neuroscientists do have some clues about how it is manifested in the brain, Grace writes. To state the obvious, AI systems don’t have brains, so it’s impossible to use traditional methods of measuring brain activity for signs of life. But neuroscientists have various theories about what consciousness in AI systems might look like. Some treat it as a feature of the brain’s “software,” while others tie it more squarely to physical hardware.
There have even been attempts to create tests for AI consciousness. Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University, and Princeton physicist Edwin Turner have developed one, which requires an AI agent to be isolated from any information about consciousness it could have picked up during its training before it is tested. This step is important so that it can’t just parrot human statements about consciousness it has picked up during training, as a large language model would.
The tester then asks the AI questions it should only be able to answer if it is itself conscious. Can it understand the plot of the movie Freaky Friday, where a mother and daughter switch bodies, their consciousnesses dissociated from their physical selves? Can it grasp the concept of dreaming—or even report dreaming itself? Can it conceive of reincarnation or an afterlife?
Of course, this test isn’t foolproof. It requires its subject to be able to use language, so babies and animals—manifestly conscious beings—wouldn’t pass the test. And language-based AI models may have been exposed to the concept of consciousness in the vast amount of internet data they’ve been trained on.
So how will we really know if an AI system is conscious? A group of neuroscientists, philosophers, and AI researchers, including Turing Award winner Yoshua Bengio, have put out a white paper that proposes practical ways to detect AI consciousness based on a variety of theories from different fields. They propose a kind of report card for different markers, such as flexibly pursuing goals and interacting with an external environment, that could indicate AI consciousness—if the theories hold true. None of today’s systems tick any boxes, and it’s unclear if they ever will.
Here’s what we do know. Large language models are extremely good at predicting what the next word in a sentence should be. They are also excellent at making connections between things—sometimes in ways that surprise us and make it easy to believe in the illusion that these computer programs might have sparks of something else. But we know remarkably little about AI language models’ inner workings. Until we know more about exactly how and why these systems come to the conclusions they do, it’s hard to say that the models’ outputs are not just fancy math.
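To make that next-word prediction concrete, here is a minimal sketch (not from the newsletter) that uses the open-source Hugging Face Transformers library and the small GPT-2 model to look at the probabilities a language model assigns to possible next tokens. The model choice and prompt are illustrative assumptions, not anything the piece itself describes.

```python
# Minimal sketch: peeking at a language model's next-token predictions
# with the Hugging Face Transformers library. The model ("gpt2") and the
# prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Mechanical Turk was secretly operated by a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Turn the final position's logits into a probability distribution over the
# vocabulary: the model's guess at what the next word should be.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Whether outputs like these amount to anything more than statistics over training text is exactly the open question the essay raises.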
Deeper Learning
How AI could supercharge battery research
We need better batteries if electric vehicles are going to achieve their potential of nudging fossil-fuel-powered cars off the roads. The problem is that there are a million different potential materials, and combinations of materials, we could use to make these batteries. It’s very labor-intensive and expensive to do rounds and rounds of trial and error.
Enter AI: Startup Aionics is using AI tools to help researchers find better battery chemistries faster. It uses machine learning to sort through the wide range of material options and suggest combinations. Generative AI can also help researchers design new materials more quickly. Read more from Casey Crownhart in her weekly newsletter, The Spark, on the tech that could solve the climate crisis.
Bits and Bytes
Big Tech struggles to turn AI hype into profits
Microsoft has reportedly lost money on one of its first generative AI products. And it’s not alone: the other tech giants are equally struggling to find a way to capitalize on their massive investment in generative AI, which is eye-wateringly expensive to train and run. (The Wall Street Journal)
How AI reduces the world to stereotypes
Rest of World analyzed 3,000 AI-generated images of different countries and cultures, and found they portray the world in a deeply stereotypical way. No surprises there, but this visual piece neatly shows just how deeply ingrained biases are in AI systems. (Rest of World)
Even Google insiders are questioning the usefulness of the Bard chatbot
Glad to know it’s not just me! In leaked messages from an official invite-only Discord chat, Google product managers and designers share their skepticism about the usefulness of the company’s AI chatbot Bard, given that the system makes things up. Google insiders seem to think it’s best for creative uses, brainstorming, or coding—and even then, it needs a lot of supervision. (Bloomberg)
The US is mulling escalating its AI tech blockade on China
Anxious about the prospect of China gaining AI supremacy, the US has been limiting its access to the computer chips needed to power AI. The US is now considering escalating its blockade and restricting China’s access to a broad category of general-purpose AI programs, not just physical parts. (The Atlantic)
How a billionaire-backed network of AI advisors took over Washington
The little-known Horizon Institute for Public Service, a nonprofit created in 2022, is funding the salaries of people working in key Senate offices, agencies, and think tanks. The group is pushing to place the existential risk posed by AI at the top of Washington’s agenda, which could benefit AI companies with ties to the network. (Politico)
Google offers to pay its customers’ legal fees in generative AI lawsuits
Google has joined Microsoft and Getty Images in promising to cough up legal fees if its customers get sued over the outputs of its generative AI models or the training data they use. This is a smart move from Big Tech, because it could help persuade organizations that are hesitating to adopt these companies’ AI tools until there’s more legal clarity over copyright and AI. (Google)