
Google DeepMind’s new Gemini model looks amazing—but could signal peak AI hype


Hype about Gemini, Google DeepMind’s long-rumored response to OpenAI’s GPT-4, has been building for months. Today the company finally revealed what it has been working on in secret all this time. Was the hype justified? Yes—and no.

Gemini is Google’s biggest AI launch yet—its push to take on competitors OpenAI and Microsoft in the race for AI supremacy. There is no doubt that the model is pitched as best-in-class across a wide range of capabilities—an “everything machine,” as one observer puts it.

“The model is inherently more capable,” Sundar Pichai, the CEO of Google and its parent company Alphabet, told MIT Technology Review. “It’s a platform. AI is a profound platform shift, bigger than web or mobile. And so it represents a big step for us.”

It’s a big step for Google, but not necessarily a giant leap for the field as a whole. Google DeepMind claims that Gemini outmatches GPT-4 on 30 out of 32 standard measures of performance. And yet the margins between them are thin. What Google DeepMind has done is pull AI’s best current capabilities into one powerful package. To judge from its demos, it does many things very well—but few things that we haven’t seen before. For all the excitement about the next big thing, Gemini could be a sign that we’ve reached peak AI hype. At least for now.

Chirag Shah, a professor at the University of Washington who focuses on online search, compares the launch to Apple’s introduction of a new iPhone every year. “Maybe we have just risen to a different threshold now, where this doesn’t impress us as much because we’ve just seen so much,” he says.

Like GPT-4, Gemini is multimodal, meaning it is trained to handle multiple types of input: text, images, and audio. It can combine these different formats to answer questions about everything from household chores to college math to economics.

In a demo for journalists yesterday, Google showed Gemini’s ability to take an existing screenshot of a chart, analyze hundreds of pages of research with new data, and then update the chart with that new information. In another example, Gemini is shown pictures of an omelet cooking in a pan and asked (using speech, not text) whether the omelet is cooked yet. “It’s not ready because the eggs are still runny,” it replies.

Most people will have to wait for the full experience, however. The version launched today is a back end to Bard, Google’s text-based search chatbot, which the company says will give it more advanced reasoning, planning, and understanding capabilities. Gemini’s full release will be staggered over the coming months. The new Gemini-boosted Bard will initially be available in English in more than 170 countries, not including the EU and the UK. This is to let the company “engage” with local regulators, says Sissie Hsiao, a Google vice president in charge of Bard.

Gemini also comes in three sizes: Ultra, Pro, and Nano. Ultra is the full-powered version; Pro and Nano are tailored to applications that run with more limited computing resources. Nano is designed to run on devices, such as Google’s new Pixel phones. Developers and businesses will be able to access Gemini Pro starting December 13. Gemini Ultra, the most powerful model, will be available “early next year” following “extensive trust and safety checks,” Google executives told reporters on a press call.

“I think of it as the Gemini era of models,” Pichai told us. “This is how Google DeepMind is going to build and make progress on AI. So it will always represent the frontier of where we’re making progress on AI technology.”

Bigger, better, faster, stronger?

OpenAI’s most powerful model, GPT-4, is seen as the industry’s gold standard. While Google boasted that Gemini outperforms OpenAI’s previous model, GPT-3.5, company executives dodged questions about how far the model exceeds GPT-4.

But the firm highlights one benchmark in particular, called MMLU (massive multitask language understanding). This is a set of tests designed to measure the performance of models on tasks involving text and images, including reading comprehension, college math, and multiple-choice quizzes in physics, economics, and social sciences. On the text-only questions, Gemini scores 90% and human experts score roughly 89%, says Pichai. GPT-4 scores 86% on these kinds of questions. On the multimodal questions, Gemini scores 59%, while GPT-4 scores 57%. “It’s the first model to cross that threshold,” Pichai says.

Gemini’s performance against benchmark data sets is very impressive, says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico.

“It’s clear that Gemini is a very sophisticated AI system,” says Mitchell. But “it’s not obvious to me that Gemini is actually substantially more capable than GPT-4,” she adds.

While the model has good benchmark scores, it is hard to know how to interpret these numbers given that we don’t know what’s in the training data, says Percy Liang, director of Stanford’s Center for Research on Foundation Models.

Mitchell also notes that Gemini performs much better on language and code benchmarks than on images and video. “Multimodal foundation models still have a ways to go to be generally and robustly useful for many tasks,” she says.

Using feedback from human testers, Google DeepMind has trained Gemini to be more factually accurate, to give attribution when asked to, and to hedge rather than spit out nonsense when faced with a question it cannot answer. The company claims that this mitigates the problem of hallucinations. But without a radical overhaul of the base technology, large language models will continue to make things up.

Experts say it’s unclear whether the benchmarks Google is using to measure Gemini’s performance offer that much insight, and without transparency, it’s hard to check Google’s claims.

“Google is promoting Gemini as an everything machine—a general-purpose model that can be used in many different ways,” says Emily Bender, a professor of computational linguistics at the University of Washington. But the company is using narrow benchmarks to evaluate models that it expects to be used for these diverse purposes. “This means it effectively can’t be thoroughly evaluated,” she says.

Ultimately, for the average user, the incremental improvement over competing models may not make much difference, says Shah. “It’s more about convenience, brand recognition, and existing integration than people really thinking ‘Oh, this is better,’” he says.

A long, slow buildup

Gemini has been a long time coming. In April 2023, Google announced it was merging its AI research unit Google Brain with DeepMind, Alphabet’s London-based AI research lab. So Google has had all year to develop its answer to OpenAI’s most advanced large language model, GPT-4, which debuted in March and is the backbone of the paid version of ChatGPT.

Google has been under intense pressure to show investors it can match and overtake competitors in AI. Although the company has been developing and using powerful AI models for years, it has been hesitant to launch tools that the public can play with for fear of reputational damage and safety concerns.

“Google has been very cautious about releasing these things to the public,” Geoffrey Hinton told MIT Technology Review in April when he left the company. “There are too many bad things that could happen, and Google didn’t want to ruin its reputation.” Faced with tech that seemed untrustworthy or unmarketable, Google played it safe—until the bigger risk became missing out.

Google has learned the hard way how launching flawed products can backfire. When it unveiled its ChatGPT competitor Bard in February, scientists soon noticed a factual error in the company’s own advertisement for the chatbot, an incident that subsequently wiped $100 billion off its share price.

In May, Google announced it was rolling out generative AI into most of its products, from email to productivity software. But the results failed to impress critics: the chatbot made references to emails that didn’t exist, for example.

This is a consistent problem with large language models. Although excellent at generating text that sounds like something a human might have written, generative AI systems commonly make things up. And that’s not the only problem with them. They are also easy to hack and riddled with biases. Using them is also highly polluting.

Google has solved neither these problems nor the hallucination issue. Its solution to the latter is a tool that lets people use Google search to double-check the chatbot’s answers, but that relies on the accuracy of the web search results themselves.

Gemini could be the pinnacle of this wave of generative AI. But it’s not clear where AI built on large language models goes next. Some researchers believe this could be a plateau rather than the foot of the next peak.

Pichai is undeterred. “Looking ahead, we do see a lot of headroom,” he says. “I think multimodality will be big. As we teach these models to reason more, there will be bigger and bigger breakthroughs. Deeper breakthroughs are yet to come.

“When I take in the totality of it, I genuinely feel like we’re at the very beginning.”
