Google has launched Bard, the search giant's answer to OpenAI's ChatGPT and Microsoft's Bing Chat. Unlike Bing Chat, Bard does not look up search results: all the information it returns is generated by the model itself. It is designed to help users brainstorm and answer queries. Google wants Bard to become an integral part of the Google Search experience.
In a live demo Google gave me in its London offices yesterday, Bard came up with ideas for a child's bunny-themed birthday party and gave lots of suggestions for taking care of houseplants. "We really see it as this creative collaborator," says Jack Krawczyk, a senior product director at Google.
Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google's top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google's market value fell by $100 billion overnight.
Google won't share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But it will say that Bard is built on top of a new version of LaMDA, Google's flagship large language model. Google says it will update Bard as the underlying technology improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.
Google has been working on Bard for a few months behind closed doors but says that it's still an experiment. The company is now making the chatbot available for free to people in the US and the UK who sign up to a waitlist. These early users will help test and improve the technology. "We'll get user feedback, and we will ramp it up over time based on that feedback," says Google's vice president of research, Zoubin Ghahramani. "We're mindful of all the things that can go wrong with large language models."
But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google's AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment "is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong."
Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button that sits below Bard's chat widget says "Google It." The idea is to nudge users to head over to Google Search to check Bard's answers or to find out more. "It's one of the things that help us offset limitations of the technology," says Krawczyk.
"We really want to encourage people to actually explore other places, sort of confirm things if they're not sure," says Ghahramani.
This acknowledgment of Bard's flaws has shaped the chatbot's design in other ways, too. Users can interact with Bard only a handful of times in any given session. That's because the longer large language models engage in a single conversation, the more likely they are to go off the rails. Many of the weirder responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.
Google won't confirm what the conversation limit will be at launch, but it will be set quite low for the initial release and adjusted depending on user feedback.
Google is also playing it safe when it comes to content. Users won't be able to ask for sexually explicit, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard wouldn't give me tips on how to make a Molotov cocktail. That's standard for this generation of chatbot. But it would also not provide any medical information, such as how to spot signs of cancer. "Bard is not a doctor. It's not going to give medical advice," says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of every response, which Google calls "drafts." Users can click between them and pick the response they prefer, or mix and match between them. The aim is to remind people that Bard cannot generate perfect answers. "There's a sense of authoritativeness when you only see one example," says Krawczyk. "And we know there are limitations around factuality."
In my demo, Krawczyk asked Bard to write an invitation to his child's party. Bard did this, filling in the street address for Gym World in San Rafael, California. "It's a place I drive by a ton, but I honestly can't tell you the name of the street," he said. "So that's where Google Search comes in." Krawczyk clicked "Google It" to make sure the address was correct. (It was.)
Krawczyk says that Google doesn't want to replace Search for now. "We spent decades perfecting that experience," he says. But this may be more a sign of Bard's current limitations than a long-term strategy. In its announcement, Google states: "We'll also be thoughtfully integrating LLMs into Search in a deeper way—more to come."
That may come sooner rather than later, as Google finds itself in an arms race with OpenAI, Microsoft, and other competitors. "They're going to keep rushing into this, regardless of the readiness of the tech," says Chirag Shah, who studies search technologies at the University of Washington. "As we see ChatGPT getting integrated into Bing and other Microsoft products, Google is definitely compelled to do the same."
A year ago, Shah coauthored a paper with Emily Bender, a linguist who studies large language models, also at the University of Washington, in which they called out the problems with using large language models as search engines. At the time, the idea still seemed hypothetical. Shah says he was worried that they might have been overreaching.
But this experimental technology has been integrated into consumer-facing products with unprecedented speed. "We didn't anticipate these things happening so quickly," he says. "But they have no choice. They have to defend their territory."