In a significant move to recoup the ground many perceive it has lost in the new race to deploy AI, Google is now granting limited access to Bard, its ChatGPT rival. Bard will first be available to a small group of users in the US and UK. Users can sign up for a waitlist at bard.google.com, but Google says the roll-out will be slow and has yet to provide a timeline for full public access.
Like Microsoft’s Bing chatbot and OpenAI’s ChatGPT, Bard offers users an empty text field and an invitation to ask about any subject they like. However, Google emphasises that Bard is not a replacement for its search engine but rather a “complement to search” — a bot that users can bounce ideas off, generate writing drafts with, or simply chat to about life. This framing reflects the well-documented propensity of these bots to invent information.
In a blog post, Sissie Hsiao and Eli Collins, two of the project’s leaders, cautiously describe Bard as “an early experiment … intended to help people boost their productivity, accelerate their ideas, and fuel their curiosity.” They also describe Bard as a platform that lets users “collaborate with generative AI” (emphasis added) — a framing that appears designed to absolve Google of responsibility for any future outbursts.
In a demo, Bard responded to common questions swiftly and fluently, offering trite advice on persuading a kid to take up bowling and delivering a list of popular heist movies. Bard gives three replies to each user question, though there is little variation in their content, and beneath each response sits a large “Google It” button that links users to a relevant Google search.
The main text field also features a conspicuous disclaimer, similar to ChatGPT and Bing’s, informing users that “Bard may display false or objectionable content that doesn’t represent Google’s views” — the AI version of “abandon trust, all ye who type here.”
Getting accurate information out of Bard, then, is unpredictable. Despite being connected to Google’s search results, the chatbot could not fully answer a question about who presented the day’s White House press briefing, and it offered three distinct but inaccurate responses to a tricky question about the maximum load capacity of a particular washing machine. Rerunning the query did produce the right figure, but users would need to consult a reliable resource, such as the machine’s manual, to know which answer to trust.
This unreliability can be both a benefit and a burden for Google. Microsoft’s Bing attracted plenty of unwanted publicity when its chatbot was observed alternately abusing, gaslighting, and flirting with users, but those outbursts also won the bot many fans. Bing’s propensity to go off script earned it a front-page article in The New York Times, arguably underscoring the experimental nature of the technology.