WASHINGTON: Google's chatbot Bard will in future help users recognise false statements in the AI system's answers, which are still unreliable.
Like responses from AI chatbot ChatGPT, Bard's answers are often written in flawless English and appear authoritative. However, both systems have been known to generate statements that look factual but are entirely wrong.
To help with this problem, Bard now offers a "double check response" button with a colourful Google logo, Google manager Jack Krawczyk announced in Mountain View on Tuesday.
The parts of the answer that Bard is very sure about will then be highlighted in green. Passages for which Bard has found sources online that could refute the statement, or for which no relevant sources could be found, are coloured orange.
"Google Search didn't find relevant content," the chatbot might then warn you. "Consider researching further to assess the statement." Occasionally Bard will also link you to sources that directly refute the AI's own statements.
For each sentence that's written, Google carries out a search to see if there's content out there that confirms or disproves a statement, Krawczyk said, describing Bard as the first AI language model to willingly admit when it is unsure of something.
The checks are being made available worldwide, but initially only in English.
Google's AI service still has the problem that it can sometimes give out completely wrong or fabricated information. These errors, called "hallucinations", are a consequence of how the technology works.
Rather than looking up facts from a reliable source, the service relies on a so-called large language model (LLM) to estimate, word by word, how a sentence should probably continue.
Bard's unreliability has been under scrutiny from launch, and Google shares plummeted after one space expert pointed out a factual error about the James Webb Space Telescope in Google's demo of its new AI chatbot.
In another innovation announced on Tuesday, Google wants to allow users to connect files and information from their personal lives with Bard's AI. That means you can let Bard process your emails and other files saved in Google services like Gmail and Google's cloud storage.
Giving an example, Krawczyk said Bard could now come in handy by helping parents keep track of the many emails sent to them ahead of the new school year.
Filling out forms and finding out key information like arrival and pick-up times are among the things that Bard can now handle for parents – if they grant it access to Gmail. Krawczyk says this could turn 20 minutes of work into 20 seconds of work.
This function is also available worldwide, but again only in English at first. Other languages are to be supported as soon as possible.
Handing private emails over to an AI is a terrifying thought to some, which is why Google stressed that linking Bard up with your personal content can be revoked at any time.
Krawczyk also said this content would not be used to train and further improve the language model, and that your personal content will never be seen by human reviewers, even when your Gmail, Docs and Drive files are processed by Bard. Nor will it be used by Bard to show you ads in any way.
Google also wants to let you combine voice input with uploaded images in future. For example, you could upload a photo of the label of a wine bottle and have Bard explain in detail which main course goes best with it. You're also able to share a Bard chat history with other people.
Google had initially been slow to respond to the push by Californian start-up OpenAI, whose chatbot ChatGPT gained over 100 million users within weeks of its launch in November last year.
Launched in mid-March in English in the US and UK, Bard is Google's answer to ChatGPT, whose maker OpenAI is closely linked to the software corporation Microsoft through billions in investments. Bard has since been made available in 40 languages around the world. – dpa