A Google logo at a data centre exhibition in Hanau, Germany. As Google has further integrated Bard into its core products, the company has also been beset with complaints about the tool generating made-up facts and giving potentially dangerous advice. — Bloomberg
For months, Alphabet Inc’s Google and Discord Inc have run an invitation-only chat for heavy users of Bard, Google’s artificial intelligence-powered chatbot. Google product managers, designers and engineers are using the forum to openly debate the AI tool’s effectiveness and utility, with some questioning whether the enormous resources going into development are worth it.
“My rule of thumb is not to trust LLM output unless I can independently verify it,” Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July, referring to large language models – the AI systems trained on massive amounts of text that form the building blocks of chatbots like Bard and OpenAI Inc’s ChatGPT. “Would love to get it to a point that you can, but it isn’t there yet.”
