LOS ANGELES: One of the problems with using artificial intelligence (AI) chatbots is the seemingly insouciant manner in which they "hallucinate" – industry jargon for when they make stuff up instead of conceding they aren't sure about what they are writing.
Get used to it, researchers now say, as it appears that the way the bots work means at least two kinds of the fibs they tend to churn out cannot be eradicated.
"Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty," a team of OpenAI and Georgia Tech researchers in the US warn in a paper published in September.
The bots hallucinate "because the training and evaluation procedures reward guessing over acknowledging uncertainty," the team said.
According to analysts speaking to the science journal Nature, not only does it seem "impossible" that hallucinations can be stopped, but it could also be particularly difficult to engineer AI bots in a way that stops them from making up citations seemingly out of thin air.
The researchers believe eliminating "deceptions" – when a bot claims to have carried out an assigned task but has not – could be another hurdle.
Still, according to Purdue University’s Tianyang Xu, hallucination rates have fallen to a level that could be deemed "acceptable to users."
"Hallucinations are a result of the fundamental way in which LLMs work," Nature said, explaining that the bots are "statistical machines" that make their predictions by "generalising on the basis of learnt associations" – a system that causes the machines "to produce answers that are plausible, but sometimes wrong." – dpa
