Use of AI in health care brings risk of 'serious errors,' WHO says


The use of AI in health care is expected to increase rapidly in the coming years, but the World Health Organization sees the risk of patients being given wrong information. — Photo: Jens Büttner/dpa

GENEVA: Research has found that artificial intelligence (AI) is already better than humans at picking donor organs for transplants and also better at answering health-related questions from patients.

And yet experts at the World Health Organization are saying that introducing AI into health care procedures brings the risk of "completely incorrect" information.

The WHO issued a call on Tuesday for caution in the use of large language models (LLMs) in health care, warning that their use could lead to health care errors and erode trust in AI.

The organization said it was enthusiastic about the appropriate use of technology, including LLMs, to support health care, but that there was concern that normal standards of caution were not being applied with LLMs.

Proponents believe doctors will soon use medical AI chat systems to answer patients' questions about their health more quickly, and AI may also be used to help diagnose some patients.

However, the AI chatbots backed by Microsoft and Google (ChatGPT and Bard, respectively) have both shown that their responses are not always factually reliable. What concerns the WHO is that those responses nevertheless look reliable.

"LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses," the WHO says.

Among the WHO's concerns are that the data used to train AI may be biased, generating misleading information that poses risks to health, equity and inclusiveness, and that LLMs could be misused to produce highly convincing disinformation that is difficult for the public to distinguish from reliable health content.

The WHO proposed that these concerns be addressed, and clear evidence of benefit be demonstrated, before LLMs are put into widespread use in health care. – dpa
