WHO warns against bias, misinformation in using AI in healthcare

(Reuters) - The World Health Organization called for caution on Tuesday in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement the data used to train AI may be biased and generate misleading or inaccurate information and the models can be misused to generate disinformation.

It was "imperative" to assess the risks of using large language model (LLM) tools, such as ChatGPT, in order to protect and promote human wellbeing and safeguard public health, the U.N. health body said.

Its cautionary note comes as artificial intelligence applications rapidly gain popularity, highlighting a technology that could upend the way businesses and society operate.

(Reporting by Shivani Tanna in Bengaluru; Editing by Nick Macfie)