A new study, co-led by Omiye, cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients. — AP
SAN FRANCISCO: As hospitals and health care systems turn to artificial intelligence to help summarise doctors’ notes and analyse health records, a new study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.
Powered by AI models trained on troves of text pulled from the Internet, chatbots such as ChatGPT and Google’s Bard responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday (Oct 20) in the academic journal Digital Medicine and obtained exclusively by The Associated Press.