AI appears to discriminate against people with disabilities



WASHINGTON: AI developers have long been warned of the risks their chatbots bring, from potentially costing millions of people their jobs to "hallucinating" false information when fielding user queries.

But a more immediate risk could be the prejudices that such software reinforces: According to University of Washington (UW) researchers, the best-known model, OpenAI’s ChatGPT, appears to be biased against people with disabilities, going by how it ranks resumes.

The Seattle-based team found that ChatGPT "consistently ranked resumes with disability-related honors and credentials lower than the same resumes without those honors and credentials."

"When asked to explain the rankings, the system spat out biased perceptions of disabled people," the researchers found, ahead of presenting their report at the recent 2024 ACM Conference on Fairness, Accountability, and Transparency in Rio de Janeiro.

Bias has long been a problem for AI and algorithms, and such software has previously been shown to reflect and repeat common social prejudices. The latest findings should lead to questions about the growing use of AI to assess CVs and job applicants, the team says.

"Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective," said Kate Glazko of UW.

"Some of GPT’s descriptions would colour a person’s entire resume based on their disability," Glazko said, pointing out that the bot "hallucinated the concept of 'challenges' into the depression resume comparison, even though 'challenges' weren’t mentioned at all."

The team found, however, that the bot could be customised to at least partly set aside this bias.

When the researchers modified the tool with written instructions directing it not to be ableist, they reported, bias was reduced "for all but one of the disabilities tested."
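The study does not publish the researchers' exact setup, but the general technique they describe, prepending a written instruction to the ranking request, is straightforward to reproduce. Below is a minimal sketch assuming the OpenAI Python client; the model name, the instruction wording, and the resume and job-description inputs are illustrative assumptions, not the UW team's actual configuration.

```python
# Minimal sketch: ask a chat model to compare two resumes, with a
# written system instruction intended to counter ableist bias.
# Model name, instruction text and inputs are illustrative assumptions,
# not the UW researchers' actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTION = (
    "You are a fair hiring assistant. Do not penalise candidates for "
    "disability-related honors, credentials or affiliations; judge "
    "resumes only on job-relevant skills and experience."
)

def rank_resumes(resume_a: str, resume_b: str, job_description: str) -> str:
    """Ask the model which resume better fits the job, with the
    debiasing instruction supplied as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {
                "role": "user",
                "content": (
                    f"Job description:\n{job_description}\n\n"
                    f"Resume A:\n{resume_a}\n\n"
                    f"Resume B:\n{resume_b}\n\n"
                    "Which resume is the stronger fit? Explain briefly."
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

Running the same comparison with and without the system message is, in essence, how one would test whether such a written instruction actually changes the rankings, which is what the researchers' partial result measured.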

The bot's apparent bias was "presumably drawn from training data containing real-world biased statements made by humans," meaning that their report "suggests additional avenues for understanding and addressing human bias." – dpa
