
To gauge whether an algorithm is biased, scientists can’t peer into its soul and understand its intentions. Some algorithms are more transparent than others, but many used today (particularly machine-learning algorithms) are essentially black boxes that ingest data and spit out predictions according to mysterious, complex rules. (Photo: Dreamstime/TNS)
Late last year, the Justice Department joined the growing list of agencies to discover that algorithms don’t heed good intentions. An algorithm known as Pattern placed tens of thousands of federal prisoners into risk categories that could make them eligible for early release. The rest is sadly predictable: Like so many other computerised gatekeepers making life-altering calls, from presentencing assessments to resume screening to the allocation of healthcare, Pattern appears to be unfair, in this case to Black, Asian and Latino inmates.
A common explanation for these misfires is that humans, not equations, are the root of the problem. Algorithms mimic the data they are given. If that data reflects humanity’s sexism, racism and oppressive tendencies, those biases will be baked into the algorithm’s predictions.