Bad moral code? When algorithms have bigoted world views


  • TECH
  • Sunday, 07 Jul 2019

A visitor to the Cebit expo in Hamburg pictured in front of a light display. — dpa

Technology is supposed to help humans be more productive, and algorithms are taking all kinds of tasks out of our hands. But when algorithms go wrong, it can be a real horror story.

Like when an algorithm built to help Amazon's hiring process suggested only male applicants. Or the times when Google's image recognition software kept mixing up black people with gorillas and telling Asian people to open their eyes.

So what's up with that? Can algorithms be prejudiced?

Lorena Jaume-Palasi, founder of the Ethical Tech Society in Berlin, says it's more complicated than that. "People are always the reason for discrimination," she says.

"Instead of trying to regulate the reasons discrimination exists, we are focusing on the technology, which just mirrors discriminatory practices," she says.

Algorithms are instructions on how to solve a particular problem. They tell the machine: This is how to do this thing. Artificial intelligence (AI) is based on algorithms.

AI mimics intelligent behaviour: the machine is instructed to make informed decisions. To do that successfully, it needs large amounts of data, from which it can recognise patterns and base its decisions on those patterns.

This is one explanation for why algorithms can turn out so nasty: Often, they are making decisions based on old data.

"In the past, companies did have employment practices that favoured white men," says Susanne Dehmel from Bitkom. If you train an algorithm using this historic data, it will choose candidates that fit that bill.

When it comes to racist photo recognition software, it is also very likely that it was not the algorithm's fault – instead, the choice of images used to train the machine may have been problematic in the first place.
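The pattern Dehmel describes can be sketched in a few lines of Python. Everything here is hypothetical (the data and the scoring rule are invented for illustration), but it shows how a model "trained" on historic hires simply reproduces their imbalance:

```python
from collections import Counter

# Hypothetical historic hiring data: nine men hired for every one woman.
past_hires = ["male"] * 9 + ["female"] * 1

# "Training" here just means counting how often each attribute
# appeared among past hires.
freq = Counter(past_hires)
total = len(past_hires)

def score(candidate_gender):
    """Score a candidate by how common their attribute was among past hires."""
    return freq[candidate_gender] / total

print(score("male"))    # favoured purely because past hires were mostly men
print(score("female"))  # penalised by the same historic pattern
```

The model is not "prejudiced" in any human sense; it is faithfully replaying the pattern it was handed.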

Now, there is a positive side to all this: The machines are holding a mirror up to human society, and showing us a pretty ugly picture. Clearly, discrimination is a big problem.

One solution is for tech companies to take more of an active role in what algorithms spit out, and correct behaviours when needed.

This has already been done. For example, when US professor Safiya Umoja Noble published her book Algorithms of Oppression, in which she criticised the fact that Google's search results for the term "black girls" were extremely racist and sexist, the tech giant decided to make some changes.

We need to ask how we can ensure that AI technologies make better and fairer decisions in the future. Dehmel says this does not necessarily call for government regulation.

"It is a competency problem. When you understand how the technology works, then you can counter discrimination carefully," she says.

Past examples have already shown that it isn't enough to simply remove information about gender and race – the algorithms still found discriminatory proxies in the remaining data and produced the same results. Instead, Dehmel suggests developers create diverse data sets and conduct careful trials before training the machines.
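A toy sketch of that proxy problem (all data invented for illustration): even with the gender column deleted, a feature that happens to correlate with it lets the model rediscover the same split.

```python
from collections import defaultdict

# Invented records. The gender column has been removed, but "hobby"
# happens to correlate with it in this toy data set.
records = [
    {"hobby": "football", "hired": 1},
    {"hobby": "football", "hired": 1},
    {"hobby": "chess",    "hired": 1},
    {"hobby": "netball",  "hired": 0},
    {"hobby": "netball",  "hired": 0},
    {"hobby": "chess",    "hired": 1},
]

# "Train" on hobby alone: learn the historic hiring rate per hobby.
outcomes = defaultdict(list)
for r in records:
    outcomes[r["hobby"]].append(r["hired"])

hire_rate = {hobby: sum(o) / len(o) for hobby, o in outcomes.items()}
print(hire_rate)  # the football and netball rates mirror the old gender split
```

The model never sees gender, yet its scores still track it, because the proxy feature carries the same information.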

Jaume-Palasi believes continuous checks on algorithmically based systems are necessary, and AI should be created by more than just a developer and a data scientist.

"You need sociologists, anthropologists, ethnologists, political scientists. People who are better at contextualising the results that are being used across various sectors," she says.

"We need to move away from the notion that AI is a mathematical or technological issue. These are socio-technological systems, and the job profiles we need in this field need to be more diverse." – dpa
