Bad moral code? When algorithms have bigoted world views


TECH | Sunday, 07 Jul 2019

A visitor to the Cebit expo in Hanover pictured in front of a light display. — dpa

Technology is supposed to help humans be more productive, and algorithms are taking all kinds of tasks out of our hands. But when algorithms go wrong, it can be a real horror story.

Like when an algorithm meant to help with Amazon's hiring process suggested only male applicants. Or when Google's image-recognition software kept mixing up black people with gorillas and telling Asian people to open their eyes.

So what's up with that? Can algorithms be prejudiced?

Lorena Jaume-Palasi, founder of the Ethical Tech Society in Berlin, says it's more complicated than that. "People are always the reason for discrimination," she says.

"Instead of trying to regulate the reasons discrimination exists, we are focusing on the technology, which just mirrors discriminatory practices," she says.

Algorithms are instructions on how to solve a particular problem. They tell the machine: This is how to do this thing. Artificial intelligence (AI) is based on algorithms.

AI mimics intelligent behaviour, and the machine is instructed to make informed decisions. To do that successfully, it needs large amounts of data, from which it can recognise patterns and base its decisions on them.

This is one explanation for why algorithms can turn out so nasty: Often, they are making decisions based on old data.

"In the past, companies did have employment practices that favoured white men," says Susanne Dehmel from Bitkom. If you train an algorithm using this historic data, it will choose candidates that fit that bill.

When it comes to racist photo-recognition software, the fault very likely lies not with the algorithm itself, but with the choice of images used to train the machine in the first place.

Now, there is a positive side to all this: The machines are holding a mirror up to human society, and showing us a pretty ugly picture. Clearly, discrimination is a big problem.

One solution is for tech companies to take more of an active role in what algorithms spit out, and correct behaviours when needed.

This has already been done. For example, when US professor Safiya Umoja Noble published her book Algorithms Of Oppression, in which she criticised the fact that Google's search results for the term "black girls" were extremely racist and sexist, the tech giant decided to make some changes.

We need to ask how we can ensure that AI technologies make better and fairer decisions in the future. Dehmel says government regulation isn't necessarily the answer.

"It is a competency problem. When you understand how the technology works, then you can counter discrimination carefully," she says.

Past examples have already shown that it isn't enough to simply strip out information about gender and race – the algorithms still made discriminatory connections through other, correlated data points and produced the same results. Instead, Dehmel suggests developers build diverse data sets and run careful trials before training the machines.
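Why doesn't deleting the gender column fix things? Continuing the hypothetical sketch above: if some remaining feature correlates with gender, say a club membership or a turn of phrase on a CV, the model rediscovers the bias through that proxy.

    # Hypothetical sketch: the gender column is removed, but a correlated
    # proxy feature remains in the training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    skill = rng.normal(0, 1, n)
    gender = rng.integers(0, 2, n)
    # Proxy that agrees with gender 90% of the time (e.g. a gendered hobby).
    proxy = np.where(rng.random(n) < 0.9, gender, 1 - gender)
    hired = (skill + 2.0 * gender + rng.normal(0, 0.5, n)) > 1.0

    # Train WITHOUT the gender column -- only skill and the proxy.
    model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

    # Equally skilled candidates who differ only in the proxy feature:
    print(model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1])
    # The discriminatory gap reappears through the proxy, which is why
    # diverse data sets and careful trials matter more than deleting columns.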

Jaume-Palasi believes continuous checks on algorithmically based systems are necessary, and AI should be created by more than just a developer and a data scientist.

"You need sociologists, anthropologists, ethnologists, political scientists. People who are better at contextualising the results that are being used across various sectors," she says.

"We need to move away from the notion that AI is a mathematical or technological issue. These are socio-technological systems, and the job profiles we need in this field need to be more diverse." – dpa
