The ‘CEO’ is a man: how Chinese artificial intelligence perpetuates gender biases


  • AI
  • Thursday, 30 Sep 2021

The study found searching for words like ‘CEO’ or ‘scientist’ often returns images of mostly men. Algorithms have been scrutinised across the world for their ability to reinforce cultural stereotypes. — SCMP

As artificial intelligence becomes increasingly integrated into modern society, evidence suggests that the algorithms meant to help eliminate cultural biases have problems of their own, and Chinese companies are no different.

A report published on Monday by the Mana Data Foundation, a Shanghai-based public welfare foundation, and UN Women, found systematic prejudices against women in many programmes.

For example, on major Chinese search engines like Baidu, Sogou and 360, words like “engineer”, “CEO” or “scientist” returned mostly images of men.

Furthermore, searching keywords like “women” or “feminine” often returned derogatory videos and photos, or links to content such as “women’s sexual techniques” and information about vaginas.

The team behind the gender biases report celebrates its release. Photo: UN Women

The report’s purpose was to provide concrete evidence for gender discrimination in AI algorithms so that companies learn about the problem and fix it.

The report provided gender discrimination cases in new media, search engines, open-source coding, employment algorithms and consumption models.

Globally, artificial intelligence is often accused of perpetuating cultural biases. For example, US authorities have used artificial intelligence to predict recidivism rates, or the likelihood that a convicted criminal would reoffend.

When ProPublica analysed a product called COMPAS, it found that black people were twice as likely to be labelled “high risk” as white people with the same criminal background.

The Chinese report found that deep learning algorithms often missed offensive content directed towards women, such as phrases like “besides giving birth, women are useless”, but were good at catching pornography and violent imagery.

Online advertisers also use algorithms to try to target their campaigns and boost sales, but these campaigns can easily objectify women. On an unnamed e-commerce platform, searching for beer products led to an advertisement from a beer company that featured a semi-pornographic picture of a woman, the report said.

Kuang Kun, an expert on the project, said: “Gender discrimination exists in algorithms because the data collected to train AI reflects the discrimination that exists in the human world, and because algorithm engineers lack awareness, they do not include solutions.”
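Kuang’s point can be illustrated with a minimal sketch (hypothetical data, not from the report): a toy model that learns word–gender associations purely from co-occurrence counts in its training text will reproduce whatever imbalance that text contains, without any engineer deliberately encoding it.

```python
# Toy illustration of "bias in, bias out": the model only counts
# co-occurrences in its (hypothetical) training corpus, yet ends up
# associating "CEO" with men because the corpus does.
from collections import Counter

# Hypothetical corpus of (occupation, pronoun) pairs extracted from text.
corpus = [
    ("CEO", "he"), ("CEO", "he"), ("CEO", "he"), ("CEO", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts = Counter(corpus)

def gender_association(word):
    """Return the pronoun that co-occurs with `word` most often."""
    he, she = counts[(word, "he")], counts[(word, "she")]
    return "he" if he >= she else "she"

print(gender_association("CEO"))    # -> he  (3 of 4 training pairs)
print(gender_association("nurse"))  # -> she (2 of 3 training pairs)
```

Nothing in the code mentions gender stereotypes; the skew comes entirely from the data, which is why the report’s recommendations focus on both the training data and the engineers’ awareness of it.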

A survey in the report found that 58% of respondents working in AI-related fields did not know gender discrimination existed in algorithms, and 80% did not know how to solve the problem.

Companies had to improve their facial recognition algorithms after dark-skinned women were found to be more likely to encounter errors than other racial and gender groups. Graphic: UN Women

However, change is possible. In one case, Baidu, the largest search engine in China, linked anti-domestic-violence resources to 11 keywords and phrases typically searched by people in abusive relationships.

The company also changed 176 anti-women job descriptions during an internal clean-up.

Citing occupational gender segregation, the report said Chinese society should provide more education opportunities for women, and that companies should offer equal training and promotion opportunities to men and women.

In 2019, 89.4% of all computer programmers were men, compared to 10.4% for women, according to the China Internet Information Centre.

As for the algorithms themselves, the report said companies need to reduce developer biases and be transparent about what the algorithm does and how it uses the data.

Companies can also create mechanisms such as feedback channels, and they can have a human, rather than an algorithm, make important decisions.

In 2018, a paper from MIT and Stanford University also examined race and gender in facial recognition technology.

The team examined three commercially released facial-analysis programmes and found an error rate of 0.8% for light-skinned men versus 34.7% for dark-skinned women.

In April 2020, Google announced that it was dedicating energy towards fixing gender biases in its translation tool.

For example, the tool had incorrectly identified Marie Curie as a man during the translation process. Google is building a data set to help improve Translate’s machine learning regarding gender biases. – South China Morning Post
