Old problem of balancing individual rights with social good just as important with AI, says China governance expert


  • TECH
  • Tuesday, 20 Aug 2019

AI poses question of how to balance individual and societal benefits against privacy concerns, Xue says. — SCMP

China has been pushing full-steam ahead with applications of artificial intelligence despite a rising chorus of concern in the West over surveillance and the potential for breaches of personal privacy.

The central government sees it as an efficient means of better policing and managing a country of 1.4 billion people, using AI's ability to gather and process vast amounts of data to tackle a raft of social problems, from petty crime to unequal access to quality healthcare and education services.

What’s more, AI is also expected to create a wave of new, higher skilled, technology-driven jobs.

The effectiveness of AI “use cases” in China also depends to some degree on its geography and demographics, says Xue Lan, a policy expert who is advising China’s science and technology ministry on AI governance. For example, China is a vast land mass with acute differences in population density between urban and rural areas, throwing up some particular problems.

Citing the example of facial recognition and public surveillance, Xue says finding petty criminals such as pickpockets, or small children separated from their parents in a public park in a big city, can be like finding the proverbial needle in a haystack without the help of advanced camera surveillance technologies.

“Facial recognition does solve quite a big problem in these cases, even though they may not be as prevalent in other countries [in the West], so at this point, I’d say let’s not rush to pass judgment on who is right and who is wrong,” said Xue, who is also dean of the Schwarzman College at Tsinghua University, in an interview in Guangzhou last month.

“Yes, facial recognition may infringe on personal privacy to a certain degree, but it also brings a collective benefit, so it is a question of how to balance individual and societal benefits.”

The AI governance expert committee was established in February this year with experts from both academia and the AI industry.

Xue Lan. Photo: Tsinghua University

Besides Xue, members of the committee include Zeng Yi, deputy director of the Research Centre for Brain-inspired Intelligence at the Chinese Academy of Sciences, Kai-fu Lee, former Google China head and veteran technology investor, Yin Qi, chief executive of facial recognition start-up Megvii, and Zhou Bowen, vice-president and head of AI and research at e-commerce operator JD.com.

In June, the expert committee released eight AI governance principles under the theme of "Developing Responsible AI", aimed at promoting the healthy development of the technology from research to application and offering relevant industry associations a framework for detailed standard-setting.

The eight principles listed are harmony and friendliness, fairness and justice, inclusivity and sharing, respect for privacy, security and controllability, shared responsibility, open collaboration, and agile governance.

Human rights groups have criticised China for using pervasive surveillance technology to monitor citizens in areas such as Xinjiang, and accused the government of running internment camps for an estimated one million or more Uygurs and other mostly Muslim minorities, which it has sought to characterise as “boarding schools”.

Asked about the example of a BBC journalist who was caught seven minutes after having his face scanned as part of an experiment to see how long he could evade capture in Shenzhen, Xue said that stability is a topmost concern in China.

“China is such a big country experiencing a giant transformation, there are great challenges to maintain social stability and protect people’s property and lives,” said Xue, who was also a member of another committee that advised on, and reviewed, the central government’s 13th Five-Year Plan on science, technology and innovation.

Visitors are tracked by facial recognition technology from state-owned surveillance equipment manufacturer Hikvision at the Security China 2018 expo in Beijing, China. Photo: AP

“In whichever country, to guarantee security, there are trade-offs between protecting public safety and violating privacy.” Even the US, when faced with the same issue, may veer toward enforcement in areas such as terrorism and national security, he said.

Balancing individual and social benefits also extends to the economy – new AI technology may create jobs and prosperity for some, but this has to be weighed against potential job losses for others who may be replaced by machines.

With the transformative power of artificial intelligence being compared by some to the widespread availability of electricity at the turn of the 20th century, new industries will be created and entire sectors swept away.

At stake is trillions of dollars of economic output that the winner will get the lion’s share of, with the crumbs left to the laggards, perhaps never to recover from surrendering the first-mover advantage.

Having been slower to industrialise than the Western powers and Japan, China is aware of the historical dimensions of the current AI opportunity, especially as the raw material of the technology plays to the nation’s strength – its people and the data they generate.

This urgency has been compounded by the current trade and tech stand-off with the US, which has awakened to the threat that a resurgent China poses in science and technology, from AI to the 5G telecommunication networks that send the data.

President Donald Trump signed an executive order in February directing the US government to prioritise AI in its research and development spending, following his State of the Union address in which he called investments in “cutting-edge industries of the future” a necessity.

Yet for China, where more than 40% of its 1.4 billion people still live in rural areas, unfettered application of AI could wipe out millions of lower-value repetitive jobs that could be better done by smart machines.

The country also has varying levels of economic development and industrialisation that defy a one-size-fits-all approach to regulation – what is a no-brainer for urbane, built-up Shanghai could be madness for wild, verdant Guizhou, for example.

With social stability a top concern of the central government, a firmer hand in guiding the use of AI can be expected than in the US, where industry sought assurances that the Trump administration would pursue a hands-off approach to regulating AI.

“The rhetoric is very strong but talk is cheap and enforcement is key,” said Jeffrey Ding, a researcher on China AI policy at the Future of Humanity Institute at the University of Oxford.

Ding added that while there were already many standards and regulation efforts under way in China, the country’s government-led approach to setting standards could lock in the technology early, whereas the more decentralised approach to standards-setting seen in the US could allow more innovation.

AI-based automation can increase productivity and help China achieve its economic development goals. But it could come at a cost. As many as 20 million manufacturing jobs could be replaced by robots by 2030, with 14 million of those jobs in China alone, according to a report by Oxford Economics in June.

Such automation is expected to add 0.8 to 1.4 percentage points to GDP growth annually depending on the speed of adoption, according to a 2017 report by McKinsey & Co.

In its “Next Generation Artificial Intelligence Development Plan” published two years ago, China laid out plans to ultimately become the world leader in AI by 2030, with a domestic AI industry worth almost US$150bil (RM627.24bil).

China's science and technology ministry has roped in many of the country’s tech giants to champion the development of various aspects of AI, including Baidu, Tencent and iFlyTek.

Xue said that the government, as the guardian of the public interest, should play a foundational role in governance. In terms of concrete measures, however, companies and industry associations can play a better role, as they are closer to the implementation and development of the technology.

Good self-regulation not only helps the development of the companies but also reduces the government’s cost of supervision, he said.

Given the far-reaching and fast-changing nature of AI, with its implications for various aspects of people’s lives and in industry, regulation has to be dynamic and responsive to the changes, he said.

“Our governance is a dynamic model that constantly adjusts and continuously learns,” Xue said. “We cannot expect legislation to effectively regulate AI ... once the bill is passed, it’s hard to adjust. On the other hand, AI develops very fast. If we are not cautious, laws and regulations might restrict the development of the industry.” – South China Morning Post