Today, practitioners of artificial intelligence (AI) have become the technology world’s rock stars. “Deep learning” (a fashionable branch of AI), which uses neural networks - inspired by the activity of layers of neurons in the brain - to churn through volumes of big data looking for patterns, is in high demand for building software, particularly at the companies collectively known as FAANG (Facebook, Apple, Amazon, Netflix and Google).
Data, runs the common refrain, is the new oil.
Products and services of the FAANG are all built on a foundation tying big data to human beings (les stats, c’est moi). In essence, data is abstract, technical and intangible. Still, data take on value for individuals as they become better understood, not least because personal data are so often hacked, leaked or stolen. This has caused a tectonic shift in public perception and understanding of data collection.
Indeed, people have started to take notice of the data they give away. Far more solid, however, is the concept of identity. So, it is only when data are understood to mean (and refer to) “people” that individuals start to demand accountability and privacy. Such accountability stretches far beyond an obligation to secure protection, considering how data are used: (i) by firms, to decide what sort of access people have to products and services - Uber ratings determine who gets a taxi; Airbnb reviews decide what sort of property you can stay in; and dating-app algorithms choose your potential life partners.
Firms also use location data and payment history to sell products; online searches set the prices people pay; and those with a good Zhima credit score (administered by an Alibaba subsidiary) enjoy discounts and waived deposits. And (ii) by states, which pose a far greater threat. Algorithms that can recognise patterns in data can pinpoint dissidents, or even those with counter-establishment opinions. In 2012, Facebook experimented with using data to manipulate people’s emotions. In 2016, Russia was alleged to have used data to influence the US presidential election.
The question is not whether someone is doing something wrong; rather, it is whether others can do wrong to them. In the end, we need to understand that it is not data that are valuable. It is us, the people - tangible human beings, not abstract data - who really matter.
Often, there is a confusing lack of clarity about what AI means. In many cases, it refers to machine learning, often better described as computational statistics or, more crudely, labelling stuff. AI does have many important uses. But it is most certainly not some magic pixie dust that can be sprinkled onto any base metal to turn it instantly into gold. The “Gartner hype-cycle” (which attempts to plot expectations of technologies against their commercial viability) reckons that “deep learning” is at the “peak of inflated expectations.”
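The description of machine learning as “computational statistics” or “labelling stuff” can be made concrete with a toy sketch (all data and function names here are invented for illustration): the program is shown labelled examples, computes simple averages per label, and then labels new inputs by proximity.

```python
# A toy illustration of machine learning as "computational statistics":
# train on labelled examples by averaging each label's points (centroids),
# then label a new point by its nearest centroid. Data are made up.

def train(examples):
    """examples: list of ((x, y), label). Returns label -> centroid."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {label: (sx / n, sy / n) for label, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Label a new point by its nearest centroid (squared distance)."""
    x, y = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - x) ** 2
                             + (centroids[lab][1] - y) ** 2)

# "Big data" in miniature: two labelled clusters of points.
examples = [((1, 1), "cat"), ((2, 1), "cat"), ((1, 2), "cat"),
            ((8, 9), "dog"), ((9, 8), "dog"), ((9, 9), "dog")]
centroids = train(examples)
print(predict(centroids, (2, 2)))   # a point near the "cat" cluster
print(predict(centroids, (8, 8)))   # a point near the "dog" cluster
```

Statistics of this kind, scaled up to billions of parameters and examples, is the engine behind the hype; nothing in it resembles pixie dust.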
Perhaps the recent big-tech stock market sell-off is a sign that the AI hype-cycle is turning, ready to slide into the “trough of disillusionment.”
Indeed, it has often been observed that a speculative investment bubble is a necessary condition for the mass deployment of disruptive technologies. The five previous techno-economic surges over the past three centuries - industry, railways, electricity, mass automation and information technology - all saw bubbles followed by busts, followed by booms, sometimes repeatedly.
Further, there is history behind “Amara’s law,” which states that we tend to over-estimate the impact of emerging technologies in the short run, and under-estimate it in the long run. In spite of AI’s current limitations, few analysts doubt its long-term potential to transform sectors such as healthcare, transport, energy, business and education. Surely, there is real substance to the AI revolution, driven by the availability of massive data sets and powerful computing.
In particular, advances in human-level text, image and voice recognition will enable us to reimagine ways of doing things. Sure enough, some of the investment in AI research may be wasted, but it has helped prime the technology’s take-up.
There is, however, a second benefit to the AI hype: it helps concentrate the minds of policymakers on this general-purpose technology.
Tensions between short-term hype and long-term hope for AI have been debated often enough among AI experts and policymakers the world over. What is interesting is the focus also placed on the political and geopolitical challenges. If data is the feedstock for AI programmes, then what does it mean for national governments when multinational companies (such as the FAANG) control so much of it?
To what extent does this erode democratic and political sovereignty? The ways in which AI can be abused are also broad and frightening, from facial-recognition technology to killer robots, to cyberwarfare and computational propaganda. A geo-technological struggle for supremacy is already developing between the US and China, a new arms race of sorts.
AI is sure to have a profound long-term economic impact, hopefully for good. But the resultant disruption will have to be carefully managed. Today, populists rail against immigrants stealing their jobs. Already, they are beginning to shout the same about robots. As ever broader areas of people’s lives are swayed by algorithms, what is needed is to ensure societal and individual fairness when it comes to automating decisions on bank loans, job selection or court sentences. Some experts aptly describe algorithms as “opinions embedded in code.” It is important that those opinions also reflect the views of women, diverse ethnic groups and the three billion people in the world who are yet to come online – not just those who are rich and white. As I see it, there are no simple and clear solutions to many of these challenges. Indeed, AI experts still cannot reach consensus on many of the most pressing issues of concern. “AI remains a wide-open field,” they all now conclude.
Disillusionment so far with the progress of AI, despite big leaps in machine and deep learning, is not unique to China. In the US, IBM laid off engineers at its flagship AI unit, Watson, last summer.
But in China, where the hype – and funding – went into overdrive in 2017, the reversal has cut more deeply.
It is reported that China overtook the US in such private investment in 2017, pulling in just shy of US$5bil; but the US$1.6bil invested in 1H’18 is less than one-third the US level. While many see industry-specific applications as the next big leap forward, there are still opportunities in what China tech investors now call “good enough” technology that can still make a difference.
Such applications have the advantage of addressing China-specific problems without requiring cutting-edge technology or significant computing power. The latter is perhaps the biggest gap in China’s AI arsenal, and explains the 2018 move by big Chinese tech companies into hardware.
Baidu, Huawei and Alibaba are among those working to build their own AI chipsets, and spearhead the drive into quantum computing.
Building chips and computing capacity also fits with Beijing’s aim, outlined in the Made in China 2025 industrial policy, to ramp up self-sufficiency. But high hurdles remain. No doubt, industries need to work together with tech companies to develop specialised AI, while the tech companies and start-ups need to be more realistic about ramping up processing power.
Looking forward, things will develop more slowly; returns on investment will be lower; hence, recouping investments will take longer.
Rapid advances in AI, if left unchecked, can lead to malevolent new strains of cyber-attack and assault. How do we prevent one of the world’s most powerful new technologies from being misused? The recent report, “The Malicious Use of Artificial Intelligence,” warned that if breakthroughs in AI continue at their recent pace, the technology will soon become so powerful that it could outflank many of the defence mechanisms built into today’s digital and physical systems.
AI will make life easier for attackers by lowering the cost of devising new cyberweapons, and by making it possible to create more precisely targeted assaults. In the field of cyberweapons, this might lead to far more effective “spear phishing,” with attacks personalised for each target.
The report also warns that drones and driverless cars could be commandeered and used as weapons.
Meanwhile, political systems could be hacked by using tools developed for online advertising and commerce to manipulate voters, taking advantage of an improved capacity to analyse human behaviours, moods and beliefs on the basis of available data.
As I see it, AI researchers should start to limit their work to take account of potential risks.
It is about time, perhaps, for algorithms to finally create meritocratic workplaces.
Several HR managers have since reported that their new automated recruitment processes are now selecting a much more diverse range of people than human managers ever did.
But there are dangers too. The most obvious risk is that the algorithms embed biases of their own, including the prospect of workers being subject to the “inhuman” judgements of computers, or even to human managers hiding behind the veneer of “data science” to offload responsibility.
Nevertheless, data science is still regarded as the “next frontier for the labour movement.”
The movement already has a set of principles to adopt in collective agreements, including the right for workers to understand how and why an AI system has made a decision, and the right to appeal it. As I see it, companies will need to be wary of wading in too deep without limits or safeguards – of “surrendering power to numbers.”
That is because of what they might lose: the subtle flexibility of human judgement; decisions tempered by empathy or common sense; and the simple ability to sort out problems by sitting down across a table. Indeed, companies that remove the “human” from human resources do so at their peril.
What then are we to do?
On my most recent visit, MIT had just launched an ambitious programme to understand how human intelligence (HI) works in engineering terms, and to apply that knowledge to build more useful machines. As I understand it, MIT’s aim is to deliver answers through two linked concepts: (i) “the core” will work to gain a fundamental understanding of how natural and computer brains work, in order to generate machine-learning algorithms for more specific applications; and (ii) “the bridge” will then apply discoveries in natural intelligence and AI to a wide variety of disciplines, including disease diagnosis, drug discovery, materials and manufacturing design, automation, synthetic biology, and finance.
The final objective: to reproduce in a machine the way HI develops from birth through infancy and childhood; also, to build an “intelligent home” for patients with chronic disease, which constantly monitors their health, predicts problems and forestalls emergencies before they occur.
This MIT initiative uses AI with the eventual aim of building machines that can do things the human brain does without effort. I am told that MIT has since been investigating the social and ethical dimensions of these AI applications as well.
As I see it, in applying AI across society, MIT needs to think really hard about fundamental moral and ethical issues. There is no escaping this responsibility.
The race to develop and deploy AI depends on it.
Former banker, Harvard-educated economist and British Chartered Scientist, Prof Lin of Sunway University is the author of “The Global Economy in Turbulent Times” (Wiley, 2015) and “Turbulence in Trying Times” (Pearson, 2017). Feedback is most welcome.