The cognitive era: Whither the machine brain?


Speedy progress: It’s only a matter of time before we create a computer that’s smarter than the human brain. — IBM


Given dire warnings about the threat to the human race from some prominent commentators, including a billionaire visionary entrepreneur and one of the world's most prominent physicists, it is almost inevitable that you would associate Artificial Intelligence (AI) with the rise of the machines, aka Skynet, aka the Terminator.

Futurists predict that it's only a matter of time before we create a computer that's smarter than the human brain, and that once we do, the very smart machines will create even smarter machines, eventually leapfrogging the capacity of human intelligence. It doesn't take a Hollywood imagination to predict the doomsday scenario of super-smart computers creating machines with capabilities vastly beyond human ken; after all, even the lowliest computer today can out-compute the average human at vastly superior speeds.

Humans, however, have always been able to think, and therefore to outsmart fast, efficient computers that can only do what they are programmed to do. Until now.

Cognitive computing, artificial intelligence, machine learning: call it what you will, the field is on the cusp of a technology explosion. We've been here before. In the 1970s and again in the 1980s, there was a big surge of interest in AI, with scientists and computer technologists predicting that every home would soon be equipped with a talking, intelligent robot. Movies about super-intelligent machine assistants abounded, and in popular culture humans spoke effortlessly to benign, superintelligent computers (remember Star Trek and Knight Rider?).

And then, nothing. Thanks to the hype and hoopla, and the failure of these futuristic visions to materialize, the public at large lost interest in the subject and turned its attention to other things, such as social media.

How did cognitive get in?

Yet, if you read the news, there is a groundswell of interest in smart machines. Facebook, Apple, Microsoft, Google, and IBM have all announced or launched some sort of cognitive computing capability, from Apple's Siri on the phone to the blue-hued personality of Microsoft's Cortana to the ever-expanding capabilities of IBM's Watson in fields as disparate as oncology and your connected washing machine.

In 1997, IBM's Deep Blue defeated Garry Kasparov, the reigning world chess champion, and in 2011, IBM did it again when Watson played the TV game show Jeopardy! and defeated two of the show's most successful champions. In 2016, Google's DeepMind announced that its AlphaGo system had taken on, and defeated, a human champion at the ancient game of Go, a game said to be exponentially more complex than chess.

Although experts argue over the differences between 'deep learning' and 'shallow learning', the fact is that we now have computer systems which understand, to varying extents, natural human language, including its subtleties, colloquialisms and contradictions. We have computer systems that learn, and therefore improve, over time, and machines that have reasoning capability.

While conventional computer systems are deterministic, producing the same answer every time, cognitive or AI systems are probabilistic, which is to say they don't always come up with absolute answers, but may assign probabilities to a range of possible outcomes. Anyone who watched IBM Watson's televised Jeopardy! match in 2011 will remember that Watson came up with several hypotheses, with a probability assigned to each answer.
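
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. The questions, answers, scores and function names are all invented for illustration and are not Watson's actual pipeline: a deterministic program returns a single fixed answer, while a probabilistic answerer returns several hypotheses ranked by confidence, much as Watson displayed its top answers on screen.

    # Illustrative sketch only: contrasts a deterministic lookup with a
    # probabilistic answerer that ranks several hypotheses by confidence.
    # The data and scores are invented; this is not Watson's API.

    def deterministic_answer(question: str) -> str:
        """A conventional program: one fixed answer per input, or failure."""
        lookup = {"Capital of France?": "Paris"}
        return lookup.get(question, "I don't know")

    def probabilistic_answer(question: str) -> list[tuple[str, float]]:
        """A cognitive-style system: several hypotheses, each with a probability."""
        # In a real system these scores would come from trained models weighing
        # many pieces of evidence; here they are hard-coded for illustration.
        hypotheses = {
            "Who wrote 'The Raven'?": [
                ("Edgar Allan Poe", 0.92),
                ("Emily Dickinson", 0.05),
                ("Walt Whitman", 0.03),
            ]
        }
        return sorted(hypotheses.get(question, []), key=lambda h: -h[1])

    print(deterministic_answer("Capital of France?"))        # Paris
    for answer, p in probabilistic_answer("Who wrote 'The Raven'?"):
        print(f"{answer}: {p:.0%}")                          # ranked hypotheses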

While programmed computers (which is everything we're used to) always produce predictable results, conditioned by their programs, intelligent systems consider context and the local conditions prevailing at the time; they take changes in circumstances into account before deducing a response.
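
A small, hypothetical sketch of what context-dependence means in practice (the assistant, its requests and its rules are all invented for illustration): the same request yields different responses depending on the conditions supplied with it.

    # Hypothetical sketch: the same request produces different responses
    # depending on the context supplied with it.

    from dataclasses import dataclass

    @dataclass
    class Context:
        hour: int        # local time, 0-23
        location: str    # e.g. "home" or "office"

    def respond(request: str, ctx: Context) -> str:
        """A context-aware reply: the answer depends on prevailing conditions."""
        if request == "set the temperature":
            if ctx.location == "home" and ctx.hour >= 22:
                return "Lowering the thermostat for the night."
            return "Setting the thermostat to your daytime preference."
        return "Sorry, I can't help with that."

    print(respond("set the temperature", Context(hour=23, location="home")))
    print(respond("set the temperature", Context(hour=10, location="office")))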

Elements of AI already exist in the digital assistants on our smartphones, and, behind the scenes, some large commercial systems use intelligent machines to handle work that until recently would have been done by a call operator, answering complex queries that might once have flummoxed a human. What gives?

The rise of cognitive (used here interchangeably with 'artificial intelligence' or 'AI', although pundits will hold that AI has a more specific meaning) is consistent with another trend of the modern age: the generation and consumption of vast amounts of data.

This is the first installment in a two-part series. For the second part, go here.

Lee Yu Kit is an executive architect at IBM.