Artificial intelligence is a research field that attracts its fair share of hyperbole. Fuelled by popular depictions of intelligent robots that behave just like human beings, public debate has focused on how AI will transform our lives, replace our jobs and, in the most dystopian fantasies, take over the world. Even Sundar Pichai, the chief executive of Google, is reported to have said “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.”
Michael Wooldridge, a professor of AI at the University of Oxford in the UK, seeks to rewrite that mainstream narrative in his latest book The Road to Conscious Machines: The Story of AI. For a start, he says, creating a robot that thinks and acts like us is fiendishly difficult, and it’s far from certain whether it would one day be feasible, let alone desirable. What’s more, those futuristic fairy tales overshadow the myriad examples of the real-world benefits already delivered by AI research, such as language translation, as well as healthcare monitoring and diagnosis.
As someone who often reports on how AI has become a crucial tool in many areas of science, I was particularly interested in Wooldridge’s examination of what AI is, and what it is not. Even the term “artificial intelligence”, originally coined by American academic John McCarthy in 1956, can be misleading for a non-expert, as it suggests that the goal is to create a machine that is capable of independent thought. Instead, it involves creating algorithms and programs that allow computers to follow hundreds of millions of instructions – all of which has become possible in recent years, thanks to exponential improvements in data-processing power, and the advent of big data.
This means that computers can now do specific tasks very well: they can beat the best human players of chess and Go, and can identify anomalies in thousands of X-rays more reliably than human clinicians. But machines still can’t understand a story well enough to answer questions about it, and nor can they interpret what’s really happening in a picture. Those and other abilities that we might call general human intelligence are far from being replicated by a machine.
To understand the possibilities and limitations of AI, Wooldridge explores its evolution from the initial ideas of Alan Turing right up to the present day. He is an informative and entertaining guide, capturing the buzz of new research breakthroughs, while also explaining the key concepts that underpin modern AI research. But the book is not all about the achievements – indeed, Wooldridge also offers a candid appraisal of the many missteps along the way.
One such example was the Cyc project, initiated by AI visionary Doug Lenat in 1984, which aimed to capture everyday human knowledge in a vast expert system that would be able to answer questions in a human-like way. The information it contained was patchy, however, and its responses unpredictable, a failure that has become part of computing folklore: the scientific unit for measuring bogus claims is called a micro-Lenat, says Wooldridge, because “nothing could be as bogus as a whole Lenat”.
Despite setbacks like these, AI has in recent years been transformed from a curiosity for academics into one of the most feted areas of science and technology. That seismic shift has been enabled by advances in machine learning, which have allowed computers to solve problems without needing to follow a comprehensive list of instructions. Instead, powerful programs based on neural networks can be trained to perform specific tasks, such as playing a board game or basic language translation.
The power of these approaches was deftly illustrated in 2014, when AI start-up DeepMind showed that a computer program could teach itself to play a series of video games at a better-than-human level. Rather than relying on a vast input dataset, the program was designed to learn through a process of trial and error – in one case discovering a winning strategy that even its designers had not considered.
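The trial-and-error learning described above is known as reinforcement learning. As a flavour of the idea – a minimal, hypothetical sketch, not DeepMind’s actual game-playing system – the following toy program uses tabular Q-learning to let an agent discover, purely from rewards, that stepping right along a five-cell corridor is the winning strategy:

```python
import random

random.seed(0)

# Hypothetical toy environment (not from the book): a corridor of 5 cells.
# The agent starts at cell 0 and earns a reward of 1.0 only on reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# Q-table: the estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit the best-known action,
        # but occasionally explore a random one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps right in every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

No example games or strategies are supplied in advance: the agent starts with a blank table and converges on the winning policy from the reward signal alone, which is the same principle, at vastly smaller scale, as learning to play video games from score feedback.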
But even the most advanced AI systems have no real understanding of what they are doing, cautions Wooldridge, which can create problems if we place too much trust in a system that appears to make the right decisions most of the time. The first autonomous cars, for example, have caused fatal accidents when their human drivers weren’t paying attention, while a facial recognition system for identifying potential criminals failed because the input dataset was skewed by police mugshots in which no-one was smiling.
Wooldridge offers an even-handed analysis of these and other ethical issues, but I wasn’t convinced that someone so embedded in the research community could really scrutinize some of the moral issues that non-expert users of AI technology will need to tackle. While he recognizes that AI systems have the potential to be biased and to be used for the wrong reasons – lethal autonomous weapons anyone? – he doesn’t offer much reassurance that tentative guidelines proposed by scientists will become a regulatory framework anytime soon.
Wooldridge also indulges in some speculation on the prospects for strong AI: the road to conscious, self-aware, autonomous machines. Any such system would need to replicate what philosophers call the “theory of mind”: the human ability to understand and predict the behaviour of other people based on their beliefs and desires. Only this type of machine, says Wooldridge, would truly be able to pass the Turing test – to act in a way that would be indistinguishable from the real thing – and that goal may remain a fantasy for many years to come.
2020 Pelican Books, Penguin, 416pp, £20hb