Friend or foe? Machine learning and how it is shaping our lives

05 May 2021
Taken from the May 2021 issue of Physics World where it first appeared under the headline "Judge, jury and AI-xecutioner".

Achintya Rao reviews A Citizen’s Guide to AI by John Zerilli and co-authors

Wolf or husky? Machine learning can be used to train algorithms to do specific tasks such as categorizing images. (Courtesy: iStock/KenCanning; iStock/MajaMitrovic)

“Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?” So asks Will Smith’s Detective Del Spooner of the artificial being Sonny, in a memorable scene from the 2004 film I, Robot. Sonny, portrayed by Alan Tudyk, retorts with a question of his own: “Can you?”

This idea that we may have different expectations of artificial intelligence than we do of our fellow human beings is a recurring theme of A Citizen’s Guide to Artificial Intelligence, by John Zerilli, a philosopher of cognitive science and researcher at the University of Cambridge, UK, and six co-authors, all experts in the field. Take, for example, this claim in the second chapter: “Transparency of an exceptionally high standard is being trumpeted for domains where human decision makers themselves are incapable of providing it.” Can we be accused of double standards? Perhaps it depends on the kind of AI we’re looking at, and what it was designed to do.

As a science-fiction enthusiast, I opened the book having only glanced at the cover, expecting it to deal with the kind of AI seen in Asimov’s works or Star Trek or Blade Runner. However, the book – which defines AI as “the science of making computers produce behaviours that would be considered intelligent if done by humans” – in fact discusses the sub-field of machine learning (ML). ML involves the use of predictive models that can examine a given dataset and perform certain tasks, such as distinguishing between wolves and huskies, or determining whether a loan-seeker has a suitable credit score. ML has seen a huge leap in recent years, making it a relevant real-world focus for the book; while we are still some way from a general AI – the holy grail of AI research – ML is already rapidly permeating every facet of our lives.
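
To make this concrete, here is a minimal sketch of the kind of supervised-learning task the authors have in mind – my own illustration, not an example from the book. It uses the scikit-learn library, and every feature, threshold and data point is fabricated, standing in for a real historical lending record.

    # A toy predictive model: learning past loan decisions from two
    # hypothetical features (income and debt-to-income ratio). All data here
    # are synthetic and exist only to illustrate the workflow.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    income = rng.normal(50, 15, size=1000)        # arbitrary units
    debt_ratio = rng.uniform(0, 1, size=1000)
    X = np.column_stack([income, debt_ratio])

    # A made-up rule standing in for historical human lending decisions
    y = (income - 60 * debt_ratio > 20).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)   # "learn" from past cases
    print("accuracy on held-out applicants:", model.score(X_test, y_test))

The same pattern – fit a model to labelled examples, then apply it to unseen cases – underlies both the wolf-versus-husky classifier and the credit-scoring systems mentioned above.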

The authors seek to equip the reader with the knowledge necessary to make informed decisions concerning the regulation of the use of ML tools, in what Zerilli and co-authors call the “new algorithmic world order”. The book’s scope is primarily political, rather than technical. The authors frequently quote philosophers and discuss legal matters, introducing the reader to the ethical considerations of implementing AI systems. After all, can we rely on automated tools to decide who gets a loan or who gets released on bail, without human intervention and reasoning?

Despite this philosophical emphasis, the authors do provide the reader with a theoretical framework for understanding the concepts discussed, presenting the history of predictive models going all the way back to actuarial tables. One crucial topic that is articulately covered is neural networks, which are modelled after human brains. I learnt a lot from this section, which explained how these tools receive inputs and produce outputs via numerous “hidden units” that perform classification tasks. Though the hidden units can be tuned through the adjustable weights assigned to them, much as the strength of a synapse between neurons can be regulated, their inner workings are a complete black box, even to the programmers who develop them. The question of how human societies can accept the outputs of these black boxes, without explanation or reasoning, naturally arises here, as does the aforementioned question of whether we have double standards.
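
As a rough sketch of this architecture – again my own illustration rather than anything from the book – a single layer of hidden units can be written in a few lines of NumPy: inputs are multiplied by adjustable weights, passed through a non-linearity, and combined again to produce an output. The layer sizes and random weights below are placeholders; in a real system the weights are tuned during training.

    # A bare-bones forward pass through one hidden layer, illustrating the
    # "inputs -> hidden units -> outputs" picture. The weights are random here;
    # in practice they are adjusted during training.
    import numpy as np

    rng = np.random.default_rng(1)

    n_inputs, n_hidden, n_outputs = 4, 8, 2          # sizes chosen arbitrarily
    W1 = rng.normal(size=(n_inputs, n_hidden))       # weights into the hidden units
    W2 = rng.normal(size=(n_hidden, n_outputs))      # weights out of the hidden units

    def forward(x):
        """Map an input vector to class probabilities via the hidden layer."""
        hidden = np.tanh(x @ W1)     # each hidden unit: weighted sum + non-linearity
        scores = hidden @ W2         # combine hidden activations into output scores
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()       # softmax turns scores into probabilities

    print(forward(rng.normal(size=n_inputs)))        # e.g. two class probabilities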

The book compares the transparency of neural networks with that of human cognition, arguing that we ourselves are not aware of the real reasons we make decisions, yet we trust judges and juries almost implicitly. I am, however, largely unconvinced by this line of reasoning – judges often do provide an explanation of their verdicts, based on experience and on much more than can be “quantified” for an ML algorithm. The authors make the case that algorithms can help overcome human biases by taking into account many more factors than humans can when making a decision. They are of course careful to note that this is a theoretical ideal: in practice, the extraordinary power of ML is ultimately reliant on the humans who build and train the tools, and “both the building and the training processes are open to bias”. Zerilli and co-authors do their best to decouple the theoretical technology from its practical implementation, simultaneously providing the reader with the resources to question AI’s slow creep into our lives and serving as defence counsel for AI itself.

Zerilli’s background in law is evident throughout, with many legal processes described in excruciating detail. To be fair, this is understandable given the objective of the book. For example, in discussing how expert reports and committees provide grounds for deliberation and consensus-building, Zerilli and co-authors limit themselves to judges (experts) and juries (committees). And in the last chapter, the story of CERN’s Large Hadron Collider and micro black holes – a story I’m familiar with – is examined almost entirely through a legal lens, without discussing the science or the nature of scientific consensus.

The release of A Citizen’s Guide to Artificial Intelligence is well timed. It seems not a week goes by without the tech world being embroiled in a new ethical controversy involving AI. The Facebook–Cambridge Analytica scandal of 2018 is brought up every time an election is discussed, and 2020 saw Timnit Gebru unceremoniously forced to leave Google’s Ethical AI team, which she co-led (see “Fighting algorithmic bias”). Questions are also being asked about the ethics of – and the lack of informed consent in – harvesting the personal data of millions of people in order to train ML algorithms to serve personalized advertisements in the name of surveillance capitalism.

So should you pick up a copy for your bookshelf? As you might expect of a book that has considerable overlap with academic papers published by the same authors, it is thoroughly researched, but it is not a brisk read. With new concepts on nearly every page, some sections may need to be read more than once to be understood. However, the book can be read in any order, so a reader can dip into a chapter at any time, rather than tackling the whole thing at once. The breadth of topics covered means that there is something to learn for everyone, from experts in AI to lay citizens curious about the role of AI in their lives. Students of technology and of ethics will find it particularly valuable.

I for one plan on giving it a second read to prepare myself for endless debates over a pint or two once our ongoing health crisis is behind us.

  • 2021 MIT Press $40hb 232pp