Philosophy, sociology and religion

The problem of missile defence

15 May 2014
Taken from the May 2014 issue of Physics World

Arguments That Count: Physics, Computing and Missile Defense, 1949–2012
Rebecca Slayton
2013 MIT Press £24.95/$35.00hb 272pp

Too many parameters

The idea of building a missile system to defend a nation from the horrors of nuclear attack first entered the public consciousness in the 1980s, when US president Ronald Reagan – backed by prominent (and controversial) scientific advisers such as the physicist Edward Teller – promoted the Strategic Defense Initiative as a supposedly impenetrable shield against the Soviet Union’s nuclear arsenal. At the time, debate about the technical feasibility of this so-called “Star Wars” system centred on questions about whether it was possible to detect a fast incoming missile, distinguish it from decoys or hit it with another missile or laser.

But while the physics problems associated with hitting incoming missiles are probably soluble (given sufficient resources), the computing problems are rather more challenging. Both types of problem had been hotly debated in the 1950s and 1960s as missile technology, software and computers evolved, and in 1963 the physicist Herbert York gave evidence to Congress that offensive missiles would always have the advantage over defensive ones. For a “missile shield” to be effective, it had to work quickly and precisely. It needed to work correctly the first time even though it would be impossible to test it in a realistic way. It needed to cope instantly with a whole range of (possibly unknown) countermeasures. And crucially, all of its multiple technologies and layers had to fit together seamlessly in order to pass flight data between them within minutes and without errors. Creating such a system would be like designing a computer that could beat Garry Kasparov at chess, first time and every time, in matches where the pieces and board were constantly changing.

These and similar arguments prevailed for many years, but when missile defence re-emerged as a hot topic in the 1980s, they were initially disregarded. Only in 1985, when a very senior software engineer resigned from the US programme, citing the risk of catastrophic failure, did these arguments begin to register with the Reagan administration.

The question of why it took so long for certain key issues, such as computing problems, to attract attention is explored in Rebecca Slayton’s book Arguments That Count: Physics, Computing and Missile Defense, 1949–2012. Slayton, an expert in technology policy, is interested in why some scientific arguments “stick” and others do not, and her book offers some fascinating insights into how people from various disciplines involved in missile defence presented their arguments and counter-arguments. However, Slayton’s book is far more than just a historical account of various missile-defence programmes. It also includes a whole series of thoughts and conclusions about the nature of scientific advice and decision-making in our highly technological world.

One of Slayton’s conclusions is that scientific advice cannot be separated from politics. Missile-defence systems are a good example. Because their primary goal is to reduce the effectiveness of attacking missiles, one of the cheapest responses for an adversary is simply to deploy more missiles. The better a shield becomes, the more missiles the adversary must deploy to counter it. A complex system such as “Star Wars” could also be used to destroy space-based sensors, thus rendering an opponent electronically blind and facilitating a possible surprise attack. Both are prime examples of how scientific improvements can have politically destabilizing consequences – and indeed, they contributed to a breakdown in US–Soviet negotiations on strategic arms reductions in 1986.

Slayton also argues that “experts” are never just technical experts; rather, they are also experts in persuading and presenting. In that sense she regards scientific advice as being staged and dramatized to gain maximum impact. Again, the history of missile defence offers supporting examples. Slayton points out that many supposedly objective field tests of anti-missile systems have, in fact, been heavily biased in ways that make it easier for systems to find and hit incoming “warheads” and distinguish them from decoys (some of which have helpfully been made much larger). During the Gulf War, the US Patriot missile system tragically led to the shooting down of an Allied Tornado jet, while failing numerous times to shoot down incoming Iraqi Scud missiles (despite initial claims to the contrary).

Slayton uses the term “disciplinary repertoire” to describe how experts use a set of rules, knowledge and habits to rhetorically distinguish the subjective (and politically controversial) aspects of a problem from its (putatively objective) technical parts. This definition may seem controversial to some, but in my view it is a useful way of stripping away the pretence that science can operate in an objective way when it is dealing with highly political, human-related issues.

From my UK perspective, it seems a pity that Slayton’s analysis is based solely on US sources. During the 1980s a whole series of British-based analysts wrote books and articles opposing the Strategic Defense Initiative on political, strategic and technical grounds, and Slayton’s software-related arguments were also made in the UK by groups of computer professionals such as Electronics and Computing for Peace. These groups later merged into organizations such as Scientists for Global Responsibility, which I now chair, that continue to make similar points today. Another factor that Slayton does not address is the role of funding and lobbying in decision-making, which in my view is also a key issue.

Despite these limitations, however, her conclusions certainly have implications far beyond missile defence. For example, we rely upon “expert” opinion as to how far we should adopt genetic-modification technologies in our food products. But is it really possible to extricate such evidence from the fact that a handful of large corporations control the world’s food supply? On the other hand, we have an unprecedented consensus among experts regarding climate change, but they still struggle to get governments to take sufficient action. I would argue that the problem lies partly with these experts’ “disciplinary repertoire”, but also with the existence of a strong, politically and financially motivated opposing lobby.

In her subtle and understated style, Slayton concludes that we must “recognize that the risks we face can only partly be addressed by the physical ingenuity of America’s top scientists and engineers”. She adds that all “complex technological systems…can never be only physical, but…are simultaneously social and political to the core”. I agree with her, while adding that I wish scientists and engineers of all nations could be more usefully engaged in less complex technological solutions to problems such as climate change, rather than dedicating their skills to conflict and destruction.
