Several eminent science authors have recently claimed that bad scientific ideas “held back” good ones throughout human history, delaying the progress of science. But as Philip Ball argues, it just isn’t that simple

If you believe in scientific progress, you will agree that the fate of all theories is to be replaced with better ones. Newton’s theory of gravitation was good; Einstein’s was better; someday we will find one that is better still, and so on. But does this mean that better theories are actually held back by inferior ones? In other words, if “wrong” ideas had not been so widely accepted, might “right” (or less wrong) ideas have arrived sooner?
To judge from comments I’ve seen in (and about) several recent popular-science books, some eminent scientists seem to think so. Their arguments are rather like a scientific version of Gresham’s law: the economic principle that “bad money drives out good”. This law got its name in the 19th century, but it was remarked upon much earlier by (among others) Nicolaus Copernicus – the father of heliocentric theory. He observed that when debased gold and silver coinage circulated widely, people hung on to any unadulterated currency they could find. Eventually, only the bad currency changed hands.
Copernicus’s link with Gresham’s law adds irony to the views of cosmologist Joe Silk, who argued (in a Nature review of Frank Wilczek’s book A Beautiful Question, which I reviewed for Physics World in October) that Copernicus’s Sun-centred universe was “held back” by Ptolemy’s Earth-centric, epicycle-laden version. According to the physicist Steven Weinberg, meanwhile, Ptolemy himself suffered from similar problems: in his book To Explain the World (see “The cradle of modern science”), Weinberg sighs that the ancient astronomer allowed his scientific acuity to be clouded by the “bad theory” of astrology. Weinberg also argues that the 14th-century French polymath Nicole Oresme came to the threshold of accepting a rotating Earth before he “finally surrendered” to the misconceived orthodoxy of a stationary one – the good idea crowded out by a bad one. (In a nice twist, Oresme, too, is sometimes credited with Gresham’s law.)
Cosmology is not the only sphere in which this notion that “bad ideas drive out good” supposedly applies. In To Explain the World, Weinberg also argues that René Descartes’ muddled ideas in physics “delayed the reception of Newtonian physics in France”. Meanwhile, in chemistry, the putative element phlogiston is often portrayed as a bone-headed impediment to Antoine Lavoisier’s oxygen-based system. And in the closest thing I’ve seen to a suggestion that science would work a whole lot quicker if history didn’t get in the way, the evolutionary biologist Jerry Coyne has proposed that “if after the fall of Rome atheism [and not Christianity] had pervaded the Western world, science would have developed earlier and be far more advanced than it is now”. That counterfactual is of a piece with Coyne’s insistence (expressed most recently in his book Faith Versus Fact) that “science and religion are incompatible”.
Historians of science tend to be much more relaxed about “wrong” ideas. Their task, after all, is not to adjudicate on science but to explain how ideas evolved. This requires them to understand theories in the context of their times: to see why people thought as they did, not to hand out medals for getting things “right”. In other words, they do history. At its worst, however, this position has sometimes led to the suggestion that there is no right and wrong in the history of science. In this extreme “relativist” view, modern science is no more valid than medieval philosophies, and today’s theories have gained acceptance solely because of social and political factors, not because they are objectively any better.
Plenty of scientists and historians have exposed this view as an imposture; David Wootton’s recent book The Invention of Science is but one example. But even if we reject extreme relativism and accept that science develops ever-more-reliable theories about the world, must we also conclude that better theories are delayed by worse ones?
I believe this idea should be resisted, but not so much because it makes for bad history (although it does), or because it gives the likes of Weinberg licence to call Plato silly and Francis Bacon irrelevant (see what I mean?). Rather, I think the scientific version of Gresham’s law denies the realities of how science is done.
No-one sticks with a wrong theory, in the face of a better one, knowing that it is wrong. We do so because we are human and stubborn and attached to our own ideas, and also because we are terribly prone to confirmation bias, seeing only what suits our preconceptions. But whatever the reason, we also think the old theory is actually better – that it gives a better account of why things are the way they are. In other words, at least some of our reasons for sticking with old theories are the same as those that prompt us to come up with new (and occasionally better) theories.
What’s more, theories aren’t good solely (or even at all) because they are eventually proved “right”. They are good if (among other things) they offer an adequate account of why things are the way they are, without too many arbitrary assumptions. They should be both consistent with and motivated by observations, and ideally they should also have a degree of predictive power. Ptolemy’s cosmology met those conditions, more or less, for centuries. So did Newton’s theory of gravity. In contrast, Max Planck’s proposed quantum fell short, at least at first. Taken at face value, quanta undermined the Newtonian physics that was otherwise so successful, without (at that point) any compelling reason for doing so. That’s why Planck initially regarded quanta as a mathematical convenience, rather than real physical entities. Just imagine if Newton himself, lacking even Planck’s motivation, had pulled quanta arbitrarily out of a hat – would that have been a “good” theory? I think not.
We love to deride people who dismissed an idea that proved to be right. But sometimes they had good grounds for doing so. There was no widely accepted empirical evidence for quantization as a fundamental property until Einstein’s work on the specific heat of solids in 1907; contrary to common belief, studies of the photoelectric effect didn’t offer compelling support until several years later. A similar defence can even be made for the cardinals who allegedly refused to look through Galileo’s telescope to confirm his claims with their own eyes. After all, the telescope was a new invention of unproven reliability (could one be sure it didn’t create illusions?), and without some practice it was far from easy to use or to interpret what one saw.
So how can we distinguish “good” theories from “bad” ones? When we are taught the scientific method at school, the answer is, usually, “Do an experiment!” Indeed, Richard Feynman (in one of his many quotable moments – see “Between the lines”) attested that “Nature cannot be fooled”. Unfortunately, the notion that experiments can be trusted to deliver a clear verdict on the rights and wrongs of theories is simplistic – just ask anyone who has had to defend their conclusions against rival interpretations. In peer review, your clean, decisive experimental result quickly becomes a battle against potential confounding factors and alternative explanations. If you’ve experienced that, you have encountered something called the Quine–Duhem thesis, which says, in essence, that there’s always more than one way to read the data. (More strictly, the thesis holds that no hypothesis can be tested in isolation: its predictions always depend on auxiliary assumptions, so a failed prediction never tells you unambiguously which assumption is at fault.)
Of course, some experiments are decisive. In The Invention of Science, Wootton cites the early 16th-century voyages of Amerigo Vespucci, which showed that the New World was a separate continent, not the “back end” of Asia – thereby destroying the theory that the Earth was made of non-concentric spheres of earth and water, with the solid sphere breaking the surface of the liquid one at just one place. But even so, the Quine–Duhem thesis deserves to be much better known among working scientists. Add in the current talk of a “replication crisis” in the life and social sciences (Nature 526 163), and the fantasy that experiments resolve everything – so-called “experimental realism” – looks increasingly threadbare.
Some famous scientists have stated explicitly that they refuse to accept experiment as the ultimate authority anyway. If observations of the 1919 solar eclipse had failed to support general relativity, Einstein averred, “I would have been sorry for the dear Lord, for the theory is correct.” His Princeton colleague, the mathematician Hermann Weyl, claimed: “My work always tried to unite the true with the beautiful; but when I had to choose one or the other, I usually chose the beautiful.” Not all theorists hold such strong views about beauty as a guide, but even so, if a theory were dropped the moment an experimental result seemed to contradict it, progress would be impossible.
Ultimately, science does seem able to find ever more dependable, more accurate, more predictive theories. It works. But this doesn’t mean we should imagine that bad theories or ideas hold back good ones. To do so is to put the cart before the horse, or to suppose that history has a goal (something that, of all people, an evolutionary biologist like Coyne ought to recognize as a mistake). Instead, we need to explore in detail how science evolves: as Wootton puts it, “to understand how reliable knowledge and scientific progress can and do result from a flawed, profoundly contingent, culturally relative and all-too-human process”. When we start wishing away history, we lose sight of that process.