A new book suggests that traditional notions about “the scientific method” are flawed and misleading, as Robert P Crease discovers
You know what the scientific method is until you try to define it: it’s a set of rules that scientists adopt to obtain a special kind of knowledge. The list is orderly, teachable and straightforward, at least in principle. But once you start spelling out the rules, you realize that they really don’t capture how scientists work, which is a lot messier. In fact, the rules exclude much of what you’d call science, and include even more of what you don’t. You even begin to wonder why anyone thought it necessary to specify a “scientific method” at all.
In his new book The Scientific Method: an Evolution of Thinking from Darwin to Dewey, the University of Michigan historian Henry Cowles explains why some people thought it necessary to define “scientific method” in the first place. Once upon a time, he writes, science meant something like knowledge itself – the facts we discover about the world rather than the sometimes unruly way we got them. Over time, however, science came to mean a particular stepwise way that we obtain those facts independent of the humans who follow the method, and independent of the facts themselves.
Cowles’s story of this transformation begins with the naturalist Charles Darwin, who worried that professional scientists would regard his mixture of observing, recording and analysing things like barnacles, birds and worms as not rigorous enough to count as legitimate science. Darwin’s account of scientific method, in short, was largely motivated by his own professional anxiety. That anxiety motivated him to reflect on, as Cowles says, “how you thought what you thought”. Darwin found the answer in what he was studying – nature itself.
A natural view
Just as nature takes alternative forms of life and selects among them, Darwin argued, so scientists take hypotheses and choose the most robust. Nature has its own “method”, and humans acquire knowledge in an analogous way. Darwin’s scientific work on living creatures is indeed rigorous, as I think contemporary readers will agree, but through the lens of our notions of scientific method it looks hopelessly anecdotal, psychological and disorganized. He was, after all, less focused on justifying his beliefs than on understanding nature.
Following Darwin, the American “pragmatists” – 19th-century philosophers such as Charles Peirce and William James – developed more refined accounts of the scientific method that meshed with their philosophical concerns. For Peirce and James, beliefs were not mental judgements or acts of faith, but habits that individuals develop through long experience. Beliefs are principles of action that are constantly tested against the world, reshaped and tested again, in an endless process. The scientific method is simply a careful characterization of this process.
For Cowles, though, the turning point into the modern conception of scientific method occurred in 1910 when fellow pragmatist John Dewey published a book called How We Think. In it, Dewey specified five steps that epitomize what we now think of as the scientific method, which was no longer intimately connected with the experience of the individual scientist but instead became a set of rules that turns those who adopt them into scientists. These rules were: “(i) a felt difficulty; (ii) its location and definition; (iii) suggestion of possible solution; (iv) development by reasoning of the bearings of the suggestion; (v) further observation and experiment leading to its acceptance or rejection; that is, the conclusion of belief or disbelief”.
While Dewey intended these steps to describe organic human thinking and how it can function in education, it was all too easy to interpret them as a special way to think, and to calcify them into what Cowles calls “a manual for technical practice”. Authors of textbooks seized on the idea as a way of explaining what science was and why it worked. Dewey’s steps, Cowles says, were quickly turned into “a symbolism of the separation of science from everyday thinking, a talisman of scientific exceptionalism”. The result, he argues, was “a shared method that seemed less and less natural as the 20th century wore on”, though seemingly confirmed by episodes carefully selected from the history of science.
While the ruminations of Darwin and the early pragmatists on method had assumed that it was a way of thinking, in the end professional anxieties and the desire to elevate and insulate scientific knowledge above other kinds of knowledge transformed it into something apart from thinking, something potentially programmable into a computer. In this way “the scientific method” – an abstract set of rules purporting to be about scientific practice – was turned into a logo: an identifier and attestation.
The critical point
Reviewing Cowles’s book, the Stanford University historian Jessica Riskin recently argued that the “scientific method” originated not within science itself, but “in the popular, professional, industrial, and commercial exploitation of its authority” (New York Review of Books 2 July 2020). Integral to this idea, she writes, was the claim that “science held an exclusive monopoly on truth, knowledge, and authority, a monopoly for which ‘the scientific method’ was a guarantee”. Yet it is possible, Cowles argues, to reject such a view of scientific method and to think of science “as the flawed, fallible activity of some imperfect, evolving creatures and as a worthy, even noble pursuit”.
To me, the scientific method is a good example of how philosophers and others create a problem by overstressing the orderly and formal character of scientific practice, and then set themselves the task of solving the problem they have created. As I argued in my own book The Workshop and the World, scientists working in a laboratory should be compared to a jury that sequesters itself for a short time to evaluate evidence. The jury members bring their experience to the deliberations, which is essential to being able to judge fairly. Those who think that hard-and-fast rules can be drawn up to guarantee justice have never been on a jury.