“The science wars” is the colourful but rather hyperbolic name given to a period, in the 1990s, of public disagreement between scientists and sociologists of science. Harry Collins, a sociologist at Cardiff University, was one of those making the argument that much scientific knowledge is socially constructed, to the dismay of some scientists, who saw this as an attack on the objectivity and authority of science. Rethinking Expertise could be seen as a recantation of the more extreme claims of the social constructionists. It recognizes that, for all that social context is important, science does deal in a certain type of reliable knowledge, and therefore that scientists are, after all, the best qualified to comment on a restricted class of technical matters close to their own specialisms.
The starting point of the book is the obvious realization that, in science or any other specialized field, some people know more than others. To develop this truism, the authors present a “periodic table of expertise” — a classification that will make it clear who we should listen to when there is a decision to be made that includes a technical component. At one end of the scale is what Collins and Evans (who is also a Cardiff sociologist) engagingly call “beer-mat expertise” — that level of knowledge that is needed to answer questions in pub quizzes. Slightly above this lies the knowledge that one might gain from reading serious journalism and popular books about a subject. Further up the scale is the expertise that only comes when one knows the original research papers in a field. Collins and Evans argue that to achieve the highest level of expertise — at which one can make original contributions to a field — one needs to go beyond the written word to the tacit knowledge that is contained in a research community. This is the technical know-how and received wisdom that seep into aspirant scientists during their graduate-student apprenticeship to give them what Collins and Evans call “contributory expertise”.
What Collins and Evans claim as original is their identification of a new type of expertise, which they call “interactional expertise”. People who have this kind of expertise share some of the tacit knowledge of the communities of practitioners while still not having the full set of skills that would allow them to make original contributions to the field. In other words, people with interactional expertise are fluent in the language of the specialism, but not with its practice.
The origin of this view lies in an extensive period of time that Collins spent among physicists attempting to detect gravitational waves (see “Shadowed by a sociologist”). It was during this time that Collins realized that he had become so immersed in the culture and language of the gravitational-wave physicists that he could essentially pass as one of them. He had acquired interactional expertise.
To Collins and Evans, possessing interactional expertise in gravitational-wave physics is to be equated with being fluent in the language of those physicists (see “Experts”). But what does it mean to learn a language associated with a form of life in which you cannot fully take part? Their practical resolution of the issue is to propose something like a Turing test — a kind of imitation game in which a real expert questions a group of subjects that includes a sociologist among several gravitational-wave physicists. If the tester cannot tell the difference between the physicists and the sociologist from the answers to the questions, then we can conclude that the latter is truly fluent in the language of the physicists.
But surely we could tell the difference between a sociologist and a gravitational-wave physicist simply by posing a mathematical problem? Collins and Evans get round this by imposing the rule that mathematical questions are not allowed in the imitation game. They argue that, just as physicists are not actually doing experiments when they are interacting in meetings or refereeing papers or judging grant proposals, the researchers are not using mathematics either. In fact, the authors say, many physicists do not need to use maths at all.
This seemed so unlikely to me that I asked an experimental gravitational-wave physicist for his reaction. Of course, he assured me, mathematics was central to his work. How could Collins and Evans have got this so wrong? I suspect it is because they misunderstand the nature of theory and its relationship with mathematical work in general. Experimental physicists may leave detailed theoretical calculations to professional theorists, but this does not mean that they do not use a lot of mathematics.
The very name “interactional expertise” warns us of a second issue. Collins and Evans are sociologists, so what they are interested in is interactions. The importance of such interactions — meetings, formal contacts, e-mails, telephone conversations, panel reviews — has clearly not been appreciated by academics studying science in the past, and rectifying this neglect has been an important contribution of scholars like Collins and Evans. But there is a corresponding danger of overstating the importance of interactions. A sociologist may not find much of interest in the other activities of a scientist — reading, thinking, analysing data, doing calculations, trying to get equipment to work — but it is hard to argue that these are not central to the activity of science.
Collins and Evans suggest that it is interactional expertise that is important for processes such as peer review. I disagree; I would argue that a professional physicist from a different field would be in a better position to referee a technical paper in gravitational-wave physics than a sociologist with enough interactional expertise in the subject to pass a Turing test. The experience of actually doing physics, together with basic physics knowledge and generic skills in mathematics, instrumentation and handling data, would surely count for more than a merely qualitative understanding of what the specialists in the field saw as the salient issues.
Collins and Evans have a word for this type of expertise, too — “referred expertise”. The concept is left undeveloped, but it is crucial to one of the pair’s most controversial conclusions, namely the idea that it is only the possession of contributory expertise in a subject that gives one special authority. In their words, “scientists cannot speak with much authority at all outside their narrow fields of specialization”. This, of course, would only be true if referred expertise — the general lessons about science that one learns from studying one aspect of it in detail — had no value, which is a conclusion that most scientists would strenuously contest.
This book raises interesting issues about the nature of expertise and tacit knowledge, and a better understanding of these will be important, for example, in appreciating the role of scientists in policy making, and in overcoming the difficulties of interdisciplinary research. Collins and Evans have bigger ambitions, though, and they aim in this slim volume to define a “new wave of science studies”. To me, however, it seems to signal a certain intellectual overreach in an attempt to redefine a whole field on the basis of generalizations from a single case study, albeit a very thorough one.