
Spot the physicist

“The science wars” is the colourful but rather hyperbolic name given to a period, in the 1990s, of public disagreement between scientists and sociologists of science. Harry Collins, a sociologist at Cardiff University, was one of those making the argument that much scientific knowledge is socially constructed, to the dismay of some scientists, who saw this as an attack on the objectivity and authority of science. Rethinking Expertise could be seen as a recantation of the more extreme claims of the social constructionists. It recognizes that, for all that social context is important, science does deal in a certain type of reliable knowledge, and therefore that scientists are, after all, the best qualified to comment on a restricted class of technical matters close to their own specialisms.

The starting point of the book is the obvious realization that, in science or any other specialized field, some people know more than others. To develop this truism, the authors present a “periodic table of expertise” — a classification that will make it clear who we should listen to when there is a decision to be made that includes a technical component. At one end of the scale is what Collins and Evans (who is also a Cardiff sociologist) engagingly call “beer-mat expertise” — that level of knowledge that is needed to answer questions in pub quizzes. Slightly above this lies the knowledge that one might gain from reading serious journalism and popular books about a subject. Further up the scale is the expertise that only comes when one knows the original research papers in a field. Collins and Evans argue that to achieve the highest level of expertise — at which one can make original contributions to a field — one needs to go beyond the written word to the tacit knowledge that is contained in a research community. This is the technical know-how and received wisdom that seep into aspirant scientists during their graduate-student apprenticeship to give them what Collins and Evans call “contributory expertise”.

What Collins and Evans claim as original is their identification of a new type of expertise, which they call “interactional expertise”. People who have this kind of expertise share some of the tacit knowledge of the communities of practitioners while still not having the full set of skills that would allow them to make original contributions to the field. In other words, people with interactional expertise are fluent in the language of the specialism but not in its practice.

The origin of this view lies in an extensive period of time that Collins spent among physicists attempting to detect gravitational waves (see “Shadowed by a sociologist”). It was during this time that Collins realized that he had become so immersed in the culture and language of the gravitational-wave physicists that he could essentially pass as one of them. He had acquired interactional expertise.

To Collins and Evans, possessing interactional expertise in gravitational-wave physics is to be equated with being fluent in the language of those physicists (see “Experts”). But what does it mean to learn a language associated with a form of life in which you cannot fully take part? Their practical resolution of the issue is to propose something like a Turing test — a kind of imitation game in which a real expert questions a group of subjects that includes a sociologist among several gravitational-wave physicists. If the tester cannot tell the difference between the physicists and the sociologist from the answers to the questions, then we can conclude that the latter is truly fluent in the language of the physicists.

But surely we could tell the difference between a sociologist and a gravitational-wave physicist simply by posing a mathematical problem? Collins and Evans get round this by imposing the rule that mathematical questions are not allowed in the imitation game. They argue that, just as physicists are not actually doing experiments when they are interacting in meetings or refereeing papers or judging grant proposals, the researchers are not using mathematics either. In fact, the authors say, many physicists do not need to use maths at all.

This seemed so unlikely to me that I asked an experimental gravitational-wave physicist for his reaction. Of course, he assured me, mathematics was central to his work. How could Collins and Evans have got this so wrong? I suspect it is because they misunderstand the nature of theory and its relationship with mathematical work in general. Experimental physicists may leave detailed theoretical calculations to professional theorists, but this does not mean that they do not use a lot of mathematics.

The very name “interactional expertise” warns us of a second issue. Collins and Evans are sociologists, so what they are interested in is interactions. The importance of such interactions — meetings, formal contacts, e-mails, telephone conversations, panel reviews — was clearly underappreciated in earlier academic studies of science, and rectifying this neglect has been an important contribution of scholars like Collins and Evans. But there is a corresponding danger of overstating the importance of interactions. A sociologist may not find much of interest in the other activities of a scientist — reading, thinking, analysing data, doing calculations, trying to get equipment to work — but it is hard to argue that these are not central to the activity of science.

Collins and Evans suggest that it is interactional expertise that is important for processes such as peer review. I disagree; I would argue that a professional physicist from a different field would be in a better position to referee a technical paper in gravitational-wave physics than a sociologist with enough interactional expertise in the subject to pass a Turing test. The experience of actually doing physics, together with basic physics knowledge and generic skills in mathematics, instrumentation and handling data, would surely count for more than a merely qualitative understanding of what the specialists in the field saw as the salient issues.

Collins and Evans have a word for this type of expertise, too — “referred expertise”. The concept is left undeveloped, but it is crucial to one of the pair’s most controversial conclusions, namely the idea that it is only the possession of contributory expertise in a subject that gives one special authority. In their words, “scientists cannot speak with much authority at all outside their narrow fields of specialization”. This, of course, would only be true if referred expertise — the general lessons about science that one learns from studying one aspect of it in detail — had no value, which is a conclusion that most scientists would strenuously contest.

This book raises interesting issues about the nature of expertise and tacit knowledge, and a better understanding of these will be important, for example, in appreciating the role of scientists in policy making, and in overcoming the difficulties of interdisciplinary research. Collins and Evans have bigger ambitions, though, and they aim in this slim volume to define a “new wave of science studies”. To me, however, it seems to signal a certain intellectual overreach in an attempt to redefine a whole field on the basis of generalizations from a single case study, albeit a very thorough one.

Hard up

“The events of the past few months have exposed serious deficiencies within the senior management [of the Science and Technology Facilities Council], whose misjudgements could still significantly damage Britain’s research reputation in [physics], both at home and abroad.” That was the damning verdict of a report published by MPs at the end of April into the crisis in UK science funding (see “Report slams UK’s leading physics funding agency”). The MPs, who form the House of Commons select committee on innovation, universities, science and skills, called for “substantial and urgent changes” to the way in which the council is run and said that there were “serious questions about the role and performance of the chief executive [Keith Mason]”.

It is rare for such a report to criticize an individual or organization so strongly. But the report does reflect the anger in the UK physics community at the way in which the Science and Technology Facilities Council (STFC) has handled an £80m shortfall in its budget that emerged late last year. As has been well documented, the government will increase the STFC’s budget by about 4.5% a year from £573m in 2007/8 to £652m in 2010/11. But from this the STFC has to pay for subscriptions to international labs like CERN and for depreciation of its capital assets. Moreover, like all the other UK research councils, its grants now also have to include a much larger proportion of university infrastructure costs. Given these constraints, the council ended up with 25% less cash than it needed for individual grants and projects in particle physics and astronomy.
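As a quick back-of-the-envelope check (my own sketch in Python, assuming simple compound growth on the headline figures and ignoring how the money is actually committed), the quoted budget numbers do hang together:

# Rough sanity check of the quoted STFC figures (illustrative only;
# real allocations include subscriptions, depreciation and overheads).
start_budget = 573e6   # pounds, 2007/8
growth_rate = 0.045    # quoted ~4.5% cash increase per year

budget = start_budget
for year in range(3):  # 2008/9, 2009/10, 2010/11
    budget *= 1 + growth_rate

print(f"Projected 2010/11 budget: £{budget/1e6:.0f}m")   # ~£654m, close to the quoted £652m

shortfall = 80e6
# ~14% of one year's total budget; the 25% figure above refers to the
# smaller pot for individual grants and projects.
print(f"Shortfall as a fraction of one year's budget: {shortfall/start_budget:.1%}")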

But STFC bosses made a bad situation worse by deciding last December — seemingly without consulting physicists — that it would deal with the shortfall by pulling out of plans for the International Linear Collider, withdrawing from the Gemini telescopes and ending funding for solar–terrestrial physics. As the MPs point out, the STFC’s communication on this and other matters has been “lamentable”. To its credit, the STFC has done better in recent months, having consulted the community over a ranking that it drew up in February of the projects that it currently supports. The STFC expects to announce next month which projects it can afford to fund, but some, despite the consultation, will still face the chop.

Many in the UK physics community would no doubt be delighted to see Mason step down or be sacked, which could help to restore their faith in the STFC. But having received “full support” from the STFC’s Council, he is likely to remain in place until his term as chief executive ends in 2012. And even if Mason were to go, his successor would still face huge problems. The simplest solution would be for the government to make good the £80m shortfall, although that seems unlikely, as does the prospect of the STFC postponing any decisions over what to fund until the Wakeham review into the health of physics is published in September.

Part of the blame for this fiasco certainly lies with the government. It created the STFC in April last year from two previous research councils — just months before its spending review — leaving the new council little time to make a proper case for funding. The resulting shortfall could not have come at a worse time for physics, just as the Large Hadron Collider at CERN comes online (see “A taste of LHC physics”). For want of just £80m, Britain’s physics base has been unnecessarily and badly hit.

Beauty in simplicity

By Michelle Jeandron

What would you choose as the most beautiful science experiment ever performed? Some Physics World readers may remember being asked a similar question by columnist Robert Crease a few years ago. The resulting article inspired US science writer George Johnson’s new book The Ten Most Beautiful Experiments, although interestingly the Physics World winner — the double-slit experiment with electrons — doesn’t make it onto Johnson’s list.

Johnson was here in Bristol last night giving a talk about the book. Beauty is a tricky concept at the best of times, let alone when applied to something abstract like science, and he explained the thought processes behind his list: “I was nostalgic for the time when a single mind could confront the unknown.” For Johnson, then, a beautiful experiment is one that poses a question to nature and gets a “crisp, unambiguous reply.” It also needs to be simple enough that it could conceivably be done by anyone, with a few simple pieces of equipment.

He didn’t have time to go through the whole ten, but the audience were treated to discussions of Johnson’s favourite three: Newton’s use of prisms to understand colour; Faraday’s experiment showing that light could be influenced by a magnetic field; and the Michelson-Morley experiment, which Johnson describes as a “beautiful failure”. Johnson was an eloquent speaker, and his graphic description of Newton inserting a needle behind his eye to make himself see different colours elicited much squirming. The book promises many more such fascinating gems — to see whether or not it lives up to the hype, look out for a full review in the August issue of Physics World.

The secrets of random packing

For centuries, physicists and mathematicians have been trying to work out the most efficient way of packing spheres in order to minimize wasted space. They think they know how to maximize efficiency in uniform packing, for example when stacking oranges. But they know less about how to improve random packing, which governs the behaviour of a wide range of granular materials such as sand and marbles. Now, a group of physicists in the US has modelled this random process statistically and has calculated that it can never be used to fill more than about 63.4% of available space.

Musings over sphere packing probably began in the 16th century with mathematician Thomas Harriot, a friend of the English explorer Walter Raleigh, when he tried to work out how many cannonballs could be stacked neatly on top of one another. In 1611 the German mathematician Johannes Kepler concluded that no arrangement was more efficient than the regular “face-centred cubic”, which allows a packing fraction — the fraction of overall space occupied by spheres — of 74%. This belief was proved rigorously to most experts’ satisfaction by the US mathematician Thomas Hales in 1998.
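That 74% figure follows directly from the geometry of the face-centred cubic arrangement; here is a minimal check in Python (my own illustration, using the standard fact that spheres in an FCC cell touch along the face diagonal):

import math

# Face-centred cubic: 4 spheres per conventional cubic cell of side a,
# with spheres touching along the face diagonal, so 4r = a*sqrt(2).
a = 1.0
r = a * math.sqrt(2) / 4
sphere_volume = 4 * (4 / 3) * math.pi * r**3   # four spheres per cell
packing_fraction = sphere_volume / a**3

print(f"FCC packing fraction: {packing_fraction:.4f}")              # ~0.7405
print(f"Closed form pi/(3*sqrt(2)): {math.pi/(3*math.sqrt(2)):.4f}")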

Random packing is less straightforward. There is “random close packing” which results, for example, when spherical grains are dumped in a box and then shaken. Experiments indicate that this leads to a packing fraction of 64%. But if the grains are left to settle gently, scientists instead end up with a “random loose packing” of about 55%.

Packed science

Hernán Makse, Chaoming Song and Ping Wang of the City College of New York have developed a theoretical model that describes random packing in detail. They assume that packed spheres obey the statistics of particles in a gas at thermal equilibrium, with a Boltzmann-like distribution of volumes. Building on research carried out by Sam Edwards of Cambridge University, they describe the volume occupied by each particle in terms of the number of neighbouring particles that it touches, a quantity known as the “geometrical coordination”. Then, by inserting this relationship into the statistical expression, they derive a simple formula linking packing fraction to geometrical coordination (Nature 453 629).

Using their formula, the researchers have calculated that the packing fraction for random close packing is 0.634, whereas the corresponding figure for loose packing is 0.536, roughly in line with conventional wisdom. Makse and co-workers have also illustrated their formula using a phase diagram, which, they point out, highlights the crucial role of friction in random packing (see “Packing phase diagram”). With no friction between particles only the maximum close packing can be attained, but as friction is increased, and the geometrical coordination drops as a result, the system of particles can take on a wide range of densities. “Previously people only dealt with frictionless systems,” says Makse, “so they missed the most important part of the phase diagram.”
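Both headline numbers can be recovered from the relation between packing fraction and geometrical coordination reported in the paper, which (as I understand it) takes the form phi = z/(z + 2*sqrt(3)); the short Python sketch below treats that expression as given:

import math

def packing_fraction(z):
    """Packing fraction as a function of geometrical coordination z,
    using the relation phi = z / (z + 2*sqrt(3)) quoted from the
    published Song-Wang-Makse model."""
    return z / (z + 2 * math.sqrt(3))

# Frictionless limit: z = 6 touching neighbours -> random close packing
print(f"Random close packing (z = 6): {packing_fraction(6):.3f}")   # ~0.634

# High-friction limit: z = 4 -> random loose packing
print(f"Random loose packing (z = 4): {packing_fraction(4):.3f}")   # ~0.536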

The researchers now plan to carry out experiments to test their model. In particular, they intend to attach polymers to tiny silica spheres in a colloid in order to generate friction between the spheres and test their ideas about the role of friction in packing. Makse points out that even if these experiments back the theory there is still more work to be done before scientists have a unified understanding of both ordered and random spherical packing. In part, that is because this latest work makes a number of shortcuts, such as modelling average particle interaction across the whole sample. “Our work is just the first step on the road to a theory on a par with that of Hales,” he adds. “The road is very long.”

Makse believes that his team’s work could have practical applications, however. He points out that granular materials are widely used in industry but that they are in fact poorly understood. He thinks that a quantity known as “compactivity” — a measure of how much a system can potentially be compacted — could help pharmaceutical, food or oil companies refine their products. “Compactivity could be used to characterize the state of a powder in the same way that temperature determines the state of a liquid or a gas,” he says.

New high-Tc superconductors share magnetic properties with cuprates

There’s been another development in the nascent field of iron-based high-temperature superconductors, which were recently shown to become superconducting at the very respectable temperature of 55 K.

Scientists at the National Institute of Standards and Technology (NIST) in the US have used neutron beams to investigate the magnetic properties of the iron-based materials. They found that, at low temperatures and when undoped, the materials make a transition into an antiferromagnetic state in which magnetic layers are interspersed with non-magnetic layers. But when the materials are doped with fluorine to make them into high-temperature superconductors, this magnetic ordering is suppressed.

This is reminiscent of the behaviour of cuprates — the highest-temperature superconductors known to date. Is this more than a coincidence? We’ll have to wait and see.

The research is published online here in Nature.

LHC ready by June, says Aymar

Robert Aymar, the director-general of CERN, has said that the Large Hadron Collider (LHC) — the world’s biggest particle physics experiment — will be in “working order” by the end of June, according to the French news agency Agence France-Presse (AFP).

It is not clear what Aymar means by this, given that the last announcement from CERN was for a July start-up. It seems unlikely that the LHC has raced ahead of schedule, so it might be that he thinks the cooling of the magnets will be complete by the end of June. However, the status report on the LHC website would indicate otherwise.

I spoke to a press officer at CERN, and she said that the AFP journalists quoted Aymar from a recent meeting they had at the European lab. She said that, as far as she is aware, the beam commissioning is still set to take place in July.

I have not yet spoken to James Gillies, the chief spokesperson for CERN, because he is tied up in meetings all day. When he gets back to me, I will give you an update.

UPDATE 3.15pm: I have just spoken to Gillies and he said that there is no change to the start-up schedule — the plan is still to begin injecting beams towards the end of July. Aymar was indeed referring to the cooling of the magnets, which should be complete by the end of June. Four of the eight sectors have already been cooled to their operating temperature of 1.9 K; the last (sector 4–5) began the cooling process today.

The reason for the gap between the cooling and beam-injection is that there must be a series of electrical tests, which will take around four weeks.

Upper troposphere is warming after all, research shows

Research performed in the US has helped lay to rest one of the lasting controversies surrounding climate models: whether or not the upper troposphere is warming.

Climate models have long predicted that the upper troposphere — a region of the Earth’s atmosphere that lies beneath the stratosphere at an altitude of 10–12 km — should be warming at least as fast as the surface. However, since the 1970s temperature measurements carried out by weather balloons have found the upper-troposphere temperature to be fairly constant. This conclusion was backed up in 1990, when researchers used data taken from satellites to measure temperature changes in the troposphere.

Climate scientists have known for some time that weather-balloon instruments are affected by the warming effect of the Sun’s light. They have also struggled to interpret the extent to which the satellite data of the troposphere could be influenced by the stratosphere. But awareness of these uncertainties has not made it any clearer what temperature changes, if any, are taking place in the upper troposphere.

Now, Robert Allen and Steven Sherwood of Yale University have used wind data taken from weather balloons as a proxy for direct temperature measurements to give the first conclusive evidence that the upper troposphere has been warming after all. Although they are an indirect measure of temperature, these wind records can be backed up by satellite and ground instruments, making them more reliable than existing direct temperature measurements (Nature Geoscience doi: 10.1038/ngeo208).

‘Put the controversy to rest’

Allen and Sherwood took wind data from 341 weather-balloon stations — 303 in the northern hemisphere and 38 in the southern hemisphere — covering a period from 1970 to 2005. To convert the data to temperature measurements, they employed a relationship known as the thermal-wind equation, which relates vertical gradients in wind speed to horizontal gradients in temperature. They found that the maximum warming has occurred in the upper troposphere above the tropics, at 0.65 ± 0.47 °C per decade, a rate consistent with climate models.
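To get a feel for how wind shear can stand in for temperature, here is a minimal Python sketch (my own illustration with made-up numbers, using the standard approximate form of the thermal-wind balance, f du/dz = -(g/T) dT/dy, rather than the authors' full analysis):

import math

OMEGA = 7.292e-5          # Earth's rotation rate, rad/s
g = 9.81                  # gravitational acceleration, m/s^2

latitude = 30.0                                      # degrees north (hypothetical station)
f = 2 * OMEGA * math.sin(math.radians(latitude))     # Coriolis parameter, 1/s

du_dz = 10.0 / 5000.0     # zonal wind increasing by 10 m/s over 5 km (made-up shear)
T = 250.0                 # representative upper-troposphere temperature, K

dT_dy = -f * T * du_dz / g                           # implied meridional temperature gradient, K/m
print(f"Implied temperature gradient: {dT_dy * 1e5:.2f} K per 100 km")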

“This research really does show the tropical troposphere has been warming over the past three decades,” says Benjamin Santer of Lawrence Livermore National Laboratory. “And it will, I hope, put this controversy of weather balloon and satellite data to rest.” Santer, who was one of the lead authors of the 1995 report by the Intergovernmental Panel on Climate Change, thinks the next step is to confirm Allen and Sherwood’s findings with direct temperature records. These, he explains, must be taken with advanced weather-balloon instruments that can be calibrated against older models to remove biases.

“The approach by Allen and Sherwood is a promising start,” says John Lanzante of Princeton University. “But more confidence can be established as other investigations further scrutinize the wind data and method used to translate winds into temperature-equivalent measures.”

Terahertz laser source shines at room temperature

Terahertz beams could be employed in many scientific and technological applications, such as biological imaging, security screening and materials science. Now these applications are a step closer, as researchers in the US and Switzerland have made the first room-temperature coherent terahertz source based on commercially available semiconductor nanotechnology.

Terahertz radiation lies between the microwave and far-infrared regions of the electromagnetic spectrum, at wavelengths from about 1 to 0.03 mm. Until now, the only compact semiconductor lasers to emit light at terahertz wavelengths were “quantum cascade” lasers (QCLs). These devices comprise many identical stages made of nanometre-thick quantum wells.

When a voltage is applied, electrons briefly hop into a quantum-well energy level before dropping down into a lower one, emitting a photon in the process. The same electrons are then injected into a new stage where they emit another photon. In this way, as many photons as there are stages are emitted by a single electron “cascading” through the structure.

Until now, however, QCLs have only been able to emit terahertz radiation at cryogenic temperatures of less than 200 K. The new QCL device — made by Federico Capasso of the Harvard School of Engineering and Applied Sciences and colleagues from Texas A&M University and ETH Zurich — emits terahertz radiation with several-hundred nanowatts of power at room temperature (Appl. Phys. Lett. 92 201101). At commercially available thermoelectric cooler temperatures around 259 K this power is increased to microwatts. Moreover, the power can be further increased up to a few milliwatts by optimizing the semiconductor nanostructure layers of the laser’s active region and by improving the extraction efficiency of the terahertz radiation.

The team made their QCL from a material that exhibits “difference-frequency generation” (DFG). This means that when it is illuminated by two frequencies of light, the electrons re-emit photons at both of the individual frequencies as well as the frequency difference. When the two frequencies are in the mid-infrared, the QCL can produce a difference frequency of 5 THz.
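As a back-of-the-envelope illustration of how two mid-infrared pump frequencies yield a terahertz difference (the wavelengths below are hypothetical, chosen only to land near the quoted 5 THz):

# Difference-frequency generation: two mid-infrared pump frequencies
# combine to give output at their difference. The wavelengths are
# hypothetical, picked simply to illustrate a ~5 THz difference.
c = 2.998e8          # speed of light, m/s
h = 6.626e-34        # Planck constant, J s

lambda_1 = 8.9e-6    # first mid-IR wavelength, m (hypothetical)
lambda_2 = 10.5e-6   # second mid-IR wavelength, m (hypothetical)

nu_1 = c / lambda_1
nu_2 = c / lambda_2
nu_thz = nu_1 - nu_2

print(f"Pump frequencies: {nu_1/1e12:.1f} THz and {nu_2/1e12:.1f} THz")
print(f"Difference frequency: {nu_thz/1e12:.1f} THz")                    # ~5 THz
print(f"Terahertz photon energy: {h*nu_thz/1.602e-19*1e3:.0f} meV")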

Terahertz radiation sources based on DFG have been around for years, but they have required powerful pump lasers and large non-linear crystals to generate the required frequency difference. In contrast, the new device is electrically pumped. “Our device does everything in one small semiconductor crystal a few millimetres in size with no need for bulky external lasers,” explains Capasso. “This means the device is compact, portable and consumes little power.”

Since terahertz radiation can pass through most materials except metals, it could be used to detect concealed weapons or explosive chemicals, or to image biological samples. “Detecting material defects, such as cracks in foam, is also an important application,” adds Capasso.

The researchers will now work on increasing the output power of their coherent terahertz source at room temperature, as well as at thermoelectric cooler temperatures. They will do this by increasing the surface area used for light emission and by optimizing the design of the quantum wells.

“This is a very exciting and important result as it circumvents problems associated with normal terahertz semiconductor lasers, which only work at low temperatures,” says Christian Pflügl of Harvard University, who was not involved in the research. “This novel approach could lead to the realization of compact semiconductor light sources with output powers sufficient for many spectroscopic applications, such as studies of pharmaceutical products, drug detection and determination of disease in skin tissue.”

Phoenix reveals Martian permafrost

Polygons similar in appearance to surface patterns in Earth’s arctic regions are among the first features identified by NASA’s Phoenix mission, which touched down on Mars early yesterday morning at 0053 GMT.

The features imply that the landing area around Phoenix has permafrost, which is known to generate polygonal patterns on Earth by continual expansion and contraction. Although polygons had been spotted before from space, those seen at close range appear to be somewhat smaller — 1.5 to 2.5 m across — leading some NASA scientists to suggest that there is a hierarchy of “polygons within polygons”. As yet, there have been no glimpses of surface ice.

Other images taken by Phoenix’s onboard cameras confirm that the spacecraft is in “good health”, having endured a nine-month, 679 million-km journey and a tricky landing involving descent engines — the first time this type of landing has been performed successfully since 1976. NASA scientists are relieved that Phoenix did not suffer the fate of its two predecessors — the Mars Climate Orbiter and the Mars Polar Lander — which both failed in 1999.

Phoenix is now preparing to begin its three-month mission on Mars to investigate the origin of the ground ice, the operation of climate cycles and the possibility of microbial life.

UPDATE 29/05/08: You can listen to a sound recording of Phoenix’s landing here.

New tests of the Copernican Principle proposed

Revolutions in science don’t come that often, but the book De revolutionibus orbium coelestium (On the revolutions of the heavenly spheres), published in 1543, certainly caused one. The work by Nicolaus Copernicus overthrew the ‘geocentric’ model of the solar system, in which Earth is at the centre, and suggested an alternative view whereby Earth revolves around the Sun.

No-one disputes the fact that the Sun is at the centre of our solar system, and no-one seems to dispute the idea that we are not at the centre of the universe either. Indeed, this is encapsulated in the ‘Copernican Principle’, which states that the Earth is not in any specially favoured position, and which is taken as a fait accompli among researchers. But how can we test it? Two independent teams of physicists think they know how, and argue their cases in back-to-back papers in the journal Physical Review Letters.

In the first paper, Robert Caldwell from Dartmouth College and Albert Stebbins from the Fermi National Accelerator Laboratory in the US explain how the Cosmic Microwave Background (CMB) radiation spectrum — an all-pervasive sea of microwave radiation originating just 380 000 years after the Big Bang — could be used to test whether the Copernican Principle stands (Phys. Rev. Lett. 100 191302).

Cosmic acceleration and dark energy

Cosmologists like Caldwell and Stebbins are interested in the Copernican Principle because it plays an important role in the interpretation of the observational evidence for cosmic acceleration and dark energy. If the Copernican Principle is invalid, then there may be no need for exotic dark energy: to explain the observed acceleration of the universe, we would instead need to be living at the centre of a ‘void’. Such a void would distort the CMB spectrum away from that of a black body. “This is so fundamental that we need to test it,” explained Caldwell.


To test whether this holds, the team proposes measuring the black-body nature of the CMB more precisely than before. A void would produce large anisotropies in the light scattered from the CMB, leading to a slight deviation of the spectrum from that of a black body. Conversely, further evidence that the CMB is a black body would be evidence that the Copernican Principle holds.
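For reference, the black-body spectrum in question is simply the Planck law evaluated at the CMB temperature of about 2.725 K; the short Python sketch below (my own illustration) computes the spectrum and its peak frequency:

import numpy as np

# Planck spectral radiance B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1)
# evaluated at the CMB temperature; any void-induced deviation would show up
# as a departure from this curve.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann constant, J/K
T_cmb = 2.725      # CMB temperature, K

nu = np.linspace(1e9, 1000e9, 5000)                       # 1 to 1000 GHz
B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T_cmb))

peak_ghz = nu[np.argmax(B)] / 1e9
print(f"Spectrum peaks near {peak_ghz:.0f} GHz")          # ~160 GHz for 2.725 K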

It is, however, already accepted that the CMB is a black body. Indeed, the 2006 Nobel Prize in Physics was awarded to John Mather and George Smoot, who showed that the CMB has a black-body spectrum and is slightly anisotropic. That prize-winning work was based on data collected by NASA’s Cosmic Background Explorer (COBE), whose Far Infrared Absolute Spectrophotometer (FIRAS) instrument recorded an essentially perfect black-body spectrum.

Caldwell and Stebbins think that future NASA missions, such as the Absolute Spectrum Polarimeter, will be able to detect possible deviations from the black-body behaviour of the CMB that COBE’s FIRAS instrument was not sensitive to. “We need to measure the CMB at different frequencies, which previous missions were not able to do,” said Caldwell.

Another test

In a separate paper, Jean-Philippe Uzan from the Pierre and Marie Curie University in France, along with Chris Clarkson and George Ellis from the University of Cape Town in South Africa, suggests another way to test the Copernican Principle (Phys. Rev. Lett. 100 191303). Their scheme involves measuring the red-shift of galaxies — the shift of their light to longer wavelengths caused by the expansion of the universe — very precisely over time to see if there are changes. The team argues that this red-shift data can be combined with measurements of the distances of the galaxies to infer whether the universe is spatially homogeneous — a tenet of the Copernican Principle.
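To get a sense of how small the effect is, the standard ‘redshift drift’ expression for a homogeneous universe, dz/dt = (1 + z)H0 - H(z), can be evaluated in a few lines of Python (a sketch assuming a conventional flat Lambda-CDM background, not the authors’ inhomogeneity test itself):

import math

# Redshift drift in a homogeneous, flat Lambda-CDM universe:
#   dz/dt0 = (1 + z) * H0 - H(z)
# Only an order-of-magnitude illustration of how slowly redshifts change;
# the Uzan-Clarkson-Ellis test combines such drifts with distance data.
H0_km_s_Mpc = 70.0
Mpc_in_km = 3.086e19
H0 = H0_km_s_Mpc / Mpc_in_km                  # Hubble constant in 1/s
omega_m, omega_lambda = 0.3, 0.7              # assumed density parameters

def hubble(z):
    return H0 * math.sqrt(omega_m * (1 + z)**3 + omega_lambda)

seconds_per_year = 3.156e7
for z in (0.5, 1.0, 2.0):
    dz_dt = (1 + z) * H0 - hubble(z)          # drift per second
    per_decade = dz_dt * seconds_per_year * 10
    print(f"z = {z}: drift ~ {per_decade:.1e} per decade")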

However, it seems one of the cornerstones of cosmology is not about to be quickly overturned. “I would bet my house now that the results will come out null, so the Copernican Principle is valid on the scales we observe,” says Paul Steinhardt, a cosmologist at Princeton University. “But I think the experiments should be done.”
