
Should we scrap science news?

By Jon Cartwright

Here’s a statistic for you, taken from a website called Sense About Science. It claims that over a million scientific papers are published every year. If that’s right, there must be something in the region of 20,000 published a week. Even if physics accounts for only a small fraction of the sciences, that still means we’re looking at several hundred every day. (I could dig out a more reliable figure, but this one is probably not far wrong.)

There’s no way we at Physics World can hope to keep you up to date with that many papers. Nor would you want us to — let’s face it, most papers deal with minor developments that would interest only those working in exactly the same field.

So, I would like to raise a question: should we bother to comb the journals for the biggest developments, or should we give up reporting research news altogether?

Actually, I’m not the first to raise it. I discovered the idea nestled at the bottom of an article written last week in the Guardian by Peter Wilby. He had been haranguing the Daily Mail for the way it reports “breakthrough” developments in health research. (It’s the same old story: this week they tell you a glass of wine a day will increase your life expectancy; next week they will tell you the opposite.) Wilby proposes that, instead of mindlessly regurgitating seesawing opinions from the medical community, the media should offer “state of knowledge” features that give an overview of where the present scientific consensus is headed.

Would this feature-only approach benefit physics too? Conclusions seen in physics papers are usually straightforward to interpret — particularly compared with, say, the vagaries of health surveys — which would suggest the answer is no. However, there are still many difficulties.

One is that incremental developments in research are seen as less newsworthy than those that go against prevailing opinion. In the latter case, unless there is enough context to show how the research fits into the grand scheme of things, a news story can be misleading. Another, as I showed in my recent article on the use of embargoes in science publishing, is that much (if not most) science news is artificially timed to fit in with publishers’ agendas; in a sense, the news is not “new” at all. A feature-only approach would avoid both of these problems.

The main point I can see in favour of science news is that there are certain developments that deserve to be brought to people’s attention immediately. Think of the recent claims by the DAMA experiment team in Italy that they had detected dark matter on Earth. Or the discovery by Japanese physicists of a new class of high-temperature superconductors based on iron. Should we only report on such critical research? If so, where do we draw the line?

Let’s hear your thoughts. But bear in mind that if we do decide to scrap science news, I’ll be out of a job.

Multi-particle entanglement in solid is a first

An international team of physicists has entangled three diamond nuclei for the first time. The development puts solid-state systems alongside ions and photons in the select group of quantum systems in which entanglement of more than two particles has been achieved.

Entanglement lies at the heart of fields such as quantum computation and quantum teleportation. At its most basic level, if two particles are entangled a measurement of the state of one reveals something about the state of the other, regardless of the distance separating them.
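To make the idea concrete, the canonical two-particle entangled (Bell) state and its three-particle analogue (the GHZ state) can be written as

\[
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big), \qquad |\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big).
\]

Measuring one particle of the Bell state immediately fixes the outcome of the same measurement on the other, which is the correlation described above. (These textbook states are shown here only for illustration; they are not taken from the paper discussed below.)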

But entanglement is difficult to achieve. It requires quantum states to be manipulated while preventing them from interacting with their environment, which tends to degrade the quantum system into a classical state. Physicists have had some successes, having entangled up to eight calcium ions and up to five photons. So far, however, solid-state systems have proved trickier.

Now, a team led by Jörg Wrachtrup of the University of Stuttgart, Germany, has demonstrated that two or three diamond nuclei can be entangled (Science 320 1326). “If we compare the quality of entanglement in our experiments with those [of ions and photons], our results compare favourably,” says Wrachtrup. His team includes researchers from the University of Tsukuba, the National Institute of Advanced Industrial Science and Technology, and the Nanotechnology Research Institute, Japan, and Texas A&M University, US.

Method ‘not new’

The researchers’ system is a piece of synthetic diamond containing a large proportion of carbon-13 isotopes. At one point in the lattice they place a nitrogen atom, which leaves a defect containing a single electron.

Because this electron interacts with the neighbouring carbon nuclei, Wrachtrup’s team can shine laser light onto it to put some of the nuclei into a certain quantum state. Then, by applying radio-frequency pulses of a magnetic field, they can drive the spin of the nuclei so that they become entangled with one another.

“The method itself is not new,” says Wrachtrup, who adds that equivalent magnetic pulses are also used in nuclear magnetic resonance (NMR) spectroscopy. “It is the system itself which we discovered to be addressable as a single quantum system almost a decade ago. Meanwhile we can engineer this defect to such a degree that we do have excellent control.”

Diamond nuclei are an attractive option as a quantum system for computation. They can be kept in a coherent state for a long time and are easier to control than other systems, which will become vital for minimizing errors. However, because they currently have to be manipulated using the defect electron as an intermediary, it might be difficult to entangle many of them. Wrachtrup says that his team is now working on scaling up the system, and believes that in future they should be able to control up to five or six nuclei per electron spin.

What’s your Wu index?

Ed Witten has once again been ranked as the world’s number one physicist, according to a new index that ranks scientists in terms of the citations generated from their published papers. The new measure, called the w-index, has been developed by Qiang Wu from the University of Science and Technology of China in Hefei.

Wu’s index is similar to, but subtly different from, the h-index that was developed in 2005 by physicist Jorge Hirsch at the University of California, San Diego, which quantifies the published output of a scientist.

According to Hirsch’s criterion, a researcher with an h-index of, say, 9 has published at least 9 papers, each of which has been cited 9 or more times. The w-index, on the other hand, indicates that a researcher has published w papers with at least 10w citations each. A researcher with a w-index of 24, for example, has 24 papers with at least 240 citations each.
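For readers who want to try this on their own publication record, here is a minimal sketch in Python of how both indices fall out of the same list of citation counts, sorted with the most-cited paper first. The function names and the example citation counts are ours, invented for illustration; they are not taken from Wu’s paper.

def h_index(citations):
    """Largest h such that h papers each have at least h citations (Hirsch, 2005)."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def w_index(citations):
    """Largest w such that w papers each have at least 10*w citations (Wu, arXiv:0805.4650)."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= 10 * rank)

# A made-up citation record for six papers
example = [250, 120, 45, 31, 12, 3]
print(h_index(example))  # 5 -- five papers with at least 5 citations each
print(w_index(example))  # 3 -- three papers with at least 30 citations each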

Wu says in his paper that the index is a significant improvement on the h-index, as it “more accurately reflects the influence of a scientist’s top papers,” even though he concedes that the index could “be called the 10h-index” (arXiv:0805.4650v1).

Tops by both measures

Wu has calculated the w-index for some physicists who also have high h-indexes. Theoretical physicist Ed Witten of the Institute for Advanced Study in Princeton, who has the highest h-index, also comes top in the w-index ranking with a score of 41. Witten is followed by condensed-matter theorist Philip Anderson at Princeton University, with a w-index of 26, and cosmologist Stephen Hawking at Cambridge University, who comes third with a w-index of 24. Particle theorist Frank Wilczek (Massachusetts Institute of Technology) and Marvin Cohen (University of California, Berkeley) are joint fourth with a score of 23.

While Witten, Anderson and Wilczek also took three of the top five slots in the h-index ranking, the big winner under the new criterion is Hawking, who has a relatively modest h-index of just 62, compared to Witten’s score of 110.

According to Wu, a researcher with a w-index of 1 or 2 is someone who “has learned the rudiments of a subject”. A w-index of 3 or 4 characterizes a researcher who has mastered “the art of scientific activity”, while “outstanding individuals” are those with a w-index of 10. Wu reserves the accolade of “top scientists” for those with a w-index of 15 after 20 years or 20 after 30 years.

The w-index is easy to work out using a publication database such as ISI Web of Knowledge from Thomson Reuters, Scopus from Elsevier or Google Scholar. It can be determined in the same way as the h-index by simply searching for a researcher’s name in a database and then listing all their papers according to citations, with the highest first.

The hunt for ‘God’s particle’?!

By Jon Cartwright

We have Leon Lederman to blame. For the “God particle”, that is. Since he published his 1993 book, The God Particle: If the Universe Is the Answer, What Is the Question?, the layperson might be forgiven for believing the Large Hadron Collider (LHC) is not searching for a particle called the Higgs boson, but a path to spiritual enlightenment.

Many physicists hate referring to Him. For some particle physicists, the “God particle” belittles the hordes of other theoretical particles that might be detected at the LHC. They say it reveals little of the particle’s function, and is savoured by writers with little feel for rhetoric. For some non-particle physicists, the God particle epitomizes the hype that surrounds particle physics. Then there are those who think divine connotations are always a bad idea.

Are they, though? When a furore about the use of “God particle” began bouncing around the blogosphere last August, mostly in response to an article written by Dennis Overbye of the New York Times in which he defended the term, several agreed that religious metaphors should be an acceptable part of our language. Einstein used them all the time (e.g. “Quantum mechanics…yields much, but it hardly brings us close to the secrets of the Old One”) yet historians tend to conclude he was not a theist. Even when I began writing this blog entry I thought I might be clever and refer to the Higgs as the light at the end of the LHC’s tunnel — before I reminded myself that the Higgs is not the only particle of import they expect to find.

As Sean Carroll noted on the Cosmic Variance blog, it is a fear of pandering to the religious right that is driving the expulsion of religious metaphors. If certain atheists succeed, religious metaphors will go the way of the dodo. The God particle is not one of the best, but it might be one of the last.

Which brings me to the point of this entry (nothing Earth-shattering, I warn you now). This morning I was looking at the news links posted on the Interactions website, only to find one from the Guardian newspaper headlined “The hunt for God’s particle“. That’s right — you read it correctly the first time. “God’s particle”? Where’s the metaphor in that? Have we now come full circle, having compared the search for the Higgs boson to the path to spiritual enlightenment, only to reduce it to another of God’s creations?

Poor old Lederman must wonder what he started.

A cool $50m for theoretical physics


By Hamish Johnston

In my line of work I don’t usually get to talk to multi-millionaires — but a few weeks ago I had the pleasure of speaking with high-tech magnate Mike Lazaridis, who made his fortune developing the Blackberry handheld email/mobile phone device.

Lazaridis and I were in a conference call with Neil Turok, one of the world’s leading cosmologists, who had just been enticed by Lazaridis to leave Cambridge and become executive director of Canada’s Perimeter Institute for Theoretical Physics in Waterloo, Ontario.

The institute was founded in 2000 by Lazaridis, who put up about CDN$100m of his own money. Now, Lazaridis has donated a further $50m to Perimeter.

If you count the millions that he and his wife have given to the Institute for Quantum Computing at the nearby University of Waterloo, Lazaridis (who is not a physicist) has spent an amazing $200m on physics research!

When I asked Lazaridis why Turok was the right person to lead the institute he said: “We share deep convictions in the importance of basic science, the importance of funding basic science, and the importance of philanthropy in promoting basic science for the advancement of mankind”.

Lazaridis is one of a small but growing number of benefactors with deep convictions and deep pockets when it comes to the more esoteric disciplines of physics such as cosmology, astrophysics and particle physics.

Just two weeks ago an anonymous benefactor donated $5m to Fermilab, which has been particularly hard hit by US government cuts in physics spending.

And staying with the topic of funding cuts, during our conversation Turok told me that recent cutbacks in the UK made Perimeter’s offer all the more attractive — something that he has discussed in great detail in a recent interview with the Times.

Extreme UV light made easy

A new system to generate coherent extreme-ultraviolet (EUV) light has been developed by researchers in Korea. The device, based on a nanostructure made of bow-tie shaped gold “antennas” on a sapphire substrate, is smaller and cheaper than existing systems and might allow an EUV source the size of a laptop computer to be made. Potential applications for the source include high-resolution biological imaging, advanced lithography of nanoscale patterns and perhaps even “X-ray clocks”.

EUV light has a wavelength of between around 5 and 50 nm (roughly 10 to 100 times shorter than that of visible light). It can thus be used to etch patterns at tiny length scales and is ideal for spectroscopic applications because its wavelengths coincide with those of many atomic transitions.

However, EUV radiation is currently produced in a very complicated process involving the use of amplified light pulses from an oscillator (a source of laser light) to ionize noble gas atoms. The electrons freed during this process are accelerated in the light field and their surplus energy is released as attosecond (10⁻¹⁸ s) pulses of light of different wavelengths. The shortest wavelengths of light can then be “filtered out” to produce a single EUV pulse.

Scientists would ideally like to produce EUV light directly from the oscillator without the need for expensive and bulky amplifiers. In this way, EUV-light generation could be simplified and the size of the source significantly reduced to tabletop dimensions. In contrast, current devices usually measure around 2–3 m across. Now, Seung-Woo Kim of KAIST in Daejeon and colleagues have shown that this might be possible.

The researchers report that a bow-tie nanostructure of gold — measuring around 20 nm across — can enhance the intensity of femtosecond laser light pulses by two orders of magnitude. This is high enough to generate EUV light with a wavelength of less than 50 nm directly from a small pulse with an intensity of 10¹¹ W/cm² injected into argon gas (Nature 453 757). The intensity needed is about 100 times less than in traditional approaches.
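As a back-of-the-envelope check on those figures (our own arithmetic, not a calculation taken from the paper): a two-orders-of-magnitude intensity enhancement applied to the injected pulse gives a local intensity of roughly

\[
I_{\mathrm{local}} \sim 10^{2} \times 10^{11}\ \mathrm{W\,cm^{-2}} = 10^{13}\ \mathrm{W\,cm^{-2}},
\]

which is the sort of intensity normally needed to drive high-harmonic generation in argon, hence the claim that the oscillator alone, without an amplifier, can do the job.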

Surface plasmons

The technique works thanks to “surface plasmons” (surface excitations that involve billions of electrons) in the “gap” of the bow-tie gold nanostructures. When illuminated with the correct frequency of laser light, the surface plasmons can begin to resonate in unison, greatly increasing the local light field intensity. This phenomenon, known as resonant plasmon field enhancement, is already exploited in imaging techniques, such as surface-enhanced Raman scattering, which is sensitive enough to detect individual molecules on a metal surface.

Immediate applications include high-resolution imaging of biological objects, advanced lithography of nanoscale patterns and making X-ray clocks. These exploit a frequency-stabilized femtosecond laser and are being investigated worldwide to replace the current caesium atomic clocks for better time precision.

“This new method of short-wavelength light generation will open doors in imaging, lithography and spectroscopy on the nanoscale,” commented Mark Stockman of Georgia State University in a related article (Nature 453 731). The spatially coherent, laser-like light could have applications in many areas: spectroscopy; screening for defects in materials; and, if extended to X-ray or gamma-ray wavelengths, detecting minute amounts of fissile materials for public security and defence.

The team now plans to improve the conversion efficiency of the generated light by modifying the design of their nanostructure — for example, by making 3D cones with sharper tips. These will not only enable higher local field enhancement but also better interaction of the femtosecond light pulses with injected gas atoms. The team will also test the spatial and temporal coherence of the generated EUV light.

Superconductivity mystery deepens

By Michael Banks

I have been closely following events concerning a new class of iron-based superconductors ever since Physics World broke the story about their discovery in March. The new materials, containing planes of iron and arsenic separated by planes of lanthanum and oxygen, offer high-temperature superconductivity without the need for copper-oxide planes as in the cuprates.

The challenge now is to understand how these superconductors work, i.e. what the responsible pairing mechanism is. Early calculations showed that the superconductivity cannot be described by electron-phonon coupling. The mechanism could therefore be similar to cuprate-based superconductors, which currently hold the record for the highest superconducting transition temperature (although the mechanism in the cuprates is still not understood).

Now, however, a paper published in Nature suggests that SmFeAsOF, which is the same as the material in the story we reported in March but with the lanthanum replaced by samarium, may behave quite differently from the cuprates. The paper’s authors, who are based in the US and China, show that SmFeAsOF has a ‘single gap’ in the density of states of the Cooper-pair fluid (an energy gap arises because a finite amount of energy is needed to break apart the two electrons bound in a Cooper pair). The temperature dependence of the gap was found to obey conventional BCS predictions — the theory, named after Bardeen, Cooper and Schrieffer, which proposes that electrons attract one another via phonons to form Cooper pairs.
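For reference, the “conventional BCS predictions” for the gap are the standard weak-coupling results (textbook values, not numbers from the Nature paper):

\[
\Delta(0) \approx 1.76\,k_{B}T_{c}, \qquad \Delta(T) \approx 3.06\,k_{B}T_{c}\sqrt{1 - T/T_{c}} \ \ \text{as}\ T \to T_{c},
\]

so the gap opens continuously just below the transition temperature and grows to a size set by Tc itself.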

This is all quite different from the cuprates, which don’t follow BCS predictions and also have a so-called ‘pseudo-gap’, which, as I understand it, only allows certain electrons to ‘see’ a gap depending on how they travel with respect to the crystal lattice. The authors found no evidence of a ‘pseudo-gap’ in the new materials. So it seems that the materials follow BCS predictions, but with a superconducting transition temperature that is too high to be explained by electron-phonon coupling. The mystery deepens.

In another recent development, researchers in Switzerland have managed to grow single crystals of the samarium-based iron superconductor. All previous work had been carried out on polycrystalline samples, so the ability to study single crystals means that pinning down the elusive pairing mechanism may be a step closer.

Electrical noise measures Boltzmann constant

Physicists in the US have developed a technique that may help to make a more accurate measurement of the Boltzmann constant, a fundamental value that relates the kinetic energy of a group of particles to their overall temperature.

The technique could mark another step on the road to redefining the Kelvin unit of temperature. Currently, the International Committee for Weights and Measures (CIPM) in Paris — the hub of the metrology community — defines the Kelvin as 1/273.16 of the temperature difference between absolute zero and the triple point of pure water (0.01 °C). However, the CIPM would prefer to define the Kelvin, along with other SI units, in terms of fundamental constants. The Kelvin could be obtained from the second, which is already known to about one part in 10¹⁶, and the Boltzmann constant, kB.

The best current technique for determining kB involves measuring the speed of sound in argon gas, which gives a measurement to within two parts per million. Other techniques include measuring the dielectric constant of a gas; the radiation from a black body; and the absorption of laser light by molecules. The CIPM would like to combine several such techniques to rule out systematic errors in the final value of kB.
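For context, the acoustic route rests on the textbook relation between the speed of sound u in a dilute monatomic gas, the temperature and kB,

\[
u^{2} = \frac{\gamma\,k_{B}T}{m}, \qquad \gamma = \tfrac{5}{3}\ \text{for argon},
\]

where m is the atomic mass; measuring u at the known triple-point temperature of water (273.16 K) therefore yields kB. (The real metrology involves many corrections to this ideal-gas picture.)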

Samuel Benz and colleagues at the National Institute of Standards and Technology (NIST) have developed another technique, known as Johnson-noise thermometry (JNT). “There is a lot of research worldwide with many different approaches, all trying to improve measurements of the Boltzmann constant,” says Benz. “Our approach is the only ‘electrical’ approach.”

Johnson noise is white noise generated by the random motion of electrons in all electrical components that have resistance, and has a magnitude that can be predicted directly from the component’s resistance and temperature, and kB. To get kB — or, more precisely, a ratio of kB to Planck’s constant, h, which is known with much less uncertainty — the Johnson noise must be compared with another, reliable noise source at the same temperature. The development made by Benz’s team is the realization of such a reliable noise source — a quantum voltage noise source (QVNS) that comprises thousands of superconducting “Josephson junctions”.
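As a rough illustration of the “electrical” route, here is a minimal sketch in Python of the underlying Johnson–Nyquist relation only; it is not a description of NIST’s actual QVNS comparison procedure, and all of the numbers below are invented for illustration.

# Johnson-Nyquist relation: the one-sided voltage-noise power spectral
# density of a resistor is S_V = 4 * k_B * T * R  (units: V^2 per Hz).
# If S_V, R and T are measured independently, k_B follows directly.

R = 100.0          # resistance in ohms (hypothetical resistor)
T = 273.16         # temperature in kelvin (water triple point)
S_V = 1.509e-18    # measured noise PSD in V^2/Hz (made-up value)

k_B = S_V / (4.0 * T * R)
print(f"k_B = {k_B:.4e} J/K")   # about 1.381e-23 J/K for these numbers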

Using their JNT-QVNS technique, the NIST researchers have measured the ratio of kB to h with an uncertainty of 25 × 10⁻⁶ (IEEE Trans. Inst. Meas. to be submitted). Although this is less accurate than other techniques, the researchers think that they should be able to reduce the uncertainty to 6 × 10⁻⁶ in the future, which would make it an attractive method for determining the Boltzmann constant for the CIPM.

The CIPM are currently planning to redefine four of the SI units, including the Kelvin, by 2011.

Technique probes nanoscale magnetism

Researchers in Japan have used a new technique to measure the magnetic and electronic structure of subsurface atomic layers in a material for the first time. The technique, dubbed diffraction spectroscopy, will be crucial for understanding nanoscale magnetism and developing high-density “perpendicular” magnetic recording materials.

Data storage densities will soon need to exceed one terabyte (10¹² bytes) per square inch, requiring bits just 10 nm or less across. But this is the scale at which surface magnetism appears, so it is critical to understand whether there are any unusual magnetic effects.

Fumihiko Matsui and colleagues at the Nara Institute of Science and Technology and other Japanese institutions combined two existing techniques to create theirs: Auger electron diffraction and X-ray absorption spectroscopy. They analysed “forward focusing” peaks that appear in the spectra along the directions of atoms on the surface of a sample. By examining the intensity of the peaks, they could distinguish the magnetic and electronic structures of individual layers (Phys. Rev. Lett. 100 207201).

Layer by layer

The researchers used their technique to analyse the magnetic structure of a thin film of nickel on a copper surface, an important material for magnetic data storage. Until now, the atomic magnetic structure of nickel thin films has been unclear, although scientists know that the magnetization axis in the films goes from being parallel at the material surface to being perpendicular at 10 atomic layers deep. Matsui and colleagues analysed this transition region and measured the magnetic moments in each individual layer.

Knowing exactly how these magnetic moments change throughout the structure could be useful for making perpendicular magnetic recording devices. Perpendicular magnetism should be capable of delivering more than triple the storage density of traditional longitudinal recording materials. This is because the magnetic particles can be packed closer together for greater density, which leads to more data per square inch.

Several diffraction techniques that image atomic structure already exist, but they have their drawbacks. Scanning tunnelling spectroscopy, for example, can only analyse the surface of a sample. The Japanese team’s diffraction spectroscopy technique can be used to visualize both magnetic and electronic properties of subsurface layers at the atomic scale in a non-destructive way for the first time. “Our technique makes it possible to focus on the subsurface region, which connects surface and bulk worlds,” Matsui told physicsworld.com.

The researchers are now extending their technique to analyse the surface of superconducting materials. “We are especially interested in correlating electronic properties and geometric structure at the superconducting phase transition,” says Matsui.

Spot the physicist

“The science wars” is the colourful but rather hyperbolic name given to a period, in the 1990s, of public disagreement between scientists and sociologists of science. Harry Collins, a sociologist at Cardiff University, was one of those making the argument that much scientific knowledge is socially constructed, to the dismay of some scientists, who saw this as an attack on the objectivity and authority of science. Rethinking Expertise could be seen as a recantation of the more extreme claims of the social constructionists. It recognizes that, for all that social context is important, science does deal in a certain type of reliable knowledge, and therefore that scientists are, after all, the best qualified to comment on a restricted class of technical matters close to their own specialisms.

The starting point of the book is the obvious realization that, in science or any other specialized field, some people know more than others. To develop this truism, the authors present a “periodic table of expertise” — a classification that will make it clear who we should listen to when there is a decision to be made that includes a technical component. At one end of the scale is what Collins and Evans (who is also a Cardiff sociologist) engagingly call “beer-mat expertise” — that level of knowledge that is needed to answer questions in pub quizzes. Slightly above this lies the knowledge that one might gain from reading serious journalism and popular books about a subject. Further up the scale is the expertise that only comes when one knows the original research papers in a field. Collins and Evans argue that to achieve the highest level of expertise — at which one can make original contributions to a field — one needs to go beyond the written word to the tacit knowledge that is contained in a research community. This is the technical know-how and received wisdom that seep into aspirant scientists during their graduate-student apprenticeship to give them what Collins and Evans call “contributory expertise”.

What Collins and Evans claim as original is their identification of a new type of expertise, which they call “interactional expertise”. People who have this kind of expertise share some of the tacit knowledge of the communities of practitioners while still not having the full set of skills that would allow them to make original contributions to the field. In other words, people with interactional expertise are fluent in the language of the specialism, but not with its practice.

The origin of this view lies in an extensive period of time that Collins spent among physicists attempting to detect gravitational waves (see “Shadowed by a sociologist”). It was during this time that Collins realized that he had become so immersed in the culture and language of the gravitational-wave physicists that he could essentially pass as one of them. He had acquired interactional expertise.

To Collins and Evans, possessing interactional expertise in gravitational-wave physics is to be equated with being fluent in the language of those physicists (see “Experts”). But what does it mean to learn a language associated with a form of life in which you cannot fully take part? Their practical resolution of the issue is to propose something like a Turing test — a kind of imitation game in which a real expert questions a group of subjects that includes a sociologist among several gravitational-wave physicists. If the tester cannot tell the difference between the physicists and the sociologist from the answers to the questions, then we can conclude that the latter is truly fluent in the language of the physicists.

But surely we could tell the difference between a sociologist and a gravitational-wave physicist simply by posing a mathematical problem? Collins and Evans get round this by imposing the rule that mathematical questions are not allowed in the imitation game. They argue that, just as physicists are not actually doing experiments when they are interacting in meetings or refereeing papers or judging grant proposals, the researchers are not using mathematics either. In fact, the authors say, many physicists do not need to use maths at all.

This seemed so unlikely to me that I asked an experimental gravitational-wave physicist for his reaction. Of course, he assured me, mathematics was central to his work. How could Collins and Evans have got this so wrong? I suspect it is because they misunderstand the nature of theory and its relationship with mathematical work in general. Experimental physicists may leave detailed theoretical calculations to professional theorists, but this does not mean that they do not use a lot of mathematics.

The very name “interactional expertise” warns us of a second issue. Collins and Evans are sociologists, so what they are interested in is interactions. The importance of such interactions — meetings, formal contacts, e-mails, telephone conversations, panel reviews — has clearly not been appreciated by academics studying science in the past, and rectifying this neglect has been an important contribution of scholars like Collins and Evans. But there is a corresponding danger of overstating the importance of interactions. A sociologist may not find much of interest in the other activities of a scientist — reading, thinking, analysing data, doing calculations, trying to get equipment to work — but it is hard to argue that these are not central to the activity of science.

Collins and Evans suggest that it is interactional expertise that is important for processes such as peer review. I disagree; I would argue that a professional physicist from a different field would be in a better position to referee a technical paper in gravitational-wave physics than a sociologist with enough interactional expertise in the subject to pass a Turing test. The experience of actually doing physics, together with basic physics knowledge and generic skills in mathematics, instrumentation and handling data, would surely count for more than a merely qualitative understanding of what the specialists in the field saw as the salient issues.

Collins and Evans have a word for this type of expertise, too — “referred expertise”. The concept is left undeveloped, but it is crucial to one of the pair’s most controversial conclusions, namely the idea that it is only the possession of contributory expertise in a subject that gives one special authority. In their words, “scientists cannot speak with much authority at all outside their narrow fields of specialization”. This, of course, would only be true if referred expertise — the general lessons one learns about science in general from studying one aspect of it in detail — had no value, which is a conclusion that most scientists would strenuously contest.

This book raises interesting issues about the nature of expertise and tacit knowledge, and a better understanding of these will be important, for example, in appreciating the role of scientists in policy making, and in overcoming the difficulties of interdisciplinary research. Collins and Evans have bigger ambitions, though, and they aim in this slim volume to define a “new wave of science studies”. To me, however, it seems to signal a certain intellectual overreach in an attempt to redefine a whole field on the basis of generalizations from a single case study, albeit a very thorough one.
