Physics education under the microscope in Argentina

By James Dacey in Córdoba, Argentina

I’m writing this blog entry from the heart of Argentina, following a marathon 30-hour journey from my home in Bristol, UK. It was a trip that included two planes, a few buses, a couple of taxis and several long walks, but I’m finally here in Córdoba – Argentina’s second largest city – to attend this year’s International Conference on Physics Education (ICPE).

The meeting’s all about bringing together people to discuss the latest developments in education – including lecturers, teachers, trainers, students and educational researchers. It’s an event with a global outlook, accompanied by a satellite meeting where local high-school teachers will be discussing issues more focused on their day-to-day experiences. At the registration session, things already took a welcome Argentine twist as we were treated to a performance from some local musicians (see picture above).

Hello Kitty in space, Lord of the Rings physics homework and more

This week, South Korea’s one and only astronaut, 36-year-old Yi So-yeon, has quit her job, thereby signalling the end of the country’s crewed space programme for the time being. In 2008 Yi became the first Korean to go into space, when for 11 days she travelled on board a Russian Soyuz spacecraft to the International Space Station, after being chosen through the government-run Korean Astronaut Program. Yi cited personal reasons for quitting, but has been studying for an MBA in the US since 2012. You can read more about her work and reasons for leaving in articles from Australia Network News and ABC News.

Making better solar cells with polychiral carbon nanotubes

A new solar cell made from carbon nanotubes (CNTs) that is twice as good at converting sunlight into power as the best previous such cells has been unveiled by a team of researchers in the US. The National Renewable Energy Laboratory (NREL) has already independently certified the performance of the device – a first for a CNT-based solar cell.

Thin-film photovoltaic materials are better than conventional solar-cell materials (such as silicon) because they are lighter, more flexible and cheaper to make. They work by absorbing photons from sunlight and converting these into electron–hole pairs (or excitons). To generate electric current, an electron and hole must be rapidly separated before the two particles have a chance to recombine and be lost. For the best efficiencies, the exciton must therefore travel quickly to another layer in the device, where charge separation occurs.

Single-walled carbon nanotubes (SWCNTs) are ideal as thin-film photovoltaics because they absorb light across a wide range of wavelengths from the visible to the near-infrared and possess charge carriers (electrons and holes) that move quickly. However, most thin-film cells containing SWCNTs have so far suffered from limited current and voltage, and therefore poor power-conversion efficiencies.

Broader solar spectrum

Now, a team led by Mark Hersam of Northwestern University and Shenqiang Ren of the University of Kansas, along with colleagues at the Massachusetts Institute of Technology, has designed a new type of solar cell containing polychiral SWCNTs and fullerenes that maximizes the amount of photocurrent produced by absorbing a broader range of solar-spectrum wavelengths. In particular, the cells significantly absorb in the near-infrared portion of the spectrum – a range that is currently inaccessible to many leading thin-film photovoltaic technologies, says Hersam.

A SWCNT is a sheet of carbon just one atom thick that has been rolled up into a tube with a diameter of about 1 nm. The atoms in the sheet are arranged in a hexagonal lattice and the relative orientation of the lattice to the axis of the tube is its chirality. “Previous CNT solar cells were mainly made from single-chirality CNTs, whereas our solar cells make use of tubes that are polychiral,” explains Hersam. “By using these multiple chiralities, our CNT solar cells absorb across a wider portion of the solar spectrum, which leads to higher currents and efficiencies.”

Record highs

The researchers say that they also maximized the photovoltage produced by their solar cells by controlling the interface between the active photovoltaic layer and the underlying hole-transport layer. This interface layer allows the generated electrons and holes to meet and efficiently recombine.

The devices could reignite interest in all-carbon solar cells, a research area that has been neglected in recent years. The fact that the new cells absorb across a broad range of wavelengths, including in the near-infrared, means that they could be especially useful as the active elements in tandem or multi-junction devices. As their name suggests, these devices contain two or more junctions, each of which absorbs light of different wavelengths from the Sun. For example, the junctions at the front of the cell can be made of a wider band-gap material that harvests high-energy photons, while more abundant lower-energy photons can be collected by a smaller-band-gap material situated at the back of the cell. These devices perform better than their single-junction counterparts, with power conversion efficiencies of about 42% compared with just over 30%.
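As a rough illustration of the band-gap bookkeeping described above, the Python sketch below assigns a photon to the front-most junction whose gap its energy can bridge, using the standard conversion E (eV) ≈ 1240/λ (nm). The band-gap values and function names here are made-up examples for illustration, not figures from the study:

```python
# Illustrative only: which junction of a tandem solar cell absorbs a photon?
# A junction absorbs photons with energy at or above its band gap; lower-energy
# photons pass through to the narrower-gap junctions behind it.

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV, using E = hc/lambda with hc ~ 1240 eV*nm."""
    return 1240.0 / wavelength_nm

def absorbing_junction(wavelength_nm, band_gaps_ev):
    """band_gaps_ev is ordered front (widest gap) to back (narrowest gap)."""
    energy = photon_energy_ev(wavelength_nm)
    for i, gap in enumerate(band_gaps_ev):
        if energy >= gap:
            return i  # absorbed by the first junction that can bridge the gap
    return None  # too little energy: the photon passes through the stack

stack = [1.8, 1.2]  # hypothetical front and back band gaps, in eV
print(absorbing_junction(500, stack))   # visible photon -> 0 (front junction)
print(absorbing_junction(900, stack))   # near-infrared  -> 1 (back junction)
```

A 500 nm photon carries about 2.5 eV and is caught by the wide-gap front cell; a 900 nm near-infrared photon (about 1.4 eV) slips past it and is harvested at the back.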

The team says that it is now busy trying to further improve the power-conversion efficiency of its CNT-based solar cells. “We also intend to introduce additional materials apart from fullerenes into our future cell designs that complement the properties of CNTs,” says Hersam.

The research is published in Nano Letters.

Electrons in magnetic field reveal surprises

The best glimpse yet of electrons moving in a magnetic field has revealed that the particles’ behaviour differs strongly from what is predicted by classical physics but is consistent with quantum-mechanical theory. Rather than rotating uniformly at a single frequency, electrons in a magnetic field are capable of rotating at three different frequencies, depending on their quantum properties, an international team of researchers has found.

Cyclonic movements

Little is known about how electrons behave in a magnetic field, and scientists are keen to improve their understanding of the physical processes involved. Free-electron Landau states are a form of quantized state adopted by electrons moving through a magnetic field. All charged particles interact with electromagnetic fields via the Lorentz force; this interaction causes electrons in a magnetic field to move in a corkscrew pattern. “Landau states can be envisaged as vortices occurring naturally in the presence of magnetic fields. The magnetic field plays the same role for electrons as the Earth’s rotation plays for the creation of cyclones, but on a much smaller scale,” says Peter Schattschneider of the Institute of Solid State Physics at the Vienna University of Technology, who is part of an international team, including researchers from France, Japan and the US, that has now devised a way to reconstruct these states.

According to classical physics, electrons should rotate about the magnetic-field direction with a single frequency, called the “cyclotron frequency”. But in their experiments, the researchers found that, contrary to what was predicted, they were able to induce a multitude of rotation frequencies in their moving electrons, namely the cyclotron frequency, zero frequency and the Larmor frequency (which is half the cyclotron frequency).
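For a back-of-the-envelope sense of the two non-zero frequencies, the classical cyclotron frequency is f_c = eB/2πm_e and the Larmor frequency is half of it. The short sketch below uses standard textbook formulas and constants (the function names are mine, not from the study):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def cyclotron_frequency(b_tesla):
    """Classical cyclotron frequency f_c = eB / (2*pi*m_e), in Hz."""
    return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON)

def larmor_frequency(b_tesla):
    """The Larmor frequency is half the cyclotron frequency."""
    return cyclotron_frequency(b_tesla) / 2

# In a 1 T field an electron's cyclotron frequency is about 28 GHz:
print(f"cyclotron: {cyclotron_frequency(1.0):.3e} Hz")
print(f"Larmor:    {larmor_frequency(1.0):.3e} Hz")
```

At the fields of a few tesla typical of electron microscopes, these rotation frequencies sit in the tens-of-gigahertz range.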

Vortex beams

The team did not observe the electrons’ Landau states directly. Rather, the researchers used a transmission electron microscope to create so-called electron vortex beams, which can be shaped so that their rotational behaviours closely resemble Landau states. “In an electron vortex beam, electrons are swirling around a common centre similar to air molecules in a tornado. Typically, this bunch of whirling electrons is also moving along its axis of rotation, thereby moving along a spiral path,” says Schattschneider.

The team used the microscope’s focusing lenses to reconfigure the electron vortex beams so that these matched the size of the Landau states. Schattschneider compares the task of determining the rotation of the electrons to figuring out how many times a thin wire is wound around a rod. “When looking at the wire directly, it is extremely difficult to count the number of windings. But when it is stretched along the direction of the rod, the wire takes the form of a well-spaced spiral, for which it is easy to count the revolutions,” he says. “This is precisely what we did with the Landau states: we ‘elongated’ them to vortex beams. That way we could measure [their frequencies] with very high accuracy.”

“This is a very exciting finding, and it will contribute to a better understanding of the fundamental quantum features of electrons in magnetic fields,” says Franco Nori of the RIKEN Centre for Emergent Matter Science in Japan, who led the research. In addition to showing that the rotational dynamics of the electrons are more complex and intriguing than was once believed, the new findings could have practical implications for technology, according to the researchers.

Jo Verbeeck of the University of Antwerp in Belgium believes that the quantum effects of electrons revealed in the new study are “thought-provoking”. “What is interesting now is that the authors succeeded in taking these Landau states into free space, away from the material in which they normally manifest themselves, in order to better study the peculiarity of their motion,” says Verbeeck, who was not involved in the study.

“We hope that this will lead to new insights and a better understanding of the delicate interaction between magnetic fields and matter, which might one day give rise to new and better technologies such as sensors and memory devices,” Schattschneider says.

The research is published in Nature Communications.

Discovering your inner scientist

Chad Orzel

Chad Orzel writes one of the most active and longest-running science blogs on the net, having posted the first entry on his blog Uncertain Principles back in June 2002. A physicist at Union College in Schenectady, New York, he’s also written two popular-science books, based on the cute premise of trying to teach first quantum physics and then relativity to his dog.

So, a couple of months back, when we noticed that Orzel was coming to the UK, we decided to invite him to give a talk as part of the Bristol Festival of Ideas. Orzel kindly accepted our offer and last night saw him speak here at the offices of IOP Publishing, which publishes Physics World. The talk was entitled Eureka! Discovering Your Inner Scientist, which just happens to be the title of Chad’s next book. (And what’s wrong with a spot of self-publicity?)

Scientific booms and busts

Workers with skills in the so-called STEM disciplines – science, technology, engineering and mathematics – are in short supply. Countries like the US and the UK, which have traditionally led the world in these areas, are facing tough competition from emerging nations such as China and India, both of which are training large numbers of scientists and engineers. If the established countries do not up their game, they risk losing out in the globalized “knowledge economy” of the future.

Statements like these are ubiquitous in discussions of science policy. Sympathetic business leaders and politicians repeat variations of them regularly, and within the scientific community their truth is widely (though not universally) regarded as self-evident. But there is a problem with this consensus: according to the US demographer and veteran labour-market scholar Michael S Teitelbaum, it is not necessarily true.

In Falling Behind? Boom, Bust and the Global Race for Scientific Talent, Teitelbaum mounts a sustained attack on the idea that a STEM shortage exists at all, at least in the US. Based on a substantial (though sometimes frustratingly incomplete) body of economic and social data, he offers three conclusions. The first is that since the end of the Second World War, repeated alarms about looming shortfalls in the US supply of scientists and engineers have led to damaging cycles of “boom and bust” in the scientific job market. A second, more eyebrow-raising, conclusion is that increases in science funding are far from a panacea, and can even be destabilizing. Most controversially, though, Teitelbaum argues that current concerns about shortages of scientists and engineers in the US are “quite inconsistent with nearly all available evidence”.

The nuances of this final conclusion – and, in particular, whether it could apply to the UK – will be the focus of a separate Physics World article later this year. In this review, I will concentrate instead on the book’s first two points.

Falling Behind? begins by describing past cycles of boom and bust in the US scientific job market. The first such cycle began in 1957, when the launch of the Soviet Union’s Sputnik satellite triggered panic across America. The US government responded by increasing federal funding for science (channelled via the Department of Defense, the National Science Foundation and the newly created National Aeronautics and Space Administration, among others) and making sustained efforts to train more scientists and engineers. By doing so, it hoped to fill the perceived gap between the country and its Cold War opponent.

By the late 1960s, however, the situation had changed. As Teitelbaum explains, the success of the Apollo space programme, coupled with increasing resistance to the Vietnam War, made big defence-related research budgets seem both less necessary and less appealing. As politicians’ interest faded, funding slowed. Thousands of highly qualified scientists – some of whom had been hearing about “shortages” in their chosen disciplines since they were teenagers – found that the jobs they had trained for no longer existed. In July 1971 a prominent US chemist, Wallace R Brode, lamented in Science that new science graduates “go out into the cold cruel world, only to find no jobs available, or else below the level of their training and ability”.

For US physicists, this first cycle of “alarm/boom/bust” was the most damaging. Subsequent busts at the end of the Cold War and during the dotcom crash of the early 2000s had their epicentres in other areas of science, while the ongoing crisis in the American biomedical sector has left the country’s physicists almost unscathed. Interestingly, though, Teitelbaum traces the current biomed bust to a 1991 report by the physicist and Nobel laureate Leon Lederman. In this report, Lederman – who was, at the time, president-elect of the American Association for the Advancement of Science – argued passionately for a repeated doubling of federal funding for scientific research, until it reached levels above and beyond those of the 1960s “golden age” in physics. While Lederman did not specify how this new funding should be distributed across disciplines, his ideas gained their greatest traction among advocates for the nascent biotechnology and genetics industries. Between 1998 and 2003, the budget for the biomedical-focused National Institutes of Health (NIH) was, accordingly, doubled.

What happened next was not the utopia that proponents had expected. The flood of new funding was more than absorbed by a flood of new PhD students, postdocs and grant applications. When the doubling stopped and normal service (meaning budgets that were flat or falling in inflation-adjusted dollars) resumed, there wasn’t enough money to go around. By 2012, Teitelbaum observes, an applicant’s chances of winning a major NIH grant were significantly worse than they had been before the doubling started, and senior researchers were spending ever-larger fractions of their time chasing grants that they were increasingly unlikely to get. A brief return to inflation-busting annual budget increases brought temporary relief, but as Teitelbaum notes, “even members of Congress who were strong supporters of biomedical research did not seem responsive to pleas of a ‘funding crisis’ from a research sector that had doubled its budget so rapidly only a few years earlier”. Advocates for rapid increases in science funding should, he concludes, “be careful what they wish for”.

Photo of Leon Lederman standing in front of a blackboard

Readers with interests in science policy, careers or funding will find this book fascinating, although often disquieting. Teitelbaum’s analyses of historical alarm/boom/bust cycles and (in particular) the NIH budget-doubling brouhaha are illuminating, and he has a knack for anticipating potential criticisms. As I read Falling Behind?, I often found myself thinking “But what about x?” only to find, later on, that Teitelbaum had in fact addressed x, while also parrying counter-arguments y and z that had not occurred to me.

Perhaps the most important criticism of Teitelbaum’s argument concerns whether past claims about scientific “shortages” have any bearing on similar arguments being made today. After all, at the end of the fable about the “Boy Who Cried Wolf”, the wolf turns out to be real, with serious consequences for the unbelieving villagers. Teitelbaum’s response is to acknowledge that the past is an imperfect guide to the present. While many previous claims of a STEM shortage turned out to be overblown, he writes, this “should not lead to the conclusion that present concerns also can be predicted to prove unwarranted”.

Unfortunately, much of the data on STEM employment is patchy. As Teitelbaum observes, even apparently straightforward questions such as “how many postdocs are there in the US?” can be difficult to answer. Data also go out of date quickly. For example, in comparing the job prospects for scientists with those of other highly educated professionals, Teitelbaum states that between 2006 and 2008, lawyers earned, on average, around 50% more than PhD-level scientists. Today, however, this figure sounds implausibly high because the strong demand for lawyers (as evidenced by their earnings) in the mid-2000s soon produced a severe oversupply of law graduates. Indeed, PhD scientists are arguably having the last laugh, since they – unlike law students – at least get their tuition paid and living expenses subsidized during their training.

Teitelbaum’s book concludes with a handful of recommendations. Although he is cautious about suggesting major changes to a hugely successful system – the US is still a major powerhouse for scientific research – several of his ideas would be worth implementing regardless of whether the current alarm about STEM shortages is justified. Giving potential PhD students more and better information about their career prospects, for example, would help dampen the boom/bust pattern by coupling the supply of scientists more tightly with market demands for their services. Changes to funding mechanisms might also reduce what Teitelbaum calls the “tendency to expand beyond whatever funds are available – no matter how large”. And of course, better data on employment would be helpful. Until such data exist, though, readers should treat “shortage” rhetoric with a healthy degree of scepticism.

  • 2014 Princeton University Press £19.95/$29.95hb 280pp

Between the lines: mathematics special

Multicoloured image of the Brillouin zones of a square crystal lattice in 2D

Mathematical visions

The UK Institute of Mathematics and its Applications (IMA) celebrates its 50th anniversary this year. As part of the festivities, it has put together a book, 50 Visions of Mathematics, to illustrate the depth and reach of the subject. The 50 essays in the book cover both pure and applied topics, and even the most esoteric subjects are addressed in an accessible way. A good example is Yutaka Nishiyama’s essay “The mysterious number 6174”, which is about a process called Kaprekar’s operation. Named after the Indian mathematician D R Kaprekar, the operation begins by taking a four-digit number in which the digits are not identical (say, 1964, the year the IMA was founded) and rearranging them to make the largest and smallest numbers possible (9641 and 1469 in this case). If you subtract the smaller number from the larger one, and repeat the same rearranging-and-subtraction operation for the result, you will eventually arrive at the number 6174. At this point the process reaches a dead end because 7641 − 1467 = 6174; hence, 6174 is the “kernel” of Kaprekar’s operation for four-digit numbers. It makes a nice party trick, but Nishiyama argues that it could be more: “maybe, just maybe, an important theorem in number theory is hiding in Kaprekar’s numbers”. Several well-known writers (including Marcus du Sautoy, Simon Singh and Ian Stewart) have contributed essays to the collection, but the prize for effort must surely go to Thilo Gross, who wrote about applying Leonhard Euler’s “Seven Bridges of Königsberg” problem to his (and Physics World’s) home town of Bristol, UK. Unlike medieval Königsberg, the geography of modern Bristol is such that it is possible to devise a continuous path that crosses all of the city’s major bridges exactly once. Unfortunately for Gross, who walked one such path on 23 February 2013, there are 42 major walkable bridges in Bristol, and the route that links them is 33 miles long. Now that’s dedication.
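Kaprekar’s operation is simple enough to try in a few lines of code. This Python sketch (the function names are mine) applies the rearranging-and-subtraction step repeatedly until it reaches the 6174 kernel:

```python
def kaprekar_step(n):
    """One round: rearrange the digits into the largest and smallest
    four-digit numbers possible, then subtract the smaller from the larger."""
    digits = f"{n:04d}"  # pad to four digits so e.g. 999 becomes 0999
    largest = int("".join(sorted(digits, reverse=True)))
    smallest = int("".join(sorted(digits)))
    return largest - smallest

def kaprekar_kernel(n):
    """Iterate the operation until it reaches its fixed point, counting steps."""
    steps = 0
    while kaprekar_step(n) != n:
        n = kaprekar_step(n)
        steps += 1
    return n, steps

# Starting from 1964, the year the IMA was founded:
print(kaprekar_kernel(1964))  # → (6174, 6)
```

Starting from 1964 the sequence runs 8172, 7443, 3996, 6264, 4176 and lands on 6174 in six steps; any four-digit number with non-identical digits reaches the kernel in at most seven.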

  • 2014 Oxford University Press £24.99hb 208pp

 

Pyramid schemes

When it comes to book titles, authors of mathematics texts could learn a thing or two from the ancient Egyptians. Consider the scroll that today’s scholars call, prosaically, the Rhind Mathematical Papyrus. Its authors, who lived in the second millennium BC, called this same text Accurate Reckoning, the Entrance into the Knowledge of All Existing Things and All Obscure Secrets. Despite the papyrus’ fantasy-world title, though, its contents might disappoint schoolchildren: it’s an introductory text on basic fractions. However, as the mathematician David Reimer vividly demonstrates in his book Count Like an Egyptian, even simple arithmetical operations were once performed in ways that now seem alien to us. Egyptian fractions, for example, had no numerator. When the Egyptians wanted to express a quantity such as 2/5, they did so with a hieroglyph that “translates” literally as /3 /15; we would write it as 1/3 + 1/15. In Reimer’s view, the Egyptian system has an advantage over ours because it makes estimating much easier. For example, it is immediately obvious that the Egyptian fraction 3 /2 /1310 is about 3 1/2, but the value of a fraction like 4586/1310 is much harder to evaluate at a glance. The Egyptians also had a different method of multiplication and division, and Reimer demonstrates convincingly that it beats “modern” long-division algorithms in terms of its speed, simplicity and ease of mastery. As he puts it, “somewhere in the heavens, Thoth, the scribal god, is looking down at us with a smile on his face thinking, ‘Who’s primitive now?’ ”
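Any proper fraction can be expanded into a sum of distinct unit fractions of the Egyptian kind. One standard way to generate such an expansion is the greedy method usually attributed to Fibonacci: repeatedly subtract the largest unit fraction that fits. This is a sketch for illustration, not necessarily how Egyptian scribes produced their own tables:

```python
from fractions import Fraction
import math

def egyptian_expansion(frac):
    """Greedy expansion of a fraction 0 < frac < 1 into distinct unit
    fractions. Returns the denominators, e.g. [3, 15] for 1/3 + 1/15."""
    denominators = []
    while frac > 0:
        d = math.ceil(1 / frac)  # smallest d such that 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

print(egyptian_expansion(Fraction(2, 5)))  # → [3, 15], the scribes' /3 /15
```

The greedy method always terminates, though it does not always reproduce the scribes’ preferred expansions, which often favoured small or convenient denominators over the mechanically greedy choice.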

  • 2014 Princeton University Press £19.95/$29.95hb 256pp

 

Undiluted rubbish

The late Martin Gardner, who died in 2010, was one of the 20th century’s greatest and most revered popularizers of mathematics. As the long-time author of the “Mathematical Games” column in Scientific American, he was responsible for introducing a wide, if quirky, range of mathematical topics to a popular audience, including Penrose tiles and John Conway’s “Game of Life” simulation. A noted debunker of pseudoscience and a gifted amateur magician to boot, Gardner had lifelong interests in philosophy, theology and literature, as well as mathematics. In fact, since 1993 his work has inspired a regular series of fan-organized conferences called “Gatherings for Gardner”. Unfortunately, Gardner’s autobiography Undiluted Hocus-Pocus adds little to his legend. Gardner himself seems to have been aware of the book’s failings, but calling his autobiography “slovenly” and “rambling” (as Gardner himself does) is insufficient. In truth it is nearly unreadable, with the chapters on his early life being particularly replete with references to now-obscure philosophical debates and anecdotes of the “maybe you had to be there” variety. Readers who stick with the book long enough may find that its disjointed style grows on them slightly. But Gardner was a prolific author. With so many better examples of his writing out there, there is little point in bothering with this one.

  • 2013 Princeton University Press £16.95/$24.95hb 288pp

Going mobile with NMR spectroscopy

Nuclear magnetic resonance (NMR) spectroscopy could be about to go mobile, thanks to a team of researchers in the US that has shrunk the electronic components needed for the spectroscopic technique down to fit on an integrated circuit the size of a grain of sand. The team’s chip, combined with compact, state-of-the-art magnets, could lead to portable devices that can help identify chemicals in lab reactions and on industrial production lines.

NMR spectroscopy, a technology that has helped visualize the chemical structures of countless compounds, allows scientists to gather information from the spins – the inherent magnetic moments – of atomic nuclei. When compounds with certain nuclei, like those of hydrogen or the isotope carbon-13, are placed in a strong magnetic field, the nuclear spins align with or against the magnetic field. If the nuclei are then bombarded with electromagnetic radiation at a frequency determined by the magnetic-field strength, the directions of the nuclear spins will precess. It is then possible to measure the precession frequencies of the spins of nuclei in a sample to determine how a molecule’s atoms are arranged.
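The relationship between field strength and the frequency mentioned above is linear: f = (γ/2π)B, where γ is the gyromagnetic ratio of the nucleus; for hydrogen nuclei γ/2π is about 42.58 MHz per tesla, a standard tabulated value. A minimal sketch (the function name is mine):

```python
# Proton gyromagnetic ratio over 2*pi, in Hz per tesla (standard value).
GAMMA_BAR_H1 = 42.577e6

def nmr_frequency_hz(b_tesla, gamma_bar=GAMMA_BAR_H1):
    """Precession (resonance) frequency f = (gamma / 2*pi) * B, in Hz."""
    return gamma_bar * b_tesla

# A ~1 T permanent magnet vs a large superconducting magnet (~11.7 T):
print(f"{nmr_frequency_hz(1.0) / 1e6:.1f} MHz")   # ≈ 42.6 MHz
print(f"{nmr_frequency_hz(11.7) / 1e6:.0f} MHz")  # ≈ 498 MHz
```

This is why compact permanent magnets of around a tesla put proton NMR in the tens of megahertz, comfortably within reach of the radio-frequency electronics that the Harvard team integrated on-chip.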

Mini spectroscopy

While scientists have used NMR spectroscopy since the 1950s, the necessary hardware has typically been bulky, requiring superconducting magnets larger than a person and electronics the size of a kitchen cabinet. Recently, smaller permanent magnets that are good enough for NMR have come on to the market and some of the electronic components have been integrated onto semiconductor chips, which has enabled table-top systems that can probe small molecules. But a miniaturized integrated system with a full range of NMR spectroscopy capabilities had not been developed.

Now, though, a team based at Harvard University has done just that. The researchers placed a radio-frequency (RF) transmitter and receiver along with a component known as an “arbitrary pulse sequencer” onto a silicon chip with a surface area of 4 mm². The scientists then combined their chip with a cube-shaped magnet around the size of a large grapefruit and were able to analyse a variety of compounds. To do the analyses, samples are placed inside a small hole in the centre of the magnet.

The key advance was miniaturizing and integrating the pulse sequencer, which controls the timing, shapes and amplitudes of the RF pulses directed at the sample being measured, says Donhee Ham, the Harvard physicist who led the research. “The arbitrary pulse sequencer is the brain of the entire chip,” he says.

Multidimensional probes

The electronics the team developed are an improvement on those of previous portable systems, which have so far only implemented simplified NMR techniques that cannot fully resolve complex molecular structures, says Ham. The more sophisticated technique of “multidimensional” NMR spectroscopy can be extremely useful when trying to probe structures beyond the most basic molecules. With the new integrated pulse sequencer, the researchers can “control the RF transmitter in any way we desire, so the transmitter can produce any RF pulse sequence”, according to Ham – a requirement for multidimensional NMR spectroscopy.

In addition to enabling portable spectroscopy, the team’s miniaturized electronics could be coupled with larger magnets to greatly speed up the NMR process. By incorporating dozens of the chips into a large superconducting magnet, researchers could scan many samples at once rather than one at a time, which can be a laborious process. Such a “high throughput” spectroscopy scheme could accelerate drug discovery, Ham says.

The team’s work “represents a further step towards the complete miniaturization of an NMR spectrometer”, says Giovanni Boero of the Swiss Federal Institute of Technology in Lausanne. But Boero says that the integration of the pulse sequencer is a technical advance rather than a game changer. “It is not a revolutionary paper, but it is an important work in the frame of the worldwide effort towards the goal of performing NMR spectroscopy using a low-cost, highly portable system.”

The work is published in the Proceedings of the National Academy of Sciences.

The power and pitfalls of science advice

When physicist and astronomer Penny Sackett was appointed to be Australia’s chief scientist in 2008, many other scientists thought she was an excellent choice for the job. US-born Sackett is a successful researcher, most notably in the hunt for extrasolar planets, and as head of astronomy at the Australian National University was also an accomplished administrator. But two and a half years into her five-year term as the government’s leading science adviser, she resigned. Sackett cited “personal and professional reasons”, adding that “institutions, as well as individuals, grow and evolve”. It was clear, however, that all had not been well between her and the politicians she was employed to advise.

Sackett’s experience highlights some of the tensions inherent in the science advisory process, according to James Wilsdon of the Science Policy Research Unit at the University of Sussex. Scientists and politicians, he says, have a kind of pact they need to honour: politicians have to respect the evidence put before them, while scientists have to steer clear of policy prescriptions. But in 2010 Sackett effectively reneged on this deal, stating publicly that Australia was “not acting with sufficient speed” to combat global warming, following the government’s decision to put its emissions trading scheme on hold. Within nine months, Sackett had quit her post.

While Wilsdon says he does not know to what extent Sackett’s statement was directly linked to her stepping down, he has no doubt that scientists often are not prepared for the pitfalls of giving advice. “They may have the respect of their peers, while lacking the political and organizational skills needed to navigate complex policy questions,” he says.

Avoiding such pitfalls will be one of the central themes of a two-day conference entitled “Science Advice to Governments” that is being held in Auckland, New Zealand, from 28–29 August. Organized by New Zealand’s chief science adviser Sir Peter Gluckman on behalf of the International Council for Science (ICSU), the meeting will bring together leading science advisers and policy experts, including Wilsdon, from several dozen countries in order to stimulate “frank and fruitful” discussions about what works and what does not when scientists pass on their expertise to politicians.

Heather Douglas, a philosopher of science at the University of Waterloo in Canada, who will also be attending the meeting, says that such discussions are sorely needed. Douglas points out that several countries and multinational organizations have set up new advisory posts in recent years, and she says that these new positions are leading scientists to “finally think about what the advisory job involves”. She maintains, however, that many scientists who agree to take on advisory roles still do not fully understand their responsibilities, particularly when it comes to more complicated aspects of the job such as public communication and whether or not to be an advocate of the science community. “It is only in the past couple of years that these things are starting to be grappled with,” she says.

Advice in meltdown

Both the US and the UK governments have had chief science advisers since the 1960s – a presidential adviser in the former case and an adviser to the prime minister and cabinet in the latter (supplemented, since 2011, by an adviser in each government department). A few other countries have since followed suit: Australia set up an advisory post in 1989, India did so in 1999 and Ireland likewise in 2007. In the last few years, however, such posts have become even more fashionable: New Zealand appointed Gluckman in 2009; the EU hired its first chief scientific adviser – Scottish biologist Anne Glover – in 2012; while the United Nations inaugurated a new scientific advisory board earlier this year (the latter having four physicists including Fabiola Gianotti, a former spokesperson for the ATLAS experiment at CERN).

Many of the incumbents will make the trip to Auckland, where they will have plenty to get their teeth into. One recent disaster they are sure to discuss, and the one that prompted ICSU to set up the meeting in the first place, is the tsunami-induced meltdown at the Fukushima Daiichi nuclear power plant in Japan in March 2011 and experts’ ensuing failure to properly inform the public about its potential impact. Another episode likely to be on the agenda is the deadly earthquake that struck the central Italian town of L’Aquila in 2009, which led to the trial and conviction on manslaughter charges of six scientists and a government official – full or acting members of a government advisory committee – after they were accused of providing false reassurances to the public.

Observers agree that the controversy in Italy highlights the need for a clear definition of an adviser’s role, but opinions differ on exactly what that role should be. Thomas Jordan, an earth scientist at the University of Southern California and chair of an international commission that reviewed earthquake forecasting in the wake of the L’Aquila disaster, believes that an amateur seismologist’s baseless alarms “trapped” the Italian experts into downplaying the risk of a major quake, and that as such the roles of science adviser and decision-maker should be cleanly separated in the future. However, Roger Pielke, a social scientist at the University of Colorado, opposes what he calls “the cartoon image” of a wall isolating science from politics. “The challenge is not to keep these things separate but how to effectively integrate them,” he says. “Separating them goes against the whole spirit of the enterprise, which is to get expertise into the decision-making process.”

Pielke points out that many scientists provide such expertise by acting as advocates – by backing specific courses of action such as the introduction of quotas on greenhouse gas emissions. Such advocacy, he says, is a central element in a healthy democracy, but he believes it is also vital that some scientists act as “honest brokers”. In this role, Pielke explains, they would spell out the science behind a range of policy options rather than narrowing the choice and effectively making policy decisions.

Best placed to play this role, according to Pielke, are learned societies, especially, he says, since effective brokering often requires several specialists working together. Pielke maintains, however, that in practice these organizations sometimes act more as advocates. He cites a 2006 letter written by Bob Ward, a then senior manager at the Royal Society, to the British arm of energy giant Exxon Mobil criticizing the company for funding organizations that “have been misinforming the public” about the science of climate change. “There is no shortage of scientists willing to get into skirmishes,” says Pielke, “but we do have a shortage of organizations that will rise above them.”

John Pethica, a physicist from Trinity College Dublin and a vice president of the Royal Society, responds by saying that the letter was written by an individual rather than the institution as a whole, and that the many statements on scientific policy made by the Royal Society go through an extensive review process before publication. “The only position we take is: the science must be high quality,” he says.

Maintaining trust

The idea that expert advisers should steer clear of backing specific policies is endorsed by many of the scientists who will attend the New Zealand meeting. To illustrate the point, EU adviser Glover takes the example of genetically modified crops. She says that all of the evidence points to such crops being safe. Indeed, last year she was quoted as saying that opposition to the technology on scientific grounds “is a form of madness I don’t understand”. But Glover recognizes that politicians can have legitimate reasons, such as those based on ethics or economics, for opposing the cultivation of GM crops. “The important thing is that they are in a position of strength so that they know about the evidence beforehand and are transparent about the reasons for rejecting it if they choose to,” she says.

Tokyo Electric Power employee measures radiation levels at the Fukushima Daiichi Nuclear Power Plant

To ensure that decision-makers get the information they need, Wilsdon says that scientists need a number of skills beyond those of the researcher. These include communicating complex ideas and scientific uncertainty in simple terms, as well as being able to pull together many different sources of evidence and then present “a set of navigable options” to decision makers. “Advisers are often appointed because they are eminent scientists,” he says, “but they also need a sophisticated understanding and sensitivity regarding the policy-making process.”

This sentiment is shared by Gluckman, who says that one of the hardest aspects of his job is simultaneously maintaining the trust of the public, the media, policy-makers, politicians and the scientific community. In his experience it is actually the last of these groups that is the toughest to deal with: scientists, he points out, often criticize advisers for “not batting for their interests in public”, in part because they confuse the provision of scientific advice with the constant championing of funding for research. “The point about being a science adviser is being an honest broker and leaving the values stuff to the politician,” he says. “We don’t live in a technocratic world, as much as some scientists would wish we do.”

Difficulties notwithstanding, Gluckman believes that it is the one-to-one relationships between a chief scientist and senior politicians that best encourage governments to respect the evidence advisers put before them. However, he acknowledges that cultural factors might make such an approach difficult in some countries. “In the English-speaking world individuals can be put into trusted positions in a number of ways,” he says. “But in some European countries that is not so easy. In Germany, for example, there is a tradition of collectivism, which means there is more of a tendency to work through academies.”

Japan too tends to prefer committees over individuals, says Reiko Kuroda, a chemist at the Tokyo University of Science. According to Kuroda, Japanese politicians discussed reintroducing science advisers in the wake of the Fukushima disaster (a chief scientist having existed between 2006 and 2008), but in the end no new appointments were made. “We need someone with a broad scientific background who is respected by everyone to advise the prime minister,” she says. “But politicians and the general public prefer to have a group deciding. The problem is that the people in the group don’t take any responsibility.”

Beginning a conversation

Back in Europe, Glover has worked hard to try to bring the continent’s science advisers together. After taking up her post, she wanted to get scientists to discuss some of the technical issues underlying EU policies to see where they could reach consensus. But Glover realized that she did not know who to speak to and has since built up a network of individuals from many of the member states – currently 14 – comprising heads of academies, members of science advisory councils and government employees, in addition to the chief scientists of the UK and Ireland. “We had an absolutely outstanding meeting,” she says of the group’s first get-together at the EuroScience Open Forum in Copenhagen in June. “We can learn a lot from each other.”

It is in this spirit that the New Zealand conference will also be held, says Wilsdon. “We hope that this is the beginning of a conversation that improves expertise across different advisory systems,” he says. “The object of the exercise is not to promote one particular model, but to discuss the underlying questions at the boundary between science and politics that apply the world over.”

Using antineutrinos to monitor nuclear reactors

A new system that uses an antineutrino detector to monitor a nuclear reactor for the production of weapons material has been proposed by an international team of physicists. A detector parked close to the facility could fully assess the state of the reactor core by detecting the antineutrinos it emits. However, the researchers admit that current detectors are not up to the job and additional research and development will be needed before their method is viable.

Nuclear power plants can be appropriated to produce plutonium for weapons, which is why governing bodies such as the International Atomic Energy Agency (IAEA) are looking to develop methods that can be deployed outside a facility to confirm that a reactor’s operations are as declared and that no material has been removed from the site. The idea of using antineutrino detectors as a non-proliferation safeguard for nuclear reactors is not new – it was first suggested in 1978 and much research has been carried out since. The method is attractive because an operating reactor generates an enormous number of antineutrinos – of the order of 10²⁶ per day from a typical power reactor.
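That yield can be checked with a back-of-envelope estimate. The sketch below is a rough illustration (not a calculation from the article), assuming a typical power reactor of about 3 GW thermal, roughly 200 MeV released per fission and about six antineutrinos emitted per fission chain:

```python
MEV_TO_J = 1.602e-13  # joules per MeV

def antineutrinos_per_day(thermal_power_w=3e9,
                          energy_per_fission_mev=200.0,
                          nubar_per_fission=6.0):
    """Crude estimate of a reactor's daily antineutrino output."""
    fissions_per_s = thermal_power_w / (energy_per_fission_mev * MEV_TO_J)
    return fissions_per_s * nubar_per_fission * 86400.0

print(f"{antineutrinos_per_day():.1e} antineutrinos per day")
```

With these assumptions the answer comes out at a few times 10²⁵ per day – consistent with the order-of-magnitude figure of 10²⁶ quoted for a typical power reactor.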

Across the spectrum

But the energy spectrum of the antineutrinos depends on whether they are produced via fission of uranium or plutonium – those from plutonium have a lower average energy. Antineutrinos therefore carry signature information about the amount and type of fissile material in the reactor core: by observing the spectrum, it is possible to determine the relative fraction of fissions that arise from plutonium, and this in turn can be used to work out how much plutonium is in the core. Many studies of such antineutrino monitors have been carried out – indeed, a prototype detector, SONGS1, was installed at the San Onofre Nuclear Generating Station (SONGS) in California in 2003, although its limited design curtailed its usefulness. Detector sensitivities remain limited, however, and the full energy distribution of the antineutrino spectrum from a reactor is still not precisely known.
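The unmixing idea can be illustrated with a toy calculation. In the sketch below the template shapes are simple exponentials invented for illustration – not real uranium-235 or plutonium-239 spectra – with the plutonium template given a lower mean energy. A “measured” spectrum is then decomposed into the two fuel templates by least squares to recover the plutonium fission fraction:

```python
import math

energies = [2.0 + 0.1 * i for i in range(61)]  # antineutrino energy bins (MeV)

def toy_spectrum(mean_e):
    """Toy falling spectrum, normalized to unit area."""
    s = [math.exp(-e / mean_e) for e in energies]
    total = sum(s)
    return [x / total for x in s]

u235 = toy_spectrum(1.6)
pu239 = toy_spectrum(1.4)   # plutonium: lower average energy than uranium

true_frac = 0.3             # fraction of fissions from plutonium
measured = [true_frac * p + (1 - true_frac) * u for p, u in zip(pu239, u235)]

# Solve measured ~ f*pu239 + (1-f)*u235 for f by least squares:
diff = [p - u for p, u in zip(pu239, u235)]
num = sum((m - u) * d for m, u, d in zip(measured, u235, diff))
den = sum(d * d for d in diff)
f_hat = num / den
print(f"recovered plutonium fraction: {f_hat:.3f}")
```

In this noise-free toy case the fit recovers the input fraction exactly; a real analysis must also contend with counting statistics, detector response and uncertainties in the predicted spectra.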

Now, Patrick Huber from Virginia Tech in the US, along with colleagues in Vienna, Austria, says that neutrino-detector technology has improved vastly in the past five years. The team has combined detailed reactor simulations with state-of-the-art reactor neutrino-flux calculations and a statistical analysis, which they say “fully accounts for the energy distribution of the antineutrinos”, and has focused on the IR-40 Iranian heavy-water reactor at Arak. In their latest research, recently published in Physical Review Letters, Huber and colleagues build on their earlier study of the 1994 North Korean nuclear crisis, which suggested that the reactor’s core could have been successfully probed when inspectors were allowed to return.

Based on that work, the team has now analysed the publicly available information on the IR-40 reactor. They suggest using 20 tonnes or less of scintillator in a detector system placed in a standard shipping container and parked just outside the reactor building. “The reactor building of the Arak reactor has a diameter of 32–34 m and the reactor sits at the centre, a few metres above ground,” says Huber. “This makes the distance between the centre of the reactor and the centre of the detector 19 m, if one puts the detector right on the outside of the exterior wall of the reactor building.” This proximity is necessary so that sufficient measurements can be completed within the IAEA’s required time of 90 days, according to the team’s analysis.
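The importance of that proximity, and of the 20-tonne detector mass, can be seen from a simple scaling argument. The sketch below is my own rough extrapolation, not the team’s calculation: it scales the event rate reported for the SONGS1 prototype – roughly 400 events per day with a 0.64-tonne detector about 25 m from a roughly 3.4 GW thermal reactor (figures assumed here, not taken from the article) – using the fact that the inverse-beta-decay rate grows with reactor thermal power and detector mass and falls as the inverse square of the distance:

```python
def scaled_rate(power_w=40e6, mass_t=20.0, dist_m=19.0,
                ref_rate=400.0, ref_power_w=3.4e9,
                ref_mass_t=0.64, ref_dist_m=25.0):
    """Scale a reference detector's daily event rate to a new setup."""
    return (ref_rate
            * (power_w / ref_power_w)      # rate proportional to thermal power
            * (mass_t / ref_mass_t)        # rate proportional to detector mass
            * (ref_dist_m / dist_m) ** 2)  # rate falls as 1/distance^2

daily = scaled_rate()
print(f"~{daily:.0f} events/day, ~{90 * daily:.0f} over a 90-day campaign")
```

Even though the 40 MW IR-40 is far less powerful than a commercial power reactor, on this crude estimate the larger detector and short standoff yield a few hundred events per day – enough to accumulate a usable spectrum within the 90-day window.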

Direct detection

Huber told physicsworld.com that the team’s method would allow inspectors to detect fuel removal even if the monitoring was interrupted for a period – due to technical glitches or diplomatic stand-offs – and then restarted. During such an interruption, plutonium-enriched material generated by the reactor could be removed and the reactor replenished with fresh uranium fuel. Current methods, including cameras and video monitoring, would not allow inspectors to detect such a substitution without shutting down the reactor and making expensive and lengthy measurements of the core. An antineutrino detector, on the other hand, would let inspectors assess the state of the core after a period of absence, and could reassure the international community that fuel has not been removed, according to Huber. “That is one of the key advantages of our method compared with any other means of safeguards,” he says. The proposed system would also provide a complete measurement of the reactor’s antineutrino spectrum – essential for pinning down the core’s composition. According to the team’s analysis, their ideal system could detect as little as 2 kg of plutonium being removed from the reactor.

Above-ground advances

Huber does acknowledge that the researchers’ system still requires significant technological advances. “What we need is a detector that can work at the surface, with a good efficiency, with reasonable energy resolution,” he says, pointing out that each of these requirements has been met individually, but not yet all in a single detector. He also says that if a sterile-neutrino detector is successfully built, it would serve perfectly as a safeguard detector.

David Wark, a neutrino specialist at the University of Oxford, acknowledges that the researchers’ case study shows that a detector with realistic capabilities should be able to detect the diversion of interesting quantities of plutonium from a real reactor. However, he also points out that the analysis depends on understanding the shape of the antineutrino spectrum from a reactor, and “deviations between measured spectra and calculations that are as big as the differences this paper is using to distinguish plutonium-239 from uranium-235 have recently been seen by three reactor experiments, which sort of calls into question how well we can reliably model the differences between uranium and plutonium…although that could be checked in advance with known reactors”.

Hopefully, the next five years will determine whether such antineutrino detectors can reliably be used to monitor nuclear reactors.

The research is published in Physical Review Letters.

Copyright © 2026 by IOP Publishing Ltd and individual contributors