
How to advise a politician about science

“Ensuring government is properly informed by science is something that all scientists should be involved in.” So wrote Sir John Beddington, the UK government’s chief scientific adviser from 2008 to 2013, in the book of essays Future Directions for Scientific Advice in Whitehall. His is a noble aim, given that so much of the modern world – from mobile communications and medicine to disease control and climate change – is intrinsically linked to science. But how exactly should we heed his advice?

Many governments and policy-makers have professed an interest in science. Jean-Claude Juncker, current president of the European Commission (EC), wrote in Future Directions that the European Union needs to “make sure that Commission proposals and activities are based on sound scientific advice”. Meanwhile, the Organisation for Economic Co-operation and Development (OECD) concluded in its recent report Scientific Advice for Policy Making that “science is truly at the centre of many important policy issues and scientists are increasingly visible and, in many cases, increasingly vulnerable, in policy-making processes”.

Scientists often say that our immediate concern should be to improve policy-makers’ understanding of “the imperfect nature of science” (Nature 503 335). We believe, however, that what’s even more important is to improve scientists’ understanding of the imperfect nature of politics. Drawing on our experiences of the difficulties of bridging the scientific and political worlds – one of us (JT) served on the UK’s civil-contingencies secretariat, while both of us have contributed to meetings between scientists, politicians and political advisers sponsored by the International Risk Governance Council – we have distilled our thoughts into 12 top tips for scientists who have to advise politicians on how to deal with slowly developing risks that could have catastrophic economic, social or environmental consequences.

When science meets politics

Before we get to that advice, let’s briefly look at science in government and what politicians seek from scientists. All governments have to make three types of decision. They need to fulfil a promised programme, such as backing renewable energy. They need to solve problems and manage unplanned crises as they arise. And they need to prepare for potential future problems, which includes maintaining their own long-term political credibility. Scientists have a role to play in all three areas and those who wish to be involved are, perhaps surprisingly, helped (in the UK at least) by ongoing cuts to the size of the civil service.

Such job losses mean that civil servants increasingly need – in fact, actively want – advice from different sources, with the civil service itself being more focused on implementing that advice. Access to policy-makers is, if anything, becoming easier as the UK government is, in principle, committed to open, evidence-based policy-making, with all major government departments now having their own science adviser.

But what do we mean by “risk”? In their 1985 book Perilous Progress: Managing the Hazards of Technology, Robert Kates, Christoph Hohenemser and Jeanne Kasperson define it as “an uncertain consequence of an event or activity with regard to something humans value”. Inherently uncertain it may be, but managing risk is now a core political preoccupation. In fact, a study conducted by the UK Cabinet Office in 2002 noted that the nature of risk had changed for two reasons.

First, the accelerating pace of scientific and technological development means that we are now faced with what are known as “manufactured risks”. These occur when existing risks, such as natural hazards, are compounded by previously unknown or unexpected vulnerabilities, such as cyber attacks or geomagnetic storms. Manufactured risks force governments and regulators to make risk-based policy judgements across a huge range of technologies, many of which – from nanotechnology to energy – have a strong physics component.


The nature of risk has also changed because the world is increasingly interconnected. As a result of the growth in air travel, IT and mobile communications, the global economy and environment are linked at every level. That interconnectedness has brought huge opportunities, but it’s also exposed citizens to distant events such as the spread of the Ebola virus in Sierra Leone last year. These “systemic” risks are now high on the policy agenda in many countries and, again, there are many areas for physicists and mathematicians to get involved in, especially in understanding and predicting the behaviour of networks and other complex systems.

Scientists who wish to become involved are helped by two recent changes in society: people are increasingly unhappy when governments cannot assess and manage risk, while the media increasingly seek independent validation of governments’ policy prescriptions and professed commitment to open, evidence-based policy making. As Beddington went on to say in Future Directions: “What is more difficult is ensuring that science is brought to bear effectively on the questions which policy-makers know matter but which don’t present a single decision moment, or where it is less obvious that science can help.” In other words, individual scientists must make their specialist knowledge and expertise more widely available, especially when it concerns important scientific issues that politicians may not be aware of.

So for scientists who want to get involved, here are those 12 key pieces of advice, based on recent examples of both success and failure.

The not-so-dirty dozen

1. Be aware that scientists are often seen as just another lobby group. This one simple (if unpalatable) fact means that science advice is more than a matter of speaking truth to power. It is also about persuading those who hold power that the advice is reliable, that the adviser does not have a hidden agenda, and that the advice is worth both listening to and acting upon.

2. Know how government policy-making is structured. Government decision-making is complex, and there is a clear distinction between political decision-making and policy-making. We don’t believe it’s helpful for scientists to involve themselves directly in the former (there are many unhappy examples to illustrate our point). Nor is it easy to break through the barriers that government officials erect to protect themselves and their spheres of influence.

But it is realistic for scientists to contribute evidence when new policies are being crafted, especially when governments have declared (as they have in the UK) that they want open and transparent policy-making and also when those scientists are contributing to areas in which economic growth depends largely on exploiting scientific innovation. Governments are increasingly adapting their policy-making machines to accommodate a more systematic scientific voice. Microbiologist Anne Glover’s success in collaborating with the EC on digital communication during her time as the EC’s chief scientific adviser shows what can be achieved when these new structures work.

3. Realize that science is not usually the only, or even the major, consideration. In terms of policy-making, science can be very low down the pecking order. Few ministers have science degrees, and so they tend to reach for experts and advisers in economic, legal and social issues. It follows that science advice is most likely to be listened to if it can be integrated with information from these other fields. But don’t expect politicians to pick out the salient political or economic advantages from a complex mess of science. Do it yourself! The report The Importance of Physics to Economic Growth from the Institute of Physics (which publishes Physics World) is an excellent starting point.

4. Point out the role of your speciality in contributing to the solution of cross-disciplinary problems. The most intractable science-related problems that governments face tend to be cross-boundary, especially in risk analysis, mediation and prevention. This is where governments need to bring teams of scientists from different areas together to develop a solution. It is not always obvious which areas may hold the key to a solution, so in cases of present or future risk, be prepared to consider if you might have something to contribute – and don’t be shy about coming forward.

5. Appreciate the importance of personal contact. The political process is based largely on developing trust and understanding through personal contacts. Scientists who wish to be heard should aim to develop such contacts, rather than banging the drum from the outside. The point of contact will not necessarily be a politician – it may be a committee chair, a departmental science adviser, a civil servant or other member of a government department, or even a lobbyist. The key is to find the right conduit for communication.

6. Be aware of political priorities and the need to engage with them. All too often, researchers with a passion for their subject seem to think that politicians just need to be “put straight” on the science surrounding a particular issue. Research on why certain types of advice are accepted, and others ignored, shows that this approach is ineffective unless the advice is framed in terms of the needs and preoccupations of the decision-maker – in this case, that person’s political priorities. These may include the social context (such as how voters in the decision-maker’s constituency might be affected), the economic cost or benefit, and even the practicality of implementing a decision before the next election. To be truly effective, scientists must make themselves aware of the political impacts of their advice, and point these out in clear, unambiguous terms. Scientists should also realize that politicians and other policy-makers are constantly bombarded with information, and short, pithy statements are much more likely to be heeded – especially if the writer takes the time and effort to use effective words and phrases that can be borrowed and repeated.

7. Be aware of political timescales. Politicians are primarily concerned with the short term. Any policy with benefits that will be felt only in the distant future is likely to assume less importance than one with benefits that can be proudly displayed before the next election. It follows that scientific advice (which is often concerned with long-term issues) is most likely to be accepted and acted on if at least some short-term benefits can be identified and “sold” to politicians. This is not cynical – it is practical (after all, a politician cannot implement a policy if he or she is not in power).

8. Offer options, not policies. Evidence suggests that science advice is most likely to be heeded if the scientist is perceived as an “honest broker”, integrating scientific knowledge and understanding with other concerns to provide even-handed advice within a policy context. By acting in such a way, scientists can help to break down the often-held political view that scientists are “just another” pressure group, or that they are acting to promote the interests of particular pressure groups.

9. Don’t over-claim. Hubris is as much of a sin among scientists as it is in other specialisms – perhaps more so, since in trying to persuade politicians and the public to take notice, scientists too often tend to overstate their case. In particular, scientists should avoid making predictions. Politicians don’t trust them (having seen so many fail), and are much more likely to be receptive to an understated, even-handed analysis of opportunity versus risk.

10. Keep it as simple as possible, but not simpler. Einstein’s famous dictum is especially appropriate when it comes to providing scientific direction for policy. Politicians and other policy-makers are aware that science is complex, but don’t appreciate (or trust) oversimplification any more than they appreciate over-complexity. The important point in communicating science in a policy context is to focus on those aspects that are relevant to the problem in hand.

11. Be aware that “more research” is seldom an option. The timescales of politics are such that politicians usually need fast answers to immediate problems. It’s counterproductive to use these occasions to push for more support for research, even if that support might be needed. It cannot be said too strongly that requests (or demands) for support for further research simply reinforce most politicians’ belief that scientists, like all other pressure groups, are promoting their views mainly to get a larger share of the financial cake. More cash is more likely only if the arguments for it are separated from the offering of scientific advice on particular issues.

12. Establish long-term gain. Scientific advice is often concerned with long-term issues, but the people who have to implement it (especially politicians and civil servants) often get replaced or change jobs over much shorter timescales. One way to overcome this problem is for scientists to keep an eye on developments (perhaps through a scientific society or other network), to point out short-term opportunities, and to urge that policies based on their advice should be flexible and responsive so that actions can be modified as new information comes in or circumstances change.

Policy in action


One example of scientists working well with politicians and policy-makers took place following the eruption of the Eyjafjallajökull volcano in Iceland in 2010. The potential risk of such an event causing an ash cloud and widespread disruption to air travel had already been identified by the relevant UK government department as part of a national risk-assessment process in 2005. Unfortunately, no-one could be found who was willing to estimate the likelihood of such an event actually causing such a disruption. That’s because the risk depended on a number of factors – such as the frequency and nature of an eruption, as well as atmospheric and weather conditions – that were themselves unpredictable. The risk was therefore held “in reserve” for further study.

So after the Eyjafjallajökull eruption, the government’s then chief scientific adviser was invited to pull together a cross-disciplinary team, including volcanologists, meteorologists and aerosol researchers, to help policy-makers understand the risk of such an ash cloud recurring and to estimate what the “reasonable worst-case scenario” might be. The team also asked whether Eyjafjallajökull was the very worst thing that could happen; the answer was “no”. Much more damaging than another ash cloud, though rather less likely, would be something like a recurrence of the 1783 eruption of the Icelandic volcano Laki, which produced large quantities of gases, including carbon dioxide and sulphur dioxide, that caused famine throughout western Europe. Such an eruption would cause massive problems not just for transport but for health and agriculture too.

The Eyjafjallajökull eruption provided many lessons about using science to support government policy. First, don’t try to predict risk, because wrong predictions are common and merely undermine trust in science. Second, do try to give a best estimate, even if that estimate is provisional upon further research, because governments may otherwise interpret “no opinion” as meaning “there is no problem”. Third, form a team of experts that includes not only people from the most prominent relevant discipline but also anyone who has a relevant contribution to make – including policy-makers themselves. Fourth, assess not just the phenomenon but also its impact. And finally, use your team to build networks that can solve problems in other areas, as was the case with the Cabinet Office’s “natural hazards team”, which contained scientific experts both in government and beyond.

An eruption of risk

Dealing with risk is far from easy. In 2011, for example, six Italian scientists and one government official were charged with manslaughter following the April 2009 earthquake in the city of L’Aquila, the charge being that they had contributed to spreading misleadingly reassuring messages to the public about the earthquake risk. Although the scientists were later acquitted, their case illustrates the legal perils that arise when responsibilities are unclear between governments and their official or unofficial advisers – if scientists are to be heard safely, they need a formal framework.

Such frameworks exist in countries, such as the UK, that recognize the benefits and challenges of integrating scientific advice into policy-making. Indeed, the OECD report Scientific Advice for Policy Making articulates the essential conditions for an effective and trustworthy science-advisory process – namely, a clear remit to produce advice that’s sound, unbiased and legitimate, and the involvement of a full range of scientists, policy-makers and other relevant parties.

Scientists need to be aware of these two conditions when deciding whether to offer advice, but what’s also important is to know – at a practical level – how to communicate effectively with politicians and policy-makers. We hope, therefore, that our advice will be of help – indeed, one example of good, positive interactions between scientists and politicians occurred in the UK after the 2010 eruption of the Icelandic Eyjafjallajökull volcano (see box). What this incident showed is that the systematic use of science is now part of the policy-making landscape and – for those who have seen how it can work – it is a “gift that keeps on giving”.

Scientists who want their advice to be heeded need to put themselves in the shoes of their policy-making audience. They should make things easy for that audience by pointing out political benefits (if there are any), making connections with other politically relevant areas, and providing appropriate words and phrases that those whom they wish to influence can pick up and use.

These well-established principles of communication may seem self-evident, but if they were that obvious, then many more scientists would already be using them. More of us need to catch on to them if science is to take its rightful, essential place in the hierarchy of political decision-making.

New sunspot analysis shows rising global temperatures not linked to solar activity

A recalibration of data describing the number of sunspots and groups of sunspots on the surface of the Sun shows that there is no significant long-term upward trend in solar activity since 1700, contrary to what was previously thought. Indeed, the corrected numbers now point towards a consistent history of solar activity over the past few centuries, according to an international team of researchers. Its results suggest that rising global temperatures since the industrial revolution cannot be attributed to increased solar activity. The analysis, its results and its implications for climate research were discussed today at a press briefing at the IAU XXIX General Assembly currently taking place in Honolulu, Hawaii.

Looking back

Measuring the sunspot number – or Wolf number – is one of the longest-running scientific experiments in the world today, and provides crucial information to those studying the solar dynamo, space weather and climate change. Scientists have been observing and documenting sunspots – cool, dark regions of strong magnetism on the solar surface – for more than 400 years, ever since Galileo first pointed his telescope at the Sun in 1610. Scientists have also known about the solar cycle – an approximately 11-year period during which the Sun’s magnetic activity oscillates from low to high strength, and then back again – since the mid-18th century, and they have been able to reconstruct solar cycles back to the beginning of the 17th century based on historic observations of sunspot numbers.

Although solar activity has oscillated consistently, the timings and characteristics of individual cycles can vary significantly. Between 1645 and 1715, for example, solar activity did not pick up, and the Sun remained in an extended period of calm known as the Maunder minimum. Historically, this period coincided with the “Little Ice Age”, during which parts of the world, including Europe and North America, experienced colder winters and more extensive glaciation than today. This coincidence suggested a strong link between solar activity and climate change.

Until now, the general consensus was that solar activity has been trending upwards over the 300 years since the end of the Maunder minimum, peaking in the late 20th century – an event referred to as the modern grand maximum. This trend has led some to conclude that the Sun may play a significant role in modern climate change. However, a long-running and contentious discrepancy between two parallel series of sunspot-number counts has made this role difficult to pin down.

The two methods of counting the sunspot number – the Wolf sunspot number (WSN) and the group sunspot number (GSN) – deliver significantly different levels of solar activity before about 1885 and again around 1945. The WSN, established by Rudolf Wolf in 1856, is based on both the number of groups of sunspots and the total number of spots within all of the groups. In 1994, however, the question arose as to whether the WSN was good enough to construct an accurate historical sunspot record: because of the limitations of early telescopes, some argued, the smaller spots could easily have been missed. The GSN, established in 1998 to address this concern, is based solely on the number of sunspot groups, which makes it easier to measure, and it has been backdated to observations from Galileo’s time. Unfortunately, the two series disagree significantly before about 1885, and the GSN has not been maintained since its publication in 1998.
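
For readers who want the definitions, the two indices are constructed roughly as follows (simplified here; g is the number of sunspot groups, s the total number of individual spots, and k and k′ are correction factors for each observer’s equipment and conditions). The Wolf number is

R = k(10g + s)

while the group number counts only groups, scaled by a constant chosen by its creators to match the Wolf series on average:

R_G = 12.08 k′g

A single group containing 10 spots therefore contributes 20k to the Wolf number but only about 12k′ to the group number – one reason the two series can drift apart whenever observers’ ability to resolve small spots changes.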

Then and now

The new correction of the sunspot number, called sunspot number version 2.0, nullifies the claim that there has been a modern grand maximum. The work was led by Frédéric Clette, director of the World Data Centre for Sunspot Index and Long-term Solar Observations (WDC–SILSO) at the Royal Observatory of Belgium, together with Ed Cliver of the National Solar Observatory and Leif Svalgaard of Stanford University, both in the US. The researchers say in their abstract that their study is “the first end-to-end revision of the sunspot number since the creation of this reference index of solar activity by Rudolf Wolf in 1849 and the simultaneous recalibration of the group number”, and that their results mean there is no longer any substantial difference between the two historical records.

Clette and colleagues’ results make it difficult to attribute the observed changes in climate – which began in the 18th century and extended through the industrial revolution into the 20th century – to natural solar trends. According to the researchers, the apparent upward trend of solar activity between the 18th century and the late 20th century stems from a major calibration error in the GSN. Now that this error has been corrected, solar activity appears to have remained relatively stable since the 1700s.

The researchers say that their results now provide a homogeneous record of solar activity dating back some 400 years, and that existing climate-evolution models will need to be re-evaluated given this entirely new picture of the long-term evolution of solar activity. Their work, they hope, will stimulate new studies in both solar physics and climatology.

The new data series and the associated information are distributed from WDC-SILSO.

Quantum mechanics in a cup of coffee, hamming it up to the space station, the laws of political physics and more


By Hamish Johnston and Michael Banks

Physicists tend to drink lots of coffee, so I wasn’t the least bit surprised to see a video of Philip Moriarty explaining quantum mechanics using a vibrating cup of coffee. Moriarty, who is at the University of Nottingham, uses the coffee to explain the physics underlying his favourite image in physics. You will have to watch the video to find out which image that is, and there is more about the physics discussed in the video on Moriarty’s blog Symptoms of the Universe.


How far away can you see light from a candle?


By Andrew Silver

Can the unaided eye see the light from a single candle from 10 miles away? According to some claims on the Internet, the answer is yes – but now two scientists in the US have borrowed techniques from astronomy to show that a pair of binoculars would probably be needed.

The story behind this work began high in the Andes one moonless night when a candle was lit on the Cerro Tololo Inter-American Observatory telescope catwalk. Somebody walked 400–600 m away and said the flame was as bright as the brightest stars in the sky. Nobody wrote down any numbers.
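
The technique boils down to the inverse-square law written in stellar magnitudes. As a purely illustrative back-of-the-envelope check (the numbers here are our assumptions, not the researchers’ measurements): if a candle flame really does rival the brightest stars – apparent magnitude m ≈ 0 – at a distance of about 500 m, then at 10 miles (roughly 16 km) it is fainter by

Δm = 5 log10(16 000/500) ≈ 7.5 magnitudes

putting it at m ≈ 7.5. That is beyond the conventional naked-eye limit of about magnitude 6, but comfortably within reach of a pair of binoculars.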


Could quantum ‘clocks’ tread two different paths to general relativity?

A new way of probing the intersection between quantum mechanics and Einstein’s general theory of relativity using interferometry has been devised by physicists in Israel. The researchers have developed a “self-interfering clock” that comprises two atomic spin states put into a quantum superposition. The researchers hope that their proof-of-principle experiment will provide new insights into the study of time, the interplay between quantum mechanics and relativity, and in particular the role that gravity could play in destroying the coherence of a quantum system.

Different ticks?

Quantum mechanics and general relativity are both well-established and well-tested theories. Despite this, the two are not always in agreement. The concept of time, for example, is treated differently by each: while quantum theory states that time is global and all clocks “tick” uniformly, general relativity dictates that time is influenced by gravitational fields, and so clocks tick at different rates in different places. The latter effect has been verified experimentally using clocks at different heights above the Earth.
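
The size of the gravitational effect is given, to first order in the weak-field limit, by the standard formula

Δτ/τ ≈ gh/c²

where h is the height difference between the clocks, g is the gravitational acceleration and c is the speed of light. Near the Earth’s surface this works out at roughly one part in 10^16 per metre of height – tiny, but within the resolution of today’s best atomic clocks.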

Another inherent property of quantum mechanics is “superposition”, wherein a quantum particle such as an electron is considered to be in all possible “states” (or spatial positions) simultaneously until a measurement is made and the wavefunction collapses. The idea is famously illustrated by the Schrödinger’s-cat thought experiment.

Using an interferometer (the simplest being the double-slit experiment), researchers can make photons or electrons take two paths simultaneously. As long as the observer does not know which of the two paths is taken, an interference pattern – the hallmark of superposition – appears when the paths are rejoined at a detector, showing that the particles were in spatial superpositions. If, on the other hand, the observer is able to tell which path was taken, the interference pattern disappears because there is no superposition. Such “which path” information can be revealed using a tag referred to as a “which path” witness. For example, a polarization filter placed in one path would allow the observer to distinguish light that had taken that path.
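
This trade-off can be made quantitative. In the standard formulation of wave–particle duality (a general result, not something specific to the Israeli experiment), the fringe visibility V and the which-path distinguishability D obey the inequality

V² + D² ≤ 1

so any gain in path information necessarily degrades the interference, and complete distinguishability (D = 1) wipes the pattern out entirely (V = 0).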

Which way?

So what would happen if a “quantum clock” were sent simultaneously along two paths of an interferometer? General relativity says that time can “tick” at different rates along each path, and therefore time itself could be a “which path” witness.

This is precisely the question that Ron Folman and colleagues at the Ben-Gurion University of the Negev aimed to answer in their latest research. Thanks to the discovery of Bose–Einstein condensates (BECs) and the idea of using ultracold atoms as “clocks”, it is possible to send such a clock through an interferometer. “What we have shown in our proof-of-principle experiment is that time itself can also be a ‘which path’ witness,” says Folman. To do this, the researchers used ultracold rubidium atoms at nanokelvin temperatures in a new Stern–Gerlach type of interferometer that they developed and demonstrated two years ago. Here, a strong magnetic field from an atomic chip interacts with the spin of the atoms, “and if the atom is in a superposition of two spin states, then it will evolve into a superposition of two momentum states that form (after some time) a spatial superposition”, explains Folman. In their latest experiment, the researchers do not actually send their clock down an interferometer – instead, two copies of the clock (wavepackets) are separated in space, thereby forming the two interferometer paths.

“We turn the atom into an atomic clock by manipulating its internal degrees of freedom (spin states),” says Folman, further explaining that as their clock is not sensitive enough to feel the different ticking rates caused by gravity, “we induce an artificial difference in the ticking rate by exposing the two paths to different magnetic fields that make the two clock wavepackets tick at different rates”. When the team actually induced this time lag – putting the wavepackets into easily distinguishable, orthogonal states – it found that the interference pattern disappeared, thereby showing that time may serve as a “which path” witness, according to the researchers.
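
For a simple two-level clock, this behaviour can be written down explicitly. In the theoretical treatment developed by Brukner and co-workers (our paraphrase of their result), a clock whose states on the two paths accumulate a relative phase Δφ yields an interference visibility

V = |cos(Δφ/2)|

which falls to zero when Δφ = π and the two clock readings become orthogonal – the regime the Ben-Gurion team engineered with its magnetic-field lag – and then revives as the phase difference grows further.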

Folman says that because the team was able to show revivals of the interference pattern, it is not yet clear whether this effect can be called decoherence, and there is a debate among theoreticians about the role general relativity may be playing. “Our proof-of-principle experiment opens the road to investigate this interplay,” he says. Indeed, recent theoretical work by Časlav Brukner of the University of Vienna and colleagues looked into this question and suggested sending a cold-atom clock through an interferometer to test the boundary between the quantum and classical worlds; Folman’s simulated interferometer clock is a first step in that direction.

Brukner tells physicsworld.com that the new work beautifully simulates what he and colleagues theoretically predicted about what a single “clock” – a time-evolving internal degree of freedom of a particle – undergoes when put in a superposition of regions of space–time with different ticking rates. “The time as shown by the clock is not well defined, and gets entangled with its position,” Brukner explains. He adds that “this implies that by ‘reading-out time’ from the clock, one could reveal the ‘which path’ information, and consequently, one has a loss of coherence of the clock’s centre-of-motion degree of freedom”. This latest work “succeeded to demonstrate the very exact effect that we expect in a future experiment with a natural lag due to time dilation”, says Brukner.

Folman also points out that their device is a new type of interferometer that produces signals not seen before. “For example, people have become accustomed to the fact that when you join a split BEC, you always get an interference pattern in each repetition of the experiment. This new interferometer completely destroys the interference pattern of a BEC in every single repetition, when the two clock wavepackets have orthogonal time readings,” he says.

Higher sensitivities

Chad Orzel, a physicist at Union College in the US who was not involved in the work, says it is a clever idea, although he remains sceptical that this sort of mechanism could have anything to do with the quantum-to-classical transition because of the very small time lag that is induced. “In terms of implications for other experiments or tests of quantum gravity and the like, I think it will be a massive challenge to do anything with this. They’re using an artificial phase shift of order π between their clocks, and that much of a shift would be hard to realize with gravitational shifts near the Earth,” he says. Orzel adds that even if the technical challenges involved in developing a more sensitive clock (using, say, strontium) were overcome, it would still be very difficult to show that it is indeed gravity degrading the contrast of the interferometer fringes.

Folman acknowledges that the challenge now facing his team is to reach a sensitivity that would “allow us to directly observe the effect of general relativity on the interferometer. For this to happen, the distance between the two paths must be enlarged (so that the difference in ticking rate is larger) and the clock needs to be made much more accurate. It remains to be seen how quickly this can be achieved.” In addition to testing the overlap between relativity and quantum mechanics, the team hopes its work will help us to “learn more about time itself”.

The research is published in Science.

Browsing the Milky Way at the IAU General Assembly in Honolulu



By Hamish Johnston

Earlier this week the triennial XXIX General Assembly of the International Astronomical Union (IAU) kicked off in Honolulu, Hawaii. Founded in 1919, the IAU has about 10,000 members based in 96 countries worldwide. About 3500 astronomers are attending this year’s meeting, which runs until 14 August and is hosted by the American Astronomical Society.

A long-standing tradition of the congress is the production of a daily newspaper for delegates, and 2015 is the first year that an electronic version is available to the general public. You can catch up with all the daily news by downloading a copy of Kai‘aleleiaka, which is pronounced “kah EE ah lay-lay-ee AH kah” and means “the Milky Way” in Hawaiian.


Re-examining the decision to bomb Hiroshima

By Hamish Johnston

Today marks the 70th anniversary of the bombing of Hiroshima – the first time that a nuclear weapon was used in war. Many argue that the bombing of Hiroshima, and of Nagasaki three days later, was a necessary evil that saved hundreds of thousands of lives by ending the war and avoiding an Allied invasion of Japan.

Over on The Nuclear Secrecy Blog, the science historian Alex Wellerstein asks “Were there alternatives to the atomic bombings?”. Wellerstein argues that the choice facing the US in 1945 was not as simple as whether to bomb or to invade. He points out that some physicists working on the Manhattan Project – which built the bombs – argued for a “technical demonstration” of the weapons.

In June 1945 the Nobel laureate James Franck and some colleagues wrote a report arguing that the bomb should first be demonstrated to the world by detonating it over a barren island. As Wellerstein puts it: “If the Japanese still refused to surrender, then the further use of the weapon, and its further responsibility, could be considered by an informed world community”. Another idea circulating at the time was a detonation high over Tokyo Bay that would be visible from the Imperial Palace but would result in far fewer casualties than at Hiroshima, where about 140,000 people were killed.

On the other hand, Wellerstein points out that Robert Oppenheimer and three Nobel laureates wrote a report that concluded “we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use”. This report was written for a US government committee, which decided to use the weapon against a “dual target” of military and civilian use.


Buckyballs give copper a magnetic attraction

Thin layers of two non-magnetic metals – copper and manganese – become magnets when they are in contact with buckminsterfullerene molecules. This discovery has been made by physicists in the UK, US and Switzerland, and could lead to new types of practical electronic devices and even quantum computers.

Ferromagnets – such as familiar fridge magnets – are materials that have permanent magnetic moments. There are only three metals that are ferromagnetic at room temperature – iron, nickel and cobalt – and this is explained in terms of the “Stoner criterion”, which was first derived in 1938 at the University of Leeds by Edmund Stoner.

Stoner knew that magnetism in metals is a property of the conduction electrons. These electrons are subject to the exchange interaction that allows them to reduce their energy by aligning their spin magnetic moments in the same direction – thus creating a ferromagnetic metal. However, having spins that point in the same direction increases the overall kinetic energy of the electrons. Stoner realized that ferromagnetism will only occur when the reduction in energy caused by exchange is greater than the gain in kinetic energy. Quantitatively, he showed that this occurs when the product of the electron density of states (DOS) – the number of energy states available to the electrons – and the strength of the exchange interaction (denoted by U) is greater than one.
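
In symbols, if D(E_F) is the density of states at the Fermi energy and U the strength of the exchange interaction, Stoner’s condition for ferromagnetism reads

U · D(E_F) > 1

Anything that raises either factor – more available electron states, or stronger exchange – pushes a metal towards the magnetic side of the inequality, which is exactly the strategy described below.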

Giving U a boost

This condition is known as the Stoner criterion: the product exceeds one for iron, nickel and cobalt but not for their neighbours in the periodic table – manganese and copper. Now, an international team including Fatma Al Ma’Mari and Tim Moorsom of the University of Leeds in the UK has found a way to boost the DOS and exchange interaction in copper and manganese so that they are ferromagnetic at room temperature.

The team made its samples by depositing several alternating layers of C60 and copper (or manganese) onto a substrate. The copper layers were about 2.5 nm thick and the C60 layers about 15 nm thick. C60 is used because it has a large electron affinity, which means that each molecule will take up to three conduction electrons from the copper. This is expected to increase both the DOS and the strength of the exchange interaction in copper.

The team then measured the magnetization of the layered samples and found them to be ferromagnetic materials. The researchers also looked at samples in which the copper and C60 layers were separated by layers of aluminium and found no evidence of magnetism, which suggests that ferromagnetism occurs at the interfaces between the copper and C60. This was backed up by experiments using muons, which are depth-sensitive and showed that the ferromagnetism occurs in the copper near to the C60 interface. The team also found room-temperature ferromagnetism in C60/manganese layers, but with a weaker magnetization.

Critical field

Surprisingly, when the researchers calculated the Stoner product for their copper samples, they found it to be less than one. In other words, the samples should not have been ferromagnetic according to the Stoner criterion. However, further theoretical investigations suggest that the samples should become ferromagnetic when exposed to a relatively small magnetic field – something that would have happened during the preparation of the samples. This suggests that other non-magnetic metals could be made ferromagnetic by boosting the product of the DOS and the exchange interaction, even if not necessarily all the way to one.

Although further work is needed to increase the strength of the copper and manganese magnets, the research could result in the development of new types of tiny magnetic components. These could find use in spintronic devices, which use the spin of the electron to store and process information, or even in quantum computers in which electron spins are used as quantum bits of information.

The research is described in Nature.

Giant Magellan Telescope Organization president resigns

Ed Moses, the head of the body constructing the $1bn Giant Magellan Telescope (GMT), has announced he has left the organization, citing personal reasons. Moses, who was president of the Giant Magellan Telescope Organization (GMTO), has departed after only 10 months in the role “to deal with family matters that require his attention”, according to a GMTO statement. The news of Moses’ departure comes just weeks after Wendy Freedman, chair of the organization’s board, stepped down in early July.

The GMT is scheduled to be fully operational in Chile’s Atacama Desert by 2024, when it will become the world’s largest optical telescope. Moses joined the GMTO as its first president last September, after a stint running the National Ignition Facility at the Lawrence Livermore National Laboratory. In June the project received a major boost when the GMT’s 11 international partners committed more than $500m to start construction of the telescope.

Rapid growth

Colleagues of Moses have praised his achievements. “[He] brought his deep experience and has left us stronger,” says Patrick McCarthy, former executive vice president of GMTO, who is now interim president. That view is backed up by Taft Armandroff, director of the McDonald Observatory, who is the new board chair of the GMTO. He pays tribute to Moses’ recruitment programme, with the project office growing from 30 to 90 people in the space of a year. “We now have a strong technical and corporate staff dedicated to GMT,” he says. “Their experience from past projects makes this team ideally suited to establish the GMT as one of the most powerful telescopes in the world.”

‘Comprehensive search’

Freedman became GMTO’s first board chair in 2003, and has now stepped down “to do more science”. Indeed, she joined the University of Chicago last year, and is now principal investigator on a project to measure the Hubble constant to higher accuracy. “The GMTO has been fortunate to have had her guidance for so long,” says Armandroff, who adds that the board will now conduct a “comprehensive search” for a new president.

Meanwhile, construction of the $1.4bn Thirty Meter Telescope (TMT) on Hawaii’s Mauna Kea has yet to begin, following protests that have blocked building work. TMT board member Michael Bolte from the University of California, Santa Cruz, says in an official statement that a restart date for construction has yet to be determined. “In the construction timetable, the delay is small, and the time has been well spent in better understanding the concerns about the project,” adds Bolte.

Plan for supersized entanglement is unveiled by physicist

An experiment that could lead to the quantum-mechanical entanglement of everyday objects in the form of two 100 g mirrors has been proposed by Roman Schnabel of the University of Hamburg and the Max Planck Institute for Gravitational Physics in Germany. If successful, the mirrors would be by far the largest objects ever to be entangled, and the experiment would confirm that quantum physics applies to large and heavy objects, not just tiny particles. It could also test a prediction made in 2010 about how the mutual gravitational attraction of the mirrors affects their entanglement.

Entanglement is a purely quantum-mechanical phenomenon that allows two particles, such as photons or electrons, to have a much closer relationship than is predicted by classical physics. The concept of entanglement was introduced in the 1930s when physicists were debating the seemingly bizarre implications of quantum mechanics as identified by the Einstein–Podolsky–Rosen (EPR) paradox. EPR points out that if two particles are entangled and separated by some distance, then a measurement made on one particle seems to instantaneously affect the outcome of a measurement made on the other particle. Since no communication can travel between the particles faster than the speed of light, EPR surmises that “hidden variables” unknown to the experimenter have caused the effect. In 1964 John Bell came up with a way of testing for such hidden variables – via a violation of what is now known as Bell’s inequality – and subsequent experiments have confirmed that entanglement cannot be explained in this way.

Since then, entanglement has become a fascinating phenomenon for physicists to study and has also found practical application in quantum-encryption systems. Indeed, over the past decade or so, physicists have been successful at entangling ever-larger objects, including micron-scale mechanical resonators. Now, Schnabel has come up with a way to entangle two mirrors that are enormous in comparison with previously entangled objects.

Swapping entanglement

The mirrors are entangled via photon radiation pressure and a process called entanglement swapping. This is done by placing the two mirrors into a Michelson-type interferometer. Two beams of light are sent into the interferometer so that each mirror is struck on both sides by light. As light travels through the system, it is reflected from the surfaces of the mirrors. If the mirrors are free to oscillate, then momentum can be transferred between the mirrors and the light. The motion of the mirrors will also affect the phase of the light that is reflected from them. In this way, the light in the interferometer and the motion of the mirrors become entangled.

This entanglement is then “swapped” to become an entanglement between the two mirrors. This is done by measuring the interference of the two light beams as they exit the interferometer. Crucially, this measurement provides information about the nature of the entanglement but does not destroy it because the measurement does not provide any information about an individual mirror. The Michelson-type interferometer is ideal for this because it can be set up to measure the relative difference between the positions of the mirrors and the relative difference between the momenta of the mirrors – but not the individual positions and momenta of each mirror.
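
This is the same mathematical trick that underpins the original EPR argument. In the textbook formulation (a general result, not a detail specific to Schnabel’s proposal), the relative position and total momentum of two particles commute,

[x̂₁ − x̂₂, p̂₁ + p̂₂] = 0

so both combinations can be measured simultaneously to arbitrary precision without revealing the position or momentum of either mirror individually – which is why the verification measurement need not destroy the entanglement.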

Once entanglement is achieved, the next step is to verify that the motions of the mirrors are indeed entangled. This involves switching off the light to allow the system to evolve for a few milliseconds before further measurements are made. The set-up is then modified by removing one beamsplitter, which allows the experimenter to measure the position and momentum of each mirror individually.

Repeated measurements

In a practical experiment, the mirrors would be entangled, allowed to evolve for a few microseconds and then have their positions and momenta measured. This would be repeated over and over again, and the presence of entanglement would be signalled by correlations between the measured properties of the mirrors that are greater than those allowed by classical physics.

Schnabel and colleagues have already started building the experiment in the lab, but Schnabel says that there are several important practical challenges that must be overcome. The most significant challenge will be how to cool the mirrors to a temperature of about 4 K and how to keep them isolated from their surroundings so they do not absorb heat energy, which would affect their motion and destroy entanglement.

If they can realize the experiment, Schnabel and colleagues will show that massive objects, and not just small particles, can display quantum behaviour. If successful, they will also be able to test a prediction made in 2010 by Haixing Miao, then at the University of Western Australia, and colleagues. This group calculated that the mutual gravitational energy of the mirrors would destroy the entanglement on the microsecond timescale, which is something that Schnabel’s experiment should be able to see.

Earlier this year, Schnabel and colleagues demonstrated a new way of cooling a mirror using light in a Michelson interferometer (see “Physicists reveal new way of cooling large objects with light”).

The experimental proposal is described in Physical Review A.
