
Secrets and spies

There is no shortage of popular histories of the creation of nuclear weapons. From the mid-1940s to the present day, scientists, historians and others have tried to explain the genesis of these awesome and awful weapons, and the reasons for their use against Japan at the end of the Second World War. From the official 1945 Smyth Report on the Manhattan Project to Richard Rhodes’ 1986 Pulitzer Prize-winning The Making of the Atomic Bomb and beyond, the history of nuclear weapons and the Cold War continues to exert a powerful and sometimes macabre fascination for those interested in the history of modern science.

Secrecy has always been part of the allure of nuclear technology. One reason for a recent proliferation of nuclear histories has been the gradual release of new archival documentation, particularly from the former Soviet Union and Eastern Europe. British Cold War archives (including selected MI5 files) have also become more accessible in recent years following the advent of “open government” and the Freedom of Information Act. Historians have been busy in these archives for two decades, with results that include Mark Walker’s superb work on the Nazi atomic-bomb project, David Holloway’s on the Soviet Union, and ongoing efforts of military, diplomatic and science historians to elucidate the reasons for the Hiroshima and Nagasaki bombings. These histories – and many more – have transformed our understanding of the early history of the nuclear age.

But one consequence of these excellent, specialized histories is that there is now a place, even in such a crowded field, for a book that brings some of this fresh information together into a good, accessible general history. With Atomic, Jim Baggott, a business consultant and popular-science writer, may well have written it. His book draws on reams of more specialized scholarship to produce a well-informed and highly readable overview of early nuclear history.

A bit of a brick at nearly 600 pages, the book is divided into four sections. It covers the mobilization of physicists at the outbreak of war and the initial wartime nuclear programmes in Britain, America and Germany; the subsequent development of these programmes and the creation of ENORMOZ, the Soviet espionage operation against the Manhattan Project; the demise of the Nazi bomb project and the development and denouement of the Allied one; and the way in which these events played out at the beginning of the Cold War, concluding with the first Soviet nuclear test in 1949.

All the familiar names of nuclear history are here – Heisenberg, Bohr, Chadwick, Oppenheimer, Fermi, Lawrence, Frisch, Peierls (see pp38–39), Teller, Kurchatov and the rest – and many other less well-known figures. Linking the various episodes are two interweaving themes: the story of the Allies’ attempts to destroy a heavy-water factory in Nazi-controlled Norway; and the role of spies, espionage and counter-espionage in the transfer of nuclear information from country to country in the 1940s and 1950s. Indeed, through his accounts of Klaus Fuchs and the other atomic spies, one of Baggott’s neatest accomplishments is to show clearly how secrecy and spying both permeated the wartime nuclear projects and shaped the ensuing Cold War, and national and international politics.

The author does an excellent job both of describing the various parallel national programmes and of integrating the scientific, technical, political, strategic, special operations and espionage elements on the larger international stage. His account is dramatic, pacey and engaging, and manages to convey a rich sense both of the various personalities involved and of the larger forces that shaped the events. He mostly manages to resist sensationalizing the material, though his journalistic tendency to end each subsection with a punchy, prophetic sentence does occasionally grate. A comparative timeline of the various national developments and a listing of the key dramatis personae with brief biographical details form very useful appendices.

The book is significantly weaker on the uses and consequences of the first nuclear weapons and their role in the early Cold War. Here Baggott misses a substantial literature from political and diplomatic historians who have been exploring the myths and realities around the “half a million lives saved” by the bombings of Hiroshima and Nagasaki, and the idea of “atomic diplomacy.” Elsewhere, he is perhaps too ready to take his sources at face value, and tends to offer simplistic judgements (for example about the clichéd “Faustian” pact between physicists and the military) that do not reflect the complex historical debates currently taking place about much of this material. Similarly, an overlong “epilogue” takes the story from 1950 through the Oppenheimer trial of 1954 and the Cuban Missile Crisis in 1962 and into the 21st century, but draws on far too narrow a source base, making for a rather patchy and untidy end to the narrative.

Despite these criticisms, Atomic is one of the most accessible and up-to-date synthetic histories of the early nuclear age. The book will be useful as a broad introduction to the history of nuclear weapons in their wider context – though interested readers may well wish to progress quickly to the more nuanced historical accounts from which Baggott draws.

The lure of synchrotrons

Shortly after he was sworn in as US energy secretary earlier this year, the Nobel-prize-winning physicist Steven Chu discussed what he called the “energy challenge” during a visit to the Brookhaven National Laboratory on Long Island. The challenge, according to Chu, is threefold. First, US national security as well as economic prosperity depends on the availability of clean and affordable energy. Second, competition for energy resources threatens to spark geopolitical conflict. Third, the development of alternative energy sources that do not depend on fossil fuels is critical to address climate change.

Meeting the energy challenge, Chu said, requires more than “political will”. It must involve improving the efficiency of existing technologies by a factor of 5–10. This in turn requires not just fine-tuning existing recipes for producing and distributing energy, but also developing “transformative technologies”. To develop these, Chu said, he was placing many hopes in the National Synchrotron Light Source II (NSLS II) – a $912m facility currently being built at Brookhaven that was awarded $150m in stimulus money by the American Recovery and Reinvestment Act and is scheduled to be commissioned in 2015. “I think that one of [NSLS II’s] major contributions will be at the energy frontier,” he said (referring to fuels not particle physics).

In the past, Chu said, transformative technologies often emerged from fundamental conceptual breakthroughs. He gave two familiar examples. One was electronic amplification: the transistors that replaced vacuum tubes were made possible by quantum mechanics, which revolutionized ideas of how electrons are transported in solid materials. The other example was food production, where ammonia synthesis and other chemical-technology breakthroughs led to the ability to get more food from the same land through the use of better fertilizers. Research planned for NSLS II, Chu said, has a good chance of achieving breakthroughs of the sort that lead to transformative technologies.

Those are high hopes, but how solid are they? Synchrotron light sources, after all, are used for a variety of purposes, many of which have nothing to do with energy but instead with things like crystallography. Moreover, such facilities are renowned for not behaving exactly according to plan; their contributions to science often differ from expectations for reasons that have less to do with our intentions than with nature’s complexity. Chu, a former director of the Lawrence Berkeley National Laboratory, surely knows this. So what justifies his confidence that NSLS II can address the energy crisis?

Engineered solutions

Physicists build more powerful machines for three different reasons. One is simply to look more closely at familiar phenomena and see whether anything new can be spotted – a perennial motivation. Another is that there may be tantalizing hints – indications that something unusual is happening just beyond the horizon of existing machines. Finally, there may be specific objectives, if compelling arguments exist that solutions to existing problems await at a particular resolution. Chu’s claims about NSLS II suggest that it was built with such specific objectives in mind. What are they?

Existing synchrotron sources have a spatial resolution of about 15–20 nm, whereas NSLS II is expected to reduce this to just 1 nm. This is the transitional scale between atomic and bulk matter, where properties change rapidly and in poorly understood ways. The ability to study this domain has several implications.

One is that NSLS II will be able to scrutinize the kinetics of existing energy processes. The rate at which a battery charges, for instance, is a function of the interface between electrolyte and electrode, and the kinetics that control this process take place at the nano-scale. The interface can be seen in bulk with existing X-ray sources and at the atomic level under vacuum conditions with electron microscopes – but studying it with nanometre resolution would reveal how it behaves under operating conditions (i.e. not in a vacuum) and in real time.

Moreover, nanometre-scale resolution will let researchers engineer nanometre-scale structures, such as catalysts and solar cells that use quantum dots, which may be important at the energy frontier. Because of their different surface-to-volume ratios, nanomaterials have different properties from those at the bulk scale. Nano-scale engineering requires the ability not only to inspect such structures, but also to get them to self-assemble. Indeed, there is a kind of “reverse-bootstrapping” effect in which ever smaller structures enable the development of better technologies to engineer ever smaller structures.
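The surface-to-volume point can be made quantitative with a back-of-the-envelope estimate. The sketch below uses a rough spherical-particle model with an assumed atomic diameter of about 0.3 nm (purely illustrative numbers, not taken from the article) to show how quickly the fraction of atoms sitting at the surface grows as a particle shrinks towards the nanometre scale:

```python
# Rough estimate of the fraction of atoms lying in the outermost atomic
# shell of a spherical nanoparticle. All numbers are illustrative.

ATOM_DIAMETER_NM = 0.3  # assumed typical atomic diameter

def surface_fraction(particle_diameter_nm, atom_diameter_nm=ATOM_DIAMETER_NM):
    """Fraction of a sphere's volume lying within one atomic layer of its surface."""
    r = particle_diameter_nm / 2.0
    inner = max(r - atom_diameter_nm, 0.0)
    return 1.0 - (inner / r) ** 3

for d in (100, 20, 6, 2):  # particle diameters in nm
    print(f"{d:>3} nm particle: ~{surface_fraction(d):.0%} of atoms at the surface")
```

On this crude model, a 100 nm particle has only a couple of per cent of its atoms at the surface, while a 6 nm particle – the scale at which gold’s catalytic activity is said to peak – has a quarter or more, which is one simple way of seeing why behaviour at this scale differs from both the atomic and the bulk limits.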

Yet another implication of studying this particular scale relates to emergent properties. This scale is an inhomogeneous region in which different forces struggle for dominance. Studying how these forces compete and new properties emerge may let researchers design and build materials with qualitatively new properties.

A good example of all three of these implications involves gold. At the atomic scale, gold has certain, well-understood chemical properties, while in bulk it is chemically inert. In between, gold has marvellous but poorly understood properties as a catalyst that seem to peak at the 6 nm scale. In allowing researchers to scrutinize gold catalysts in action and in real time – as well as to study nano-scale particles involving gold – an instrument of nano-scale resolution like NSLS II could help us to understand what kinds of catalysts we should be building to convert fuel more efficiently. It would be a step towards the dream articulated half a century ago by materials scientist Arthur Von Hippel of being able to “play chess with elementary particles according to prescribed rules until new engineering solutions become apparent”.

The critical point

The energy challenge will, of course, also involve reducing demand: we will have to learn to live with less energy, which is a formidable challenge of a separate kind. But with the world’s population set to rise to 10 billion by 2050, there is no way round having to increase our overall energy supply. The arguments are sound that a more advanced light source, able to reach a resolution of about 1 nm, will help in that quest.

Web life: Colliding Particles

So what is the site about?

Colliding Particles is a series of short films chronicling what it is like to be a physicist at CERN’s Large Hadron Collider (LHC). Each instalment of the series focuses on a different aspect of life as a full-time Higgs-hunter, while also (loosely) following the progress of a single team of researchers. The first episode was filmed before the LHC’s gala launch on 10 September 2008, and it introduces a few basic ideas about the collider and the Higgs boson. Later episodes explore the media hype on “Big Bang Day”, scientific conferences, problems with the collider and why the LHC is worth its multibillion-Euro price tag.

Who is involved?

The main people in the films are members of “Project Eurostar”, a group of researchers working on a new technique for finding the Higgs using ATLAS, one of the LHC’s two main all-purpose detectors. The group’s name derives from its origins as an informal collaboration – facilitated by the Eurostar rail link – between London-based experimentalist Jon Butterworth and Paris-based theorist Gavin Salam. Butterworth’s PhD student Adam Davison makes fun of the name on camera, but it seems to have stuck. The site itself is supported by the UK Science and Technology Facilities Council’s “Science in Society” programme, with films produced by documentary filmmaker Mike Paterson.

Who is it aimed at?

Although CERN researchers will undoubtedly get a kick out of spotting their colleagues in the background, students interested in science are the site’s main audience. Such students will benefit from seeing how science really works – including the disappointing/boring/frustrating bits – and there are some teaching resources available on the site to help them. Unfortunately, the films contain relatively little physics, so anyone who wants more than a cocktail-party-level understanding of the Higgs, the LHC or Project Eurostar itself will need to dig into the site’s “further reading” section.

Why should I visit?

As Physics World readers know, the LHC is due to restart later this month, following a long shutdown that began just nine days after its initial switch-on in 2008. Oddly enough, this delay – while terrible for CERN’s would-be particle colliders – has been good for Colliding Particles. Recent episodes on the LHC’s breakdown and science funding are far more insightful than previous ones filmed in its heady early days. In one scene in the “Collidonomics” episode, for example, Eurostar’s Butterworth is at a funding review. As a researcher in the background talks dispiritedly about the rising cost of liquid helium, he manages a wry grin. Life as an academic, he says, “sure as hell doesn’t feel like an ivory tower when you have to stand up and defend yourself” from budget cuts. This is something all physicists can appreciate – and that any would-be physics students should see.

What are some highlights?

In one segment of the “Problems” episode, several researchers struggle to explain what caused the LHC to be shut down last autumn – without using the phrase “blew up”, which is apparently off limits. “Engineering breakdown”, “technical malfunction” and “catastrophic release of liquid helium – wait, scratch the ‘catastrophic’ bit” are some of the euphemisms they offer; but amid the silliness, their explanations are sound and easy to follow. At the end of the same episode, Salam, the Eurostar theorist, suggests that the universe is like a piece of music. With the lower-energy collisions at Fermilab’s Tevatron, he says, we could hear the double basses, but the LHC will add the cellos – and from there, we will begin to figure out what the rest of the orchestra is playing. In a field full of analogies, most of them over-used, this one feels both fresh and insightful. But here’s hoping the next episodes in the series contain some new science, not just new metaphors.

A very good Englishman

Rudolf Peierls “would make a very good Englishman” observed Lord Rutherford, who got to know the young German theorist during Peierls’ long stay in Cambridge in the early 1930s. Then in his mid-20s, Peierls was reserved, slightly aloof but unassuming, and averse to political extremes. He also had a strong sense of both duty and fair play.

Rutherford’s description was prescient as well as apt. About five years later, Peierls became an Englishman, having been granted British nationality after resolving not to return to his native land after Hitler came to power in 1933. By then, Peierls had attained an international reputation for his work on quantum theory, which was earned not so much for coming up with new ideas as for answering questions that had sprung up on the research agendas of others.

As Peierls’ colourful Russian-born wife Genia suggested, scientists can be classified as golfers (who prefer to work on their own) or as tennis players, who thrive by working with others. Peierls was a tennis player, who collaborated with dozens of first-rate physicists, including even the usually solitary Paul Dirac. To stretch Mrs Peierls’ analogy, her husband was a virtuoso player on a wide variety of surfaces – he did fine work on solid-state and nuclear physics, and on quantum field theory. He also took a lively interest in the political dimensions of physics and wrote copious letters to his colleagues.

Sabine Lee, a historian at the University of Birmingham in the UK, has spent the better part of a decade collecting Peierls’ correspondence. Sir Rudolf Peierls: Selected Private and Scientific Correspondence presents 1320 of Peierls’ letters in two hefty volumes that span seven decades. Lee divides the material into 11 periods and introduces each one with a biographical essay, pointing out important themes and particularly telling letters. The result is a remarkably detailed record of Peierls’ life, much more rewarding than his worthy but rather bloodless autobiography Bird of Passage, which suffered from his inability to write a word about anything or anyone that might be deemed unjust or unseemly.

Most of the first volume concerns the intertwining stories of Peierls’ courtship of Genia (also a physicist) and of his ascent as a researcher, working closely with Werner Heisenberg, Lev Landau, Hans Bethe and a host of others. Neither Peierls nor his wife spoke the other’s mother tongue, so they spoke and wrote in their shared second language, English. Genia never did quite get the hang of it, but Peierls swiftly became bilingual. The lovers’ correspondence is charming and full of insights into the physicists of the time, especially when discussing Peierls’ friendship with Bethe and his sometimes difficult relationship with the hypercritical Pauli, who had no compunction about telling Peierls that he did not like his physics. Lee has chosen not to translate a few dozen letters written in German, which is unfortunate for those who do not speak the language; the footnotes only add to the pain of exclusion, as she presents them – tantalisingly! – in English.

In 1937, when Peierls was just 30, the University of Birmingham appointed him to its chair of theoretical physics, his first permanent job. When the Second World War began, he put his fundamental research on hold and promptly wrote his most influential paper. In a few days in March 1940, he and fellow refugee Otto Frisch conceived a way in which it might be possible to build an atomic bomb in the foreseeable future. The idea, written up in a splendidly clear memo, was scrutinized by a government committee but Peierls, anxious that the Nazis might be first to develop the weapon, thought the British authorities were dragging their feet. In April 1940, he wrote to them with his characteristic pith and directness: “I feel I cannot permit myself the luxury of reserve”. Several letters in this section give useful insights into the development of the atom-bomb project and its gradual takeover by the Americans. When they eventually set up the Manhattan Project to build the first nuclear weapons at Los Alamos, Peierls became one of the project’s leading theorists.

After the war, he returned to Birmingham, where the theoretical-physics department soon “left Oxford and Cambridge far behind”, in the words of Freeman Dyson, who contributes an eloquent foreword to the collection. Happy to be described as a Peierls protégé, Dyson recalls the months he spent in his boss’s happy but chaotic home, and how Peierls – despite his management commitments – did the best work to emerge from Birmingham at that time.

In 1963 Peierls took up the Wykeham Chair of Physics at Oxford University, apparently intending to repeat his success in Birmingham. But this time he seems to have found the job quite a challenge. Impatient with inflexible administrators, he was also past his best as a theorist; according to Lee, “he saw himself as a facilitator of dialogue more than an active participant”. It is slightly painful to read about Peierls trying to come to terms with the idea of quarks, though he proved a cogent critic of John Bell’s work on the fundamentals of quantum theory. Peierls retired from the post in 1974 but continued to work hard, mainly trying to persuade governments to limit their nuclear arsenals.

Most touching of all his letters are the ones he wrote after his wife’s death in 1986 to Bethe, his closest friend, in the twilight of their years. By the summer of 1994, when Peierls’ health was failing rapidly, he moved into a residential home just outside Oxford. On 8 September 1995, Bethe wrote to him, looking back on their lives: “You had a full and good life, and I thank you for letting me participate in it.” A few days before the arrival of this wonderful note, Peierls died.

Lee has done a superb job of editing this correspondence. Impressively researched, meticulously annotated and immaculately produced, both volumes are a pleasure to read and should have a place in every library of modern science. She has helped us to better appreciate someone who was not just a first-rate theorist, but also an uncommonly distinguished Englishman.

Uranium bombs

Enrico Fermi was a brilliant physicist, but he did occasionally get things wrong. In 1934 he famously bombarded a sample of uranium with neutrons. The result was astounding: the experiment had, Fermi concluded, produced element 93, later called neptunium. The German physicist Ida Noddack, however, came to an even more spectacular conclusion, namely that Fermi had split the uranium nucleus to produce lighter elements. Noddack’s friend Otto Hahn judged that idea preposterous and advised her to keep quiet, since ridicule could ruin a female physicist. She ignored that advice, and was, indeed, scorned.

This incident is important for two reasons. First, had physicists taken Noddack seriously, then Hitler might have got himself an atom bomb. As it turned out, Noddack was not vindicated until late 1938 – a crucial delay. Since a male scientist would probably have encountered a more sympathetic ear, the world can be thankful that physics was still riddled with misogyny in 1934.

The episode is also important because it is missing from Amir Aczel’s Uranium Wars. The omission may seem small, but is unfortunately indicative of the author’s inability to recognize what makes a good story. Aczel promises a “suspenseful” account but delivers instead a rather banal book drained of drama. Since so many gripping and immensely readable histories of atomic physics have been produced, there is little point wasting time on this one.

Aczel’s title refers to the “scientific intrigue” behind the race to unlock the atom’s power. That is certainly an interesting aspect of atomic history, but it is misleading to emphasize it. Far more important than rivalry was co-operation; immense danger arose because physicists were pathologically nice to each other. The Hungarian émigré Leo Szilard recognized this problem, warning as early as 1937 that co-operation had ominous political ramifications, since the Germans could not be trusted with the secrets of the atom.

Aczel also tends to leap to massive conclusions from mere shards of evidence. The fault is most noticeable in his discussion of Hiroshima and Nagasaki. True, his basic argument – that the bombings were not justified, since Japan was ready to surrender – has merit. However, he provides little evidence to support his position, and other writers have advanced it far more convincingly. The same can also be said for Aczel’s contention that the bombs were in truth directed against Russia – the first blows of the Cold War. Again, this argument has merit, but the author neglects to strengthen it with a discussion of, for example, the Yalta accords of February 1945, when Stalin agreed to attack Japan in exchange for territorial concessions. Soviet troops did indeed invade Manchuria on 9 August, hours before Nagasaki was hit. The quick end to the war allowed the Americans to renege on Yalta and, in turn, deny the Soviets a say in postwar Japan. Given what happened in Eastern Europe after 1945, that seems significant. It might not be sufficient justification for the bombings, but it deserves more analysis.

Aczel loves phrases like “it can now be revealed”. Unfortunately, though he promises startling revelations, he delivers none. A case in point is the famous meeting in Copenhagen in 1941 when Werner Heisenberg conveyed a rather cryptic message about Germany’s atomic project to Niels Bohr. Aczel offers what he claims is a new interpretation based on Bohr’s correspondence – but that material has been available since 2002, and many historians have already mined it.

Aczel ends with two woefully simplistic paragraphs about the future. “It cannot be stressed enough”, he argues, “that the MAD doctrine…will not work in the case of Iran.” Why not? The theory of mutually assured destruction (MAD) assumes that no leader will use the bomb since complete annihilation of their country will result. For 60 years that doctrine has forced leaders, even unstable ones, to act rationally. Granted, President Ahmadinejad of Iran might be incurably irrational, but it is equally possible that the bomb will force sanity upon him. Whatever the case, the problem deserves analysis, not snap judgments.

Finally, Aczel argues that nuclear power has immense potential “if we can solve the…problems of nuclear proliferation, nuclear waste and nuclear safety”. Those are very big “ifs”. A few years ago, guards looking after Ukrainian stocks of plutonium were paid in potatoes. The temptation to sell the stuff is huge. As for the safety of reactors, I am reminded of the technician at Three Mile Island – the site of the US’s worst nuclear accident – who confessed to “a feeling of awe and humility that the technology we thought was foolproof wasn’t”.

Complex problems do not lend themselves to simplistic conclusions. The great danger of this book is that nuclear novices may conclude that it is the entire gospel. In fact, the atomic story is much more dramatic than Aczel suggests, and in that drama lurk not only an entertaining tale, but also some important lessons for our future.

High-temperature superconductor goes super thin

Gennady Logvenov and colleagues at Brookhaven National Laboratory in Upton, New York, have created layered films of copper-oxide or “cuprate” materials and have discovered that they can localize the superconducting behaviour to a single atomic plane. They say that the discovery will help theorists to build more comprehensive models of high-temperature superconductivity, and lead to thin-film devices that have their superconducting properties tuned by electric fields.

“We wanted to answer a fundamental question about such films,” says team member Ivan Bozovik. “Namely: how thin can the film be and still retain high-temperature superconductivity?”

No resistance

Discovered at the beginning of the 20th century, superconductivity is a phenomenon whereby a material’s electrical resistance can suddenly drop to zero as the substance is chilled below a specific temperature – known as the transition temperature (Tc). It exists in some pure metals close to absolute zero, and scientists believe that this is because electrons distort the metal lattice to let subsequent electrons flow freely, a mechanism outlined in so-called Bardeen-Cooper-Schrieffer (BCS) theory.

In 1986, however, physicists discovered that superconductivity also exists in certain compounds, including cuprates, at much higher temperatures of 30 K and more. This discovery of high-Tc superconductivity triggered a lot of initial excitement due to suggestions that, if extended up to room temperature, it could lead to novel applications such as levitating trains and ultra-efficient power cables. Over the past 20 years, however, these exciting new technologies have not materialized because physicists and engineers have struggled to understand the mechanism behind the phenomenon.

Now, Logvenov and colleagues have performed an experiment that could help to point theorists in the right direction. They have created a “bilayer” film with one layer of a cuprate metal and another of a cuprate insulator, using a technique called molecular beam epitaxy. Superconductivity in such bilayers tends to manifest at the interface between the layers, so the researchers were able to isolate where the effect occurs by carefully doping atomic planes within the layers with zinc, which suppresses superconductivity.

Crucial planes

The researchers found that when they doped the entire film with zinc, it did not superconduct at all. However, when they doped a certain plane – specifically, the second copper-oxide plane away from the interface – they found that the transition temperature for superconductivity dropped from 32 to 18 K. This, they say, is proof that that plane alone is crucial for the high-temperature superconductivity.

Elisabeth Nicol, a solid-state physicist at the University of Guelph, Canada, calls the Brookhaven study “a very clever piece of investigative work”, and explains that it will help researchers create superconductors that work at higher temperatures. “If we could understand where the source of the high transition temperature comes from,” she says, “we could possibly engineer things such that the transition temperature becomes higher.”

The discovery may also have direct practical benefits. Superconductivity can be controlled with electric fields, but because these can penetrate films by only a nanometre or so this ability has proved difficult to realize. Now that Logvenov and colleagues have found how to identify the crucial plane, engineers could find it possible to create tunable high-temperature superconductors for a variety of electronic devices.

The research is published in Science.

Renewables revolution needs clear scientific advice

Cutting through turbulence on the way to Copenhagen

By James Dacey

European leaders have been in Brussels over the past couple of days and there has been a lot of talk about climate change. The latest reports suggest they are reaching some sort of agreement over how to help the world’s poorer nations commit to restricting greenhouse-gas emissions.

The EU summit in Brussels represents one of the last opportunities for European nations to iron out disagreements ahead of December’s UN conference in Copenhagen, which could result in a global treaty on climate change.

So assuming that the world’s politicians can wrangle their way to solid, legally binding targets in the Danish capital, we will then be faced with the next big set of choices – how to achieve the targets.

Whatever way the green revolution is played out over the next few decades, it will be necessary for the developed world to quickly get over its addiction to fossil fuels, and to deploy a whole raft of renewable energy solutions. More than ever, governments will need clear scientific advice about the different options ahead of them.

Despite currently lagging many of its European neighbours over renewables, the UK now at least has a clear-thinking scientific advisor in the form of David Mackay.


If you’re not already familiar with Mackay, he is author of the book Sustainable Energy – Without the Hot Air. Despite being available for free on-line, the book has been something of a publishing phenomenon and was described by the Guardian as “this year’s must-read book”.

You can also read Physics World’s review of the book here.

When I saw Mackay on Wednesday giving a talk at the Institute of Physics in London, he was quick to establish his philosophy. He says we are in need of a “grassroots arithmetic movement” in which members of the public lobby and educate their local MPs with the figures on renewables.

To a packed-out lecture room, Mackay explained why he prefers to express energy consumption in terms of kilowatt hours per day per person. His reasoning is that these figures mostly fall in the range 1–100, and that the results can easily be translated into personal terms. “I am pro arithmetic, not any specific energy policy,” he said.
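To see why this unit is handy, here is a minimal sketch of the arithmetic. The consumption and population figures are round, assumed numbers chosen purely for illustration, not Mackay’s own data:

```python
# Convert a country's annual energy consumption into kilowatt-hours
# per day per person. All input numbers below are illustrative.

def kwh_per_day_per_person(annual_energy_twh, population):
    """Annual energy use (TWh/year) and population -> kWh per day per person."""
    kwh_per_year = annual_energy_twh * 1e9   # 1 TWh = 1e9 kWh
    return kwh_per_year / population / 365.0

# Example: roughly 2500 TWh/year shared among 61 million people
# (assumed round numbers for a UK-sized country).
print(round(kwh_per_day_per_person(2500, 61e6)))   # ~112 kWh per day per person
```

Figures of this order sit comfortably in the 1–100-or-so range Mackay mentions, which is what makes different energy sources and uses easy to compare at a personal scale.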

Dark-matter paper raises questions over data sharing

A preprint that uses NASA data to claim “possible evidence” for dark matter has led some researchers to question the US space agency’s data-sharing policies.

The preprint – which was uploaded to the arXiv internet server earlier this month by physicists Lisa Goodenough of New York University and Dan Hooper of Fermilab in Batavia, Illinois – makes the claim by matching a theoretical model of dark matter to freely available data from NASA’s Fermi gamma-ray telescope. But with an official analysis of the same data yet to be published, some scientists have pointed out that, depending on the validity of the evidence, the preprint will either cause confusion or steal glory from the Fermi team.

“If this turns out to be the first convincing discovery, it will become known as the Goodenough and Hooper discovery,” says Alex Murphy, a physicist at the University of Edinburgh who works on Europe’s ZEPLIN III dark-matter experiment. “Publicitywise, that’s a catastrophe.

“You can’t really stop this. NASA has this problem where they have to release their data. I kind of disagree with that myself – I’d hate for our data to be released too early and for others to be looking at it.”

‘Fermi has a PR problem’

The Fermi telescope launched into space in June last year to study the universe’s high-energy phenomena, including the annihilation of dark matter, an unknown entity that many astrophysicists think makes up over 80% of the universe’s mass. Team members used the first year of data to calibrate the telescope’s instruments but now, following NASA policy, they make all data public immediately.

In their study, Goodenough and Hooper examined some of the new Fermi gamma-ray data of the centre of our galaxy. They then compared it with a simple model of dark-matter annihilation, which produces gamma rays, and discovered that it fitted a dark-matter particle with a mass in the range of 25–30 GeV – about 30 times heavier than a proton – and an annihilation cross-section of about 9 × 10⁻²⁶ cm³/s. According to Hooper, this could indicate a particle such as a neutralino in “supersymmetric” extensions to the Standard Model of particle physics.

Gordon Watts, a physicist at the University of Washington in Seattle, wrote on his blog Life as a Physicist that news coverage of this result on other websites has shown that “things got away” from the Fermi team before they had issued their own analysis. “Now Fermi has a PR problem on its hands – people are running round talking about their data and they’ve [the Fermi team] not really had a voice yet,” he adds.

But Hooper tells physicsworld.com that he sees it as his job to study data as soon as it is released. “Most papers in my community are posted on arXiv prior to being accepted by a journal, and this is no exception,” he says. “This is entirely not out of the ordinary. The reason for this is simply that in the months that it can take for peer review to be carried out, a great deal can change regarding the state of the research, and to not share the up-to-the-minute progress with the rest of the community can be counterproductive.”

Fermi analysis will have ‘lasting impact’

Fermi project scientist Julie McEnery believes her team’s work is not in competition with studies like those of Goodenough and Hooper. “I think a carefully done analysis will have the longer-lasting impact. Had there been an obvious dark-matter signal in the data, something that could be done in a reasonably quick analysis, we would of course have already published – in a refereed journal first, and to [arXiv’s subsection] astro-ph later.”

She adds, however, that all apparently “groundbreaking” results would be better off being published in refereed journals before they appear on arXiv. “There’s a danger that we’re going to confuse the field with many results that prove not to be true, and by the time some really strong, key result comes out essentially the scientific community will still be interested but the media and the public may not be.”

The Fermi team are planning to present their analysis of the galactic-centre data next week. Meanwhile, however, other groups studying the same data have come to different conclusions. For example, in a separate preprint on arXiv, Gregory Dobler of Harvard University and colleagues attribute the gamma-ray signal to “inverse Compton scattering”, a phenomenon in which photons gain energy when they interact with matter.

“Several groups are doing their own analysis of Fermi data, which I think is fine,” says Katherine Freese, an astrophysicist at the University of Michigan. “The data are public, and there is nothing wrong with people trying to glean from it what they can. In fact, I think it is great. We are all aware that there will be some disagreement in the short term. In the long run it will shake out as to who is right.”

Helium atoms get the ride of their life

To the adrenaline junkie midway through a bungee jump, gravity must feel like it can accelerate matter at a spectacular rate. At the atomic scale, however, when it comes to shifting around neutral particles, gravity is incredibly ineffective compared with other fundamental interactions such as the strong and weak nuclear forces.

Now, however, a team of physicists in Germany has shown that a little-known interaction caused by electric fields, known as the “ponderomotive” force, can accelerate neutral particles at up to 10¹⁴ times the Earth’s gravitational acceleration. As well as being of interest to fundamental physics, this ability to transfer large amounts of momentum to neutral particles could lead to a host of novel applications in surface science, say the researchers.

‘Electrical pressure’

All students are taught at school that when objects possessing electrical charge are exposed to an electric field, they experience an electric force that can lead to motion. If the electric field is oscillating, however, the charged object also experiences a second force – the ponderomotive force – that is proportional to the gradient of the field intensity. Depending on the amount of matter and the scale of this intensity gradient, the ponderomotive force can have a significant effect.

Until now, however, physicists had assumed that the ponderomotive force would have a negligible effect on matter that is neutral. But, according to Ulli Eichmann and colleagues at the Max-Born Institute and the Institute for Optical and Atomic Physics in Germany, there is no reason for this to be the case. These researchers argue that the effect is largely independent of charge and they designed an experiment to demonstrate the magnitude of the effect on neutral matter.

The physicists began by aiming a beam of helium atoms at a detector, before firing a series of laser pulses at the beam so that individual atoms were exposed to a localized electromagnetic field. Then, by analysing data from their position-sensitive detector, they were able to show that at least one per cent of the helium atoms had undergone an acceleration, and in some cases this was as much as 10¹⁴ times the Earth’s gravitational acceleration (9.8 m s⁻²).

Like ants dragging a mountain

To explain the mechanism of this acceleration, Eichmann and colleagues refer to a model that they put forward in a paper last year. When the atoms are exposed to a laser pulse, an electron can gain energy from the laser field, causing it to be briefly “liberated” from the atom. However, this surge of energy is not sufficient for the electron to break free entirely from the Coulomb forces and it is recaptured so that it sits a long way from the nucleus in what is known as a “Rydberg state”.

It is in this state that the atom is subject to the ponderomotive force and the “quivering” electron can drag the entire atom in the direction of the localized electric field. Fortunately for the researchers, this state was long-lived enough for them to locate the positions of helium atoms at the detector and thus rule out other effects that could have caused a beam of neutral particles to be deflected and spread.
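The size of the effect can be estimated from the standard expression for the ponderomotive energy of an electron quivering in an oscillating field, U_p ≈ e²E₀²/(4m_eω²), with the force on the neutral atom set by the gradient of U_p and the resulting acceleration divided down by the atom’s full mass. The sketch below uses assumed, round-number laser parameters (the intensity, wavelength and focal-spot size are illustrative, not taken from the paper) simply to show that accelerations of order 10¹⁴ g are plausible:

```python
import math

# Order-of-magnitude estimate of the ponderomotive acceleration of a neutral
# helium atom whose quivering Rydberg electron drags the whole atom along the
# intensity gradient. Laser parameters below are assumed, illustrative values.

E_CHARGE = 1.602e-19      # C
M_ELECTRON = 9.109e-31    # kg
M_HELIUM = 6.646e-27      # kg
EPS0 = 8.854e-12          # F/m
C = 2.998e8               # m/s
G = 9.81                  # m/s^2

intensity = 1e15 * 1e4    # assumed peak intensity: 1e15 W/cm^2, converted to W/m^2
wavelength = 800e-9       # assumed laser wavelength, m
spot_size = 1e-6          # assumed length scale of the intensity gradient, m

omega = 2 * math.pi * C / wavelength
E0_squared = 2 * intensity / (EPS0 * C)                         # peak field squared
U_p = E_CHARGE**2 * E0_squared / (4 * M_ELECTRON * omega**2)    # ponderomotive energy, J

force = U_p / spot_size            # crude |grad U_p| across the focal spot
acceleration = force / M_HELIUM    # the whole atom is dragged, so divide by its mass

print(f"Ponderomotive energy ~ {U_p / E_CHARGE:.0f} eV")
print(f"Acceleration ~ {acceleration / G:.1e} g")
```

Even with these rough numbers the atom’s acceleration comes out at roughly 10¹⁴ times g, consistent with the figure quoted above.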

Eichmann told physicsworld.com that he can envisage applications resulting from the “instantaneous” transfer of momentum to an atom. An example of this might be the accurate and efficient deposition of atoms on surfaces for optical applications. “Atoms may be steered by manipulating the spatial geometry of the laser fields,” he says.

Robert Jones, an atomic physicist at the University of Virginia in the US, is impressed by the new research. “The possibility of controlled interactions between atoms or molecules through precisely timed collisions at well-defined relative velocities is particularly intriguing,” he told physicsworld.com.

This research is published in this week’s issue of Nature.

Special relativity passes key test

Scientists studying radiation from a distant gamma-ray burst have found that the speed of light does not vary with wavelength down to distance scales below that of the Planck length. They say that this disfavours certain theories of quantum gravity that postulate the violation of Lorentz invariance.

Lorentz invariance stipulates that the laws of physics are the same for all observers, regardless of where they are in the universe. Einstein used this principle as a postulate of special relativity, assuming that the speed of light in a vacuum does not depend on who is measuring it, so long as that person is in an inertial frame of reference.

Unifying the cosmic with the quantum

In over 100 years Lorentz invariance has never been found wanting. However, physicists continue to subject it to ever more stringent tests, including modern-day versions of the famous Michelson–Morley interferometry experiment. This dedication to precision stems primarily from physicists’ desire to unite quantum mechanics with general relativity, given that some theories of quantum gravity – including string theory and loop quantum gravity – imply that Lorentz invariance might be broken. In particular, these theories allow for the possibility that the invariance does not hold near the minuscule Planck length – about 10⁻³³ cm – since at this scale quantum effects are expected to strongly affect the nature of space–time.

It is not possible to test physics at the Planck length directly because this length corresponds to an energy of around 10¹⁹ gigaelectronvolts – way beyond the reach of particle accelerators (the most powerful of which, CERN’s Large Hadron Collider, will generate collision energies of around 10⁴ gigaelectronvolts). However, this latest research, carried out by a collaboration of physicists under the leadership of Jonathan Granot of the University of Hertfordshire in the UK, has provided an indirect test of Lorentz invariance at the Planck scale.

Granot and colleagues studied the radiation from a gamma-ray burst – associated with a highly energetic explosion in a distant galaxy – that was spotted by NASA’s Fermi Gamma-ray Space Telescope on 10 May this year. They analysed the radiation at different wavelengths to see whether there were any signs that photons with different energies arrived at Fermi’s detectors at different times. Such a spreading of arrival times would indicate that Lorentz invariance had indeed been violated; in other words that the speed of light in a vacuum depends on the energy of that light and is not a universal constant. Any energy dependence would be minuscule but could still result in a measurable difference in photon arrival times due to the billions of light years that separate gamma-ray bursts from us.
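In models where the photon speed varies linearly with energy, the expected spread in arrival times is, to leading order, Δt ≈ (E/E_QG) × (D/c), where E_QG is the quantum-gravity energy scale. The sketch below plugs in illustrative numbers – a roughly 30 GeV photon and a light-travel time of about seven billion years are assumptions made for the example, and cosmological corrections are ignored – to show why sub-second timing over such distances probes the Planck scale:

```python
# Leading-order arrival-time delay for a high-energy photon if the speed of
# light varies linearly with photon energy. All numbers below are illustrative.

SECONDS_PER_YEAR = 3.156e7
E_PLANCK_GEV = 1.22e19          # Planck energy in GeV

def delay_seconds(photon_energy_gev, travel_time_years, e_qg_gev=E_PLANCK_GEV):
    """Delta-t ~ (E / E_QG) * (D / c), ignoring cosmological corrections."""
    return (photon_energy_gev / e_qg_gev) * travel_time_years * SECONDS_PER_YEAR

# Assumed example: a 30 GeV photon travelling for about 7 billion years.
print(f"{delay_seconds(30, 7e9):.2f} s")   # roughly half a second
```

A delay of around half a second for a quantum-gravity scale at the Planck energy is exactly the regime probed by Fermi’s sub-second timing, which is why the analyses described below can push the limit on any linear energy dependence above the Planck scale.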

The Fermi team used two relatively independent data analyses to conclude that Lorentz invariance had not been violated. One was the detection of a high-energy photon less than a second after the start of the burst, and the second was the existence of characteristic sharp peaks within the evolution of the burst rather than the smearing of its output that would be expected if there were a distribution in photon speeds. The researchers arrived at the same null result when studying the radiation from a gamma-ray burst detected in September last year, but could only reach about one-tenth of the Planck energy. Crucially, the shorter duration and much finer time structure of the more recent gamma-ray burst takes this null result to at least 1.2 times the Planck energy.

Constraining quantum gravity

According to Granot, these results “strongly disfavour” quantum-gravity theories in which the speed of light varies linearly with photon energy, which might include some variations of string theory or loop quantum gravity. “I would not use the term ‘rule out’,” he says, “as most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance. However, our observational requirement that such an energy scale would be well above the Planck energy makes such models unnatural.”

Granot says that far more precise measurements would be needed to probe the Planck scale for theories that postulate a quadratic or higher-order dependence of light speed on photon energy. He also points out that his group’s approach probes just one of a number of possible effects of Lorentz invariance violation, and that extremely precise constraints on this violation have been obtained by studying the possible dependence of light speed on photon polarization from X-rays emitted by the Crab nebula. But he adds that his group’s new limit is the most precise for simple energy dependence.

Giovanni Amelino-Camelia of the University of Rome La Sapienza believes that the latest work points to the coming of age of the field of quantum gravity phenomenology, with physicists finally able to submit theories of quantum gravity to some kind of experimental test. “Nature, with its uniquely clever ways, might have figured out how to quantize space–time without affecting relativity. But even a slim chance of being on the verge of a new revolution is truly exciting,” he says.
