
Darwin's pit bull not so aggressive in real life

Double act: Richard Dawkins and his wife give life to the evolutionist’s new book

By James Dacey

We are like detectives who have arrived on the scene after the crime has been committed; we find traces of evidence everywhere including DNA footprints.

A good scientific theory is one that is vulnerable to being proven wrong but has not yet been disproved.

Historically, religion has attempted to dispel confusion but science does this too and it does it better!

These were just three sound bites that stuck in my mind after going along to see Richard Dawkins as he delivered another dogged defence of the theory of evolution in Bristol yesterday.

Speaking at the city’s Festival of Ideas, Dawkins was accompanied on stage by his wife Lalla Ward, an English actress best known for appearing in the BBC science-fiction series Doctor Who, in which she played the part of Romana in the late 1970s.

According to Dawkins, the couple have been giving talks as a double act for the past few years — ever since Lalla stepped in when Dawkins lost his voice on a tour of the US.

The couple used the first section of the event to read extracts from Dawkins’ new book The Greatest Show on Earth: The Evidence for Evolution. “This is not another anti-religious book – I’ve done that and got the T-shirt. Evolution is now a fact and this book will finally lay out all the evidence,” he said.

The thing that immediately struck me was that, despite all the sharp put-downs, Dawkins is actually incredibly mild-mannered in person. After reading his last book The God Delusion, I had assumed that the man they call “Darwin’s pit bull” had by now fully militarised his atheist campaign and would appear slavering at the mouth.

Indeed, one of Physics World’s correspondents had his own recent brush with Dawkins when the evolutionary biologist refused to get involved with his research project. Dawkins’ reason was that the project was funded by the Templeton Foundation – an organization that purports to fuse the ideals of science and religion but which Dawkins views as a “subversion of science”.

But the way Dawkins delivered his readings, equipped with a few funny voices, was more like a polished actor bringing to life a children’s book.

And according to Dawkins, he is not actually an atheist but an agnostic. However, this is only because “technically we all have to be”, as there is no such thing as an immutable fact. He said that a Christian God is “no more likely than Yahweh, leprechauns, or the flying spaghetti monster”.

Hmm, clearly he has not lost his sense of mischief.

Ringo Starr spotted in bouncing water droplet

The Starkey Effect: Ringo keeps psychedelia alive

By James Dacey

Apologies.

I realize this is supposed to be a hard-hitting news site reporting physics breakthroughs, but I just couldn’t resist flagging this up.

It was whilst writing a story this afternoon about water-repellency in lotus leaves that I noticed something very strange. Bizarrely, everybody’s favourite mop-topped Liverpudlian seems to reveal himself in the high-speed photographs of water droplets being ejected from the leaf surface.

Well, it made me laugh anyway…

Lotus leaves shake off water

Many plants are extremely water-repellent owing to their rough textures, which can trap air to provide a waterproof cushioning. In some cases, plant leaves are so repellent that no droplets can stick at all; instead, they simply bounce and roll off. A lotus leaf is an example of a natural material that possesses this “superhydrophobicity” and a pair of physicists in the US are proposing that natural vibrations lie at the heart of the phenomenon.

If the effect can be mimicked by materials scientists, it could lead to a range of novel applications. “Vibrations are everywhere, they are there as you walk, they are associated with the computer fan, they are in your automobile and spacecraft,” said Chuan-Hua Chen, one of the researchers at Duke University, North Carolina. “As long as you know how to harvest the essentially waste energy from environmental vibrations, you can achieve superhydrophobicity”.

A flowery youth

In the past few decades researchers have had a lot of success at mimicking rough surfaces in nature in order to make water-repellent materials. One key limitation, however, is that engineered rough surfaces do not retain water repellency when water condenses on the surface, rather than landing as water droplets. Some structures in nature, such as the lotus leaf, do not suffer from this limitation and always maintain their water-repellency.

Chen and Jonathan Boreyko claim that they have found a physical explanation for this natural advantage. Apparently inspired by his childhood experiences, Chen recalled lotus leaves flapping vigorously in the wind and realized that this is because their unusually large leaves are supported by long, thin stems. He had the idea that the lotus leaf might use this vibration to shake off water condensate that would otherwise have penetrated its rough surface.

To test this hypothesis, Chen and his colleague decided to study a vibrating lotus leaf in fine detail in the laboratory using high-speed microscopic imaging. They first confirmed that leaf motion does play a role by fixing the lotus leaf to a cold plate and observing that it loses its superhydrophobicity. In this set-up the water condensate could easily penetrate into the cavities of the surface texture, displacing the air pockets.

Lotus music

The next big challenge was to accurately reproduce the large swings of a lotus leaf within the laboratory setting. Fortunately for Chen, Boreyko is an intuitive experimentalist who figured out that the speed at which the lotus leaf flaps in the wind is the important parameter in the process. In light of this, the researchers realized that they could simply attach the leaf to a basic audio speaker and vary the frequency and amplitude to mimic the effect of wind.

In order to observe a transition in the leaf from a “sticky” state to the “non-sticky” water-repellent state, the researchers applied a mixture of water and ethanol (2:1 vol) to the lotus leaf and fixed the laboratory conditions at 21 °C with a relative humidity of 51%. As the ethanol evaporated, this simulated water condensation on the leaf surface. After 6 minutes, when more than 90% of the ethanol had evaporated, the researchers turned on the speaker to vibrate the leaf.

Using a video camera attached to a long-distance microscope, Chen and Boreyko altered the vibrations until they captured a complete “de-wetting” of the lotus leaf. All water droplets were ejected completely intact from the leaf surface when vibrations were at a frequency of 80 Hz and a peak-to-peak amplitude of 0.6 mm. In cases where vibrations were too weak, the droplets remained on the surface; and in cases where vibrations were too strong, a sticky residue was left on the surface of the leaf.
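As a rough, back-of-the-envelope illustration (my own sketch, not a calculation from the paper), the quoted drive conditions correspond to the leaf surface experiencing peak accelerations of several g, assuming simple sinusoidal motion:

```python
import math

# Back-of-envelope sketch (assumes simple harmonic motion; not from the paper):
# estimate the peak acceleration of the leaf surface for the reported drive of
# 80 Hz at 0.6 mm peak-to-peak amplitude.
frequency_hz = 80.0                      # drive frequency for complete de-wetting
peak_to_peak_m = 0.6e-3                  # peak-to-peak displacement
amplitude_m = peak_to_peak_m / 2.0       # single-sided amplitude A

omega = 2.0 * math.pi * frequency_hz             # angular frequency, rad/s
peak_acceleration = omega ** 2 * amplitude_m     # a_max = omega^2 * A for x(t) = A*sin(omega*t)

g = 9.81
print(f"Peak acceleration ~ {peak_acceleration:.0f} m/s^2 ({peak_acceleration / g:.1f} g)")
```

On those assumptions the droplets are being shaken at roughly 8 g, which gives a feel for how vigorous the de-wetting drive is.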

The Duke University researchers intend to build on this work by exploring practical applications such as self-cleaning, non-stick surfaces. A robust superhydrophobic surface could also help to reduce drag in a range of settings, including condenser pipes and ship hulls.

This research appears in Physical Review Letters.

Vitaly Ginzburg: a life in physics

Physics World: How did you first become interested in physics?

Vitaly Ginzburg: I was sent to school in 1927 when I was 11. Education was not compulsory at the time, which meant that I started in the fourth form. But in 1931, after I had been at school for only four years, the Soviet Union imposed another round of idiotic school reforms, abolishing all upper secondary schools, which used to be for 16–18-year-olds. It was recommended that all pupils who had been at school for seven years should transfer to the factory school (the school for “working people”) and that only then should they try to find a way to enrol in a university.

But I did not like this path and went to work as a technician in the X-ray laboratory of one of the higher-education technical institutes, where I was first introduced to physics. I was especially impressed with a popular-science book by the Russian physicist Orest Danilovich Khvolson entitled Physics in Our Day. It was then that I decided to become a physicist and I began to familiarize myself with the knowledge necessary for admission to university, which had just been made a competitive process. Not without difficulties, I became a student of the physics department of Moscow State University (MSU) in 1933, entering directly into the second-year course.

Who or what had the biggest influence on you in the way you think about and approach physics?


Khvolson’s book deserves another mention, though I never actually met the author in person. Then there were university textbooks such as The Fundamentals of the Theory of Electricity by Igor Tamm (the famous course of Lev Landau and Evgeny Lifshitz had not yet been written). I was also influenced by Leonid I Mandelstam, who was a very prominent figure at that time. Now we have many good books on physics and the latest results are described in review journals, such as Uspekhi Fizicheskikh Nauk [Physics–Uspekhi], which has been published since 1918 and of which I have been editor-in-chief for over 11 years.

Looking back on your scientific career, what do you think has been your single biggest achievement in physics, and why?

To me, the special charm and specific feature of theoretical physics is that you can quickly change what you are studying. Typically, you do not need many years to build new equipment, as experimentalists often do. In this spirit, I worked in many areas of physics and astrophysics. The main highlights of my work are described in my recently published book On Superconductivity and Superfluidity (Springer, 2008) in the chapter called “Scientific Autobiography. An attempt.” Having said all that, I think that my biggest achievement in physics is connected with the theory of superconductivity.


Do you have any regrets about your part in developing the hydrogen bomb for the Soviet Union?

We thought at the time that we were working to prevent a monopoly on the atomic bomb – Hitler’s monopoly if he got the bomb before Stalin. The thought of what would happen if Stalin had a monopoly on atomic weapons somehow never entered my head. Scary thought. Stalin would seek to subjugate the entire world. I admit this may betray stupidity, but this stupidity was, back then, a common way of thinking in the Soviet Union.

I wish to use this opportunity to clarify my input into the creation of the Soviet Union’s hydrogen bomb. In 1948 it was already clear that the Soviet Union would acquire nuclear weapons and would be able to compete with the US. In addition, secret sources revealed that the Americans were starting to think about building an H-bomb and so a group of Soviet theorists were instructed to address this issue, although not in a particularly urgent way as the problem was not yet being treated as a high priority. Before 1948 I had no connection with atomic weapons work, but once it had been decided to race the US for the H-bomb, Tamm was brought in and asked to suggest people to join such a team. I was his deputy in the theory department of the Lebedev Physical Institute so, naturally, Tamm proposed including me. However, as I learned many years later, my candidature was initially rejected; it was approved only some time later.

Why were you initially turned down from being part of the H-bomb team?

The main point was that I was not treated as a “reliable person” by the Soviet authorities: for example, my wife was in exile. But when the problem began to be treated seriously, I was included in the team due to the fact that I had, as one would say nowadays, a high scientific rating. Incidentally, Andrei Sakharov was included in the team at the request of our director, Sergey Vavilov, because Sakharov and his wife had a small child but did not have anywhere to live; after joining the group the Sakharovs were assigned a single small room in a communal flat.


One way or another, we began to get acquainted with the available materials and concluded that they offered no real guidance for the design of the H-bomb. At this stage Sakharov and I advanced two ideas that could solve the problem. In his memoirs, Sakharov called these ideas “the first” and “the second”, because they were still classified. His “first” idea was to use alternating layers of uranium and fuel, while my “second” was to use lithium-6 as a fuel, which would, when hit by neutrons, create tritium and helium nuclei and liberate 4 MeV of energy per nucleus. I do not think that either of these ideas was terribly bright, but together they made it possible to create the H-bomb.

What happened next?

Once the authorities had decided to build the H-bomb based on these two ideas, the real work was transferred to Arzamas-16 (the city of Sarov) where the main atomic bomb laboratories were situated. But at this stage I was once again refused clearance for the work and stayed in Moscow. I started working on the thermonuclear energy problem but at the end of 1951 I lost clearance for that work too. That was the end of my “top secret work”, period. It was a time of terror. Stalin personally signed orders for 40,000 people to be executed by firing squad. I have no idea why I missed one of those execution lists. I think that I was a very good candidate for this fate: my wife was in exile and I was branded by some as Bezrodny Kosmopolit [“homeless, stateless cosmopolitan”]. Of course, my Jewish origin could also be an “argument”. Suffice to say that I was expelled from the scientific council of the Lebedev Institute, seemingly in an attempt to make it stronger. But why I was not on the firing squad list at that time I still do not know. Perhaps it was sheer luck. Maybe an unknown somebody had expressed a kind of gratitude for the “second idea”. The bottom line is that I am a very rare – and possibly unique – person in that I was saved by the H-bomb!

You have long been a staunch atheist: what are your concerns about the growing influence of the church in Russian public life?


I am, and always have been, an atheist, but I think that to be – or not to be – religious is a fundamental human right. It is, however, a different matter if the church interferes with secular education, offering creationism as a foundation of science (in other words, the idea that God created everything and that everything then evolved obeying laws that were also established by the divine powers). I fully reject this approach and therefore keep seeking to expose creationism as false. This struggle is not easy because church leaders try to equate belief in God with a struggle for high moral values and, among other things, a struggle against alcoholism. It is clear that when the church is fighting alcoholism, or drug abuse, or immoral behaviour, I am certainly not against it. But I am convinced that the bright future of mankind is connected with the progress of science and I believe it is inevitable that one day religions (at least those existing now) will drop in status to no higher than that of astrology. When that will happen is, of course, difficult to say.


What do you think are the three most important unsolved problems in physics?

Regarding the current state of physics as a whole, it is possible to divide it into two parts: “modern physics” (ordinary physics) and “the physics of the future” (extraordinary physics). Modern physics covers everything that we know and also what we do not yet know (but, in principle, can learn) in close vicinity to the data that can be obtained at the Large Hadron Collider at CERN; this is partly true for cosmology too. I also include condensed-matter physics and plasma physics as part of modern physics, the main task of which is to create substances with special properties that we wish to have. Currently, not all possible combinations can be calculated, let alone implemented. One example of such a task is to create room-temperature superconductors, which appears to be a problem that can be solved but one that remains stubbornly beyond our reach.

As for what I call extraordinary physics, there we encounter problems involving fundamental uncertainties. This includes the question of what types of particles can exist. With regard to cosmology, I hold a perhaps unorthodox view that the complete theory should not have singularities, in particular no energy density singularities. This not fully understood area of cosmology is characterized by the Planck length [1.616 × 10⁻³⁵ m] and Planck time [5.4 × 10⁻⁴⁴ s]. I once tried to work in this direction but obtained no very significant results – see, for example, my 1971 paper in the Soviet Journal of Experimental and Theoretical Physics (33 242). In general, it is a huge terra incognita.

Of course, I feel it would be logical to include the problems of creation of life and the emergence of logical thinking in “future physics”, although it is perhaps “extraordinary biology” rather than physics. It is impossible to know when success will be achieved in “extraordinary physics” but it may happen in the coming years.

Finally: when and where were you happiest (if such a state of mind exists)?

You do feel happier when you produce results. However, I would not really say that one’s previous achievements make one happy.

• This interview was translated from the original Russian by Vitaly Kisin. With thanks to Maria Aksenteva.

Vitaly Ginzburg: an interview

Vitaly Ginzburg in Stockholm in 2003

By Matin Durrani

Vitaly Ginzburg, who turned 93 last month, is without doubt one of the leading Russian theorists of the 20th century. He shared the 2003 Nobel Prize for Physics with Alexei Abrikosov and Tony Leggett for their work on the theory of superconductors and superfluids.

He’s a long-standing admirer of Physics World magazine — having first written for us back in 1997 — and when the opportunity arose to interview him, I jumped at the chance.

Ginzburg gave answers to our questions in Russian, which were then translated into English by Vitaly Kisin, a former colleague of mine here at Institute of Physics Publishing. I must also thank Maria Aksenteva, who is the managing editor of the journal Uspekhi Fizicheskikh Nauk, which Ginzburg has edited for the last 11 years. She is very much his “eyes and ears”.

In the interview, which you can read by following this link, Ginzburg talks about how his interest in physics developed, why he distrusts the Church’s growing role in Russian society, and how his role in developing a hydrogen bomb for the Soviet Union was what saved his life.

The interview is in the opinion section of physicsworld.com’s In-depth channel, which currently contains a couple of other great articles worth checking out.

In How to publish a scientific comment Rick Trebino relives the time he tried – and failed – to have a comment published in a scientific journal. You couldn’t make the story up.

Then as Imperial College London counts down to a debate on the pros and cons of human space flight on 12 November, the two panellists write exclusively for us, presenting their arguments for and against manned or robotic space missions in the article Human spaceflight: science or spectacle? Championing robotic missions is David Clements, a lecturer in astrophysics from Imperial. Making the case for human space flight is Ian Crawford, a reader in planetary science and astrobiology from Birkbeck College, London.

Finally, Robert P Crease probes arguments made by US energy secretary Steven Chu that the next generation of synchrotron sources is an essential tool for meeting the energy challenge — check out his article “The Lure of Synchrotrons” by following this link.

How to publish a scientific comment

You read a journal article that “proves” that your life’s work is wrong. Fortunately, you realize that the paper is totally wrong, so you decide to write a comment – the option provided by scientific journals to correct such errors. The procedure for doing this is very simple. You prepare by reading previous comments in the journal, all three pages long. You e-mail the paper’s authors, politely asking for key parameters they omitted. Receiving no response, you determine the parameters instead from the authors’ graphs. You write and submit your comment.

You receive the journal’s response: “Your comment is 2.39 pages long. Unfortunately, it cannot be considered until it is less than 1.00 pages long.”

So you remove all non-essential items, such as figures, equations and explanations, and resubmit your comment. The journal’s response is now “Your comment is 1.07 pages long. Unfortunately, it cannot be considered until it is less than 1.00 pages long.” You remove frivolous linguistic luxuries like adjectives and adverbs, and resubmit, noting a detailed three-page comment in the latest issue of the journal. You also answer questions from competitors regarding the fraudulence of your life’s work and how you got away with it for so long.

You receive the reviews of your comment. Reviewer #3 likes it. Reviewer #2 hates it for taking issue with a great paper that finally debunked your terrible work. Reviewer #1, however, feels it was too short to understand. The editor writes that no decision can yet be made and suggests expanding your comment to three pages. Adjectives, adverbs, figures, equations and explanations go back in, and you resubmit. Meanwhile, you receive condolences from colleagues regarding the fraudulence of your life’s work, but fail to enjoy their tales of other debunked scoundrels, explaining that you do not have much in common with alchemists, astrologers and flat-earthers.

In the re-reviews of your resubmitted comment, reviewer #3 still likes it. Reviewer #2 still hates it and now adds that it should not be published until you obtain the parameters you confessed you were not able, and never would be able, to obtain. Reviewer #1, however, now loves your three-page comment. Incredibly, he has obtained the parameters and confirmed that the authors are wrong, and that you are right. You add a thank you in your comment to reviewer #1 for his heroic efforts.

The editor writes that your comment could perhaps now be published. Unfortunately, it cannot be considered until it is less than 1.00 pages long. You remove all figures, equations, explanations, adjectives and adverbs. You also replace wastefully wide letters, like “m” and “w”, with space-saving letters like “i”, “t” and “l”. “Global warming” thus becomes “global tilting”. The journal responds: “Your comment is 1.09 pages long. Unfortunately, it cannot be considered until it is less than 1.00 pages long.”

You remove all logical arguments and consider kicking off a co-author with a different address, which requires an entire line of valuable space. And you thank your Chinese graduate-student co-author for having a last name only two letters long, and include this important fact in recommendations to her potential employers. You resubmit. Meanwhile, numerous friends remind you that at least you still have your health, albeit in a noticeably deteriorating state since submitting your comment. At a conference, a colleague informs you that he is reviewer #1. You hug him.

You read another three-page comment in the latest issue of the journal. There is also an “erratum” from the authors, in which they admit no errors and instead report new – also wrong – numbers. You yearn for the days when errata involved correcting old errors and not introducing new ones – and realize that, with this “erratum”, the authors have already published their “reply” to your comment.

Adding text debunking the “erratum” lengthens your comment unacceptably, so you shorten it by omitting words like “a”, “an” and “the”, thus giving it exotic foreign feel. And learn that, in some literary circles, sentence fragments acceptable; indeed, verbs highly overrated. Use txt msg shorthand 2 further shorten ur comment, but decide not 2 when 100 frowny-face emoticons u couldnt resist adding actually lengthen it 2 2 pages 🙁 Rsbmt ur cmnt.

A senior editor tells you that you cannot thank reviewer #1 for obtaining the parameters, as it would reveal the journal’s obvious bias in your favour. He adds that you cannot thank reviewer #1 even for confirming your calculations, because there is no record of his having done so. Apparently, the paper on which it was printed has, over the eons, turned to dust. You send reviewer #1’s review to the senior editor, including reviewer #1’s name in case all records of his identity have also been lost.

Over a year after your comment’s submission, you learn from the editor that the authors’ official reply to it was rejected. And because, for maximum reader enjoyment, a comment cannot be published without a reply, your comment cannot be published. This decision is final. Thank you for submitting your comment to the journal with the most rapid publication time in its field.

The full (true!) story is much longer, and has had quite an afterlife on the Internet (see www.physics.gatech.edu/frog). Unfortunately, it could not be considered as a Lateral Thought until it was less than 1.00 pages long.

Author’s note

This ridiculous scenario actually occurred pretty much as written. The one exception is that the events described in the last paragraph actually occurred with a different comment, which I submitted to a different journal a few years earlier and which was never published, precisely for the absurd reason given. I also confess that I exaggerated the responses from competitors, colleagues, friends, relatives etc, but not those of the journal editors or the authors. Those events all happened exactly as I have described them.

More than a year after submitting the comment discussed in the rest of this story, I realized that it was clearly doomed unless I took serious action, so I sent a copy of this article to the senior editor’s boss. Shortly afterward, I received a call from the senior editor, who had suddenly withdrawn all of his objections. The comment was fine as it was, and it would be published.

Unfortunately, I was still not allowed to see the authors’ reply until it was actually in print. And when it appeared it reiterated the same erroneous claims and numbers (for the third time!) and then introduced a few new erroneous claims, which, of course, I am not allowed to respond to.

I have withheld the names of the various individuals in this story because my purpose is not to make accusations but instead to effect some social change. Nearly everyone I have encountered who has written a comment has found the system to be heavily biased against well-intentioned correcting of errors – often serious ones – in the archival literature. I find this quite disturbing.

Finally, I should also mention that, to keep this article light and at least somewhat entertaining, I omitted numerous additional steps involving journal website crashes, undelivered e-mails, unreturned phone calls to dysfunctional pagers, complaints to higher levels of journal management and some rather disturbing (and decidedly unfunny) behaviour by the authors and certain editors.

After all, I wouldn’t want to discourage you from submitting a comment.

Human spaceflight: science or spectacle?

On 20 July 1969 NASA’s Apollo 11 mission landed on the surface of the Moon. Apollo was done, to paraphrase US President John F Kennedy, because it was hard, and human spaceflight still remains very hard. Indeed, since the sixth and final Apollo lunar landing in December 1972, all of human spaceflight has been constrained to low Earth orbit – just a few hundred miles above the ground.

Yet over that same period, robotic science missions have studied the Sun, comets, asteroids and moons, and have reached every planet in the solar system. (We are still awaiting the arrival of the New Horizons mission to recently demoted Pluto.) They have also probed the solar wind and explored the rest of the universe in the electromagnetic spectrum from radio to gamma rays – right back to the Big Bang and the cosmic microwave background.

While Apollo provided insights into the geology of six small areas on the Moon, hardly any of the science achieved since then could have been done by a manned mission. There are many reasons for this, but the principal issue is the difficulty of keeping people alive in space and returning them safely to Earth. Crew safety has to be paramount, so science can never be the priority of a manned mission. Science is always scaled back when cutbacks are needed, well before anything that might affect the safety of the crew.

Scientific space missions and human spaceflight take place on very different scales. Apollo is estimated to have cost about $25bn at 1969 prices, equivalent to $145bn at today’s prices. The plan to return astronauts to the Moon is currently estimated to cost $97bn, though there are likely to be over-runs beyond this, and a crewed mission to Mars is expected to cost several times more. The cost of Apollo alone is comparable to the combined cost of all robotic space missions since we left the Moon. If science were the primary goal of these successors to Apollo, then it could be done cheaper, better and faster without a human crew.

To take one example, the Spirit and Opportunity rovers have been driving around Mars for the last five years. To date, the mission has cost just less than $1bn. Although each rover has only travelled about 10 miles on the red planet, for the cost of a human mission to Mars we could send about 600 such vehicles and conduct something approaching a geological survey of the whole planet (this of course ignores economies of scale and technological advances since the rovers were designed). This is something a human mission to Mars, even with many months on the surface, could not achieve.

Even the success stories that combine human spaceflight with science do not survive close inspection. The most famous of these is the Hubble Space Telescope, which has spent many years beaming back spectacular images of the universe. It is, of course, a robotic mission, but it has been repaired and upgraded five times by astronauts using the Space Shuttle. The original cost of Hubble was about $1.5bn, including contributions from both NASA and the European Space Agency. The subsequent servicing missions have cost an additional $3–4.5bn, largely as a result of the expense of shuttle launches. For the cost of the upgrades and repairs we could have launched two or three extra Hubble Space Telescopes. Hubble’s successor, the James Webb Space Telescope, will not even need to be serviced, principally because it will be positioned well beyond the Moon, some 1.5 million kilometres away.

Given all of the above, it is fair to ask why we bother with human spaceflight at all. It simply does not deliver as much science per pound, dollar or euro as robotic missions. The issue is that human spaceflight has never been about science. From the beginning, with the Soviet Union’s Vostok mission and NASA’s Mercury project each seeking to send a human into space, it was, and still remains, about national prestige and spectacle rather than science. Science can be a useful label to help market a space mission, but prestige has always been the main driver. A human mission to Mars or the Moon would be the ultimate reality TV spectacle, but would it be worth the hundreds of billions it would cost? This is the question that the Augustine Commission is asking in the US, and its answers might make unpleasant reading for the human-spaceflight community.

Admittedly, human spaceflight is one of the most spectacular achievements of the 20th century, but science has never been, and can never become, the driving force for a manned programme. The most effective way of achieving scientific goals in space is through the use of unmanned robotic probes.

Human spaceflight tends to be a controversial issue because many scientists believe that the limited resources available for space exploration would be better invested in robotic missions. On the other hand, human beings are uniquely qualified to undertake several key scientific investigations in space ranging from life- and physical-sciences research in microgravity to geological and biological fieldwork on planetary surfaces.

It is true that most astronomical observations have, since the start of the space race, benefited from robotic spacecraft placed above the obscuring effects of the Earth’s atmosphere. However, one of the principal lessons from the most successful of these instruments, the Hubble Space Telescope, is that access to a human-spaceflight infrastructure can greatly increase the lifetime and efficiency of space-based astronomical instruments. Since its launch in 1990, Hubble has been serviced by five Space Shuttle missions, without which it would have been a much shorter lived, and far less versatile, instrument.

Perhaps the Apollo missions to the Moon were the best demonstration of astronauts as explorers of planetary surfaces. They underlined how humans bring an agility, versatility and intelligence to exploration in a way that robots cannot. Although it is true that humans will face many dangers and obstacles operating on other planets, mostly due to their physiological limitations when compared with robots, the potential scientific returns (resulting from rapid sample acquisition, the ability to put data and past experience into a coherent picture, and the on-the-spot ability to recognize observations that may be of importance) are more than sufficient to justify employing astronauts as field scientists on other planets.

Indeed, these advantages were recognized in a report by the UK’s Royal Astronomical Society in 2005, which concluded that “profound scientific questions relating to the history of the solar system and the existence of life beyond Earth can best, perhaps only, be achieved by human exploration on the Moon or Mars, supported by appropriate automated systems”.

Closer to home, the microgravity environment of low Earth orbit, such as that on the International Space Station (ISS), offers a unique opportunity for research in the life sciences (including human physiology and medicine), materials science and fundamental physics. This type of environment can provide unique insights into areas such as gene expression, immunological function, bone physiology and cardiovascular function, which are important for understanding a range of terrestrial diseases such as osteoporosis, muscle atrophy and cardiac impairment. As humans are the subjects of these physiological experiments, robots can never be a substitute. Further progress in these areas will rely on funding for the ISS being maintained well into the coming decades.

Although science will be a major beneficiary of having people in space, it is not, and is never likely to be, the sole motivation for human space exploration. Other benefits of investing in human space exploration include the stimulus it gives to scientific and technical education: space exploration is inherently exciting, and is an obvious way to inspire young people to take an increased interest in science and engineering. Human spaceflight is also good for hi-tech jobs and can foster innovation. Finally, it provides a focus for international cooperation that could help to build a more stable geopolitical environment.

The multiple benefits of human space exploration were recognized by the publication in May 2007 of the Global Exploration Strategy report by 14 of the world’s space agencies. Developing a global exploration programme, with the ultimate aim of sending astronauts back to the Moon and on to Mars, is a noble vision for the 21st century, the realization of which would confer significant scientific, economic and cultural benefits on our world. As such, human spaceflight should be seen as an investment in the future of humanity and it deserves the full support of the scientific community, and indeed of all citizens of our planet.

Recipes for planet formation

Anyone who has ever used baking soda instead of baking powder when trying to make a cake knows a simple truth: ingredients matter. The same is true for planet formation. Planets are made from the materials that coalesce in a rotating disk around young stars – essentially the “leftovers” from when the stars themselves formed through the gravitational collapse of rotating clouds of gas and dust. The planet-making disk should therefore initially have the same gas-to-dust ratio as the interstellar medium: about 100 to 1, by mass. Similarly, it seems logical that the elemental composition of the disk should match that of the star, reflecting the initial conditions at that particular spot in the galaxy.

Yet we see a great diversity in the chemical composition of planets in our solar system, as a function of both planet mass and distance from the Sun (two variables that are themselves not completely independent). Consider the inner solar system, which is the realm of “terrestrial” planets – Mars, Earth, Venus and Mercury. As far as we can tell, Venus, Earth, Mars and the parent bodies of rocky meteorites that fall to Earth all have a similar composition. Mercury is an oddball with an unusual iron-rich and silicate-poor composition: it is thought to be the core of a planet that lost its mantle and crust in an energetic encounter with another protoplanet. Yet even the other terrestrial planets exhibit curious differences in the ratios of certain elements compared with the Sun. The ratio of carbon to silicon on the Earth, for example, is 20 times smaller than it is in the Sun. Such differences might offer important clues to our solar system’s formation.

Jupiter and Saturn are further away, orbiting at radii five and 10 times greater, respectively, than the mean distance between the Earth and Sun (one astronomical unit or AU). They are also radically different from their terrestrial cousins. Like the Sun, these “gas giant” planets are mostly hydrogen and helium. However, sophisticated measurements of Jupiter’s gravitational field, deduced by orbiting spacecraft, reveal that it contains more than 30 Earth-masses of elements heavier than helium. This means that Jupiter – the solar system’s largest planet at about 318 Earth-masses – is richer than the Sun in these elements by a factor of three. The same holds true for Saturn.

Beyond Saturn in the outer solar system lie Uranus and Neptune. These so-called ice giants formed beyond the radius at which it is thought that carbon, nitrogen and oxygen condensed from the gas phase to form solid ices. These two distant planets are made up of hydrogen and helium in roughly equal proportion to heavy elements. This may suggest that their cores were created just as the gas disk from which Jupiter and Saturn formed was evaporating away. This event is thought to have occurred more than 10 million years after the first protostellar condensation formed from the collapsing, rotating molecular cloud that became the Sun.

Astronomers can obtain many other clues to the formation and evolution of our solar system from studying the composition of planetary satellites, dwarf planets in the Kuiper belt beyond Neptune, comets, other debris, and the Sun itself. But over the past decade, we have also begun to learn more about planets beyond our solar system, orbiting both exotic and Sun-like stars (see “Brave new worlds” by Alan Boss, Physics World March). This means that for the first time we can study planetary systems that have evolved independently of our own. What we are finding is that in this kitchen, the ingredients for baking different kinds of cakes are kept separate, and result in cakes of very different types.

The planetary cookbook

Although the classic Joy of Making Planets has not yet been written, we do have some hints from galactic planet-making “chefs”. The basic recipe for a planetary system might read as follows.

Start with a disk of gas and dust, initially about 10–20% of the mass of the young star. The disk will be hottest at small radii (several times the radius of the star) and coldest at the largest radii (often more than 100 AU) due to the combined effects of gravitational potential-energy release and stellar irradiation. Close to the star, the disk will consist of gas only, because the dust grains will sublimate. The inner edge of this gas-only region is set by interactions with the young star, as the magnetic pressure of the stellar dipole competes with the pressure of gas trying to fall onto the star. Outside this region, gas and dust should be well mixed. The mass surface density in the disk (measured by vertically integrating through the thickness of the disk) will probably be greatest at the edge of the inner ring and will drop gently with increasing radius.

If the disk is massive enough, you may observe spiral density waves developing, not unlike those seen in galaxies with well-defined spiral arms. Do not worry. This may actually help in the collection of the solid particles needed to form large planetesimals, or even the cores of giant planets, as described below.

Next, stir turbulently as the system undergoes viscous accretion. Which mechanism provides the needed viscosity is not entirely clear. However, if the disk is sufficiently ionized, theory suggests that an inherent instability in Keplerian disks of conducting gas – termed the magneto-rotational instability (MRI) – can create the necessary turbulence. If the disk is optically thick (as most are when you start), a “dead zone” may form in the disk mid-plane between a little less than 1 AU to more than 10 AU, where MRI cannot operate because of the low ionization. However, the disk surface should be sufficiently ionized to permit viscous accretion: most material moves inwards towards the star while some material moves outwards, conserving angular momentum. Throughout this mixing process, small dust grains collide and stick, forming larger and larger bodies over time. As this happens, the disk will become less opaque, as big particles have larger ratios of mass to surface area compared with tiny ones.
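A toy sketch of that last geometric point (my own illustration, assuming solid spherical grains of fixed density; none of these numbers come from the article): the cross-sectional area available to block starlight, per unit mass of solids, falls in inverse proportion to grain size, so the same mass locked up in bigger bodies makes the disk far more transparent.

```python
import math

def area_per_unit_mass(radius_m, density_kg_m3=3000.0):
    """Geometric cross-section per unit mass for a solid sphere (= 3 / (4*rho*r))."""
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * density_kg_m3
    area = math.pi * radius_m ** 2
    return area / mass

# Illustrative sizes: a micron-sized dust grain, a millimetre pebble, a metre boulder.
for radius in (1e-6, 1e-3, 1.0):
    print(f"radius = {radius:8.0e} m -> blocking area per kg = {area_per_unit_mass(radius):.2e} m^2")
```

Each factor of 1000 in grain size costs a factor of 1000 in blocking area per kilogram, which is why the disk clears as the solids grow.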

Once the small bodies reach 1 m in diameter, they can end up being pushed into the central star via a phenomenon known as “gas drag”. This happens because gas in the disk is partly supported by gas pressure, so it orbits more slowly than nearby boulders. Hence, these rocks feel a “headwind” from the gas, lose angular momentum and spiral towards a fiery death near the star. If too much solid material is lost, then you will not be able to make your planet. This sort of baking is not for the faint of heart!
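To put a rough number on that “headwind” (an order-of-magnitude sketch of my own, with assumed values that are not from the article): the pressure-supported gas orbits slightly slower than the Keplerian speed of a solid body, and the fractional lag is of order the squared ratio of the gas sound speed to the orbital speed.

```python
import math

# Order-of-magnitude sketch of the gas-drag headwind at 1 AU (assumed values,
# not from the article). Pressure support makes the gas orbit at roughly
# v_gas ~ v_K * (1 - eta), with eta ~ (c_s / v_K)^2.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
AU = 1.496e11            # astronomical unit, m

r = 1.0 * AU
v_kepler = math.sqrt(G * M_sun / r)      # orbital speed of a solid body (~30 km/s)

c_s = 1.0e3                              # assumed gas sound speed, ~1 km/s at 1 AU
eta = (c_s / v_kepler) ** 2              # fractional pressure support (order of magnitude)
headwind = eta * v_kepler                # headwind felt by a metre-sized boulder

print(f"Keplerian speed at 1 AU: {v_kepler / 1e3:.1f} km/s")
print(f"Headwind on a boulder:   ~{headwind:.0f} m/s")
```

A lag of a few tens of metres per second sounds small against a 30 km/s orbit, but acting continuously on a metre-sized body it drains angular momentum quickly enough to make the inward spiral a genuine hazard.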

But there is at least one way of avoiding the problem. Ken Rice of Edinburgh University in the UK, Anders Johansen of Leiden University in the Netherlands and other planetary scientists have proposed that planet chefs could use the gas drag to their advantage. Wherever there is a density enhancement in the gas, the researchers suggest, the solid particles will also tend to collect. And within these denser regions of the disk, larger bodies can be produced through collisions more quickly than they are lost through gas drag.

If this process is successful, it will leave several larger bodies about 100 km in diameter – perfect for making protoplanets. Gravitational focusing, where the trajectories of small particles are significantly perturbed by encounters with larger bodies, can speed up this process, leading to runaway growth where the big keep getting bigger. But once the protoplanets get larger than the Moon, there is another problem: a combination of torques and resonances between the rocky protoplanets and the remaining gas can push and pull the planetary embryos into a net inward spiral, a phenomenon known as Type I migration. This can result in additional loss of necessary solid material as it falls into the young star. However, if the inner disk can sustain MRI-induced turbulence, Richard Nelson from Queen Mary University of London has suggested that local density enhancements can create a “pinball” effect, scattering the protoplanets around and slowing down the loss of solids. This could even lead to a pile-up of lunar-mass objects at the transition between the actively accreting and dead zones of the disk.

The ice-line, where important heavy elements condense into ices, is another place to look for signs of early planet formation. If all the carbon, nitrogen and oxygen in the area is converted from the gas to the solid phase, then the surface density of solids (dust plus ice) in the disk will increase fourfold, thus greatly promoting the formation of giant planet cores. This will eventually leave a small number of large “oligarch” protoplanets, the composition of which will depend on the local conditions of each zone in the disk. These are the building blocks for further planet formation. In the inner solar system, these objects will have a mass less than that of Mars, leading to the eventual formation of Earth-mass planets over tens of millions of years. However, protoplanets in the outer disk could be as large as the Earth.

Advanced cookery

Forming protoplanets, however, is just the first step. If you want to form a gas-giant planet, then you need to build a core of between 1 and 10 Earth-masses before the gas disk disappears. Between 0.1 and 10 million years after the disk forms, its mass surface density will decrease as material is lost onto the central star, and as the outer radius of the disk increases as material spreads out to conserve angular momentum. Also, high-energy ultraviolet radiation and X-rays from the young star can dissociate molecules and ionize atoms, which can leave some material with enough excess kinetic energy to escape the system entirely. This process is known as photo-evaporation. Observations suggest that the gas disk will typically disappear within 10 million years – so the window for making gas-rich planets is quite short by astrophysical standards.

Still, if you can build the core of your gas giant fast enough, you can trigger the rapid accretion of gas, leading to the formation of a giant planet. But even then, your gas giants may not be safe! The planet can end up being dragged along with the viscously accreting disk in another type of migration process, called Type II migration. If this happens, your planet could end up “parked” in the very inner disk (like the so-called hot Jupiters found very close to their host stars) or even pushed into the young star itself.

All of this dynamic and energetic activity has a profound effect on the chemistry of the disk and thus on the composition of planets that form in any particular place. Heidelberg University’s Hans Peter Gail and colleagues have calculated that at temperatures above 800 K, hydroxide molecules help transmute carbon-rich solids to other forms, reducing their carbon content in the process. These “free” carbon atoms quickly react with the oxygen-rich gas disk, forming carbon monoxide.

This gaseous molecule can either accrete onto the young star or photo-evaporate away. Either way, the carbon content in the planet-forming solids is depleted, which may explain why the carbon-to-silicon ratio of the terrestrial planets is so different compared to the ratio found in the Sun. But quantifying the magnitude of this mystery requires that we know the abundance of elements in the Sun to high precision, and these results are currently in flux, informed by new models of the solar spectrum by Martin Asplund of the Max Planck Institute for Astrophysics in Germany and collaborators.

In principle, forming an Earth-mass planet through collisions will be much easier than forming a gas giant (which requires core formation before the gas disappears), provided you can solve the problems of gas drag and Type I migration of solids. According to the models of John Chambers at the Carnegie Institution of Washington, Scott Kenyon of the Harvard-Smithsonian Center for Astrophysics and others, it will take somewhere between 10 and 100 million years to form a planet with the mass of the Earth at a radius of less than 3 AU. If this process proves to be universal, it would lead to a wealth of terrestrial planets in the galaxy – not to mention the universe as a whole.

Finally, while it seems to be quite easy to make “super-Earth” planets with masses a few times greater than Earth – such planets have been observed around up to 30% of stars in the HARPS survey led by the Geneva Observatory – there could be a “mass gap” that makes it difficult to form planets lighter than Saturn but heavier than Neptune. Such a gap has been predicted in the models of Shigeru Ida of Tokyo University, Doug Lin of the University of California, Santa Cruz, as well as the University of Bern’s Willy Benz and colleagues. We still lack a convincing recipe for forming the ice giants in our solar system.

Looking outward

Fortunately, new observations of exoplanets are coming in thick and fast. Increasingly, we are learning a great deal about their composition – and, in a few cases, the structure of their planetary systems. The first exoplanet found orbiting a Sun-like star was discovered in the mid-1990s by Michel Mayor of Geneva Observatory and colleagues, who had been sifting through stellar spectra looking for Doppler shifts in the absorption lines of atoms and molecules. Such shifts can arise from a star’s reflex motion in response to the gravitational pull of a nearby planet. Using this technique, the researchers found that the star 51 Pegasi has a Jupiter-mass companion planet located at an orbital radius that puts it closer to its star than Mercury is to the Sun.

Since then, hundreds of planets have been discovered by researchers around the world using this “radial velocity” (RV) technique, many in multiplanet systems that occasionally lie in resonant orbits with one another. Given the period of the orbit and the maximum velocity observed for a system, one can deduce the mass of the planet (assuming the mass of the star is known). The often unknown inclination of the orbit introduces some ambiguity, as the observed velocity is only the component of the motion projected along the line of sight.
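As a worked illustration of that statement (a minimal sketch of my own; the input values below are typical published numbers for 51 Pegasi b assumed here for illustration, not figures quoted in this article), the standard relation for a circular orbit and a planet much lighter than its star gives the minimum mass directly from the period, the velocity semi-amplitude and the stellar mass:

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg

def minimum_planet_mass(K_m_s, period_s, stellar_mass_kg):
    """m_p * sin(i) for a circular orbit with m_p << M_star:
       m_p sin(i) = K * (P / (2*pi*G))**(1/3) * M_star**(2/3)"""
    return K_m_s * (period_s / (2.0 * math.pi * G)) ** (1.0 / 3.0) * stellar_mass_kg ** (2.0 / 3.0)

K = 56.0                 # velocity semi-amplitude, m/s (assumed)
P = 4.23 * 86400.0       # orbital period, s (assumed)
M_star = 1.05 * M_SUN    # stellar mass, kg (assumed)

m_sin_i = minimum_planet_mass(K, P, M_star)
print(f"m sin i ~ {m_sin_i / M_JUP:.2f} Jupiter masses")
```

The unknown inclination enters only through sin(i), which is why the result is a minimum mass rather than the true mass.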

For star–planet systems that happen to be viewed nearly edge-on from Earth, this inclination uncertainty vanishes. More importantly, in such systems, astronomers can observe the opaque disk of the planet as it passes in front of the light-emitting surface of the star. Precision observations of these transits enable astronomers to measure the relative radii of the planet and the star, while timing variations in the phase can reveal orbital perturbations that indicate the presence of additional planets. Given the mass of the planet from the velocity observations and the radius from the transits, the bulk density of the planet can be inferred. The observed range to date is startling, varying by more than a factor of 20, which suggests that planetary systems form, as well as evolve, in different ways. In the bakery of the galaxy, you can find anything from “angel food” planets like TrES-4 that would float in water to dense “fruit cakes” that would sink like stones (e.g. CoRoT-7b).
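A minimal sketch of that chain of inference (my own illustrative numbers for a generic hot Jupiter, not values reported in the article): the transit depth fixes the planet-to-star radius ratio, and combining that radius with the RV mass gives the bulk density.

```python
import math

R_SUN = 6.957e8          # m
R_JUP = 6.9911e7         # m
M_JUP = 1.898e27         # kg

transit_depth = 0.01             # 1% dip in starlight during transit (assumed)
stellar_radius = 1.0 * R_SUN     # assumed Sun-like host star
planet_mass = 0.8 * M_JUP        # from the radial-velocity fit (assumed)

# depth = (R_planet / R_star)^2  ->  R_planet = R_star * sqrt(depth)
planet_radius = stellar_radius * math.sqrt(transit_depth)

volume = (4.0 / 3.0) * math.pi * planet_radius ** 3
bulk_density = planet_mass / volume      # kg/m^3

print(f"Planet radius: {planet_radius / R_JUP:.2f} Jupiter radii")
print(f"Bulk density:  {bulk_density / 1000.0:.2f} g/cm^3")
```

With these assumed inputs the density comes out near 1 g/cm^3, around that of water; the same arithmetic applied to real systems is what separates the “angel food” planets from the “fruit cakes”.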

Another technique for characterizing extrasolar planets has been pioneered by scientists working on data from the Spitzer and Hubble space telescopes. By dividing the emission of the star–planet system by the emission of the star alone, as the planet passes behind it, we can detect extrasolar planets directly, rather than by inferring their existence from gravitational wobbles of their host star. Last year astronomers were also treated to the first direct images of exoplanets from both ground and space-based telescopes. These direct detections provide unique information concerning the temperature and composition of these new worlds. For those planets that lie far from their host stars, such that their energy budgets are not dominated by the “starlight” they receive, observations of the planets’ emission spectra provide estimates of the internal energy of the planets themselves – an important constraint on models of their formation and evolution.

The future of exoplanets

Led by the RV technique, and recently revolutionized by transit detections, exoplanet science is set to grow rapidly as direct imaging, microlensing and astrometric surveys become routine. The discovery of additional exoplanets from both ground and space will continue to accelerate. This year NASA’s Kepler mission joined the joint CNES/European Space Agency mission CoRoT in performing space-based transit surveys. The Canadian MOST satellite and the NASA EPOCh project, as well as the Hubble and Spitzer space telescopes, will also contribute through dedicated follow-up transit observations.

While Kepler is capable of determining the frequency of terrestrial planets within 1 AU around Sun-like stars, microlensing surveys can provide statistics on the frequency of rocky planets as small as Mars between 1–10 AU. Tantalizingly, instruments under development for the world’s largest telescopes may have the capability, just barely, to see Earth-like planets around a handful of nearby stars. In a few years’ time, NASA’s James Webb Space Telescope will provide unprecedented infrared capabilities at increased angular resolution compared with Spitzer. A variety of new astrometric-, coronagraphic-, microlensing- and transit-based space missions are in development that could pave the way for future discoveries. Reaching our ultimate goal of obtaining images and spectra of terrestrial planets around nearby stars will require new instruments developed for a future generation of extremely large ground-based facilities like the European Extremely Large Telescope, as well as sophisticated space telescopes that will build on a rich legacy of discovery.

As we increase the number of exoplanets that have known bulk densities, atmospheric compositions, masses and orbital locations, and that orbit stars with a variety of properties, we will begin to understand what it takes to make the diverse planets we have already uncovered. With these new tools, it will be amazing to see what is cooking around Sun-like stars in our galaxy. If these systems bear any similarities to the diversity of outcomes from the leftovers in my refrigerator, they may be staring back at us.

Cooking up exoplanets

The statistics of exoplanet discoveries reveal some fascinating trends. McGill University’s Andrew Cumming and collaborators have recently predicted that about 20% of Sun-like stars will turn out to have gas-giant planets. Their prediction is based on extrapolating the frequency and mass distribution of planets (as observed with radial-velocity measurements) that orbit within 3 AU of their host star to orbital radii from 3 to 20 AU (1 AU is the distance between the Earth and the Sun). However, there are some factors that could improve the odds for would-be gas-giant chefs. For example, disks around higher-mass stars are bigger, thus making the process of forming a gas giant’s core easier, at least in principle. The down side is that disks around such stars do not last long: the relationship between expected outcomes for planet-formation models as a function of stellar mass is not obvious. Thus far, it appears that more-massive stars form bigger planets at larger orbital radii than lower-mass stars.

Alternatively, for extremely massive disks that can cool very efficiently at large radii, models by Lucio Mayer of the University of Zurich and others suggest that planet chefs can bypass many of the core-formation problems for gas giants, and instead make giant planets through immediate gravitational instability. But the dearth of massive gas-giant planets observed at large radii using ground-based telescopes (such as the Very Large Telescope, Gemini, Keck and the Multiple Mirror Telescope) suggests that such fine-tuned circumstances may be relatively rare. Still, there are a few astonishing counter-examples, including recent images of planets around stars such as HR 8799, Fomalhaut and Beta Pictoris.

Even more intriguing is the fact that the greater the heavy-element abundance in the atmosphere of a star, the more likely it is to have a gas-giant planet like Jupiter. This is consistent with a well-established theory of gas-giant planet formation, which requires a rocky core to form as a nucleation site for giants to emerge from gas-rich disks. With the velocity precision of RV measurements reaching 10 cm s⁻¹ (20 times slower than a typical human walking speed), planets as small as a few Earth masses have been identified around a handful of quiescent Sun-like stars. As for the rule that gas-giant planets are more likely to form around Sun-like stars with lots of heavy elements, there is even evidence that it may break down for smaller planets forming around lower-mass stars.

Secrets and spies

There is no shortage of popular histories of the creation of nuclear weapons. From the mid-1940s to the present day, scientists, historians and others have tried to explain the genesis of these awesome and awful weapons, and the reasons for their use against Japan at the end of the Second World War. From the official 1945 Smyth Report on the Manhattan Project to Richard Rhodes’ 1986 Pulitzer Prize-winning The Making of the Atomic Bomb and beyond, the history of nuclear weapons and the Cold War continues to exert a powerful and sometimes macabre fascination for those interested in the history of modern science.

Secrecy has always been part of the allure of nuclear technology. One reason for a recent proliferation of nuclear histories has been the gradual release of new archival documentation, particularly from the former Soviet Union and Eastern Europe. British Cold War archives (including selected MI5 files) have also become more accessible in recent years following the advent of “open government” and the Freedom of Information Act. Historians have been busy in these archives for two decades, with results that include Mark Walker’s superb work on the Nazi atomic-bomb project, David Holloway’s on the Soviet Union, and ongoing efforts of military, diplomatic and science historians to elucidate the reasons for the Hiroshima and Nagasaki bombings. These histories – and many more – have transformed our understanding of the early history of the nuclear age.

But one consequence of these excellent, specialized histories is that there is now a place, even in such a crowded field, for a book that brings some of this fresh information together into a good, accessible general history. With Atomic, Jim Baggott, a business consultant and popular-science writer, may well have written it. His book draws on reams of more specialized scholarship to produce a well-informed and highly readable overview of early nuclear history.

A bit of a brick at nearly 600 pages, the book is divided into four sections. It covers the mobilization of physicists at the outbreak of war and the initial wartime nuclear programmes in Britain, America and Germany; the subsequent development of these programmes and the creation of ENORMOZ, the Soviet espionage operation against the Manhattan Project; the demise of the Nazi bomb project and the development and denouement of the Allied one; and the way in which these events played out at the beginning of the Cold War, concluding with the first Soviet nuclear test in 1949.

All the familiar names of nuclear history are here – Heisenberg, Bohr, Chadwick, Oppenheimer, Fermi, Lawrence, Frisch, Peierls (see pp38–39), Teller, Kurchatov and the rest – and many other less well-known figures. Linking the various episodes are two interweaving themes: the story of the Allies’ attempts to destroy a heavy-water factory in Nazi-controlled Norway; and the role of spies, espionage and counter-espionage in the transfer of nuclear information from country to country in the 1940s and 1950s. Indeed, through his accounts of Klaus Fuchs and the other atomic spies, one of Baggott’s neatest accomplishments is to show clearly how secrecy and spying both permeated the wartime nuclear projects and shaped the ensuing Cold War, and national and international politics.

The author does an excellent job both of describing the various parallel national programmes and of integrating the scientific, technical, political, strategic, special operations and espionage elements on the larger international stage. His account is dramatic, pacey and engaging, and manages to convey a rich sense both of the various personalities involved and of the larger forces that shaped the events. He mostly manages to resist sensationalizing the material, though his journalistic tendency to end each subsection with a punchy, prophetic sentence does occasionally grate. A comparative timeline of the various national developments and a listing of the key dramatis personae with brief biographical details form very useful appendices.

The book is significantly weaker on the uses and consequences of the first nuclear weapons and their role in the early Cold War. Here Baggott misses a substantial literature from political and diplomatic historians who have been exploring the myths and realities around the “half a million lives saved” by the bombings of Hiroshima and Nagasaki, and the idea of “atomic diplomacy”. Elsewhere, he is perhaps too ready to take his sources at face value, and tends to offer simplistic judgements (for example about the clichéd “Faustian” pact between physicists and the military) that do not reflect the complex historical debates currently taking place about much of this material. Similarly, an overlong “epilogue” takes the story from 1950 through the Oppenheimer trial of 1954 and the Cuban Missile Crisis of 1962 and into the 21st century, but draws on far too narrow a source base, making for a rather patchy and untidy end to the narrative.

Despite these criticisms, Atomic is one of the most accessible and up-to-date synthetic histories of the early nuclear age. The book will be useful as a broad introduction to the history of nuclear weapons in their wider context – though interested readers may well wish to progress quickly to the more nuanced historical accounts from which Baggott draws.

The lure of synchrotrons

Shortly after he was sworn in as US energy secretary earlier this year, the Nobel-prize-winning physicist Steven Chu discussed what he called the “energy challenge” during a visit to the Brookhaven National Laboratory on Long Island. The challenge, according to Chu, is threefold. First, US national security as well as economic prosperity depends on the availability of clean and affordable energy. Second, competition for energy resources threatens to spark geopolitical conflict. Third, the development of alternative energy sources that do not depend on fossil fuels is critical to address climate change.

Meeting the energy challenge, Chu said, requires more than “political will”. It must involve improving the efficiency of existing technologies by a factor of 5–10. This in turn requires not just fine-tuning existing recipes for producing and distributing energy, but also developing “transformative technologies”. To develop these, Chu said, he was pinning many of his hopes on the National Synchrotron Light Source II (NSLS II) – a $912m facility currently being built at Brookhaven that was awarded $150m in stimulus money under the American Recovery and Reinvestment Act and is scheduled to be commissioned in 2015. “I think that one of [NSLS II’s] major contributions will be at the energy frontier,” he said (referring to fuels, not particle physics).

In the past, Chu said, transformative technologies often emerged from fundamental conceptual breakthroughs. He gave two familiar examples. One was electronic amplification: the transistors that replaced vacuum tubes were made possible by quantum mechanics, which revolutionized ideas of how electrons are transported in solid materials. The other example was food production, where ammonia synthesis and other chemical-technology breakthroughs led to the ability to get more food from the same land through the use of better fertilizers. Research planned for NSLS II, Chu said, has a good chance of achieving breakthroughs of the sort that lead to transformative technologies.

Those are high hopes, but how solid are they? Synchrotron light sources, after all, are used for a wide variety of purposes, many of which – crystallography, for instance – have nothing to do with energy. Moreover, such facilities are renowned for not behaving exactly according to plan: their contributions to science often differ from expectations, for reasons that have less to do with our intentions than with nature’s complexity. Chu, a former director of the Lawrence Berkeley National Laboratory, surely knows this. So what justifies his confidence that NSLS II can address the energy crisis?

Engineered solutions

Physicists build more powerful machines for three different reasons. One is open-ended curiosity – to look more closely at familiar phenomena and see whether anything new can be spotted; these are the perennial questions. Another is that there may be tantalizing hints – indications that something unusual is happening just beyond the horizon of existing machines. Finally, there may be specific objectives, if compelling arguments exist that solutions to existing problems await at a particular resolution. Chu’s claims place NSLS II firmly in this third category. So what are its specific objectives?

Existing synchrotron sources have a spatial resolution of about 15–20 nm, whereas NSLS II expects to reduce this to just 1 nm. This is the transitional scale between atomic and bulk matter, where properties change rapidly and in poorly understood ways. The ability to study this domain has several implications.

One is that NSLS II will be able to scrutinize the kinetics of existing energy processes. The rate at which a battery charges, for instance, is a function of the interface between electrolyte and electrode, and the kinetics that control this process take place at the nano-scale. The interface can be seen in bulk with existing X-ray sources and at the atomic level under vacuum conditions with electron microscopes – but studying it with nanometre resolution would reveal how it behaves under operating conditions (i.e. not in a vacuum) and in real time.

Moreover, nanometre-scale resolution will let researchers engineer nanometre-scale structures, such as catalysts and solar cells that use quantum dots, which may be important at the energy frontier. Because of their different surface-to-volume ratios, nanomaterials have different properties from those at the bulk scale. Nano-scale engineering requires the ability not only to inspect such structures, but also to get them to self-assemble. Indeed, there is a kind of “reverse-bootstrapping” effect in which ever smaller structures enable the development of better technologies to engineer ever smaller structures.
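The surface-to-volume argument is simple arithmetic: for a sphere, the ratio of surface area to volume is 3/r, so it grows a thousand-fold as a particle shrinks from micrometre to nanometre size. The few lines below (an illustration with arbitrarily chosen radii) make the point.

    # Surface-to-volume ratio of a sphere scales as 3/r
    for radius_nm in (3, 30, 3000):          # a 3 nm nanoparticle versus a 3 micron grain
        r = radius_nm * 1e-9                 # radius in metres
        s_over_v = 3.0 / r                   # (4*pi*r**2) / (4/3*pi*r**3) = 3/r, in m^-1
        print(f"r = {radius_nm:>5} nm  ->  surface/volume = {s_over_v:.1e} per metre")

A far greater fraction of atoms therefore sits at or near the surface of a nanoparticle, which is why its chemistry and electronic behaviour can differ so sharply from the bulk.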

Yet another implication of studying this particular scale relates to emergent properties. This scale is an inhomogeneous region in which different forces struggle for dominance. Studying how these forces compete and new properties emerge may let researchers design and build materials with qualitatively new properties.

A good example of all three of these implications involves gold. At the atomic scale, gold has certain well-understood chemical properties, while in bulk it is chemically inert. In between, gold has marvellous but poorly understood properties as a catalyst that seem to peak at the 6 nm scale. In allowing researchers to scrutinize gold catalysts in action and in real time – as well as to study nano-scale particles involving gold – an instrument of nano-scale resolution like NSLS II could help us to understand what kinds of catalysts we should be building to convert fuel more efficiently. It would be a step towards the dream articulated half a century ago by the materials scientist Arthur von Hippel of being able to “play chess with elementary particles according to prescribed rules until new engineering solutions become apparent”.

The critical point

The energy challenge will, of course, also involve reducing demand: we will have to learn to live with less energy, which is a formidable challenge of a separate kind. But with the world’s population set to rise to 10 billion by 2050, there is no way round having to increase our overall energy supply. The arguments are sound that a more advanced light source, able to reach a resolution of about 1 nm, will help in that quest.
