
Fusion ambassador

With his glasses and shock of thick, white hair, Chris Llewellyn Smith does not look like a superhero saving the world from peril. Yet in the public eye the slim, 66-year-old physicist seems to be acquiring just that status. At least that is the reaction he says he got while recently moving house in Oxford. “I was quite surprised by my new neighbours’ knowledge of energy issues when they said ‘The world is relying on you to develop fusion!’.”

But then Llewellyn Smith is certainly not your average physicist. During a career spanning nearly 50 years, he has held numerous high-level positions, notably director general of CERN (see “A passion for particle physics”), provost and president of University College London (UCL), head of physics at Oxford University and director of the Culham site of the UK Atomic Energy Authority (UKAEA), which is home to both the UK fusion programme and the Joint European Torus (JET), currently the world’s leading fusion experiment.

Now supposedly retired, Llewellyn Smith is not putting his feet up but is instead involved in the €5bn ITER fusion experiment currently being built in Cadarache, France, where he is chairman of the project’s council. He is also president of the Synchrotron-light for Experimental Science and its Applications in the Middle East (SESAME) in Jordan, and in December last year became a vice-president of the Royal Society, a role where he expects to be involved in briefing the society’s president Martin Rees on energy issues.

One of his functions as ITER chair is to advise the project’s director-general Kaname Ikeda on funding and strategy, but the role also involves him advocating fusion as a possible energy alternative, which has seen Llewellyn Smith give dozens of public lectures on energy. “The public seem to understand that nuclear fusion has the potential to provide essentially unlimited energy, in an environmentally responsible manner,” says Llewellyn Smith as we chat at the Rudolph Peierls Centre for Theoretical Physics in Oxford. “Having another major energy option would be enormously valuable.”

Star power

Nuclear fusion is the energy source that powers the Sun and the stars. Mimicking this source of energy involves heating and controlling a plasma of hydrogen isotopes — deuterium and tritium (D–T) — until it is so hot that the nuclei can overcome their mutual Coulomb repulsion and fuse to produce helium nuclei and 14 MeV neutrons. The idea for a fusion power station is then to extract the energy of the neutrons as heat, which would be used to boil water and drive a steam-powered electrical generator.
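For readers who want the reaction spelled out, the standard D–T process (a textbook relation, not anything specific to ITER’s design) is

\[
\mathrm{D} + \mathrm{T} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV}),
\]

with the neutron carrying roughly 80% of the 17.6 MeV released in each fusion event. That is why a power station would extract heat from the neutrons; the charged helium nuclei stay trapped by the magnetic field and help keep the plasma hot.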

But it is not an easy task: the difficulty lies in maintaining a burning plasma for periods of weeks and getting out substantially more energy than you put in. There are currently two methods that could make it work: confining the plasma with magnetic fields; or “inertial confinement” using laser or particle beams. Magnetic confinement, which is how ITER will operate, is the more developed of the two and the more likely to be consistently supplying fusion-generated electricity to the grid by the middle of the century.

Yet the ITER project has endured a rough ride since the four initial partners — the European Union, Japan, the Soviet Union and the US — first agreed in 1985 to build an experimental reactor to demonstrate the scientific and technical practicality of fusion power. The latest setback came last year when the reactor’s designers submitted a plan to upgrade the reactor from the 2001 proposal. This change put back ITER’s start-up date by two years to 2018 and has contributed to construction costs rising above €5bn, although Llewellyn Smith is unwilling to put a specific figure on the increase.

One of the main design changes involves a new method to contain potentially damaging discharges of the plasma onto ITER’s giant 1000 m² reactor wall. At fixed temperature, the fusion rate is proportional to the square of the pressure. It was originally thought that the pressure falls off smoothly to zero at the edge of the plasma, but in the early 1980s physicists discovered a way of operating a reactor in which the pressure drops off very steeply at the edge, and is raised everywhere else by the “height” of this drop. This mode of operation increases the fusion rate, but it also produces instabilities at the plasma’s edge — known as “edge-localized modes” or ELMs — that spit globs of plasma onto the reactor wall.
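The pressure-squared scaling quoted above follows from a simple back-of-the-envelope argument (our gloss, not a calculation from the article). The fusion power density scales as the product of the deuterium and tritium densities,

\[
P_{\mathrm{fus}} \;\propto\; n_{\mathrm{D}}\, n_{\mathrm{T}}\, \langle\sigma v\rangle \;\propto\; n^{2}\langle\sigma v\rangle, \qquad p \approx 2 n k_{\mathrm{B}} T,
\]

so at fixed temperature \(\langle\sigma v\rangle\) and \(T\) are constant, \(n \propto p\), and the fusion rate rises as \(p^{2}\). Raising the pressure everywhere in the plasma by the “height” of the edge pedestal therefore pays off directly in fusion output.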

The original 2001 design envisaged firing frozen pellets of deuterium from outside the reactor into the plasma to produce many small ELMs, rather than a few large ELMs. However, plasma physicists have since realized that this may not be enough to do the job completely, so the new design incorporates an additional way of taming ELMs by applying a random weak magnetic field via small coils within the reactor near the plasma edge. To accommodate the new coils means re-designing the inner reactor. The snag is that this will cost much more than the original design.

The redesigns now need to be funded by ITER’s seven members (since 2001 China, India and South Korea have joined, and the US has rejoined after pulling out). Llewellyn Smith points out that while the new design increases the cost, it is much more likely to achieve ITER’s goal. However, ITER’s price-tag has risen for other reasons too. “People hadn’t been careful enough in tracking the cost increases of commodities, which have gone up much more than general inflation, and they grossly underestimated the difficulty of setting up an international laboratory from zero,” he says. “When the initial costing was done, there were three parties in ITER, but now there are seven.” He points out that since all the members want to obtain technical know-how in a wide range of areas, construction of many of the components is being split between several different countries and companies, which adds to the cost.

Such delays have left critics repeating the well-worn phrase that fusion is always 30 years away. Indeed, Llewellyn Smith says it would not surprise him if there were yet more delays beyond the 2018 “first plasma” start date. “That date, of course, is a big public-relations goal,” he says, “but I think the emphasis on the first plasma is wrong.” This is because initially ITER will only use hydrogen to avoid activating the magnets and walls. Tritium will only be used five or six years later. “The first plasma can be whenever you like as long as you don’t delay, or jeopardise, the success of the first D–T plasma,” says Llewellyn Smith. The first D–T plasma is officially planned for 2023 and he insists that any further redesigns or delays should avoid pushing this date back further than absolutely necessary.

Although Llewellyn Smith is confident that ITER will demonstrate its main goal of generating more power than it consumes, what if ITER does not work? “What might happen then would depend on why it failed,” he says. Whether governments will be interested in pursuing fusion if ITER does not work is a big question, but one that Llewellyn Smith thinks they will have to address. “When we see the lights go out as fossil fuels become increasingly scarce, people will think differently about investing in developing new energy sources,” he says.

Opening SESAME

While his involvement in ITER seems a pretty big job for someone in retirement, Llewellyn Smith has also for the past few months been president of the council of the SESAME synchrotron, which is being built in Jordan. It aims to foster science and technology in the Middle East and to use science to forge closer ties between scientists across the region (see Physics World April 2008 pp16–17, print edition only). Most of his time on this project is spent trying to get funding to complete SESAME, which will produce X-rays that can be used in a range of experiments, from condensed-matter physics to biology.

Despite his initial reluctance and lack of knowledge in synchrotron science, Llewellyn Smith sees some advantages of getting involved. “It needed a president from outside the region who is politically neutral,” he says, “but also someone who knows about running big science projects, and knows people in Brussels and bodies such as the Department of Energy in Washington and UNESCO.”

Now it is up to Llewellyn Smith to take the lead in finding the funding to build the remaining piece of the jigsaw — the synchrotron storage ring, which is used to keep the electrons circling while producing X-rays. In addition to Jordan itself, Germany and the UK are the biggest contributors to the project — the former having provided the injector system, based on the old BESSY synchrotron in Berlin, which pumps electrons into the storage ring, while the latter donated some of the beamlines from the recently shut down Synchrotron Radiation Source at the Daresbury lab in Cheshire.

Llewellyn Smith is looking not only to the members of SESAME and to the European Union, but to charitable organizations and philanthropists to fill the gap in these “capital costs” amounting to about $15m. However, the running costs will grow to $4–5m a year, putting further pressure on the tight science budgets of SESAME’s 10 member states, which include Israel, Iran and the Palestinian Authority. Even with many potential stumbling blocks, Llewellyn Smith is hopeful that the synchrotron will be operational in five years’ time.

Despite his prowess in running large research projects, Llewellyn Smith’s career was not always rosy. He had a difficult time after quitting CERN to become provost and president of UCL in 1999. “I didn’t enjoy the job and you don’t do your best when you don’t enjoy it,” he admits. He also concedes that “problems such as how to restructure UCL’s faculties were not what I wanted to think about 24 hours a day”.

It is obvious that particle physics is Llewellyn Smith’s real passion, and indeed he is currently writing a book on the LHC with James Gillies — head of public relations at CERN. Rather than starting on the first chapter, they have already written the last one, entitled “Is it worth it?”. As far as the LHC is concerned, Llewellyn Smith would undoubtedly say yes. Whether the same is true for ITER remains to be seen.

In person

Born: Giggleswick, Yorkshire, 1942
Education: University of Oxford (BA and DPhil)
Career: University of Oxford (1974–1998);
director-general of CERN (1994–1998);
provost and president of UCL (1999–2002);
director of UKAEA Culham (2003–2008)
Family: married, one son, one daughter
Hobbies: reading and singing (having recently joined a choir)

Journeys to greatness

Readers, I hope, will forgive me for a shameless bit of self-publicity about my latest book, The Great Equations: Breakthroughs in Science from Pythagoras to Heisenberg (Norton). But then the book is partly yours too, inspired as it was by the responses of Physics World readers to my request for suggestions of great equations (see “Critical Point: The greatest equations ever”). In the book, I chose to discuss not the most frequently mentioned equations, but those that seem to have engaged their discoverers in the most remarkable journeys.

The journey metaphor may seem misleading if taken to suggest smooth and steady progress to an already known destination. The scientific journeys I recount — which include those culminating in F=ma, and the equations of Maxwell and Schrödinger — were unpredicted, often protracted and erratic. The journey metaphor should also not imply that the travellers passively observed the changing scenery; in fact, the scientists interacted with their environment while altering it.

But the journey metaphor does capture one important aspect of the birth of these equations, which is how their originators’ ideas about what was important changed during the course of their research. Newton, Maxwell, Schrödinger and others each inherited a “landscape” or view of how knowledge about nature was organized. But during their research, new concepts — such as mass and force, entropy and displacement current, quanta and wave equations — appeared on the horizon, grew in importance and displaced others to assume positions as indispensable landmarks in the conceptual landscape.

For the ultimate destination of such scientists was not a particular location that they saw beforehand, but clarity. They were dissatisfied with what they had, perceived a vision of what might take its place, and were able to carry out the inquiry needed to realize it. At each step, they found the world to be somewhat discordant — not fully grasped — with hints of another, deeper order just over the horizon. This discordance is what makes newly realized equations seem, strangely, to be both discovered and invented.

Oliver Heaviside, who transformed Maxwell’s then-convoluted equations into their now-familiar versions, once remarked that “it was only by changing its form of presentation that I was able to see it [electromagnetism] clearly”. The sense of that remark — you transform to clarify — could have come from any of the scientists mentioned in The Great Equations.

No royal road

Most of the time we are less interested in journeys than in where they take us. But we can learn much from them. One lesson is just how varied such journeys are. Sometimes they are taken by scientists who talk and argue constantly with one another, as with the equations of thermodynamics and the uncertainty principle. Other journeys were undertaken by individuals working essentially by themselves, such as Einstein on his path to general relativity and Schrödinger to his wave equation, though such individuals in effect carried on conversations with colleagues even when working alone. There is no royal road to discovery.

Another thing we learn is that equations are not simply inert tools that work only in the hands of scientists and engineers. They can also exert an educational and even cultural force that shapes our view of the world. The Pythagorean theorem teaches us what proof means, the second law of thermodynamics keeps in check our dreams of free energy, Einstein’s equations changed our understanding of space and time, and the work of Schrödinger and Heisenberg forces us to rethink what being a “thing” means.

We also learn to appreciate how deeply affecting the scientific life can be. The scientists who took those journeys were never blasé, never disinterested. They were infused with curiosity, consternation, bafflement, frustration and wonder. And each scientist had what might be called a particular style. Some succeeded because they were only satisfied when they found what they were looking for, while others succeeded only because they were prepared to see something more than they expected.

Most of all, the journeys allow us to glimpse the mutability of nature and our role in it. The journeys teach us that nature could be otherwise — that it was otherwise for us until a moment ago, and for all we know it could change in the future. In such instances, we experience a transcendent moment in which a higher thought emerges in the middle of an existing one.

The critical point

The Great Equations ends by relating a conversation I had while writing the book, with an elderly physicist who expressed little comprehension of, or sympathy for, the project. To his workmanlike mind, the equations I mentioned seemed so obvious and logical that he could not picture not having known them, and he saw no value in making them more enigmatic. “Such equations”, he told me, “would not be wonderful if people realized how trivial they are. You should help them do so.”

I could have hugged him. At that moment, I finally realized exactly what I was trying to do. It was exactly the opposite — to undo that sense of obviousness and triviality, and to take readers back to the moment just before the equations were discovered, to appreciate how untrivial they are. Readers could, I hoped, thereby relive the wonder of the moment when the equations were first grasped — when they seemed simultaneously discovered and invented.

Scientists such as my physicist acquaintance tend to focus on the formal, discovered — what he meant by “trivial” — aspect of the birth of equations, whereas philosophers and historians tend to focus on the other aspect, having to do with their invention. It ought to be possible, I felt, to capture both aspects at once — which would, I thought, finally provide a more complete picture of the discovery process itself.

Not wrapped up yet

Cosmologists ask questions about the history and evolution of the universe on the largest spatial and temporal scales. How fast is the universe expanding? What are the densities of the various sorts of mass–energy therein? What is its future? And how and when did it all begin? These cosmic questions may at first seem far removed from the branch of mathematics known as topology, which is the study of shapes at their most basic. It is not about the angles, corners and planes of geometry, but about pliable shapes and the handles and holes that cannot be changed by bending and stretching. Topologically, a ball is the same as a glass and a single-handled coffee mug is the same as a ring, but clearly their geometries are different. Similarly, we must separate questions about the geometry of the universe from those about its topology.

Both cosmology and topology reach back to the ancient Greeks and, likely, to the first humans who had any time to think at all. However, it is only in the last couple of centuries that the two have become proper sciences. Each relies on what has come to be known as non-Euclidean geometry, a branch of mathematics that forms a cornerstone of Einstein’s general theory of relativity and is also required to enumerate the possible topologies that could describe the universe.

The Wraparound Universe by the French cosmologist Jean-Pierre Luminet is not just a popular overview of the union of these two sciences, but also a none-too-subtle plug for the author’s idea that the universe might have the large-scale equivalent of handles or holes. The universe, he argues, could be multiply connected, just like a computer-game “world” where moving off the right edge of the screen brings you back onto the left, and moving off the top brings you back to the bottom (see “A cosmic hall of mirrors”).

Luminet’s argument builds on the fact that the possible topologies for a particular surface (a computer screen, say, or the fabric of space–time) are related to the ways in which you can tile, or tessellate, the surface using repeated patterns — like the ones in M C Escher prints hanging in college dorms worldwide. The individual repeating pattern, known as the fundamental domain, determines the underlying topology.

As an example, let us return to the computer-game “world”, for which the fundamental domain is a rectangle. The topology of the computer game “world” is a torus, like the surface of a doughnut. To visualize this, take a rectangular rubber sheet (representing the computer screen) and wrap the left edge against the right, making a cylinder. Now join the two ends. Note that the computer game and a doughnut are topologically equivalent, but they have different geometries: the game is flat, while the doughnut is curved.
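The wraparound rule is easy to make concrete in a few lines of code. The sketch below is illustrative only (the screen size and example moves are arbitrary values, not anything from the book): positions are identified modulo the screen’s width and height, which is precisely the identification that turns the rectangular fundamental domain into a torus.

```python
# Toy "computer-game world": the screen is a rectangular fundamental domain and
# positions are identified modulo its width and height, giving a torus topology.
WIDTH, HEIGHT = 320, 200  # arbitrary illustrative screen size

def move(x, y, dx, dy):
    """Step by (dx, dy); walking off one edge re-enters from the opposite edge."""
    return (x + dx) % WIDTH, (y + dy) % HEIGHT

print(move(315, 100, 10, 0))  # off the right edge -> (5, 100), back on the left
print(move(100, 195, 0, 10))  # off the top -> (100, 5), back at the bottom
```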

We can expand this notion of tiling a surface to higher dimensions easily enough. One possible 3D fundamental domain that tiles Euclidean 3D space, for example, is a cube. Crucially, the same process can be extended to curved surfaces and multidimensional spaces as well — which is just what is needed to make the link to the 4D curved manifold that cosmologists are starting to describe via high-precision measurements of the universe.

Luminet’s book covers these two disciplines, cosmology and topology, and the history of their overlap up to and including these high-precision measurements. He concentrates on observations of the Cosmic Microwave Background (CMB), repeated patterns on which could be a mark of primordial topology. He closes the book proper with the first hints, from the background-measuring COBE satellite together with the MAXIMA and BOOMERANG balloon-based experiments, that such data might be compatible with his topological ideas.

In the new English-language edition being reviewed here, Luminet updates the original text (written in 2001) by adding an appendix that discusses the analysis he and colleagues have performed on the more recent data. Their analysis, he claims, reveals a strong preference for a fundamental domain in the shape of a dodecahedron — albeit one inhabiting a curved hyper-spherical universe, rather than the flat 3D Euclidean geometry we can more easily picture.

Alas, a more detailed look at the relevant data obtained using NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) shows that the conclusion seems unwarranted. Among other groups, I, along with my colleagues Anastasia Niarchou and Levon Pogosian, concluded that the “power spectrum” data used in Luminet’s original work does not statistically warrant any explanation beyond the plain-vanilla standard cosmological model: a simply connected, spatially flat universe. Examining the detailed patterns of CMB fluctuations as Niarchou and I have done more recently — with a method that uses all available information — shows that Luminet’s preferred model is actually very strongly disfavoured.

Moreover, Luminet’s explanation ignores a crucial a priori weakness of his proposal: it requires the universe to have a very slightly curved geometry. This, in turn, requires the radius of the hyper-spherical universe (the curvature scale) to be comparable to the so-called Hubble distance (the distance a beam of light could have travelled since the Big Bang). Why should these numbers be nearly equal?

In fact, an interesting topology, especially paired with a curved geometry, is neither required nor particularly supported by the data. In contrast, one of the crucial features of cosmic inflation — the fact that inflation flattens the geometry of the universe so that the curvature scale becomes immeasurably large — is supported by the vast majority of cosmological data, and therefore by cosmologists (lightly mocked as the “inflation lobby” by Luminet). Nonetheless, the paired questions of the correctness of inflation and of the topology of the universe are by no means closed. We await further data, especially from the European Space Agency’s Planck satellite, which is due to be launched in April.

Luminet otherwise presents a workmanlike introduction to modern cosmology and to topology. For a popular-science book, both topics are presented at a sufficiently high level as to confuse the completely uninitiated reader, but I suspect that the audience for this book is one that has already digested some subset of other recent books, such as those by Stephen Hawking, Brian Greene, Janna Levin and João Magueijo. To differentiate this book, Luminet attempts to have its form reflect its content: the reader is presented with arrows and page numbers in the margins, supposedly giving the book multiple connections and a “tree-like structure”. I admire the attempt, and it usefully supplements the index, but I found it easier to read the book straight through (although a Web version might be more successful).

Nowadays, science (or at least physics) progresses not by sustained argument in books but by short snippets. Books serve to consolidate knowledge and present it to students or to the public. So it is heartening to read a book like The Wraparound Universe that not only summarizes the state of the art of a field but also argues for an idea, even if that idea is, in this case, likely to be incorrect.

Once a physicist: Zhengrong Shi


Why did you choose to study physics?

I was born on an island called Yangzhong on the Yangtze River in Jiangsu province, China. Though mainly agricultural, Yangzhong was blessed with a good school system and I dedicated myself to my studies. I followed my instincts and interests in science and built a strong foundation in physics. My family always encouraged me to pursue my studies, so I went on to obtain a Bachelor’s degree in optical science from Chang Chun University of Science and Technology in 1983 and a Master’s degree in laser physics from the Shanghai Institute of Optics and Fine Mechanics in 1986.

What did you do next?

I had been doing laser physics for more than five years when I decided to go to the University of New South Wales in Australia to further my studies. Aside from the initial culture shock, this was a fantastic opportunity to work with like-minded students and accomplished professors.

How did you become interested in solar power?

If I had not left China, I would undoubtedly still be working in laser physics. In the 1980s, this was a very new research area in China, and the Chinese government devoted many human and financial resources to this subject — I was honoured to be involved. But one of my colleagues in Australia suggested I speak to Martin Green, a prize-winning solar specialist, and so it was really by chance that I came to work in the field. Under Green’s tutelage I undertook a PhD in thin-film solar cells. When a breakthrough came, I was ecstatic, and that experience became the basis for a career in solar power.

Why did you decide to shift into industry?

I accumulated a lot of experience developing solar technologies both from my time at university and working in a solar start-up firm, so I knew the challenges involved. Then in 2000 I heard about the massive economic growth in China and the support provided to Chinese citizens based overseas willing to set up new enterprises back in China. I was also acutely aware of the growing need for renewable energy and so I decided to launch a company that helped provide a solution. I have always enjoyed challenges, and the opportunity to create an independent solar company was one that I could not forgo.

How has your physics training affected your approach to your business?

My scientific background was fundamental in the development and growth of Suntech. When I started the company back in 2002, we had very limited resources, and I had to design and build a production line using a mixture of second-hand and new equipment. At that point, I personally managed all aspects of the business, including production, sourcing, R&D and general operations. It was a real challenge, but my training allowed me to wear a number of hats and use our resources sparingly.

Do you still keep up to date with any physics?

The business and the industry are incredibly dynamic, and I do my best to keep track of the latest technical developments. It takes a huge commitment from me to guide the strategic developments of the company to maintain its growth and at the same time ensure that our costs do not get out of hand. In addition, I spend up to 20% of my time educating others about the importance of solar power and the necessity for us to act quickly to avoid the consequences of climate change. I also try to spend as much time as possible with the R&D group and discuss with our scientists the latest improvements in solar technology.

What is the biggest challenge for the future of solar power?

The goal of the solar industry is to reduce the cost and price of solar products to the point where the electricity generated is at or below the cost of energy coming from the grid. Once we reach grid parity, mass adoption of solar technologies will occur without government subsidies, and increased use of solar power will help us tackle issues like global warming, environmental pollution and energy security. To meet this goal we have a number of cost-reduction initiatives under way, including reducing the cost of silicon by improving how we source it; increasing the conversion efficiency of our solar products (which simultaneously increases power output per unit area and reduces production costs); and improving production efficiency through greater automation and lean supply-chain management.
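To see why efficiency feeds so directly into cost, consider a generic back-of-the-envelope relation (our gloss, not Suntech’s own figures). Under standard test conditions of 1000 W/m² of sunlight,

\[
\text{cost per peak watt} = \frac{\text{module cost per m}^{2}}{\eta \times 1000\ \mathrm{W\,m^{-2}}},
\]

so for a given cost of producing each square metre of module, raising the conversion efficiency \(\eta\) lowers the cost per watt in direct proportion, and grid parity is reached when this figure, plus installation and other system costs, falls below the local price of grid electricity.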

The nuclear threat: a new start

President Obama has said that he intends to “make the goal of eliminating all nuclear weapons a central element in [US] nuclear policy”. He faces enormous challenges in converting this aspiration into a practical reality, both at home and abroad. One reason is that the world has become accustomed to thinking of nuclear weapons the way Winston Churchill described them in 1955: “It may be that we shall by a process of sublime irony have reached a stage in this story where safety will be the sturdy child of terror, and survival the twin brother of annihilation.” Over half a century later, the world has changed and the “sublime irony” is that terror and annihilation still loom over humanity, while safety and survival are still in doubt.

During the Cold War, the US and the Soviet Union relied on nuclear deterrence to navigate successfully through those perilous years. And, against what seemed to be insurmountable odds, not one of the many thousands of existing nuclear weapons was detonated in military combat, although there were numerous opportunities to do so.

But it would be dangerously wrong to draw comfort from that achievement. Relying on nuclear weapons for deterrence is becoming increasingly hazardous and decreasingly effective in a world in which nuclear know-how, materials and weapons are spreading ever further and faster. Today, the world is teetering on the edge of a new and more perilous nuclear era, facing a growing danger that nuclear weapons — the most devastating instrument of annihilation ever invented — may fall into the hands of “rogue states” or terrorist organizations that do not shrink from mass murder on an unprecedented scale.

With the spread of advanced technology, and a renewed international interest in nuclear technology for civil power generation, there will be more opportunities for the theft or diversion of bomb fuel unless the full nuclear-fuel cycle is under tight, verifiable control, from enrichment to reprocessing. The threat of such proliferation is becoming ever more likely, particularly with more countries now aspiring to enter the nuclear power club.

To prevent such a catastrophe will take strong leadership and a sense of urgency that was lacking when two bold leaders, US President Ronald Reagan and Soviet leader Mikhail Gorbachev, attempted to escape the trap of nuclear deterrence based on mutual assured destruction at their remarkable summit meeting in Reykjavik in 1986. Although they failed to close the deal then — recall that in 1986 the Berlin Wall still stood and we had yet to emerge from the Cold War — Gorbachev and Reagan did start down the path of reducing the sizes of their bloated nuclear arsenals. However, without a vision of a world free of nuclear weapons as a guide beam, the nations of the world have not pursued measures that could reduce the nuclear dangers we face with the intensity and the boldness that the times require.

The challenges ahead

Rekindling this vision of Reykjavik will be President Obama’s main challenge, but realizing that goal will be very difficult. The importance of meeting this challenge is discussed in the new book Reykjavik Revisited: Steps Towards a World Free of Nuclear Weapons published by Hoover Institution Press, which I co-edited. Achieving the goals will require nothing less than a new deal between the states that have nuclear weapons and those that, for now, have volunteered to forego them. Progress will require political co-operation on a global scale between nations with very different economic and strategic aspirations, as well as forms of governance.

Currently, inspections under the Nuclear Non-Proliferation Treaty (NPT) to limit the spread of nuclear weapons are restricted to declared facilities only, and existing institutions like the International Atomic Energy Agency need more political clout and resources, including rights to the on-site inspection of suspect activities such as those currently being pursued under the Additional Protocols to the NPT. It will also be necessary to convince sceptics that it is possible to meet the very difficult challenge of verifying, with an effectiveness consistent with US security, that no nuclear weapons or bomb material have been secretly stored away in violation of a treaty commitment to eliminating them.

Winning over sceptical audiences in the US and elsewhere will take time, but the Obama administration can begin by proposing a series of practical steps to convince sceptics and allies alike that the vision of a world without nuclear weapons is not a flight of fancy but a practical goal. The following two steps are candidates for starting that process.

First, the US should engage the full cooperation of Russia, which, together with the US, possesses more than 90% of the world’s nuclear warheads. The administration must resume and reinvigorate serious negotiations to review and, if appropriate, extend key provisions of the Strategic Arms Reduction Treaty of 1991. Most pressing is the need to negotiate an extension of the essential monitoring and verification provisions of this treaty, which is scheduled to expire on 5 December 2009. The two parties should agree to reduce the limits on the total number of warheads to less than the 1700–2200 agreed in the 2002 Moscow Treaty on Strategic Offensive Reductions.

Second, the new administration should adopt a process for bringing the Comprehensive Test Ban Treaty (CTBT) into effect. Nine of the 44 states whose ratification is required for the treaty to enter into force, including the US and China, have yet to ratify it, so it is currently not in force. The new administration should initiate a timely, bipartisan, congressional review of the value of the CTBT to US security. The International Monitoring System (IMS), which comprises more than 320 seismic, hydroacoustic, infrasound and radionuclide monitoring stations around the world for identifying and locating treaty violations, has been greatly strengthened since the US Senate questioned its adequacy when refusing to ratify the CTBT in 1999.

Since then, the IMS has been strengthened by additional stations so that it is now approximately 90% complete. The IMS impressively displayed its sensitivity and effectiveness by rapidly locating, identifying and determining the very low yield of a test explosion by North Korea in October 2006. The US has also made considerable technical progress over the past decade in maintaining high confidence in the reliability, safety and effectiveness of the nation’s nuclear arsenal under a test ban. It can be demonstrated that the CTBT not only meets US national-security requirements but enhances security worldwide by constraining further developments and deployments of these weapons with the potential of such devastating destructiveness. Other nations have made it clear that they are looking to the US for leadership to bring the treaty into force.

Vision for the future

It will be a difficult challenge to turn the goal of a world without nuclear weapons into a practical enterprise. But it has also become clear that a global effort to reduce nuclear danger and prevent proliferation of nuclear weapons will require both nuclear- and non-nuclear-weapon states to embrace the vision of Reykjavik as an essential part of the process. Many of the non-nuclear nations have made it clear that they are willing and eager to enter into such co-operative efforts with the US and other nuclear-weapon states, but only if they see the world moving away from the current two-tier system of a small number of nuclear nations and many non-nuclear nations towards a level playing field with a common vision of a world free of nuclear weapons.

The necessity of embracing the vision of Reykjavik was emphasized in two letters in the Wall Street Journal by George Shultz, Henry Kissinger, William Perry and Sam Nunn (4 January 2007 and 15 January 2008). In endorsing the vision of a world free of nuclear weapons, and describing the essential steps toward achieving it, they wrote in the first letter that “Without the bold vision, the actions will not be perceived as fair or urgent. Without the actions, the vision will not be perceived as realistic or possible.” With these two steps outlined above, President Obama has a historic opportunity to start down a practical path towards achieving his stated goal of “eliminating all nuclear weapons”.

The science of fine art

When I first saw the painting of an Elizabethan woman — thought possibly to be a portrait of Queen Elizabeth I herself — it was split completely down the middle, with paint flakes hanging off like an outcrop and its two halves curved like a shield. The heating system in the National Trust-owned house in the UK where it was on display behaved erratically in the winter, and the relative humidity had dropped dramatically. The 5 mm thick painted wood panel responded by warping so severely that its frame eventually restrained it, forcing the panel to crack under the pressure.

Conservators and conservation scientists play a key role in physically preserving important parts of our cultural heritage. With the Elizabethan painting, our team’s remit here at the Courtauld Institute of Art in London was to repair the split, find out more about the painting’s provenance, understand its environmental response and provide a suitable mount to protect it from future damage. To do this, we carefully realigned the two halves of the panel and rejoined them with a polyvinyl-acetate adhesive, taking care not to lose the flakes of paint clinging precariously to each side. A surface fill of chalk and gelatine covered the join, which was then retouched using a hydrocarbon compound and dry pigments, before finally being varnished.

We monitored the movement of the panel by simply marking out its profile on graph paper and found it responded almost immediately to small changes (5%) in relative humidity. We were able to slow down this response by applying a coating of ethylene vinyl acetate to the back of the painting, building a flexible support for it and placing it in a sealed, glazed frame. It has now been returned to Trerice in Cornwall, where it is again on display.

Bringing art and physics together

I was first attracted to conservation science by the opportunity to work hands-on with fascinating and often beautiful objects of cultural heritage — each presenting a plethora of interesting and demanding problems for scientists as well as art historians or curators. The complexity and individuality of each object requires us to understand and integrate ideas from many fields. To conserve an object, many different factors must be considered, including its aesthetic, provenance and history; the artist’s intent, choice of materials and original technique; and the object’s physical condition.

I studied both art and science at A-level, and I always assumed they were facets of the same universe. I chose to study physics at Imperial College London because I liked the philosophical as well as the mathematical aspects of the subject. After I finished my undergraduate degree, I decided to do a Master’s degree in applied optics at Reading University because it brought together many of my interests, from the theory of colour and vision to the creativity of designing experiments and specialized lenses.

After I left university and started working on optical-systems design at the Rutherford Appleton Laboratory and then at Chelsea Instruments, I saw an advert for a postgraduate conservation course, and really became aware that I could combine science and art. However, funding was not available at the time so instead I started a PhD in mechanical engineering at Imperial, researching techniques used to look for defects in ceramic tiles. I then realized these same techniques could be applied to non-destructive testing of works of art, so I contacted the scientific departments of the National Gallery in London and the Tate to find out more about the problems encountered in paintings. Seeing the work they did made me decide that this was the area I wanted to work in, so I persuaded my supervisor to let me change my PhD topic to cover the physical properties of canvas paintings.

I soon found out that there are numerous ways in which to apply my physics knowledge to this field. For the past 16 years I have been using a technique called electronic speckle pattern interferometry, which employs lasers and interferometric imaging to measure the strain induced in paintings. I have also experimented with other methods like pulsed thermography and infrared optical coherence tomography to identify subsurface features and adhesion between layers in works of art, and used multispectral imaging to understand artists’ materials and techniques.

Patterns at work

I joined The Courtauld, which consists of the Courtauld Gallery and the Institute of Art, in 2000. The Gallery houses a famous collection of Impressionist and Post-Impressionist paintings (including works by Manet, Monet, Cezanne and Renoir) and the Institute of Art is a college of the University of London that specializes in art history and conservation. My research interest is in non-invasive techniques for measuring the physical condition of paintings, and developing methods for structural conservation treatments.

Working within an academic environment means that the flow of students, lectures and exams provides an overall structure to the year, but beyond that my day-to-day activities vary quite a lot. Our postgraduate students treat paintings starting in their first year, which requires a lot of studio supervision. So on an average afternoon, I might be recording an infrared image to check for any drawings underneath the paint, using ultraviolet light to identify retouching and varnish, or using a technique called energy-dispersive X-ray spectroscopy (EDX) to identify chemical elements in the paint layers. Equally, I could be removing a painting from its wooden support, mending tears, designing a mount for a panel painting, or undertaking an environmental survey at a historic house where paintings — like the Elizabethan portrait — are displayed.

Careers in conservation

Conservation science is a relatively small field, and there is a lot of collaboration between institutions and individuals. In general, paintings conservators work directly on an object; so depending on what is required, they may remove a degraded varnish, repair a tear in the canvas, fill and retouch where paint has flaked off, or, as with the Elizabethan painting, rejoin a wooden panel that has split. Conservation scientists, in contrast, usually take a more indirect approach. For example, we may analyse paintings using X-rays (see “Underneath the surface”), monitor the movement of a painting due to environmental changes with optical or mechanical techniques, or use small original samples or replicas to investigate new cleaning methods in a laboratory setting.

Research and practice are more closely interrelated than in many scientific fields, which makes the work very satisfying. I keep abreast of emerging techniques in applied physics and engineering, and I have long-term collaborations with conservators, scientists and engineers from several institutions, including the National Physical Laboratory, Imperial College London, the Tate and National galleries and the Museum of Modern Art (MoMA) in New York.

For a physicist, there are many ways to work in the field either as a conservator or as a conservation scientist, or as both. The principal employers of conservation scientists are the scientific departments or preventive conservation sections of major museums and public collections in Europe and the US. Many come into the profession as I did, through doing a PhD at a university with external links with a museum. Scientists are sometimes employed at a junior level directly after completing undergraduate or postgraduate courses, without prior training in a conservation-related field; they then learn on the job.

There are also sometimes research-assistant posts in conservation-science research projects. At The Courtauld, we have recently had two projects: one investigating artists’ materials and techniques using microscopy, Raman spectroscopy and EDX spectroscopy; the other working with ultraviolet lasers to investigate their suitability for cleaning 19th-century paintings.

For those wanting to work hands-on with objects as conservators, then a postgraduate training course is the recognized route. There are two- and three-year postgraduate courses available in many areas — including easel paintings, wall paintings, paper, objects, stained glass, preventive conservation and archaeology — for which a first degree in physics is appropriate. Usually, after finishing their postgraduate course, students work on short-term contracts to build up experience. Conservators are employed in museums and galleries or work privately in many countries. Finding permanent posts at institutions is more difficult, but most conservation-trained scientists do eventually find full-time work.

For those wanting to see if conservation might be for them, a good start would be to read the technical bulletins published by the National Gallery, British Museum or similar institutions outside the UK. Several museums and galleries also maintain good conservation webpages, and two conservation-community websites, icon.org.uk and iiconservation.org, contain a number of useful resources. The most important step, however, is to go to museums, galleries and cultural-heritage sites to look at things and find out what really interests you.

Web life: The Internet Plasma Physics Education Experience

What is it?

An educational outreach site maintained by the Princeton Plasma Physics Laboratory in the US, IPPEX features several interactive, game-like tools (applets) for exploring the physics of fusion, the doughnut-shaped “tokamak” reactors used in fusion experiments around the world, and related topics.

What are the simulations like?

Slider bars and pop-up graphs on the Virtual Tokamak applet allow wannabe fusion scientists to determine how plasma density, magnetic field and auxiliary heating in their simulated reactor will evolve over a 20 s “shot”. Once you are happy with your choices, you can fire up the tokamak and watch the program generate a time-dependent graph of how much power your reactor produces. In the Magnetic Confinement applet, the aim is to keep the plasma within the tokamak cross-section by switching magnets on and off. Additional applets appear in tutorials illustrating basic principles of physics like electricity and magnetism.

Where is the physics in these games?

IPPEX head Andrew Zwicker notes that the Virtual Tokamak runs a “fairly sophisticated” performance calculation based on a potential reactor design, albeit with a few computational simplifications. An obvious first step on the simulated tokamak is to crank all variables to their maximum values, but be warned: if you exceed the program’s limits on plasma density or temperature, then you lose control of the plasma and your “score” (a number related to the amount of fusion power divided by the heating power) drops to zero. Similarly, magnetically confining a “mildly unstable” plasma is fairly easy, but “totally out of control” plasmas are more of a challenge. A good analogy among plasma-physics insiders is that confining a plasma is like trying to hold jelly together with rubber bands, and the Magnetic Confinement applet helps drive this point home. The lessons learned are mostly qualitative, however.
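As a rough illustration of what the applet’s score represents, the sketch below computes a toy fusion gain Q as fusion power divided by heating power, and zeroes the score when operating limits are exceeded. The scaling law and the limits are made-up placeholders chosen only to mimic the game’s behaviour; they are not IPPEX’s actual performance calculation.

```python
# Toy fusion-gain "score": Q = fusion power / external heating power.
# The n^2 T^2 scaling and the limits below are illustrative placeholders,
# not the model used by the IPPEX Virtual Tokamak.
def score(density, temperature_keV, heating_power_MW,
          max_density=2.0e20, max_temperature_keV=30.0):
    if density > max_density or temperature_keV > max_temperature_keV:
        return 0.0  # limits exceeded: the plasma is "lost" and the score drops to zero
    fusion_power_MW = 1e-42 * density**2 * temperature_keV**2  # placeholder scaling
    return fusion_power_MW / heating_power_MW

print(score(1.0e20, 20.0, 40.0))  # within limits: a finite gain (0.1 here)
print(score(3.0e20, 20.0, 40.0))  # too dense: 0.0
```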

Who is it aimed at?

The physics is described on a very basic level, so that even secondary-school students should have little difficulty in following it. The format of IPPEX is ideal for teachers wishing to incorporate the site into lessons on fusion or renewable energy. In addition to the applets, the site also hosts a series of pages where students can explore real fusion data from low-confinement deuterium plasmas, answer a series of questions and submit these answers to a “Fusion Wizard” for online evaluation. In a few months, live data from the laboratory’s NSTX experiment will replace the archived shots, bringing the fusion-analysis exercises even closer to “real physics”. More advanced users may find the lack of in-depth scientific explanations frustrating, but should still enjoy playing with the applets.

Why should I visit?

The site has won several science-education awards, and it is not hard to see why. The Virtual Tokamak, in particular, is surprisingly addictive and requires a fair amount of thought to master. The hints page contains a few suggestions for boosting your score (the current maximum is around 148), but for the most part it simply urges users to play with the variables in a systematic fashion — sound advice for any experimental scientist.

Graphane makes its debut

The “wonder material” graphene burst onto the scene five years ago — and the sheet of carbon just one atom thick continues to wow physicists with its growing list of remarkable properties. Now, a team including the UK-based research group that discovered graphene has created a new material called graphane by adding hydrogen atoms to their original discovery.

As well as being an insulator that could prove useful for creating graphene-based electronic devices, graphane might also find use as a hydrogen-storage medium that could help hydrogen-powered vehicles travel further before refuelling.

Despite being extremely thin, graphene has great physical strength as well as being an excellent conductor of both heat and electricity. What’s more, it is a semiconductor with electronic properties that can be adjusted by simply applying a voltage across a region of a graphene sheet — rather than by introducing chemical impurities as in silicon. As a result, some researchers believe that it could be used to create transistors that are smaller and faster than silicon-based devices.

Further on a tank of graphane?

Thanks to its low mass and large surface area, graphane has also been touted as an ideal material for storing hydrogen fuel on vehicles. Finding an economical way of storing enough hydrogen for a reasonably long journey is a major challenge because liquefying the gas (as is done with propane) is prohibitively expensive in terms of both money and energy.

However, making graphane has proved difficult. The problem is that the hydrogen molecules must first be broken into atoms, and this process usually requires high temperatures that could alter or damage the crystallographic structure of the graphene.

Now, a team led by Andre Geim and Kostya Novoselov at the University of Manchester has worked out a way to make graphane by passing hydrogen gas through an electrical discharge. This creates hydrogen atoms, which then drift towards a sample of graphene and bond with its carbon atoms.

The team studied both the electrical and structural properties of graphane and concluded that each carbon atom is bonded with one hydrogen atom. It appears that alternating carbon atoms in the normally flat sheet are pulled up and down — creating a thicker structure that is reminiscent of how carbon is arranged in a diamond crystal. And, like diamond, graphane is an insulator, the team found — a property that could be very useful for creating carbon-based electronic devices.

Fine tuning graphene’s properties

“This is the first step in being able to fine tune the electronic properties of graphene by attaching various species to its scaffolding,” said Novoselov. “It is a new look at graphene, if you like, with far reaching and promising consequences.”

The next step would be to learn how to control the electronic properties by adding other chemicals, with perhaps different arrangements of these species on graphene’s surface, according to Novoselov.

“The modern semiconductor industry makes use of the whole periodic table, from insulators to semiconductors to metals. But what if a single material could be modified so that it covers the entire spectrum needed for electronic applications?” added Geim. “Imagine a graphene wafer with all interconnects made from highly conductive, pristine graphene whereas other parts are modified chemically to become semiconductors — and work as transistors — or become insulators.”

And that’s not all: the team also showed that the reaction is reversible because graphane can be converted back into graphene by heating it up so that the hydrogen is removed. This property, plus the high hydrogen density and low mass of graphane, could make it a candidate for hydrogen storage.
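The “high hydrogen density” claim can be put in rough numbers with a back-of-the-envelope estimate of our own. With one hydrogen atom per carbon atom, fully hydrogenated graphane stores a hydrogen mass fraction of

\[
\frac{m_{\mathrm{H}}}{m_{\mathrm{C}} + m_{\mathrm{H}}} \approx \frac{1.0}{12.0 + 1.0} \approx 7.7\%
\]

by weight, a sizeable fraction, although how easily that hydrogen could be loaded and released under practical conditions is a separate question.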

Bloggers versus journalists

By Hamish Johnston

…is it hype or reporting a neat idea?

Sorry for the navel gazing but I thought I would point out an interesting discussion of one of our news stories over on Chad Orzel’s Uncertain Principles blog.

The article in question is about a proposal for using ultracold atoms to make precise measurements of neutrino mass.

The question that Chad asks is whether it is appropriate for us to report as “news” what is really just an interesting idea that may or may not ever come to fruition. By doing so, are we guilty of hyping the importance of the proposal?

We decided to go with the story for two reasons — the first is that this experiment is a very clever way of using developments in one field of physics (ultracold atoms) to solve a fundamental problem in a seemingly unrelated field (particle physics).

The second reason — perhaps a bit more woolly — is that this proposal comes from a respected experimental physicist, which suggests to me that there is at least a chance that it could be realized.

Chad alludes to the idea that a blogger with expertise in the field of ultracold atoms would probably take a more cautious approach to reporting this proposal because they would have a better understanding of the technical challenges involved.

However, I’m guessing that if you asked a circa 1970 semiconductor physicist whether it would be possible to mass-produce CMOS devices with 32 nm features, you would be given a list of seemingly insurmountable technical challenges.

I suppose what I’m saying is that there’s nothing wrong with reporting on what a reputable group of physicists thinks may be possible, without getting too caught up in the nitty gritty of why it might never come to pass.

Indeed, coming up with such ideas (and having them shot down) is an important part of the scientific process, so I don’t think that we should shy away from reporting on informed speculation.

So should we have taken a more cautious approach? I don’t think so — we did after all make it clear that this was a proposal and that it would be difficult to implement.

However, I do agree with Chad that we perhaps should have toned down the headline a bit. I like one of the suggestions put forward in the comments on Chad’s blog: put a question mark at the end of the headline.

Plot thickens for iron-arsenide superconductors

Since they were discovered a little more than a year ago, iron-arsenide superconductors have raised hopes that physicists will soon crack the difficult problem of explaining why certain materials remain superconducting at relatively high temperatures, while others do not. Now, a team of physicists in the US and China has shown that the superconductivity in a specific iron-arsenide material does not depend on the orientation of an applied magnetic field. The finding could challenge a long-standing belief among some physicists that high-temperature superconductivity only occurs when electrons are confined to move in two dimensions.

Superconductivity occurs when a material is cooled below a certain temperature and its conduction electrons form a condensate that can flow without any resistance. In a conventional, low-temperature superconductor, such as lead, this process (which involves electron pairs) is described by BCS theory, developed in 1957 by John Bardeen, Leon Cooper and Robert Schrieffer. However, BCS theory cannot explain why superconductivity persists in certain cuprate materials, some of which remain superconductors at temperatures above 100 K.

The new family of iron-arsenide-based high-temperature superconductors discovered last year does not, however, appear to fit either the BCS or cuprate models. Some physicists think that the mystery of why the cuprates are superconductors at such high temperatures could be cracked by comparing the physical properties of these new materials with those of known superconductors.

Now, Huiqiu Yuan and colleagues at Los Alamos National Laboratory in the US and the Chinese Academy of Sciences in Beijing have added an important piece to the puzzle by measuring the electrical resistivity of a single crystal of Ba0.6K0.4Fe2As2 in a strong magnetic field that could be varied between 0 and 60 T (Nature 457 565).

2D or not 2D

In all types of superconductors, the critical temperature (Tc) at which a material ceases to be a superconductor falls as the magnetic field is increased. In conventional superconductors only the strength of the field matters, not its direction. In cuprates, both the strength and direction of the field relative to the crystal lattice will affect superconductivity — suggesting that the superconductivity occurs in special 2D planes in the material.

The researchers found that the Tc of Ba0.6K0.4Fe2As2 fell from 28 K as the field was increased from zero. Surprisingly, however, the material’s Tc did not depend much on the orientation of the magnetic field relative to the material — in other words, the superconductivity is 3D. It appears therefore that the iron arsenides are more like conventional superconductors in this respect.
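One standard way of quantifying the distinction (common in the field, though not spelled out in the report above) is the anisotropy of the upper critical field,

\[
\gamma = \frac{H_{c2}^{\parallel ab}}{H_{c2}^{\parallel c}},
\]

the ratio of the field needed to destroy superconductivity when applied parallel to the crystal planes to that needed when applied perpendicular to them. Strongly layered, quasi-2D superconductors such as the cuprates have \(\gamma\) much greater than one, whereas a value close to one signals the essentially isotropic, 3D behaviour reported for Ba0.6K0.4Fe2As2.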

Directionality in the cuprates suggests that the electrons move without resistance through planes of copper and oxygen atoms, which has led some physicists to conclude that the “quasi-2D” nature of these electrons is necessary for high-Tc superconductivity. But because the material studied by Yuan’s team contains planes of iron-arsenide — yet does not have the directionality of the cuprates — the link between high-Tc superconductivity and two-dimensionality could be a “red herring”, according to Jan Zaanen of Leiden University in the Netherlands.

Not very high-Tc

Others, however, are more cautious in how they interpret Yuan’s data. Nigel Hussey of the University of Bristol, UK, points out that with a Tc of 28 K, Ba0.6K0.4Fe2As2 cannot really be compared to the cuprates as it is not really a high-temperature superconductor. Instead, he points out that such a Tc is more in line with other “3D” superconductors such as the perovskite Ba0.6K0.4BiO3 and the fulleride superconductors, which have Tc values as high as 30 K.

Hussey adds that some iron-arsenide superconductors similar to Yuan’s sample, but with Tc values as high as 56 K, appear to be more 2D in nature.

So the plot thickens for iron-arsenide superconductors and the mystery of high-Tc superconductivity lives on.
