Free papers by 2010 Nobel-prize winners

By Hamish Johnston

As you might imagine we’re all rather chuffed here in the UK after two physicists at the University of Manchester won the 2010 Nobel Prize in Physics.

To celebrate, IOP Publishing (which brings you physicsworld.com) has made all papers published in its journals by Andre Geim and Konstantin Novoselov free to download.

You can dig into the papers here.

Clippers set sail for space

Artist's impression of the Thales Alenia Space interplanetary clipper

Engineers at Thales Alenia Space in France say it is possible to slash the amount of time needed to transport planetary data back to Earth by using spacecraft propelled by solar radiation. These “data clippers” could in principle shuttle continuously between Earth and the outer planets, returning large volumes of data decades earlier than is possible with conventional radio transmissions.

Satellites studying planetary bodies that are relatively nearby, such as the Moon and Mars, can transmit data back to Earth fairly quickly via a radio link. Missions observing more distant planets, however, face a problem: because radio signals weaken as 1/r², over these greater distances the signal becomes so faint that transmission must be slowed down in order to guarantee that radio antennas on Earth can recover the data.

As an example, says Thales’ Joël Poncy, a full high-resolution map of Jupiter’s moon Europa or Saturn’s moon Titan would comprise about 10–20 terabytes of data. However, data could not be sent via a radio link faster than about 1 gigabyte per day, which, he points out, means a transmission time of around half a century. More advanced optical links would be a little quicker, he says, perhaps taking around 20 years to complete the dispatch.
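The half-century figure follows directly from those two numbers. A quick check, taking Poncy's round figures (a 20 terabyte map and a 1 gigabyte-per-day radio link) at face value:

```python
# Back-of-the-envelope check of the half-century transmission time quoted
# above, using the article's round figures (not a link-budget calculation).

TB = 1e12  # bytes
GB = 1e9   # bytes

map_size = 20 * TB    # upper estimate for a full map of Europa or Titan
radio_rate = 1 * GB   # bytes per day over a deep-space radio link

days = map_size / radio_rate
years = days / 365.25
print(f"Radio link: roughly {years:.0f} years")  # ~55 years, about half a century
```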

Sailing on a photon breeze

Poncy and colleagues at Thales have studied an alternative – data clippers. These spacecraft would use large lightweight “sails” that are pushed forward by the very slight but continuous radiation pressure of the photons emitted by the Sun. With onboard steering devices these clippers could be directed around the solar system so that they pass close to conventional spacecraft orbiting distant planets or moons, upload data from the spacecraft to an onboard flash memory using a laser beam and then perform a flyby of Earth, during which they download the data, again using the laser, to a ground station.

Because the clipper would pass within a few tens of thousands of kilometres of both the orbiting spacecraft and Earth, the divergence of the laser beam would be small enough to guarantee a fast data-transmission rate – up to 1 gigabyte per second – and the data transport would therefore be limited simply by the time that it takes the clipper to travel from the planetary body back to Earth.
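To see why the link itself ceases to be the bottleneck, consider the same 20 terabyte map sent over the 1 gigabyte-per-second laser link (again an illustrative sum using the article's round figures):

```python
# Transfer time over the flyby laser link, using the article's figures.
# Illustrative only; ignores flyby geometry and link-budget details.

TB, GB = 1e12, 1e9    # bytes

map_size = 20 * TB    # upper estimate of a full Europa/Titan map
laser_rate = 1 * GB   # bytes per second during a close flyby

hours = map_size / laser_rate / 3600
print(f"Laser download: about {hours:.1f} hours")
```

A few hours of downlink versus decades of radio transmission is why the clipper's years-long cruise back to Earth dominates the total delivery time.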

Speaking to delegates at the European Planetary Science Congress in Rome recently, Poncy said that he and his colleagues have proved the viability of data clippers by carrying out orbital mechanics calculations. They worked out that a data clipper could reach Europa within about six years, after accelerating by making several orbits close to the Sun, and that it would then take a further three years to put itself on the right trajectory to make a flyby of Earth. As he pointed out, that figure of three years compares favourably with the roughly 50 years that would be needed using current radio technology. The clipper could then in principle perform many more such upload and download cycles, visiting a different planetary orbiter each time.

Strong sailcloth needed

Japan's space agency, JAXA, currently has a solar-sail-powered spacecraft, known as IKAROS, cruising towards Venus. According to Poncy, however, several technological hurdles need to be overcome in order to make solar sails suitable for data clippers. These include the development of a lightweight but strong material from which to make the sails – one possibility being a woven mesh of carbon nanotubes. Among the other challenges are how to unfold the huge 150 × 150 m sails in space and how to make the spacecraft sufficiently manoeuvrable so that they can approach the planetary orbiters without using propellant.

The development of data clippers is likely to take about 20 years and cost between €100m and €200m. Poncy says that government support will be essential, although he points out that his company has yet to make a formal request for funding from the European Space Agency. Speaking to physicsworld.com, ESA’s director of science and robotic exploration David Southwood described data clippers as “an interesting concept” but one “for the day after tomorrow rather than tomorrow.” He also queried Poncy’s claim that downloading data is “the major design driver” for interplanetary missions, maintaining that it is, in fact, “one problem of quite a few.”

Graphene pioneers bag Nobel prize

The 2010 Nobel Prize in Physics has been awarded jointly to Andre Geim and Konstantin Novoselov “for groundbreaking experiments regarding the two-dimensional material graphene”.

Both physicists work at the University of Manchester in the UK.

In a telephone interview with Swedish journalists minutes after the announcement, Geim said that he was answering e-mails when he found out about the award. “When I got the call, I thought ‘oh shit’, because it is a life-changing exercise.”

Graphene is a sheet of crystalline carbon just one atom thick. Many physicists believed that a 2D crystal like graphene would always roll up rather than stand free in a planar form, but in 2004 Geim and Novoselov brought to an end years of unsuccessful attempts to isolate graphene, and were able to visualize the new crystal using a simple optical microscope.

The material is very strong and an excellent conductor of heat and electricity. As such it is often described as a “wonder material” with many possible technological applications, from ultra-fast transistors to DNA sequencing. When asked what his dream application for graphene is, Geim replied, “That’s a difficult question…I don’t want to pick out any particular applications, there are so many.”

Ahead on intuition

“[Novoselov and Geim] really deserve the prize,” said Andrea Ferrari of Cambridge University, who started working on graphene shortly after its discovery. “They are way above other people in terms of their intuition and they have done key work in all the main subfields – looking at graphene’s electronic, mechanical and other properties.”

Ferrari also told physicsworld.com that the two physicists were very generous about sharing their discovery with other scientists. “From day one they were completely open about how to make graphene,” he said. “They trained the first generation of physicists in how to make graphene by inviting scientists to Manchester.”

The two researchers discovered the material by using a piece of adhesive tape to peel a single atomic layer off a piece of graphite – a process known as micromechanical cleavage or the “Scotch tape method”.

The pair then worked out how to make field-effect transistors using the material and discovered that electrons in the device were able to travel ballistically – that is, without being scattered – from the source to the drain electrode at room temperature.

As famous as silicon

In principle, ballistic transistors could operate much faster than conventional devices made of silicon and this discovery led to a flurry of research into the electronic properties of graphene that shows no signs of abating.

In 2005 Geim and Novoselov showed that the electrons in graphene behave like relativistic particles called “Dirac fermions” that have no rest mass. The pair also observed a new “half-integer” quantum Hall effect in the material.

More recently, Geim and Novoselov have shown that graphene has the ideal optical properties to form the transparent electrodes in liquid-crystal displays (LCDs); fabricated tiny quantum dots from graphene; and developed a new way of manufacturing sizable quantities of graphene. Just last year the pair created a new material called graphane by adding hydrogen atoms to their original discovery.

Surge of interest

The discovery of graphene triggered a surge of interest in the wonder material. Other researchers, for example, have found that graphene not only conducts heat very well but is also the “strongest material in the world”.

At a practical level, one team of scientists has created a new kind of chemical sensor by combining graphene with DNA, while another group used the material to gain a better understanding of surface-enhanced spectroscopy.

Novoselov was born in the Soviet Union in 1974 and did a PhD at the University of Nijmegen in the Netherlands before joining the University of Manchester in 2001, where he is Leverhulme Research Fellow. He is a British and Russian citizen.

Born in the Soviet Union in 1958, Geim did a PhD at the Institute of Solid State Physics in Chernogolovka, Russia. He was associate professor at the University of Nijmegen in the Netherlands before joining the University of Manchester in 2001, where he is director of the Centre for Mesoscience and Nanotechnology. He is a Dutch citizen.

‘We’ve struck gold’

Speaking to physicsworld.com in 2006 Geim said, “In the different areas that I’ve worked in for the last 20 years I’ve been searching for something big. I think with graphene at last I’ve found it. Before I relied on professionalism or hard work but never had the luck. Now at last I think I’ve been lucky and that we’ve struck gold.”

Geim is also famous for his 1997 “flying frog” experiment in which he and his colleagues in Nijmegen levitated a frog using a powerful magnet. He shared the 2000 Ig Nobel prize (with Michael Berry of the University of Bristol) for his efforts and is the first individual to win both awards.

Berry told physicsworld.com, “I’m delighted that my fellow Ig Nobelist has been Stockholmed. I knew it was only a matter of time, and it’s good they didn’t make him wait decades. Knowing him, it won’t spoil his scientific creativity – he has a great deal of science left in him.”

The prize is the second in as many days for UK-based scientists and comes at a time when British science is under threat from funding cuts. Marshall Stoneham, president of the UK’s Institute of Physics, said that the physics prize shows the “strength of the British science base. It confirms at the highest level the excellence of UK physics”.

The Nobel prize is worth SEK10m and will be presented at a ceremony in Stockholm on 10 December.

Further information

Download three feature articles about Geim, Novoselov and their work on graphene here:

“Beyond the wonder material” by Konstantin Novoselov (PDF, 1 MB)
The Nobel winner describes the amazing properties of graphene’s chemical cousin graphane

“A physicist of many talents” (PDF, 140 KB)
Andre Geim talks to Physics World about his wide-ranging career

“Drawing conclusions from graphene” (PDF, 1.6 MB)
Antonio Castro Neto, Francisco Guinea and Nuno Miguel Peres explore the fascinating structure of graphene

Watch a lecture by Geim on graphene

“Graphene — exploring carbon flatland”
Andre Geim gives a video lecture on the importance of graphene at the 2008 Condensed Matter and Materials Physics conference of the Institute of Physics

Should you win a Nobel if your views aren't always conventional?

By Hamish Johnston

There’s just 18 hours and 21 minutes until the physics Nobel is announced – or so says the tacky countdown on the Nobel Foundation website.

So there’s time for just one more Nobel-related blog entry.

Yesterday’s Observer had an interesting article about the British astrophysicist Fred Hoyle, who famously didn’t share the 1983 Nobel Prize in Physics with Willy Fowler and Subrahmanyan Chandrasekhar.

Fowler bagged his half for his work on nucleosynthesis – the process by which stars create heavy elements out of hydrogen. He was apparently shocked to learn that his long-time collaborator Hoyle was snubbed by the Nobel committee.

Why? According to the science writer Robin McKie, it’s because Hoyle believed, among other things, that outbreaks of flu are sometimes caused by microbes from outer space.

Should you win a Nobel if your views aren’t always conventional? The answer is apparently no.

You can read more here.

Graphene shines a light on surface-enhanced spectroscopy

Surface-enhanced Raman scattering (SERS) is a powerful way to identify molecules at very low concentrations – a technique that has proven very useful in forensics, medical diagnostics and identifying new drugs. But despite its success, scientists have struggled to understand the physics behind how SERS works.

Now researchers in the UK and Greece have developed a new model that shows how tiny metal discs can greatly enhance SERS signals. They claim their work could lead to a better understanding of the physics underlying SERS.

In recent years researchers have discovered that they can boost interactions between light and matter by harnessing collective oscillations of surface electrons called surface plasmons. Light fields are enhanced when they are resonant with these plasmons, leading to SERS and other surface-enhanced techniques. Indeed, scientists have already succeeded in boosting Raman-scattering signals by as much as 10¹⁴ when molecules sit on random nanostructure “hotspots” on metal surfaces.

However, scientists don’t fully understand which nanostructures make the best hotspots or how to create SERS substrates with uniform enhancements over large areas. This is because most SERS systems studied before now were based on random nanostructures, whose properties varied from experiment to experiment. This non-uniformity also made quantitative comparisons between theory and experiment difficult.

To overcome these problems, Andrea Ferrari and colleagues at the University of Cambridge along with researchers at the University of Manchester and the University of Ioannina have studied SERS using metal nanostructures grown on graphene. A sheet of carbon just one atom thick, graphene was chosen because it provides large, uniform and virtually defect-free surfaces. Furthermore, the Raman spectrum of graphene is well known.

The team created square arrays of gold dots on the graphene with a separation between dots of 320 nm. The dots were about 80 nm thick and had a radius of 210 nm in some arrays and 140 nm in others. The researchers then compared the Raman spectrum of bare graphene to the spectrum of graphene with dots, and found a significant enhancement when dots are present.

Modelling SERS

To gain a better understanding of why the enhancement was occurring, the team modelled its system by considering metal dots of various sizes placed on the graphene. They then solved Maxwell’s equations for each sample using the “finite difference time domain method”.

Ferrari and colleagues observed significant enhancements of the Raman signal and found that the enhancement is inversely proportional to the tenth power of the distance between graphene and the centre of a metal nanoparticle. “These results could help us better understand SERS physics of 2D materials,” added Ferrari. “The work also proves that plasmonic nanostructures can enhance light absorption and scattering from 2D materials, like graphene, something that could have direct applications for photodetectors and sensors.”
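That tenth-power dependence is extremely steep, which is worth seeing numerically. The sketch below uses arbitrary example distances (not values from the experiment) to show how strongly the enhancement varies with the graphene-to-particle separation:

```python
# Illustration of the reported scaling: enhancement ∝ d**-10, where d is the
# distance between the graphene and the centre of a metal nanoparticle.
# The distances below are arbitrary example values, not experimental ones.

def enhancement_ratio(d_near, d_far):
    """How much stronger the enhancement is at d_near than at d_far,
    assuming enhancement falls off as the tenth power of distance."""
    return (d_far / d_near) ** 10

# Halving the separation boosts the signal roughly a thousand-fold:
print(enhancement_ratio(40, 80))  # 2**10 = 1024
```

A dependence this steep is why the geometry of the nanostructures matters so much, and why uniform, well-characterized substrates such as graphene make quantitative comparison with theory possible.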

Spurred on by these findings, the team will now try to optimize the SERS substrate to achieve even bigger enhancements by using different metals and dot shapes. “We will also implement plasmonic nanostructures on various devices made from graphene,” revealed Ferrari.

“We are confident that this research will become a key reference in the SERS field and will stimulate follow-up studies,” Ferrari added.

The work was reported in ACS Nano.

Is Britain facing a physics brain drain?

By Hamish Johnston

The grass is always greener on the other side of the street.

That old proverb applies to physics research as much as anything else – especially in many European countries where austerity measures are ravaging research budgets.

The UK is no exception and today the Guardian is running a two-page spread about an impending brain drain. The piece caught my eye because it profiles two physicists at opposite ends of the career ladder – Brian Foster and Tom Whyntie. You can read their stories here.

The main article describes an “insidious grinding down of the UK research community” and suggests that UK universities may soon have to shed as many as 30 departments. Individual universities such as Newcastle and Liverpool can look forward to losing research funds totalling £4m and £3.5m per year respectively, claims the article. Less prestigious institutions could be cut off completely.

It sounds like many British scientists will be looking abroad for funds – but if you take another country’s money, they usually expect you to live there.

Not a problem if you luck out and get a job at the University of California at Santa Barbara or the University of Western Australia.

But you will have to be made of tough stuff if you opt for the sweltering humidity of a university in booming South-East Asia or the freezing six-month winters of a campus on the resource-rich Canadian prairies. Or you could find yourself in a prosperous but tiny “college town” in the US, hundreds of miles from the nearest big city.

I suppose if all you want to do is physics, then you don’t care about your surroundings. Indeed, you probably don’t even notice the gentle climate, lovely countryside and vibrant cities surrounding most UK universities.

And I’m not saying that because I am British. I’d like to think that I am part of the Canadian brain drain, although I’m probably flattering myself!

The world’s smallest fridge

To most people a refrigerator is a cabinet full of chilled food and drink – but now a group of researchers in the UK has shown that it is possible to build a refrigerator using just two quantum particles (or even just one) in order to cool another quantum particle. They believe that such a device could be exploited in nanotechnology and that versions of it may even exist in nature.

Physicists have already built refrigerators using just a handful of atoms. However, to do so they used an external source of energy, such as a laser beam, to drive the cooling. This is equivalent to what happens inside a domestic refrigerator driven by an electric motor – both set-ups reflecting the fundamental requirement of the second law of thermodynamics that energy is needed to transfer heat from a colder to a hotter body.

The approach taken by mathematician Noah Linden and physicists Sandu Popescu and Paul Skrzypczyk of the University of Bristol is different. Rather than trying to find a practical way of building a microscopic refrigerator they instead aimed to work out how small a fridge can be in theory. They did this by showing how to remove the macroscopic external energy source and instead use three heat baths, rather than two. As Popescu explains, a domestic fridge can be considered as having two heat baths (a cold interior and a warm exterior) plus a motor, but it can also be described in terms of three heat baths, with the third, the hot bath, being used in the power station that produces the electricity.

Chilled qubits

The Bristol design comes in three varieties, one of which consists of three quantum bits, or qubits – particles that can exist in only two possible states. Two of these qubits form the refrigerator, while the third is the item to be cooled. The researchers set the system up so that the combined energy of the excited states of one of the qubits from the fridge and the qubit to be cooled, say qubits one and three, is equal to the energy of the excited state of qubit two. With 1 representing a qubit’s excited state and 0 its ground state, this means that the system states 101 and 010 have the same energy.

Quantum mechanics dictates that if all qubits are at the same temperature these system states will have an equal probability of existing, in other words the system will continuously flip between one state and the other, spending equal amounts of time in both. Cooling qubit number three, however, means dragging it down from state 1 to state 0, or, in other words, biasing the system so that it spends more of its time in 010 than it does in 101. This, say the researchers, can be done by putting qubit one in contact with a hot bath, while qubit two is in contact with a tepid bath and qubit three is in contact with a cold one. In this way, qubit one will be forced to exist at its higher energy, i.e. in state 1, which puts the system into 101. The system is then more likely to flip from 101 to 010 than vice versa, thus cooling the third qubit to a temperature below that of its bath.
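The resonance condition that makes this work can be sketched in a few lines. The energy values below are made-up illustrative numbers; all that matters is the relation E₁ + E₃ = E₂ described above, which makes the states 101 and 010 degenerate:

```python
# Sketch of the resonance condition in the three-qubit fridge. The excitation
# energies here are hypothetical illustrative values; only the relation
# E[1] + E[3] == E[2] matters, making states 101 and 010 degenerate.

E = {1: 3.0, 2: 5.0, 3: 2.0}  # excitation energy of each qubit

def state_energy(bits):
    """Total energy of a joint state such as '101' (1 = excited, 0 = ground)."""
    return sum(E[i + 1] for i, b in enumerate(bits) if b == '1')

# Flipping between 101 and 010 costs no energy, so the system can swap
# freely between them; the three heat baths then bias the swap's direction.
print(state_energy('101'), state_energy('010'))  # 5.0 5.0
```

Because the flip is energy-neutral, the only way to make it run preferentially in one direction is to couple the three qubits to baths at different temperatures, exactly as the article describes.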

The other two fridge designs are even smaller, although a little more difficult to conceptualize. One involves a qubit and a qutrit, a quantum particle that can exist only in three possible states. Smaller still is the third possibility – a qutrit on its own. This relies on the qutrit’s three states having different spatial distributions, which means that they could in principle be in contact with three distinct heat baths. “We believe this is the smallest possible system which may be called a refrigerator,” says Linden.

Practical applications

Even though their work is purely theoretical, the researchers believe that a refrigerator based on the first of their designs could in fact be built without too much difficulty – perhaps by using three different ions held in electromagnetic fields. They say that such a fridge could be used to cool parts of nanotechnological devices such as those inside a future quantum computer, and they speculate that nature might also make use of this technology. Cooling enzymes, for example, would reduce the rate at which certain chemical reactions take place and therefore regulate processes occurring inside the body.

The Bristol team has found that their fridge can cool arbitrarily close to absolute zero, but had wondered whether it would be less efficient than macroscopic devices, in other words whether the constraint of having to use only a very limited number of quantum states would put their designs at a disadvantage. But this is not what they have found. As they report in another paper not yet published in a journal (arXiv: 1009.0865), their quantum devices can reach the Carnot efficiency – the maximum possible efficiency of any thermal machine.

Isolation is a problem

Günter Mahler of the University of Stuttgart believes the work of the Bristol researchers should stimulate “further discussions and investigations”. He describes it as an interesting piece of theoretical research but questions how easily it can be translated into working devices. In particular, he points out, the different baths would have to be kept isolated in order to avoid unwanted heat leakage. “As far as we know this would require spatial separation,” he says, “which is rather counter-productive for futuristic localized nano-devices.”

The research is described in Phys. Rev. Lett. 105 130401.

Nuclear’s new generation

After a 20-year slump following accidents at Three Mile Island in the US and Chernobyl in the former Soviet Union, the power of the atom is making a comeback. In the past two years alone, China has begun constructing 15 new nuclear power stations, while Russia, South Korea and India are also initiating major expansions in atomic power. Some Western countries look set to join them: at the end of 2009, licence applications for 22 new nuclear plants had been submitted in the US, while the Italian government has said that it will reverse a ban on nuclear power and start constructing reactors by 2013.

The reasons for this resurgence are not hard to spot. One is the political importance of fighting anthropogenic global warming: nuclear reactors do not emit greenhouse gases during operation, and are more reliable than other low-carbon energy sources such as solar or wind power. The other is energy security: governments are keen to diversify their energy sources and distance themselves from politically unstable suppliers of fossil fuels. As a result of such pressures, a report published in June this year by the International Energy Agency (IEA) and the Organisation for Economic Co-operation and Development’s Nuclear Energy Agency anticipated that the world’s total nuclear generating capacity could more than triple over the next four decades, rising from the current 370 GW to some 1200 GW by 2050.

However, the agencies believe that countries must develop more advanced nuclear technologies if this form of energy is to continue to play a major role beyond the middle of the century. Today’s reactors are mostly “second generation” facilities that were built in the 1970s and 1980s. The “third generation” facilities that are gradually replacing them often incorporate additional safety features, but their basic designs remain essentially the same. Moving beyond these existing technologies will require extensive research and development, as well as international co-operation. To this end, in 2001 nine countries set up the Generation-IV International Forum (GIF), which aims to foster the development of “fourth generation” reactors that improve on current designs in four key respects: sustainability, economics, safety and reliability, and non-proliferation.

Since then, the forum has expanded to 13 members (including the European Atomic Energy Community, EURATOM) and it has identified six designs that merit further development. The hope is that one or more of these will be ready for commercial deployment in the 2030s or 2040s, having proved their feasibility in demonstration plants in the 2020s. But the scientists and engineers working on these designs have many formidable technical challenges to overcome, and must convince funders that the advantages of these advanced reactors over existing plants will be worth the billions needed to deploy them.

A slow (neutron) start

Nuclear-fission reactors generate energy by splitting heavy nuclei, with each splitting giving off neutrons that go on to split further nuclei. This process creates a stable chain reaction that releases copious amounts of heat. The heat is taken up by a coolant that circulates through the reactor’s core and is then used to produce steam to drive a turbine and generate electricity. Most existing nuclear plants are “light-water reactors”, which use uranium-235 as the fissile material and water as both the coolant and moderator. A moderator is needed to slow the neutrons so that they are at the optimum speed to fission uranium-235 nuclei.

Of the six designs for generation-IV reactors (see table 1), the closest to existing light-water reactors is the “supercritical water-cooled reactor”. Like light-water reactors, this design uses water as the coolant and moderator, but at far higher temperatures and pressures. With the coolant leaving the core at temperatures of up to 625 °C, the reactor’s thermodynamic efficiency – the ratio of power generated as electricity to that produced in the fission reactions – can reach as high as 50%. This compares favourably to the 34% typical of today’s reactors, which operate at just over 300 °C. Moreover, because the cooling water exists above its critical point, with properties between those of a gas and a liquid, it is possible to use it to drive the turbine directly – unlike in existing designs, where the coolant heats up a secondary loop of water that then drives the turbine.

Both of these improvements would reduce the cost of nuclear energy. However, a number of significant technological hurdles must be overcome before they can be implemented, including the development of materials that can withstand the high pressures and temperatures involved, plus a better understanding of the chemistry of supercritical water.

Another generation-IV option, the “very-high-temperature reactor”, uses a helium coolant and a graphite moderator. Because this design uses a gas coolant rather than a liquid one, it could operate at even higher temperatures – up to 1000 °C. This would boost efficiency levels still further and also allow such plants to generate useful heat as well as electricity, which could potentially be used to produce hydrogen that is needed in refineries and petrochemical plants (figure 1).

Some elements of this concept for a very-high-temperature reactor have already been investigated at lower temperatures in prototype gas-cooled reactors built in the US and Germany. A number of countries are also developing reactors that operate at intermediate temperatures of up to 800 °C. Until recently, one of the most advanced intermediate-temperature projects was South Africa’s “pebble-bed modular reactor”, which was designed to use hundreds of thousands of fuel “pebbles” – cricket-ball-sized spheres each containing about 15,000 kernels of uranium dioxide enclosed inside layers of high-density carbon to confine the fission products as the fuel burns. The reactor core would also contain 185,000 fuel-free graphite pebbles to moderate the reaction.

Packaging the fuel in this way confers two major potential advantages over the fuel rods used in conventional light-water reactors. One is that pebble-bed-type reactors could be refuelled without shutting them down; the pebbles would simply fall to the bottom of the reactor core as their fuel burned and then be reinserted at the top of the core, thus allowing the reactor to supply energy continuously. Pebble-bed reactors could also be designed to be “passively safe”, meaning that any temperature rise due to a loss of coolant would reduce the efficiency of the fission process and bring the reactions to an automatic halt.

However, in July the South African government decided to end its involvement in pebble-bed research. Jan Neethling, a physicist at the Nelson Mandela Metropolitan University in Port Elizabeth who has worked on developing the fuel pebbles, believes that following elections in 2009, the new government decided that the country’s urgent energy needs would be better met with coal and conventional nuclear plants – rather than with a potentially more efficient and safer but untried and problematic alternative.

One factor that may also have played a role in the South African government’s decision to abandon the pebble-bed idea is a 2008 report by Rainer Moormann of the Nuclear Research Centre at Jülich, Germany, which operated a small pebble-bed reactor between 1967 and 1988. The report indicated that radiation may have leaked out of the pebbles, making repairs and maintenance of pebble-bed reactors potentially more costly than previously envisaged. Also, customers and international investors never really got behind the South African project, mirroring the Jülich centre’s earlier failure to sell their pebble-bed technology to Russia. “We had a very good flagship project that combined the work of many scientists and engineers,” says Neethling, “but more time and money is needed to commercialize this concept.”

Opinion is divided on the significance of the South African project’s termination. Stephen Thomas, an energy-industry expert at the University of Greenwich in London, calls it a “major setback” for the development of very-high-temperature reactors, since, he says, South Africa’s efforts appeared to be more advanced than research being carried out elsewhere. However, Bill Stacey, a nuclear engineer at the Georgia Institute of Technology in the US, disagrees with this assessment, adding that South Africa was “just one of many players and not one of the major ones”. China, Japan, France and South Korea are also developing technology for high-temperature reactors, some of which is also designed to use pebbles.

For its part, the US is pursuing a variant of the pebble-bed design known as the Next Generation Nuclear Plant (NGNP). Intended to reach temperatures of 750–800 °C, the NGNP will allow for different fuel configurations, with the coated fuel kernels held either in pebbles or hexagonal graphite blocks. According to Harold McFarlane, technical director of GIF and a researcher at the Idaho National Laboratory, the US Congress approved the construction of a prototype NGNP in 2005 but has so far awarded funding only for preliminary research and development. The US Department of Energy is now trying to set up joint funding for the project with industry. The reactor is unlikely to be completed by its original target date of 2021, McFarlane says, and where it will be built still needs to be determined, although speculation so far has concentrated on sites along the Gulf Coast.

Faster neutrons

Both the supercritical water-cooled reactor and the very-high-temperature reactor would use uranium-235 as fuel. However, less than 1% of naturally occurring uranium comes in this form: the remaining fraction is uranium-238, which ends up as “depleted uranium” after uranium ore is enriched to produce reactor-grade fuel (typically about 5% uranium-235, 95% uranium-238). Significant amounts of uranium-238 are also discarded as waste after the fissile fraction of reactor-grade fuel has been consumed. Many nuclear experts therefore believe that this “open fuel cycle” is a waste of resources. It would be better, they say, to recycle the uranium and plutonium that make up the bulk of spent fuel as well as the depleted uranium in what is known as a “closed fuel cycle” (figure 2).
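The scale of the open cycle’s waste can be seen with a back-of-the-envelope enrichment mass balance. The short sketch below estimates how much natural uranium must be fed into an enrichment plant for each kilogram of 5% reactor-grade fuel; the 0.25% “tails” assay is an illustrative assumption, not a figure from this article.

```python
# Two-component enrichment mass balance: feed, product and tails
# must conserve both total uranium and uranium-235.
x_feed = 0.0072    # U-235 fraction in natural uranium (~0.72%)
x_product = 0.05   # reactor-grade fuel, as quoted in the text
x_tails = 0.0025   # depleted-uranium tails assay (illustrative assumption)

# Feed needed per kg of product: F/P = (x_p - x_t) / (x_f - x_t)
feed_per_kg = (x_product - x_tails) / (x_feed - x_tails)
tails_per_kg = feed_per_kg - 1.0

print(f"{feed_per_kg:.1f} kg of natural uranium per kg of fuel")
print(f"{tails_per_kg:.1f} kg left behind as depleted uranium")
```

Roughly nine-tenths of the mined uranium ends up as depleted tails, which is precisely the material a closed fuel cycle would put back to work.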

The most efficient way of doing this is to use “fast reactors”, which do not moderate the speed of fission neutrons. Such reactors require a far higher concentration of fissile material, usually plutonium-239, to generate sustained chain reactions than moderated, or “thermal”, reactors do. But they are far better at converting non-fissile uranium-238 into plutonium-239 via neutron absorption. In fact, fast reactors can be made to produce more plutonium-239 than they consume – a process known as “breeding” – by surrounding the reactor core with a “blanket” of uranium-238. Using uranium-238 in this way would extend the lifetime of the world’s uranium resources from hundreds to thousands or even tens of thousands of years, assuming no increase in current nuclear generating capacity. Fast reactors could also be made to burn some of the long-lived, heavier-than-uranium isotopes (known as “transuranics”) that make up spent fuel, converting them to shorter-lived nuclides and thereby reducing the volume of nuclear waste that needs to be stored in long-term geological repositories (see “A problem for the future”).

All four of the remaining generation-IV reactor designs could be configured to work as fast reactors. The main difference between them is in their cooling systems – an aspect of fast-reactor design that seems to offer plenty of scope for innovation. So far, nearly all of the world’s fast reactors have used sodium as a coolant, taking advantage of the material’s high thermal conductivity. Unfortunately, sodium reacts violently when it comes into contact with either air or water. As a result, at least two sodium-cooled reactors have been shut down for significant periods due to fires. One of them, Japan’s Monju prototype fast reactor, experienced a major sodium–air fire in 1996 and only restarted earlier this year, almost a decade and a half later. Even without such incidents, the very fact that sodium and air need to be kept apart means that refuelling and repairs are more complicated and time-consuming than for water-cooled reactors. The one commercial-sized fast reactor built to date, France’s Superphénix (figure 3), was shut down for more than half the time it was connected to the electrical grid (between 1986 and 1996).

Sodium also becomes extremely radioactive when exposed to neutrons. This means that sodium-cooled fast-reactor designs must incorporate an extra loop of sodium to transfer heat from the radioactive sodium cooling the reactor core to the steam generators; without it, a fire in the generators could release radioactive sodium into the atmosphere. This extra loop adds significantly to the cost of such reactors. Indeed, according to a recent report from the International Panel on Fissile Materials (IPFM) – a group promoting arms control and non-proliferation policies – the fast reactors constructed so far have typically cost twice as much per kilowatt of generating capacity as water-cooled reactors.

Scientists working on generation-IV sodium fast reactors are aiming to make them cheaper through improved plant layout and steam generation. They are also experimenting with additional inherent safety features, such as arranging the reactor vessel and other components so that if the system overheats, the sodium naturally transports the excess heat out of the system, not back into it. Researchers in both France and Japan hope to start operating new sodium reactors that incorporate such advanced features at some point in the 2020s.

The three non-sodium-cooled fast-reactor designs being explored by the GIF each have their own advantages, but major technological hurdles mean they are more of a long-term prospect. The “gas-cooled fast reactor”, like its thermal equivalent, would operate at high temperatures (up to 850 °C), generating electricity more efficiently than a sodium plant and raising the possibility of producing hydrogen or heat as well. Unfortunately, although the helium gas coolant in such a plant would be inert, helium is a much poorer coolant than sodium. Given the high concentrations of fissile material needed in a fast-reactor core, this makes gas-cooled designs extremely challenging to implement.

No less challenging is the “lead-cooled fast reactor”. Like helium, lead does not react with air or water, which would potentially simplify the plant design. Unfortunately, a liquid-lead coolant would corrode almost any metal it touched, so new kinds of coatings would be needed to protect the reactor’s components from corrosion. The final and most ambitious generation-IV concept is the “molten-salt reactor”. This design calls for the nuclear fuel to be dissolved in a circulating molten-salt coolant, the liquid form doing away with the need to construct fuel rods or pellets and allowing the fuel mixture to be adjusted if needed. Such a reactor could use either fast or thermal neutrons, and could also be used to breed fissile fuel from thorium (see “Enter the thorium tiger” on p40, print edition only) or burn plutonium and other by-products. However, the chemistry of molten salts is not well understood, and special corrosion-resistant materials would need to be developed.

Thinking ahead

In addition to the considerable research and development needed to implement each of the individual fast-reactor designs, new kinds of plants for reprocessing and refabricating the fuel would be required to commercialize the technology. Beyond this, however, lies an even bigger problem associated with fast reactors: the freeing up of weapons-grade plutonium during reprocessing. According to the IPFM, there is currently enough plutonium in civilian stockpiles to make tens of thousands of nuclear weapons, and the continued development of fast reactors would only add to this. Advocates of fast reactors have proposed keeping the reprocessed plutonium bound up with some of the transuranics inside spent fuel, which would in theory make it more difficult to steal because the mixed plutonium–transuranic packages would be more radioactive than plutonium alone. However, panel co-chair Frank von Hippel of Princeton University in the US points out that radiation levels in such packages would still be lower than those found in spent fuel before reprocessing. A report produced last year by a group of nine scientists working at the US’s national laboratories did not find that this transuranic bundling would significantly reduce the risk of proliferation, von Hippel adds.

Edwin Lyman of the Union of Concerned Scientists in Washington, DC, agrees. “Fast reactors should not be part of future nuclear generating capacity at all,” he says. “Around $100bn has been wasted on this technology with virtually nothing to show for it. Research and development on nuclear power should instead be focused on improving the safety, security and efficiency of the once-through cycle without reprocessing.”

This view is not shared by Stacey. Although he acknowledges that the technical challenges in commercializing fast reactors are “sobering”, he believes that the arguments in favour of closing the fuel cycle are still compelling. “You can’t provide nuclear power for a long time using 1% of the energy content of uranium,” he says, referring to the tiny fraction of natural uranium that is fissile. “And as it is, the spent fuel is stacking up and at some point we are going to need to do something about it. We can bury it but we would need sites that can contain it for a million years. That stretches credibility.”

Whether or not any of the generation-IV designs are commercialized will depend on a broad range of issues, including those beyond the purely technical. These include the need to build up a skilled workforce and maintain safety standards at existing plants, as well as the political problem of what to do with nuclear waste. The industry’s progress on constructing third-generation plants will also influence what follows them.

But as William Nuttall of Cambridge University’s Judge Business School points out, perhaps the most important factor is economics. Nuttall, an energy-policy analyst specializing in nuclear power, says it is still not clear how far governments are prepared to go in implementing policies such as carbon taxes that could make nuclear energy cost-competitive with fossil fuels. As for fast reactors, higher prices for uranium could make them more attractive in the future, he suggests, as long as their capital costs and reliability measure up.

“The question is what the scale of the nuclear renaissance will be, especially in Europe,” says Nuttall. “If it means simply replacing nuclear with nuclear, then there is probably no need to go beyond light-water reactors. But if you want to, say, replace coal with nuclear, then there could be room for generation-IV.” The important thing, he believes, is to keep the option open. “We don’t want to be in a position 20 years down the line when we wish we could have done it, but find we can’t,” he says.

Hot fusion

It has to be one of the greatest public lectures in the history of science. Indeed, the presidential address by Arthur Stanley Eddington to the 1920 meeting of the British Association in Cardiff is still worth reading for the simplicity and clarity of the arguments alone. But it is his extraordinary vision that stands out nearly a century later. Until Eddington’s lecture, it was widely accepted that the Sun was powered by gravitational contraction, converting gravitational potential energy into radiation. Some 60 years earlier, Lord Kelvin had argued that this mechanism meant the Sun could be no more than 20–30 million years old. But using simple arguments based on a wide range of observations, Eddington showed that the Sun must be much older than Kelvin’s estimate and that stars must draw on some other source of energy.

It was fortunate that just prior to Eddington’s address his Cambridge University colleague Francis Aston had measured the atomic masses of hydrogen and helium to be 1.008 and 4, respectively. Eddington argued that the Sun is being powered by converting hydrogen to helium – by combining four hydrogen nuclei (protons) with two electrons and releasing energy in the process. The exact details were wrong, of course – the process is more complicated and involves deuterium, positrons and neutrinos, for example – but the basic idea was correct: the Sun is indeed converting hydrogen to helium.

The energy released in this transformation can be calculated using E = mc2 and the measured masses of hydrogen and helium. From this, Eddington estimated that the Sun has enough energy to shine for 15 billion years – remarkably close to modern estimates of approximately 10 billion years from formation until the Sun enters its red-giant phase, when it will have exhausted the hydrogen fuel in its core. He had deduced the existence of what we now call nuclear fusion. Although Eddington cautioned about being too certain of his conclusions, he realized that the potential was staggering and he immediately saw the enormous benefits fusion could bring society. As he told his audience in Cardiff, “we sometimes dream that man will one day learn how to release it and use it for his service”.
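Eddington’s estimate can be reproduced in a few lines. The sketch below uses Aston’s mass values from the text together with standard modern values for the Sun’s mass and luminosity; the assumption that about 10% of the Sun’s hydrogen (the core) is available to burn is an illustrative choice, not Eddington’s own figure.

```python
# Fraction of rest mass released when four hydrogen nuclei fuse to helium,
# using Aston's measured masses of 1.008 and 4.
m_H, m_He = 1.008, 4.0
mass_fraction_released = (4 * m_H - m_He) / (4 * m_H)   # ~0.8%

c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg
L_sun = 3.846e26  # solar luminosity, W
burnable = 0.10   # assume ~10% of the hydrogen (the core) gets burned

energy = burnable * M_sun * mass_fraction_released * c**2  # E = mc^2, joules
lifetime_years = energy / L_sun / 3.156e7                  # seconds per year

print(f"mass fraction released: {mass_fraction_released:.2%}")
print(f"hydrogen-burning lifetime: {lifetime_years:.1e} years")
```

The answer comes out at roughly 10 billion years, in line with the modern estimate quoted above.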

Eddington’s vision is now within our reach, although it has not been easy getting this far. Along the way we have needed to develop the field of plasma physics, which studies gases heated to the point where the electrons separate from their atoms. Despite the struggles, it is fair to say that scientists have now captured the Sun’s power.

From dream to reality

The modern fusion programme really began in the closing moments of the Second World War at Los Alamos in the US, when Enrico Fermi and other members of the team that built the first atomic bombs speculated that a fusion reaction might be initiated in a plasma confined by a magnetic field. In May 1946 George Thomson and Moses Blackman of Imperial College London applied for a patent for a magnetically confined fusion device in which powerful magnets could be used to hold a plasma in place while it is heated to high temperatures.

By the early 1950s it was clear that the easiest fusion reaction to initiate is that of two isotopes of hydrogen – deuterium and tritium. To initiate significant fusion, a plasma of deuterium and tritium must be heated to temperatures of about 150 million kelvin. Some 10 times hotter than the centre of the Sun, this was a daunting goal. However, in 1997 scientists achieved it in a magnetically confined plasma at the Joint European Torus (JET) at the Culham Centre for Fusion Energy in the UK. JET produced 16 MW of fusion power while being driven by 25 MW of input power.

Eddington would no doubt be pleased with the scientific progress on his vision. But despite the successes, we are not yet at a point where we can generate commercial electricity and fusion’s home stretch still involves significant challenges. Exactly what needs to be done to make a commercial fusion power source? What are the key scientific issues? How should countries position themselves to participate in a future fusion economy? These are essential questions. Before turning to them, however, it is worth addressing the most important question of all: why bother? Perhaps other energy sources would be simpler options. In reality, there are worryingly few long-term energy sources with sufficient resources to replace the roughly 80% of our energy that is generated by fossil fuels.

In the coming decades, current nuclear-fission technology will play a critical role in generating low-carbon electricity. But in the long term, aside from fusion, only solar power and nuclear fission with uranium or thorium breeders (advanced reactors that breed nuclear fuel and so extend the resource of fission fuel) have the capability to replace fossil fuels. These technologies still need extensive research before they are ready to be deployed on a large scale. Despite this, it is clear that no energy source offers the extraordinary promise of fusion: practically unlimited fuel; low waste; no carbon-dioxide production; attractive safety features and insignificant land use. These are compelling reasons to develop fusion even if success is not fully assured.

Self-heating fusion reactors

What then needs to be done to capitalize on JET’s achievement of significant fusion power? The next stage is clearly to demonstrate that a plant producing a net amount of electricity can be constructed – something that JET was not designed to achieve. The ratio of fusion energy produced to the electrical energy consumed to initiate and sustain the reaction must be increased. This requires a self-heated plasma – one heated by the energetic helium nuclei produced in deuterium–tritium fusion (figure 1).

The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory uses a different approach to fusion than the magnetic-confinement method discussed here. The facility is designed to concentrate 500 TW of power onto a millimetre-scale fuel pellet using an array of 192 lasers. The fusion energy produced is expected to be roughly 10 to 20 times what the laser driver delivers as light. This would be a significant demonstration of fusion “burn”, i.e. self-heating. However, the NIF laser is less than 1% efficient and thus the facility is still short of the critical demonstration that net energy production is possible.
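The article’s point about NIF and net energy is a one-line calculation. Taking the quoted target gain of 10–20 times and a laser wall-plug efficiency of 1% (a generous reading of “less than 1%”):

```python
# Ratio of fusion energy out to electrical energy drawn from the grid:
# (fusion energy / laser light energy) x (laser light / electrical energy).
laser_efficiency = 0.01  # "less than 1%" in the text; 1% is the generous case

for target_gain in (10, 20):
    net_ratio = target_gain * laser_efficiency
    print(f"gain {target_gain}: {net_ratio:.2f} units of fusion energy "
          "per unit of electricity consumed")
```

Even at the top of the quoted range the facility returns only a fifth of the electricity it draws, which is why NIF falls short of demonstrating net energy production.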

For magnetically confined fusion, the crucial demonstration is at hand. Seven international partners – China, the European Union, Japan, South Korea, India, Russia and the US, together representing more than half the world’s population – are now, after years of delays, building a self-heated device called ITER at Cadarache in southern France (figure 2). Like JET, this experiment will have a magnetic configuration denoted by the Russian acronym “tokamak”. ITER will be completed in 10 years and a few years after that is expected to be producing roughly 500 MW of output power from less than 50 MW of input power – a 10-fold amplification, or “gain”, at least. One-fifth (roughly 100 MW) of the fusion power will be released as energetic helium nuclei, which get trapped by the magnetic field and self-heat the plasma. The target is to sustain this power level for a duration of 400 s or more. However, recent experiments using JET and other machines, coupled with detailed modelling, show that it should be possible to significantly increase that duration – and the gain. Even without these increases, ITER will generate industrial levels of fusion power while being largely self-heated; this is the burning-plasma regime. This demonstration of the scientific feasibility of high-gain fusion is a critical step on the road to fusion power.

But how do we know that ITER will reach these performance levels? The key physics parameter is the “energy confinement time”, τE, which is the ratio of the energy in the plasma to the power supplied to heat the plasma, where the latter is both the self-heating due to the fusion-produced helium (one-fifth of the fusion power, Pfusion/5) and the external heating (Pheat). The energy confinement time parametrizes how well the magnetic field insulates the plasma – it might be thought of as roughly the time it takes the heat put in to the plasma to work its way back out. The plasma is sustained for many energy confinement times (in principle indefinitely) by the heating. Clearly, a larger τE makes a fusion reactor a better net source of power. The energy gain is defined as Q = Pfusion/Pheat. The deuterium–tritium fusion power produced per cubic metre of plasma at a given temperature and density (the fusion power density) can be calculated using the measured fusion cross-section (the reaction rate for a given fusion collision). In the temperature range 100–200 million kelvin, the fusion power density is approximately 0.08p² MW m⁻³, where the plasma pressure, p, is measured in atmospheres.

At high pressure the fusion power is large and the plasma is entirely self-heated (Pheat = 0 and Q → ∞) – this is termed “ignition”. Heating the plasma externally (supplying Pheat) reduces the net output and complicates the reactor design. Therefore, high gain is essential. The gain of a fusion device depends on the state of the plasma – specifically the fusion product, pτE, and the plasma temperature, T. Ignition occurs roughly when pτE > 20. In ITER the central plasma pressure will reach about 7 atmospheres and the confinement time is expected to be in the range 3.5–4 s (recall that ITER’s plasma will be sustained for more than 400 s – perhaps thousands of seconds). A plot of piτE versus T enables a performance comparison for different tokamaks, where pi = p/2 is the ion pressure in the centre of the toroidal plasma (figure 3).
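Plugging ITER’s quoted numbers into these expressions gives a feel for the margins involved. A minimal check, using the power-density approximation and the rough ignition criterion from the text:

```python
def fusion_power_density(p_atm):
    """D-T fusion power density (MW per cubic metre) at 100-200 million
    kelvin, using the article's approximation 0.08 * p^2 with p in
    atmospheres."""
    return 0.08 * p_atm**2

p_iter = 7.0   # central plasma pressure, atmospheres
tau_E = 3.5    # lower end of the predicted 3.5-4 s range

print(f"power density: {fusion_power_density(p_iter):.1f} MW/m^3")

fusion_product = p_iter * tau_E  # p * tau_E, atmosphere-seconds
print(f"p*tau_E = {fusion_product:.1f} (ignition roughly requires > 20)")

# ITER's design gain, from the 500 MW / 50 MW figures quoted earlier
Q = 500 / 50
print(f"design gain Q = {Q:.0f}")
```

Even at the lower end of the predicted confinement-time range, ITER sits above the rough ignition threshold, which is why a gain of at least 10 is considered credible.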

Predictions of high power at ITER

The most challenging technical question faced by the fusion community is determining what the confinement time is and how we can be sure that it will reach 3.5–4 s. We know that the loss of heat from magnetically confined plasmas is controlled by small-scale turbulence. The turbulence consists of plasma-density and electromagnetic-field fluctuations that cause little swirls of plasma flow – eddies. The turbulent fluctuations are essentially unstable sound waves driven by the temperature gradient in the plasma. Like convection in a saucepan, eddies transport hot plasma out and cold plasma in. Progress in tokamak performance over the last 40 years has been achieved by increasingly suppressing the turbulent convection of heat and thereby increasing τE. One of the scientific triumphs of the last decade has been the ability to calculate this turbulence using high-performance computers to provide state-of-the-art simulations (figure 4).

Detailed comparisons of the simulations and measurements show that in many cases the calculations are indeed correctly capturing the complex dynamics. There is, however, still room for improvement, especially in the intriguing cases where the simulated turbulence is almost entirely suppressed. The analytical theory of this turbulence is complicated and is only now beginning to be understood. However, a qualitative understanding of the turbulent transport can be obtained from a simple random-walk argument based on the characteristics of the unstable sound waves that form the eddy structures. This argument yields the estimate τE ∝ L³B²T^(−3/2), where L is the size of the device, B is the magnetic field strength and T is the plasma temperature. Clearly, bigger devices should perform much better due to the steep L³ scaling. Indeed, empirical scaling derived from many experiments differs only a little from the simple estimate. ITER’s energy confinement time has been predicted in two ways: first, by extrapolation from the existing machines using the empirical scaling; and second, using sophisticated local-transport models derived from simulations. These predictions are expected to be accurate, with both methods giving confinement times in the range 3.5–4 s. This prediction is the basis of our confidence that ITER will reach the self-heated burning-plasma regime. We can get a qualitative feel for the extrapolation using the simple random-walk scaling: JET achieves confinement times of roughly τE ~ 0.5–1 s, and therefore ITER (which will be roughly twice as big, 30% hotter and have a field approximately 30% larger) will have roughly τE ~ 4 s.
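That closing extrapolation can be checked directly. The sketch below applies the random-walk scaling to the size, field and temperature ratios quoted in the text, taking JET’s confinement time at the lower end of its quoted range (0.5 s) as the baseline:

```python
def scaled_confinement_time(tau_ref, L_ratio, B_ratio, T_ratio):
    """Scale a reference confinement time using the random-walk estimate
    tau_E proportional to L^3 * B^2 * T^(-3/2)."""
    return tau_ref * L_ratio**3 * B_ratio**2 * T_ratio**-1.5

# JET -> ITER: roughly twice as big, ~30% larger field, ~30% hotter
tau_iter = scaled_confinement_time(tau_ref=0.5, L_ratio=2.0,
                                   B_ratio=1.3, T_ratio=1.3)
print(f"extrapolated ITER confinement time: {tau_iter:.1f} s")
```

The factor-of-nine improvement gives about 4.6 s, consistent with the 3.5–4 s predicted by the detailed models and with the “roughly 4 s” quoted above.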

Blanket engineering

Given our current knowledge, it is more than reasonable to assume that ITER will achieve its goal of a burning plasma in the mid-2020s. However, as any engineer will confirm, there is much more to commercial power generation than simply proving a design is scientifically feasible. Indeed, several components of any future fusion reactor – in particular the systems that breed tritium from lithium (the second reaction in figure 1) and convert neutron power to electrical power – have yet to be tested at any scale. The neutrons produced in deuterium–tritium reactions, which carry four-fifths of the fusion power, are not confined by the magnetic field and therefore leave the plasma and pass through the surrounding wall. Inside the wall there must be a complex system that absorbs the neutrons, extracts heat and “breeds” tritium from lithium – this is known as a “blanket”.

There are many blanket designs but they all have a few things in common: they are typically 0.5–1 m thick, separated from the plasma by a steel wall and bounded on the outside by a steel shield. The blanket contains lithium, which absorbs neutrons from fusion to breed tritium (figure 1) that is then fed back into the plasma as fuel. Also in the blanket are neutron multipliers and a coolant used to flush out tritium and heat, which is used to power a turbine and generate electricity.

The blanket must satisfy some key requirements: to be economically viable it should operate robustly at high temperature in a harsh neutron environment for many years; and for tritium self-sufficiency it must breed more tritium than the fusion reactions consume. The technologies of the blanket, as well as the wall, are becoming a major focus of the fusion programme and will represent much of the intellectual property associated with commercial fusion. These reactor-system technologies are critical for a future fusion economy – we cannot wait for ITER’s results in order to start developing them.

A prerequisite for a viable blanket–wall system is robust materials. Structural materials, breeder materials and high-heat-flux materials are needed. In typical reactor conditions the atoms in the first few centimetres of the wall facing the plasma will get moved, or displaced, by neutron bombardment more than 10 times per year. Each displacement causes the local structure of the solid wall to be rearranged. Often this will be benign but sometimes it can weaken the structure. Materials must therefore retain structural integrity in these very challenging conditions for several years. To minimize the environmental impact of fusion, the walls must also be made of elements that do not become long-lived radioactive waste following high-energy neutron bombardment.

We do not know for certain whether such materials exist, but several promising candidate materials have been proposed. For example, various special steels have been shown to have suitable structural properties in theoretical calculations and ion-beam tests undertaken at Culham and UK universities. But we will not know for sure until samples have been subjected to a fusion-type neutron-radiation environment. The International Fusion Materials Irradiation Facility (IFMIF) is an accelerator-driven neutron source being developed by the international research community to test small samples of the promising materials; its design team is based in Japan as part of the deal that brought ITER to Europe. The neutron spectrum of IFMIF will mimic the high-energy neutron spectrum of a fusion reactor. Samples will be irradiated in a beam of neutrons for several years to evaluate the changes in their structural properties.

We need a testing facility

If, as expected, ITER proves to be successful, then blanket development is probably the critical path for fusion. Blanket designs are being developed and tested with weak sources of neutrons, and it appears that these designs will breed tritium efficiently enough to be self-sufficient. But they must be tested at full neutron power before we can be confident of a reliable, commercially viable system. Although test-blanket modules will be placed in the walls of ITER in the later stages of operation, definitive tests require a continuous neutron flux of 1–2 MW m⁻² for several years, which will not be technically possible at ITER. Thus I believe that a “component test facility” (CTF) that can deliver reactor-level neutron flux over many square metres is needed to significantly accelerate the development of blanket and wall structures. For such a device to be affordable it must be compact with low power consumption.

Researchers at Culham have pioneered a compact device called the spherical tokamak that is a prime candidate for a CTF. Indeed MAST (the MegaAmp Spherical Tokamak) has achieved impressive plasma conditions at a very modest scale. Calculations and measurements suggest that MAST achieves good confinement by suppressing the turbulence by spinning the plasma at supersonic speeds. The National Spherical Torus Experiment (NSTX) at Princeton in the US also operates at about the MAST scale.

Results from these devices suggest that the spherical tokamak is an ideal candidate for a compact and affordable fusion device – i.e. a suitable candidate for a CTF. Culham and the Oak Ridge National Laboratory in the US have therefore developed conceptual designs of CTFs based on spherical tokamaks. These facilities could test whole components of the blanket and wall at full power for many years. Both Princeton and Culham are upgrading their machines to prove the viability of these conceptual designs. The MAST upgrade will deliver near-fusion conditions, sustained plasmas and a test of the new exhaust system for gaseous plasma-burn products – the Super-X divertor.

If the MAST upgrade confirms the viability of a spherical CTF, then one could be built during the early years of ITER’s operation. Wall and blanket development on the CTF coupled with ITER’s programme could enable the construction of the first demonstration reactors in the 2030s. The current international programme has no plans to build a CTF – but surely it is essential if we are to deliver commercial fusion when it is needed.

It seems inevitable, given what has been achieved, that Eddington’s dream will come true eventually – but when? Although we cannot say for sure, for a world that is hungry for energy, a reduction of the time to commercial fusion by even one decade could have an enormous impact.

At a glance: Fusion energy

  • Fusion power has the extraordinary promise of practically unlimited fuel, no carbon-dioxide production, good safety and insignificant land use
  • Controlled fusion was realized in the 1990s by the Joint European Torus (JET) and the US Tokamak Fusion Test Reactor. JET needed more energy to run than it produced – 25 MW input power to the plasma produced 16 MW of fusion power
  • We could reach net electricity production by building a reactor that can support the hot burning-plasma regime, where fast-moving fusion products self-heat the reaction, so that less input power is required
  • Simulations and measurements predict that the ITER facility being built in France will reach this regime by having a less turbulent fusion plasma and a greater volume – therefore making more fusion and losing less energy – than its predecessors
  • For commercial fusion, a wall and “blanket” for the reactor must be engineered that can withstand many years of heat and radiation without weakening

More about: Fusion energy

ITER: www.iter.org
NIF: https://lasers.llnl.gov
K Ikeda et al. 2007 Progress in the ITER physics basis Nuclear Fusion 47 6
R Pitts, R Buttery and S Pinches 2006 Fusion: the way ahead Physics World March pp20–26

Nuclear power: yes or no?

Ian Lowe

There are no universal truths in a complex question such as the future role of nuclear power. Each country has a unique energy supply and demand pattern. At one extreme, France gets over 80% of its electricity from fission reactors, so the country would find it almost impossible to do without nuclear power on any realistic timescale. At the other extreme, countries such as Australia, Portugal and Norway have no commercial reactors and limited capacity to develop the technology quickly, so it would take decades for them to develop a nuclear-power industry. Most countries belonging to the Organisation for Economic Co-operation and Development, such as the UK, are somewhere between those two extremes.

The only reason anyone would even consider building nuclear power stations in a nation that does not already have any is the recognition that climate change is a serious threat to our future. A decade ago, nuclear power was widely seen as a failed technology. Originally hailed as cheap, clean and safe, after the Chernobyl accident it was seen as expensive, dirty and dangerous. The peak of nuclear-power installation happened more than 20 years ago. Since then, cancellations and deferments have outnumbered new constructions.

If nuclear power were the only effective way of slowing climate change, then I would support it, although we would have to put a huge effort into managing nuclear waste. That problem could, in principle, eventually be solved: storing the current waste is a technical problem, and it is possible to design reactors that could burn materials now seen as waste. But even if it were solved, I would remain desperately worried about the proliferation of nuclear weapons, a social and political problem with no apparent prospect of a solution. Fortunately, we may not have to face that terrible dilemma, as there are other, much better, ways of moving to a low-carbon future.

Nuclear-waste risks
Nuclear power is certainly not a fast enough response to climate change. In Australia, for example, a strongly pro-nuclear government committee concluded that it would take 10–15 years to build one nuclear reactor from scratch. It proposed a crash programme of 25 reactors by 2050 but then calculated that this would not actually reduce Australia’s carbon-dioxide emissions; it would only slow the growth rate.

Nuclear power is also expensive. In most countries, there have to be direct or indirect public subsidies to make the nuclear option look competitive. Applying a carbon price of about £30 per tonne of carbon dioxide emitted by fossil-fuel power stations would make fossil-fuel electricity more expensive and make nuclear look more attractive, but it would also improve the relative economics of a wide range of renewable supply options. It might be true, as optimists assure us, that a promised new generation of reactors could deliver cheaper electricity, but we cannot afford to delay tackling climate change for decades.

While modern nuclear power stations do not have the technical limitations of the Chernobyl reactor, there will always remain some risk of accidents. There is community anxiety about nuclear energy because an accident at a nuclear power station poses a much more serious risk than an accident at any form of renewable-energy plant. Since nobody has yet demonstrated the safe and permanent management of radioactive waste from nuclear power stations, we can only give the public assurances that the problem will be solved in the future.

There also does not seem to be any real prospect of stopping the proliferation of nuclear weapons. Only five nations had nuclear weapons when the Non-Proliferation Treaty entered into force in 1970. Today, however, there are nearly twice as many, while a further group of countries has the capacity to develop weapons. The more countries that use nuclear technology, the greater the risk of fissile material being diverted for weapons. Indeed, Mohamed ElBaradei, the former head of the International Atomic Energy Agency, told the United Nations that he faced the impossible task of regulating hundreds of nuclear installations with the budget of a city police force. His agency has documented countless attempts to divert fissile material for improper purposes. There is a real risk of unaccountable military regimes, rogue dictators or even terrorists acquiring either full-scale nuclear weapons or the capacity to detonate a “dirty bomb” that could make an entire city uninhabitable.

The fundamental point is that there are better alternatives. Australian, European and global studies have concluded that we could reduce demand dramatically – not by turning out the lights, but simply by improving the efficiency of turning energy into services such as lighting – and get all our electricity from a mix of renewables by 2030. That is a more responsible approach to tackling climate change. The clean-energy strategy is quicker, less expensive and less dangerous and there is no risk from terrorists stealing solar panels or wind-turbine blades! A mix of renewable supply systems would decentralize energy production, thus making societies more resilient and better insulated against natural disasters or terrorist action. We also know how to decommission wind turbines and solar panels at the end of their life, at little cost and with no risk to the community. So the question for pro-nuclear advocates is, as Australian political analyst Bernard Keane put it, “Why should taxpayers fund the most expensive and slowest energy option when so many alternatives are significantly cheaper and pose less financial risk?”

Barry Brook

As China, India and other populous developing nations expand their economies, with the very human aim of improving the prosperity and quality of life enjoyed by their citizens, the global demand for cheap, convenient energy is growing rapidly. If this demand is met by fossil fuels, then we are heading for both an energy-supply bottleneck and, because of the associated massive carbon emissions, a climate disaster.

Ironically, if climate change is the “inconvenient truth” facing high-energy-use, fossil-fuel-dependent societies such as the US, Canada, Australia and many countries in the European Union, then the inconvenient solution staring back is advanced nuclear power. The answer does not principally lie, as many claim, with renewable energy sources such as solar and wind, although these technologies will likely play some role.

There is a shopping list of “standard objections” used to challenge the viability or desirability of nuclear fission as a clean and sustainable energy source. None of these arguments stands up to scrutiny. Opponents claim that if the world ran on nuclear energy, then uranium supplies would run out in the coming decades and nuclear power plants would then have to shut down. This is false. Uranium and thorium are both more abundant than tin; and with the new generation of fast-breeder and thorium reactors, we would have abundant nuclear energy for millions of years. Yet even if the resources lasted a mere 1000 years, we would have ample time to develop exotic new future energy sources.

Going nuclear
Critics argue that past nuclear accidents such as Chernobyl mean that the technology is inherently dangerous. However, this simply ignores the fact that nuclear power is already hundreds of times safer than the coal, gas and oil we currently rely on. A study of 4290 energy-related accidents by the European Commission’s ExternE research project, for example, found that oil kills 36 workers per terawatt-hour and coal 25, while hydro, wind, solar and, yes, nuclear all kill fewer than 0.2 per terawatt-hour. Moreover, the passive safety features of modern nuclear reactors do not rely on engineered intervention, removing the chance of human error and making a repeat of past serious accidents impossible. In the Westinghouse AP-1000 third-generation plant, for example, water from the core cooling tank is channelled into the reactor core by gravity in an emergency, rather than by electric pumps.

Some contend that expanding commercial nuclear power would increase the risk of spreading nuclear weapons. First, this has not been true historically. Furthermore, the metal–fuel products of modern “dry” fuel recycling using electro-refining, which are designed for subsequent consumption in fast reactors, cannot be used for bombs because it is not possible to separate pure plutonium from the mix of uranium and minor actinides. Potential bomb-makers would get only a useless, dirty, contaminated product in a mix of heavy metals. Indeed, burning plutonium in fast reactors to generate large amounts of electricity would take this material permanently out of circulation, making it the most practical and cost-effective disposal mechanism imaginable. Those opposed to nuclear energy also claim that it leaves a legacy of nuclear waste that would have to be managed for tens of thousands of years. This is true only if we do not recycle the uranium and other heavy metals – the “transuranics” – in the waste to extract all their useful energy.

At present, mined uranium is cheap. For light-water reactor technology, the total fuel cost – including mining, milling, enrichment and fuel-rod fabrication – is £13m per gigawatt per year. In unit-cost terms, that works out at about 0.13p per kilowatt-hour for uranium oxide at a price of £45 per kilogram. However, in the longer term a once-through-and-throw-away use of nuclear fuel makes no economic sense. This is because such “open” fuel cycles not only leave a legacy of long-lived actinide waste to manage, but also extract less than 1% of the energy in the uranium. Feeding nuclear waste into fast reactors would use all of the energy in uranium, and liquid-fluoride thorium reactors would access the energy stored in thorium – a 160-fold gain!
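The unit-cost figures above can be sanity-checked with a back-of-envelope calculation. The sketch below assumes, for simplicity, a 1 GW reactor running continuously at full power (real capacity factors of around 90% would nudge the result up slightly), so it reproduces the quoted ~0.13p per kilowatt-hour only to within rounding:

```python
# Back-of-envelope check of the quoted uranium fuel-cost figures.
# Assumption: 1 GW of output sustained for a full year (8760 hours).

fuel_cost_per_gw_year = 13e6           # £ per gigawatt per year (quoted figure)
hours_per_year = 8760
kwh_per_gw_year = 1e6 * hours_per_year # 1 GW = 1e6 kW

pence_per_kwh = fuel_cost_per_gw_year / kwh_per_gw_year * 100
print(f"Fuel cost: {pence_per_kwh:.2f}p per kWh")  # ~0.15p, same order as the quoted 0.13p

# The claimed 160-fold gain from closed fuel cycles implies that open
# cycles extract roughly 1/160 of the uranium's energy content:
print(f"Open-cycle extraction: {100 / 160:.2f}%")  # ~0.6%, consistent with "less than 1%"
```

The small discrepancy between 0.15p and the quoted 0.13p is what one would expect from rounding in the £13m headline figure; the point is that the three numbers (cost per gigawatt-year, pence per kilowatt-hour and the 160-fold gain) are mutually consistent.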

After repeated recycling, the tiny quantity of fission products that remained would become less radioactive than natural granites and monazite sands within 300 years. Claims that large amounts of energy (and thus greenhouse-gas emissions) would be required to mine, process and enrich uranium, and to construct and later decommission nuclear power stations, simply ignore a wealth of real-world data. Authoritative and independently verified whole-of-life-cycle analyses in peer-reviewed journals have repeatedly shown that energy inputs to nuclear power are as low as, or lower than, those of wind, hydro and solar thermal, and less than half those of solar photovoltaic panels. That is today’s reality. In a future all-electric society – including electric or synthetic-fuelled vehicles supplied by nuclear power plants – greenhouse-gas emissions from the nuclear cycle would be zero.

Embracing nuclear energy
Finally, when all other arguments have been refuted, critics fall back on the claim that nuclear power takes too long to build or is too expensive compared with renewable energy. These are perhaps the most regularly and transparently false objections thrown up by those trying to block nuclear power from competing on a fair and level playing field with other energy sources. Many environmentalists believe that the best low-carbon solution is for governments to guide us back to simpler, less energy-consuming lives. Such notions are unrealistic. The world will continue to need energy, and lots of it. But fossil fuels are not a viable future option, nor are renewables the main answer. There is no single solution, or silver bullet, for solving the energy and climate crises – but there are bullets, and they are made of uranium and thorium, the fuels needed for nuclear plants.

It is time that we embraced nuclear energy as a cornerstone of the carbon-free revolution we need in order to address climate change and long-term energy security in a world beyond fossil fuels. Advanced nuclear power provides the technological key to unlocking the awesome potential of these energy metals for the benefit of humankind and for the ultimate sustainability of our society.

Copyright © 2026 by IOP Publishing Ltd and individual contributors