
LHC gears up to create mini big bangs

After seven months of successful proton collisions at 7 TeV in the Large Hadron Collider (LHC), researchers at CERN are now reconfiguring their famous machine to collide lead nuclei within the next few days. Collisions between these heavier particles will generate the highest temperatures and densities ever recorded on Earth, recreating the early universe moments after the Big Bang.

“This shows that the objective we set ourselves for this year was realistic, but tough, and it’s very gratifying to see it achieved in such fine style,” said Rolf Heuer, CERN’s director general.

The switch from protons to lead ions marks the beginning of the main physics programme for the ALICE detector, which was specifically designed to track large numbers of particles: it can detect up to 15,000 particles per event, produced when lead nuclei collide at the centre of the detector. The extremely high temperatures at the collision points will cause protons and neutrons to break down into a dense soup of subatomic particles known as a quark–gluon plasma, a state of matter thought to have existed shortly after the Big Bang.

Go ask ALICE

One of ALICE’s main scientific goals is to characterize this quark–gluon plasma in an attempt to find out more about the nature of the strong force, one of the four fundamental forces in nature. Despite being responsible for generating 98% of the mass of atoms, the strong force is still the most poorly understood of the four.


“We will be creating the highest temperatures and densities ever produced in an experiment in these mini big bangs”, said David Evans, leader of the UK team at the LHC’s ALICE experiment. “Although the tiny fireballs will only exist for a fleeting moment (less than a trillionth of a trillionth of a second) the temperatures will reach over ten trillion degrees, a million times hotter than the centre of the Sun.”

The LHC beamlines will be run at a centre of mass energy of 2.76 TeV per colliding nucleon pair, which will generate temperatures and densities that are an order of magnitude larger than the previous record held by the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory in the US.

“At the LHC we’ll be continuing a journey that began for CERN in 1994, which is certain to provide a new window on the fundamental behaviour of matter and in particular the role of the strong interaction,” says Jürgen Schukraft, spokesperson for the ALICE experiment.

If all goes to plan, the LHC will begin circulating lead ions by the weekend before CERN engineers spend up to a week tuning the beamlines in preparation for the scientific programme. Researchers will then record data until 6 December when CERN will shut down for maintenance work over Christmas. “This will give us plenty of time,” says Evans. “At these energies, the lead collisions will generate more data in a month than proton collisions could generate in a year.”

Operation of the collider will start again with protons in February and physics runs will continue through 2011.

A day out in Telford


By Hamish Johnston

Yesterday I was up in Telford – birthplace of the industrial revolution – for Photonex and the Vacuum Expo.

It’s the first time that a vacuum event has been run alongside Photonex, which not surprisingly is focused on photonics.

Physics World is a media sponsor of the Vacuum Expo – which wraps up today – and we had a booth at the exhibition.

My first stop on the exhibition floor was the FLIR booth, where I met with Jon Chicken. Jon was showing off the firm’s latest infrared imaging systems – which you can see him demonstrating in the photo above.

Jon was very keen to talk about FLIR’s “super framing” technology – or as he prefers to call it “multiple integration time” or just “multi-IT”. The technique involves processing a stream of IR images, each with a different integration time. The technique is good for looking at subjects in which the local temperature varies over a wide range.
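The article doesn’t spell out the merging step, so here is a generic illustration – with an invented function name and made-up frame data – of one common way to combine a stack of frames taken at different integration times: keep the unsaturated pixels from each frame, convert them to counts per second using the integration time, and average.

```python
import numpy as np

def merge_multi_it(frames, integration_times, saturation=0.95):
    """Merge frames taken with different integration times into a single
    high-dynamic-range map of counts per second (generic sketch only)."""
    total = np.zeros_like(frames[0], dtype=float)
    n_used = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, integration_times):
        valid = img < saturation                 # ignore saturated pixels
        total += np.where(valid, img / t, 0.0)   # scale to counts per second
        n_used += valid
    return total / np.maximum(n_used, 1)         # average over usable frames

# Made-up stack: three frames of the same scene, integration times in seconds
frames = [np.random.rand(240, 320) for _ in range(3)]
hdr_map = merge_multi_it(frames, integration_times=[0.0005, 0.002, 0.008])
```

Short integration times keep the hottest parts of the scene on scale, while the longer ones pull detail out of the cooler regions – which is why the approach suits subjects with a wide spread of temperatures.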


So, what are some exciting applications of this latest IR technology? It could be used, for example, to measure the temperature inside a fusion reactor. Indeed, Jon told me that the firm’s systems will be going into the ITER experimental fusion reactor that is currently being built in the south of France.

Continuing in the theme of fusion, my next stop was a booth promoting the laser fusion activities of the UK’s Rutherford Appleton Laboratory and the proposed HiPER laser fusion programme.

RAL’s Ceri Brenner (right) was there to explain how HiPER is expected to use powerful lasers to implode a tiny pellet containing deuterium and tritium – creating a dense hot plasma in which nuclear fusion can occur. If all goes to plan, 10 pellets per second will be ignited at HiPER, which will result in a net production of energy – which could someday power your toaster!

Ceri is also interested in developing another practical application of laser-plasma interactions – tabletop particle accelerators. In particular, she’s looking at how protons can be accelerated to tens and maybe hundreds of MeV using a laser. The idea is that an intense pulse of laser light separates electrons from ions in a plasma, creating an extremely high electric field that can be used to accelerate charged particles such as protons.


Such a proton source could be very handy for medical therapies based on heavy charged particles. There are two challenges, however, that must be overcome. Current sources can only accelerate protons up to about 65 MeV. These could be used for treating the eye, but not for other applications, which need protons at 150–200 MeV. The other challenge is that the protons are produced over a wide energy range, but proton therapies are only effective if the particles have a very narrow energy distribution.

Laser accelerators are just one example of a technology that was first developed by academics and is now well on the road to commercialization. One person who has been down that road many a time is Tiju Joseph (right), who splits his time between the UK’s National Physical Laboratory (NPL) and the University of Surrey.

Joseph is a “technology translator” who tries to encourage academics to think about the commercial potential of their research. But it’s not about sitting back and waiting for eager scientists to approach him with ideas – he figures only about 5% of successful ventures start that way. Instead, it’s all about Joseph learning about what a scientist has done over the past ten years and then sitting down with the researcher and discussing avenues of commercialization.

You might think that the goal is always a spin-out company that is sold by the university once it has been established. According to Joseph, this is passé because it is just too cumbersome. Today, the goal seems to be to license the technology to an established firm with the ability to develop it rapidly.


Now, it wouldn’t be a photonics show without a laser and a few optical components, and that’s just what the University of Limerick’s Michael Connelly brought to Telford. Michael (right) was showing off his “low-cost laser Doppler vibrometer”, which is a way of measuring vibration by firing a laser at the object of interest and comparing the reflected beam with a reference beam in an interferometer.

Also working on a practical application of photonics is Shijie Liang of the University of Manchester, who told me about her work on creating “long-period gratings” within polymer fibres. Such gratings cause light at certain wavelengths to be absorbed by the fibre cladding – a process that is very sensitive to environmental factors such as the temperature of the fibre. As such, long-period gratings can be used as temperature or other probes.

While such gratings have been made in silica fibres, these tend to be very fragile, which is why Liang is keen on the much more flexible polymers.


With all those laser beams whizzing around, I felt I needed a pair of safety glasses. Fortunately Paul Tozer of Lasermet had a few fetching pairs on display that might even tempt Bono (above). The UK-based firm supplies a wide range of laser safety equipment and Paul talked about the company’s business, including the refitting of laser labs in the UK and beyond.

Star Wars ‘telepresence’ tantalisingly close

In 1977 audiences were wowed by the special effects of the first Star Wars film, which included a hologram of Princess Leia making a distress call to Obi-Wan Kenobi after her ship had fallen under attack by the Empire. Now, the idea of real-time, dynamic holograms depicting scenes occurring in different locations is almost a reality, thanks to a breakthrough at the University of Arizona and Nitto Denko Technical Corporation.

Current interest in 3D display technology is higher than ever, spurred by the demonstration of 3D TV and the release of films produced in this format, such as Avatar. The action appears to come out of the screen because two perspectives combine to generate a 3D image. But to see 3D images, viewers have to wear specialized glasses with two different lenses that let through light polarized in different directions.

Holography is different from this, producing many perspectives that allow the viewer to see the “object” from multiple angles. With this approach the amplitude and phase of the light are reproduced by diffraction, allowing the viewer to perceive the light as it would have been scattered by the real object. In practice this is achieved by creating a screen – out of materials such as silver halide films or photopolymers – that provides the viewer with a slightly different perspective, depending on the observation angle.

A new hope

Progress towards achieving more dynamic holograms, with the ultimate goal of real-time reproduction, took a major step forward two years ago when a team led by Nasser Peyghambarian created a monochromatic display that could produce a new image every four minutes. Now, with this latest work the researchers have taken a dramatic leap by unveiling a 17 inch display that can reproduce an object in colour every two seconds.

The system works by taking multiple images of an object with 16 different cameras positioned at a range of different angles. A computer processes all this information into “hogel data”, which is transferred to a second computer via an ethernet link. At this location three different holograms are written into the material at different angles. Illuminating the polymer with incoherent emission from red, blue and green LEDs creates colour images.

The key to the breakthrough is the material from which the screen is fabricated – a photorefractive polymer. Switching to this polymer has slashed the time taken for a laser to “write” a holographic pixel, known as a hogel, from a second to just six nanoseconds. “[The latest polymer] can also be erased with the same beams used to write the image, so a separate erasing set-up is not required,” explains lead author Pierre-Alexandre Blanche from the University of Arizona.

Towards telepresence

Peyghambarian believes that his team’s technology could aid medical operations. “The cameras would be sitting around where the surgery is done, so that different doctors from around the world could participate, and see things just as if they were there,” he says.

To commercialize the system, writing speeds must increase to 30 frames per second, and the display must be larger, deliver a better colour palette and have a higher resolution. “If you want a true, real-time telepresence you need to go to at least 6–8 feet by 6–8 feet, so that the human person can be demonstrated as they are,” says Peyghambarian.

The ultimate goal is to achieve “telepresence”, where you could chat with others via 3D replications. In moving towards this, the technology will have to improve its resolution as well as its speed.

The team details its work in Nature.

Quantum gravity corrects QED

So, whose citation index ranking is about to go into the stratosphere?

The paper was written by David Toms, a Canadian mathematical physicist and lecturer at Newcastle University in the UK.

What has Toms done?

He has shown that interactions between quantum gravity and quantum electrodynamics (QED) cause electric charge to vanish at very high energies (above about 10¹⁵ GeV). He told physicsworld.com that his technique can be generalized to apply to the two other “gauge couplings”, which define the strong and weak forces.

Why should electric charge vanish at high energies?

A major problem with QED, which describes the interaction between charged particles and photons, is that electric charge increases at higher interaction energies. This is a result of vacuum polarization, whereby the spontaneous creation of electron–positron pairs tends to screen the electric charge of a particle at low energies. At higher energies, however, the screening is much reduced and the effective charge increases – and this cannot be correct.
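The growth can be made explicit with the standard one-loop running of the fine-structure constant – a textbook expression quoted here for illustration, not taken from Toms’ paper – for momentum transfers Q well above the electron mass:

$$\alpha(Q^2) \;=\; \frac{\alpha(\mu^2)}{1-\dfrac{\alpha(\mu^2)}{3\pi}\,\ln\!\left(\dfrac{Q^2}{\mu^2}\right)}$$

Because the denominator shrinks as ln(Q²/μ²) grows, the effective coupling rises with energy and formally diverges at a finite scale (the so-called Landau pole) – the behaviour described above that cannot be correct.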

Can you explain?

Physicists already know that the strong force – which binds together quarks within hadrons – goes to zero at extremely high energies. This property is called asymptotic freedom and its discovery earned Frank Wilczek, David Gross and David Politzer the 2004 Nobel Prize for Physics. If it can be proved that quantum gravity makes QED asymptotically free then it could stand as a viable theory on its own.

Can you elaborate slightly?

The main reason why QED was viewed as incomplete, prior to Gross et al, was that without asymptotic freedom the electric charge becomes infinitely large at some energy scale and the theory is no longer reliable. For their calculations to be reliable at high energies, physicists expect the strong, weak and electromagnetic forces to become unified and become asymptotically free.

Hold on, didn’t Frank Wilczek and Sean Robinson establish gravity-induced asymptotic freedom of charge in 2006?

Yes, sort of. Robinson and Wilczek came up with the idea of gravity-driven asymptotic freedom and worked out that it applied to all three gauge couplings (Phys. Rev. Lett. 96 231601). It was later pointed out, however, that there were errors in their calculations. This caused a flurry of activity as other physicists tried and failed to do the calculation using different approaches.

Now, Toms has worked out a way of avoiding these errors by performing a set of careful checks to guarantee that the calculation meets certain mathematical and physical criteria. In doing so, he has shown that Robinson and Wilczek’s idea was correct all along.

So what do they have to say?

“Toms’ work is important equally as much because of the way in which he did the calculation as the result itself,” said Robinson, who is a lecturer at the Massachusetts Institute of Technology. He said that an important feature of the technique is that it is “demonstrably flawless”. He also pointed out that while Toms’ paper was under review at Nature, an independent group of physicists at Tsinghua University in China posted a preprint (arXiv: 1008.1839) using a similar “flawless” technique but a different set of cross-checks. The Tsinghua team obtained essentially the same result as Toms, illustrating the power of the technique.

That must be good news for physicists working on unification?

Sort of. Toms has shown that quantum gravity causes asymptotic freedom in all the gauge couplings. This is handy if you want to show that all forces unify in a single (very weak) force at very high energies. However, he treated quantum gravity by simply quantizing Einstein’s general theory of relativity. This approach breaks down at the very energies that unification is expected to occur. To take things further, physicists would need to integrate more exotic aspects of quantum gravity such as additional dimensions and supersymmetry.

Where can I find out more?

Toms describes his calculation in a paper published in Nature.

Steven Chu talks energy politics in Scotland


Steven Chu, speaking yesterday at Laserfest in Glasgow

By James Dacey

“I was getting increasingly concerned that the climate was changing and that it was being caused by humans.”

Those were the words yesterday of Steven Chu, the US secretary of energy, speaking in Scotland about why he decided to park his glistening academic career to move into politics.

Chu was in Glasgow where he gave the opening speech as part of a national day of celebration of the 50th birthday of the laser. His wide-ranging talk began with an overview of laser cooling – the work for which he shared the 1997 Nobel Prize for Physics – before moving on to a discussion of molecular biology and the “alien” physics that occurs within the human body.

Then, with about half his allocated time remaining, Chu made a sudden gear change and began to talk more plainly about his motivation for moving into politics. “There is no real credible argument why the Earth will not warm up over a 50–100 year time period,” he explained.

He went on to talk about the imperative for developed nations to act as we shift increasingly towards a carbon-constrained world. “The development of clean energy technologies and the rebuilding of an energy-efficient infrastructure is actually a job growth thing – it’s a demand really needed that can actually spur economies all over the world and it will be essential for our economic prosperity”.

Chu acknowledged that the US is still lacking a comprehensive climate and energy bill, but he talked with great excitement about how he is spending the $90 billion allocated to developing clean energy sources as part of the US Recovery Act passed last year. This includes the Energy Innovation Hubs, which, Chu says, are partly inspired by earlier facilities such as Los Alamos National Laboratory and Bell Labs, where the best young scientists were elevated to management positions to accelerate the science.

Chu warned that failure to do this would leave the US and other developed countries lagging behind China. He talked about his meetings with the Chinese premier, Wen Jiabao. “They want to be leaders in every energy technology because they think it will lead to their future prosperity – and because there are a lot of engineers in their government”. As examples, he cited China’s plans to generate 100 GW of wind power by 2020 and to build 25 new nuclear reactors.

Do giant spiral galaxies thwart clusters of young stars?

Astronomers in Scotland and Germany say simple physics may explain a long-standing paradox: why large clusters of young stars tend to reside in relatively small galaxies and not in giants like the Milky Way. The reason, according to the astronomers, is that giant spiral galaxies, like the Milky Way, spin fast, shearing star clusters before they grow into monsters.

The stunning 30 Doradus complex is the most luminous nursery of young stars in the Local Group – a collection of several dozen nearby galaxies that includes the Milky Way. It stands to reason, therefore, that 30 Doradus would inhabit an equally impressive galaxy, either Andromeda or the Milky Way, the two largest galaxies in the Local Group.

But instead the stunning 30 Doradus complex lies in the Large Magellanic Cloud, a satellite galaxy of our own that emits only one tenth as much light. The newborn stars of 30 Doradus have set gas aglow over an area 700 light-years wide, 30 times the diameter of the well known Orion nebula.

Rotational inhibition

Now Carsten Weidner and Ian Bonnell of the University of St Andrews in Fife and Hans Zinnecker of the Astrophysical Institute of Potsdam have conducted computer simulations that model interstellar clouds of molecular gas which collapse to form star clusters. Says Weidner, “It seems that rotation inhibits the formation of very massive star clusters.”

Giant spiral galaxies spin fast. For example, the Milky Way rotates at about 230 kilometres per second, and the even larger Andromeda galaxy spins faster still. By contrast, smaller galaxies, such as the Large Magellanic Cloud, rotate slowly.

Weidner’s team ran four computer simulations, each with a different spin speed. “Each model took about a month to compute,” Weidner says. In the fast-spinning models, stars and clusters formed over a wide area, because the spin prevented the gas from collapsing into one gigantic cluster. By contrast, in the slowest-spinning model, the gas collapsed and gave birth to a single huge star cluster at the centre. That model might explain why the huge 30 Doradus complex arose in a galaxy much smaller than our own.

Colliding galaxies

This work also applies to colliding galaxies. Says Weidner, “In the collision region, you have less rotational support, so you would also expect more massive clusters.” In fact, the famous Antennae galaxies – two large spiral galaxies that are smashing together in the constellation Corvus – have created young star clusters far greater than any young clusters in either the Milky Way or Andromeda.

Bruce Elmegreen, a star-formation expert at the IBM Research Division in Yorktown Heights, New York, says the study is interesting, but he’s sceptical of the result. “The connection between galaxy spin and molecular cloud spin is vague,” he says. “Does galaxy spin correlate with the spin of molecular clouds? I’m not aware of an answer to that.” Weidner responds that fast-spinning galaxies should indeed have faster-spinning clouds, because the clouds interact with one another.

What about ‘ram pressure’?

Elmegreen also says that 30 Doradus may owe its great size to factors other than its home galaxy’s slow rotation. The Large Magellanic Cloud – which is only 160,000 light-years from Earth – is plowing through the Milky Way’s halo. Gas in the halo compresses gas in the Large Magellanic Cloud, a process called “ram pressure” that may have sparked the star formation in 30 Doradus.

Weidner acknowledges that ram pressure may have played a role. “30 Doradus is a complex object,” he says, “and we do not claim that we can explain every detail of it. We just say there might be a trend with rotation.”

Weidner and his colleagues will publish their work in The Astrophysical Journal and a preprint (arXiv: 1009.1618) is available.

A life-changing phone call


Reflecting on his achievements, Eric Cornell

By James Dacey

Early one morning in October 2001, Eric Cornell’s life changed forever: a phone call from Sweden informed him that he had been awarded that year’s Nobel Prize for Physics.

The University of Colorado physicist shared the prize with Wolfgang Ketterle and Carl Wieman for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates.

Nine years later, Cornell is giving a talk here in Glasgow as part of Laserfest, an event to mark the 50th anniversary of the laser – the really useful device that, of course, helped Cornell to cool his atoms into his condensate.

I just caught up with Cornell in his hotel before the event to find out a bit more about his big discovery and how it has changed his life. “The day we saw [the condensate], we really believed it…it was a very clear signature,” he said.

While Cornell admits to having had an inkling that the discovery could bring the Nobel, he was shocked to get the prize after just six years, and he admits that this has affected the way he does physics. “Before the prize I was a young, slightly brash, not particularly cautious physicist…now when I say something, it’s like ‘oh, Cornell says it’s wrong’.”

We also talked about Cornell’s interests outside of physics, one of which is politics – and he will be closely following events tomorrow as Obama fights to keep his support in the mid-term elections. “I like to follow the game and of course tomorrow is the big game,” he says.

But it seems unlikely that Cornell will make the transition from spectator to player any time soon. “My wife is much more involved in politics than me…I could be a sort of Dennis Thatcher or Michelle Obama.”

One physicist who has made the move is 1997 Nobel-prize winner, Steven Chu, the US secretary of energy. Chu is also talking today at Laserfest about quantum optics, so I’d better go take my seat in the auditorium.

• For more on the 50th anniversary of the laser, check out our video with Tom Baer in which the executive director of the Stanford Photonics Research Center outlines the many current and future uses of the laser.

• Meanwhile, don’t miss Sidney Perkowitz’s great article From ray-gun to Blu-ray on the impact of the laser on culture, science and everyday life.

Living with a star

The Sun is at its most beautiful when it is at its most dangerous. That beauty is visible down here on Earth in the form of the northern and southern lights, which appear when charged particles from the Sun strike the Earth’s upper atmosphere. But out in space, the consequences of Sun-caused “space weather” are not so benign: the high-energy particles, X-rays and gamma rays that the Sun emits can damage sensitive electronics, crash computers and have dangerous (possibly even fatal) effects on astronauts.

Most of the time, the Earth’s atmosphere and magnetic field protect us from the more violent events that occur in the solar atmosphere, such as explosions near the Sun’s surface (known as solar flares) or eruptions of huge bubbles of gas from inside the Sun (called coronal mass ejections, or CMEs). Even so, when charged particles from the Sun hit the Earth’s magnetic field, the field gets distorted and compressed. The resulting changes in the densities of charged particles in the Earth’s upper atmosphere can produce significant effects. Radio communications can be disrupted and, sometimes, such changes can induce damaging currents in long power lines, buried cables and oil pipelines. Giant flares have even destroyed power transformers and brought down electrical grids.

Yet like the auroral displays, the solar processes that cause space weather are also stunningly beautiful. The image on the left shows a ring-shaped prominence erupting from the surface of the Sun, sending a pulse of plasma rushing outwards at a speed of about 300 km s⁻¹. Before the eruption, this prominence existed as a long tube of relatively cool, magnetically contained material just above the visible surface. It was then destabilized by mechanisms that are not completely understood. Such mechanisms are important because they can produce CMEs, which can launch up to 10 billion tonnes of hot plasma into the heliosphere – with serious consequences for any object, human or otherwise, that happens to be in the way.

One of the major goals of NASA’s new Solar Dynamics Observatory (SDO) mission is to understand these destabilization mechanisms. To learn more about them, and the phenomena they produce, we need to be able to observe solar events as they happen. This is not easy. Flares and CMEs can occur nearly anywhere at any time, so we need a monitoring system that can observe the entire Sun’s surface continuously. Moreover, solar explosions are fast – speeds of 1000 km s⁻¹ are not uncommon – so images must be obtained at a rate and with exposure times that can capture the evolution of these complex events. Sending the data from so many images back to Earth and distributing them to the scientific community is also difficult. Finally, there are all the usual problems associated with working in space: you only get one shot, so if equipment does not work, then you cannot fix it; all equipment has to be as light as possible because it costs £200,000 per kilogram just to launch an experiment; and the sensitive instruments and computers must be able to withstand the very space weather they are meant to study, without the protection of the Earth’s magnetic field.

All of these factors posed a challenge for those of us who designed the instruments on the SDO. As the first mission in NASA’s “Living with a star” programme, the SDO’s purpose is to help us to gain a better understanding of how solar events, such as the ring prominence shown in figure 1, affect the heliosphere and, in particular, how they cause space weather. In doing so, the SDO is building on earlier missions such as SOHO and STEREO, which were launched in 1995 and 2006, respectively. These two missions are still operating, adding to our knowledge of solar events by collecting additional data on the outer corona and, in the case of STEREO, providing additional views of solar eruptions. Similarly, TRACE, which was launched in 1998 and turned off in September, provided high-resolution images of selected regions of the solar atmosphere.

The results from these earlier missions offered a tantalizing glimpse of how the Sun operates. However, this new mission will tell us much more about the Sun than its predecessors ever could. All previous images of the solar corona suffered from three major limitations. One is that they did not combine high spatial resolution with observations that covered the full disk of the Sun. Second, the instruments could not take lots of images in quick succession (known as “high cadence” operations) because of limits on the rate at which data could be sent back to Earth. And finally, because previous instruments could not take images across a range of different wavelengths, and at a rate comparable to coronal evolution, it was impossible to distinguish whether the observed events were due to heating, cooling or density changes.

A solar-observing trio

The SDO was launched from the Kennedy Space Center on 11 February and carried into a geosynchronous orbit 36,000 km above the Earth by an Atlas V rocket. The three instruments on board were designed to complement each other. The Helioseismic and Magnetic Imager (HMI), which was developed by researchers at Stanford University and the Lockheed Martin Space Astrophysics Laboratory (LMSAL), will study the behaviour of magnetic fields at the surface of the Sun. To do this, every 30 s the HMI makes maps of material flowing at the solar surface. It also maps the “line-of-sight” magnetic field every 45 s and the vector magnetic field every 15 min. The surface-flow maps let us infer some of what is going on below the surface of the Sun, because patterns in the surface flow can reveal the behaviour of magnetic fields even before they appear on the visible hemisphere. The vector-field maps, meanwhile, show the direction and strength of the magnetic field that emerges through the solar surface. As for the line-of-sight maps, they reveal the magnetic flux in the direction of the Earth. The vector field offers more information, but the line-of-sight measurements are more sensitive.

The second instrument on the SDO is the Atmospheric Imaging Assembly (AIA), which was also developed at the LMSAL (figure 2). Its task is to study how the solar corona responds to the magnetic fields that the HMI observes near the Sun’s surface. The AIA’s four telescopes (see box) direct light onto four CCD cameras, which take images of the Sun’s atmosphere at wavelengths that correspond to ionization states of iron and helium, as well as three spectral bands in the ultraviolet region of the spectrum. Data from the iron spectral lines allow us to map the temperatures of the corona in a band from 700,000 K to 20 × 10⁶ K, while the helium data probes temperatures from 30,000 to 100,000 K.

The final instrument aboard the SDO is the Extreme Ultraviolet Variability Experiment (EVE). Developed by staff at the University of Colorado’s Laboratory for Atmospheric and Space Physics, EVE consists of an array of spectrometers that measures the total solar irradiance over wavelengths between 0.1–105 nm. Because EVE and the AIA are flying together, it is usually possible to associate changes in the Sun’s irradiance with specific solar events, by comparing the timing of changes in EVE’s measurements with the spectral-band data in the AIA’s images.

Dealing with the data

The requirements for high imaging rate, high spatial resolution and broad spectral coverage drove the design of all three instruments, as well as the properties and orbit of the spacecraft that carries them. The observatory’s geosynchronous orbit, for example, offers two significant advantages for studying the Sun. First, such orbits are high enough above the Earth that the planet only blocks out the Sun for one hour a day at the most – and even then only for two, two-week periods each year, in September and March. Second, geosynchronicity means the SDO spacecraft is always over the same latitude, so it can broadcast data and receive commands continuously from a single ground station near White Sands in New Mexico.

Being in continuous contact with the ground station is vital for the SDO, thanks to the sheer volume of data it produces. There are a total of six CCD cameras on the SDO – two on the HMI and four on the AIA – and almost every second a 4096 × 4096 pixel (16 megapixel) image from one of them must be read out and transmitted back to Earth. The actual pixels are big by the standards of commercial camera CCDs (13 × 13 µm).

Because the number of photons that can be detected in a single exposure scales with the pixel size, the CCDs on the AIA have a big dynamic range – from 1 to 10,000. (The cameras were designed and manufactured by scientists and engineers at the Rutherford Appleton Laboratory near Didcot, while the special CCD detectors were made by e2v, also in the UK.) This is wonderful for covering the broad range of intensities in a solar flare, but it also means that each image contains a quarter of a gigabit of data. Indeed, the total amount of data sent from the AIA and the HMI to the New Mexico ground station is about 1.8 terabytes per day, or 67 megabits per second. To get an idea of the scale of data involved, consider that each image would fill 6.25 DVD discs, so it would take about 540,000 DVDs to hold all the images obtained in a single day.

This high data rate had a significant impact on the design of the Joint Science Operations Center for the HMI and the AIA (EVE, with a much smaller data rate, has its own data centre), the data-distribution system and the system the rest of the scientific community uses to access the data. This last feature is particularly important, given that if you ask a scientist what data they want to see, their first response is usually “All of them!” Unfortunately, the awful truth is that once the images are uncompressed, the AIA alone generates about 3.5 terabytes of data per day – equivalent to downloading about 700,000 high-sound-quality MP3 files.
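As a rough sanity check of those figures, the short sketch below redoes the arithmetic. The 14-bit pixel depth and the one-image-per-second cadence are my assumptions for illustration; the article does not state either explicitly.

```python
# Back-of-the-envelope estimate of the SDO imaging data volume.
# Assumed (not stated in the article): 14 bits per pixel, one image per second.

PIXELS_PER_IMAGE = 4096 * 4096      # one 16-megapixel CCD frame
BITS_PER_PIXEL = 14                 # assumed digitization depth
IMAGES_PER_DAY = 24 * 60 * 60       # roughly one frame read out each second

bits_per_image = PIXELS_PER_IMAGE * BITS_PER_PIXEL
terabytes_per_day = bits_per_image / 8 * IMAGES_PER_DAY / 1e12

print(f"{bits_per_image / 1e9:.2f} gigabits per image")     # ~0.23 Gbit
print(f"{terabytes_per_day:.1f} TB per day, uncompressed")  # ~2.5 TB
```

Both numbers land in the same range as the figures quoted above – a quarter of a gigabit per image and a few terabytes per day – with the exact values depending on the true bit depth and cadence.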

To make life easier for solar scientists, a number of utilities have been developed that allow them to mine the SDO archive for data that contribute to specific science objectives. For example, some questions that scientists are investigating include whether flares are associated with CMEs, what types of flares are associated with specific features in the EVE spectra, and what the statistical relations are between filament ejections and magnetic-field configurations. We have also produced a data viewer, which allows scientists to view the archive using compressed data. This greatly reduces the amount of data that must be downloaded before detailed scientific analysis can begin. Other data-processing tools include a “Sun Today” webpage (sdowww.lmsal.com) that shows samples of the AIA images and HMI magnetograms, updated every five minutes, as well as daily movies of solar events.

What we are learning

In late March, we opened the doors of the AIA’s telescopes for the first time. The first images were beautiful. All the delicate front filters on the telescopes had survived the launch and all the instrument functions were working perfectly. A few days after we started taking data, the Sun rewarded us with a huge eruptive prominence on its east side – a wonderful start to our planned five-year mission.

Since then, we have been observing the Sun almost continuously, with only minor breaks for calibrations. During this period, the Sun has presented us with a number of CMEs, filament eruptions, small flares and even a few moderately large ones. As a result, we are now beginning to appreciate just how much of the Sun is impacted by a magnetic rearrangement in a very local region. For example, areas without spots can create disturbances that impact 30–60% of the visible surface.

Taking images at a high cadence has also been richly rewarding. At the beginning of a filament activation or CME, some features occur at speeds of 100–600 km s⁻¹. At the onset of a flare, there are occasional “puffs” of plasma that move at speeds of 1000–2000 km s⁻¹. When such events are captured, part of their diffuse appearance is caused by motion blur; a typical 3 s exposure taken by the AIA, for example, blurs the image of a 2000 km s⁻¹ structure by 4–8 pixels. A 30 s exposure typical of earlier spacecraft would cause five or more times more blurring and make the event appear 25 times fainter – so faint, in fact, that the event might not have been detected at all. We also see wave patterns that move along magnetic field lines at 1000–2000 km s⁻¹ as the flare event evolves. These fast waves had never been seen before and we do not yet know the mechanism that produces them or their role in the flare process.
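That blur estimate is easy to reproduce. The sketch below uses the roughly 730 km per pixel plate scale quoted in the box at the end of this article; the speeds and exposure times are those given in the text.

```python
# Motion blur, in detector pixels, for a fast-moving coronal feature.
KM_PER_PIXEL = 730.0   # AIA plate scale quoted in the accompanying box

def blur_pixels(speed_km_per_s, exposure_s):
    """Distance travelled during one exposure, expressed in pixels."""
    return speed_km_per_s * exposure_s / KM_PER_PIXEL

print(blur_pixels(2000, 3))    # ~8 pixels for a typical 3 s AIA exposure
print(blur_pixels(2000, 30))   # ~82 pixels for a 30 s exposure on earlier missions
```

The 3 s case matches the upper end of the 4–8 pixel range quoted above; slower features give the lower end.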

Although some of these data are better interpreted numerically, the multiple temperature images taken by the AIA can also be combined to make several types of false-colour temperature maps, like those shown in figure 3. Movies of such colour maps allow solar scientists to study how temperature patterns evolve when the Sun is quiet, as well as when it is active. These movies provide a visual picture of the relations between events on the Sun that are quite distant from one another. For decades there have been arguments about whether a flare or filament eruption can cause another, distant event. Now, after only a few months of observations, the AIA movies have clearly established causality on distances of a solar diameter and more. Even though we are currently experiencing the deepest minimum in solar activity for more than a century, the Sun still has plenty to tell us.

The AIA telescopes: a fourfold challenge

NASA’s Solar Dynamics Observatory carries three instruments, one of which is the Atmospheric Imaging Assembly (AIA). Designing its four telescopes presented us with four main challenges, the first of which is caused by sunlight itself. The amount of light that reaches a typical extreme ultraviolet (EUV) channel is a billion times weaker than the sunlight that falls on the front of the telescope. To reject the visible light, the front of each EUV channel is covered with a metal filter that is only 150 nm thick, or about 0.2% the diameter of a human hair – thick enough to block visible light, but thin enough to pass the desired EUV light.

Manufacturing such filters is challenging, but designing mounts for them is even harder. These mounts must be sturdy enough to survive the vibrations and pressure changes they are subjected to at launch, but they cannot block a significant fraction of the EUV light. The image shows one of many filters that failed while we were testing various designs to determine which one would survive the launch environment.

The second challenge is making sure that the EUV light will be reflected from the telescopes’ mirrors. EUV light does not reflect from the single layer of silver or aluminium that suffices for the mirrors of visible-light telescopes, so instead we had to coat the mirrors with a series of thin alternate layers of silicon and molybdenum. These coatings are not removable, so a failure in the coating ruins a mirror. The mirrors also have to be the right shape, and because the wavelength of EUV light is so short, they must also be extremely smooth, with root-mean-square variations of about 0.3 nm.

The third challenge is that EUV light is easily absorbed by contaminants such as the silicone and hydrocarbon compounds used to hold the AIA telescope together. A coating of contaminants just 50 nm deep would be enough to lower the telescope’s transmission by 50%, and the AIA telescopes have 11 different surfaces where such contaminants can settle, including multiple filters, the telescopes’ primary and secondary mirrors, and the surface of the CCD camera itself. This means that less than 5 nm of contaminants could be allowed to collect on any given surface, either during the fabrication process or from the outgassing of components after the AIA goes into orbit.

Finally, there is the question of stability. Each pixel on the AIA’s CCD cameras samples light from a cone about 0.6 arcseconds wide, which corresponds to about 730 km at the centre of the solar disk. To produce sharp images, the motion induced by the spacecraft must be limited to about 0.02 arcseconds, or about 14 km on the solar surface. This requires a system of active stabilization, where signals generated by the telescopes themselves are used to control the angles of secondary mirrors that are mounted on piezoelectric actuators. The result is so stable that it is like being able to keep a laser pointed at a 1 mm-diameter target circle from a distance of 10 km. For golf fans, this is equivalent to a player making a hole in one on the Old Course in St Andrews while standing in Piccadilly Circus.
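Those last figures are straightforward to verify; the check below assumes a mean Earth–Sun distance of about 1.5 × 10⁸ km, a value the article itself does not quote.

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)
SUN_DISTANCE_KM = 1.496e8                # assumed mean Earth-Sun distance

jitter_rad = 0.02 * ARCSEC_TO_RAD        # allowed pointing jitter

print(jitter_rad * SUN_DISTANCE_KM)      # ~14.5 km on the solar surface
print(jitter_rad * 10_000 * 1000)        # ~0.97 mm at a range of 10 km
```

Both numbers reproduce the 14 km and 1 mm figures quoted above.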

Bronx physics

Every morning about 3000 students at the Bronx High School of Science in New York pass beneath a huge mosaic that hangs over the school’s entrance. It shows a Moses-like figure – representing “the humanities” – rising over a rainbow, beneath which are tile depictions of Pythagoras’s theorem, surveying gear, a Benjamin Franklin-like key and kite, and more old stuff. The students rushing to class hardly notice. They are into calculus, photodetectors and robots.

On 15 October this year, Bronx Science, as it is colloquially known, was officially designated a “historic physics site” in a ceremony organized by the American Physical Society (APS). The high school joins an imposing list of 18 other landmarks with that status. They include Bell Labs in New Jersey, where the transistor was invented, the Massachusetts Institute of Technology’s Radiation Laboratory, which helped to develop radar, the University of Chicago site where Robert Millikan measured the charge on the electron, and the spot outside Cleveland, Ohio, where Albert Michelson and Edward Morley did their epochal ether-drift experiment.

Located in the northwest corner of New York City, Bronx Science owes its historic status to the fact that seven future Nobel-prize-winning physicists went through its doors – more than any other high school in the world and more than most countries have ever achieved. The school, which opened in 1938, was founded by the educator Morris Meister, who believed that if a school put bright students together, it would kindle ill-defined but valuable learning processes. The school seems to have proved him right: according to the Bronx laureates, their physics learning took place mainly outside the classroom.

Roy Glauber, who shared the 2005 Nobel prize for his work on quantum optics, entered in 1938 in the school’s initial class. The first physics course was not taught until 1939, and its textbook did not even mention atoms. That subject was addressed in the chemistry textbook, which did not even say that atoms contained neutrons, despite their discovery in 1932.

A Bronx mathematics teacher changed Glauber’s life by giving him a book on calculus for summer reading, and the sophomore was thrilled to find he understood it. Glauber went to Harvard in 1942, skipped the intermediate physics courses, discovered that advanced courses were cancelled because the teachers were doing war work, and was catapulted into graduate physics. His outstanding performance caught the attention of well-connected scientists, and in 1943 – age 18 – he was spirited to the top-secret Los Alamos laboratory to help build the atomic bomb.

Leon Cooper, who shared the 1972 prize for work on superconductivity, recalls physics lessons as boring, and was far more enchanted by his biology classes, which lured him to stay late after school designing and running experiments “until they threw me out”. Indeed, the school’s basic-physics textbook was written by a certain Charles E Dull, whose work, though widely used in US high schools, lived up to his name. The future particle physicist Melvin Schwartz, who shared the 1988 Nobel gong, once told me his classmates’ excited discussions – not his teacher – were what first awakened his interest in physics.

The class of 1950 – the year below Schwartz – included Sheldon Glashow and Steven Weinberg, who shared the 1979 Nobel prize with Abdus Salam. Glashow recently told me that he cannot remember learning anything much from his introductory-physics class. At the time, the school offered only two advanced-physics courses. One was in “radio technology”, in which students built crystal radio sets, while in “automotive physics” they took apart and reassembled an old aeroplane engine.

Neither Glashow nor Weinberg bothered with either. Far more exciting was the science-fiction club – whose members clustered around lab tables to talk about physics – and afterschool trips to the used bookstores that then populated lower Manhattan.

Particle theorist David Politzer, who shared the 2004 Nobel prize and who spoke at last month’s APS ceremony, described the school’s spirit by citing a transport strike that took place in 1966 – the year he left Bronx Science. The strike paralysed the city for almost two weeks and in most schools attendance plummeted; in some, nobody turned up. “[But] at Bronx Science, attendance was normal,” Politzer recalls. “We walked, bicycled and hitchhiked to school. We wouldn’t miss it!” Politzer’s classmate Russell Hulse, who shared the 1993 Nobel prize for discovering the first binary pulsar, recalled that his favourite afterschool activity was building various antennas, including a radio telescope. As Hulse told me, “It was very special to me to finally be in a place that focused on what I found most interesting and compelling in life, namely science.”

Much has changed in recent years. The advanced-physics labs were renovated last year, and “we try to instil an inquiry mindset”, says Jean Donahue, the assistant principal for science. In one physics classroom, I saw the teacher illustrate a talk on vectors by having a student navigate a blindfolded companion around the room by shouting directions and magnitudes; in another, the teacher taught the same principle by asking students how far fish swim in currents of various strengths heading in different directions.

The school’s most fearsome physics module – Advanced Placement Physics C – is tougher than most college-physics courses. Its dynamic instructor is Ghada Nehmeh, who was born in Lebanon and studied nuclear physics. Diminutive – smaller than most of her students – and scarf-clad, she jumps rapidly from lab table to lab table, helping piece together equipment and analyse results. Famous for being ruthlessly demanding, she tests the students on their first day by assigning them 40 calculus problems, due back the next day. “I’d never seen derivatives before,” says Kezi Cheng, a senior interested in theoretical physics. So Cheng did what most Bronx Science students do – she asked her classmates to give her a crash course on the subject. “They’re always willing to help.”

After last month’s ceremony, Bronx students now have a new plaque to walk past. My guess, though, is that they will be too busy scurrying to the next class to notice.

Weighty matters

In a dusty display cabinet in the museum of the University of Cambridge’s Cavendish Laboratory there sits a curiously shaped glass container with a few electrodes inside, like a ship in a bottle. It is a replica of the first particle accelerator, used by J J Thomson to liberate electrons from atoms in 1897. Thomson’s breakthrough was the first step on the long quest to crack open the atom and reveal its inner workings. More than a century later, that same quest has produced the multibillion-pound experiments at the CERN lab in Geneva, which are poised to explore new territory in their search for the long-awaited Higgs boson, and to test whether our current understanding is only a shadow of a much richer reality.

The story of what came in between has all the makings of a Hollywood movie, and Massive: The Hunt for the God Particle could be the screenplay. The grand narrative in Ian Sample’s book sweeps from the earliest speculations on the nature of matter; through the Second World War and the dawn of nuclear weapons; the paranoia of the Cold War (during which science was seen as a source of national security); rival efforts by the US and Europe to lead the world in times of peace; and the eventual emergence of worldwide scientific co-operation. Swept along on that tide are the individual scientists who struggle to make sense of their equations and measurements while marrying, having children and fighting off both the manoeuvrings of their political funders and low blows from their rivals. This story is far from the stately intellectual progress by heroic lone geniuses that gets portrayed in some histories of science.

Massive carries the reader through the epic using individual episodes from the lives of some of the participants. These passages often read like a fast-paced novel, as, for example, when an aeroplane carrying evidence of the latest breakthrough “touched down with a brief screech of rubber” at a wintry Heathrow airport. This makes for an appealing read that is quite unlike a textbook narrative. The uncertainty faced by the scientists as they explore new theories and the agonizing decision to shut down the LEP collider at CERN just when the Higgs boson appeared to be in reach are brought to life by these human touches.

The central character is Peter Higgs, whose life story runs through the book. His early work, the famous publication suggesting the particle that now bears his name, and his subsequent career are all dealt with sympathetically, and reveal much about the scientific process. The real difficulties in knowing which directions to pursue, the dead ends, fears and frustrations are all covered. Like Higgs himself, Sample is careful to give due credit to all those involved in the development of electroweak theory. The gradual development of the theory, and particularly the way that multiple insights contributed to the solution, are both well covered. Sample also makes it clear that finding the boson will not signal the end of the story: in an excellent and topical chapter, he addresses the implications of the Higgs mechanism for other new physics, tackling supersymmetry, extra space dimensions and hidden worlds with great clarity. This is a useful counter to the popular perception that particle physics is simply about the search for the Higgs.

The book avoids tackling the science head-on with long discussions of theory (or even any equations beyond E = mc²), and relies instead on straightforward descriptions of the key points backed up by lots of illustrative analogies. This works rather well, since the analogies are usually well chosen and there is no sense that the science has been over-simplified. Sample’s experience as a science correspondent for the Guardian newspaper pays off here.

A parallel narrative follows the development of the accelerators themselves, from table-top devices funded by small laboratories to vast international facilities. The rivalry between the US and Europe makes a fascinating sub-plot, and shows how much the science has been at the mercy of political events on both sides of the Atlantic. UK Prime Minister Margaret Thatcher, for example, was much criticized in the scientific community, but she nevertheless prevented the death of British particle physics, and thereby ensured the future of CERN. Meanwhile, her transatlantic counterpart President Ronald Reagan approved the Superconducting Super Collider (SSC) so that the US would have “the most powerful…gun in the world” – only for it to founder in the pork-barrel politics of Washington.

Many of the factors behind the SSC’s demise are well known, but, strangely enough, it seems that both the first President Bush’s illness at a state banquet in Japan (which cast a pall over US efforts to get the Japanese to join the SSC collaboration) and the Arab–Israeli peace talks that took place during Bill Clinton’s presidency (which forced a campaign to save the collider off the news agenda) also contributed to its downfall. However, the final blow came from the Congressional allocation process, which ensured that support melted away once a decision had been taken on which state would host the facility, and hence receive the bulk of the funding. The benefit to Europe of having a recognized shared facility at CERN becomes very clear.

Sample’s account of the public reaction to events in particle physics is also fascinating. Initially, such an esoteric subject was portrayed as being of no conceivable use, and celebrated as a purely intellectual pursuit. The atomic bomb put paid to that view, and physicists became the focus of nationalistic and military aspirations. Now that the projects are too expensive for any single nation to fund, particle physics is seen as a shining example of international co-operation. The book is particularly strong on the supposed threat to the existence of the planet, or even the entire universe, posed by colliders, and the ensuing media debate.

The scene is now set for the denouement, where the Higgs is either captured or revealed to be a chimera. The new heavyweight Large Hadron Collider at CERN is the favourite to win this final round, but the old champ, Fermilab’s Tevatron, is making one last bid for the title. So the screenplay ends on a cliffhanger, leaving the audience ready for the sequel.

Since CERN has become the last word in geek-chic, every practising particle physicist has at some time been cornered by a friend or relative with a demand to “explain what is going on”. Now, at last, there is a simple answer: buy them this book, and get a copy for yourself.
