
High-temperature superconductor goes super thin

Gennady Logvenov and colleagues at Brookhaven National Laboratory in Upton, New York, have created layered films of copper-oxide or “cuprate” materials and have discovered that they can localize the superconducting behaviour to a single atomic plane. They say that the discovery will help theorists to build more comprehensive models of high-temperature superconductivity, and lead to thin-film devices that have their superconducting properties tuned by electric fields.

“We wanted to answer a fundamental question about such films,” says team member Ivan Bozovic. “Namely: how thin can the film be and still retain high-temperature superconductivity?”

No resistance

Discovered at the beginning of the 20th century, superconductivity is a phenomenon whereby a material’s electrical resistance can suddenly drop to zero as the substance is chilled below a specific temperature – known as the transition temperature (Tc). It exists in some pure metals close to absolute zero, and scientists believe that this is because electrons distort the metal lattice to let subsequent electrons flow freely, a mechanism outlined in so-called Bardeen-Cooper-Schrieffer (BCS) theory.

In 1986, however, physicists discovered that superconductivity also exists in certain compounds, including cuprates, at much higher temperatures of 30 K and more. This discovery of high-Tc superconductivity triggered a lot of initial excitement due to suggestions that, if extended up to room temperature, it could lead to novel applications such as levitating trains and ultra-efficient power cables. Over the past 20 years, however, these exciting new technologies have not materialized because physicists and engineers have struggled to understand the mechanism behind the phenomenon.

Now, Logvenov and colleagues have performed an experiment that could help to point theorists in the right direction. They have created a “bilayer” film with one layer of a cuprate metal and another of a cuprate insulator, using a technique called molecular beam epitaxy. Superconductivity in such bilayers tends to manifest at the interface between the layers, so the researchers were able to isolate where the effect occurs by carefully doping atomic planes within the layers with zinc, which suppresses superconductivity.

Crucial planes

The researchers found that when they doped the entire film with zinc, it did not superconduct at all. However, when they doped a certain plane – specifically, the second copper-oxide plane away from the interface – they found that the transition temperature for superconductivity dropped from 32 to 18 K. This, they say, is proof that that plane alone is crucial for the high-temperature superconductivity.

Elisabeth Nicol, a solid-state physicist at the University of Guelph, Canada, calls the Brookhaven study “a very clever piece of investigative work”, and explains that it will help researchers create superconductors that work at higher temperatures. “If we could understand where the source of the high transition temperature comes from,” she says, “we could possibly engineer things such that the transition temperature becomes higher.”

The discovery may also have direct practical benefits. Superconductivity can be controlled with electric fields, but because these penetrate films by only a nanometre or so, this ability has proved difficult to exploit. Now that Logvenov and colleagues have identified the crucial plane, engineers may be able to create tunable high-temperature superconductors for a variety of electronic devices.

The research is published in Science.

Renewables revolution needs clear scientific advice

Cutting through turbulence on the way to Copenhagen

By James Dacey

European leaders have been in Brussels over the past couple of days and there has been a lot of talk about climate change. The latest reports suggest they are reaching some sort of agreement over how to help the world’s poorer nations commit to restricting greenhouse gas emissions.

The EU summit in Brussels represents one of the last opportunities for European nations to iron out disagreements ahead of December’s UN conference in Copenhagen, which could result in a global treaty on climate change.

So assuming that the world’s politicians can wrangle their way to solid, legally binding targets in the Danish capital, we will then be faced with the next big set of choices – how to achieve the targets.

Whatever way the green revolution is played out over the next few decades, it will be necessary for the developed world to quickly get over its addiction to fossil fuels, and to deploy a whole raft of renewable energy solutions. More than ever, governments will need clear scientific advice about the different options ahead of them.

Despite currently lagging many of its European neighbours over renewables, the UK now at least has a clear-thinking scientific advisor in the form of David MacKay.


If you’re not already familiar with MacKay, he is the author of the book Sustainable Energy – Without the Hot Air. Despite being available for free online, the book has been something of a publishing phenomenon and was described by the Guardian as “this year’s must-read book”.

You can also read Physics World’s review of the book here.

When I saw MacKay giving a talk at the Institute of Physics in London on Wednesday, he was quick to establish his philosophy. He says we are in need of a “grassroots arithmetic movement” in which members of the public lobby and educate their local MPs with the figures on renewables.

To a packed-out lecture room, MacKay explained why he prefers to express energy consumption in kilowatt-hours per day per person. His reasoning is that these figures mostly fall in the range 1–100 and can easily be translated into personal terms. “I am pro arithmetic, not any specific energy policy,” he said.
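To see the kind of arithmetic he means in action, here is a minimal sketch in Python of a back-of-the-envelope conversion into kilowatt-hours per day per person; the appliance and journey figures below are illustrative assumptions, not numbers taken from MacKay’s talk or book.

# Convert everyday energy uses into kilowatt-hours per day (per person).
# All input figures are illustrative assumptions.

def kwh_per_day(power_watts, hours_per_day):
    """Energy used by one device, in kWh per day."""
    return power_watts * hours_per_day / 1000.0

bulb = kwh_per_day(40, 24)        # a 40 W bulb left on all day: ~1 kWh/day
kettle = kwh_per_day(3000, 0.5)   # a 3 kW kettle run for 30 min a day: ~1.5 kWh/day

# A petrol car: assume ~8 litres per 100 km and ~10 kWh of chemical energy per litre
car = 50 / 100 * 8 * 10           # driving 50 km a day: ~40 kWh/day

print(f"bulb {bulb:.1f}, kettle {kettle:.1f}, car {car:.0f} kWh per day")

Each figure lands comfortably in the 1–100 range, which is what makes the unit easy to compare across very different activities.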

Dark-matter paper raises questions over data sharing

A preprint that uses NASA data to claim “possible evidence” for dark matter has led some researchers to question the US space agency’s data-sharing policies.

The preprint – which was uploaded to the arXiv internet server earlier this month by physicists Lisa Goodenough of New York University and Dan Hooper of Fermilab in Batavia, Illinois – makes the claim by matching a theoretical model of dark matter to freely available data from NASA’s Fermi gamma-ray telescope. But with an official analysis of the same data yet to be published, some scientists have pointed out that, depending on the validity of the evidence, the preprint will either cause confusion or steal glory from the Fermi team.

“If this turns out to be the first convincing discovery, it will become known as the Goodenough and Hooper discovery,” says Alex Murphy, a physicist at the University of Edinburgh who works on Europe’s ZEPLIN III dark-matter experiment. “Publicity-wise, that’s a catastrophe.

“You can’t really stop this. NASA has this problem where they have to release their data. I kind of disagree with that myself – I’d hate for our data to be released too early and for others to be looking at it.”

‘Fermi has a PR problem’

The Fermi telescope launched into space in June last year to study the universe’s high-energy phenomena, including the annihilation of dark matter, an unknown entity that many astrophysicists think makes up over 80% of the universe’s mass. Team members used the first year of data to calibrate the telescope’s instruments but now, following NASA policy, they make all data public immediately.

In their study, Goodenough and Hooper examined some of the new Fermi gamma-ray data of the centre of our galaxy. They then compared it with a simple model of dark matter annihilation, which produces gamma rays, and discovered that it fitted a dark matter particle with a mass in the range of 25 to 30 GeV – about 30 times heavier than a proton – and an annihilation cross-section of about 9 × 10⁻²⁶ cm³/s. According to Hooper, this could indicate a particle such as a neutralino in “supersymmetric” extensions to the Standard Model of particle physics.
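As a quick sanity check on that comparison, the following Python snippet converts the quoted mass range into proton masses; the proton rest-mass energy of about 0.938 GeV is a standard value, and the 25–30 GeV range is the one quoted above.

# Compare the fitted dark-matter particle mass with the proton mass.
proton_mass_gev = 0.938                  # proton rest-mass energy in GeV
for mass_gev in (25.0, 30.0):            # fitted mass range quoted in the preprint
    print(f"{mass_gev:.0f} GeV is about {mass_gev / proton_mass_gev:.0f} proton masses")
# prints roughly 27 and 32, i.e. "about 30 times heavier than a proton"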

Gordon Watts, a physicist at the University of Washington in Seattle, wrote on his blog Life as a Physicist that news coverage of this result on other websites has shown that “things got away” from the Fermi team before they had issued their own analysis. “Now Fermi has a PR problem on its hands – people are running round talking about their data and they’ve [the Fermi team] not really had a voice yet,” he adds.

But Hooper tells physicsworld.com that he sees it as his job to study data as soon as it is released. “Most papers in my community are posted on arXiv prior to being accepted by a journal, and this is no exception,” he says. “This is entirely not out of the ordinary. The reason for this is simply that in the months that it can take for peer review to be carried out, a great deal can change regarding the state of the research, and to not share the up-to-the-minute progress with the rest of the community can be counterproductive.”

Fermi analysis will have ‘lasting impact’

Fermi project scientist Julie McEnery believes her team’s work is not in competition with studies like those of Goodenough and Hooper. “I think a carefully done analysis will have the longer-lasting impact. Had there been an obvious dark-matter signal in the data, something that could be done in a reasonably quick analysis, we would of course have already published – in a refereed journal first, and to [arXiv’s subsection] astro-ph later.”

She adds, however, that all apparently “groundbreaking” results would be better off being published in refereed journals before they appear on arXiv. “There’s a danger that we’re going to confuse the field with many results that prove not to be true, and by the time some really strong, key result comes out essentially the scientific community will still be interested but the media and the public may not be.”

The Fermi team are planning to present their analysis of the galactic-centre data next week. Meanwhile, however, other groups studying the same data have come to different conclusions. For example, in a separate preprint on arXiv, Gregory Dobler of Harvard University and colleagues attribute the gamma-ray signal to “inverse Compton scattering”, a phenomenon in which photons gain energy when they interact with matter.

“Several groups are doing their own analysis of Fermi data, which I think is fine,” says Katherine Freese, an astrophysicist at the University of Michigan. “The data are public, and there is nothing wrong with people trying to glean from it what they can. In fact, I think it is great. We are all aware that there will be some disagreement in the short term. In the long run it will shake out as to who is right.”

Helium atoms get the ride of their life

To the adrenaline junkie midway through a bungee jump, gravity must feel like it can accelerate matter at a spectacular rate. At the atomic scale, however, when it comes to shifting around neutral particles, gravity is incredibly ineffective compared with other fundamental interactions such as the strong and weak nuclear forces.

Now, however, a team of physicists in Germany has shown that a little-known interaction caused by oscillating electric fields – the “ponderomotive” force – can accelerate neutral particles at up to 10¹⁴ times the Earth’s gravitational acceleration. As well as being of interest to fundamental physics, this ability to transfer large amounts of momentum to neutral particles could lead to a host of novel applications in surface science, say the researchers.

‘Electrical pressure’

All students are taught at school that when objects possessing electrical charge are exposed to an electric field, they experience an electric force that can set them in motion. If the electric field is oscillating, however, the charged object also experiences a second force that is proportional to the gradient of the field intensity. Depending on the amount of matter involved and the scale of this intensity gradient, the ponderomotive force can have a significant effect.

Until now, however, physicists had assumed that the ponderomotive force would have a negligible effect on matter that is neutral. But, according to Ulli Eichmann and colleagues at the Max-Born Institute and the Institute for Optical and Atomic Physics in Germany, there is no reason for this to be the case. These researchers argue that the effect is largely independent of charge and they designed an experiment to demonstrate the magnitude of the effect on neutral matter.

The physicists began by aiming a beam of helium atoms at a detector, before firing a series of laser pulses at the beam so that individual atoms were exposed to a localized electromagnetic field. Then, by analysing data from their position-sensitive detector, they were able to show that at least one per cent of the helium atoms had undergone an acceleration, in some cases as much as 10¹⁴ times the Earth’s gravitational acceleration (9.8 m s⁻²).
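For a sense of how such enormous accelerations can arise, here is a rough order-of-magnitude estimate in Python of the ponderomotive force on a helium atom whose loosely bound electron quivers in an intense laser focus. The laser intensity, wavelength and focal gradient length are illustrative assumptions typical of strong-field experiments, not the team’s actual parameters.

import math

# Physical constants (SI units)
e = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_he = 6.64e-27      # helium atom mass, kg
c = 2.998e8          # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m

# Illustrative laser parameters (assumptions, not the experiment's values)
intensity = 1e20     # W/m^2 (equivalent to 1e16 W/cm^2)
wavelength = 800e-9  # m
grad_length = 5e-6   # length scale of the focal intensity gradient, m

omega = 2 * math.pi * c / wavelength
e_field = math.sqrt(2 * intensity / (c * eps0))    # peak electric field

# Ponderomotive energy of a quivering electron: U_p = e^2 E0^2 / (4 m_e omega^2)
u_p = e**2 * e_field**2 / (4 * m_e * omega**2)

# The force is roughly U_p divided by the gradient length; it drags the whole atom
force = u_p / grad_length
accel = force / m_he
print(f"U_p ~ {u_p / e:.0f} eV, acceleration ~ {accel:.1e} m/s^2 (~{accel / 9.81:.1e} g)")

With these assumed numbers the acceleration comes out at a few times 10¹⁴ g, in line with the figure reported by the team.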

Like ants dragging a mountain

To explain the mechanism of this acceleration, Eichmann and colleagues refer to a model that they put forward in a paper last year. When the atoms are exposed to a laser pulse, an electron can gain energy from the laser field, causing it to be briefly “liberated” from the atom. However, this surge of energy is not sufficient for the electron to break free entirely from the Coulomb forces and it is recaptured so that it sits a long way from the nucleus in what is known as a “Rydberg state”.

It is in this state that the atom is subject to the ponderomotive force and the “quivering” electron can drag the entire atom in the direction of the localized electric field. Fortunately for the researchers, this state was long-lived enough for them to locate the positions of helium atoms at the detector and thus rule out other effects that could have caused a beam of neutral particles to be deflected and spread.

Eichmann told physicsworld.com that he can envisage applications resulting from the “instantaneous” transfer of momentum to an atom. An example of this might be the accurate and efficient deposition of atoms on surfaces for optical applications. “Atoms may be steered by manipulating the spatial geometry of the laser fields,” he says.

Robert Jones, an atomic physicist at the University of Virginia in the US, is impressed by the new research. “The possibility of controlled interactions between atoms or molecules through precisely timed collisions at well-defined relative velocities is particularly intriguing,” he told physicsworld.com.

This research is published in this week’s issue of Nature.

Special relativity passes key test

Scientists studying radiation from a distant gamma-ray burst have found that the speed of light does not vary with wavelength down to distance scales below that of the Planck length. They say that this disfavours certain theories of quantum gravity that postulate the violation of Lorentz invariance.

Lorentz invariance stipulates that the laws of physics are the same for all observers, regardless of where they are in the universe. Einstein used this principle as a postulate of special relativity, assuming that the speed of light in a vacuum does not depend on who is measuring it, so long as that person is in an inertial frame of reference.

Unifying the cosmic with the quantum

In over 100 years Lorentz invariance has never been found wanting. However, physicists continue to subject it to ever more stringent tests, including modern-day versions of the famous Michelson–Morley interferometry experiment. This dedication to precision stems primarily from physicists’ desire to unite quantum mechanics with general relativity, given that some theories of quantum gravity – including string theory and loop quantum gravity – imply that Lorentz invariance might be broken. In particular, these theories allow for the possibility that the invariance does not hold near the minuscule Planck length – about 10⁻³³ cm – since at this scale quantum effects are expected to strongly affect the nature of space–time.

It is not possible to test physics at the Planck length directly because this length corresponds to an energy of around 10¹⁹ gigaelectronvolts – way beyond the reach of particle accelerators (the most powerful of which, CERN’s Large Hadron Collider, will generate collision energies of around 10⁴ gigaelectronvolts). However, this latest research, carried out by a collaboration of physicists under the leadership of Jonathan Granot of the University of Hertfordshire in the UK, has provided an indirect test of Lorentz invariance at the Planck scale.
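The Planck length and energy quoted above follow directly from the fundamental constants, as the short Python check below reproduces.

import math

hbar = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
joules_per_ev = 1.602e-19

planck_length_m = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m
planck_energy_j = math.sqrt(hbar * c**5 / G)
planck_energy_gev = planck_energy_j / joules_per_ev / 1e9

print(f"Planck length ~ {planck_length_m * 100:.1e} cm")   # ~1.6e-33 cm
print(f"Planck energy ~ {planck_energy_gev:.1e} GeV")      # ~1.2e19 GeV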

Granot and colleagues studied the radiation from a gamma-ray burst – associated with a highly energetic explosion in a distant galaxy – that was spotted by NASA’s Fermi Gamma-ray Space Telescope on 10 May this year. They analysed the radiation at different wavelengths to see whether there were any signs that photons with different energies arrived at Fermi’s detectors at different times. Such a spreading of arrival times would indicate that Lorentz invariance had indeed been violated; in other words that the speed of light in a vacuum depends on the energy of that light and is not a universal constant. Any energy dependence would be minuscule but could still result in a measurable difference in photon arrival times due to the billions of light years that separate gamma-ray bursts from us.
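To get a feel for the size of the effect being sought, here is a rough Python estimate of the arrival-time delay that a linear, Planck-scale energy dependence would produce; the photon energy and travel distance are illustrative round numbers for a distant burst rather than the exact parameters of the May event, and order-unity cosmological corrections are ignored.

# Rough delay for a linear energy dependence of the photon speed:
#   delta_t ~ (E_photon / E_QG) * (D / c)
c = 2.998e8                  # speed of light, m/s
light_year = 9.461e15        # metres in one light-year

e_photon_gev = 30.0          # illustrative high-energy photon, GeV
e_qg_gev = 1.22e19           # assume the violation sets in at the Planck energy
distance = 7e9 * light_year  # illustrative distance of ~7 billion light-years

delta_t = (e_photon_gev / e_qg_gev) * (distance / c)
print(f"expected delay ~ {delta_t:.2f} s")   # about half a second

An observed spread in arrival times much smaller than this is what pushes the allowed violation scale above the Planck energy.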

The Fermi team used two relatively independent data analyses to conclude that Lorentz invariance had not been violated. One was the detection of a high-energy photon less than a second after the start of the burst, and the second was the existence of characteristic sharp peaks within the evolution of the burst rather than the smearing of its output that would be expected if there were a distribution in photon speeds. The researchers arrived at the same null result when studying the radiation from a gamma-ray burst detected in September last year, but could only reach about one-tenth of the Planck energy. Crucially, the shorter duration and much finer time structure of the more recent gamma-ray burst takes this null result to at least 1.2 times the Planck energy.

Constraining quantum gravity

According to Granot, these results “strongly disfavour” quantum-gravity theories in which the speed of light varies linearly with photon energy, which might include some variations of string theory or loop quantum gravity. “I would not use the term ‘rule out’,” he says, “as most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance. However, our observational requirement that such an energy scale would be well above the Planck energy makes such models unnatural.”

Granot says that far more precise measurements would be needed to probe the Planck scale for theories that postulate a quadratic or higher-order dependence of light speed on photon energy. He also points out that his group’s approach probes just one of a number of possible effects of Lorentz invariance violation, and that extremely precise constraints on this violation have been obtained by studying the possible dependence of light speed on photon polarization from X-rays emitted by the Crab nebula. But he adds that his group’s new limit is the most precise for simple energy dependence.

Giovanni Amelino-Camelia of the University of Rome La Sapienza believes that the latest work points to the coming of age of the field of quantum gravity phenomenology, with physicists finally able to submit theories of quantum gravity to some kind of experimental test. “Nature, with its uniquely clever ways, might have figured out how to quantize space–time without affecting relativity. But even a slim chance of being on the verge of a new revolution is truly exciting,” he says.

Naming the exoplanets

Artist’s concept of a “hot Jupiter” around the star HD 209458. (Credit: NASA)

By James Dacey

When the European Southern Observatory (ESO) recently announced the discovery of 32 new exoplanets, it struck me how quickly we can become numbed to the wonders of scientific discovery.

In 1995, astronomers generated a surge of excitement when they discovered the first planet orbiting a star other than our Sun. Over the past 14 years, astronomy has entered a dramatic new era, with more than 400 exoplanets now officially catalogued. With the recent launch of NASA’s Kepler mission, and with ESA considering its ambitious PLATO project, we may well have detected thousands of exoplanets within the next few years.

But as the discoveries now come thick and fast, have we become a bit blasé about exoplanets?

Well, one researcher in Germany has come up with an idea that could re-inject some of the initial excitement. Wladimir Lyra of the Max Planck Institute for Astronomy is proposing that we give the exoplanets names based on Greco-Roman mythology, ditching the dry cataloguing that has led to planet names like MOA-2008-BLG-310-L b.

Of course, the reason why the International Astronomical Union came up with its scientific naming system is that the heavens may well be awash with exoplanets, and it would soon become impractical to name every single one of them.

But as Lyra points out, every other class of astronomical body discovered to date has been given names, including the 15,000 asteroids and minor planets.

“Our place in the cosmos is not special in any way, so there is no reason why only the planetary objects in the solar system should be named,” writes Lyra, citing the Copernican principle.

Lyra’s proposed system would assign names based on the mythological stories of the constellations. For example, planets in Andromeda would be named after Andromeda’s myth and planets in Hercules after Hercules’ myth. Inevitably, there are a few caveats to the system, which Lyra explains in his paper on the arXiv preprint server.

‘High-payoff’ energy research receives US cash boost

 

A US programme designed to support “high-risk, high-payoff” research in the field of energy has handed out its first cash. The Advanced Research Projects Agency-Energy (ARPA-E) has funded 37 projects worth a total of $150m. The projects, which each receive up to $10m, cover a range of topics in renewable energy, from bacteria that can produce biofuel to energy storage using carbon nanotubes.

ARPA-E, run by the US Department of Energy (DOE), was created in 2007 to fund projects that could help to reduce emissions of greenhouse gases and the US’s dependence on foreign sources of energy.

In the 2009 Recovery and Reinvestment Act, ARPA-E received $400m – the first time it had been given funds. In April, over 3600 initial concept papers were sent to the agency, of which over 300 led to invitations to submit full applications. This caused a furore in September because researchers who had been rejected complained that they were given no reasons for the decision.

Over 40% of the money, which was announced by the DOE yesterday, has been awarded to small start-up companies. The largest grant, $9.15m, has gone to Foro Energy, a company based in Colorado. It will research new drilling technologies that could dig deep into basement rock, which could make it easier and faster to tap into geothermal power – energy extracted from heat stored in the Earth.

Other big winners include the Massachusetts-based FloDesign Wind Turbine, which will use $8.33m to carry out research into a new “shrouded” wind turbine that could reduce noise from the blades and deliver more energy.

Phononic Devices in Oklahoma, meanwhile, has won $3m to do research into thermoelectric materials, which work by converting temperature differences into an electric potential. Envia Systems, based in California, will use $4m to do research into creating more efficient lithium-ion batteries.

Research projects at academic institutions will take around 35% of the money. Iowa State University, for example, will use $4.37m to investigate whether it is viable to use algae to produce biofuels directly from sunlight and carbon dioxide, while researchers at the Massachusetts Institute of Technology will use $6.95m to look into liquid metal batteries that could be used for energy storage.

“I would have liked to have seen bolder projects funded,” says Martin Hoffert from New York University, whose team had failed to secure funding for research on the feasibility of space-based solar-power technology. “Some of them are interesting, but they will likely not lead to the transformative technologies that we need to tackle climate change.”

Plasmons shine a light on catalysis

Catalysts are materials that accelerate chemical reactions without being consumed themselves. They play a vital role in the chemical industry and are also used to remove harmful substances from vehicle exhausts. Now, physicists in Sweden have devised a new way of monitoring catalytic processes in “real-life” situations. The technique uses collective electron oscillations called “surface plasmons” and is said to be better than current analytical methods, which often rely on ultrahigh vacuum (UHV) techniques and single-crystal samples.

Many catalytic systems, including those used in cars, consist of surfaces covered with tiny pieces of catalyst over which gases flow. Although such systems operate at (or above) atmospheric pressure, they are usually studied in very different environments – namely in ultra-clean UHV chambers using large, single-crystal samples. Disparities between what occurs in real systems and in such experiments are known as the “pressure” and “materials” gaps.

Mind the gap

The new technique, developed by Bengt Kasemo and colleagues at Chalmers University, involves studying catalytic processes at realistic pressures and particle sizes. The team deposits about 30 nm of gold onto a glass slide, which is then dipped into a plastic colloid – tiny particles suspended in a liquid – that dries to form a pattern of circles on the surface of the gold. Etching away the exposed gold leaves gold disks about 100 nm in diameter. The sample is then coated with a thin insulating film – about 10 nm thick – and then with nanometre-sized pieces of the catalyst platinum, which cover 10–20% or less of the surface.

When light from an ordinary lamp is shone through the slide, radiation at certain wavelengths is absorbed to create surface plasmons – collective oscillations of electrons on the surfaces of the gold disks. The transmitted-light spectrum therefore has a dip at these wavelengths. Although physicists already knew that the position of this minimum shifts in the presence of platinum particles, Kasemo and colleagues discovered that the position also changes when certain molecules such as oxygen or carbon dioxide are adsorbed onto the surface of the platinum.
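In practice the read-out boils down to tracking the wavelength of that dip as gases flow over the sample. Below is a minimal sketch in Python of that step, assuming the transmitted spectrum has already been recorded as arrays of wavelength and intensity; the spectra here are synthetic stand-ins, not the group’s data.

import numpy as np

def dip_wavelength(wavelengths, transmission):
    """Return the wavelength at which the transmitted intensity is lowest."""
    return wavelengths[np.argmin(transmission)]

# Synthetic spectra: a plasmon dip near 700 nm that shifts by 2 nm on gas adsorption
wl = np.linspace(500, 900, 2001)                          # wavelength grid, nm
bare = 1 - 0.4 * np.exp(-((wl - 700.0) / 40) ** 2)        # catalyst before adsorption
adsorbed = 1 - 0.4 * np.exp(-((wl - 702.0) / 40) ** 2)    # after a gas adsorbs

shift = dip_wavelength(wl, adsorbed) - dip_wavelength(wl, bare)
print(f"dip shift: {shift:.1f} nm")   # the quantity followed during the reaction

A real analysis would fit the dip rather than take the raw minimum, but the principle is the same: a sudden jump in this shift signals a change in what is covering the platinum.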

Poisoned catalyst

The team used the technique to study several common catalytic reactions, including the oxidation of carbon monoxide to create carbon dioxide. For example, by passing pure oxygen across the sample and then introducing carbon monoxide (CO), they could follow how the CO turns into carbon dioxide simply by monitoring the position of the transmission minimum. In particular, the team found that when the CO concentration reached about 7% of the total gas, the minimum shifted suddenly.

This occurs because the platinum surface goes from being covered by oxygen – which supports catalysis – to being covered in carbon monoxide. The latter is referred to as the “poisoning” of the catalyst because it brings the reaction to a halt. Understanding exactly when this transition occurs is crucial in designing and operating catalytic systems. Kasemo and colleagues saw similar effects while using their sample to catalyze two other reactions – the oxidation of hydrogen and the conversion of nitrogen oxides to molecular nitrogen.

Sensitive to immediate environment

This new method relies on a well established technique of plasmonics, according to Bill Barnes of the University of Exeter in the UK. “The optical response is dominated by the localized surface-plasmon resonances supported by the gold disks, and such modes are well known to be very sensitive to their immediate optical environment,” he explained.

Niek van Hulst of the Institute of Photonic Sciences in Barcelona is also impressed by the technique. “The elegance of the method is that only a transparent glass with nanoparticles needs to be mounted in the reaction chamber – it’s simple and effective,” he said.

The work is reported in Science.

Rolling rucks in a rug

A short-lived experiment…

By Hamish Johnston

Here’s a question for you — what is the easiest way to move a large rug?

The answer, according to carpet fitters – as well as two papers in Physical Review Letters – is to create a “ruck” and then push it along the rug (see photo above).

The reason, apparently, is that the ruck is quasi-static, which means that it can be moved easily by a series of gentle pushes that don’t take it very far out of equilibrium.

I thought I would try it for myself, but the only rugs I could find were in the main entrance to Dirac House and there was too much foot traffic to do the experiment safely!

If you want to read more about rucks, check out Statics and Inertial Dynamics of a Ruck in a Rug by Dominic Vella, Arezki Boudaoud and Mokhtar Adda-Bedia, as well as Shape and Motion of a Ruck in a Rug by John M. Kolinski, Pascale Aussillous and L. Mahadevan.

The first paper begins with an investigation of the conditions needed for a static ruck to persist – rather than flatten out – once it’s been created. The team derived an equation describing the transition and tested it experimentally using several “rug” and “floor” materials, including a real rug on a wooden floor.

The equation, which had to be solved numerically, did a pretty good job of predicting which rucks survive and which collapse.

Ruck on a roll

The second paper looks at rucks “rolling” downhill, which the researchers studied by placing a thin latex rug on an inclined plane. The team found that a static ruck begins to roll when the plane is tilted above a critical angle, and it continues to roll until the tilt is reduced to a second, smaller critical angle.

From this, the team concluded that the coefficient of static rolling friction is greater than the coefficient of dynamic rolling friction.

You’re probably wondering what they mean by a rolling ruck?

To show that the ruck was rolling – rather than sliding – the team followed the paths of points on the rug as the ruck moved through them and found that they trace a cycloidal trajectory. In other words, the points moved as if they were on the rim of a rolling wheel.
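The cycloid itself is easy to write down, so the claim is simple to picture. Here is a short Python sketch (with an arbitrary rolling radius) of the path traced by a point on the rim of a rolling wheel:

import numpy as np

radius = 1.0                              # effective rolling radius, arbitrary units
theta = np.linspace(0, 4 * np.pi, 400)    # two full revolutions

# A point on the rim of a wheel rolling along the x-axis traces a cycloid
x = radius * (theta - np.sin(theta))
y = radius * (1 - np.cos(theta))

# The point returns to the floor (y = 0) once per revolution and is momentarily
# at rest there, just as a marked point on the rug stays put until the ruck reaches it.
print(f"y ranges from {y.min():.2f} to {y.max():.2f} (0 to 2R)")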

You’re probably also wondering why PRL has published two papers on rugs?

According to the first paper, rug rucks have “long proved to be a useful analogy in explaining a range of important physical phenomena”. These include dislocations in crystalline materials as well as wrinkle-driven motion, which has been observed in living organisms including inchworms.

Particles are back in the LHC!

Back in business: the first ion beam entering point 2 of the LHC, just before the ALICE detector

By Hamish Johnston

Physicists at CERN passed an important milestone (again!) last weekend by injecting the first beam of ions into the Large Hadron Collider (LHC) since the disastrous shutdown of September 2008.

According to a CERN press release, lead ions were placed in the clockwise beam pipe on Friday 23 October and guided past the ALICE detector before being dumped.

Later that day the first beam of protons followed the same route — and then on Saturday protons were sent through the LHCb detector.

CERN said “All settings and parameters showed a perfect functioning of the machine, which is preparing for its first circulating beam in the coming weeks”.

Matin Durrani recently spoke to CERN boss Rolf-Dieter Heuer about the switch-on of the LHC — you can watch the interview here or below, along with two other videos made at CERN.
