How a next-generation particle collider could unravel the mysteries of the Higgs boson

More than a decade following the discovery of the Higgs boson at the CERN particle-physics lab near Geneva in 2012, high-energy physics stands at a crossroads. While the Large Hadron Collider (LHC) is currently undergoing a major £1.1bn upgrade towards a High-Luminosity LHC (HL-LHC), the question facing particle physicists is what machine should be built next – and where – if we are to study the Higgs boson in unprecedented detail in the hope of revealing new physics.

Several designs exist, one of which is a huge 91 km circumference collider at CERN known as the Future Circular Collider (FCC). But new technologies are also offering tantalising alternatives to such large machines, notably a muon collider. As CERN celebrates its 70th anniversary this year, Michael Banks talks to Tulika Bose from the University of Wisconsin–Madison, Philip Burrows from the University of Oxford and Tara Shears from the University of Liverpool about the latest research on the Higgs boson, what the HL-LHC might discover and the range of proposals for the next big particle collider.

Tulika Bose, Philip Burrows and Tara Shears

What have we learnt about the Higgs boson since it was discovered in 2012?

Tulika Bose (TB): The question we have been working towards in the past decade is whether it is a “Standard Model” Higgs boson or a sister, or a cousin or a brother of that Higgs. We’ve been working really hard to pin it down by measuring its properties. All we can say at this point is that it looks like the Higgs that was predicted by the Standard Model. However, there are still so many questions we can’t answer. Does it decay into something more exotic? How does it interact with all of the other particles in the Standard Model? While we’ve understood some of these interactions, there are still many more particle interactions with the Higgs that we don’t quite understand. Then of course, there is a big open question about how the Higgs interacts with itself. Does it, and if so, what is its interaction strength? These are some of the exciting questions that we are currently trying to answer at the LHC.

So the Standard Model of particle physics is alive and well?

TB: The fact that we haven’t seen anything exotic that has not been predicted tells us that we need to be looking at a different energy scale. That’s one possibility – we just need to go to much higher energies. The other alternative is that we’ve been looking in the standard places. Maybe there are particles that we haven’t yet been able to detect because they couple incredibly weakly to the Higgs.

Has it been disappointing that the LHC hasn’t discovered particles beyond the Higgs?

Tara Shears (TS): Not at all. The Higgs alone is such a huge step forward in completing our picture and understanding of the Standard Model – providing, of course, it is a Standard Model Higgs. And there’s so much more that we’ve learned aside from the Higgs, such as understanding the behaviour of other particles – for example, the differences between matter and antimatter in charm quarks.

How will the HL-LHC take our understanding of the Higgs forward?

TS: One way to understand more about the Higgs is to amass enormous amounts of data to look for very rare processes, and this is where the HL-LHC is really going to come into its own. It is going to allow us to extend those investigations beyond the particles we’ve been able to study so far, making our first observations of how the Higgs interacts with lighter particles such as the muon, and of how the Higgs interacts with itself. We hope to see that with the HL-LHC.

What is involved with the £1.1bn HL-LHC upgrade?

Philip Burrows (PB): The LHC accelerator is 27 km long and about 90% of it is not going to be affected. One of the most critical aspects of the upgrade is to replace the magnets in the final focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and then focus them down to very small sizes of the order of 10 microns in cross section. This upgrade includes the installation of brand new state-of-the-art niobium-tin (Nb3Sn) superconducting focusing magnets.

Engineer working on the HL-LHC upgrade in the LHC tunnel

What is the current status of the project?

PB: The schedule involves shutting down the LHC for roughly three to four years to install the high-luminosity upgrade, which will then turn on towards the end of the decade. The current CERN schedule has the HL-LHC running until the end of 2041. So there’s another 10 years plus of running this upgraded collider and who knows what exciting discoveries are going to be made.

TS: One thing to think about concerning the cost is that the timescale of use is huge and so it is an investment for a considerable part of the future in terms of scientific exploitation. It’s also an investment in terms of potential spin-out technology.

In what way will the HL-LHC be better than the LHC?

PB: The measure of the performance of the accelerator is conventionally given in terms of luminosity, and it’s defined as the number of particles that cross at these collision points per square centimetre per second. That number is roughly 10³⁴ with the LHC. With the high-luminosity upgrade, however, we are talking about making roughly an order of magnitude increase in the total data sample that will be collected over the next decade or so. So in other words, we’ve only got 10% or so of the total data sample so far in the bag. After the upgrade, there’ll be another factor of 10 in data that will be collected, and that is a completely new ball game in terms of the statistical accuracy of the measurements that can be made and the sensitivity and reach for new physics.
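
To put those numbers in context, here is a minimal back-of-envelope sketch of how luminosity translates into recorded Higgs bosons. The running time and cross-section below are illustrative assumptions for this article, not official CERN figures.

```python
# Back-of-envelope: turning luminosity into event counts.
# All numbers are rough orders of magnitude, not official figures.

LUMI = 1e34              # cm^-2 s^-1, roughly the LHC design luminosity
SECONDS_PER_YEAR = 1e7   # a typical "accelerator year" of actual running

# Integrated luminosity per year in inverse femtobarns
# (1 fb = 1e-39 cm^2, so 1 fb^-1 = 1e39 cm^-2).
int_lumi_fb = LUMI * SECONDS_PER_YEAR / 1e39
print(f"~{int_lumi_fb:.0f} fb^-1 collected per year")

# Expected events = cross-section x integrated luminosity.
# ~55 pb (= 55,000 fb) is a commonly quoted total Higgs production
# cross-section at 13 TeV -- an illustrative assumption here.
sigma_higgs_fb = 55e3
print(f"~{sigma_higgs_fb * int_lumi_fb:.1e} Higgs bosons produced per year")
```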

Looking beyond the HL-LHC, particle physicists seem to agree that the next particle collider should be a Higgs factory – but what would that involve?

TB: Even at the end of the HL-LHC, there will be certain things we won’t be able to do at the LHC, and that’s for several reasons. One is that the LHC is a proton–proton machine, and when you’re colliding protons you end up with a rather messy environment in comparison to the clean collisions between electrons and positrons. That cleaner environment allows you to make certain measurements that will not be possible at the LHC.

So what sort of measurements could you do with a Higgs factory?

TS: One is to find out how much the Higgs couples to the electron. There’s no way we will ever find that out with the HL-LHC – it’s just too rare a process to measure – but with a Higgs factory, it becomes a possibility. And this is important not because it’s stamp collecting, but because understanding why the mass of the electron, which the Higgs boson is responsible for, has that particular value is of huge importance to our understanding of the size of atoms, which underpins chemistry and materials science.

PB: Although we often call this future machine a Higgs factory, it has far more uses beyond making Higgs bosons. If you were to run it at higher energies, for example, you could make pairs of top quarks and anti-top quarks. And we desperately want to understand the top quark, given it is the heaviest fundamental particle that we are aware of – it’s roughly 180 times heavier than a proton. You could also run the Higgs factory at lower energies and carry out more precision measurements of the Z and W bosons. So it’s really more than a Higgs factory. Some people say it’s the “Higgs and the electroweak boson factory” but that doesn’t quite roll off the tongue in the same way.

Artist concept of the International Linear Collider

While it seems there’s a consensus on a Higgs factory, there doesn’t appear to be one regarding building a linear or circular machine?

PB: There are two main designs on the table today – circular and linear. The motivation for linear colliders stems from the problem of sending electrons and positrons round in a circle – they radiate photons. So as you go to higher energies in a circular collider, electrons and positrons radiate that energy away in the form of synchrotron radiation. It was felt back in the late 1990s that circular electron–positron colliders had reached the end of the road because of the limitations of synchrotron radiation. But the Higgs boson, discovered at 125 GeV, turned out to be lighter than some had predicted. This meant that an electron–positron collider would only need a centre-of-mass energy of about 250 GeV. Circular electron–positron colliders then came back into vogue.
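
The scaling behind that argument can be made concrete with a short sketch: the energy radiated per turn grows as the fourth power of beam energy divided by the fourth power of particle mass. The prefactor below is the standard electron value; the ~10 km bending radius for a 91 km ring is a rough assumption, since straight sections take up part of the tunnel.

```python
# Synchrotron energy loss per turn: U ~ E^4 / (m^4 * rho).
# Electrons use the standard prefactor; other species are scaled
# by (m_e / m)^4.

C_E = 8.85e-5                              # GeV per turn = C_E * E^4[GeV] / rho[m]
M_E, M_MU, M_P = 0.000511, 0.1057, 0.938   # particle masses in GeV

def loss_per_turn(E_gev, rho_m, mass_gev):
    return C_E * E_gev**4 / rho_m * (M_E / mass_gev) ** 4

rho = 10_000   # metres, rough assumption for a 91 km ring
for name, mass in [("electron", M_E), ("muon", M_MU), ("proton", M_P)]:
    print(f"{name:8s}: {loss_per_turn(125, rho, mass):.2e} GeV per turn")
```

At a 125 GeV beam energy the electron loses of order a few GeV every turn, while the proton's loss is more than twelve orders of magnitude smaller, which is why protons can be pushed to far higher energies in a ring.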

TS: The drawback with a linear collider is that the beams are not recirculated in the same way as they are in a circular collider. Instead, you have “shots”, so it’s difficult to reach the same volume of data in a linear collider. Yet it turns out that both of these solutions are really competitive with each other and that’s why they are still both on the table.

PB: Yes, while a circular machine may have two, or even four, main detectors in the ring, at a linear machine the beam can be sent to only one detector at a given time. So having two detectors means you have to share the luminosity, so each would get notionally half of the data. But to take an automobile analogy, it’s kind of like arguing about the merits of a Rolls-Royce versus a Bentley. Both linear and circular are absolutely superb, amazing options and some have got bells and whistles over here and others have got bells and whistles over there, but you’re really arguing about the fine details.

CERN seems to have put its weight behind the Future Circular Collider (FCC) – a huge 91 km circumference circular collider that would cost £12bn. What’s the thinking behind that?

TS: The cost is about one-and-a-half times that of the Channel Tunnel so it is really substantial infrastructure. But bear in mind it is for a facility that’s going to be used for the remainder of the century, for future physics, so you have to keep that longevity in mind when talking about the costs.

TB: I think the circular collider has become popular because it’s seen as a stepping stone towards a proton–proton machine operating at 100 TeV that would use the same infrastructure and the same large tunnel, and begin operation in the 2070s after the Higgs-factory phase. That would allow us to really pin down the Higgs interaction with itself and it would also be the ultimate discovery machine, allowing us to discover particles at the 30–40 TeV scale, for example.

Artist concept of the Future Circular Collider

What kind of technologies will be needed for this potential proton machine?

PB: The big issue is the magnets, because you have to build very strong bending magnets to keep the protons going round on their 91 km circumference trajectory. The magnets at the LHC are 8 T but some think the magnets you would need for the proton version of the FCC would be 16–20 T. And that is really pushing the boundaries of magnet technology. Today, nobody really knows how to build such magnets. There’s a huge R&D effort going on around the world and people are constantly making progress. But that is the big technological uncertainty. Yet if we follow the model of an electron–positron collider first, followed by a proton–proton machine, then we will have several decades in which to master the magnet technology.
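
A quick magnetic-rigidity estimate shows where the 16–20 T figure comes from. For a singly charged particle, momentum in GeV/c is roughly 0.3 times the field in tesla times the bending radius in metres; the ~10 km radius inside a 91 km ring is a rough assumption for illustration.

```python
# Magnetic rigidity: p [GeV/c] = 0.3 * B [T] * rho [m] for unit charge.
# Assume 50 TeV per beam (100 TeV collisions) and a ~10 km bending
# radius inside a 91 km ring (rough assumption).
p_gev = 50_000
rho_m = 10_000
B = p_gev / (0.3 * rho_m)
print(f"required dipole field: {B:.1f} T")   # ~16.7 T, in the 16-20 T range
```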

With regard to novel technology, the influential US Particle Physics Project Prioritization Panel, known as “P5”, called for more research into a muon collider, calling it “our muon shot”. What would that involve?

TB: Yes, I sat on the P5 panel that published a report late last year recommending a course of action for US particle physics for the coming 20 years. One of those recommendations involves carrying out more research and development into a muon collider. As we already discussed, an electron–positron collider in a circular configuration suffers from a lot of synchrotron radiation. The question is whether we can instead use a fundamental elementary particle that is more massive than the electron. In that case a muon collider could offer the best of both worlds: the advantages of an electron machine in terms of clean collisions, but also the ability to reach larger energies like a proton machine. However, the challenge is that the muon is very unstable and decays quickly. This means you are going to have to create, focus and collide the muons before they decay. A lot of R&D is needed in the coming decades but perhaps a decision could be taken on whether to go ahead by the 2050s.
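
Time dilation is what makes this conceivable at all: in the lab frame the muon lifetime is stretched by the Lorentz factor. A minimal sketch, assuming a 10 TeV machine (5 TeV per beam), one of the options discussed in the muon-collider literature:

```python
# Muon survival: the lab-frame lifetime is gamma times the rest lifetime.
TAU_REST = 2.2e-6    # s, muon lifetime at rest
M_MU = 0.1057        # GeV, muon mass
C = 3.0e8            # m/s, speed of light

gamma = 5_000 / M_MU            # Lorentz factor at 5 TeV beam energy
tau_lab = gamma * TAU_REST
print(f"gamma ~ {gamma:.0f}")
print(f"lab-frame lifetime ~ {tau_lab * 1e3:.0f} ms")
print(f"decay length ~ {C * tau_lab / 1e3:.0f} km of flight path")
```

Even so, the muons must be produced, cooled into tight beams and accelerated within a tenth of a second or so, which is the core of the R&D challenge.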

And potentially, if built, it would need a tunnel of similar size to the existing LHC?

TB: Yes. The nice thing about the muon collider is that you don’t need a massive 90 km tunnel so it could actually fit on the existing Fermilab campus. Perhaps we need to think about this project in a global way because this has to be a big global collaborative effort. But whatever happens it is exciting times ahead.

  • Tulika Bose, Philip Burrows and Tara Shears were speaking on a Physics World Live panel discussion about the future of particle physics held on 26 September 2024. This Q&A is an edited version of the event, which you can watch online now

Objects with embedded spins could test whether quantum measurement affects gravity

A new experiment to determine whether or not gravity is affected by the act of measurement has been proposed by theoretical physicists in the UK, India and the Netherlands. The experiment is similar to one outlined by the same group in 2017 to test whether or not two masses could become quantum-mechanically entangled by gravity, but the latest version could potentially be easier to perform.

An important outstanding challenge in modern theoretical physics is how to reconcile Einstein’s general theory of relativity – which describes gravity – with quantum theory, which describes just about everything else in physics.

“You can quantize gravity,” explains Daniel Carney of the Lawrence Berkeley National Laboratory in California, who was not involved in this latest research. However, he adds, “Gravitational wave detection is extremely quantum mechanical…[Gravity is] a normal quantum field theory and it works fine: it just predicts its own breakdown near black hole singularities and the Big Bang and things like that.”

Multiple experimental groups around the world seek to test whether the gravitational field can exist in non-classical states that would be fundamentally inconsistent with general relativity. If it could not, it would suggest that the reason quantum gravity breaks down at high energies is that gravity is not a quantum field. Performing these tests, however, is extraordinarily difficult because it requires objects that are both small enough to be detectably affected by the laws of quantum mechanics and yet massive enough for their gravitation to be measured.

Hypothetical analogy

Now, Sougato Bose of University College London and colleagues have proposed a test to determine whether or not the quantum state of a massive particle is affected by the detection of its mass. The measurement postulate in quantum mechanics says that it should be affected. Bose offers a hypothetical analogy: a photon passes through an interferometer, splitting its quantum wavefunction into two paths. Both paths interact equally with a mass in a delocalized superposition state. When the paths recombine, the output photon always emerges from the same port of the interferometer. If, however, the position of the mass is detected using another mass, the superposition collapses, the photon wavefunction no longer interacts equally with the mass along each arm, and the photon may consequently emerge from the other port.
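
The logic of the analogy can be captured in a toy calculation. The sketch below is an illustration rather than the authors’ own model: it treats the interferometer as two 50:50 beamsplitters and shows how which-path detection changes the output-port statistics.

```python
import numpy as np

# Toy Mach-Zehnder interferometer. The coupling to the mass is taken to
# be identical in both arms, so it drops out in the unmeasured case.
# Indices 0 and 1 label the two paths / output ports.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50:50 beamsplitter

psi_in = np.array([1.0, 0.0])                  # photon enters port 0

# Coherent case: no which-path information, interference survives.
print("no measurement:", np.abs(H @ H @ psi_in) ** 2)   # [1, 0]: always port 0

# Measured case: detecting the mass's position collapses the photon
# onto one path; average the two collapsed outcomes incoherently.
probs = np.zeros(2)
for path in (0, 1):
    p_path = abs((H @ psi_in)[path]) ** 2      # probability of this path
    collapsed = np.zeros(2)
    collapsed[path] = 1.0
    probs += p_path * np.abs(H @ collapsed) ** 2
print("with measurement:", probs)              # [0.5, 0.5]: either port
```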

However, this conceptually simple test is experimentally impracticable. For a mass to exert a gravitational field sufficient for another mass to detect it, it needs to be at least 10⁻¹⁴ kg – about a micron in size. “A micron-sized mass does not go into a quantum superposition, because a beamsplitter is like a potential barrier, and a large mass doesn’t tunnel across a barrier of sufficient height,” explains Bose.

The solution to this problem, according to Bose and colleagues, is to use a small diamond crystal containing a single nitrogen vacancy centre – which contains a quantum spin. At the beginning of the experiment, a microwave pulse would initialize the vacancy into a spin superposition. The crystal would then pass through a Stern–Gerlach interferometer, where it would experience a magnetic field gradient.

Nitrogen vacancy centres are magnetic, so opposite spins would be deflected in opposite directions by the magnetic field gradient. Crystals with spins in superposition states would be deflected both ways simultaneously. The spins could then be inverted using another microwave pulse, causing the crystals to recombine with themselves without providing any information about which path they had taken. However, if a second interferometer were placed close enough to detect the gravitational field produced by the first mass, it would collapse the superposition, providing “which path” information and affecting the result measured by the first interferometer.

Stern–Gerlach interferometers

In 2017, Bose and colleagues proposed a similar setup to test whether or not the gravitational attraction between two masses could lead to quantum entanglement of spins in two Stern–Gerlach interferometers. However, Bose argues the new test could be easier to perform, as it would not require measurement of both spins simultaneously – simply for a second interferometer to perform some kind of gravitational detection of the first mass’s position. “If you see a difference, then you can immediately conclude that an update on a quantum measurement is happening.”

Moreover, Bose says that the inevitable invasiveness of a measurement is a different postulate of quantum mechanics from the formation of quantum entanglement between the two particles as a result of their interaction. In a hypothetical theory going beyond both quantum mechanics and general relativity, one of them could hold but not the other. The researchers are now investigating potential ways to implement their proposal in practice – something Bose predicts will take at least 15 years.

Carney sees some merit in the proposal. “I do like the one-sided test nature of things like this, and they are, in some sense, easier to execute,” he says, “but the reason these things are so hard is that I need to take a small system and measure its gravitational field, and this does not avoid that problem at all.”

A paper describing the research has been accepted for publication in Physical Review Letters and is available on the arXiv pre-print server.

Flocking together: the physics of sheep herding and pedestrian flows

In this episode of Physics World Stories, host Andrew Glester shepherds you through the fascinating world of crowd dynamics. While gazing at a flock of sheep or meandering through a busy street, you may not immediately think of physics – but there is far more of it at play than you might think. Give the episode a listen to discover the surprising science behind how animals and people move together in large groups.

The first guest, Philip Ball, a UK-based science writer, explores the principles that underpin the movement of sheep in flocks. Insights from physics can even be used to inform herding tactics, whereby dogs are guided – usually through whistles – to control flocks of sheep and direct them towards a chosen destination. For even more detail, check out Ball’s recent Physics World feature “Field work – the physics of sheep, from phase transitions to collective motion”.

Next, Alessandro Corbetta, from Eindhoven University of Technology in the Netherlands, talks about his research on pedestrian flow that won him an Ig Nobel Prize. Corbetta explains how his research field is helping us understand – and manage – the movements of human crowds in bustling spaces such as museums, transport hubs and stadia. Plus, he shares how winning the Ig Nobel has enabled the research to reach a far broader audience than he initially imagined.

Confused by the twin paradox? Maybe philosophy can help

Once upon a time, a man took a fast rocket to a faraway planet. He soon missed his home world and took a fast rocket back. His twin sister, a physicist, was heartbroken, saying that they were no longer twins and that her sibling was now younger than she due to the phenomenon of time dilation.

But her brother, who was a philosopher, said that they had experienced time equally and so were truthfully the same age. And verily, physicists and philosophers have quarrelled ever since – physicists speaking of clocks and philosophers of time.

This scenario illustrates a famously counterintuitive implication of the special theory of relativity known as the “twin paradox”. It’s a puzzle that two physicists (Adam Frank and Marcello Gleiser) and a philosopher (Evan Thompson) have now taken up in a new book called The Blind Spot. The book shows how bound up philosophy and physics are and how its practitioners can so easily misunderstand each other.

Got time?

Albert Einstein implicitly proposed time dilation in his famous 1905 paper “On the electrodynamics of moving bodies” (Ann. Phys. 17 891), which inaugurated the special theory of relativity. If two identical clocks are synchronized and one then travels at a speed relative to the other and back, the theory implied, then when the clocks are compared one would see a difference in the time registered by the two. The clock that had travelled and returned would have run slower and therefore be “younger”.
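
The size of the effect follows from the standard formula, τ = t√(1 − v²/c²). A minimal sketch with illustrative numbers:

```python
import math

# Proper time for the travelling twin: tau = t * sqrt(1 - v^2/c^2).
# Illustrative numbers: a 10-year round trip (Earth time) at 0.8c.
v_over_c = 0.8
t_earth = 10.0                                   # years for the homebody twin
tau = t_earth * math.sqrt(1 - v_over_c ** 2)     # years for the traveller
print(f"traveller ages {tau:.1f} years; twins now differ by "
      f"{t_earth - tau:.1f} years")              # 6.0 and 4.0
```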

At around the same time that Einstein was putting together the theory of relativity, the French philosopher Henri Bergson (1859–1941) was working out a theory of time. In Time and Free Will, his doctoral thesis published in 1889, Bergson argued that time, considered most fundamentally, does not consist of dimensionless and identical instants.

For humans to experience the world, time cannot be made of abstract instants stuck together. Humans live in a temporal flow that Bergson called “duration”, and only duration makes it possible to conceive and measure a “clock-time” consisting of instants. Duration itself cannot be measured; any measurement presupposes duration.

These two accounts of time provided the perfect opportunity to display the relation of physics and philosophy. On the one hand was Einstein’s special theory of relativity, which relates measured times of objects moving with respect to each other; on the other was Bergson’s account of the dependence of measured times on duration.

Unfortunately, as the authors of The Blind Spot describe, the opportunity was squandered by off-hand comments during an impromptu exchange between Einstein and Bergson. The much-written-about encounter, which took place in Paris in 1922, saw Einstein speaking to the Paris Philosophical Society, with Bergson in the audience.

Coaxed into speaking at a slow spot in the meeting, Bergson mentioned some ideas from his upcoming book Duration and Simultaneity. While relativity may be complete as a mathematical theory, he said, it depends on duration, or the experience of time itself, which escapes measurement and indeed makes “clock-time” possible.

Einstein was dismissive, calling Bergson’s notion “psychological”. To Einstein, duration is an emotion, the response of a human being to a situation rather than part and parcel of what it means to experience a situation.

Mutual understanding was still possible, had Einstein and Bergson pursued the issue with rigorous and open minds. But the occasion came to an unnecessary standstill when Bergson slipped up in remarks about the twin paradox.

Bergson argued that duration underlies the experience of each twin and neither would experience any dilation of it; neither would experience time as “slowing down” or “speeding up”. This much was true. But Bergson went on to say that duration was therefore a continuum, and any intervals of time in it are abstractions made possible by duration.

Bergson thought that duration is single. Moreover, the reference frames of the twins are symmetric, for the twins are in reference frames moving with respect to each other, not with respect to an absolute frame or universal time. An age difference between the twins, Bergson thought, is purely mathematical and only on their clocks; it might show up when the twins are theorizing, but not in real life.

This was a mistake; Einstein’s theory does indeed entail that the twins have aged differently. One twin has switched directions, jumping from a frame moving away to one in the reverse direction. Frame-switching requires acceleration, and the twin who has undergone it has broken the symmetry. Einstein and other physicists, noting Bergson’s misunderstanding of relativity, then felt justified in dismissing Bergson’s idea of duration and of how measurement depended on it.

Many philosophers, from Immanuel Kant to Alfred North Whitehead, have demonstrated that scientific activity arises from and depends on something like duration. What is innovative about The Blind Spot is that it uses such philosophical arguments to show how specific paradoxes and problems arise in science when the role of experience is overlooked.

“We must live the world before we conceptualize it,” the authors say. Their book title invokes an analogy with the optic nerve, which makes seeing possible only by creating a blind spot in the visual field. Similarly, the authors write, aspects of experience such as duration make things like measurement possible only by being invisible, even to scientific data-taking and theorizing. Duration cannot itself be measured and precedes being able to practise science – yet it is fundamental to science.

The critical point

The Blind Spot does not eliminate what’s enigmatic about the twin paradox but shows more clearly what that enigma is. An everyday assumption about time is that it’s Newtonian: time is universal and can be measured as flowing everywhere the same. Bergson found that this is wrong, for duration allows humans to interact with the world before they can measure time and develop theories about it. But it turns out that there is no one duration, and relativity theory captures the structure of the relations between durations.

The two siblings may be very different, but with help they can understand each other.

Physics-based model helps pedestrians and cyclists avoid city pollution

Computer rendering of a neon-blue car with airflow lines passing over it and a cloud of emissions trailing behind it, labelled "brake dust ejection" near the front wheels and "tyre and road dispersion" in the middle

Scientists at the University of Birmingham, UK, have used physics-based modelling to develop a tool that lets cyclists and pedestrians visualize certain types of pollution in real time – and take steps to avoid it. The scientists say the data behind the tool could also guide policymakers and urban planners, helping them make cities cleaner and healthier.

As well as the exhaust from their tailpipes, motor vehicles produce particulates from their tyres, their brakes and their interactions with the road surface. These particulate pollutants are known health hazards, causing or contributing to chronic conditions such as lung disease and cardiovascular problems. However, it is difficult to track exactly how they pass from their sources into the environment, and the relationships between pollution levels and factors like vehicle type, speed and deceleration are hard to quantify.

Large-eddy simulations

In the new study, which is detailed in Royal Society Open Science, researchers led by Birmingham mechanical engineer Jason Stafford developed a tool that answers some of these questions in a way that helps both members of the public and policymakers to manage the associated risks. Among other findings, they showed that the risk of being exposed to non-exhaust pollutants from vehicles is greatest when the vehicles brake – for example at traffic lights, zebra crossings and bus stops.

“We used large-eddy simulations to predict turbulent air flow around road vehicles for cruising and braking conditions that are observed in urban environments,” Stafford explains. “We then coupled these to a set of pollution transport (fluid dynamics) equations, allowing us to predict how harmful particle pollutants from the different emission sources (for example, brakes, tyres and roads) are transported to the wider pedestrian/cyclist environment.”
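
The transport step can be illustrated with a far simpler model than the team’s large-eddy simulations. The sketch below is a generic one-dimensional advection–diffusion toy with made-up parameters, showing how a pollutant puff is carried downwind while spreading out:

```python
import numpy as np

# Toy 1D advection-diffusion model of pollutant transport:
# dc/dt + u * dc/dx = D * d2c/dx2. Illustrative parameters only; the
# real study couples 3D transport equations to large-eddy simulations.
nx, L = 200, 100.0                 # grid points, domain length (m)
dx = L / nx
u, D = 1.0, 0.5                    # wind speed (m/s), eddy diffusivity (m^2/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # conservative explicit time step

x = np.linspace(0, L, nx)
c = np.exp(-((x - 20.0) ** 2) / 4.0)      # puff released near x = 20 m

for _ in range(500):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central diffusion
    c += dt * (adv + dif)

print(f"peak concentration {c.max():.2f} has drifted to x ~ {x[c.argmax()]:.0f} m")
```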

A visible problem

The researchers’ next goal was to help people “see” these so-called PM2.5 pollutants (which, at 2.5 microns or less in diameter, cannot be detected with the naked eye) in their everyday world without alarming them unduly and putting them off walking and cycling in urban spaces altogether. To this end, they developed an immersive reality tool that makes the pollutants visible in space and time, allowing users to observe the safest distances for themselves. They then demonstrated this tool to members of the general public in the centre of Birmingham, which is the UK’s second most populous city and its second largest contributor to PM2.5 emissions from brake and tyre wear.

The people who tried the tool were able to visualize the pollution data and identify pollutant sources. They could also understand how to navigate urban spaces to reduce their exposure to these pollutants, Stafford says.

“It was very exciting to find that this approach was effective no matter what a person’s pre-existing knowledge of non-exhaust emissions was, or what their educational background was,” he tells Physics World.

Clear guidance and a framework via which to convey complex physicochemical data

Stafford says the team’s work provides clear guidance to governments, city councils and urban planners on the interface between road transport emissions and public health. It also creates a framework for conveying complex physicochemical data in a way that members of the public and decision-makers can understand, even if they lack scientific training.

“This is a crucial component if we are to help society,” Stafford says. Longitudinal studies, he adds, would help him and his colleagues understand whether the method actually leads to behavioural change for vehicle drivers or pedestrians.

Looking forward, the Birmingham team aims to reduce the computing complexity required to build the model. At present, the numerical simulations are intensive and require high-performance facilities to solve the governing equations and produce data. “These constraints limited us to constructing a one-way virtual environment,” Stafford says. “Techniques that would provide close to real-time computing may open up two-way interactions that allow users to quickly change their environment and observe how this affects their exposure to pollution.”

Stafford says the team’s physics-informed immersive approach could also be extended beyond non-exhaust emissions to, for example, visualize indoor air quality and how it interacts with the built environment, where computational modelling tools are regularly used to inform thermal comfort and ventilation.

Liquid-crystal bifocal lens excels at polarization and edge imaging

A bifocal lens that can adjust the relative intensity of its two focal points using an applied electric field has been developed by Fan Fan and colleagues at China’s Hunan University. The lens features a bilayer structure made of liquid crystal materials. Each layer responds differently to the applied electric field, splitting incoming light into oppositely polarized beams.

Bifocal lenses work by combining two distinct lens segments into one, each with a different focal length – the distance from the lens to its focal point. The combined lens therefore brings light to a focus at two distinct points.

While bifocals are best known for their use in vision correction, recent advances in optical materials are expanding their application in new directions. In their research, Fan’s team recognized how recent progress in holography held the potential for further innovations in the field.

Inspired by holography

“Researchers have devised many methods to improve the information capacity of holographic devices based on multi-layer structures,” says Fan. “We thought this type of structure could be useful beyond the field of holographic displays.”

The Hunan team therefore investigated how the layers within these structures could manipulate the polarization states of light in different ways, fabricating their bifocal lens from liquid crystal materials.

Liquid crystals comprise molecules that can flow, as in a liquid, but can also maintain specific orientations, like molecules in a crystal. These properties make liquid crystals ideal for modulating light.

Bilayer benefits

“Most liquid-crystal-based devices are made from single-layer structures, but this limits light-field modulation to a confined area,” Fan explains. “To realize more complex and functional modulation of incident light, we used bilayer structures composed of a liquid crystal cell and a liquid crystal polymer.”

In the cell, the liquid crystal layer is sandwiched between two transparent substrates, creating a thin, effectively two-dimensional device. When a voltage is applied across the cell, the molecules align along the electric field. In contrast, the molecules in the liquid-crystal polymer are much larger, and their alignment is not affected by the applied voltage.

Fan’s team took advantage of these differences, finding that each layer modulates circularly polarized light in different ways. As a result, the lens could split the light into left-handed and right-handed circularly polarized components. Crucially, each of these components is focused at a different point. By adjusting the voltage across the lens, the researchers could easily control the difference in intensity at the two focal points.

In the past, achieving this kind of control would have been possible only by mechanically rotating the lens layers with respect to each other. The new design is much simpler and makes it easier and more efficient to adjust the intensities at the two focal points.
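
The tuning principle can be sketched with Jones calculus. In the toy model below – an illustration, not the team’s actual design – a voltage-controlled retardance δ redistributes light between left- and right-circular components, whose intensities set the brightness of the two foci:

```python
import numpy as np

# Jones-calculus toy: a variable retardance delta (standing in for the
# applied voltage) splits incident circular light between left- and
# right-circular components, which a polarization-sensitive lens would
# focus at two different points. Retardance values are illustrative.
LCP = np.array([1, 1j]) / np.sqrt(2)   # left-circular polarization
RCP = np.array([1, -1j]) / np.sqrt(2)  # right-circular polarization

def retarder(delta):
    """Waveplate of retardance delta with horizontal fast axis."""
    return np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])

for delta in (0.0, np.pi / 2, np.pi):
    out = retarder(delta) @ LCP
    I1 = abs(np.vdot(LCP, out)) ** 2   # power directed to focus 1
    I2 = abs(np.vdot(RCP, out)) ** 2   # power directed to focus 2
    print(f"delta = {delta:4.2f} rad -> focus 1: {I1:.2f}, focus 2: {I2:.2f}")
```

Running it shows the split move smoothly from all the light in one focus (δ = 0) to an even split (δ = π/2) to all of it in the other (δ = π), with no moving parts.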

Large separation distance

To demonstrate this advantage, Fan’s team used their bifocal lens in two types of imaging experiments. One was polarization imaging, which analyses differences in how left-handed and right-handed circularly polarized light interact with a sample. This method typically requires a large separation distance between focal points.

They also tested the lens in edge imaging, which enhances the clarity of boundaries in images. This requires a much smaller separation distance between focal points.

By adjusting the geometric configurations within the bilayer structure, Fan’s team achieved tight control over the separation between the focal points. In both polarization and edge imaging experiments, the bifocal lens performed well, closely matching the theoretical performance predicted by their simulations. These promising results suggest that the lens could have a wide range of applications in optical systems.

Based on their initial success, Fan and colleagues are now working to reduce the manufacturing costs of their multi-layer bifocal lenses. If successful, this would allow the lens to be used in a wide range of research applications.

“We believe that the light control mechanism we created using the multilayer structure could also be used to design other optical devices, including holographic devices and beam generators, or for optical image processing,” Fan says.

The lens is described in Optics Letters.

Century-old photoelectric effect inspires a new search for quantum gravity

According to quantum mechanics, our universe is like a Lego set. All matter particles, as well as particles such as light that act as messengers between them, come in discrete blocks of energy. By rearranging these blocks, it is possible to build everything we observe around us.

Well, almost everything. Gravity, a crucial piece of the universe, is missing from the quantum Lego set. But while there is still no quantum theory of gravity, the challenge of detecting its signatures now looks a little more manageable thanks to a proposed experiment that takes inspiration from the photoelectric effect, which Albert Einstein used to prove the quantum nature of light more than a century ago.

History revisited

Quantum mechanics and general relativity each, independently, provide accurate descriptions of our universe – but only at short and long distances, respectively. Bridging the two is one of the deepest problems facing physics, with tentative theories approaching it from different perspectives.

However, all efforts to describe a quantum theory of gravity agree on one thing: if gravity is quantum, then it, too, must have a particle that carries its force in discrete packages, just as other forces do.

In the latest study, which is described in Nature Communications, Germain Tobar and Sreenath K Manikandan of Sweden’s Stockholm University, working with Thomas Beitel and Igor Pikovski of the Stevens Institute of Technology, US, propose a new experiment that could show that gravity does indeed come in these discrete packages, which are known as gravitons.

The principle behind their experiment parallels that of the photoelectric effect, in which light shining on a material causes it to emit discrete packets of energy, one particle at a time, rather than in a continuous spectrum. Similarly, the Stockholm–Stevens team proposes using massive resonant bars that have been cooled and tuned to vibrate if they absorb a graviton from an incoming gravitational wave. When this happens, the bar’s quantum state would undergo a transition that can be detected by a quantum sensor.

“We’re playing the same game as the photoelectric effect, except instead of photons – quanta of light – energy is exchanged between a graviton and the resonant bar in discrete steps,” Pikovski explains.
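
The photoelectric analogy makes the energy scale easy to estimate: a single graviton carries E = hf. A minimal sketch, assuming a kilohertz-scale bar resonance (a typical figure for this type of detector, not one taken from the paper):

```python
# One graviton's energy at a bar's resonant frequency: E = h * f,
# the same quantisation rule Einstein applied to photons.
h = 6.626e-34        # Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
f = 1e3              # Hz, assumed kilohertz-scale bar resonance

E_graviton = h * f
print(f"one graviton at 1 kHz carries {E_graviton:.1e} J")

# To see a single quantum, thermal excitations must be frozen out:
# k_B * T must sit well below h * f.
print(f"needs T << {E_graviton / k_B:.1e} K")   # around 5e-8 K
```

The second number shows why the bars must be cooled so aggressively, a point the researchers return to below.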

“Still hard, but not as hard as we thought”

While the idea of using resonant bars to detect gravitational waves dates back to the 1960s, the possibility of using it to detect quantum transitions is new. “We realized if you change perspectives and instead of measuring change in position, you measure change in energy in the quantum state, you can learn more,” Pikovski says.

A key driver of this perspective shift is the Laser Interferometer Gravitational-wave Observatory, or LIGO, which detects gravitational waves by measuring tiny deviations in the length of the interferometer’s arms as the waves pass through them. Thanks to LIGO, Pikovski says, “We not only know when gravitational waves are detected but also [their] properties such as frequency.”

Aerial photo of the Hanford detector site of LIGO, showing a building in the centre of the image and two long interferometer arms stretching into the distance of a desert-like landscape

In their study, Pikovski and colleagues used LIGO’s repository of gravitational-wave data to narrow down the frequency and energy range of typical gravitational waves. This allowed them to calculate the type of resonant bar required to detect gravitons. LIGO could also help them cross-correlate any signals they detect.

“When these three ingredients – resonant bar as a macroscopic quantum detector, detecting quantum transitions using quantum sensors and cross-correlating detection with LIGO – are taken altogether, it turns out detecting a graviton is still hard but not as hard as we thought,” Pikovski says.

Within reach, theoretically

For most known gravitational wave events, the Stockholm–Stevens scientists say that the number of gravitons their proposed device could detect is small. However, for neutron star–neutron star collisions, a quantum transition in reasonably sized resonant bars could be detected for one in every three collisions, they say.

Carlo Rovelli, a theorist at the University of Aix-Marseille, France, who was not involved in the study, agrees that “the goal of quantum gravity observations seems within reach”. He adds that the work “shows that the arguments claiming that it should be impossible to find evidence for single-graviton exchange were wrong”.

Frank Wilczek, a theorist at the Massachusetts Institute of Technology (MIT), US who was also not involved in the study, is similarly positive. For a consistent theory that respects quantum mechanics and general relativity, he says, “it can be interpreted that this experiment would prove the existence of gravitons and that the gravitational field is quantized”.

So when are we going to start detecting?

On paper, the experiment shows promise. But actually building a massive graviton detector with measurable quantum transitions will be anything but easy.

Part of the reason for this is that a typical gravitational wave shower can contain a staggering number of gravitons. Just as the pattern of individual raindrops can be heard as they fall on a tin roof, carefully prepared resonant bars should, in principle, be able to detect individual incoming gravitons within these gravitational wave showers.

But for this to happen, the bars must be protected from noise and cooled down to their least energetic state. Otherwise, such tiny energy changes may be impossible to observe.

Vivishek Sudhir, an expert in quantum measurements at MIT who was not part of the research team, describes it as “an enormous practical challenge still, one that we do not currently have the technology for”.

Similarly, quantum sensing has been achieved in resonators, but only at much smaller masses than the tens of kilograms or more required to detect gravitons. The team is, however, working on a potential solution: Tobar, a PhD student at Stockholm and the study’s lead author, is devising a version of the experiment that would send the signal from the bars to smaller masses using transducers – in effect, meeting the quantum sensing challenge in the middle. “It’s not something you can do today, but I would guess we can achieve it within a decade or two,” Pikovski says.

Sudhir agrees that quantum measurements and experiments are rapidly progressing. “Keep in mind that only 15 years ago, nobody imagined that tangibly macroscopic systems would even be prepared in quantum states,” he says. “Now, we can do that.”

Passing the torch: The “QuanTour” light source marks the International Year of Quantum

Earlier this year, the start of the Paris Olympics was marked by the ceremonial relay of the Olympic torch. You’ll have to wait until 2028 for the next Olympics, but in the meantime there’s the International Year of Quantum (IYQ) in 2025, which also features a torch relay. In keeping with the quantum theme, however, this light source is very, very small.

The light source is currently on tour around 12 different quantum labs around Europe as part of IYQ and last week I visited the Cavendish Laboratory at the University of Cambridge, UK, where it was on stop eight of what’s dubbed QuanTour. It’s a project of the German Physical Society (DPG), organised by Doris Reiter from the Technical University of Dortmund and Tobias Heindel from the Technical University of Berlin.

According to Mete Atatüre, who leads the Quantum Optical Materials and Systems (QOMS) group at Cambridge and in whose lab QuanTour is based, one of the project’s aims is to demystify quantum science. “I think what we need to do, especially in the year of quantum, is to have a change of style,” he says, “so that we focus not on the weirdness of quantum but on what it can actually bring us.”

Indeed, though it requires complex optical apparatus and must be cooled with helium, the QuanTour light source itself looks like an ordinary computer chip. It is in fact an array of quantum dots, each emitting single photons when illuminated by a laser. “It’s really meant to show off that you can use quantum dots as a plug-in light source,” explains Christian Schimpf, a postdoc in the Quantum Engineering Group in Cambridge, who showed me around the lab where QuanTour is spending its time in England.

The light source is right at home in the Cambridge lab, where quantum dots are a key area of research. The team is working on networking applications, where the goal is to transmit quantum information over long distances, preferably using existing fibre-optic networks. In fibre optics, the signal is amplified regularly along the route, but quantum networks can’t do this – the so-called “no-cloning” theorem means it’s impossible to create a copy of an unknown quantum state.

The solution is to create a long-distance communication link from many short-distance entanglements. The challenge for scientists in the Cambridge lab, Schimpf explains, is to build ensembles of entangled qubits that can “store quantum bits on reasonable time scales.” He’s talking about just a few milliseconds, but this is still a significant challenge, requiring cooling close to absolute zero and precise control over the fabrication process.
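
The stitching step is known as entanglement swapping, and its core can be written down in a few lines of linear algebra. The sketch below is a generic textbook construction, not the Cambridge group’s protocol: qubit A is entangled with B1, B2 with C, and a Bell measurement on the middle pair leaves A and C entangled even though they never interacted.

```python
import numpy as np

# Entanglement swapping, the building block of a quantum repeater.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)

# Four-qubit state with index order (A, B1, B2, C)
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Project the middle pair onto |Phi+>, one of the four Bell outcomes:
# amp[a, c] = sum_{b1, b2} <Phi+|b1 b2> * psi[a, b1, b2, c]
phi = bell.reshape(2, 2)
amp_ac = np.einsum('xy,axyc->ac', phi.conj(), psi)

p = np.sum(np.abs(amp_ac) ** 2)              # probability of this outcome
print(f"outcome probability: {p:.2f}")       # 0.25
print("A-C state:", np.round(amp_ac.reshape(4) / np.sqrt(p), 3))
# -> [0.707, 0, 0, 0.707]: A and C now share |Phi+>
```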

Elsewhere in the Cavendish Laboratory, scientists in the quantum group are investigating platforms for quantum sensing, where changes to single quantum states are used to measure tiny magnetic fields. Attractive materials for this include diamond and some 2D materials, where quantum spin states trapped at crystal defects can act as qubits. Earlier this year Physics World spoke to Hannah Stern, a former postdoc in Atatüre’s group, who won an award from the Institute of Physics for her research on quantum sensing with hexagonal boron nitride, which she began in Cambridge.

I also spoke to Dorian Gangloff, head of the quantum engineering group, who described his recent work on nonlinear quantum optics. Nonlinear optical effects are generally only observed with high-power light sources such as lasers, but Gangloff’s team is trying to engineer these effects in single photons. Nonlinear quantum optics could be used to shift the frequency of a single photon or even split it into an entangled pair.

When asked about the existing challenges of rolling out quantum technologies, Atatüre points out that when quantum mechanics was first conceived, the belief was: “Of course we’ll never be able to see this effect, but if we did, what would the experimental result look like?” Thanks to decades of work, however, it is indeed possible to see quantum science in action, as I did in Cambridge. Atatüre is confident that researchers will be able to take the next step – building useful technologies with quantum phenomena.

At the end of this week, QuanTour’s time in Cambridge will be up. If you missed it, you’ll have to head to University College Cork in Ireland, where it will be spending the next leg of its journey with the group of Emanuele Pelucchi.

 

Data-intensive PhDs at LIV.INNO prepare students for careers outside of academia

LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science, offers students fully funded PhD studentships across a broad range of research projects, from medical physics to quantum computing. All students receive training in high-performance computing, data analysis, and machine learning and artificial intelligence. Students also receive career advice and training in project management, entrepreneurship and communication skills – preparing them for careers outside of academia.

This podcast features the accelerator physicist Carsten Welsch, who is head of the Accelerator Science Cluster at the University of Liverpool and director of LIV.INNO, and the computational astrophysicist Andreea Font, who is a deputy director of LIV.INNO.

They chat with Physics World’s Katherine Skipper about how LIV.INNO provides its students with a wide range of skills and experiences – including a six-month industrial placement.

This podcast is sponsored by LIV.INNO, the Liverpool Centre for Doctoral Training for Innovation in Data-Intensive Science.

Operando NMR methods for redox flow batteries and ammonia synthesis

Magnetic resonance methods, including nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR), are non-invasive, atom-specific, quantitative, and capable of probing liquid and solid-state samples. These features make them ideal tools for operando measurements of electrochemical devices, and for establishing structure–function relationships under realistic conditions.

The first part of the talk presents how coupled inline NMR and EPR methods were developed and applied to unravel rich electrochemistry in organic molecule-based redox flow batteries (RFBs). Case studies performed on low-cost and compact bench-top systems are reviewed, demonstrating that a bench-top NMR has sufficient spectral and temporal resolution for studying degradation reaction mechanisms, monitoring the state of charge, and tracking crossover phenomena in a working RFB. The second part of the talk presents new in situ NMR methods for studying Li-mediated ammonia synthesis, and the direct observation of lithium plating and its concurrent corrosion, nitrogen splitting on lithium metal, and protonolysis of lithium nitride. Based on these insights, potential strategies to optimize the efficiencies and rates of Li-mediated ammonia synthesis are discussed. The goal is to demonstrate that operando NMR and EPR methods are powerful and general, and can be applied to understanding the electrochemistry underpinning various applications.

An interactive Q&A session follows the presentation.

Evan Wenbo Zhao is a tenured assistant professor at the Magnetic Resonance Research Center at Radboud Universiteit Nijmegen in the Netherlands. His core research focuses on developing operando/in situ NMR methods for studying electrochemical storage and conversion chemistries, including redox flow batteries, electrochemical ammonia synthesis, carbon-dioxide reduction, and lignin oxidation. He has led projects funded by the Dutch Research Council Open Competition Program, Bruker Collaboration, Radboud-Glasgow Collaboration Grants, the Mitacs Globalink Research Award, and others. After receiving his BS from Nanyang Technological University, he completed a PhD in chemistry with Prof. Clifford Russell Bowers at the University of Florida. Evan’s postdoc was with Prof. Dame Clare Grey at the Yusuf Hamied Department of Chemistry at the University of Cambridge.

 
