Glashow resonance is spotted in a neutrino detector at long last

Physicists working on the IceCube Neutrino Observatory in Antarctica say they have made the first observation of the Glashow resonance – a process first predicted more than 60 years ago. If confirmed, the observation would lend further support to the Standard Model of particle physics and help astrophysicists understand how astrophysical neutrinos are produced.

In 1959 the theoretical physicist and future Nobel laureate Sheldon Glashow worked out that an electron and an antineutrino could interact via the weak interaction to produce a W boson. Subsequent calculations indicated that this coupling – known as the Glashow resonance – should occur at antineutrino energies of around 6.3 PeV (6.3 × 10^15 eV). This is well beyond the energies achievable in current or planned particle accelerators, but natural astrophysical phenomena are expected to produce such antineutrinos, which could then create a W boson by colliding with an electron here on Earth.
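
The 6.3 PeV figure follows from simple relativistic kinematics: for an antineutrino striking an electron at rest, the centre-of-mass energy reaches the W-boson mass when the antineutrino energy is m_W²/2m_e. A quick back-of-the-envelope check (using approximate particle masses) reproduces the number:

```python
# Back-of-the-envelope check of the Glashow resonance energy: for an
# antineutrino hitting an electron at rest, the centre-of-mass energy matches
# the W-boson mass when E_nu = m_W**2 / (2 * m_e).

m_W = 80.4e9   # W boson mass in eV/c^2 (approximate)
m_e = 0.511e6  # electron mass in eV/c^2

E_resonance = m_W**2 / (2 * m_e)           # antineutrino energy in eV
print(f"Resonance energy ≈ {E_resonance / 1e15:.1f} PeV")   # ≈ 6.3 PeV
```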

IceCube is well placed to detect such an event because it comprises 86 strings of detectors suspended in holes bored into the Antarctic ice cap. When a neutrino occasionally interacts with the ice, tiny flashes of light are seen by the detectors. However, most interactions are equally probable for neutrinos and antineutrinos, and many neutrino detectors cannot tell which triggered the interaction.

Lepton number conservation

Conservation of lepton number and electric charge makes the Glashow resonance special: only an electron antineutrino striking an electron can produce a W boson in this way. “At 6.3 PeV, this peak is only possible if you have an antineutrino interacting with an electron or, in an alternative symmetric world, a neutrino interacting with an antielectron,” explains Lu Lu of the University of Wisconsin-Madison, a senior member of the IceCube team. As a result, measuring the proportion of Glashow resonance events relative to the total number of neutrinos detected from an astronomical event could constrain the ratio of neutrinos to antineutrinos. This in turn could suggest how the neutrinos were produced.

Although astrophysical events are expected to produce neutrinos with very high energies, the number of neutrinos drops off following a power law in energy. Catching neutrinos requires a large detector in any case, so catching very high energy neutrinos requires an immense one. IceCube therefore uses a cubic kilometre of ice at the South Pole as its detection medium.
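
To see why an immense detector is needed, consider a toy power-law spectrum (the spectral index below is an assumption for illustration, not IceCube’s measurement): the event rate above an energy threshold plummets as the threshold rises, so only an enormous instrumented volume catches a usable number of PeV events.

```python
# Rough illustration of the power-law fall-off (the spectral index here is a
# placeholder, not IceCube's measured value): for a flux dN/dE ∝ E^-gamma,
# the number of events above a threshold E_min scales as E_min^(1 - gamma).

gamma = 2.5   # assumed spectral index

def relative_rate(E_min_TeV):
    """Event rate above E_min, relative to the rate above 1 TeV."""
    return E_min_TeV ** (1.0 - gamma)

for E_min in [1, 100, 6300]:          # 1 TeV, 100 TeV, ~6.3 PeV
    print(f"above {E_min:>5} TeV: {relative_rate(E_min):.1e} of the 1 TeV rate")
```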

To search for Glashow resonance collisions in IceCube, the team analysed data taken by the detector between May 2012 and May 2017 using a machine learning algorithm. One event that occurred on 8 December 2016 stood out. The researchers estimate the detectable energy from the event to be 6.05 PeV, which – when losses through undetectable channels are factored in – is consistent with an antineutrino energy of about 6.3 PeV.

Highest energy deposition

The signature from the “early pulses”, caused by particles that outrun the light waves in the ice, helped the researchers rule out other possible explanations, such as a cosmic ray muon, and conclude that the event was indeed caused by an astrophysical neutrino. “This is definitely the highest energy deposition event IceCube has ever recorded from a neutrino,” says Lu. Based on the extremely low background levels expected at this energy, the researchers concluded it was at least 99% likely to be a Glashow event.

The researchers are now planning an even bigger detector, called IceCube Gen-2. As well as detecting neutrinos predicted to have even higher energies, the researchers hope this would allow them to detect a statistically significant number of Glashow events, confirming the findings and allowing the phenomenon to be used in astronomy.

Lu is particularly excited by the potential to understand how particles are accelerated by astrophysical processes. “For cosmic rays, it’s too difficult because they deflect everywhere,” she says. “High-energy photons interact with the cosmic microwave background; if you don’t have gravitational waves, the only other messenger is the neutrino, and the neutrino-to-antineutrino ratio brings a completely new axis to this game.”

Neutrino physicist David Wark, the UK principal investigator of the Super-Kamiokande detector, is impressed. “People have been trying for 50 years to detect these high-energy astrophysical neutrinos, and so it is astounding that IceCube has finally done it. Just a few years ago they saw the first gold-plated astrophysical neutrinos and now one at a time they’re knocking off all the things we expect to see at these very high energies.” Uncertainties – arising partly because the collision energy cannot be calibrated without extrapolation – make him want to see at least one more event to be sure, but he says that the odds of a single detection appearing exactly where theory predicts the Glashow resonance are “not large”.

The research is described in Nature.

Iceberg melting is driven by geometry, experiments reveal

New experiments with ice blocks have revealed that icebergs melt faster on their sides. The discovery paves the way for better models of melting that consider the varied shapes of icebergs. The research could also improve our understanding of the role of iceberg melting in climate change.

Icebergs are a major source of freshwater flowing into some parts of the oceans; the Greenland ice sheet alone releases about 550 Gt of icebergs per year. This affects water salinity, which in turn affects water circulation and the global climate. The production of icebergs by both the Greenland and Antarctic ice sheets has been increasing significantly because of climate change, so understanding the melting rate of icebergs is crucial for predicting changes in the oceanic heat flux.

Icebergs come in various shapes and sizes, with the largest recorded iceberg spanning almost 300 km in length and 40 km in width. They melt due to solar radiation, subsurface interaction with seawater and breaking into smaller pieces. Models used to predict the melting of icebergs were initially developed in the 1970s to study the possibility of towing icebergs to arid regions to provide an economical source of freshwater. These early models ignored the shape of icebergs and assumed constant water movement – and both assumptions have been carried through subsequent research.

Irregular melting

Icebergs are classified by an “aspect ratio” – the ratio of their length to submerged depth. New research by scientists in Australia, New Zealand, the US and France, led by Eric Hester, a PhD student in applied mathematics at the University of Sydney, has shown that the melting rate strongly depends on an iceberg’s geometry.

In their small-scale laboratory experiments, the team submerged large rectangular ice blocks containing blue dye in a tank of circulating salt water and left them to melt for 10 minutes. The ice blocks were then weighed and photographed to assess the melting of each side.

“We put dye in the ice to observe where the melting occurs. We observed that at the front, the ice started to slope and melted three times faster. The bottom started to melt preferentially in the middle,” Hester tells Physics World.

Varying water velocity

To calculate melting rates, the team held the depth fixed at 3 cm and varied the water velocity. At the highest water flow of 3.5 cm/s, the sides melted roughly twice as fast as the base. For an iceberg moving through the ocean, the front face could therefore melt up to three times faster than older models predict.

For real icebergs of varying size and geometry, the aspect ratio affects the overall melt rate by changing the areas exposed to the water: wide icebergs melt relatively slowly, whereas smaller, narrower icebergs – which have proportionally more surface area on their sides – could melt up to 50% faster.
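
A toy geometric calculation (my own illustration, not the authors’ parameterization) shows why the aspect ratio matters: as an iceberg gets relatively wider, the fast-melting sides make up a smaller fraction of its submerged surface.

```python
# Toy geometry sketch (not the authors' model): for a block with a square
# footprint of side L and submerged depth D, compare the submerged side area
# with the basal area as the aspect ratio L/D changes.  Narrow blocks expose
# proportionally more of the fast-melting side faces.

def area_fractions(L, D):
    """Return (side fraction, base fraction) for an L x L x D submerged block."""
    base = L * L
    sides = 4 * L * D
    total = base + sides
    return sides / total, base / total

for aspect in [10.0, 4.0, 1.0]:          # aspect ratio = L / D
    side_frac, base_frac = area_fractions(aspect, 1.0)
    print(f"L/D = {aspect:>4.1f}: sides {side_frac:.0%}, base {base_frac:.0%}")
```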

Following their experimental findings, the team compared their results against numerical simulations assessing the flow dynamics.

“We developed a mathematical model to simulate the flow of warm salt water around the melting iceberg,” explains Hester.

Varying parameters such as temperature, salinity and the strength of the flow, the team observed the same melting behaviour as in the experiment: the front of the ice block melted fastest, while the base melted faster towards its middle.

Difficult to observe in the field

According to Ellyn Enderlin of the Geoscience Department at Boise State University, who was not involved in this study, “Iceberg melting is something that cannot really be observed in the field – at best we can use sonar to map iceberg geometries at discrete time steps and back-out melting from the change in geometry – so these experiments tell us a lot about spatial variations in iceberg melting and how both water shear and iceberg geometry influence iceberg melt rates”.

“The revised parameterizations that [Hester and colleagues] present could be implemented into numerical models that include iceberg melting as a freshwater flux, allowing us to more accurately account for this important source of freshwater and assess the influence of variations in iceberg melt fluxes on fjord water masses. Given the strong dependence of melt rates on iceberg geometry and the changes in iceberg geometry that often accompany rapid glacier change, it is important that the influence of geometry is accounted for in the model parameterizations,” says Enderlin.

Apart from predicting climate change impacts of melting icebergs, these results can be extended to modelling melting glaciers. In addition, improved ice melting models could be used to understand the dynamics of extraterrestrial ice sheets, such as those on Saturn’s moon Enceladus or Jupiter’s moon Europa.

The research is described in Physical Review Fluids.

High-energy electron beams target tumours more precisely

An international research team has developed a pioneering radiotherapy technique that uses very high-energy electron (VHEE) beams to target tumours precisely. Proposed as an alternative to X-ray photons for radiation therapy, VHEE beams penetrate deep into tissue, but can also overexpose healthy tissue.

In this study, published in Nature Communications Physics, the researchers demonstrate how they overcame this problem by using a large aperture electromagnetic lens to focus a VHEE beam to an extremely small spot in tissue, concentrating the radiation dose into a small volume.

As study leader Dino Jaroszynski, director of the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA) at the University of Strathclyde, explains, this volumetric element can then be rapidly scanned over a tumour to “paint” all areas of it, while minimizing the dose to healthy tissue.

“Taking the analogy of focusing sunlight using a hand lens, you will experience a serious burn if you place your hand at its focus, but only feel mild warmth away from the focus,” he says.
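
The lens analogy can be made quantitative with a toy geometric sketch (my own illustration; all numbers are assumptions, and scattering and energy loss are ignored): the fluence of a converging beam varies inversely with its cross-sectional area, so it spikes at the focal depth and stays low everywhere else.

```python
import numpy as np

# Toy illustration of the "hand lens" analogy (not the study's dose model):
# a beam converging to a focus at depth z_f has a radius r(z) that shrinks
# linearly towards a minimum spot size, so the relative fluence ~ 1 / area
# peaks sharply at the focus.  All numbers are illustrative assumptions.

z_f = 10.0      # focal depth in cm (assumed)
r0 = 2.0        # beam radius at the surface in cm (assumed)
r_min = 0.1     # minimum spot radius at the focus in cm (assumed)

def fluence(z):
    r = np.maximum(r0 * abs(1 - z / z_f), r_min)   # converging-beam radius
    return 1.0 / (np.pi * r**2)                    # relative fluence

for z in [0, 5, 9, 10, 11, 15]:
    print(f"depth {z:>4.1f} cm: relative fluence {fluence(z)/fluence(0):8.1f}x surface")
```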

“Electron beams damage cancer cells in a very similar way to that of high-energy photons, which is the most common radiotherapy modality in the clinic,” Jaroszynski explains. “This is because high-energy electrons convert a large fraction of their kinetic energy to X-ray photons by bremsstrahlung, which triggers a cascade of electrons, positrons and photons, and also ionizes matter by inelastic collisions. These are what ultimately damage cancer cells, by directly ionizing the DNA or indirectly creating radicals that damage their DNA.”

Precise targeting

In experiments performed at CERN’s Linear Electron Accelerator for Research (CLEAR) facility, the team measured depth–dose profiles of 158 and 201 MeV electron beams focused into a water phantom. They demonstrated that the focused VHEE beams concentrated dose into a well-defined volume several centimetres deep into the phantom.

According to Enrico Brunetti, a research fellow at the University of Strathclyde, one key advantage of VHEE beams is that they can be produced at much lower cost than proton or ion beams. In addition, VHEE beams can reach deep-seated tumours with reduced susceptibility to tissue inhomogeneities.

“The range of a proton or ion beam depends on the integrated matter through which it passes. If there are voids, gas pockets or variable density material, the range will be uncertain. However, because VHEE beams are light-like, they will always deposit dose at the same spot, so are insensitive to the morphology of the tissue in their path,” Brunetti explains.

“The focusing method that we have developed enables the precise targeting of deep-seated tumours using VHEEs by concentrating dose into a small volume. This dose can be controlled, which may give an advantage in treating radiation-resistant tumours,” he adds.

According to Brunetti, the ability to deposit dose into a small spot can also benefit ultra-precision therapy, where there is a need to irradiate very small regions.  In addition, the very low entrance dose and low distal and proximal doses of focused VHEE beams may help to minimize damage to healthy tissue and sensitive organs.

Next steps

Despite the clear advantages of the new technique, Brunetti is keen to stress that its use in clinical settings is some way off – and much work remains to be done on developing the laser and accelerator technology to make it suitable for clinical application.

“Our advice is to be patient and undertake further in vitro and in vivo studies, and eventual clinical trials. We have undertaken cell survival studies using VHEE beams and found them to be similar to X-ray photons. Interest in VHEE as a new modality is growing rapidly,” he says.

“If it can be demonstrated that VHEE beams, focused or not, give a clinical advantage, then research and development will accelerate. There would be a strong incentive to develop them to the point where they can be applied clinically because lives would be saved, and saved lives would translate into economic benefit,” he adds.

Looking ahead, the next steps for the researchers are to implement symmetric focusing and investigate the use of larger bore focusing magnets to obtain even smaller focal spots, as well as to explore the use of a permanent magnet to focus the beam. They will also work alongside colleagues in Lausanne to investigate the use of VHEE beams to deliver FLASH radiotherapy (a cutting-edge technique that delivers high-dose-rate radiation in sub-millisecond time-scales to target tumours while sparing healthy tissue) in deep-seated tumours.

The team also plans to carry out in vitro and in vivo studies using facilities at Strathclyde, which include an EPSRC-funded research-focused medical beamline at SCAPA.

“Finally, we plan to investigate a new way of using VHEE beams to simultaneously apply radiotherapy and image the region around the tumour – this will give real-time imaging capability [and] is a very novel idea,” adds Jaroszynski.

Arecibo Observatory: a scientific giant that fell to Earth

1 December 2020 was a dark day for Puerto Rico and the global astronomy community. The iconic Arecibo Observatory collapsed, with the radio telescope’s 900-tonne suspended platform crashing into the 305 m dish below. Warning signs had been there in the preceding months, but that did little to soften the shock felt by the astronomy community.

In this episode of the Physics World Stories podcast, Andrew Glester speaks with astronomers about the impact of this dramatic event. Abel Méndez, a planetary astrobiologist at the University of Puerto Rico, explains why the observatory was a beacon for Puerto Rican scientists and engineers. Mourning continues but Méndez and colleagues have already submitted a white paper to the National Science Foundation with plans for a new telescope array on the same site.

Constructed in the 1960s with US funding, Arecibo was originally used for military purposes. Its powerful radar was bounced off the ionosphere to better understand the nature of the Earth’s upper atmosphere and to look for signs of incoming Soviet missiles. Seth Shostak, senior astronomer at the SETI Institute, talks to Glester about Arecibo’s origins and how scientists soon saw the potential for bouncing Arecibo’s radar off astronomical objects such as asteroids.

Arecibo was the world’s largest radio dish until it was surpassed in 2016 by China’s FAST telescope. Arecibo’s size and tropical setting captured the public imagination and the observatory appeared in the films GoldenEye and Contact – the adaptation of the Carl Sagan novel. Contact’s protagonist, Ellie Arroway (played by Jodie Foster), is partly based on SETI scientist Jill Tarter, who joins the podcast to recount her experiences of advising Foster on the role.

Three-node quantum network makes its debut

Scientists at the Delft University of Technology in the Netherlands have taken an important step towards a quantum Internet by connecting three qubits (nodes) in two different labs into a quantum network. Such quantum networks could be used for secure communication, for safer means of identification or even distributed quantum computing.

The group, led by Ronald Hanson, is no stranger to setting up quantum links. In 2015 members of the group performed the first loophole-free Bell inequality violation, successfully entangling two electron spin states over 1.3 kilometres in an experiment that finally put the lid on the 80-year-old Einstein-Podolsky-Rosen dispute about the nature of entanglement. Although this two-node experiment could hardly be called a network, it laid the basis for the present work, which is described in a preprint on the arXiv repository.

From Alice to Charlie

The new network contains three communication quantum bits (qubits) in the form of nitrogen vacancy (NV) centres in diamond. These qubits are known, in traditional fashion, as Alice, Bob and Charlie, and the network that connects them has several unique aspects. Bob, for example, is associated with a separate memory qubit consisting of a nuclear spin in a carbon-13 atom. Another key aspect is that the system can signal when entanglement is established by detecting the photon that was used to create the entanglement.

The quantum network operates by first entangling the communication qubits of Alice and Bob. At Bob’s node, the entangled state is then swapped onto Bob’s memory qubit, leaving Bob’s communication qubit ready for further action. The next step is to set up a remote entanglement between the communication qubits of Bob and Charlie. This creates two links, each with a shared entangled state: one state shared between Alice and the memory qubit of Bob and the other shared between the communication qubits of Bob and Charlie. Bob, the central node, then performs an operation known as a Bell state measurement on its two qubits. This measurement teleports the state stored on Bob’s memory qubit to Charlie, which in turn establishes direct entanglement between Alice and Charlie.
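
The step at Bob’s node is an example of entanglement swapping, which can be illustrated with a minimal state-vector sketch (my own toy model, not the Delft control software): whichever Bell state Bob finds, Alice and Charlie are left sharing a corresponding Bell state, even though their qubits never interacted directly.

```python
import numpy as np

# Minimal sketch of entanglement swapping.  Alice-Bob and Bob-Charlie each
# share a Bell pair; a Bell-state measurement on Bob's two qubits leaves
# Alice and Charlie entangled.

# Qubit order: Alice, Bob1, Bob2, Charlie
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)   # axes: A, B1, B2, C

# The four Bell states Bob can project his two qubits onto
bell_basis = {
    "Phi+": np.array([[1, 0], [0, 1]]) / np.sqrt(2),
    "Phi-": np.array([[1, 0], [0, -1]]) / np.sqrt(2),
    "Psi+": np.array([[0, 1], [1, 0]]) / np.sqrt(2),
    "Psi-": np.array([[0, 1], [-1, 0]]) / np.sqrt(2),
}

for name, bell in bell_basis.items():
    # Project Bob's qubits (axes 1 and 2) onto this Bell state
    psi_AC = np.einsum("bc,abcd->ad", bell.conj(), state)
    prob = np.sum(np.abs(psi_AC) ** 2)
    psi_AC = psi_AC / np.sqrt(prob)
    print(f"Bob measures {name} (p={prob:.2f}); Alice-Charlie state:\n{psi_AC.round(3)}")
```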

Stabilizing the network

Expanding a quantum network from two nodes to three – and, in time, from three to many – is not as simple as just adding more links. The task is complicated by the fact that noise (which can destroy quantum information) and optical power levels vary greatly across the network.

The Delft team addresses this problem using a twofold stabilization scheme. The first, local element of the scheme focuses on stabilizing the interferometers used to generate entanglement between the communication qubit and the “flying” qubit at each node. The team does this by measuring the phase of the light that reflects off the diamond’s surface during the entanglement process and boosting this faint phase signal with a stronger laser beam. Polarization selection ensures that the reflected light does not reach the detectors that register entanglement, which would create false entanglement signals. The phase of the boosted light is measured with additional detectors, and the interferometers are stabilized by feeding the measured phase signal back to the piezo-controllers that position their mirrors.

The second, global part of the stabilization involves directing a portion of the laser light towards a separate interferometer used to generate entanglement between nodes. The interference is measured and the signal coupled to a fibre stretcher in one arm of the interferometer. By stretching the fibre, the phase of light in that arm can be controlled and the interferometer can be stabilized. This local and global stabilization scheme can be scaled to an arbitrary number of nodes, making it possible to expand the network.
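
Both layers of the scheme boil down to measure-and-correct feedback. The sketch below (an illustrative proportional controller, not the Delft implementation) shows the basic idea: a slowly drifting phase is repeatedly measured and a correction is fed back to the actuator, so the locked phase stays near the setpoint.

```python
import numpy as np

# Illustrative proportional feedback loop (my own sketch, not the Delft
# hardware): an interferometer phase drifts slowly; each cycle the error from
# the setpoint (zero) is measured and a correction is fed back to the actuator
# (piezo mirror or fibre stretcher), keeping the locked phase small.

rng = np.random.default_rng(1)
phase, correction, gain = 0.0, 0.0, 0.6   # radians; the gain is an assumption

for step in range(8):
    phase += rng.normal(0.0, 0.05)        # slow environmental drift
    locked = phase + correction           # phase seen after feedback is applied
    correction -= gain * locked           # proportional correction for next cycle
    print(f"step {step}: drift {phase:+.3f} rad, locked {locked:+.3f} rad")
```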

Expansion strategies

The researchers suggest that their network could be expanded by increasing the number of qubits at a single network node, similar to the ten-qubit register based on a diamond NV centre that a different Delft group created in 2019. They also say that the new network provides a platform for developing higher-level quantum network control layers that would make it possible to automate the network.

Jian-Wei Pan, a quantum communications expert at the University of Science and Technology of China who was not involved in the work, thinks its key achievement lies in realizing entanglement swapping between two elementary links of remotely entangled matter. “Such a process is essential in extending entanglement distances via quantum repeaters,” Pan says. However, he adds that the fidelities of the elementary entanglement, gate operations and storage – which together determine the scalability of the group’s approach – will need to be improved before a larger-scale network can be constructed.

Anders Sørensen from the Niels Bohr Institute in Denmark thinks that this is an important milestone in the quest for the quantum Internet. “This is the first time that anybody has managed to connect more than two processing nodes,” says Sørensen, who was also not part of the Delft team. “At the same time, they demonstrate some of the protocols, for example entanglement swapping, which we believe will play an important role in a full-scale quantum network.” While “formidable challenges” remain, especially when it comes to stretching the network over longer distances, Sørensen concludes that “this is a challenge that they are in a good position to tackle”.

D-Wave demonstrates performance advantage in quantum simulation

Researchers at the quantum computing firm D-Wave Systems have shown that their quantum processor can simulate the behaviour of an “untwisting” quantum magnet much faster than a classical machine. Led by D-Wave’s director of performance research Andrew King, the team used the new low-noise quantum processor to show that the quantum speed-up increases for harder simulations. The result shows that even near-term quantum simulators could have a significant advantage over classical methods for practical problems such as designing new materials.

The D-Wave simulators are specialized quantum computers known as quantum annealers. To perform a simulation, the quantum bits, or qubits, in the annealer are initialized in a classical ground state and allowed to interact and evolve under conditions programmed to mimic a particular system. The final state of the qubits is then measured to reveal the desired information.
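
As a loose illustration of the annealing idea, the sketch below runs a standard forward anneal on three spins as an exact toy simulation. It is not D-Wave’s hardware, and the study’s own protocol starts from a classical state rather than the transverse-field ground state used here; the point is only to show a Hamiltonian being ramped towards a programmed Ising problem and the system ending up in that problem’s ground-state manifold.

```python
import numpy as np
from scipy.linalg import expm

# Toy forward anneal on three spins (illustrative only):
# H(s) = -(1 - s) * sum_i X_i + s * sum_<ij> Z_i Z_j,
# with s ramped slowly from 0 to 1 so the system tracks the ground state.

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, n=3):
    """Embed a single-qubit operator on one site of an n-qubit register."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, single if i == site else I2)
    return out

n = 3
H_driver = -sum(op(X, i, n) for i in range(n))
# Antiferromagnetic ring of three spins: a deliberately frustrated problem
H_problem = sum(op(Z, i, n) @ op(Z, (i + 1) % n, n) for i in range(n))

psi = np.ones(2 ** n) / np.sqrt(2 ** n)    # ground state of the driver (all |+>)

steps, dt = 200, 0.1
for k in range(steps):
    s = (k + 1) / steps                    # anneal parameter running from 0 to 1
    H = (1 - s) * H_driver + s * H_problem
    psi = expm(-1j * H * dt) @ psi         # one small evolution step

probs = np.abs(psi) ** 2
p_excited = probs[0b000] + probs[0b111]    # the two frustrated, high-energy states
print(f"weight on the six Ising ground states: {1 - p_excited:.2f}")
print(f"weight left on the two excited states: {p_excited:.2f}")
```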

King explains that the quantum magnet they simulated experiences both quantum fluctuations (which lead to entanglement and tunnelling) and thermal fluctuations. These competing effects create exotic topological phase transitions in materials, which were the subject of the 2016 Nobel Prize in Physics.

The researchers used up to 1440 qubits to simulate their quantum magnet. In a study published in Nature Communications, they report that the quantum simulations were over three million times faster than the corresponding classical simulations based on quantum Monte Carlo algorithms.

Importantly, the experiment also showed that the speed of quantum simulations scaled better with the difficulty of the problem than the classical ones did. The quantum speed-up over classical methods was greater when the researchers simulated colder systems with larger quantum effects. The speed-up also increased when they simulated larger systems. Hence the quantum speed-ups are greatest for the hardest simulations, which can take classical algorithms extremely long times.

Advantage with a twist

The D-Wave team performed a similar quantum magnet simulation in 2018, but it was too fast to take accurate measurements of the system’s dynamics. To slow down the simulation, the researchers added a so-called topological obstruction to the quantum magnet – essentially, a “twist” in the magnet that takes time to unravel. Together with a new low-noise quantum processor, this addition enabled them to accurately measure the system’s dynamics.

“Topological obstructions can trap classical simulations that use quantum Monte Carlo algorithms, while a quantum annealer can circumvent the obstructions via tunnelling,” explains Daniel Lidar, who directs the Center for Quantum Information Science and Technology at the University of Southern California, US, and was not involved with the research. “This work has demonstrated a speed-up arising from this phenomenon, which is the first such demonstration of its kind. The result is very interesting and shows that quantum annealing is promising as a quantum simulation tool.”

In contrast with previous simulations comparing quantum and classical algorithms, King’s experiment directly relates to a useful problem. Quantum magnets are already being investigated for their potential applications in creating new materials. Quantum speed-ups could rapidly accelerate this research; however, the D-Wave team does not rule out the possibility of developing faster classical algorithms than those currently used.  The team ultimately sees the most promising upcoming applications of quantum simulations to be a hybrid of quantum and classical methods. “This is where we expect to find near-term value for customers,” King says.

Direct in-muscle bioprinting can treat massive trauma injuries

What prize do you get for not moving a single muscle all week? A trophy!

Alternatively, patients who have undergone volumetric muscle loss injuries may be interested in a novel technology recently reported in Advanced Healthcare Materials. The authors of the article – from the University of Nebraska, the University of Connecticut and Brigham and Women’s Hospital – developed a handheld printer to deliver hydrogel-based bioinks for treatment of such injuries.

When a large proportion of a muscle’s mass is lost, due to trauma, disease or surgery, the muscle repairs itself in a way that leaves behind disabling scar tissue, causing some degree of lost muscle function. Numerous approaches to assist the body’s natural regenerative processes have been developed, but all have limited efficacy and utility. For example, treatments such as direct cell delivery are constrained by the complex process of differentiating and harvesting myogenic cells, the cells that develop into skeletal muscle. In settings where a direct and rapid response to volumetric muscle injuries is required – specifically in military trauma care – such approaches are impractical.

This latest study demonstrates a feasible approach for treating such injuries. The researchers developed a novel “Muscle Ink” for printing directly into large muscle-loss wounds. Within this specialized ink, vascular endothelial growth factor – key to inducing the angiogenesis and vascularization needed for muscular regeneration – is attached to 2D nanoclay discs, allowing for its continuous release over multiple weeks. The team incorporated these growth factor-bound discs into a biocompatible hydrogel that can adhere to wet tissue, such as the remaining musculature. The resulting scaffold has mechanical properties conducive to cellular regeneration and natural deformation during movement.

The gel is applied by loading it into a novel handheld printer, composed of a loadable syringe, a motor-controlled syringe pump and an ultraviolet light-emitting diode for in vivo hydrogel crosslinking. Clinicians can use the device, much like a hot glue gun, to print directly into wounds, creating a scaffold for cell regeneration and an environment that encourages it.

Handheld printer

The researchers evaluated the efficacy of the printing process in murine models. The animals received volumetric muscle-loss injuries to the quadriceps. Some were left untreated, while others were treated with the Muscle Ink, with or without growth factor. After an eight-week recovery, the animals were tested for running function.

The mice treated with growth factor-loaded Muscle Ink had a maximum running speed that was not significantly different from that of uninjured mice. This group was also able to run approximately twice as far as the untreated group or those treated with Muscle Ink without growth factor. This test supports the premise that slow release of growth factor by the ink was responsible for the improvement in functional performance after volumetric muscle loss injury.

The research team envisions that, in addition to the tested skeletal muscle, this printing technique could be applied for treatment of other soft-tissue wounds. The handheld printer helps broaden the potential of in vivo delivery of hydrogel scaffolds for tissue therapy.

Women scientists join forces at online quantum summit

It’s questionable whether the romanticized archetype of the lone scientist has ever been the reality of science – even Isaac Newton didn’t work entirely in a vacuum, but corresponded with contemporaries such as Gottfried Leibniz. And certainly in the 21st century, working together is the name of the game.

I was reminded of this principle while attending the fourth Women in Quantum Summit, which took place on 9–11 March. It highlighted for me the importance not only of gender and background diversity in science, but also of collaboration in general.

Hosted by OneQuantum – an organization that brings together quantum-technology leaders from around the world – the summit included career-focused talks from quantum companies about their work as well as sessions on various aspects of the technology and where it is today.

Together, these events showcased the wide variety of opportunities in this exciting new field and allowed attendees to meet potential employers. While these summits do not exclude men, they differ from many such conferences in that most speakers are women, fostering an inclusive and inspiring environment for many who might often feel like outsiders in this field.

The event programme included “employers pitch” sessions, in which several quantum-technology companies, from start-ups like Pasqal to household names like Google, working on both hardware and software, introduced their work and the kinds of roles they have available. Most representatives were women working at the respective companies, and they were keen to emphasize their employers’ desire to hire more – not as some kind of box-ticking exercise, but because more diverse teams tend to perform better.

Unique to this most recent summit was a career fair following the employers pitch, where attendees could talk to representatives of companies in online booths, designed to recreate the networking opportunities presented by traditional in-person conferences. Though these online alternatives can seem a little strange at first, I think we’re all starting to get the hang of them now, and at a conference focused on quantum, of all fields, it felt aptly modern.

The second day of the summit was dedicated to the theme of quantum machine learning (QML) – an emerging field that aims to use quantum technology to advance machine learning. Three women scientists who work at Zapata Computing – a start-up founded in 2017 to develop quantum software for business – gave a fascinating talk about their work on QML. A poll at the beginning of this talk found that 83% of the audience, myself included, had no background in QML, so there was a lot to learn.

Hannah Sim, a quantum scientist, described her career path to date and how she had used her background in quantum physics to improve machine learning algorithms.

The second speaker, Kaitlin Gili, a quantum applications intern at Zapata, explained how quantum noise, typically thought to be a limitation in QML, can actually be harnessed as a useful feature when addressing certain tasks. For example, noisy quantum hardware can be applied to “generative models”, which are a promising candidate for AI. These models aim to find an unknown probability distribution, given many samples to train on, and produce an output of original data that is similar to the training set. The random noise that is input together with the training data turns out to be a useful part of the system.
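
As a crude classical caricature of that idea (nothing quantum, and not Zapata’s algorithm): a generative model learns a distribution from training samples and then converts input noise into fresh samples that resemble the data, so the noise is a resource rather than a nuisance.

```python
import numpy as np

# Tiny classical caricature of a generative model (illustrative only):
# learn a distribution from training samples, then turn input noise into
# new samples that resemble the training set.

rng = np.random.default_rng(0)
training_data = rng.normal(loc=5.0, scale=2.0, size=1000)   # "unknown" distribution

# "Training": estimate the distribution's parameters from the samples
mu, sigma = training_data.mean(), training_data.std()

# "Generation": random noise is the raw material for every new sample
noise = rng.normal(size=5)
generated = mu + sigma * noise
print("new samples:", np.round(generated, 2))
```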

The third speaker, Marta Mauri, who is a quantum software engineer at Zapata, touched on the importance of curiosity-driven research, arguing in favour of exploratory studies and against the belief that all attempts to enhance machine learning with quantum should be justified.

I was left with the feeling that there are many questions yet to be answered in this area, and even more yet to be asked, with room for people from diverse backgrounds and different perspectives to contribute. Like many emerging scientific fields, QML is now so broad and deep that a single scientist – or even a handful of scientists – cannot hope to make progress in isolation.

Proton contains more anti-down quarks than anti-up

The sea of short-lived particles in the proton has a far higher abundance of anti-down quarks than anti-up quarks, new research has shown. An international team including Paul Reimer of Argonne National Laboratory in the US discovered the asymmetry by firing a beam of protons at a hydrogen (proton) target at Fermilab. The results shed new light on highly complex interactions within the proton and could lead to a better understanding of the dynamics that unfold following high-energy proton collisions.

The original quark model of the proton is pleasingly simple: two up quarks and one down quark interact with each other by exchanging gluons, which bind the particles together through the strong nuclear force. However, physicists have known for some time that the full picture is far more complex, with these three “valence” quarks alone accounting for only a fraction of the proton’s mass. Instead, the proton is now modelled as a roiling “sea” of many quarks, antiquarks and gluons that pop in and out of existence over very short time scales. This makes it very difficult both to calculate the internal properties of the proton and to study them experimentally.

In the past, physicists have glimpsed these virtual particles in scattering experiments that measure their probability distributions in relation to the fraction of the total proton momentum they carry. Several theories had suggested that the sea should contain near-identical probability distributions of anti-up and anti-down quarks, reasoning that the masses of the two are roughly the same when compared with the total mass of the proton.

Muon and anti-muon pair

Reimer and colleagues tested this idea using the SeaQuest spectrometer at Fermilab, in which protons are fired at a proton target. In some collisions, a valence quark in one proton will annihilate with a virtual antiquark of the same flavour in the other proton. This creates a virtual photon that can transform into a muon and antimuon pair, which can then be detected.
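
The detection step works because the muon pair carries the virtual photon’s full four-momentum: reconstructing the pair’s invariant mass recovers the mass of the virtual photon and hence the kinematics of the annihilating quark and antiquark. Here is a minimal illustration with made-up four-momenta (not SeaQuest analysis code):

```python
import math

# Illustrative reconstruction (hypothetical numbers, not SeaQuest data): the
# virtual photon's mass is recovered as the invariant mass of the detected
# muon-antimuon pair, m^2 = (E1 + E2)^2 - |p1 + p2|^2 (natural units, c = 1).

def invariant_mass(p1, p2):
    """Each argument is a four-momentum (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Hypothetical, approximate muon and antimuon four-momenta from one event (GeV)
mu_minus = (30.0, 1.2, -0.8, 29.96)
mu_plus  = (25.0, -0.5, 1.1, 24.97)
print(f"dimuon invariant mass ≈ {invariant_mass(mu_minus, mu_plus):.2f} GeV")
```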

Contrary to previous predictions, the researchers found that over a wide range of collision momenta, protons contain a far higher abundance of anti-down quarks than anti-up. The team hopes that these results will generate new interest in several proposed mechanisms for antimatter asymmetry within protons – which had fallen out of favour following previous experimental results.

The result could enable physicists to better interpret the results of high-energy proton collisions at CERN’s Large Hadron Collider. Accounting for antimatter asymmetry could lead to a better understanding of the particles produced by these collisions – including the high-mass W and Z bosons, which mediate the weak nuclear force.

The research is described in Nature.

Ferroelectric domain wall diodes get flexible

Researchers have made ferroelectric domain wall diodes from structures etched on the surface of an insulating single crystal. The new devices, which are made from a material that is already widely employed in optoelectronics, can be erased, positioned and shaped using electric fields and might become fundamental elements in large-scale integrated circuits.

Domain walls are narrow (roughly 10-100 nm) boundaries between regions of a material where the dipole moments point “up” and neighbouring regions where they point “down”. At these boundaries, the dipole moments undergo a gradual transition to the opposite orientation rather than flipping abruptly.

Technologies that exploit these structures in ferromagnets have come along considerably over the last 15 years, making it possible to construct devices such as racetrack memories and circuits that operate using domain-wall logic.

Spurred on by these advances, some researchers have turned their attention to analogous domain walls in ferroelectrics – that is, materials that have permanent electric dipole moments in the same way as their ferromagnetic counterparts have permanent magnetic dipole moments. Ferroelectric materials hold particular promise for applications because their dipole moments can be oriented using electric fields, which are much easier to create than the magnetic fields used to manipulate ferromagnets.

New group of two-dimensional conductors

Ferroelectric domain walls have several useful properties. When ferroelectric materials are made into devices such as diodes, an applied electric field aligns all of the domains, regardless of their initial polarity, along the same direction. Ferroelectric domain walls can therefore be reversibly created, erased, positioned and shaped using positive or negative voltages.

Researchers in the School of Microelectronics at Fudan University, China, have now developed a new way of constructing such diodes by using ferroelectric mesa-like cells that form at the surface of an insulating crystal of lithium niobate (LiNbO3). This material is already commonly used in many optical and optoelectronic devices, including optical waveguides and piezoelectric sensors.

Led by Jun Jiang and An-Quan Jiang, the researchers used electron-beam lithography and dry etching processes to fabricate cells that were 60 nm high, 300 nm wide and 200 nm long on the surface of the LiNbO3. They then connected two platinum (Pt) electrodes, one on the left and one on the right, to opposite sides of a cell for subsequent measurements.

When they applied an electric field across the material via the electrodes, they observed that the domain within part of a cell reversed so that it pointed antiparallel to a domain at the bottom of the cell (which remained unswitched). This led to the formation of a conducting domain wall.

Interfacial “imprint field”

The team controlled the conducting domain wall’s current (which can be as high as 6 μA under an applied voltage of 4 V) using two interfacial and unstable (volatile) domain walls connected to the two side Pt electrodes. The researchers explain that these interfacial domain walls disappear once the applied electric field is removed or a negative voltage is applied, switching off the current path along the wall.

“We ascribe the rectifying behaviour to the volatile domains within the interfacial layers,” team member Chao Wang tells Physics World. “As we remove the applied voltage or reduce it to below the device’s onset voltage (Von), the interfacial domains switch back into their previous orientations thanks to the existence of an interfacial ‘imprint field’. This field does not exist in the bottom domain, which, as we remember, is non-volatile.”

Reporting its work in Chinese Physics Letters, the Fudan University team says it will now be focusing on optimizing the properties of its devices – namely their Von, their on/off current ratio and their stability.
