Neural networks extract information from sparse datasets

How did you get the idea for your company?

I was chatting to a materials science PhD student in a pub a few years ago, and he started telling me about some mathematical problems his group was facing. They were trying to use neural networks to predict the properties of new materials as a function of their composition, and I showed them how to use a tool called a covariance matrix to calculate the overall probability that a new material will satisfy various requirements – strength, cost, density and so on – at once. By doing that, we were able to design several new metal alloys, which are now being tested by Rolls-Royce.
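
As a loose illustration of that idea (not Intellegens’ actual code), the sketch below takes an assumed vector of predicted property means and an assumed covariance matrix for a candidate alloy and estimates, by Monte Carlo sampling, the joint probability that several design targets are met at once; every number here is hypothetical.

```python
# A minimal sketch (hypothetical numbers, not Intellegens' code): estimate the joint
# probability that a candidate alloy meets several property targets at once, given
# predicted means and a covariance matrix for those properties.
import numpy as np

rng = np.random.default_rng(42)

# Assumed predictions: strength (MPa), cost ($/kg), density (g/cm^3)
mean = np.array([1200.0, 25.0, 7.9])
cov = np.array([[900.0, -5.0,  0.3],   # assumed covariance between the predictions
                [ -5.0,  4.0, -0.1],
                [  0.3, -0.1,  0.04]])

samples = rng.multivariate_normal(mean, cov, size=100_000)
meets_all = (samples[:, 0] >= 1150) & (samples[:, 1] <= 28) & (samples[:, 2] <= 8.1)
print(f"P(all targets met) ~ {meets_all.mean():.2f}")
```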

At that point, I began to investigate ways of getting even deeper insights into material properties. Certain physical laws, like the fact that electrical conductivity is proportional to thermal conductivity, or that a material’s tensile strength is roughly three times its hardness, are very powerful for predicting how a material will behave. However, because we set up our neural networks to always extrapolate from composition to property, we weren’t exploiting property–property correlations. So I changed the algorithm so that the neural network could capture that additional information, and we used it to design materials that can be used in a 3D printing process called direct metal deposition. We only had 10 experimental data points for how well materials could be 3D printed, but we were able to take that small amount of data and merge it with the huge database of how weldable different alloys are, which is an analogous property. The resulting extrapolations guided our design of new materials.
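
A minimal sketch of that kind of merge, on entirely synthetic data: fit a simple mapping from a fully populated “weldability” column to the handful of measured “3D printability” points, then use it to rank every alloy. The real approach uses neural networks over many properties at once; this only shows the shape of the idea.

```python
# A minimal sketch (synthetic data): learn a mapping from a complete "weldability"
# database to a handful of measured "3D printability" values, then extrapolate.
import numpy as np

rng = np.random.default_rng(1)
n_alloys = 500
weldability = rng.uniform(0.0, 1.0, n_alloys)                          # complete property
true_printability = 0.8 * weldability + rng.normal(0, 0.05, n_alloys)  # unknown in practice

measured = rng.choice(n_alloys, size=10, replace=False)                # only 10 print tests
slope, intercept = np.polyfit(weldability[measured], true_printability[measured], 1)

estimated = slope * weldability + intercept          # printability estimate for every alloy
print("most promising alloys:", np.argsort(estimated)[-5:])
```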

What happened next?

The direct metal deposition project exposed me to the idea that there might be new opportunities in merging sparse databases (like the one for 3D printability) with full ones (like the one for weldability), so the next step was to develop a much more comprehensive method for doing that. The mathematical inspiration for this method comes from many-body quantum mechanics, where something called the Dyson formula is used to calculate the Green’s function for an interacting particle in terms of the Green’s function for a non-interacting particle and a self-energy term that captures the effect of one particle interacting with another. We’re able to make an analogy in which the Green’s function of an interacting particle is like a prediction of a full material property, while the Green’s function of a non-interacting particle is like an “empty” data cell, for which we just make a naïve guess about what the value might be. Then our neural networks use the quantity we know to guide the extrapolation of the quantity we don’t. This enables us to merge experimental datasets, which are sparse, with some first-principles computer simulations and molecular dynamics simulations, which are complete.
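
The sketch below (synthetic data, with scikit-learn’s Ridge regressor standing in for the neural network) shows the general pattern being described: start every empty cell from a naive guess, then refine it iteratively using the columns that are known, loosely in the spirit of the bare quantity plus corrections.

```python
# A minimal sketch (synthetic data): fill each missing cell with a naive guess
# (the column mean), then iteratively refine it with a regressor trained on the
# observed values -- a loose analogue of the bare quantity plus corrections.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X_true = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated "properties"
mask = rng.random(X_true.shape) < 0.4                         # 40% of cells missing
X = np.where(mask, np.nan, X_true)

filled = np.where(mask, np.nanmean(X, axis=0), X)             # naive starting guess
for _ in range(5):                                            # self-consistent refinement
    for j in range(X.shape[1]):
        obs = ~mask[:, j]
        others = np.delete(filled, j, axis=1)
        model = Ridge().fit(others[obs], X[obs, j])
        filled[mask[:, j], j] = model.predict(others[mask[:, j]])

print("RMS imputation error:", np.sqrt(np.mean((filled[mask] - X_true[mask]) ** 2)))
```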

We also noticed that there is often a lot of information hidden in the “noise” within data. Again, we know this from many-body physics, from the physics of critical phenomena that occur in low-temperature solid-state systems, and from renormalization group theory, where the large-scale fluctuations in one physical quantity can be related to the mean expectation value of a different physical quantity. Physicists have developed a lot of maths to capture that knowledge, and if I port that across to our neural network, we can use the uncertainty in one quantity to tell us the mean value of another. That’s been helpful for interpreting microstructures and phase behaviour in materials.
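
As a toy illustration of that last point, the following sketch builds synthetic data in which the scatter of one property grows with the mean of another, then recovers the second property from the measured fluctuations of the first; both the data and the linear relation are assumptions made purely for illustration.

```python
# A toy illustration (synthetic data): the scatter of property A is constructed to
# grow with the mean of property B, so B can be inferred from the noise in A.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_samples = 50, 200
mean_b = rng.uniform(1.0, 5.0, n_groups)                   # "hidden" property B per family
noisy_a = rng.normal(0.0, 0.3 * mean_b[:, None],           # A's scatter scales with B
                     size=(n_groups, n_samples))

scatter_a = noisy_a.std(axis=1)                            # measured fluctuation of A
slope, intercept = np.polyfit(scatter_a, mean_b, 1)
inferred_b = slope * scatter_a + intercept                 # mean of B from the noise in A
print(f"correlation with truth: {np.corrcoef(inferred_b, mean_b)[0, 1]:.2f}")
```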

These techniques have many possible uses, and although I’ve worked on a few of them in my capacity as a researcher at the University of Cambridge – collaborating first with Rolls-Royce, and later with Samsung to design new battery materials and BP to design new lubricants – I eventually decided that I needed to form a spin-out company to really drive them forward.

What was the spin-out process like?

Initially, I was put in touch with Cambridge Enterprise, which is the university’s commercialization arm. They introduced me to several local business angels. I took each of them out to dinner, worked out what they thought the opportunities were and tried to understand what they’d be like to work with, and eventually selected an angel called Graham Snudden. Working with Graham helped me to understand our business plan, and he also introduced me to a former employee of his, Ben Pellegrini, who became my co-founder and the CEO of our spin-out, Intellegens. Ben had experience of working at smaller companies, and he had worked in software, which is a complementary area to my own skillset and absolutely core to our business strategy.

Ben Pellegrini: When I first met Gareth, he was running the algorithm through a terminal prompt at the university computing centre. He was always very enthusiastic and very bright, and I could see that there was real interest and value in what he was doing, but it was hard – I had to meet him a few times before I understood that when he was moving data around, he was generating interesting results. The big question was how to transform this tool from something that a specialized user can engage with at the command line into something your average engineer or scientist in a clinical lab or materials company can use. That’s a challenge I enjoy.

How did you get funding?

BP: For the first six months, I was based in my kitchen and Gareth was doing work for Intellegens in the evenings. Then we got some money from Innovate UK to get us going with a proof-of-concept project, plus a little bit of money from Cambridge Enterprise and from Graham, who (as Gareth mentioned) is a local angel investor. We’ve also been quite lucky in that we can run consultancy-style projects to generate income as we’re going along.

What are some of the projects you’ve worked on?

GC: We’ve been pushing hard on the problem of designing new drugs. The basic question is, if you inject a drug into a patient, which proteins will react to it? Does the drug activate them or inhibit them? There are about 10,000 proteins in the body and about 10 million drugs that you can test, so if you imagine a huge matrix where each column is a different protein and each row is a different drug, the dataset is only about 0.05% complete, because it’s impossible to conduct experimental tests on that many drug-protein combinations. It’s the ultimate sparse dataset.

However, we do have information about the chemical structure of every drug and every protein. That’s a complete dataset. Our goal is to marry the complete dataset of chemical knowledge to the sparse dataset of protein activity and use it to predict the activities of proteins. We can do this by taking advantage of protein-to-protein correlations and protein-to-drug-chemical-structure correlations. It’s very similar to what we were doing with materials for 3D printing, where weldability is a complete dataset and 3D printability is a sparse dataset.
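
One simple way to picture this kind of merge (a generic sketch on synthetic data, not Alchemite itself): treat each measured drug–protein cell as a training example whose inputs are the complete chemical descriptors of that drug and that protein, fit a standard regressor, and then predict the empty cells of the sparse activity matrix.

```python
# A generic sketch (synthetic data, not Alchemite): every measured drug-protein cell
# becomes a training example whose inputs are the complete chemical descriptors of
# the drug and the protein; the fitted model then predicts the empty cells.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_drugs, n_proteins = 300, 40
drug_desc = rng.normal(size=(n_drugs, 8))       # complete drug descriptors
prot_desc = rng.normal(size=(n_proteins, 6))    # complete protein descriptors

# sparse activity matrix: only ~2% of drug-protein pairs have been "measured" here
activity = drug_desc[:, :1] @ prot_desc[:, :1].T + rng.normal(0, 0.1, (n_drugs, n_proteins))
observed = rng.random((n_drugs, n_proteins)) < 0.02

rows, cols = np.where(observed)
X_train = np.hstack([drug_desc[rows], prot_desc[cols]])
model = RandomForestRegressor(n_estimators=100).fit(X_train, activity[rows, cols])

rows_u, cols_u = np.where(~observed)
predicted = model.predict(np.hstack([drug_desc[rows_u], prot_desc[cols_u]]))  # fill the gaps
```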

The business has now moved to the stage of licensing machine learning as a product. For drug discovery, Alchemite is marketed through Optibrium, and there has already been enthusiastic take-up by Big Pharma. For materials discovery, Intellegens is licensing a full-stack solution direct to the customer, with the first sales now complete.

BP: We’re also talking to people who work on infrastructure, trying to understand gaps in maintaining things like bridges or equipment. In a transport network, for example, you may or may not have data on relevant factors such as weather, geography, topology, road composition and pedestrian use at specific points in the network, so you end up with very big, sparse datasets. We’re working on patient analytics as well, trying to predict optimum treatment profiles from sparse sets of historical patient data. Again, we may or may not have the same data available for all patients, but we have a combination of data points, and trying to learn from all the data points we have seems to give us an edge in suggesting possible routes of treatment.

I would like to point out, though, that there’s a lot of hype around artificial intelligence (AI) and deep learning at the moment, and that’s a double-edged sword for us. It’s getting us a lot of interest, but we have a special – maybe even unique – academically driven toolset that solves problems in a new way, and that can sometimes get lost in the noise about AI-based voice recognition or image recognition.

How is your technology different?

The main differentiator is our ability to train models from incomplete data. The usual methods for training an AI or a neural network require lots of good-quality training data to make good models for future predictions. In contrast, the driver for our algorithm is that we don’t have enough data for an AI to learn the correlations and build a model on its own. I think that’s our unique selling point. Everyone talks about “big data”, and you sometimes hear people complain about it – “Oh, I’ve got big data, I’ve got too much data to deal with.” But when you home in on a specific use case and look at it in a certain way, you realize that in fact, their problem is that they don’t have enough data, and they never will. At that point, we can say, well, given that you haven’t got enough data, we can use our technology to learn from the data you have, and use that information to help you make the best decisions.

What’s the most surprising thing you’ve learned from starting Intellegens?

BP: This is the first time I’ve worked closely with academics, which has been interesting (in a good way). I’d worked in software start-ups before, so I was used to dealing with experienced software people who are familiar with the tools and processes of commercial software. Academic software sometimes needs a bit more finessing to get it into a commercially stable product, in terms of source control, release management and documentation. It might sound like quite boring stuff, but if you’re going to be selling a product and supporting it, it becomes critical.

GC: I was surprised to learn that the process of getting contracts depends so much on word of mouth. I give talks at conferences, potential customers come up to me afterward, and then one customer introduces us to the next one, like stepping stones.

I also didn’t fully understand the different reasons why people might want to engage with a business like ours. Some people really want to bring in the latest technology to give their company a competitive advantage. Others want to be associated with using a technique that’s right at the bleeding edge. And some are interested in working with entrepreneurs because they personally want to buy in to the adventure and the excitement of a smaller company.

  • Gareth Conduit is a Royal Society University Research Fellow at the University of Cambridge, UK, and the chief technology officer at Intellegens, e-mail gjc29@cam.ac.uk. Ben Pellegrini is the chief executive officer at Intellegens

The physics of lawn sprinklers, the hazy mist of the Standard Model, careers in medical physics

Here in Bristol the climate is ideal for a carefree lawn – it rarely gets very hot and we usually get enough rain to make it through the summer without the need for a lawn sprinkler. As a result, I miss the hiss of sprinklers that were part of my youthful summers in Canada. In Wired, the physicist Rhett Allain looks at the fascinating physics of lawn sprinklers – and the optical illusions they can create – in “The mesmerizing science of garden sprinklers”.

Unfortunately, Allain does not look at the physics of impact sprinklers, which used to fascinate me as a child with their seemingly chaotic behaviour.

If I had a penny for every time I wrote a phrase like “…this new experiment could provide tantalizing hints of what physics lies beyond the Standard Model” I would be a pound or two richer. In “The once and present Standard Model of elementary particle physics” James Wells of the University of Michigan looks into the “hazy mist” of the model since its birth with the discovery of charm in 1974 (Wells’ definition, not mine).

One chapter is called “Facts, mysteries and myths”, which advocates constructing a myth for how neutrinos obtain mass and describes the cosmological constant, dark matter, baryogenesis, and inflation as four “mysteries of the cosmos”.

If you have given up on improving the Standard Model, you might fancy becoming a medical physicist. There is a nice article in Symmetry that asks five former particle physicists why they chose that career path and what they do now as medical physicists.  You can read more in “Transitions into medical physics” by Catherine Steffel.

Ultrasound device creates an audio, visual and tactile 3D display

An ultrasound-powered, 3D visual display that can also produce audible sound and holograms that you can touch has been unveiled by researchers at the University of Sussex. The team used the display to produce 3D images such as a torus knot, a globe, a smiley face and letters, as well as a dynamic countdown of levitating numbers.

The display is a type of sonic tractor beam, which uses ultrasound transducers to create acoustic holograms that can trap and manipulate objects in mid-air. The Sussex device uses two arrays of 256 speakers to levitate a single polystyrene bead, which traces out 3D images in mid-air while illuminated by coloured LEDs. The bead can move at speeds of almost 9 m/s (in the vertical direction), which is so fast that an image is drawn in less than 0.1 s. This creates the illusion of a single 3D image in much the same way as a cathode-ray tube creates a 2D image in an old television by rapidly scanning an electron beam across a phosphor screen.
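
A rough back-of-the-envelope check of those numbers, with an assumed total trace length for a figure a few centimetres across:

```python
# A rough check of the numbers quoted above, with an assumed total trace length
# for a figure a few centimetres across.
path_length = 0.5        # m, assumed length of the path the bead traces per image
vertical_speed = 9.0     # m/s, peak vertical speed
horizontal_speed = 3.75  # m/s, horizontal speed in display-only mode

print(f"trace time at vertical speed:   {path_length / vertical_speed:.2f} s")
print(f"trace time at horizontal speed: {path_length / horizontal_speed:.2f} s")
# Both are of order 0.1 s or less, fast enough for persistence of vision.
```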

“Our new technology takes inspiration from old TVs,” explains Ryuji Hirayama of the University of Sussex. “Our prototype does the same using a coloured particle that can move so quickly anywhere in 3D space that the naked eye sees a volumetric image in mid-air.”

Amplitude and phase

The display creates sound by vibrating the polystyrene bead at audible frequencies. This is possible because the device uses different elements of the ultrasound signal for levitation and vibration. Sussex’s Sriram Subramanian explains that the ultrasound phase information is used to create the levitation traps, which means that the amplitude is free to be used for other applications. In the Sussex display, amplitude modulation is used to generate audible sound.
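
As a toy illustration of the idea (not the Sussex team’s code), the snippet below amplitude-modulates a 40 kHz carrier with an audio-frequency envelope while leaving the carrier phase, which would define the levitation trap in the real device, untouched; all parameters are assumptions.

```python
# A toy illustration (assumed parameters): amplitude-modulate a 40 kHz ultrasound
# carrier with an audio-frequency envelope while leaving the phase untouched.
import numpy as np

fs = 1_000_000                      # sample rate in Hz
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
carrier_freq, audio_freq = 40_000, 440

trap_phase = np.zeros_like(t)       # phase is reserved for the levitation trap
envelope = 0.5 * (1 + np.sin(2 * np.pi * audio_freq * t))       # audible modulation
drive = envelope * np.sin(2 * np.pi * carrier_freq * t + trap_phase)
```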

To add tactile feedback, the device creates a secondary set of acoustic holograms, which produce enough pressure for you to feel them. But this does have an impact on the performance of the device, which switches between levitation and tactile holograms, producing levitation traps 75% of the time and tactile feedback 25% of the time. While producing just a visual display, the polystyrene particle can be moved horizontally at speeds of 3.75 m/s, but this drops to 2.5 m/s when tactile feedback and audio are added.

Subramanian says that the most significant part of this work is the speed at which they can now move a levitated object. Previous displays held particles for a few milliseconds in each new position before moving them again — but the new device keeps them moving all the time. “The particle is always accelerating and that is how we get the speed,” Subramanian explains.

Harry Potter theme park

Subramanian believes that one of the most obvious applications of the display is at a theme park, using the example of a Harry Potter experience. “As a kid you can walk up to the system, you can hold your hand out and you can start feeling a magic spell in your hand. And then there is a fireball that is bubbling in front of you. You can have a very magical experience,” he says.

But there are other non-display applications for these techniques. For example, the researchers show that the device can be used to manipulate liquids and Subramanian says that this could, for example, be used to create 3D printers that manipulate different liquids simultaneously, to print a multi-material object. “We haven’t tested these things, but I think they are future projections of what we could do,” he explains.

Tatsuki Fushimi of the University of Bristol, who earlier this year unveiled a similar display without the tactile and audio elements, says that he is very impressed by the work.

“The future of acoustophoretic volumetric displays is very bright and this [research is] a significant step towards turning this science fiction idea into reality,” Fushimi says. “They were successful in enlarging the screen size of the display by increasing the number of ultrasonic emitters (we used 60 whereas their setup used 512). This means that the particle can be displaced along a larger region of space and with a greater velocity. There are many things to be done before commercialization, but it is exciting to think about the future of acoustophoretic volumetric displays, and I am sure that interesting applications will emerge as we further improve the performance of these devices.”

The display is described in Nature.

Ultrafast 3D ultrasound wins journal citations prize

Mathieu Pernot and co-authors

A research paper describing an ultrasound imaging technique that can produce ultrafast 3D videos has won its authors the 2019 Physics in Medicine & Biology (PMB) citations prize. This annual prize recognizes the PMB paper that received the most citations in the preceding five years.

The paper, 3D ultrafast ultrasound imaging in vivo, was written by researchers from Physics for Medicine, formerly Institut Langevin (ESPCI ParisTech, CNRS, INSERM and PSL Research University) in France. The winning study, which also won the Roberts Prize for the best paper published in PMB in 2014, describes the first implementation of a novel ultrasound technique that produces 3D videos at thousands of frames per second.

The researchers achieved this high imaging rate by extending their previous work on 2D ultrahigh-frame-rate ultrasound imaging to three dimensions. To do this, they used diverging or plane waves emitted from a sparse virtual array located behind the probe. They designed a customized portable ultrasound system that samples 1024 independent channels and drives a 32×32 matrix-array probe. Graphics processing units were employed to speed the processing of the backscattered signals.
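
The reconstruction step in ultrafast imaging is typically a delay-and-sum beamformer. The sketch below is a heavily simplified, single-voxel version on placeholder data, with assumed element spacing, sampling rate and virtual-source position; the real system performs the equivalent computation for millions of voxels per volume on GPUs.

```python
# A heavily simplified sketch (placeholder data, assumed geometry): delay-and-sum
# beamforming of one diverging-wave transmission on a 32x32 matrix array.
import numpy as np

c, fs = 1540.0, 10e6                        # sound speed (m/s), sampling rate (Hz)
pitch, n = 0.3e-3, 32                       # assumed element pitch (m), 32x32 elements
xs = (np.arange(n) - n / 2) * pitch
ex, ey = np.meshgrid(xs, xs)
elems = np.stack([ex.ravel(), ey.ravel(), np.zeros(n * n)], axis=1)  # element positions
virtual_src = np.array([0.0, 0.0, -5e-3])   # assumed virtual source behind the probe

rf = np.random.randn(n * n, 2048)           # placeholder backscattered RF data

def beamform_voxel(voxel):
    """Sum the element signals at the round-trip delay for a single voxel."""
    t_tx = np.linalg.norm(voxel - virtual_src) / c          # transmit path
    t_rx = np.linalg.norm(elems - voxel, axis=1) / c        # receive paths
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    valid = idx < rf.shape[1]
    return rf[np.arange(n * n)[valid], idx[valid]].sum()

value = beamform_voxel(np.array([0.0, 0.0, 30e-3]))         # one voxel at 30 mm depth
```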

The 3D ultrafast ultrasound system achieved high contrast and resolution. Lead author Mathieu Pernot and colleagues demonstrated its use for several potential applications, including 3D mapping of stiffness and tissue motion, as well as the first real-time imaging of blood flowing through the chambers of a human heart.

Rapid progress

In the years since the paper was published, the field of 3D ultrafast ultrasound imaging has progressed rapidly. Pernot’s team and other groups have used the technique for applications including imaging cardiac blood flow and tissue in the human heart, myocardial fibre imaging, coronary flow imaging and functional brain imaging in animals.

“3D ultrafast imaging remains today a research tool, but the miniaturization of the technology is progressing rapidly and cost-effective solutions are emerging,” says Pernot. “Clinical systems could become available in the next few years.”

As for why the paper attracted so many citations, Pernot suggests that it introduced a transition from ultrasound being perceived as a low-tech imaging modality with high operator dependency to a flexible tool for imaging entire organs with high spatial and temporal resolutions.

“This is a new paradigm for ultrasound imaging,” he says. “3D ultrafast imaging can provide, in quasi-real time, quantitative parameters such as myocardial stiffness or functional connectivity of the brain, which remain challenging to image with other modalities.”

The PMB citations prize is marked with the presentation of the Rotblat medal, named in honour of Sir Joseph Rotblat, PMB’s second and longest-serving editor. “We feel very honoured and proud to receive the Rotblat medal,” Pernot tells Physics World. “Our team, Physics for Medicine, has been pursuing the development of new imaging and therapeutic modalities for many years with the support of our institutions and funding organisations such as the ERC and the ANR. The Rotblat Medal is an important recognition of our efforts to achieve these goals at the highest scientific level.”

  • The winner of the 2019 Physics in Medicine & Biology citations prize is: 3D ultrafast ultrasound imaging in vivo by Jean Provost, Clement Papadacci, Juan Esteban Arango, Marion Imbault, Mathias Fink, Jean-Luc Gennisson, Mickael Tanter and Mathieu Pernot Phys. Med. Biol. 59 L1

Nanowire circuits allow for transparent and flexible LED screens

Researchers in China have fabricated transparent and flexible LED screens using a simple, low-cost manufacturing process based on silver nanowires. Liu Yang and colleagues at Zhejiang University say their technique is an improvement on existing screens, which are too opaque for some applications and can be brittle when deposited on flexible substrates. Their technology could soon bring diverse new capabilities to displays built into the walls and windows of modern buildings.

In recent years, transparent LED screens have become a focus of efforts to make flexible video displays using substrates like glass and clear plastic. Such screens are made from networks of highly transparent conductive circuits that connect their constituent LEDs together. For screens measuring a metre or more, either fluorine-doped tin oxide or indium tin oxide is typically used to construct the circuits. However, networks of this type suffer from several shortcomings, including complex and expensive manufacturing processes as well as brittleness and a lack of transparency.

In order to design an effective alternative, Yang’s team needed to fabricate a network of wires that was dense enough to distribute electric current throughout the screen, but also sparse enough to preserve transparency. This led them to silver nanowires, which have excellent optical transmittance, electrical conductance, and mechanical flexibility. To manufacture their nanowires, Yang and colleagues first coated plastic and glass substrates with sacrificial masks, etched with networks of straight lines. After treatment in a specialized solution, these lines became stickier than the rest of the substrates. A further spray-coating process led to silver nanowires forming only along these sticky lines, creating an intricate network.

Using this technique, the researchers fabricated a series of 25 cm-long, transparent and highly uniform conductive strips on both types of substrate. Through experiments, they showed that these strips possessed both high optical transmittance and low electrical resistance, making them superior to previous tin oxide-based materials. In addition, they demonstrated a screen that hosted a silver nanowire circuit as long as 1.2 m, enabling it to emit red, green, and blue light with varying biases, as seen in conventional displays. Finally, they showed that when the circuit was deposited onto a polymer substrate, its performance remained stable even when bent to a radius of 15 mm – confirming its flexibility.

Thanks to these advantages, Yang’s team believes their technology could eventually replace tin oxide-based circuits in transparent display applications. The next steps in their research will include developing coatings to protect circuits from the surrounding environment; enhancing substrate adhesion; and sandwiching circuits between substrates for better protection and maintenance. With these improvements in place, the technology shows significant promise in allowing for widespread and practical smart displays.

The team report their findings in Optical Materials Express.

Trapped interferometer makes a compact gravity probe

An illustration of the Berkeley group's lattice interferometer

Atoms held in place by laser beams offer a new and more compact means of measuring the local acceleration due to gravity, paving the way for applications ranging from geophysical exploration to sensitive tests of fundamental forces.

The new device, which was developed by Victoria Xu and colleagues at the University of California, Berkeley, US, exploits the quantum properties of cold, trapped atoms to measure tiny variations in the Earth’s gravitational field. Like other such “quantum gravimeters”, it relies on the interference pattern generated when clouds of atoms, or matter waves, are first vertically separated in space, and then allowed to recombine. Because the gravitational acceleration g depends on altitude above the Earth’s surface (as well as factors such as the local density of the Earth’s crust, and the presence of massive objects nearby) the two groups of atoms experience slightly different gravitational potential energies. This difference translates into a phase shift that can be detected in the laboratory.

Suspended in space

The twist is that whereas most gravimeters measure the effects of gravity on atoms as they fall through space, the Berkeley device instead uses atoms that are suspended in an optical trap. This makes it possible for the atoms to interact with the gravitational field for up to 20 seconds, improving the sensitivity of the measurement. “The longer you can allow that phase difference between the two arms of the interferometer to accumulate, the smaller the effect you can measure at the end,” Xu explains. As a result, the team is easily able to measure potential energy differences that arise when atoms are separated by as little as a few microns.
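
A rough order-of-magnitude estimate shows why this is so sensitive: the phase accumulated between two wavepackets held at a vertical separation dz for a time T is delta_phi = m g dz T / hbar, i.e. the potential-energy difference times the hold time divided by hbar. With assumed numbers for caesium, a few-micron separation and a 20 s hold, that already amounts to millions of radians:

```python
# A rough order-of-magnitude estimate with assumed numbers: phase accumulated
# between two trapped wavepackets separated vertically by dz over a hold time T.
m_cs = 2.207e-25   # mass of a caesium atom, kg
g = 9.81           # local gravitational acceleration, m/s^2
dz = 5e-6          # assumed vertical separation, a few microns
T = 20.0           # hold time, s
hbar = 1.055e-34   # reduced Planck constant, J s

delta_phi = m_cs * g * dz * T / hbar
print(f"accumulated phase ~ {delta_phi:.1e} rad")   # of order a million radians
```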

The researchers begin their gravitational measurements by cooling a sample of caesium atoms to 300 nK and launching them upward within a vacuum chamber. Next, they apply a sequence of two pulses of light, set to a frequency and intensity that places the atoms in a quantum superposition of two spatially separated states. At the apex of their trajectory, the atoms are caught in an optical lattice formed by a beam of laser light passing between a pair of highly reflective mirrors. The atoms remain trapped for up to 20 s, at which point the researchers turn off the lattice, allow the atoms to fall, and apply a second pair of light pulses to recombine and interfere the atomic wave packets. The resulting phase difference is proportional to the difference in gravitational potential energy the atoms experienced during their time in the trap.

In addition to its long interaction times, the Berkeley gravimeter has two other advantages over devices based on free-falling atoms. The first is that its geometry makes it far less sensitive to noise caused by vibrations in the laboratory. The second is that it is relatively compact. The team’s entire experimental set-up fits onto an optical bench measuring 1.2 m by 2.4 m, and the atoms themselves move only a few millimetres. An equivalent experiment performed on atoms in free fall would, the researchers note, require a vacuum system half a kilometre tall.

Novel geometry, novel experiments

Xu says that their gravimeter could be used to measure the gravitational potential of small objects. “If you work in a free-fall geometry, this is a pretty hard measurement to do, because you’re just dropping atoms past the signal you’re interested in,” she says. With trapped atoms, in contrast, the atoms can be held in the area where the gradient of the gravitational potential is greatest. This, she says, should make it possible to perform more sensitive measurements of short-range forces, which could be used to test theories of dark energy.

Holger Müller, who leads the Berkeley group, points out that being able to hold atoms in a quantum state for long periods of time could also have applications in other areas of physics. “What we are doing is using quantum mechanics for a useful purpose,” he explains. In their current set-up, the “useful purpose” is measuring tiny variations in the force of gravity, but similar techniques could also be applied to experiments on quantum computation. In either case, Müller adds, the time required to make the measurement is important, but so is the length of time that the atoms remain in a quantum superposition. “What we are doing is vastly extending this second time scale,” he concludes.

The team report their work in Science.

Physics on the silver screen, fictional and real wormholes, a new parlour game

This episode of the Physics World Weekly podcast goes to the movies as we discuss how physics and physicists have shaped the film industry – both on and off screen.

Physics World editors chat about wormholes, both real and fictional, and make the case for our favourite science moments in films. We look at how the laws of physics are put to work in visual effects to ensure that everything from weightlessness to curly hair looks realistic on screen. We also chat about what moviemakers and physicists can learn from each other and the science of science fiction.

Finally, we play a physics-and-film related parlour game that is sure to liven up your next party.

Catalytic technique ‘upcycles’ single-use plastic

Single-use plastic products could have a more useful and less polluting future thanks to a new technique that “upcycles” them into valuable lubricants and waxes. The technique, known as catalytic hydrogenolysis, uses platinum nanoparticles supported on tiny cubes of strontium titanate perovskite material to convert energy-rich polyethylene molecules into liquid hydrocarbons. The high-quality nature of these hydrocarbons means they could be employed in consumer products, potentially reducing plastic pollution in the environment.

Polyolefins such as polyethylene are widely used in single-use plastic products because the starting materials to make them are cheap and abundant. While these products are vital in some applications – sterile packaging for foods or medical devices, for example – they are being produced in ever-increasing amounts. Around 380 Mt of plastic materials are created worldwide each year, corresponding to roughly 7% of all crude oil and natural gas produced, and some analysts predict that plastic production could quadruple by 2050.

Of these materials, 75% are discarded after just a single use. Most single-use plastic waste ends up in landfills, in the environment, or in incinerators that produce greenhouse gases and toxic by-products as well as electricity. The plastic left in the environment does not easily degrade because of the very strong carbon-carbon bonds present, but instead breaks down into microplastic particles. In instances where it is recycled, current methods tend to produce lower-value materials with degraded properties – a phenomenon known as “downcycling”.

A vast and as-yet untapped resource

Despite this, some scientists view polyolefin waste as a vast and untapped resource for producing higher-grade chemicals and new materials. According to a US-based team led by Kenneth Poeppelmeier at Northwestern University, Aaron Sadow at Ames Lab and Iowa State University and Massimiliano Delferro at Argonne National Laboratory, a more efficient technology for extracting value from discarded polymers could save the equivalent of up to 3.5 billion barrels of crude oil each year. Selective catalytic processes that upcycle plastic waste into valuable products are thus sorely needed.

The researchers have now developed such a process. In their work, they were able to react some of the strong C-C bonds of high-molecular-weight polyethylene with hydrogen, converting the material into high-quality liquid hydrocarbons with a lower molecular weight and a narrow distribution of between 200 and 1000 Da. Such liquids could be used as lubricating oils or as intermediates such as waxes that can be further processed into ingredients for everyday necessities such as detergents and cosmetics, say the researchers.

The catalyst they used consists of platinum nanoparticles just 2 nm in size deposited onto strontium titanate (SrTiO3) perovskite nanocubes (50-60 nm across) using a technique called atomic layer deposition. This technique, which was developed at Argonne National Laboratory, allows for precise control over how the nanoparticles are deposited onto the nanocubes. The researchers chose these materials because they are stable at high temperatures and pressures.

The catalyst cleaves the C-C bonds in the polyethylene under moderate pressures of 170 psi of hydrogen and temperatures of 300 °C. The technique, detailed in ACS Central Science, also produces far less waste than conventional recycling methods, according to Poeppelmeier and colleagues.

European physicists propose huge underground gravitational-wave laboratory

Physicists from across Europe have revealed plans for a huge underground gravitational-wave observatory that, if funded, could be operational by the mid-2030s. The European Laboratory for Gravitational and Atom-interferometric Research (ELGAR) could be located in either France or Italy and would cost around €200m to build. Those involved in the project have now applied for European funding to carry out a detailed design and costing for the facility.

Gravitational waves are ripples in space-time that were predicted over 100 years ago by Albert Einstein. In 2015 the twin Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in the US along with the Virgo gravitational-wave detector in Italy detected the first gravitational-wave signal and since then tens of such events have been spotted. The observation and pinpointing of such gravitational waves is expected to be boosted in the coming years by the recent completion of Japan’s KAGRA observatory, which is the world’s first underground gravitational-wave observatory to use cryogenic mirrors.

Rather than detecting gravitational waves by bouncing laser beams off mirrors as carried out by aLIGO, Virgo and KAGRA, ELGAR would instead use atom interferometry. This involves splitting an atom beam – rubidium atoms in ELGAR’s case – in half and allowing both halves to travel for a certain distance before being recombined to look for differences in their paths. A slightly longer path would result from a tiny curvature in space-time that could be caused by a passing gravitational wave.

Atom interferometers tend to be more sensitive at low frequency than their laser counterparts as atomic beams travel more slowly. “The technology for ELGAR is already mature,” says Benjamin Canuel from the Photonics, Numerical and Nanosciences Laboratory (LP2N) at the Institut d’Optique Graduate School in Bordeaux, who is coordinating the ELGAR proposal. “Many technological bricks of the ELGAR detector are now available in lab experiments but an ambitious R&D programme is required to benefit from those techniques in a large research infrastructure.”

Plugging the gap

ELGAR would feature two 32 km-long arms, each containing 80 atom “gradiometers” separated by 200 m. The gradiometers would measure the relative difference in the positions of the atom beams as they pass through. This set-up would allow researchers to detect gravitational waves in the 0.1–10 Hz frequency range, which would be emitted, for example, by medium-size black-hole binaries. These black holes have masses between 100 and one million solar masses and are elusive but crucial to explaining whether supermassive black holes formed from the growth of small black holes, from the merger of multiple smaller black holes, or possibly from other scenarios.

This frequency range would allow researchers to plug a gap in observations, given that ground-based detectors like LIGO cover the frequency range from around 10 Hz to 10 000 Hz, while the LISA space-based observatory would, if launched in the 2030s, study gravitational waves between 0.1 mHz and 0.1 Hz.

Three possible sites have been picked for ELGAR – the Laboratoire Souterrain à Bas Bruit (LSBB) in southern France and two former mines on the Mediterranean island of Sardinia. The LSBB is currently the location for the €12m Matter-wave laser Interferometric Gravitation Antenna (MIGA) — a demonstrator atom interferometer being built by a consortium of 17 French institutions and featuring a 150 m-long optical cavity. MIGA will carry out precision measurements of gravity as well as applications in geosciences and fundamental physics.

“The choice of ELGAR’s location will be another important goal of the design study that should give a precise methodology for site comparison and characterization, and could also eventually consider other sites in Europe,” Canuel told Physics World.

The ELGAR proposal is similar to one announced earlier this year by physicists in China. Known as the Zhaoshan Long-baseline Atom Interferometer Gravitation Antenna – Gravitational Waves (ZAIGA-GW), their facility, if built, would consist of three 1 km-long tunnels in the shape of an equilateral triangle with each arm being an independent atom interferometer. Costing 1.5 billion yuan, it would aim to detect gravitational waves in the 0.1–10 Hz frequency range and could be later upgraded to 3 or 10 km arms.

The team behind the ELGAR proposal come from six European Union countries and they are now applying for funding to carry out a complete design study for the facility including a full cost analysis. “ELGAR is quite unique in Europe,” adds Canuel. “It is the first large-scale instrument that relies solely on quantum technologies and the only project of research infrastructure in Europe based on matter-wave interferometry.”
