Next stop, the LHC

An unassuming setting for such an ambitious project

By James Dacey at CERN

Early tomorrow morning I will be leaving my roadside hotel and taking the second exit at this roundabout towards CERN, where scientists will embark on an infinitely more exciting journey – the start of the physics programme at the Large Hadron Collider (LHC).

This event will be marked by the first particle collisions at 7 TeV, which will set yet another impressive benchmark for accelerator physics.

CERN has announced within the last hour that the first attempt at collisions will take place any time from 7 a.m. Central European Summer Time.

I will be reporting from CERN during the day, but if you want to be even closer to the action you can follow events via a live webcast, which will include coverage from the control room as well as step-by-step explanations of the procedures.

Later in the day there will be a number of roundtable discussions as well as broadcasts from the LHC’s four experiments: ATLAS, ALICE, CMS and LHCb.

Graphene-oxide framework packs in hydrogen

Stacked layers of oxidized graphene could be used to store hydrogen fuel for cars and other applications. So say researchers in the US who have made graphene-oxide frameworks (GOFs) that can hold roughly 1% of their weight in hydrogen. This is about 100 times more than graphene oxide itself can hold and compares well with MOF-5 (the most studied metal-organic framework for hydrogen storage to date), which absorbs about 1.3 wt%.

Vehicles and other systems powered by hydrogen have the advantage of emitting only water as a waste product. An important challenge, however, is storing enough hydrogen on board a car to give it a range comparable to that of a fossil-fuel vehicle. If hydrogen is stored as a compressed gas it takes up far too much space – and liquefying it is expensive in terms of both money and energy.

One promising solution to this problem is to exploit the fact that many solid materials will absorb large amounts of hydrogen. Graphene oxide is a sheet of carbon and oxygen just one atom thick, and hydrogen can be stored between the layers in stacks of this lightweight material. The challenge is to get the spacing between layers just right to reach maximum storage capacity.

Connector molecules

Now, Taner Yildirim and colleagues at NIST and the University of Pennsylvania have boosted the storage capacity of graphene oxide by using organic “connector molecules” to separate individual layers by 1.1 nm. This is roughly three times the inter-plane distance in bare graphite – which comprises stacked layers of graphene.
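
As a quick sanity check on that factor of three (our arithmetic, not the paper's), graphite's interlayer spacing is a standard reference value of about 0.335 nm:

```python
# Quick arithmetic check (ours, not from the paper): compare the 1.1 nm
# pillar-set spacing with graphite's standard ~0.335 nm interlayer distance.
gof_spacing_nm = 1.1
graphite_spacing_nm = 0.335

print(round(gof_spacing_nm / graphite_spacing_nm, 1))  # ~3.3, i.e. roughly a factor of three
```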

“Being able to control this width is important for a number of applications, including hydrogen storage,” explains team member Jacob Burress. He says that the interlayer spacing can be controlled to optimize hydrogen adsorption. The idea is to have pores that are small enough to maximize the interaction between hydrogen and the surface of the frameworks, but at the same time large enough to hold two layers of adsorbed hydrogen.

The team took its inspiration from work already done on metal-organic frameworks (MOFs), widely studied materials for hydrogen storage. In MOFs, inorganic nodes are connected by organic struts using well-established chemistry techniques. In the new work, the metal-oxide nodes are replaced with graphene oxide and the struts with diboronic acid “pillars”.

Future optimization

The GOFs can store roughly 1 wt% of hydrogen at 77 K and 1 bar. “This is less than one-fifth of what the ‘ideal’ GOF structure can hold, according to state-of-the-art computer simulations,” says team member Wei Zhou. “Based on our adsorption simulations, the ideal GOF structure can adsorb hydrogen up to 6 wt% at 77 K and atmospheric pressure, suggesting that our GOF materials could be significantly optimized in the future.”

As important as its hydrogen-storage properties are, the fact that graphene-oxide production can easily be scaled up to industrial quantities is a big advantage too. What’s more, it is inexpensive and thought to be safe for people and the environment.

The team also discovered that the hydrogen-adsorption kinetics of GOFs differ from those of other materials. Below a certain “blocking” temperature there is little adsorption, and hardly any hydrogen gas is released either. This means that the material can be loaded with gas at a higher temperature and then cooled below the blocking temperature to hold the hydrogen in place; the gas will not be released until the sample is allowed to warm up again. For practical applications, this blocking temperature should ideally be as close to room temperature as possible.

Drug delivery

“We expect to see more work on graphene oxide where it is linked by many different connectors for a variety of chemistry and physics applications,” Burress tells physicsworld.com. “We anticipate these materials to be very useful not only for hydrogen storage but for other gases such as ammonia and carbon dioxide as well.” He also hinted at medical applications: “Once the graphene-oxide layers are separated by sufficiently large distances, one could also imagine adding some biomolecules for drug delivery”.

The researchers now hope to look into possible electronic applications for the GOFs because they may be useful as conducting materials for fuel cells or batteries. Another possibility is to use the GOFs as sensors, where gas adsorption leads to a measurable change in the material’s electronic properties.

The immediate next step is to optimize hydrogen-storage capacity, says the team. This could be achieved in a number of ways, including removing unreacted hydroxyl groups to increase the usable surface area and optimizing the concentration and chemistry of the linkers.

“We also want to understand the nature of hydrogen-adsorption kinetics and how we can use it to our advantage!” says Burress. “This is just the beginning of new research and there are many new experimental avenues to follow.”

The research was presented last week at the March Meeting of the American Physical Society.

Leading British scientist to scrutinize BBC's science coverage

Steve Jones to head BBC Trust review

By James Dacey

British biologist Steve Jones is to head a review of impartiality and accuracy in the BBC’s coverage of science, called for by the BBC Trust – the BBC’s governing body. The Trust has also published guidelines for the review, which set out its scope and timetable.

The review comes after pressure on the BBC has mounted in recent years, as issues such as climate change, GM crops and stem-cell research have become increasingly politicized.

For example, in 2007 the corporation cancelled a special day of programmes that was to be devoted to climate change when senior news executives questioned the impartiality of this broadcast.

Over the coming months, the BBC Trust will assess BBC content across all of its media outlets, including the BBC World Service. It says that this will involve a number of public-engagement activities.

“It will ask whether the BBC’s coverage of science, taken as a whole, presents a partial view of the nature of science and the role science plays within society,” say the review guidelines.

Jones will scrutinize the results of these exercises before writing a final report, which is expected to be published in the first half of 2011.

“I look forward to sampling some of [the BBC’s] huge coverage of physics, chemistry, biology, ecology, geology and more to see how well it is doing its job,” says Jones, who is head of the genetics, environment and evolution department at University College London.

“Science is by nature a field full of dispute; this is how it advances. Dispute is not the same as bias, though: and a bias towards optimism or pessimism is a real danger, both in the public presentation of science, and in the beliefs of scientists themselves.”

In addition to his academic research, which focuses on the evolutionary and genetic aspects of biology, Jones is also a familiar guest in BBC programming. He has made more than 200 appearances on BBC radio and has also been a guest on a number of BBC current affairs programmes including Question Time and Newsnight.

“He was selected on the basis of his academic credentials, of his knowledge of the media and his reputation amongst the scientific community,” says the BBC Trust in a statement.

Locks and keys build tiny structures

Researchers in the US have invented a “lock and key” technique that causes small particles to assemble themselves into a variety of tiny structures. The method could offer a simple way to create technologically useful materials on the micrometre and nanometre length scales.

Particles measuring between 100 nm and 1 µm are excellent building blocks for making optoelectronic devices. Such particles are about the same size as the wavelengths in the visible part of the electromagnetic spectrum and so interact strongly with light.

One promising way of creating devices is to disperse the particles in a liquid to form a colloid that can then solidify to create a colloidal crystal. The optical properties of such crystals can be tuned by changing the spacing between particles. Devices are usually created by exploiting the ability of some particles to self-assemble into specific structures. Surface chemistry can be used to control the shapes of these structures by coating the particles with molecules, such as DNA strands, that bind to each other.
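
To see roughly how that tuning works, Bragg's law at normal incidence, λ = 2n_eff d, sets the wavelength a colloidal crystal reflects. The sketch below is a generic estimate, not a calculation from this work; the effective index and spacings are assumed round numbers:

```python
# Rough illustration (not from the article): Bragg's law at normal
# incidence, lambda = 2 * n_eff * d, gives the reflected "stop-band"
# wavelength, so changing the particle spacing d tunes the crystal's colour.
n_eff = 1.4                    # assumed effective refractive index of the colloid
for d_nm in (150, 200, 250):   # assumed lattice-plane spacings, in nm
    print(f"d = {d_nm} nm  ->  reflected wavelength ~ {2 * n_eff * d_nm:.0f} nm")
```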

Shaping a few basic building blocks

Now, Stefano Sacanna and colleagues at New York University have invented a new control method that does not depend on the surface chemistry of particles, but only on their shapes. As a result, the process avoids the many problems associated with coating particles with molecules or treating their surfaces. “Ideally, you could design a cluster with precise geometry and well-defined physical and chemical properties by shaping a few basic building blocks and letting them self-assemble with this simple ‘lock and key’ mechanism,” says Sacanna.

The technique employs two particles with complementary features – for example a spherical cavity on one particle (the lock) and a matching spherical protrusion on the other (the key). The particles are pushed together by the “depletion interaction”, which involves the addition of a third type of particle that is much smaller than either the lock or key. When a lock and key come very close together, the small particles can no longer fit between them. Because there are no small particles between the large particles to push them apart, the particles start to close in on each other as if they are attracted by a short-range force.

The strength of the depletion interaction depends on how good the fit is between lock and key – if the fit is poor, some of the small particles can get into the gaps and push the pair apart. As a result, the particles tend to form structures in which locks and keys are stuck together (see figure).
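
A minimal way to quantify this is the classic Asakura–Oosawa estimate for two plain spheres – the simplest geometry, not the paper's lock-and-key shapes, where the matching surfaces make the overlap (and hence the attraction) much larger. The energy gain is the depletant osmotic pressure times the overlap volume of the two exclusion shells; all parameter values below are illustrative assumptions:

```python
# Asakura-Oosawa sketch of the depletion attraction (illustrative only;
# the published work uses lock-and-key geometries, not two plain spheres).
# When the exclusion shells (radius R + r) of two large spheres overlap,
# depletants are expelled from the gap and the energy drops by
# (depletant number density) * kT * (overlap volume).
import numpy as np

kT = 1.0     # measure energies in units of kT
R = 1.0      # large-sphere radius, microns (assumed)
r = 0.05     # depletant radius, microns (assumed)
n = 200.0    # depletant number density, per cubic micron (assumed)

def depletion_energy(h):
    """Depletion energy (in kT) at surface-to-surface separation h (microns)."""
    if h >= 2.0 * r:
        return 0.0                       # exclusion shells no longer overlap
    a = R + r                            # exclusion-shell radius
    d = 2.0 * R + h                      # centre-to-centre distance
    # lens-shaped overlap volume of two spheres of radius a at distance d
    v_overlap = (np.pi / 12.0) * (4.0 * a + d) * (2.0 * a - d) ** 2
    return -n * kT * v_overlap

for h in (0.0, 0.025, 0.05, 0.1):
    print(f"h = {h:5.3f} um  ->  U = {depletion_energy(h):6.2f} kT")
```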

Reversible process

Because the technique does not depend on controlling the surface chemistry of the particles, it allows more freedom in designing and assembling functional clusters, claims Sacanna. The absence of chemical bonds also means that assembly is reversible: clusters can be taken apart by simply changing the temperature.

Another important benefit of the technique is that lock-and-key bonds are much more flexible than chemical bonds. “This novel and essential feature can be used to create, for example, mobile parts in micromachinery,” Sacanna tells physicsworld.com.

Several locks may bind to a single key (see figure), which means that a structure can have a number of bonds. It may also be possible to create more than one “pocket” on a lock particle. According to Michael Solomon of the University of Michigan, multiple pockets “would introduce the colloidal equivalent of extended co-ordination complexes – 2D and 3D molecular arrays that self-assemble in fixed geometries from atom and ligand molecules”. “The assembly of colloidal particles might allow access to desirable, but so far elusive, complex colloidal structures,” he adds.

Sacanna and colleagues now plan to make “smart” particles and push their self-assembly to the limit where well-defined structured clusters of particles can self-replicate.

The work is described in Nature 464 575.

More on 'Mpemba's physics'

A more sophisticated approach is needed

By Hamish Johnston

When I was a boy I remember hearing that a tray of hot water in a freezer sometimes freezes before a tray of cold water. Being an accepting child rather than a sceptical scientist, I had no problem embracing this as another example of the weirdness of nature.

Called the Mpemba effect, the curiosity was spotted in 1963 by Erasto Mpemba who was making ice cream for a school project. The Tanzanian teenager was in a hurry and put his boiled milk into the freezer before it cooled to room temperature and found that it froze before his classmate’s cooler samples.

When young Erasto asked a teacher how Newton’s law of cooling could be reconciled with his observations he was famously told, “All I can say is that is Mpemba’s physics and not the universal physics.”
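
The teacher's objection is easy to make quantitative. Under Newton's law of cooling alone, dT/dt = −k(T − T_env), the hotter sample must pass through every temperature the cooler one started from, so it can never freeze first if the two are otherwise identical. A minimal sketch, with assumed parameter values:

```python
# Minimal sketch (assumed parameters): Newton's law of cooling,
#   T(t) = T_env + (T0 - T_env) * exp(-k * t),
# has the hot sample always lagging the cool one when both share the
# same rate constant k - which is exactly why Mpemba's observation
# was such a puzzle.
import numpy as np

k = 0.05                    # cooling rate constant, per minute (assumed)
T_env = -18.0               # freezer temperature, degrees C (assumed)
t = np.linspace(0, 90, 4)   # times in minutes

for T0 in (90.0, 30.0):     # hot and cool starting temperatures
    T = T_env + (T0 - T_env) * np.exp(-k * t)
    print(f"T0 = {T0:4.0f} C:", np.round(T, 1))
```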

Undeterred, Mpemba carried out further experiments, eventually writing a paper with a physicist at the University of Dar es Salaam. The paper was published in 1969 – the same year that Canadian physicist George Kell published an independent observation of the effect.

It turns out that Mpemba was not the first to ponder this phenomenon. The idea has been around for more than 2000 years and the 1969 papers led to a flurry of claims that the effect can be observed in everything from food preparation to frozen pipes.

The papers also spurred a debate among physicists about whether the effect is real or merely the result of bad experimental technique. Indeed, it turns out that it is incredibly difficult to define and control all the variables involved in deciding which sample freezes first.

Some physicists believe that the effect is related to supercooling – when liquid water is cooled it can remain liquid at temperatures well below zero Celsius. It could be that the smaller temperature difference between freezer and cold water encourages more supercooling, lowering the temperature at which the sample freezes.

However, some experiments suggest the opposite – that it is easier to supercool hot water!

Now, James Brownridge at the State University of New York at Binghamton has done a series of nine carefully controlled experiments to understand the conditions under which the Mpemba effect occurs.

His conclusion: “Hot water will freeze before cooler water only when the cooler water supercools, and then, only if the nucleation temperature of the cooler water is several degrees cooler than that of the hot water.” The nucleation temperature is the point at which supercooled liquid actually freezes.

He created these conditions in his lab using sealed vials of water immersed in a freezing salt solution. He put the samples through 28 freeze–thaw cycles and found that the hot water froze before the cold every time.

What I don’t understand about the experiment is that Brownridge seems to have chosen his samples based on the knowledge that there is a 5.5 degree difference between their nucleation temperatures. Or, in other words, the samples are not identical.

So does this mean that the Mpemba effect doesn’t occur with identical samples, but only with samples that differ in a specific way?

You can read more about the Mpemba effect in this Physics World article: Does hot water freeze first?

Friction in the quantum community

At loggerheads: John Pendry (left) and Ulf Leonhardt (right)

By James Dacey

Everybody loves a good, strong disagreement between two academics at the top of their game, especially when their positions are polar opposites. Two recent papers, by John Pendry of Imperial College London and Ulf Leonhardt of the University of St Andrews in Scotland, draw our attention to one such fracas that is really starting to heat up.

The issue in question is quantum friction – does it exist? In a nutshell: Pendry says, absolutely yes; while Leonhardt says, not a chance.

If such a force does prove to exist then, as well as crowning a winner in this debate, it could be of great interest to engineers trying to improve the performance of ultra-small mechanical devices.

Let me give you a brief history of the issue…

Over the past few years, Pendry and a number of others have advocated the existence of quantum friction, by building on the pioneering work of the Dutch physicist Hendrik Casimir.

In the mid 20th century, Casimir worked out that two flat surfaces placed in a vacuum should be attracted to one another. This force arises from the fact that, according to quantum mechanics, the energy of an electromagnetic field in a vacuum is not zero but continuously fluctuates around a certain mean value, known as the “zero-point energy”. Casimir showed that the resulting radiation pressure outside the plates will tend to be slightly greater than that between the plates and therefore the plates will be forced together.
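
For a sense of scale, the textbook result for two ideal, perfectly conducting plates – a standard formula, not a calculation from either disputed paper – gives an attractive pressure P = −π²ħc/(240d⁴):

```python
# Ideal Casimir pressure between perfectly conducting parallel plates
# a distance d apart: P = -pi^2 * hbar * c / (240 * d^4).
# (Standard textbook result, quoted here only for a sense of scale.)
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure in pascals for plate separation d in metres."""
    return -math.pi**2 * hbar * c / (240.0 * d**4)

for d in (1e-6, 1e-7):   # separations of 1 micron and 100 nm
    print(f"d = {d:.0e} m  ->  P = {casimir_pressure(d):.2e} Pa")
```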

Pendry and others became interested in the situation where the first surface moves relative to the second, claiming that friction should exist between the two. Pendry, who holds the chair in theoretical solid-state physics at Imperial College, develops the idea in a new paper in the New Journal of Physics. He argues that fluctuations in the moving surface appear Doppler-shifted when viewed from the second surface. This Doppler shift destroys the balance between fluctuations, as there are now more of them travelling against the relative surface motion than there are heading in the opposite direction. This, he believes, leads to a net frictional force.

This argument, however, is strongly rejected by Leonhardt, who is chair in theoretical physics at the University of St Andrews. In a comment paper submitted to the arXiv preprint server, Leonhardt claims that Pendry has described quantum friction “qualitatively”, but not quantitatively. Leonhardt argues that, “there is no experimental evidence for or against this effect, no facts”. He criticizes Pendry’s idea of quantum friction claiming that “one could apply the same effect to extract an unlimited amount of useful energy from the quantum vacuum”.

Leonhardt contrasts Pendry’s academic efforts with his own approach to this topic, referring to a paper he co-authored last year. In this paper – which develops the earlier work of the Soviet physicist Evgeny Lifshitz – Leonhardt carries out an “exact calculation” for a particular configuration of plates, which shows the quantum friction to be precisely zero.

Leonhardt’s calculation is scrutinized in Pendry’s latest paper, and the Imperial researcher is less than impressed by it. Pendry says that Leonhardt has essentially moved the goalposts on the problem. “[Leonhardt’s team] claim that a moving surface can be replaced by a stationary one that is bianisotropic,” he told me. “Of course, this leads to zero net friction in their theory.”

So, as you can see, the issues that still need to be resolved include what exactly constitutes a moving surface, and what conditions could trigger what Pendry refers to as a “Doppler-induced imbalance”.

For now though, the argument rages on.

UK launches space agency

The UK’s science minister Paul Drayson and Peter Mandelson, secretary of state for business, innovation and skills, have today launched the UK Space Agency (UKSA). The new body will take over responsibility for the UK’s space policy and bring “key” space budgets under a single management structure.

The agency, which comes into effect on 1 April, will manage around £250m in contracts per year, including the UK’s contributions to major EU projects such as the €3.4bn Galileo global-positioning system and Global Monitoring for Environment and Security – currently funded by the departments for transport and for environment, food and rural affairs, respectively.

The UK spends about £300m per year on civil space research. A large fraction of this goes on the country’s membership of the European Space Agency (ESA), which in 2009 was around £240m, and on its membership of the European Organisation for the Exploitation of Meteorological Satellites, which launches and maintains Earth observation satellites and is currently funded by the UK’s Meteorological Office.

“The action we are taking today shows that we’re really serious about space,” says Drayson. “Britain’s space industry has defied the recession. It can grow to £40bn a year and create 100,000 jobs in 20 years. The government’s commitments on space will help the sector go from strength to strength.”

Innovation centre

At the launch in London today, Drayson and Mandelson also announced a £40m International Space Innovation Centre (ISIC) that will be based at Harwell in Oxfordshire next to the new ESA technical facility, which opened last July.

Funded through public and private money, ISIC is expected to “exploit the data generated by Earth-observation satellites,” as well as using space data to “understand and counter climate change” and advise on the “security and resilience of space systems and services”.

“ISIC will bring together industry and academic research,” says Richard Holdaway, the director of space science and technology for the UK’s Science and Technology Facilities Council (STFC) who first proposed that the UK should build such a centre five years ago. “It will help us to exploit what is a multi-billion pound industry.”

Drayson had already indicated earlier this month, when he revealed details of his review of the STFC, that the space agency would manage the ESA subscription fee. The review called for the change as a way of protecting the council’s grant programme against rises and falls in the value of the pound against the euro.

However, Keith Mason, STFC’s chief executive, says that the STFC will “continue to play a major role in space research, as a partner of the agency, through our direct involvement in supporting space science and technology, and as the lead in the two National Science and Innovation Campuses at Daresbury and at Harwell, which includes the first ESA centre in the UK”.

The UKSA will have its headquarters in Swindon, UK, and David Williams, director general of the British National Space Centre, will lead the agency until a chief executive is appointed within the next six months.

LHC physics programme set to launch 30 March

CERN has announced that its Large Hadron Collider (LHC) will attempt the first collisions at 7 TeV on Tuesday 30 March – one week today.

Smashing together protons at this energy will set another benchmark for the highest energy yet achieved in a particle accelerator. More significantly, it will mark the beginning of the LHC physics programme, which will test and scrutinize the Standard Model of particle physics.

The announcement follows Friday’s news that two 3.5 TeV beams had been successfully circulated around the 27 km circumference of the LHC.

Playing down the hype

Despite the excitement at being so close to such a key milestone, CERN is careful to emphasize the uncertainties ahead. “The LHC is not a turnkey machine,” said CERN director general Rolf-Dieter Heuer. “The machine is working well, but we’re still very much in a commissioning phase and we have to recognize that the first attempt to collide is precisely that. It may take hours or even days to get collisions.”

This is a sentiment echoed by CERN’s director for accelerators and technology, Steve Myers. “Just lining the beams up is a challenge in itself: it’s a bit like firing needles across the Atlantic and getting them to collide halfway.”

Once 7 TeV collisions have been established, CERN’s plan is to run continuously for a period of 18–24 months, with a short technical stop at the end of 2010. Experiments will run throughout this time, with researchers expecting to accumulate one “inverse femtobarn” of data – roughly 10 trillion proton–proton collisions.
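
That conversion is just the integrated luminosity multiplied by a cross-section, N = L_int × σ. Here is a back-of-envelope sketch – our arithmetic, with an assumed round-number cross-section rather than an official CERN figure:

```python
# Back-of-envelope conversion from integrated luminosity to a collision
# count, N = L_int * sigma. The cross-section is an assumed round number
# for illustration; the count scales linearly with whatever value is used.
FB = 1e-39   # 1 femtobarn in cm^2
MB = 1e-27   # 1 millibarn in cm^2

L_int = 1.0 / FB      # one inverse femtobarn, in cm^-2
sigma = 10.0 * MB     # assumed effective pp cross-section (~10 mb)

print(f"N = {L_int * sigma:.0e} collisions")   # ~1e13, i.e. about 10 trillion
```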

The broad aim of this research is to characterize the subatomic particles that emerge from these high-energy collisions, with perhaps the ultimate goal of confirming the existence (or non-existence) of the Higgs boson. This elusive particle is the last missing piece of the Standard Model of particle physics, and its discovery would confirm the most compelling explanation physicists have for how elementary particles acquire mass.

In pursuit of the Higgs

Although the Standard Model does not predict the mass of the Higgs boson, precision measurements of known Standard Model particles mean that its mass is unlikely to be more than 186 GeV. Meanwhile, direct searches made at CERN’s Large Electron–Positron Collider – the forerunner to the LHC – rule out a Higgs that is lighter than 114 GeV.

Running at 7 TeV, the LHC should have the best chance yet of constraining the Higgs mass, being 3.5 times as energetic as the Tevatron collider at Fermilab in the US, which until December was the world’s most energetic collider. What can be discovered, however, will depend largely on how heavy any new particles are and on how easy they are to spot among “background” processes taking place in the proton–proton collisions.

Following this current run, CERN plans to shut down the LHC in 2012 for a year or more to prepare it to go straight to maximum-energy 14 TeV collisions in 2013.

Spiders’ super-strong silk relies on its crystals

If you are a small insect and find yourself staring into the eight eyes of a hungry spider as it wraps you in its silk, unfortunately, your number really is up. This stringy substance may seem a bit flimsy but the more you struggle, the longer it gets, and it will not snap – its tensile strength exceeds that of high-grade steel. Now a group of researchers in the US and Korea has explained, numerically, how spider silk combines strength with extreme ductility to deadly effect.

A spider’s silk is made from basic proteins, including some that form thin, planar crystals called beta sheets. These sheets are connected to each other by hydrogen bonds, which are among the weakest types of chemical bond – far weaker, for example, than the covalent bonds found in most organic molecules. However, by stacking multiple beta sheets, a spider’s silk manages to fail gracefully, with hydrogen bonds breaking one by one under external force.

Markus Buehler and his colleagues at the Massachusetts Institute of Technology, working with researchers at Pohang University of Science and Technology, have studied this silk failure in the finest detail to date. Using a series of computer simulations, they found that the strength of spider silk relies on the beta-sheet crystals staying below a critical size of around 3 nm. Once the crystals are allowed to grow beyond 5 nm, however, the silk suddenly becomes weak and brittle.

“This way of failing has clear advantages to the spider,” Buehler tells physicsworld.com. “Simple hydrogen bonds make it much easier for a spider to repair damage to its web. A covalent bond, like glass, would break catastrophically.”

Buehler says that these findings could be used to develop innovative applications, such as tough coatings for cars and non-poisonous surgical equipment. He also envisages the emergence of new materials that could possess the strength and flexibility of spider silk while being made from inherently stronger molecules, such as carbon nanotubes.

“The paper shows improved modelling and some more explanations in terms of shear deformation and involved hydrogen bonds being responsible for the mechanical behaviour,” says Mato Knez at the Max Planck Institute of Microstructure Physics, who was not involved in this research. Knez feels, however, that the work is just another step in a developing research field. “It gives some sort of improvement to the understanding of the model system, which was already present to a certain extent.”

This research is published in Nature Materials.

3D invisibility cloak unveiled

The first device to hide an object in three dimensions has been unveiled by a group of physicists in the UK and Germany. While the design only cloaks micro-scale objects from near-infrared wavelengths, the researchers claim that there is nothing in principle to prevent their design from being scaled up to hide much larger artefacts from visible light.

The origins of this design date back to 2006, when David Smith and colleagues at Duke University in North Carolina created a cloak that could bend microwaves around an object, like water flowing around a smooth stone. This early cloak was made using a metamaterial – an artificially constructed material with unusual electromagnetic or other properties – which consisted of a cylinder built up from concentric rings of copper split-ring resonators. This first cloak, however, only worked in two dimensions – in other words, looking at the cylinder from above revealed the presence of the shielded object.

Carpet cloak

Now Tolga Ergin and colleagues at Karlsruhe Institute of Technology in Germany, together with John Pendry of Imperial College in London, have overcome this problem by creating a “carpet cloak”. Proposed in 2008 by Pendry and Jensen Li, this involves hiding an object underneath a bump on the surface of an otherwise smooth material – just as something might be hidden under a carpet – and then smoothing out the resulting bump. This is achieved by creating a bump on a flat mirror and then placing onto the mirror a layer of metamaterial with optical properties such that light appears to reflect off the mirror as if the bump were not there.

This technique was demonstrated experimentally at two different wavelengths last year, with Smith’s group showing that it worked in the microwave region, while researchers at Berkeley and at Cornell University in New York state obtained similar results at infrared wavelengths. However, these cloaks were also limited to just two dimensions.

Ergin’s group has made a carpet cloak in three dimensions by stacking nanofabricated silicon wafers on top of one another in a “woodpile” matrix and then filling in the gaps between the wafers with varying amounts of polymer. This achieves the desired distribution of refractive indices within the structure.

Hiding the bump

The cloak structure was then placed on top of a reflective gold surface containing a bump, leading to a cloaking effect using unpolarized light with wavelengths between 1.4 and 2.7 µm – the near-infrared. Importantly, this effect held for viewing angles up to 60 degrees (with zero degrees representing viewing in just two dimensions).

The bump, however, was very small – just 30 µm × 10 µm × 1 µm (1 µm = 10⁻⁶ m). Team member Martin Wegener says it should be possible to use existing technology to make the cloak bigger in order to hide larger objects, but that this approach would be extremely time-consuming. “Faster nanofabrication tools will have to be developed allowing for three-dimensional structures,” he adds.

For Wegener the aim of the work is not about focusing all efforts on creating invisibility cloaks, but is about exploring a range of applications in transformation optics. This involves calculating what kind of material is needed to bend light in a certain way, by considering light trajectories as the result of the warping of space. Wegener says that transformation optics should lead, for example, to the design of better antennas or smaller optical resonators.
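
The recipe itself is compact: for a coordinate map with Jacobian J, the required material tensors are ε′ = JεJᵀ/det J (and likewise for µ). Below is a minimal sketch using an assumed carpet-style map that lifts the flat line y = 0 onto a bump h(x) – an illustration of the general method, not the actual design published by the group:

```python
# Minimal transformation-optics sketch (illustrative, not the published
# design): for a coordinate map x -> x'(x) with Jacobian J, a material
# with eps' = J eps J^T / det(J) (and the same rule for the permeability)
# reproduces the warped light paths.
import numpy as np

H = 1.0                                    # height of the transformed region (assumed)
h = lambda x: 0.2 * np.exp(-x**2 / 0.1)    # assumed bump profile
dh = lambda x: (-2.0 * x / 0.1) * h(x)     # its derivative

def transformed_eps(x, y, eps=np.eye(2)):
    """Permittivity tensor needed at (x, y) for the carpet-style map
    x' = x,  y' = y + h(x) * (1 - y / H),  valid for 0 <= y <= H."""
    J = np.array([[1.0,                   0.0],
                  [dh(x) * (1.0 - y / H), 1.0 - h(x) / H]])
    return J @ eps @ J.T / np.linalg.det(J)

print(np.round(transformed_eps(0.1, 0.2), 3))   # anisotropic tensor near the bump
```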

Smith describes the latest work as “very exciting” and agrees that its real importance lies in the development of transformation optics. “Demonstrations like these are paving the way for transformation optical design to become an established design methodology, like ray-tracing,” he says.

The research is published in Science.
