
‘Super-twisted’ light swirls into view

Researchers at the University of Glasgow in the UK are the first to have created “super-twisted light” in the lab. The light is so-called because it has a high degree of circular polarization and could be used to detect minute quantities of biological molecules in solution. Indeed, super-twisted light could help scientists study the proteins responsible for neurodegenerative diseases such as Alzheimer’s or Parkinson’s.

Most biological molecules have a certain chirality (right- or left-handedness) and this intrinsic property can be used to detect biomolecules in “chiroptical” spectroscopic techniques such as circular dichroism, optical rotatory dispersion and Raman optical activity. Here, scientists typically measure the small differences in how left- and right-circularly polarized light interacts with a chiral sample. In circularly polarized light, the electric field vector rotates around the direction of propagation, creating a right- or left-handed helix.

Although widely employed, these techniques are not all that sensitive because chiroptical effects are inherently weak. As a result, the techniques can only be used to study samples containing relatively high concentrations of target molecules. Recently, researchers put forward the idea of using “super-twisted” light in such techniques to increase sensitivity. This light has a greater level of chirality than that of ordinary circularly polarized light because it is twisted much “tighter”. However, the problem was that, until now, no-one knew how to create it in the lab.

Gammadions of gold

Malcolm Kadodwala and colleagues produced their super-twisted light by shining ordinary light through a specially designed metamaterial made up of chiral gold nanoparticles. The metamaterial comprises left- or right-handed gold gammadions 400 nm long and 100 nm thick deposited on a glass substrate and arranged in a square lattice with a periodicity of 800 nm.

The light produced allowed the team to detect certain proteins at picogram levels, a sensitivity a million times greater than that possible with current chiroptical techniques. “We are very excited about the research,” says Kadodwala. “This light, which does not occur naturally, allows us to detect biological molecules at unprecedented low concentrations.”

The light seems to be particularly effective at detecting amyloid proteins – insoluble molecules that stick together to form plaques. These plaques are thought to be responsible for certain neurodegenerative diseases. “In fact, super-twisted light is highly sensitive to the secondary, or beta, structure of a protein,” Kadodwala told physicsworld.com. “Beta structure is found in the coat proteins of certain viruses and in amyloid fibrils.”

The researchers reported their work in Nature Nanotechnology doi:10.1038/nnano.2010.209.

Galaxy Zoo paper goes supernova

By Hamish Johnston

In 2007 a group of astronomers launched Galaxy Zoo with the aim of harnessing people power to classify galaxies.

The idea was that members of the general public would scan telescope images of galaxies and classify their shapes. Astronomers simply do not have the time to analyse the hundreds of thousands of galaxy images that are gathered robotically, and Galaxy Zoo was a great success.

The Galaxy Zoo team launched several more projects including Galaxy Zoo: The Hunt for Supernovae, which enlists the public in the search for exploding stars.

Now, Galaxy Zoo has published its first scientific paper on supernovae. Nearly 14,000 supernova candidates were classified by more than 2500 individuals within a few hours of data collection.

You can read all about the results here.

LHC sees its first ZZ event

The Large Hadron Collider (LHC) at CERN in Geneva has produced its first pair of Z bosons, according to data released by the Compact Muon Solenoid (CMS) collaboration. Seeing this first pair is an important step in the giant collider’s hunt for the Higgs boson because the generation and analysis of many more such events could provide one of the key signatures of the elusive Higgs.

Believed to provide all particles with mass, the Higgs boson is the last missing piece of the Standard Model of particle physics. The LHC, designed to collide protons into one another at energies of up to 14 TeV, is expected to find the elusive boson – assuming that the Higgs does indeed exist.

Evidence for the Higgs will not come as a single observation. Instead, physicists must accumulate data related to the energy distribution of the particles that the Higgs decays into. One of the cleanest such decay signatures is the transformation of the Higgs into two Z bosons – particles that are one of the carriers of the weak nuclear force. The Z bosons then decay into pairs of heavy charged particles known as muons, which leave an unmistakable footprint in a detector such as CMS.

Layers of particle sensors

Now, the first such event at the LHC has been seen by CMS – one of the collider’s two enormous general-purpose detectors. CMS consists of concentric layers of particle sensors placed inside and around the bore of a 4 T superconducting magnet. Any Z bosons produced by the proton–proton collisions at the centre of the bore are too short-lived to be detected by the surrounding instrumentation. However, the muons last for long enough to travel out from the collision point and traverse all of the detector’s inner sensors. They then travel through a number of gas-filled layers revealing their trajectory via the ionization of this gas. Moving charged particles are bent by a magnetic field such that the curvature of the muons’ paths reveals their momentum.
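The relation between field strength, bending radius and momentum that tracking detectors exploit can be sketched with a back-of-the-envelope calculation. This is a generic charged-particle-tracking formula, not CMS analysis code; the 4 T field is the figure quoted above, while the example momentum is purely illustrative:

```python
# For a singly charged particle in a uniform magnetic field, the transverse
# momentum in GeV/c is approximately 0.3 * B[T] * r[m].
B = 4.0  # CMS solenoid field in tesla, as quoted above

def transverse_momentum(radius_m, field_t=B):
    """Transverse momentum in GeV/c from the bending radius in metres."""
    return 0.3 * field_t * radius_m

# A muon carrying roughly half the Z mass in energy (~45.6 GeV) is bent
# only gently -- its bending radius is far larger than the detector itself:
radius = 45.6 / (0.3 * B)
print(f"bending radius ~ {radius:.1f} m")  # ~38 m
```

The gentle curvature is why high-momentum muons need the large outer gas-filled layers: the longer the measured arc, the better the momentum estimate.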

The CMS data, obtained in the early hours of 24 September, clearly reveal the tracks of four muons (see figure). And the invariant masses of these muons, grouped into two pairs, yield values for the mass of the Z of just over 92 GeV, which is very close to the known Z mass. CMS collaboration member Tommaso Dorigo of the University of Padova in Italy is delighted with the result, describing it on his blog as “as beautiful as they get, or even more so”.
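The reconstruction works by summing each muon pair’s energy–momentum four-vector and taking its invariant mass. A minimal sketch, using illustrative back-to-back kinematics rather than the actual CMS event:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass in GeV of two particles given as (E, px, py, pz) tuples."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back muons, each carrying ~45.6 GeV (muon rest mass neglected):
mu_plus = (45.6, 45.6, 0.0, 0.0)
mu_minus = (45.6, -45.6, 0.0, 0.0)
print(invariant_mass(mu_plus, mu_minus))  # ~91.2 GeV, the Z mass
```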

No Higgs required

But Dorigo says that this result on its own provides no evidence that the Higgs boson exists. He points out that pairs of Z bosons can be produced directly by the proton collisions and do not require the intermediate creation of the Higgs. Indeed, he says that this is likely to be the reaction that took place in this case. Showing that the Higgs exists will involve observing many such ZZ pairs and then plotting the distribution of the mass of the pairs. If the pairs are produced in only the direct reaction, then this distribution should be fairly flat; but if the Higgs is involved, then the distribution should instead show a peak at a particular value – the mass of the Higgs.
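The logic of the search – a peak sitting on top of a smooth continuum – can be illustrated with a toy simulation. The Higgs mass, event counts and widths below are entirely made up for illustration and have nothing to do with the real analysis:

```python
import random
random.seed(1)

m_H = 200.0  # hypothetical Higgs mass in GeV, chosen only for illustration

# Directly produced ZZ pairs: a broad, featureless pair-mass continuum
# above the ~182 GeV threshold for two on-shell Zs.
direct = [random.uniform(183.0, 400.0) for _ in range(500)]
# Pairs from Higgs decay: clustered narrowly around m_H.
from_higgs = [random.gauss(m_H, 3.0) for _ in range(100)]

masses = direct + from_higgs
# Compare a 10 GeV window at m_H with an equal-width sideband:
peak_window = sum(195.0 <= m < 205.0 for m in masses)
sideband = sum(295.0 <= m < 305.0 for m in masses)
print(peak_window, sideband)  # the window at m_H holds far more events
```

In the real experiment the Higgs mass is unknown, so physicists must scan the whole distribution for any such excess.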

Predicting how many data are likely to be needed to prove that this peak exists, and therefore how long the machine will have to run for before the Higgs is found, is difficult because the fraction of ZZ events that would result from the decay of the Higgs depends on its mass, which is not known from theory. Above about 180 GeV – the combined mass of two Zs – the Higgs can readily decay into a Z-pair, but at lower masses it would be far more likely to decay into other particles that are not so easy to detect.

“For a given Higgs mass we know how many Z pairs, and therefore how many muon quadruplets, we should produce,” says Dorigo. “But since we don’t know the mass, the fraction of muon events that is due to a Higgs could be lower than a tenth or as high as a few tenths.”

Don’t speculate, accumulate

Dorigo is reluctant to speculate on when he and his colleagues might finally bag the Higgs. But in very round terms, he says that about 100 pairs of Zs are likely to be needed, which, he believes, means about 100 times the amount of collision data collected so far. This would be about five times the amount of data that would be accumulated before the collider is due to be switched off for an upgrade to full energy at the end of 2011, meaning that conclusive evidence of a Higgs decay to ZZ pairs before then is unlikely (although other decay signatures might enable a discovery with fewer data).

However, ATLAS team member Andy Parker of Cambridge University in the UK points out that the Fermilab’s Tevatron accelerator in the US could have its lifespan extended to 2014, which might prompt CERN to delay the upgrade for a year. He says that any decision on whether to extend the current run will depend on how well the accelerator performs next year, but he believes that “this year has gone exceptionally well” and that CERN “could still decide to run in 2012”. Either way, he says, the latest CMS result shows that the LHC’s experiments “now have enough data to begin the Higgs search in earnest”.

The work is described in this presentation from the CMS collaboration.

Honeycomb windows that could harvest the Sun

A materials science breakthrough in the US and Taiwan could lead to a new type of window that can harness the power of the Sun. The newly created transparent material can efficiently capture photons to generate electricity thanks to its honeycomb structure, which blends the properties of a semiconductor polymer with those of a carbon-rich fullerene.

The chosen polymer, P1, is efficient at absorbing photons, a process that creates bound electron–hole states known as excitons. The role of the fullerene – a compound formed when a large number of carbon atoms arrange into ball-shaped molecules – is then to undo this binding by dissociating the electrons and holes. Suitably placed electrodes can then extract the charges to produce photocurrents.

Mircea Cotlet, one of the researchers based at Brookhaven National Laboratory near New York City, told physicsworld.com that the biggest challenge was finding a way to merge the polymer and fullerene into a honeycomb lattice. His team achieved this by creating a flow of micron-sized water droplets across a thin layer of the polymer/fullerene solution. The water droplets then self-assemble into large ordered arrays within the solution. Once the solvent has evaporated, it leaves behind a hexagonal honeycomb pattern over a large area of the polymer, which the researchers observed using scanning probe and electron microscopy.

“Though such honeycomb-patterned thin films have previously been made using conventional polymers like polystyrene, this is the first report of such a material that blends semiconductors and fullerenes to absorb light and efficiently generate charge and charge separation,” says Cotlet.

A window of opportunity

Cotlet is keen to stress that the idea behind the study was to explore the basic science and to develop self-assembly methods that do not require intense laboratory infrastructure. He reveals, however, that his team now intends to develop the work by implementing the honeycomb into devices and carrying out a number of tests. Among the applications that could spring from the work are optical displays and devices, including transparent solar cells.

Another possibility is to incorporate the honeycomb films into windows. As the polymer chains gather mostly at the edges of hexagons, the films would remain mostly transparent with remaining chains spread thinly across the hexagon centres. “Imagine a house with windows made of this kind of material, which, combined with a solar roof, would cut its electricity costs significantly. This is pretty exciting,” says Cotlet.

It is not yet clear how much electricity these windows could generate, but it will not be enough to make a building self-sufficient. “At the end, you have a window in any house, why not get some electricity out of it?” says Cotlet.

The research is described in Chemistry of Materials.

Hubble successor hit by budget setback

NASA boss Charles Bolden has announced sweeping changes to the management of the $5bn James Webb Space Telescope (JWST) after an independent report called for the space agency to tackle budget overruns and delays to what is one of NASA’s flagship missions. The report by the seven-member JWST review panel, chaired by John Casani from NASA’s Jet Propulsion Laboratory, also says that the telescope will require an additional funding boost of $1.5bn if it is to launch by late 2015.

The JWST, which was due to be launched in 2014, is designed to study the formation of stars and galaxies and examine the physical and chemical properties of solar systems. It will do this by using four onboard instruments – consisting of cameras and spectrometers – that are cryogenically cooled to just a few tens of degrees above absolute zero.

JWST is mainly a NASA project with collaboration from the European Space Agency and the Canadian Space Agency. But since its conception in the late 1990s the telescope has increasingly eaten into NASA’s resources, consuming around 40% of the agency’s $1bn astrophysics budget in 2010. The review into the JWST was initiated in June by Democrat senator Barbara Mikulski from Maryland over concerns about schedule delays and cost overruns.

Management changes

In its report, the panel notes that the JWST is in “very good technical shape”, saying that cost increases and schedule delays have been caused by “budgeting and program management, not technical performance”. But in a separate letter sent by Casani to Bolden, Casani suggested that the project’s organization needed to be restructured and called for the way the JWST programme is independently assessed to be improved.

The organizational changes to the JWST’s management, announced by Bolden, will involve assigning a new senior manager at NASA headquarters as well as a programme director who will have both technical and cost staff working alongside them. “No-one is more concerned about the situation we find ourselves in than I am,” Bolden states. “I am disappointed we have not maintained the level of cost control we strive to achieve.”

To enable the telescope to launch by September 2015 at the earliest, the panel has called for an extra $1.5bn for JWST’s budget, requiring $250m to be added in 2011 and 2012 and the remainder at a later date. The report notes that the additional funding must go “hand-in-hand” with management changes.

The JWST will have a 6.5 m diameter mirror, consisting of 18 hexagonal folding mirror segments that are each 1.3 m in diameter, giving the telescope a total collecting area of 24 m2. The telescope will operate for around 10 years in an orbit 1.5 million km away from Earth, at a point in space called Lagrange Point 2.

Nanoribbons make good memories

A new memory cell made of an extremely narrow graphene “nanoribbon” has been unveiled by researchers in Germany, Switzerland and Italy. An important benefit of the new cell is that it can be made much smaller than a conventional silicon cell, resulting in memory chips with much greater storage density than silicon-based devices.

The most important property of a memory chip is its capacity, that is, the amount of information it can store. To satisfy the demand for more powerful computers, the capacity of memory chips per chip area (also known as their storage density) has been increasing exponentially over the last 20 years – so obeying Moore’s law.

Storage density ultimately depends on the size of a unit memory cell (the smaller the better), which stores one bit of information, 0 or 1. Preserving Moore’s law will thus depend on how good scientists are at fabricating ever smaller unit memory cells. Graphene – a 2D sheet of carbon just one atom thick – is a promising material in this respect thanks to its exceptional electronic and mechanical properties, which could allow devices smaller than 10 nm to be made – around the size at which silicon-based devices reach their limits.
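For context, the exponential scaling mentioned above can be made concrete with a toy compound-growth calculation. The two-year doubling period is an illustrative assumption, not a figure from the research:

```python
def relative_density(years, doubling_period_years=2.0):
    """Storage density relative to today, assuming a fixed doubling period."""
    return 2.0 ** (years / doubling_period_years)

print(relative_density(20))  # 2**10 = 1024: roughly a thousandfold in 20 years
```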

Reaching the 10 nm scale

Roman Sordan of the Politecnico di Milano and colleagues in Stuttgart and Lausanne have now reached the 10 nm scale by making a memory cell based on graphene nanoribbons – the form of graphene that has the smallest possible area. “Indeed, the area of our new memory cell is so small that it allows for a very high storage density,” Sordan said. “We thus expect that graphene nanoribbon memory chips will allow Moore’s law to continue for the foreseeable future.”

The team fabricated graphene nanoribbons by depositing V2O5 nanofibres atop graphene and then etching the sample using an argon ion beam. The ion beam removes any graphene not protected by the nanofibres. This simple method forms graphene nanoribbons underneath the nanofibres, which are subsequently removed.

Very narrow nanoribbons

The advantage of using nanofibres as etching masks is that the technique can produce very narrow nanoribbons that are less than 20 nm wide. The ribbons also have smoother edges than those made by standard lithography. Rough edges usually degrade the characteristics of a device.

Another advantage of using V2O5 fibres is that they can easily be removed once the ribbons have been patterned – you just need to flush the sample with water, which is a simple and environmentally friendly process, said Sordan.

The researchers found that gate voltage pulses of opposite signs can switch the device between digital on (bit 1) and off (bit 0) states. After the device flips, it remains in the new state even after the gate voltage is reset – that is, it can “remember” its state. “This memory effect probably originates from charges surrounding the nanoribbons, which are trapped by water molecules adsorbed on the SiO2 substrate on which the devices were made,” explained Sordan. “The nanoribbons possess a memory effect most likely due to a simple mechanism by which water vapour from the air attaches to the hydrophilic substrate and then traps charges in the vicinity of the nanoribbons.”

Small and very fast

The device has a transition time (the time a device needs to flip its memory state once triggered) that is three orders of magnitude shorter than that of previously reported memory devices made from either graphene or carbon nanotubes. The transition time is directly related to the highest frequency at which the device can be clocked. This means that the shorter the transition, the higher the clock rate – an important point because memory devices not only need to be small but also very fast, says Sordan.

“Our memory cells are also very robust and versatile,” he added. “They can be used as both static random access memories and nonvolatile flash memory cells for ultrahigh storage density applications.”

The researchers, who have published their work in Small, will now try to develop digital logic gates based on graphene nanoribbons – the other important class of devices needed to realize all-graphene computers. “We have already made graphene logic gates but think that those made from nanoribbons will be better.”

LHC now fully fledged heavy metal collider

In the early hours of Sunday morning, the first collisions between lead ions were recorded at the Large Hadron Collider (LHC) at CERN. The complete transition from protons to lead took just four days, after the final proton beams of 2010 were extracted from the LHC last Thursday.

“The speed of the transition to lead ions is a sign of the maturity of the LHC,” says Rolf-Dieter Heuer, CERN’s director-general. “The machine is running like clockwork after just a few months of routine operation.”

The development marks the beginning of the main physics programme for the ALICE experiment, which has been designed specifically for heavy-ion collisions and is seeking to recreate the conditions that existed just 10⁻¹¹ s after the Big Bang. At this time, the energy in the universe was so concentrated that protons and neutrons could not hold together – instead, space began to be filled with a dense soup of subatomic particles known as quark–gluon plasma.

Strong but mysterious

One of ALICE’s main scientific goals is to characterize the quark–gluon plasma in an attempt to find out more about the nature of the strong force, one of the four fundamental forces in nature. Despite being responsible for generating 98% of the mass of atoms, the strong force is still the most poorly understood of the forces.

To do this, the detector was specifically designed to track large numbers of particles: it can detect the up to 15,000 particles per event that may be produced in the collisions between lead nuclei occurring at the centre of the detector.

These images show the first collisions, recorded yesterday by ALICE’s innermost detector, the Inner Tracking System. The shaded structures represent a perspective view of the detector elements, and the lines are the reconstructed particle trajectories, with the colour scale indicating the energy of the particles. As expected, such collisions produce an unprecedented number of particles, reaching 2500–3000 charged particles per collision.

Mini Big Bangs

“We are thrilled with the achievement!” says David Evans, leader of the UK team at the ALICE experiment. “The collisions generated mini Big Bangs and the highest temperatures and densities ever achieved in an experiment.”

Lead ions within the LHC are colliding with a centre-of-mass energy of 2.76 TeV per colliding nucleon pair, which generates temperatures in the region of 10 trillion degrees. The temperatures and densities are an order of magnitude larger than the previous record held by the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory in the US.
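The link between collision energy and temperature comes from the Boltzmann constant: 10 trillion kelvin corresponds to characteristic particle energies of nearly a GeV. A quick conversion, purely illustrative rather than from the ALICE analysis:

```python
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_gev(temperature_k):
    """Characteristic thermal energy k_B * T, in GeV."""
    return K_B_EV_PER_K * temperature_k / 1e9

print(thermal_energy_gev(1e13))  # ~0.86 GeV, comparable to the proton rest mass
```

That the thermal energy per particle rivals the proton’s rest-mass energy is exactly the regime in which protons and neutrons melt into quark–gluon plasma.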

CERN engineers will now spend up to a week tuning the beamlines in preparation for the scientific programme. Evans and his fellow researchers will then record data until 6 December when CERN will shut down for maintenance work over Christmas. Operation of the collider will start again with protons in February and physics runs will continue through 2011.

Listening to the sounds of slippery slopes

Transport networks and vulnerable communities at the bottom of slopes could benefit from a landslide early warning system that monitors noise levels within soil. Claimed to be the first of its kind in the world, the system “listens” to the soil to establish whether a large-scale slip is imminent so that protective measures can be taken.

Worldwide, many thousands of people die each year as a result of slope failures and many others are left homeless without access to basic supplies. In more developed countries, the risk to people’s lives tends to be smaller but the impact on the built environment costs billions of dollars to repair each year, and it can be highly disruptive to transport networks. In an attempt to limit this global hazard, the United Nations has drawn up a strategy, which includes a plan to promote the development of early warning systems.

One way to monitor the risk in real-time is to equip a slope with an array of microphones and to “listen” for increased noise levels as soils begin to move. The trouble with this method is that the relatively low frequencies associated with large-scale slope movements can also be produced by a number of other environmental sources – leading to an unacceptable number of false alarms.

But researchers in the UK believe they can offer a more reliable acoustic monitoring system through an interesting spin on the technique. Instead of positioning acoustic sensors directly onto the slope, they place their microphones within steel tubes that are then in-filled with granular material. Then, if slopes do begin to slip, the sounds produced by the jostling granular material are at higher frequencies than those generated by the surrounding soil. The steel tubes act as a waveguide to amplify the sound.

Field trials

To test its device, the team led by Neil Dixon at Loughborough University carried out a series of tests at Hollin Hill, an active landslide in northern England. By comparing its measurements with traditional slope measurements taken using an inclinometer, Dixon’s team says it found a strong correlation between acoustic emission and slope displacement. “Hollin Hill provided us with a controlled environment to test our device; the next stage is to trial this on different types of slope and eventually use it as a means of predicting landslides,” explains Dixon.

In the short term, Dixon’s team is seeking to work with railways, buses and other transport companies, which currently account for more than half of the purchases of landslide monitoring systems in the UK. “At present, most systems require a person to go out into the field to check a monitoring device – we are offering a system that can relay information in real-time, for example a text message,” says Dixon.

If the technique can become established, Dixon hopes that the technology can be transferred to developing countries, particularly to tropical regions where heavy rainfall and earthquake activity can leave slopes vulnerable to failure. Dixon envisages a simplified version of the system where the acoustic microsensors are connected to an audible alarm system rather than an electronic communications network.

The development is welcomed by Ed Phillips, a spokesperson at Practical Action, a charity that promotes technology for development, who points out that people in the developing world are often more vulnerable to landslides. “Vulnerable people are often more exposed to the risk of landslides and other hazards, due to a lack of knowledge or lack of choice – for example building their homes on steep slopes.”

More landslides to come

According to many climate researchers, the frequency and extent of landslides could increase dramatically in certain parts of the world in the coming years. “It is fair to say that a changing climate could include more extreme rainfall events, and those in turn could lead to more landslides,” says Tim Lenton, an earth systems researcher at the University of East Anglia, UK.

This is a view shared by Richard Jardine, a geomechanics researcher at Imperial College London. “It will probably be greatest in degrading permafrost regions. The impact may be greatest in high mountain areas – Alps, Andes, Himalayas etc – or in regions with weak rocks and soils, such as parts of Siberia, Alaska or Canada. And also in warmer areas where we have steep slopes and high rainfall already such as Central America, Brazil, or China,” he says.

This research is part of an ongoing project involving Loughborough University and the British Geological Survey called Assessment of Landslides Using an Acoustic Real-time Monitoring System (ALARMS).

Neutral positronium scatters like a charged particle

Positronium is the atom-like bound state of an electron and its antiparticle, the positron – and it therefore has no net electrical charge. But physicists in the UK are scratching their heads after finding that positronium interacts with matter as if it were a lone electron, with the mass and positive charge of the positron seemingly invisible. This surprising discovery will spur researchers to find an explanation, and may have consequences in fields ranging from medicine to astrophysics.

Positronium is often regarded as the lightest neutral atomic species. Like a normal hydrogen atom it consists of a lone electron bound to a positively charged partner – but with the proton replaced by a positron.
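Because the positron is as light as the electron, positronium’s energy levels follow the hydrogen formula with the reduced mass halved: it is bound half as tightly as hydrogen and is twice the size. A quick check using textbook values:

```python
HYDROGEN_BINDING_EV = 13.606  # hydrogen ground-state binding energy in eV
BOHR_RADIUS_NM = 0.0529       # hydrogen Bohr radius in nanometres

# In hydrogen the reduced mass is ~m_e (the proton is ~1836x heavier);
# in positronium the two constituents are equal, so the reduced mass is m_e/2.
mu_ratio = 0.5

ps_binding_ev = HYDROGEN_BINDING_EV * mu_ratio  # ~6.8 eV
ps_radius_nm = BOHR_RADIUS_NM / mu_ratio        # ~0.106 nm, twice hydrogen's
print(ps_binding_ev, ps_radius_nm)
```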

Positronium is an important entity in various disciplines. In medicine, for example, positrons are used to image chemical-reaction pathways inside living cells, in a technique known as positron emission tomography (PET). Indeed, over 80% of the crucial gamma rays generated in PET scans come from decaying positronium. Meanwhile, in astrophysics, decaying positronium accounts for over 90% of the gamma rays coming from the Milky Way’s centre.

Scarcity of scattering knowledge

Since positronium lives long enough to scatter off other matter before decaying, scientists involved in such disciplines need to understand its scattering properties. Unfortunately, both theory and experiments on these have been hard to come by.

Now, Gaetana Laricchia of University College London and colleagues have recorded the first widespread velocity data for positronium scattering off a variety of atoms and molecules. In their experiment, they use electric and magnetic fields to guide positrons emitted from sodium-22, a radioactive source, to a gas cell. Some of the positrons pick up an electron from the gas, creating a beam of positronium that travels toward a gas target. The researchers used 10 different targets to scatter the positronium, including helium, nitrogen, oxygen and krypton.

The results were not what they expected. Despite positronium being neutral and having twice the mass of an electron, its scattering cross section – a measure of the interaction probability as a function of velocity – always resembled that of a lone electron.

‘Spectator’ particle?

Laszlo Sarkadi, a nuclear physicist at the Hungarian Academy of Sciences who has previously studied positronium scattering, says the discovery will prompt physicists to examine the detailed dynamics of the scattering, which he thinks cannot be approximated, like other collision systems, to a two-body interaction. Nonetheless, he believes the likely solution is that the positron in the positronium is merely behaving as a “spectator” particle. “The different behaviour of the electron [could be] explained by the polarization of the target during the collision,” Sarkadi adds.

Laricchia agrees that the positronium’s electron is somehow dominating the interaction, but says: “The reason is not yet known, and we hope that our work will stimulate further research.”

The research is published today in Science 330 789.

Flexible metamaterial springs to life

Physicists in the UK have made a new kind of flexible material that could wrap around objects to render them invisible. Although unlikely to be of much use when it comes to shielding people and other large objects, it could, say the researchers, nevertheless hide small items and make contact lenses more powerful.

Over the last few years physicists have shown how to hide objects by placing them inside so-called invisibility cloaks. These cloaks are made from metamaterials – artificial, engineered materials that have unusual electromagnetic properties. The first such cloak, a cylinder consisting of copper rings placed in concentric circles, enabled an object placed at its centre to be shielded against microwaves: the radiation flowed around the cloak and continued along its original trajectory as if the cloak were not there. Researchers have since extended this concept to shorter wavelengths by making ever tinier features within the metamaterials, since the features have to be smaller than the wavelength of light used.

All cloaks to date, however, have used metamaterials rooted in hard substrates typically made from glass or silicon. Andrea Di Falco and colleagues at the University of St Andrews in Scotland have instead created a flexible metamaterial, operating at visible wavelengths, which, they say, should give metamaterials a much broader range of practical applications. The inspiration was the kind of flexible electronic circuitry used in laptops to connect the screen to the keyboard, but in this case applied to optical structures.

Introducing Meta-flex

The researchers made their material, which they have dubbed “Meta-flex”, by placing a commercially available polymer known as SU8 onto a silicon substrate. They then deposited a 40 nm-thick gold layer onto the polymer and used electron-beam lithography to carve the desired metamaterial features into the gold, before immersing the structure in a suitable solvent to remove the substrate.

The researchers say they have been able to make pieces of Meta-flex as thin as 4 μm with an area of 40 mm2. They tested the electromagnetic response of two different metamaterial configurations, known as nanoantennas and the “fishnet” lattice, by exposing the material to a source of white light and then analysing the transmitted light using a spectrometer. They found that the absorption peaks matched those obtained with the equivalent rigid metamaterials, showing, they say, that Meta-flex could indeed form the basis of an invisibility cloak.

Using this new material to carry out useful functions will require mounting multiple layers on top of one another. But doing so, cautions Di Falco’s colleague Thomas Krauss, will not lead to invisibility cloaks of the kind that could be used by real-life Harry Potters. Cloaks made from Meta-flex, he says, can be larger than the nanometre-scale features engraved into the material, but the size of these cloaks is limited by the need for the cloak to have a certain thickness. With a given fraction of the incident light absorbed by each layer, a thick cloak would absorb practically all of the light that fell on it. “We should be able to cloak objects on the sub-micron level,” he says. “And as we improve the material we should be able to increase this scale, but I find it hard to think that we will ever be able to cloak large objects”.

Better contact lenses

As an example of the kind of object that could be rendered invisible, Krauss suggests electrical cables integrated into clothing. However, he believes that Meta-flex will find more useful applications. For instance, he says, the material could be used to increase the correcting power of contact lenses, making lenses available to people who previously have had to make do with thick glasses. He points out that the ability to bend light is dictated by the variation in refractive index of the media that the light traverses, so making a metamaterial with a refractive index of close to zero creates a ratio of indices approaching infinity and therefore results in an extremely high corrective power. He says that although this principle has been demonstrated previously, it is the manufacture of a flexible substrate that renders it useful for the manufacture of contact lenses.

John Pendry of Imperial College in London, who has pioneered the use of metamaterials, describes the development of the new material as an “interesting and useful advance in metamaterial technology” but “very, very far from achieving a flexible cloak”. He points out that as the instantaneous shape of a cloak changes, so too do the electrical permittivity and magnetic permeability needed for invisibility. So a flexible cloak, he says, would require that the metamaterial be not only flexible but also one whose electrical and magnetic parameters can be continuously reconfigured.

Krauss acknowledges this point but maintains that as long as the shape of the cloak doesn’t change too much the metamaterial parameters can remain fixed. “For a Harry Potter cloak you would need to do what Pendry is saying, but for the kind of specific applications that we’re suggesting you could design a metamaterial of a certain curvature and leave it at that,” he says.

Meta-flex is described in New Journal of Physics 12 113006.

Copyright © 2026 by IOP Publishing Ltd and individual contributors