
Ultramicroelectrodes deliver reliable neural recording

Implanted cortical microelectrode arrays can be used to record and stimulate neural activity, enabling studies of neural circuit function and treatment of many chronic diseases. The reliability of such arrays, however, is limited by insertion trauma and the foreign body response, which can lead to electrode encapsulation and neuron damage. Minimizing these tissue responses is vital for future development of brain-machine interfaces.

The impact of such tissue reactions depends upon factors including the electrode materials, and the size and geometry of the implanted microelectrodes. With this in mind, researchers from the University of Texas at Dallas and Boston University are developing microelectrode arrays based on amorphous silicon carbide (a-SiC). The arrays contain shanks (the parts that penetrate neural tissue) with a maximum transverse dimension of 10 µm and a cross-sectional area of less than 60 µm² (J. Neural Eng. 15 016007).

“Recording and stimulation of neural activity is greatly influenced by the distance between the electrode site and the target neuron or neuronal network,” explained first author Felix Deku from UT Dallas’ Neural Interfaces Laboratory. “The small shank dimensions of our devices will reduce tissue damage during implantation and minimize the foreign body response, thereby increasing the possibility of our electrodes to be in close proximity to healthy neurons.”

Electrodes for recording neural activity work best with low impedance, while high charge-injection capacity is required for safe stimulation. The tiny shanks in these arrays have electrode sites with extremely small geometric surface areas (20–200 µm²). Consequently, they exhibit higher impedance and lower charge-injection capacity than typical silicon-based microelectrodes.

This small size does, however, confer a unique advantage. With one dimension smaller than 25 µm, the electrode sites meet the criteria to act as ultramicroelectrodes (UMEs). “Ultramicroelectrode behaviour allows for hemispherical migration of counterions to the neural interface and produces a substantial increase in the injectable charge per unit area compared to microelectrodes,” said Deku.

To improve stimulation and recording capabilities further, the team also investigated the use of low-impedance coatings – of titanium nitride (TiN) or sputtered iridium oxide (SIROF) – on the electrode sites.

Electrochemical characterization

The researchers chose a-SiC to create the implantable devices as it is well tolerated in the cortex and highly stable in saline. It is also amenable to standard thin-film fabrication processes. For this study, they used plasma-enhanced chemical vapor deposition to deposit a-SiC films that completely encapsulate the metal interconnects, and created electrode sites by etching openings in the top of the a-SiC.

Deku and colleagues fabricated a-SiC microelectrode arrays with 16 penetrating shanks, each having a cross-sectional area of less than 60 µm². They characterized the electrochemical properties of gold and gold/platinum UMEs, as well as gold UMEs coated with SIROF or TiN. Cyclic voltammetry revealed cathodal charge storage capacities of 35 and 12 mC/cm² for SIROF- and TiN-coated UMEs, respectively, compared with 10 and 2 mC/cm² for uncoated platinum and gold UMEs.

Electrochemical impedance spectroscopy of UMEs

Electrochemical impedance spectroscopy demonstrated that coating gold electrodes with SIROF reduced their average impedance at 1 kHz from 2.86 MΩ to 90.2 kΩ, while a TiN coating reduced the impedance to 31.1 kΩ. Finally, the team investigated the ability of the coated UMEs to deliver charge. The maximum charge-injection capacity for SIROF-coated UMEs was 3.4 mC/cm² at 0.0 V anodic bias and 15.3 mC/cm² at 0.8 V bias; for TiN, these values were 3.2 and 6.2 mC/cm² at 0.0 and 0.8 V, respectively. The researchers point out that these values are notably larger than those reported for similar coatings on larger microelectrodes.
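For a sense of scale, here is a back-of-envelope sketch (our illustration, not a calculation from the paper) converting a charge-injection capacity quoted per unit area into the absolute charge a single site can deliver per pulse; the 100 µm² site area is an assumption matching the in vivo arrays described below.

```python
# Back-of-envelope sketch (illustrative, not from the paper): absolute charge
# deliverable per pulse = charge-injection capacity (per area) x site area.

UM2_PER_CM2 = 1e8  # 1 cm^2 = 1e8 um^2


def charge_per_pulse_nC(cic_mC_per_cm2: float, area_um2: float) -> float:
    """Deliverable charge in nC, given a capacity in mC/cm^2 and an area in um^2."""
    area_cm2 = area_um2 / UM2_PER_CM2
    return cic_mC_per_cm2 * 1e-3 * area_cm2 * 1e9  # mC -> C, then C -> nC


# SIROF-coated UME at 0.8 V anodic bias (15.3 mC/cm^2), assumed 100 um^2 site:
print(charge_per_pulse_nC(15.3, 100.0))  # -> ~15.3 nC per pulse
```

The absolute charge per pulse is tiny simply because the site area is; the high per-area figures are what make stimulation through such small sites feasible at all.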


In vivo experiments

The researchers then evaluated the ability of the a-SiC microelectrode arrays to provide neural recordings in vivo. For intracortical studies, they fabricated arrays with 16 penetrating shanks with one electrode per shank. Each shank was 4–5 µm thick, 9 µm wide and 4 mm long, and terminated in a sharp point.

The researchers implanted a microelectrode array with 100 µm² platinum UMEs in the basal ganglia nucleus of an anesthetized zebra finch. Immediately after implantation, they recorded neuronal spike waveforms demonstrating single-unit spiking activity. The 16 recorded channels showed no strong coupling between contacts.

They also tested the feasibility of recording spontaneous activity in the motor cortex of an anesthetized rat, using a microelectrode array with SIROF-coated UMEs. They observed spontaneous neural activity recorded simultaneously on nine channels. Distinct spiking activity across multiple channels confirmed that the array recorded from spatially distinct neuronal populations. The spike shapes were similar to previously reported cortical single-unit recordings.

The researchers concluded that the a-SiC microelectrodes have potential for decreased tissue damage and reduced foreign body response, as well as the potential for clinical translation. “While our a-SiC UME technology shows promising capabilities in the acute experiments, one of the drawbacks to existing technologies is the lack of chronic stability,” Deku told medicalphysicsweb. “We are therefore now evaluating our devices for chronic stability.”

PET tracer measures demyelination in mice

Demyelination – the loss or damage of the myelin that surrounds and insulates nerves – is the hallmark of the neurological disorder multiple sclerosis (MS). When segments of this protective membrane are damaged, nerve impulses can be disrupted, causing symptoms ranging from tingling and numbness to weakness, pain and paralysis. Quantitative imaging of demyelination would provide critical clinical insight, but currently there is no reliable way to achieve this.

At present, MRI is used to image demyelination, but it is not quantitative and cannot distinguish between demyelination and inflammation, which often coexist in MS. Now, a multi-institutional team in the USA has described early results of a novel minimally invasive method to assess myelin damage using PET (Scientific Reports 8 607).

The PET scans employ a tracer designed to target voltage-gated potassium channels, which are found on demyelinated axons. “In healthy myelinated neurons, potassium channels are usually buried underneath the myelin sheath,” explained study author Brian Popko from the University of Chicago. “When there is loss of myelin, these channels become exposed. They migrate throughout the demyelinated segment and their levels increase.”

These exposed neurons leak intracellular potassium. This leaves them unable to propagate electrical impulses, which causes some of the neurological symptoms seen in MS. “So we developed a PET tracer that can target potassium channels,” Popko said.

The team started with an existing MS drug, 4-aminopyridine, which binds to exposed potassium channels. The drug can partially restore nerve conduction and alleviate neurological symptoms in MS patients. Using mouse models of MS, the researchers showed that the drug accumulated in demyelinated areas of the central nervous system.

They then examined fluorine-containing derivatives of 4-aminopyridine for binding to axonal potassium channels. They found that 3-fluoro-4-aminopyridine (3F4AP) had the desired properties and labelled it with fluorine-18 to enable PET imaging. “We were able to show, in rats, that the tracer accumulated to a higher degree in demyelinated areas than in control areas,” said Popko.

“All existing PET tracers used for imaging demyelination bind to myelin and, consequently, demyelinated lesions show as decreases in signal, which can be problematic for imaging small lesions,” explained first author Pedro Brugarolas, currently at MGH/Harvard Medical School. “3F4AP is the first tracer whose signal increases with demyelination, potentially solving some of the problems of its predecessors.”

Finally, in collaboration with scientists at the National Institutes of Health, the researchers conducted a study in healthy monkeys. They confirmed that radiolabelled 3F4AP entered the brain of primates and localized to areas where there was little myelin.

“We think that this PET approach can provide complementary information to MRI, which can help us follow MS lesions over time,” said Popko. “It has the potential to track responses to remyelinating therapies, an unmet need. This approach should also help determine how much disruption of the myelin sheath contributes to other central nervous system disorders.”

Archaeology may help climate-change adaptation

Ancient cultures were able to farm in regions that are now uncultivated. But what is the potential of using their sophisticated techniques to adapt to today’s climate-change-related aridification? A recent publication by Eva Kaptjin of the Royal Belgian Institute of Natural Sciences investigates.

Archaeological finds show how past societies adapted to variable environmental conditions and which techniques succeeded or failed. The water-management techniques of ancient peoples often worked on a local level. Unlike many modern-day water-management projects that include large dams and the rerouting of river water, these local methods are cheaper, more sustainable and don’t need as much oversight from distant authorities.

Complications arise from the many aspects of human societies that do not leave traces in the archaeological record, for example religion or social organization. Such aspects can have a large impact on human behaviour and don’t always lead to rational responses to external threats. So to implement ancient techniques successfully today, it’s important to understand ancient methods in their sociocultural context.

With that in mind, Kaptjin cites the example of the Mexican province of Tabasco, where the government tried to reinstate an old agricultural technique that used raised fields in swamps and lakes. These Chinampas were favoured by Aztec and Mayan societies across the region. The first modern attempt saw Chinampas built using bulldozers and without the involvement of archaeologists and local farmers. This led to failed harvests and a mismatch between the crops produced and market demands. After strict regulations were dropped, local indigenous peoples were able to build and farm the Chinampas more successfully.

A second example is the Cochas in Peru – natural depressions used by Inca societies to store rain and run-off. A modern-day implementation recruited local people as community messengers to advocate for the practice. These messengers informed local communities about the necessity of adapting to climate change and functioned as contacts locals could turn to when problems arose.

The main lessons learned from this study, which appeared in Wiley Interdisciplinary Reviews: Water, are that modern-day planners need a detailed understanding of the ancient techniques. This includes the geological and geomorphological environment and details of the technical execution, as well as the socio-cultural context in which the method was developed and whether it’s translatable to the modern situation. And the involvement of local communities and inclusion of their environmental knowledge from an early stage is fundamental to success today. But if these lessons are accounted for, the past has great potential for the future.

Neutrons fly left or right depending on size of colliding nuclei

Researchers at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab (BNL) in the US have discovered that when spinning protons collide with nuclei, they produce neutrons that fly off in directions that depend on the size of the target nucleus. The physicists say that this unexpected observation suggests that the neutron-producing mechanism is different for large and small nuclei. They add that this could have important implications for interpreting other high-energy particle collisions, including the interactions of ultra-high-energy cosmic rays with the Earth’s atmosphere.

RHIC has been operating since 2000 and is the only collider in the world with the ability to precisely control the spin polarization of colliding protons. During its first polarized proton run in 2001–2002 researchers discovered that when a proton with upward spin collides with another proton, the neutron produced in the collision prefers to emerge to the right.

Almost a decade later, in 2011, theoretical physicists published a paper explaining this result. But then in 2015 a PhD student at Seoul National University in South Korea and BNL, Minjung Kim, made a surprise discovery. She observed that when protons collided with gold nuclei – which are much larger than protons – they produce a neutron with a strong preference to travel in the other direction: to the left.

Completely unexpected

This change in directional preference was not predicted by the 2011 theory. “What we found from the collisions with gold nuclei in 2015 was not only that the asymmetry magnitude increased by a factor of around three, but also its sign flipped, with the preferred scattering direction changing from the right to the left,” Alexander Bazilevsky, a physicist at BNL and deputy spokesperson for the PHENIX collaboration at RHIC, tells Physics World. “It was completely unexpected.”

To confirm Kim’s finding, PHENIX physicists worked on data analysis and detector simulations, and repeated the measurements under more precisely controlled conditions. These new experiments also included collisions between protons and aluminium nuclei, which sit between protons and gold nuclei in size.

The results confirmed that proton–proton collisions produce a directional asymmetry with more neutrons scattering to the right, while proton–gold collisions produce a stronger asymmetry with neutrons scattering to the left. Collisions with the intermediate-sized aluminium ions produced neutrons with near-zero asymmetry – a roughly equal number scattered in each direction.

Charged collisions

To explain their findings the physicists looked closely at the processes and forces affecting the scattering particles. They concluded that in proton–proton collisions the asymmetry is driven by interactions governed by the strong nuclear force, as described in the 2011 theory. In large nuclei with more positive electric charge, however, electromagnetic interactions play a much more important role in particle production than in collisions between two equally charged protons.

“We believe due to much larger electric charge of gold nucleus compared to proton, the electromagnetic interactions take on a larger role for neutron production in competition with strong nuclear interactions, with generated asymmetry of opposite sign to the one produced by strong nuclear interactions,” explains Bazilevsky. He adds that in medium-sized aluminium nuclei the asymmetries generated by nuclear and electromagnetic interactions cancel each other out, and the result is no directional asymmetry in the resulting neutrons. “As our paper shows, the asymmetry sign flip happens around the aluminium nucleus, hence we would expect all nucleus lighter than aluminium would generate negative asymmetry (right preference), and all heavier nucleus would produce positive asymmetry (left preference),” he explains.
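For reference, the measured quantity here is the transverse single-spin asymmetry; the standard definition in spin physics (our addition, not spelled out in the article) is

```latex
A_N = \frac{1}{P}\,\frac{N_L - N_R}{N_L + N_R}
```

where P is the beam polarization and N_L, N_R are the neutron counts to the left and right of the proton’s spin direction, so the sign of A_N encodes exactly the left–right preference discussed above.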

The study is described in Physical Review Letters.

Optical ‘astrocomb’ could boost searches for Earth-like planets

A new type of laser frequency comb (LFC) has been developed by scientists in Europe. The prototype device could lead to improvements in how scientists search for Earth-like exoplanets, measure the expansion of the Universe and test the fundamental constants of nature.

LFCs produce spectral lines of light with evenly spaced frequencies and have a wide range of applications in metrology and spectroscopy. The new LFC was developed by Tobias Herr of the Swiss Centre for Electronics and Microtechnology, Francesco Pepe of the Geneva Observatory and colleagues. It uses a laser-driven microresonator on a silicon-nitride chip that produces pulses at a repetition rate of 24 GHz for use in calibrating near-infrared spectrometers. This gives it an advantage over traditional LFCs, which operate at repetition rates below 10 GHz and create a line spacing that is too small for astronomical spectroscopy.
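To see why line spacing matters, consider a rough sketch (our numbers: an assumed 1.55 µm wavelength and a resolving power of 100,000, both merely representative): adjacent comb lines land on the detector separated in wavelength by λ²f_rep/c, and must sit several resolution elements apart to be individually resolved.

```python
# Rough sketch (representative numbers, not the instrument's specs): how many
# spectrograph resolution elements separate adjacent comb lines?
C = 2.998e8           # speed of light (m/s)
WAVELENGTH = 1.55e-6  # assumed near-infrared wavelength (m)
R = 100_000           # assumed resolving power; delta_lambda = lambda / R

resolution_element = WAVELENGTH / R  # smallest resolvable wavelength step (m)
for f_rep in (10e9, 24e9):  # comb repetition rates (Hz)
    line_spacing = WAVELENGTH**2 * f_rep / C  # comb spacing in wavelength (m)
    print(f"{f_rep/1e9:.0f} GHz comb: lines {line_spacing/resolution_element:.1f} "
          "resolution elements apart")
```

Under these assumptions, the 24 GHz comb spaces its lines more than twice as far apart as a 10 GHz comb would, leaving each calibration line cleanly separable on the detector.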

The pulses are produced by way of a phenomenon known as temporal dissipative Kerr-cavity solitons (DKSs), which involves trapping ultra-short pulses of light in a circular, micron-sized microresonator. Each time the DKS pulse passes the microresonator’s input-output coupler, some of the pulse is siphoned away and directed towards the spectrometer, producing a series of spectral lines that, in the prototype, are each precisely 24 GHz apart. These lines form a spectral comb and act as a precise calibration tool for the spectrometer.

One popular method of detecting exoplanets is the radial velocity technique. This involves measuring a star’s subtle motion that is caused by the gravitational tug of an orbiting planet. These motions are often no faster than walking pace and require highly accurate spectroscopic measurements of the Doppler shift in the star’s light as it moves. The size of the Doppler shift and the period at which it occurs can tell astronomers both the mass and the distance from the star of the planet. The greater the mass of the star, or the less massive or more distant the planet, the smaller the Doppler shift.
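The scale of the challenge follows from the non-relativistic Doppler relation (our worked example, using the 30 cm/s figure quoted below):

```latex
\frac{\Delta\lambda}{\lambda} = \frac{v_r}{c}
\qquad\Rightarrow\qquad
v_r = 30\ \mathrm{cm\,s^{-1}} \;\Rightarrow\; \frac{\Delta\lambda}{\lambda} = \frac{0.3}{3\times 10^{8}} = 10^{-9}
```

In other words, detecting such a stellar reflex motion means tracking fractional wavelength shifts at the parts-per-billion level, which is only possible against an equally stable calibration scale.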

Increasing accuracy

Currently, astronomical spectrometers use hollow-cathode lamps that produce a limited number of noisy calibration lines, or standard, low-frequency LFCs that are passed through Fabry–Pérot etalons to increase their line spacing at the expense of accuracy. For example, the HARPS instrument on the 3.6 m telescope at the European Southern Observatory (ESO) in Chile can measure a Doppler shift caused by velocities as low as 30 cm/s. Meanwhile ESPRESSO, which saw first light on ESO’s Very Large Telescope in December 2017, can achieve an overall radial-velocity precision of 10 cm/s. Both use thorium-argon lamps as well as LFCs passed through Fabry–Pérot etalons.

Although the success of the DKS frequency comb also depends on the stability of the spectrometer, it has the potential to measure Doppler shifts of just a few centimetres per second. This means that it could, in principle, be used to discover potentially habitable worlds orbiting Sun-like stars.

“In the future we hope to increase the number of LFC lines to reach the 1 cm/s level,” says Herr. “However, before the technology can be used routinely there are a number of technological challenges to overcome,” Herr adds. Among these obstacles is the need to increase the span of the LFC to cover the entire near-infrared and optical bands, and also the need to make the technology less complex and more user-friendly for routine use.

Astronomical applications

However, Pepe says, “instruments such as the upcoming NIRPS spectrograph, which will join HARPS on ESO’s 3.6 metre telescope in 2019, will possibly be equipped with LFCs based on this new technology”. Ultimately, the compact nature of the technology could even see it employed in spectrometers on space-based missions.

Beyond planet-finding, the new technology could improve measurements of the redshifts of galaxies, thereby assisting in providing a more accurate measure of dark energy or the Hubble constant. Meanwhile, by observing galaxies at varying distances and measuring shifts in specific emission lines that are directly dependent upon the properties of the fundamental constants of nature, cosmologists are seeking to discover whether these constants are really variables. Because the shifts related to varying constants, such as the fine structure constant, are so tiny, a DKS laser frequency comb is likely to be the best way of making these measurements.

The new laser frequency comb is described in a preprint on arXiv.

Engineers reinvent the inductor after two centuries

Could the inductor be redesigned in a fundamentally new way? Yes, according to new work by researchers in the US, Japan and China who have made the first high-performance inductors from intercalated graphene. The devices work in the 10–50 GHz range thanks to the mechanism of kinetic inductance, rather than the magnetic inductance exploited in conventional devices. The new inductors combine small form factors with high inductance values of around 1–2 nanohenries – a combination that has been difficult to obtain so far – and are a third smaller in surface area than conventional devices with the same performance. They might thus be used in ultra-compact wireless communication systems for applications in the Internet of Things (IoT), sensing and energy storage/transfer.

“The inductor was invented nearly 200 years ago, but this is the first time since then that a completely new mechanism (kinetic inductance) is being exploited (using intercalated graphene) to re-invent this fundamental passive device,” team leader Kaustav Banerjee of the University of California, Santa Barbara, tells nanotechweb.org. “This could have significant implications for wireless communications, sensing and energy storage/transfer applications for the IoT era. It also highlights a practical application for graphene beyond circuit interconnects.”

The IoT promises to connect us with as many as 50 billion objects by 2020, with a potential impact of $2.7 to 6.2 trillion per year by 2025. This revolution will require an enormous number of miniaturized, high-performance and scalable wireless connections that are driven by radio-frequency (RF) integrated circuits (RF-ICs). Another important area, that of radio-frequency identification (RF-ID), which relies on electromagnetic fields to automatically identify and track tags attached to objects, is predicted to increase to nearly $19 billion by 2026.

Planar on-chip metal inductors are the main type of device employed in RF-ICs and can take up as much as half of the chip area. They are also responsible for the major part of the form factor of RF-IDs.

Scaling down difficult for on-chip inductors

However, the problem is that, unlike transistors and interconnects in IC technology that have been successfully scaled down, the same cannot be said for on-chip inductors. This is because inductors have been traditionally designed to work using magnetic fields alone, so a certain amount of inductor area is needed to capture these fields. What is more, conventional metals only have a small kinetic inductance because of their low momentum relaxation time (which is the average time for a carrier – an electron or hole – to lose its original momentum because of a scattering event).

To scale down inductors, we need to improve their inductance density – the total inductance Ltotal divided by the inductor area – where Ltotal is the sum of the magnetic inductance LM and the kinetic inductance LK.
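Written out explicitly (our notation for the relation just described, with A the inductor footprint):

```latex
\text{inductance density} = \frac{L_{\mathrm{total}}}{A} = \frac{L_M + L_K}{A}
```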

Kinetic inductance is a purely material property

Magnetic inductance is the property of an electrical conductor by which a change in current through it induces an electromotive force, both in the conductor itself (self-inductance) and in any nearby conductors (mutual inductance), that opposes the change. Kinetic inductance, for its part, arises from the inertia of the mobile charge carriers in alternating electric fields, and so does not rely on the magnetic field.
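The link between kinetic inductance and the momentum relaxation time is easiest to see in the Drude model (our sketch; the article itself stays qualitative): a conductor with DC resistance R and relaxation time τ has impedance

```latex
Z(\omega) = R\,(1 + i\,\omega\tau) \qquad\Rightarrow\qquad L_K = R\,\tau
```

so the longer the carriers keep their momentum, the larger the kinetic inductance – which is why ordinary metals, with their very short τ, show almost none of it.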

All this means that the magnetic inductance depends on the geometrical design of the inductor itself, while the kinetic inductance is a purely material property, says Banerjee. Improving both the LM and LK should thus improve the overall inductance density, which is our goal.

Carbon nanomaterials to the rescue

Banerjee and colleagues recently found that carbon materials, including carbon nanotube bundles and multilayer graphene, are attractive materials for on-chip inductors thanks to their high magnetic inductance and high LK, which can be equal to or even greater than the magnetic inductance. “We can thus use these materials to now scale down the on-chip inductor size without compromising their inductance values.”

In this work, the researchers started out with millimetre-sized multilayer graphene (MLG) samples transferred to insulating substrates, which they then intercalation-doped with bromine.

Layer separation increases the kinetic inductance

“Intercalation involves inserting ‘guest’ molecules between the two other ‘host’ molecules in multilayered structures, which in this case are bromine and MLG respectively,” explains Banerjee. “This intercalation introduces two important properties in the MLG. First, and perhaps most crucially, it increases the separation between the adjacent graphene layers. This increase has the effect of ‘decoupling’ these layers and thereby increases the kinetic inductance by increasing the momentum relaxation time.”

Thanks to its linear electronic band structure, monolayer graphene typically has a significantly larger momentum relaxation time compared to its multi-layer counterpart. In fact, interlayer coupling in MLG transforms the linear band structure of the monolayers to hyperbolic, which leads to lower momentum relaxation times and thus lower kinetic inductance.

A new working mechanism for inductors

“Secondly, intercalation dopes the MLG and increases its electrical conductivity, which helps in improving its performance (or Q-factor).”

The work could revolutionize the way we make on-chip inductors employed in analogue/RF ICs for wireless communications, and highlights a new working mechanism for these devices, he says. “Such inductors are set to become crucial in the near future where billions of connected IoT devices will send and receive huge quantities of information to each other in applications such as continuous monitoring of vital health signs and security, and also provide unprecedented connectivity for living a smarter life.”

The team, which includes researchers from the Shibaura Institute of Technology in Tokyo and Shanghai Jiao Tong University, is now busy looking into ways to further improve the efficiency of the intercalation process so that the kinetic inductance can be further increased. “This would in turn increase the inductance density and could help further downscale on-chip inductors and thus improve the area-efficiency of the chip as well as its Q-factor,” explains Banerjee. “We are also trying to make the entire inductor fabrication process compatible with back-end-of-line CMOS technology.”

Full details of the research are reported in Nature Electronics doi:10.1038/s41928-017-0010-z.

Arctic reaches a ‘new normal’

The Arctic environment has reached a “new normal”, characterized by thinner sea ice covering less of the surface area, shorter and less extensive winter snow cover, less ice mass in Greenland and Arctic glaciers, and warmer sea surface and permafrost temperatures. That conclusion represents the work of 85 scientists from 12 countries who prepared the 12th annual Arctic Report Card for the US National Oceanic and Atmospheric Administration (NOAA). It was released at the annual meeting of the American Geophysical Union (AGU) in New Orleans in December.

Jeremy Mathis of NOAA told reporters at AGU that changes in the Arctic affect not only that region, but “will impact all of our lives”. They will result in more extreme weather events, higher food prices and climate refugees. Although there were fewer weather anomalies in the Arctic in 2017, compared with the record-breaking warming of 2016, he said, “the Arctic shows no signs of returning to the reliably frozen state it was in just a decade ago”.

Arctic temperatures continue to rise at double the rate of the rest of the planet, NOAA reports. For the year ending September 2017, the average Arctic surface air temperature was the second warmest since 1900, after 2016.

Specifically, reported NOAA’s Emily Osborne at AGU, the average Arctic air temperature was 1.6 °C above the long-term average from 1981 to 2010. The record summer temperatures of 2016 were followed by the lowest recorded winter sea ice maximum since satellite observations began in 1979, she said. The cooler summer of 2017 resulted in the minimum sea ice cover being ranked eighth lowest on record, she explained, adding that 10 of the lowest observed sea ice minima occurred in the past 11 years.

The remaining sea ice is much younger, more fragile and more prone to melting than in the past, Osborne noted. The report card says that only 21% of the 2017 ice cover was older than one year, and so thicker. In 1985 this figure was 45%. More exposed seawater means more absorption of radiant energy from the Sun, leading to warmer water, and additional melting of the remaining ice.

The August 2017 sea surface temperatures in the Barents and Chukchi Seas were as much as 4 °C warmer than average, NOAA reported. The higher temperatures contributed to a delayed freeze-up in those regions in the autumn.

Turning to land, Vladimir Romanovsky of the University of Alaska Fairbanks reported at AGU that the Eurasian Arctic experienced above-average snow cover in 2017, while North America’s Arctic had less than average snow cover and earlier snow melt, continuing a trend of 11 of the past 12 years. The Greenland ice sheet has been measured since 2002, he said. It experienced less melting than average in 2017, compared with the previous nine years, but remains nevertheless a major contributor to sea level rise.

The Arctic Report Card covers many additional topics, including fisheries, wildfires, greening of the tundra and primary productivity (at the base of the marine food web).

Quantum gravity could be probed by entangled masses

The elusive, yet seemingly inevitable, quantum nature of gravity may finally be demonstrated experimentally. Two separate groups in the UK – Chiara Marletto and Vlatko Vedral at the University of Oxford, and a team led by Sougato Bose from University College London – have proposed experiments that may for the first time reveal a link between the theories of quantum mechanics and general relativity.

Within their domains, both quantum mechanics and Einstein’s general theory of relativity seem to work incredibly well, no matter what problems physicists throw at them. However, some of the physical rules overseeing the theories appear to be fundamentally incompatible, and a unified theory of quantum gravity which marries them together has become notoriously difficult to develop.

“A formidable problem is the immense weakness of the gravitational interaction in comparison to other fundamental forces in nature,” Bose tells Physics World. “For example, even the electrostatic force between two electrons overtakes the gravitational force between two kilogram masses by several orders of magnitude.”

Impossible to test

So far, theories about the quantum nature of gravity have seemed practically impossible to test experimentally. As such, theories ranging from quantum particles known as gravitons, which convey gravity in a similar way to photons for electromagnetic fields, to the all-encompassing ideas of string theory and loop quantum gravity, have all remained speculative.

Early steps to devise an experiment to test quantum gravity were taken by Richard Feynman. He proposed a thought experiment in which a test mass is prepared in a quantum superposition of two different locations and then allowed to interact with the gravitational field, causing the mass and the field to become entangled. If the two spatial states of the mass could then interfere, reverting the mass back to a single, definite spatial location, the coupling with the gravitational field would subsequently be reversed, showing that gravity had been coherently coupled to a quantum system.

Feynman hoped that this experiment would confirm the quantum nature of gravitational fields, but Marletto and Vedral believe it is not sophisticated enough. Since the interference of the two spatial states of the mass could occur even in the presence of a classical gravitational field, the experiment would not prove that the mass and the field were entangled unless the entanglement could be measured directly; interference alone does not require the field to be quantized. Bose explains: “Exactly what attribute of the masses to measure in order to conclude the quantum nature of the gravitational field was left open.” Another approach was needed to witness its quantum features.

Entanglement between two masses

Bose, Marletto and their colleagues believe their proposals constitute an improvement on Feynman’s idea. They are based on testing whether the mass can be entangled with a second, identical mass via the gravitational field. To do this, the two masses would first be prepared using two adjacent, identical interferometers. These devices are typically used to split light waves into separate beams, which can then be interfered. When tiny masses are involved, however, their quantum wave functions can be split and interfered, placing each mass in a superposition of spatial states.

“Our two teams took slightly different approaches to the proposal,” Marletto explains to Physics World. “Vedral and I provided a general proof of the fact that any system that can mediate entanglement between two quantum systems must itself be quantum. On the other hand, Bose and his team discussed the details of a specific experiment, using two spin states to create the spatial superposition of the masses.”

If gravitational fields are truly quantum in nature, both teams realized, the gravitational attraction between the two masses would cause them to become entangled by the time they left their respective interferometers. As in Feynman’s experiment, the first mass becomes entangled with the gravitational field, but this time no direct measurement of the field is required, since the second mass acts as a witness to the quantum properties of the first. This would allow the researchers to confirm that a classical gravitational field could not have been responsible for the observed entanglement between the masses.
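Schematically (our sketch of the phase argument in Bose and colleagues’ paper; the symbols are defined here, not in the article), each pair of superposition branches accumulates a gravitational phase

```latex
\phi_{ij} = \frac{G\,m_1 m_2\,\tau}{\hbar\,d_{ij}}
```

where d_ij is the distance between branch i of one mass and branch j of the other and τ is the time they interact; the masses emerge entangled whenever these phases cannot be factored into independent single-mass phases.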

Unwanted effects

Bose and Marletto’s teams both acknowledge that the challenges posed by shortcomings in present-day technology mean that their proposed experiments have no guarantee of success. If the experiments are not carried out carefully enough, stronger effects such as Casimir and van der Waals forces, or other unwanted electromagnetic interactions, could mimic the desired effects of gravity, entangling the two masses.

The masses could also fail to become entangled even if the gravitational field is quantized, should the nature of quantum gravity be even more subtle and complex than the researchers anticipated; this means a failure to witness entanglement would be an inconclusive result. They also realize that their experiments still wouldn’t confirm which of the many competing theories of quantum gravity is correct.

The proposals are described in two papers published in Physical Review Letters: one by C Marletto and V Vedral, the other by Bose et al.

EU renewables policy: mixed reactions

A Guide to EU Renewable Energy Policy, published by Edward Elgar, critically reviews progress on renewable energy policies from the 1980s to the present, with a foreword by Rainer Hinrich-Rahlwes identifying what he sees as a loss of nerve, given the initial bold start and the 20% by 2020 renewable energy target set in 2007. The current 27% by 2030 renewable energy and energy efficiency targets are depicted as “hardly more than business as usual”. The replacement of feed-in tariffs (FiTs) by contract auctions is also lamented.

Although not all the contributors are quite so forthright in their assessment, there are good critical chapters on specific country programmes, including the UK, Germany, France, the Netherlands, Denmark, Italy and Spain. A simple summary is that Germany led with FiTs, which accelerated solar PV quickly and, like Denmark, also pushed wind hard, whereas in Spain renewable deployment was mostly utility-led, so solar PV didn’t grow as much there. In the UK, the support system also favoured large players, with generation costs higher than under FiTs, so renewables have not expanded as fast as they might have done – its very large wind resource is only being developed relatively slowly. France, Italy and the Netherlands reacted in mixed ways, with relatively low levels of renewable output resulting.

Given this mixed context, the chapter on the UK is certainly illuminating: it is always interesting to hear how others view us. None too favourably in this case. The UK is seen as conflicted: strong on climate policy (often taking a lead), but weak on renewables, resisting EU attempts to set specific national targets for them. An “awkward partner”, which has “steadily attempted to debilitate EU policies” on renewables, seeking to defend its interests – which seem to have been mainly free markets and nuclear power.

The book also looks at progress in the new Eastern European member states, with useful chapters on Poland, Romania and Bulgaria, highlighting some of the differences resulting from varying national stances and reactions to EU renewables and climate policy. Poland – keen to protect its coal base – is seen as a laggard, sometimes resisting EU directives, while Romania and Bulgaria, despite problems of their own, made more progress. Indeed, Romania reached its 2020 target of 24% renewable energy in 2014, while Bulgaria also overshot its 16% target for 2020, though that then led to a slowdown.

There is also some coverage of the EU’s involvement outside its borders, with chapters on the Solar Med plan and on biomass in Mozambique. Biomass issues are covered throughout the book as a special theme, and it is clearly an important and contentious area both within the EU and externally. In terms of external initiatives, the Mediterranean Solar Plan (MSP) story is interesting. The MSP envisaged solar, wind and biomass projects across the Med border regions, linked up perhaps by a “Med ring” supergrid. It was a key project of the Union for the Mediterranean, promoted by France, under its then president Nicolas Sarkozy, in 2008. However, institutional problems emerged, due to Arab–Israeli conflicts and Spanish resistance to the MSP; although these have diminished, as this book reports, the Med Union and the MSP still support projects across the MENA region.

In that context, and in relation to the benefits of the MSP to the Mediterranean Partner Countries (MPCs), chapter author Gonzalo Escribano says: “The question is whether the MSP has the potential to become a driver for MPCs’ development, or can instead be better considered as an EU-centric project aimed at achieving its own environmental objectives together with the promotion of European industries and engineering firms.” There is clearly a tension there, and also issues related to what might be seen as an attempt to “Europeanize” the region.

The EU renewable energy programme is certainly fascinating as an example of an attempt to develop a co-operative approach, despite often conflicting member state interests. And this book is a good policy primer, with an interesting analytic framework for ‘europeanisation’, offering categories for the member state responses to EU policies. In terms of a “vertical” model of interaction, some are pro-active, some are just responsive. So it contrasts EU policy bottom-up “uploaders” with top-down “downloaders” on the basis of their degree of fence sitting/acquiescence, pace setting/participation, or foot dragging/resistance. No prizes for guessing which categories the UK, or Poland, fall into. A “horizontal” classification is also mentioned, based on imitation (not imposition!), with EU enthusiasts no doubt hoping that its policies will be copied and exported elsewhere. Though maybe not all should be: as this book shows, not everyone sees its competitive, single-market-based drive to “harmonise” EU renewable support systems and get rid of FiTs as beneficial. These are issues I have been looking at recently in relation to Africa, currently the focus of much EU effort, in a book I have been working on. The EU has been a leader, but it may not have all the answers…

Edward Elgar has also published a Research Handbook on EU Energy Law and Policy edited by Rafael Leal-Arcas and Jan Wouters, with 39 academic contributors. Given this input, it is even more detailed in its review of EU energy policies, legislation, regulation, governance and associated issues. Legal aspects include energy justice issues and marine law – and also corruption. Its coverage is wide, covering most aspects of energy policy (e.g. oil and gas policy), as well as renewables, although, thankfully, it’s not all just about supply: it does look at energy efficiency, lamenting the lack of an integrated approach.

One focus is on what social science research needs to be done to improve the EU’s handling of energy issues. I contributed an overview chapter, in which I concluded “Energy generation and use patterns are changing, partly as a result of new technology, but also due to new consumer and community preferences. In terms of research, understanding these social changes is perhaps the key need. While there is a continued drive towards a single European energy market (with or without the UK) and the proposed European Energy Union, at the grass roots, other forms of economic and social organization are emerging with perhaps different, often more localised, orientations, as well as differing drivers and problems.”

The point being that new forms of enterprise are emerging, with new motivations, including co-ops and community energy groups creating local economic value based on renewable energy use. They are no longer a small minority group – around 40% of the renewable capacity in Germany is locally owned, as are many of the wind projects in Denmark. Responding to that, and indeed supporting more, will require new energy policies, legislation, regulation, and governance, not least to make sure the system can still be balanced.

That of course assumes that the EU really does want to move ahead to a fully sustainable future. The overall message from this book, and even more so from the previous one, is that it is trying, although there are many internal conflicts and diversions, including the only tangentially mentioned nuclear issue, and also the imminent departure of the UK.

How forests could limit earthquake damage to buildings

Buildings in the future could be isolated from earthquakes by being placed behind rows of trees. That’s according to physicists in France, who have shown that certain seismic waves, known as Love waves, could be diverted away from the Earth’s surface as they pass through a forest containing trees of a certain height. The forest acts like a metamaterial – an artificial structure usually used to steer electromagnetic radiation around objects.

Best known for their use as invisibility cloaks, metamaterials are made from large arrays of tiny resonators that manipulate light and other electromagnetic waves in unnatural ways. In recent years, however, the mathematics underlying metamaterials has also been applied to other kinds of radiation, including seismic waves. The idea here is to use arrays of suitably sized objects either below or above ground – holes or posts of some kind – to divert seismic waves around vulnerable buildings.

Whereas passive isolation typically targets a building’s resonant frequency, seismic cloaks could, in principle, be broadband, according to Sébastien Guenneau of the Fresnel Institute in Marseille. This, he says, would allow extensions to be added to buildings and could be used to protect historical monuments that cannot be altered. Guenneau was part of a team that demonstrated the basic principle of such “seismic cloaks” in 2012 by drilling a 2D grid of 5 m-deep boreholes into top soil and measuring the grid’s effect on acoustic waves generated close by.

The researchers found that just a couple of rows of boreholes could reflect around half of the wave energy back towards the source. A few years later, however, another group, which included Guenneau and Philippe Roux from the University of Grenoble, showed that nature could do a similar job. They showed that a small pine forest in Grenoble could reflect most of the energy within certain frequency bands of “Rayleigh waves”, which travel just under the surface and are generated by the wind and vibrations from nearby road works.

Love the feeling

Now Guenneau – along with Agnès Maurel of the Langevin Institute in Paris and Jean-Jacques Marigo of the Ecole Polytechnique in Saclay – has shown theoretically that forests should also be able to shield against Love waves. Like Rayleigh waves, these waves travel just below ground and are generated when seismic waves travelling away from an earthquake’s epicentre reach the Earth’s surface. But, whereas Rayleigh waves have both a horizontal and vertical motion, Love waves – which can severely damage a building’s foundations – cause a side-to-side, purely horizontal shaking.

Guenneau and co-workers have found that, like Rayleigh waves, Love waves should set up vibrations in tree trunks. They have identified a new kind of wave that they dub a “spoof Love wave” generated when a seismic wave propagates along wooded ground, whose top soil yields lower shear velocities than does the bulk. This wave is mathematically analogous to an electromagnetic wave known as a “spoof plasmon”, which can propagate along a metal surface studded with metallic pillars – the ground playing the role of the air above the surface while the trees stand in for the pillars.

The researchers considered what would happen when Love waves approach a forest containing rows of progressively shorter trees. They worked out that the resulting spoof Love waves would propagate through the forest until they reach the row containing trees of just the right height. The waves would then set the trees shaking and so turn them into secondary sources that dissipate most of the vibrational energy downwards through the Earth. Conversely, they found that when Love waves approach a forest with progressively taller trees, the seismic energy should largely reflect back to where it came from.

The group also found that tree foliage should affect passing seismic waves, changing the height of a resonating tree for a given wave frequency. “The striking effect of foliage might also lead to revised models of Rayleigh waves in forests,” says Maurel.

Life-saving

As to the practical potential of “arboreal shielding”, Guenneau points out that a five to 10-storey building resonates at no more than about 10 Hz. At that frequency, he says, trees would only have to be 10–15 m tall to resonate with Love waves, while they would need to be a whopping 50–75 m to protect against Rayleigh waves. He therefore envisages trees preventing horizontal shaking, while conventional techniques continue to guard against vertical motion. “Forests could halve the work of civil engineers,” he says.
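The quoted heights are consistent with treating a trunk as a quarter-wavelength resonator, h = v/4f (our toy model, with an assumed effective wave speed, not the authors’ calculation):

```python
# Toy model (our assumption, not the authors' calculation): a tree trunk as a
# quarter-wavelength resonator, h = v / (4 * f).
V_TRUNK = 500.0  # assumed effective wave speed along a trunk (m/s)


def resonant_height(f_hz: float, v_mps: float = V_TRUNK) -> float:
    """Quarter-wave resonant trunk height (m) at frequency f (Hz)."""
    return v_mps / (4.0 * f_hz)


print(resonant_height(10.0))  # -> 12.5 m, within the 10-15 m range quoted for Love waves
```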

To show that their idea works in practice, Guenneau and colleagues hope to persuade Roux to investigate Love waves when he starts new experiments in Grenoble, possibly in the autumn. To support their case, the trio first plan to take part in a small lab test involving ultrasonic waves and micron-sized piezoelectric “trees”.

Ping Sheng from the Hong Kong University of Science and Technology, who studies acoustic metamaterials, warns that the proposal would need trees with specific heights usually not found in nature. As such, he argues, the idea would have greater appeal if it could be applied to a more realistic forest. “That would indeed be interesting,” he says.

The research appears on the arXiv server.
