
How to eavesdrop on underwater conversations

The transmission of sound from water to air has been boosted by a factor of 160 by physicists in Korea and Japan. The feat was accomplished using a new type of “acoustic metasurface” that is significantly thinner than the wavelength of the audio signal that it transmits.

Underwater sound reflects almost completely from the air-water interface, with only about 0.1% of the sound energy emerging into the air. Boosting this transmission could improve communications between air and water and could also make it easier to monitor sounds from underwater geological and environmental sources. Today, such monitoring involves underwater piezoelectric transducers, which are about 1000 times less sensitive than the capacitor-based microphones used in air.

Reflection occurs because the acoustic impedance of water is about 3600 times greater than that of air. While maximum transmission requires the impedances to match, the amount of sound that gets through can be increased by introducing an intermediate material with an impedance between that of water and air. However, a material with the right impedance would be very difficult to create. Furthermore, the intermediate layer would need to have a thickness on par with audio wavelengths, which measure in the tens of centimetres.
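The 0.1% figure can be checked with the standard formula for normal-incidence power transmission between two acoustic media, T = 4Z₁Z₂/(Z₁ + Z₂)². A quick sketch using textbook impedance values (these particular numbers are not from the paper):

```python
# Normal-incidence power transmission between two acoustic media:
#   T = 4*Z1*Z2 / (Z1 + Z2)**2
# Representative specific acoustic impedances (textbook values):
Z_AIR = 413.0       # rayl (kg m^-2 s^-1), air at ~20 C
Z_WATER = 1.48e6    # rayl, fresh water

ratio = Z_WATER / Z_AIR
T = 4 * Z_AIR * Z_WATER / (Z_AIR + Z_WATER) ** 2

print(f"impedance ratio: {ratio:.0f}")   # ~3600, as quoted in the article
print(f"power transmission: {T:.2%}")    # ~0.11%, i.e. about 0.1%
```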

Loaded membrane

Now, Sam Lee and colleagues at Yonsei University in Korea and Hokkaido University in Japan have created an acoustic metasurface that boosts transmission, yet is much thinner than the wavelength of the sound it transmits. The metasurface is a loaded-membrane resonator – a device that has previously been used to create sound absorbers and systems for harvesting acoustic energy.

The resonator comprises a membrane stretched across a thin cylindrical frame of diameter 24 mm. A 60 mg, 7 mm diameter mass is attached to the centre of the membrane. This loaded membrane is placed inside a cylindrical waveguide with water on one side and air on the other. The weighted membrane operates in the air side of the waveguide and is located a few millimetres from the water, which is on the other side of a second, unweighted membrane (see figure).

When sound waves at about 700 Hz – a midrange audio frequency – strike the double membrane structure, about one third of the acoustic power is transmitted into the air. This is about 160 times the transmission that occurs at a bare water-air interface. Remarkably, the 5 mm thickness of the structure is 100 times thinner than the wavelength of sound in air at 700 Hz.
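The "100 times thinner" claim is easy to verify, assuming a speed of sound in air of about 343 m/s:

```python
C_AIR = 343.0      # m/s, speed of sound in air at ~20 C (assumed value)
FREQ = 700.0       # Hz, the midrange frequency quoted in the article
THICKNESS = 5e-3   # m, thickness of the double-membrane structure

wavelength = C_AIR / FREQ   # ~0.49 m
print(f"wavelength: {wavelength:.2f} m")
print(f"wavelength / thickness: {wavelength / THICKNESS:.0f}")  # ~98, roughly 100
```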

The acoustic metasurface is described in Physical Review Letters.

A virtual tour of virtual reality

By Margaret Harris at Photonics West in San Francisco

“How many people in this room are wearing smart glasses today?”

When Bernard Kress, a photonics expert and optical architect on Microsoft’s HoloLens smart-glasses project, posed this question at the Photonics West trade show, he had reason to expect a decent response. He was, after all, speaking at a standing-room-only session on virtual reality, augmented reality and mixed reality (VR, AR and MR), and the audience was packed with tech-friendly, early-adopter types who had come specifically because they’re interested in such devices. Surely, someone in the audience would put up their hand.

But in the end, the only hand that went up belonged to one of the speakers: Thad Starner, a computing expert at Georgia Tech who has been using smart glasses for more than a decade. And in a way, that pretty much sums up the state of VR/AR/MR today: despite recent progress, it still hasn’t reached a mass consumer market. Indeed, several of the session’s speakers – notably Robert Schultz, head of optics R&D at the VR/AR development firm Vuzix – said as much, pointing out that virtual reality is basically at the same stage as mobile phones were in, oooh, 1985 or thereabouts.

That said, the overall thrust of the session was that – like the “brickphones” carried by Wall Street types in the Gordon Gekko era – today’s VR and AR devices are the advance guard of a technological revolution.  Certainly, the appetite is there. When Starner asked who would wear a display that looked and felt like a normal pair of eyeglasses, and had the same functionality as a smartphone, he got a forest of raised hands.

Getting to that point will require progress in several areas, but one that kept coming up during the session was the need to improve the way VR devices deal with depth cues. Most displays operate at a single, fixed focal distance, which means that everything you see through a headset appears to be in focus. That might sound like a good thing, but as Doug Lanman of Oculus Research pointed out, “Vision scientists will tell you it’s a bug.” Our eyes just don’t see the world that way, and when they’re presented with it in a virtual format, the result is a cocktail of nausea, dizziness and headaches.

Developers and optics experts from a wide range of companies – including start-ups such as Lemnis Tech and Avegant as well as established players like Microsoft, Google and Intel – are exploring various ways of solving this problem, which is known as the vergence-accommodation conflict. Between talks, I had the pleasure of trying out the Lemnis Tech device, which uses adaptive optics to generate natural focusing cues. The hardware part of their solution involves cameras and infrared LEDs that capture information about where the wearer is looking; the software part takes this information and uses it to, in effect, turn the headset into a varifocal system. As a result, when I looked through their headset at a simulated version of the Earth and the Sun, I could feel the system adjusting to my eye position, giving me a more realistic visual experience.

I also tried out a range of other devices, including Digilens’ heads-up display for motorcyclists, uSens’ hand-tracking headset and ODG’s augmented-reality glasses. Of the devices I tested, I found ODG’s device the most comfortable to wear – although the comparison isn’t really a fair one, since the four devices (and their potential applications) are so different. I do, however, think it’s fair to say that no-one has solved the other key challenge facing VR developers, which is to make compact, near-eye displays that resemble normal glasses. The buzzword for this during the talks was “form factor”; my own preferred term is “not making users look like total dorks”.

As the above video shows, that goal is still pretty remote, and reaching it won’t be easy. According to Schultz, the keys to making a truly wearable device include eliminating stray light, “ghost” reflections and unwanted diffraction effects; reducing weight; improving processing power and battery life; designing an intuitive interface; incorporating WiFi and Bluetooth; and – above all – paying attention to style.

Ticking all of those boxes is going to require a host of innovations. But then, you could have said the same thing about turning a 1980s brickphone into today’s sleek smartphones. As another speaker, Tish Shute of Huawei USA, put it, “We are entering an era when computing will become integrated with human perception.” There’s a brave new world out there somewhere, and VR will undoubtedly be a part of it.

Watch the super blue blood moon eclipse live

By Hamish Johnston

Tomorrow,  people around much of the world should be able to see a lunar triple-whammy. There are two full moons this month, which is a relatively rare occurrence called a blue moon. The Moon is also near its closest approach to Earth, which means that it will loom large in the sky as a super moon.

And it just so happens that the Earth will pass between the Sun and the Moon tomorrow, causing the lunar surface to darken in a total lunar eclipse. When this happens, some red light from the Sun is refracted through the Earth’s atmosphere and strikes the Moon – giving it a blood-red appearance.

While the eclipse will be visible across much of the world, people in western Europe and huge swathes of South America and Africa will miss out because it will be daytime there (see figure).

If you are unlucky and can’t see the super blue blood moon eclipse, NASA TV will be broadcasting observations live from telescopes at NASA’s Armstrong Flight Research Center in Edwards, California; Griffith Observatory in Los Angeles; and the University of Arizona’s Mount Lemmon SkyCenter Observatory. The show starts at 10:30 UTC on 31 January.

Expedition aimed to mitigate earthquake risk in New Zealand

Expedition ship

The “SHIRE” expedition returned in December from obtaining seismic data along the Hikurangi margin off New Zealand, where the Pacific and Australian plates collide. The team involved in the Seismogenesis Hikurangi Integrated Research Experiment hopes to boost knowledge of subduction zone “megathrust” earthquakes and so mitigate earthquake risk in New Zealand.

“We have obtained an exciting dataset with new details on the structure and properties along the megathrust and look forward to migrating these results with other ongoing studies along the Hikurangi margin,” said Nathan Bangs of the University of Texas Institute for Geophysics, who was expedition chief scientist, working alongside researchers from the US, New Zealand, Japan and the UK.

The Hikurangi margin has the potential to generate massive earthquakes. The wider SHIRE project includes geophysical imaging as deep as 30 km below the Earth’s surface, as well as looking at slip behaviour over several earthquake cycles, and numerical modelling. The findings should provide insight into slip behaviour and long-term deformation at subduction zones.

The processes governing how faults slip – from destructive earthquakes to slow slip events and aseismic creep – are not well understood. But events such as the devastating 2011 Christchurch earthquake reveal how vital it is to know more.

The SHIRE expedition will test several hypotheses, including that high fluid pressure promotes stable sliding on faults. The aim is to investigate the conditions that lead to aseismic creep, slow earthquakes, and large magnitude earthquakes. The project is one of several investigating the Hikurangi subduction zone, which has a shallow subduction interface that makes it accessible to study, and a range of slip behaviours.

Now the scientists are studying the seismic data they acquired.

Microbubbles increase tumour radiosensitivity

Most solid tumours are poorly oxygenated, rendering them more resistant to both chemo- and radiotherapy. Systemic delivery of oxygen prior to treatment has proven largely ineffective in reversing this resistance. Now, US researchers have shown that bursting oxygen-filled microbubbles injected into breast tumours makes the tumours three times more sensitive to radiotherapy, improving survival in animal models of the disease (Int. J. Radiat. Oncol. Biol. Phys. doi: 10.1016/j.ijrobp.2018.01.042).

“Finding a way to reverse oxygen deficiency in tumours has been a goal in radiation therapy for over 50 years,” said senior author John Eisenbrey, from Thomas Jefferson University and the Sidney Kimmel Cancer Center. “We’ve demonstrated here that oxygen microbubbles flush tumours with the gas, and make radiation therapy significantly more effective in animal models.”

In the study, Eisenbrey and colleagues delivered oxygen-filled microbubbles into the animals’ bloodstream via intravenous injection. Although the bubbles circulated systemically, they were burst locally, raising the oxygen level only within the tumour itself. The team demonstrated that the microbubbles successfully and consistently increased breast tumour oxygenation levels by 20 mmHg, significantly more than control injections of saline or untriggered oxygen microbubbles.

The researchers showed that bursting the microbubbles with ultrasound immediately prior to irradiation almost tripled the radiosensitivity of the tumour. It also nearly doubled survival times in the mice – from 46 days with placebo (nitrogen-filled) microbubbles to 76 days with oxygen-filled microbubbles. Using photoacoustic imaging, the investigators also observed that oxygen increased throughout the cancer mass, even in areas without direct access to blood vessels.

“The very act of bursting these microbubbles within the tumour tissue seems to change the local physiology of the tumour and make cells generally more permeable to oxygen and potentially to chemotherapy as well,” explained Eisenbrey. “We think this is a promising approach to test in patients to amplify the effects of radiation therapy.”

Tactile maps revealed following observed touch

A precise topographic map of the representation of touch in area 3b of the somatosensory cortex – a region thought to respond only to mechanical stimulation – was produced when a subject observed someone else’s fingers being touched. Researchers from the UK (UCL) and Germany (Magdeburg, Leipzig and Bochum) used functional MRI (fMRI) to show that such “foreign source” maps were consistent with maps produced following actual tactile stimulation: they overlapped in the same areas, albeit with weaker fMRI signal amplitude (J. Neuroscience 10.1523/JNEUROSCI.0491-17.2017).

This research delves deeper into the multisensory properties of different cortices, and may have implications for studies of cortical plasticity – the brain’s ability to reorganize in response to trauma or learning.

Previous studies have shown that sensory cortices in the brain can receive inputs from other sensory systems, for example visual activity in response to sound or touch. It is unknown, however, to what degree, and to what specificity, these foreign sources activate sensory cortices.

The auditory, somatosensory (tactile) and visual cortices have detailed spatial organization. For instance, sub-regions of the somatosensory cortex that respond to finger touch lie next to each other in a neat pattern. Here, the researchers employed ultra-high-resolution (7 Tesla) fMRI to investigate whether area 3b responds when a subject observes someone else’s fingers being touched, and if so, whether there is specific finger organization.

Observing touch

The fMRI procedures involved two visual sessions and one tactile session. In the first visual session, participants viewed someone else’s fingers being touched in a “phase-encoded” design (stimulation of consecutive fingers) from a first-person perspective. The phase angle of the response was used to visualize the finger being stroked, producing “finger-phase” maps.

The visual experiment was repeated using a blocked design, from a third-person perspective; this was used to test whether visually driven maps were only produced after observing “self-referenced” touch (first person), and whether the maps were robust across fMRI designs and analyses. In the tactile session, the participant’s fingers were stimulated with sandpaper, in both phase-encoded and block designs, separately.

Additionally, a separate group of participants underwent electromyography (EMG) to measure the small electrical currents produced following muscle movement. The researchers performed this experiment to verify whether the maps produced while observing touch were confounded by involuntary finger movements. They found that observing touch did not trigger muscle activity.

Soft robots walk like caterpillars and swim like jellyfish

A new type of highly agile robot has been created by a team led by Metin Sitti at the Max Planck Institute for Intelligent Systems. The soft-bodied robots are controlled using applied magnetic fields and can perform a wide range of motions – allowing them to navigate through challenging terrains and environments. With the ability to carry drugs as a cargo, and even construct new tissues, the robots could have important medical applications.

Sitti and colleagues made their robots from ribbons of flexible, hydrophobic silicone rubber, embedded with microparticles of a specially designed magnetic alloy. Each ribbon is 1.5 mm wide and 185 µm thick and has a magnetization that varies sinusoidally along its 4 mm length.
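As an illustration (not the authors' code), a sinusoidal magnetization profile along the ribbon can be written as m(s) = M(cos φ(s), sin φ(s)), with the phase φ advancing by 2π over the body length; the magnitude and sampling below are arbitrary choices:

```python
import math

RIBBON_LENGTH = 4e-3   # m, body length quoted in the article
M = 1.0                # arbitrary magnetization magnitude (illustrative)

def magnetization(s):
    """In-plane magnetization direction at position s along the ribbon,
    with the phase advancing by one full period over the body length."""
    phase = 2 * math.pi * s / RIBBON_LENGTH
    return M * math.cos(phase), M * math.sin(phase)

# Sample the profile at five points along the ribbon:
for i in range(5):
    s = i * RIBBON_LENGTH / 4
    mx, my = magnetization(s)
    print(f"s = {s * 1e3:.0f} mm  ->  m = ({mx:+.2f}, {my:+.2f})")
```

Rotating an applied field then sweeps this profile along the body, producing the travelling-wave deformations described below.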

When subjected to magnetic fields, the robots are bent into a wide variety of shapes. By varying the magnetic field periodically in different and creative ways, the team has shown that the robots can be controlled to move using a range of self-propulsion techniques.

Caterpillar or jellyfish

Causing the body of a robot to undulate like a travelling sine wave results in motion that mimics the gait of a caterpillar – allowing the robot to roll, crawl or walk over rough, irregular terrain. Adapting the motion further allowed the robots to jump over obstacles and land safely. When immersed in water, the induced motion caused the robots to swim like jellyfish, propelling themselves upwards against gravity. Their hydrophobic coating also allowed the robots to crawl along the surfaces of fluids and climb water menisci.

Sitti’s team is hopeful that the dexterity of their robots will become useful to the field of medicine. Since the motion-inducing magnetic fields are harmless to humans and can penetrate through tissue, smaller versions of the robots could be used to deliver drugs by carrying them through complex and varied environments inside the body. They could also be put to construction tasks, assembling tissues, and assisting with non-invasive surgeries.

The robots are described in Nature.

Does general relativity violate determinism inside charged black holes?

Under certain extreme conditions Einstein’s general theory of relativity seems to violate determinism, according to an international team of physicists. The group has shown that in a universe expanding under the influence of the cosmological constant, black holes generated by the collapse of highly charged stars should contain a region where physical conditions are not fixed by the stars’ initial state. At odds with a 40-year-old idea known as cosmic censorship, the researchers say that signs of this indeterminism might show up in detections of gravitational waves.

Newton’s mechanics allow us in principle to calculate the exact state of a physical system at any point in the future, provided that we know its initial state perfectly. So too with general relativity: a precise knowledge of space’s geometry and its rate of change in the present enables us in theory to predict exactly how space-time will evolve. As such, Einstein’s theory is considered by most physicists to be entirely deterministic.

Charged black holes, however, challenge this deterministic picture. The “Reissner-Nordström” solution of general relativity describes a black hole created when a star that is electrically charged and spherical collapses in on itself under the force of gravity. Hidden from view inside such a black hole’s event horizon lies a second boundary known as the Cauchy horizon, beyond which space-time is smooth but indeterminate. In other words, the future can no longer be predicted.
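For reference, in geometric units (G = c = 1) the Reissner-Nordström solution has two horizons at r± = M ± √(M² − Q²) for a hole of mass M and charge Q; the inner root r− is the Cauchy horizon. A minimal sketch:

```python
import math

def rn_horizons(mass, charge):
    """Outer (event) and inner (Cauchy) horizon radii of a sub-extremal
    Reissner-Nordstrom black hole, in geometric units (G = c = 1)."""
    if abs(charge) > mass:
        raise ValueError("naked singularity: |Q| must not exceed M")
    disc = math.sqrt(mass * mass - charge * charge)
    return mass + disc, mass - disc

# Uncharged limit: no Cauchy horizon (r_minus = 0), and the event horizon
# sits at the Schwarzschild radius r = 2M.
print(rn_horizons(1.0, 0.0))   # (2.0, 0.0)

# Adding charge pulls the event horizon inwards and creates an inner horizon:
print(rn_horizons(1.0, 0.6))   # roughly (1.8, 0.2)
```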

Strong cosmic censorship

An idea put forward by British physicist Roger Penrose in the 1970s had appeared to forbid such non-deterministic behaviour. His “strong cosmic censorship” conjecture states that there is some mechanism within general relativity – a censor – that prohibits the appearance of Cauchy horizons. In the case of a charged black hole, he calculated that even the slightest perturbation in the initial conditions of the imploding star destroys the Cauchy horizon and yields a singularity in its place. At this point of infinite space-time curvature the relativistic field equations break down and determinism, or its absence, ceases to be an issue.

But in the new work, described in Physical Review Letters, Lisbon University’s Vitor Cardoso and colleagues find that there should be some circumstances when the singularity imposed by cosmic censorship does not form. They considered the net effect of two opposing influences on the Cauchy horizon – the amplification of any tiny perturbation by the immense gravity of a black hole on the one hand, and the damping effect of the black hole’s external environment on the other. Specifically, they worked out what would happen for a highly charged star collapsing in a universe whose expansion is being accelerated by a cosmological constant – as ours appears to be.

Quasinormal modes

To do so, Cardoso and team studied damped oscillations known as quasinormal modes. They have shown that when the collapsing star has enough charge, damping wins out over amplification and the oscillations die away quickly. As Cardoso explains, the charge and cosmological constant essentially provide repulsive forces that counteract the pull of gravity and so diminish its amplifying effects. The upshot, he and his colleagues say, is that the Cauchy horizon is damaged but not completely destroyed. As such, they conclude, there is indeed a region within the black hole where the relativistic field equations work but where determinism breaks down.

Group member João Costa says that this would be a more fundamental breakdown of determinism than that inherent to quantum phenomena. As he points out, although we can’t predict the outcome of any particular quantum measurement we can still work out the probability distribution for an ensemble of measurements. But, he says, beyond the Cauchy horizon such overall predictability would be impossible.

Falling Schrödinger’s cat

“Thinking about Schrödinger’s cat, we know we can assign probabilities to the cat being alive and dead,” says Cardoso. “But if the cat were to fall inside the Cauchy horizon we could not even compute these probabilities.” Although, he adds, “for the cat that is probably irrelevant, since it would be dead anyway.”

As Cardoso and colleagues point out in their paper, charged black holes are not expected to exist in nature. But they say that a close analogy between charge and angular momentum means they expect “very similar results” for neutral, rotating black holes. “Given the non-zero cosmological constant and the existence of rapidly rotating black holes in our universe,” they write, “these results cannot be taken lightly.”

Gary Horowitz of the University of California, Santa Barbara, who was not involved in the work, says the research provides “the best evidence I know for a violation of strong cosmic censorship in a theory of gravity and electromagnetism.” The next step, he adds, would be to fully model gravitational and charge effects, since the present analysis relies on simplified massless scalar fields.

Unavoidable presence

In fact, in a paper recently posted to the arXiv server, Shahar Hod of the Ruppin Academic Center and the Hadassah Academic College in Israel claims to have done something similar. Hod has found that charged fields close to Reissner-Nordström black holes decay slowly enough to guarantee unstable Cauchy horizons. Given “the unavoidable presence” of these fields – since the collapsing stars themselves are charged – he concludes that such black holes must respect strong cosmic censorship.

Cardoso’s colleague Aron Jansen describes Hod’s counter-proposal as “an intriguing possible resolution” to the dispute but says more work needs to be done. One complicating factor, he says, is that actual charged matter would consist of fermions not the scalar particles investigated by Hod.

It is possible, adds Cardoso, that the issue could be settled by observing gravitational waves. He says that the idea of indeterminism would be bolstered by the existence of black holes that either have lots of charge or spin very quickly. Alternatively, he speculates, the fading of gravitational wave signals – due to the presence of a black hole’s event horizon – might be modulated by the presence of a Cauchy horizon. He points out, however, that such a signal might also mean dark energy cannot be explained in terms of the cosmological constant.

Soft contact lens could help diabetes patients

A new, soft, flexible contact lens can monitor glucose levels in tears wirelessly and even alert the user if glucose levels are too high by turning off a light-emitting diode embedded in the device. The lens, which has already been successfully tested out on live rabbits, could be used to screen for pre-diabetes and in non-invasive real-time health monitoring applications for diabetics.

Researchers have developed a variety of wearable and non-invasive sensors in recent years. These include electronic-skin coatings that can detect blood oxygen levels, contact lenses made from metal-oxide thin films that can detect glucose levels in tears and flexible integrated sensor arrays based on plastic and silicon integrated circuits that can detect molecules like glucose in sweat.

Most of the contact lenses made so far, however, contain brittle, hard components fabricated on plastic films that can block the user’s vision and sometimes even damage the eye. They also often require bulky electronics to measure signals from the sensors, which of course is not very practical.

A team of researchers led by Jang-Ung Park of the Ulsan National Institute of Science and Technology (UNIST) in the Republic of Korea has now succeeded in making soft contact lenses in which glucose sensors, wireless power-transfer circuits and an LED display are integrated using transparent and stretchable nanostructures. The elastic parts of the lens are formed from a silicone elastomer, a material widely used in soft contact lenses today.

Highly transparent and low haze

“Obviously, contact lenses should not obstruct the wearer’s view either, and they must be both highly transparent and have a low haze for optical clarity,” explains Park. “For the hybrid substrate of our lens, we thus used heterogeneous materials whose refractive indices do not vary very much and which match that of local areas of the eye. This ‘index-matching’ approach provides the wearer with a clear view – that is, outstanding transparency (93% in the visible light regime) and low haze (1.6% in the visible light regime).

“What is more, we maximized the elastic portion (a conventional soft contact lens material, as mentioned) so that it covered nearly 97% of the total area of the contact lens. This material allows oxygen to pass through it, thus making it more comfortable for the wearer.” The new setup avoids the need for additional, external measuring equipment: thanks to a sensor made of graphene, the wireless circuitry responds to glucose concentrations in the moisture naturally found in the eye. The sensor displays this information through the LED, which turns off if the glucose concentration rises above a certain threshold (0.9 mM), providing a useful warning system for the wearer.

Towards industrialization and commercialization?

The researchers chose 0.9 mM of glucose as a threshold for diagnosing diabetes, but this value can be changed simply by changing the resistance of the sensor. As the concentration of glucose in tear fluid increases, the resistance of the sensor decreases, which in turn decreases the resistance of the parallel circuit formed by the LED and the sensor. The bias applied to the LED therefore drops below its turn-off voltage, switching the LED off.
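The switching logic can be sketched as a simple voltage-divider model. All component values below are hypothetical, chosen only to illustrate the behaviour described; they are not from the paper:

```python
# Hypothetical values for illustration only (not from the paper):
V_SUPPLY = 3.3     # V, assumed drive voltage delivered wirelessly
R_SERIES = 10e3    # ohm, assumed series resistance in the circuit
V_LED_ON = 1.8     # V, assumed LED turn-on (forward) voltage

def led_is_on(r_sensor):
    """Approximate the LED as drawing no current below threshold, and check
    whether the divider voltage across the sensor branch can light it."""
    v_branch = V_SUPPLY * r_sensor / (R_SERIES + r_sensor)
    return v_branch >= V_LED_ON

# Low glucose -> high graphene-sensor resistance -> LED stays lit:
print(led_is_on(50e3))   # True  (branch voltage ~2.75 V)
# High glucose -> sensor resistance falls -> bias drops below turn-on:
print(led_is_on(5e3))    # False (branch voltage ~1.1 V)
```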

The researchers say that they have already successfully tested out their device on live rabbit eyes. The animals showed no “abnormal behaviour” during the tests, they say, and the lens remained in place.

“Thanks to the fact that the lens is made using rapid moulding and conventional photolithography techniques, it could easily be produced on a large scale,” Park tells nanotechweb.org. “We believe that it might be commercialized in the next five years, if companies are interested in investing in the device.”

The new lens is detailed in Science Advances DOI: 10.1126/sciadv.aap9841.

India’s cycling ‘sweet spot’

Cycling and other non-motorized transport could help India simultaneously reduce greenhouse-gas emissions and improve public health.

That’s the conclusion of researchers in Germany and Austria, who analyzed Indian household survey data to understand the trade-offs associated with energy usage in a country experiencing rapid urbanization.

Non-motorized transport presents an attractive “sweet spot” for development, whilst other policies, such as improving access to modern cooking fuels, could dramatically lower morbidity while increasing emissions only moderately, the team showed.

“Not all energy use is equal,” said Felix Creutzig of the Mercator Research Institute on Global Commons and Climate Change (MCC) in Germany. “For sustainable public-health development, the provision of one type of energy, namely electricity, is more important than another, namely gasoline for cars.”

India is expected to double its urban population within the next generation, from 410 million today to more than 800 million in 2050. In principle, this urbanization could improve well-being, by giving access to better infrastructure and living conditions. The reality, however, is that a large portion of India’s urban dwellers are living in slums, with a high risk of health problems.

Creutzig and his colleague Sohail Ahmad at the MCC, and Shonali Pachauri at the International Institute for Applied Systems Analysis, Austria, believe that India’s public policies need to come up to speed. Although previous studies have explored the health burdens of specific transport and energy policies, they only did so at a population level, which masks smaller, heterogeneous trends, Creutzig and colleagues say.

To look at the issue more closely, the researchers turned to the “micro data” contained in two rounds of India’s longitudinal household surveys, from 2005 and 2012. The data allowed them to explore energy expenditure and use at a household level, and health at an individual level, while controlling for other factors such as income, urbanization and living conditions.

One finding was that a blunt tax on electricity usage – with the aim, for example, of reducing emissions – would be unduly detrimental to public health, by turning poorer people away from modern heating and cooking methods. On the other hand, higher taxes on gasoline could benefit both public health and the climate, especially if the taxes were used to fund clean mass transit.

Such a policy would tie in with the encouragement of cycling, the researchers’ claimed sweet spot. According to the analysis, a 10% increase in cycling could lower diabetes, cardiovascular diseases and other health issues for 0.29 million people, while also abating emissions by 1.5 kilotonnes of carbon dioxide equivalent annually.

In addition, a 10% increase in urbanization, and a concurrent improvement in access to modern cooking and clean water, could increase electricity-related emissions by 84 kilotonnes of carbon dioxide equivalent annually, but lower short-term morbidity for 2.4 million people.

“When starting from very low levels of energy use, access to certain basic energy services – electric and clean cooking – is essential for improving public health and well-being,” said Pachauri. “Greater discretionary energy use associated with personal motorized automobiles fuelled by gasoline and diesel, however, can have adverse implications for health and emissions.”

The team published the findings in Environmental Research Letters (ERL).

Copyright © 2026 by IOP Publishing Ltd and individual contributors