
Evolution of quantum spins looks surprisingly classical

Describing how matter behaves at the quantum-mechanical level is notoriously hard, because the equations get so difficult to solve once there is more than a handful of particles involved. But a new experiment shows that the fine details might not matter too much – and that, if we “squint” at a many-particle quantum system to blur them, how the system changes over time can look surprisingly like the familiar classical process of diffusion.

Figuring out the exact trajectories of particles of ink in a glass of water as they are battered this way and that by water molecules is difficult. But you do not need to keep track of every molecule, because Fick’s law of diffusion says that the flow of material is simply proportional to its concentration gradient.
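In symbols (a standard textbook statement, not anything specific to the new experiment), Fick’s first law in one dimension combines with the conservation of material to give the familiar diffusion equation for a concentration c, with flux J and diffusion constant D:

```latex
J = -D\,\frac{\partial c}{\partial x}
\qquad\text{and}\qquad
\frac{\partial c}{\partial t} = -\frac{\partial J}{\partial x}
\quad\Longrightarrow\quad
\frac{\partial c}{\partial t} = D\,\frac{\partial^{2} c}{\partial x^{2}}
```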

Fick’s law is an example of the coarse-graining commonly used in hydrodynamics. For example, a fluid can be considered a collection of little “parcels”, each containing countless molecules, which slide past one another with friction.

Local blobs

Researchers have recently sought to describe quantum-mechanical many-particle systems using such an approach. Say you have a material containing a bunch of quantum spins that interact with one another: you create a local “blob” of oriented spins and calculate how it will then spread through the system.

“Hydrodynamics is generally the study of how a system goes from local equilibrium to global equilibrium,” says Joel Moore of the University of California at Berkeley. The equations of fluid mechanics, he says, assume that any detailed information about the initial state – where the particles are and how they are moving – is quickly lost once they have experienced just a few interactions (collisions) with others. “Then the fluid equations describe everything on longer time scales, from microseconds to years, very accurately.”

To study an analogous system of quantum spins, Moore has collaborated with Berkeley physicist Norman Yao and others. They examined a tiny single crystal of diamond containing two types of spins, both arising from the unpaired electrons of defects in the carbon-crystal lattice. One kind of defect, called a P1 centre and consisting of a nitrogen atom in place of a carbon, was dispersed randomly through the lattice at a concentration of 100 ppm. The other defect, called an NV centre and consisting of a nitrogen substitution next to an empty space in the lattice, was dispersed at a concentration about two hundred times lower.

Spins with feelings

These spins can “feel” one another over long distances of many times the atomic separation. To see how the dynamics of the spins evolved, the researchers used the NV defects both to set up a perturbation and to probe the response. They could polarize these spins in one region using a laser pulse, and then transfer that localized disturbance to the more populous P1 spins by coupling the two using a magnetic field to bring them into resonance. Then they monitored how this perturbation spread through the P1 spins as the system moved towards its global equilibrium configuration.

“The lore is that if you take a uniform initial state and make a packet of excess energy or spin somewhere in the system, this packet will spread out according to some differential equation,” says Yao. For a quantum system, we might expect that to be the Schrödinger equation. But it would be very challenging to describe the process that way for all the interacting spins.

The measurements showed, however, that the overall dynamics were well described by a simpler equation that looked a lot like Fick’s diffusion. In other words, this strictly quantum process turns out to share much the same dynamics as a classical one. If you just measure the spin density at a slightly coarse-grained resolution, explains Yao, then “the differential equation that describes these dynamics can be much simpler than the Schrödinger equation, and instead like the diffusion equation”.
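The coarse-grained picture is easy to play with numerically. The following is a minimal sketch (not the team’s analysis code; the diffusion constant, grid and initial packet are arbitrary choices) of a localized packet spreading diffusively, its width growing as the square root of time:

```python
import numpy as np

# Minimal 1D diffusion sketch: a localized "blob" of spin polarization
# spreading out according to dc/dt = D * d2c/dx2. Illustrative only --
# the diffusion constant, grid and initial packet are arbitrary choices,
# not parameters from the experiment.
D, dx, dt = 1.0, 1.0, 0.1             # diffusion constant, grid spacing, time step
x = np.arange(-50.0, 51.0)
c = np.exp(-x**2 / 10.0)              # initial localized polarization packet

for _ in range(1000):
    # explicit finite-difference update (stable because D*dt/dx^2 <= 0.5)
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

# diffusive spreading: the packet width grows as sqrt(t)
width = np.sqrt(np.sum(x**2 * c) / np.sum(c))
print(f"packet width after diffusion: {width:.1f} grid units")
```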

Not a perfect match

However, the behaviour of the spins was not a perfect match for diffusion. This, says Moore, is partly because the spins, unlike colliding particles, can feel one another over long distances. But it also seems to stem from the fact that the P1 defects are not quite all equivalent: the atoms around each defect might have slightly different local arrangements, producing some random disorder.

Yao says that theoretical studies suggest that other processes of “quantum hydrodynamics” can take different forms, equivalent to other types of coarse-grained classical dynamics. For example, some systems in which spins interact in 1D chains are predicted to have dynamics similar to the so-called Kardar–Parisi–Zhang equation, which governs some kinds of surface-growth processes and the way shock waves propagate.
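For reference, the KPZ equation describes the evolution of a height profile h(x, t) in its standard form, where ν and λ are constants and η is a noise term:

```latex
\frac{\partial h}{\partial t} = \nu\,\nabla^{2} h + \frac{\lambda}{2}\,\bigl(\nabla h\bigr)^{2} + \eta(x,t)
```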

What this suggests is that, at a coarse-grained level, dynamical processes in many-body systems might be rather insensitive to whether they are governed by quantum or classical physics. There is a kind of universality that depends more on the general interactions between the coarse-grained components than on their microscopic details – an echo, perhaps, of the universality seen in magnetic spin systems and classical fluids close to a critical phase transition.

Active field

“Understanding this emergence in detail is challenging for microscopic entities that are quantum, and equally so, in different ways, for microscopics that are classical,” says David Huse of Princeton University in New Jersey. This, he says, “is a long-standing enterprise in the fundamentals of statistical physics, that has become rather active recently for quantum microscopics because of the improved ability to explore it in the lab”.

The experimental work “is an impressive feat”, says Arijeet Pal of University College London. “The demonstration that there is a different hydrodynamic regime from conventional diffusion is an essential first step towards understanding the different dynamical universality classes allowed by quantum physics.”

The research is described in Nature.

The 2021 Physics World China Briefing is now out

While many activities here on Earth have slowed down or been put on hold due to the COVID-19 pandemic, that has not stopped China – and other countries – from forging ahead in space. China has managed several firsts this year, notably landing its first rover on Mars, starting construction of a fully-fledged space station, and successfully returning samples from the Moon.

Cover of the 2021 China Briefing

In this year’s Physics World China Briefing, we report how that progress is only set to accelerate, with China’s space station set to be complete next year, giving it a permanent presence in Earth’s orbit for the next 10 years and more. The mid-2020s will also see China put a dedicated optical space telescope in orbit as well as launch further lunar craft. It may even start building a Moon base by the end of the decade – not to mention an asteroid mission and further missions to Mars.

We talk to some of the minds behind those missions, including Su Yan from the National Astronomical Observatories of China, Chinese Academy of Sciences, who has played a key role in China’s exploration of the Moon.

Back on Earth, China is also making a big push in materials science, which is set to play a large role in progress towards seven major research areas – including aerospace technology and quantum technology – that China has highlighted in its 14th five-year plan, which began this year.

In this year’s briefing, we talk to Zhongfan Liu, founding president of the Beijing Graphene Institute, about how it is pioneering the industrialization of new graphene materials, as well as to Weihua Wang, director of the Songshan Lake Materials Laboratory in Dongguan, about a new joint venture with IOP Publishing, which publishes Physics World.

Let us know what you think about the publication on Twitter, Facebook or by e-mailing us at pwld@ioppublishing.org.

Where did all the calculus go?

Early in my teaching career, I had a rather uncomfortable exchange with a retired physicist. He challenged me to defend the A-level physics curriculum, which he thought had been “dumbed down” and lacked any solid, mathematical rigour. I regret not putting my thoughts across better at the time, but – with the benefit of almost 15 years’ teaching experience behind me – I now feel more prepared to respond.

I should add that this physicist’s views were not a one-off. Since then, I have had many similar conversations with other university physics lecturers who mourn the lack of mathematical fluency in their first-year undergraduate students. To be clear, I am a huge advocate of mathematics in physics and believe that mathematics is in fact the more important subject, given that it is at the core of new discoveries.

I was surprised to discover that the inclusion of A-level maths content in physics specifications, or syllabuses, is a more recent affair than I had anticipated. It appears to have emerged in the mid-1980s in response to changes to the maths curriculum, which previously had not been standardized. Instead, there were small but significant variations between the A-level maths syllabuses offered by different exam boards. Unfortunately, attempts to remove the differences ended up having a knock-on effect for physics.

Back in the 1970s students benefited from studying some physics topics both in their maths A-level course and in their physics A-level. This gave students more opportunity to apply their learning as well as additional contact time with teachers to perfect these skills. Universities were therefore reluctant to accept physics students who had studied maths A-level courses that featured little or no physics. Admissions tutors instead preferred to accept students who had benefited from this “double study”.

But when maths A-levels were standardized, physics content was removed to make way for other topics, such as probability and statistics, that are important in the social sciences. This reduction in physics content meant there was less overlap between A-level physics and A-level maths. With less opportunity for those taking physics to refine their skills, physics specifications appear to have coped by increasing the onus on the student to study the material independently. Students studying the two subjects were simply no longer getting the rounded experience that they had previously received.

The uptake of physics A-level dropped over the next decade and physics went from being the most popular science to the least popular. One of the reasons suggested for the low uptake was the perception that physics was disproportionately difficult, which is supported by an analysis of student grades. Bright students performed worse in physics than they did in other subjects, which is off-putting when a student needs to strategically consider their subject choices for applying to university.

A good compromise

We need to accept that there are many other reasons to take physics A-level, besides the desire to study the subject at university. The skills it provides are useful to many other fields, which can lead to engaged, scientifically literate citizens. So it is in our interest to be as inclusive as possible. At A-level, teachers cater for a wide range of career paths, not just those progressing on to a physics degree. We deliver the subject in a way that is attractive and useful to both sets of students and accept that not all A-level physics students will be studying A-level maths. One consequence of adjusting for this balance can be a loss in mathematical rigour, particularly when it comes to notation.

The skills A-level physics provides are useful to other fields, and can lead to engaged, scientifically literate citizens

Niki Bell

It does not help either that when students get to university, lecturers often use a vastly different mathematical notation from what students were taught at school. In fact, lecturers often do not adjust their presentation of mathematics at all to accommodate students’ varying experience. This can be incredibly intimidating, and I often wish that lecturers would adapt their teaching to be more inclusive.

Maths for physics

I believe that the solution lies in recombining A-level maths and physics, by redesigning maths A-level so that students can specialize in their second year. Students do not typically get much opportunity to choose their modules in A-level, which are decided by the exam board or school leaders. But if this were changed, then those wanting to do a physics degree could select “maths for physics” in their second year of maths A-level, allowing them to focus on concepts such as mechanics and applied calculus. Those wanting to specialize in other subject areas would be able to select more appropriate mathematics, such as probability and statistics. This would mean that those not studying maths can still do physics if they wish. Redesigning the maths course this way would reintroduce the overlap between the subjects and give students more opportunity to develop skills in class, not just for physics, but for every field.

I acknowledge that there are practical problems with this model, especially for small colleges that do not have the same staff numbers as larger ones, but I believe the potential benefits are worth it. I have taught on such a course and can attest to its success. Teaching physics to students on a specialized maths course was incredibly fulfilling. Their mathematical ability was excellent, the students were confident and had ample opportunity to hone their subject-specific skills, and those progressing on to study physics at university were well prepared for the mathematical content. Achieving this was not easy. Making the most of the overlap, and creating that experience, took careful management and negotiation between departments, but outcomes for students were worth it.

I believe this is the best way forward, not only for physics students but for maths students too.

Dose measurement system could improve phototherapy for jaundiced newborns

Neonatal jaundice is a common condition in newborn babies, caused by the build-up of bilirubin in the blood. While the majority of babies will recover naturally, if extremely high blood bilirubin levels are left untreated, there’s a small risk that the bilirubin could pass into the brain and cause brain damage. Such cases are usually treated using phototherapy, in which blue light shone onto the skin reduces bilirubin to safer levels.

Phototherapy is delivered using overhead lamps or blanket-style illumination devices. Previous studies, however, revealed that many such phototherapy devices fail to deliver the recommended irradiance levels. Now, researchers at University Hospital Coventry have developed the first system that can measure the integrated dose rate of light delivered to a neonate body shape. They describe their approach in Medical Engineering & Physics.

Detection array

Phototherapy devices are typically evaluated by measuring peak irradiance levels at various locations over a neonate’s body surface. But this approach cannot estimate the total rate of energy delivery within a specific spectral range over the entire exposed body surface.

“The integrated dose rate is the parameter that gives the best indication if phototherapy is likely to be appropriate,” explains first author Douglas Clarkson. “A single local intensity value can be misleading if the ‘light field’ size is small relative to the size of the neonate.”

To determine the integrated dose rate delivered by a phototherapy device within a selected wavelength interval, Clarkson and colleagues used an array of 192 blue-wavelength-enhanced silicon photodiodes to measure the dose contributions over the surface of a neonate body shape.

As the physical model for their experiments, the researchers used a body shape based on the SimNewB simulator device, which closely corresponds to a term neonate of average weight and height. They divided the surface into 12 anatomical regions and attached 16 calibrated photodiodes to each region. They then used this setup to estimate the dose contributions by anatomical region, in the spectral range 460–490 nm, for three neonatal phototherapy devices: Natus neoBlue; GE Medical BiliSoft blanket; and GE Medical Giraffe Spot.
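To make the idea of an integrated dose rate concrete, here is a rough sketch of the bookkeeping involved (not the authors’ code; region names, areas and readings are illustrative placeholders): each region’s mean irradiance is weighted by its area, and the contributions are summed.

```python
# Sketch of the integrated dose-rate bookkeeping: average the irradiance
# readings in each anatomical region, weight by the region's area and sum.
# Region names, areas and readings are illustrative placeholders, not
# values from the Coventry study.
regions = {
    "rear torso":  {"area_cm2": 180.0, "readings_mW_per_cm2": [1.2, 1.1, 1.3, 1.2]},
    "front torso": {"area_cm2": 180.0, "readings_mW_per_cm2": [0.1, 0.2, 0.1, 0.1]},
}

total_mW = 0.0
for name, region in regions.items():
    readings = region["readings_mW_per_cm2"]
    mean_irradiance = sum(readings) / len(readings)
    contribution = mean_irradiance * region["area_cm2"]   # mW delivered to region
    total_mW += contribution
    print(f"{name}: {contribution:.0f} mW")

print(f"integrated dose rate (460-490 nm band): {total_mW:.0f} mW")
```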

As expected, the BiliSoft blanket delivered the majority of output to the rear torso, although the team noted a significant difference between dose to the right and left sides. The Giraffe Spot unit delivered light centred on the navel, with relatively little dose to the arms, legs and head. The Natus neoBlue delivered a more extensive light field, with the arms, legs and upper torso all receiving useful dose contributions.

The measurements revealed significant differences in delivered levels of phototherapy, which would likely also lead to differences in the relative clinical effectiveness of each phototherapy system.

Clinical impact

The reduction in serum bilirubin during treatment is a key parameter indicating the effectiveness of neonatal phototherapy. The test system’s ability to measure the dose rate within specific spectral bands enables the creation of a model relating the rate of energy delivery to the neonate to the rate of bilirubin decrease during treatment.

“Newborns get into a bilirubin imbalance due to breakdown of haemoglobin. It is important that the phototherapy dose rate is sufficiently high to counteract this factor and act to decrease it to safe levels,” says Clarkson, noting that the important clinical focus for this work was provided by consultant neonatologist Prakash Satodia at University Hospital Coventry.

The team also suggest that the technology could help develop a revised medical device equipment standard that allows determination of output power delivery within designated wavelength intervals for a specific neonatal body shape.

The researchers conclude that their system provides the “missing link” in determining the relative effectiveness of neonatal phototherapy lamps and optimizing clinical phototherapy. To provide a more practical measurement device, particularly for premature infant body shapes that would be challenging to cover in photodiodes, they are developing a modified design based on wrap-round thin-film photovoltaic technology.

Clarkson is also creating an interactive software tool to optimize phototherapy based on dose rates delivered within specific wavelength intervals. “It is hoped that this type of software, while having a research focus at this stage, will filter into clinical use as the best way to manage neonatal phototherapy,” he tells Physics World.

Ghost surface polaritons seen for the first time

Hyperbolic polariton illustration

The existence of ghost hyperbolic surface polaritons has been demonstrated by an international collaboration including researchers in China and the US. Based at Huazhong University of Science and Technology (HUST), National University of Singapore (NUS), National Center for Nanoscience and Technology (NCNST) and the City University of New York (CUNY), the team showed that the polariton – a hybrid light-matter quasiparticle – has a record-breaking propagation distance of three times its photon wavelength. This ghost polariton is an exciting discovery that has applications in sub-wavelength, low-loss imaging, sensing and information transfer. The full study is described in Nature.

Previously, hyperbolic polaritons, which arise from the strong coupling of electromagnetic radiation to lattice vibrations (phonons) in anisotropic crystals, had only been observed in two forms: bulk polaritons and surface polaritons. Bulk, volume-confined, hyperbolic polaritons (v-HPs) have a real out-of-plane wavevector and hence can propagate within the material supporting them. Surface-confined hyperbolic polaritons (s-HPs), however, have an entirely imaginary out-of-plane wavevector, and so decay exponentially away from the crystal surface, a property called evanescence. The hyperbolic dispersion of these polaritons is the result of the crystal’s dielectric anisotropy, which results in hyperbolic isofrequency contours in k-space (momentum space) and concave wavefronts in real space.
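For context, a textbook dispersion relation shows where the hyperbolas come from. For extraordinary waves in a uniaxial crystal with its optical axis along z (a standard result, not taken from the paper), the isofrequency contours in k-space turn from ellipses into hyperbolas when the two permittivity components have opposite signs:

```latex
\frac{k_x^{2} + k_y^{2}}{\varepsilon_{\parallel}} + \frac{k_z^{2}}{\varepsilon_{\perp}} = \frac{\omega^{2}}{c^{2}},
\qquad \varepsilon_{\parallel}\,\varepsilon_{\perp} < 0
```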

Most studies on v-HPs and s-HPs have been performed in thin layers of van der Waals crystals. These crystals comprise stacks of covalently bound 2D layers that are held together by weak van der Waals forces. However, in such crystal layers there is no control over the optical axis. This is the direction in which propagating light experiences no birefringence and it is typically aligned with the layers.

Friendly ghosts

Weiliang Ma and colleagues have exploited the fact that the optical axis in calcite (calcium carbonate), a bulk anisotropic crystal, can exist at an angle to the surface and can be selected at will by mechanical cutting. If the axis is indeed slanted, the optical properties of calcite in the mid-infrared region give rise to ghost hyperbolic phonon polaritons (g-HPs) that are highly anisotropic and highly collimated. These g-HPs propagate along the crystal surface and – similar to s-HPs – decay exponentially away from the surface. However, unlike conventional s-HPs, they are not purely evanescent inside the material and their wavefronts are oblique – which means they are not perpendicular to the direction of propagation.

The team was able to demonstrate the intriguing properties of g-HPs experimentally using nanoscale-resolution near-field imaging. They fabricated a gold microdisc on the surface of several calcite crystal samples and directed infrared light onto it. The disc acts as a nanoantenna, which collects the infrared illumination and “launches” two highly confined, diffraction-less polariton rays that travel up to 20 microns in different directions. In comparison, v-HP rays travel about 3 microns in van der Waals materials.

The experiment is depicted in the figure, where to the left of the infrared beam we can see the hyperbolic polariton wavefronts, and to the right we see the two highly-collimated polariton rays. The researchers showed that the angle between the two rays increases with increasing excitation frequency and increasing angle between the optical axis and the crystal surface. Additionally, the polariton confinement can be controlled by the disc diameter.

Polariton revolution

This is the first observation of ghost polaritons and the discovery offers multiple ways of controlling their behaviour. “Polaritonics […] has been truly revolutionizing optical sciences in the past few years. Our discovery is the latest example of the exciting science and surprising physics that can emerge from exploring polaritons in quite conventional materials like calcite, with exciting implications for nanophotonic technologies,” says Andrea Alù of the Photonics Initiative at the CUNY Advanced Science Research Center and a co-leader of the research.

Why open-source software is so powerful for physics: find out in the September 2021 issue of Physics World

Cover of Physics World September 2021 issue

Twenty-three thousand. That’s roughly how many people helped create the first ever image of a black hole, taken by the Event Horizon Telescope (EHT) in 2019.

Not all are formal members of the EHT collaboration – the vast majority are those who write, maintain and support the free and open-source software tools that the researchers used in their work.

But without the imaging software all those people had written, it would never have been possible to extract the famous image from the EHT data.

It was perhaps the most high-profile example of how free and open-source software is becoming a powerful tool in academic research, helping scientists to collaborate better and work smarter.

In the September 2021 issue of Physics World, which is now out in print and digital formats, Achintya Rao investigates how such software is being used in physics research, and its role in the wider open-science movement.

If you’re a member of the Institute of Physics, you can read the whole of Physics World magazine every month via our digital apps for iOS, Android and Web browsers. Let us know what you think about the issue on Twitter, Facebook or by e-mailing us at pwld@ioppublishing.org.

For the record, here’s a run-down of what else is in the issue.

• Steven Weinberg: a legend is lost – By unifying the electromagnetic and weak forces, Steven Weinberg was vital to the formation of the Standard Model of particle physics. Michael Banks looks back at a giant of theoretical physics

• John Enderby: a passion for physics and publishing – Michael Banks joins the tributes to Sir John Enderby, the physicist and former chief scientific adviser to IOP Publishing, who died last month aged 90

• Exploring other worlds – Ling Xin talks to Su Yan from the National Astronomical Observatories of China, Chinese Academy of Sciences, about China’s rise in lunar science

• The wonder of Weinberg – Matin Durrani recalls his brushes with Steven Weinberg, who was probably the greatest theorist of his age

• Where did all the calculus go? – Niki Bell argues that mathematics A-level could be reformed so that it does more to support physics students

• Competitive instincts – Physicists are competitive, but that doesn’t make them cutthroat, argues Robert P Crease

• Crossing the valley – In his third article on funding hi-tech firms, James McKenzie looks at recent initiatives to help them jump over the “valley of death”

• Standing on the shoulders of programmers – Free and open-source software is growing to be a powerful tool in academic research, helping scientists to collaborate better and work smarter. Achintya Rao investigates how such software is being used in physics research, and its role in the wider open-science movement

• Imaging metabolism in action – From improving the sensitivity of ion sources to boosting image resolution, Felicia Green and Anna Simmonds unveil the ambitious biological mass spectrometry programme at the Rosalind Franklin Institute to image molecular interactions in tissues

• The enduring mystery of the solar corona – Physicists have long known that the Sun’s magnetic fields make its corona much hotter than the surface of the star itself. But how – and why – those fields transport and deposit their energy is still a mystery, as Philip G Judge explains

• Keeping nuclear secrets – Margaret Harris reviews Restricted Data: the History of Nuclear Secrecy in the United States by Alex Wellerstein

• Flights of fancy, feet on the ground – Philip Moriarty reviews Fear of a Black Universe: an Outsider’s Guide to the Future of Physics by Stephon Alexander

• Physics for biological breakthroughs – The European Molecular Biology Laboratory is a highly multidisciplinary place, employing people from across all scientific fields. Laura Hiscott speaks to Wolfgang Huber, a physicist at the lab who uses his mathematical skills to contribute to the life sciences

• Ask me Anything – Careers advice from Joanne O’Meara, professor of physics at the University of Guelph, Ontario, Canada

• A demon of a puzzle – In 1871 James Clerk Maxwell proposed a puzzle now known as “Maxwell’s Demon” in his book Theory of Heat. We celebrate its 150th anniversary in this thermodynamics-themed cryptic crossword compiled by Ian Randall

Wave–particle duality quantified for the first time

One of the most counterintuitive concepts in physics – the idea that quantum objects are complementary, behaving like waves in some situations and like particles in others – just got a new and more quantitative foundation. In a twist on the classic double-slit experiment, scientists at Korea’s Institute for Basic Science (IBS) used precisely controlled photon sources to measure a photon’s degree of wave-ness and particle-ness. Their results, published in Science Advances, show that the properties of the photon’s source influence its wave and particle character – a discovery that complicates and challenges the common understanding of complementarity.

The double-slit experiment is the archetypal example of complementarity at work. When a single photon encounters a barrier with two thin openings, it produces an interference pattern on a screen placed behind the openings – but only if the photon’s path is not observed. This interference pattern identifies the photon as a wave since a particle would create only one point of light on the screen. However, if detectors are placed at the openings to determine which slit the photon went through, the interference pattern disappears, and the photon behaves like a particle. The principle of complementarity states that both experimental outcomes are needed to fully understand the photon’s quantum nature.

Signal and idler

The new study adds to this principle by showing that the properties of the slits also matter. In their experiment, the IBS researchers shone so-called “seed beams” of laser light onto two crystals of lithium niobate. Each crystal produces two photons when illuminated: a “signal” photon and an “idler” photon. The researchers sent the signal photon into an interferometer to create interference patterns and quantify the photon’s wave nature, while observing the path of the idler photon to pinpoint its particle character. Because the signal and idler photons are produced together, they form a single quantum state described by both the wave and the particle property measurements.

Diagram of the optical system used in experiments that quantified wave-particle duality

By changing the intensity of the seed beams in each crystal, the researchers independently altered the crystals’ chances of emitting photons – a process akin to controlling a photon’s “attraction” to each slit in the classic experiment. When one of the crystals was very likely to emit photons, the pattern the interferometer produced was barely visible, implying that the photon was mostly particle-like. When the crystals’ emission probabilities were equal, the interference pattern was sharp, highlighting the photon’s wave character. “The wave nature of the photon could be extracted as a visibility of the interference pattern,” explains Tai Hyun Yoon, a physicist at IBS and a co-author of the study.

Corroborating theoretical results

In their experiments, Yoon and co-author Minhaeng Cho focused on regimes where the photon was acting partly as a wave and partly as a particle. Previous theoretical studies indicated that the amount of wave-ness and particle-ness in such a system should satisfy a simple equation involving source purity – that is, the likelihood that a particular crystal source will be the one that emits light. The new study is the first complementarity experiment to account for and precisely control this source purity, and it corroborates a prediction made by Xiaofeng Qian and colleagues that source purity μs, interference pattern visibility V and path distinguishability P are related through the expression P² + V² = μs².
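In standard notation (the visibility definition is textbook; the duality relation is the one quoted above), the quantities fit together as follows, with the familiar bound P² + V² = 1 recovered for a perfectly pure source, μs = 1:

```latex
V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\qquad
P^{2} + V^{2} = \mu_{s}^{2} \le 1
```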

“Having this experimental capability makes it possible to confirm the theoretical structures that we were discussing, to test how the source is controlling a single quantum particle’s wave–particle duality,” says Qian, a physicist at Stevens Institute of Technology in New Jersey, US, who was not involved in the present study. “This was a great achievement, that they could produce a single photon state where all the parameters were at their control,” agrees Girish Agarwal, a physicist at Texas A&M University, US, and Qian’s collaborator on this earlier theoretical work.

The new study also showed that controlling and quantitatively measuring the photon’s wave and particle character can be recast as measuring the entanglement between idler photons and the detectors that identified their path. In this way, researchers connected complementarity to a property of photons that is commonly exploited in practical quantum devices. “This extra controllability [in our set-up] could be an interesting and useful way to quantum engineer states that might be of interest in quantum information,” says Cho.

Besides its possible applied value, the researchers say that their study challenges physicists’ traditional thinking about complementarity. “In the context of pure theory and fundamental experiments, this experiment does add something new,” agrees Peter Milonni, a physicist at the University of Rochester, US who was not an author of the present paper. Qian adds that the experiment quantitatively proves that instead of a photon behaving as a particle or a wave only, the characteristics of the source that produces it – like the slits in the classic experiment – influence how much of each character it has. “This experimental test and the theoretical quantitative analysis really deliver the message that a quantum particle can behave simultaneously, but partially, as both,” he concludes.

Tracking China’s rapid rise in lunar exploration

As chief designer of the data acquisition systems for China’s lunar and Mars missions, what are you currently working on?

I oversee the development and running of ground-based antennas and receivers for China’s lunar and Mars missions. We have built three 40–50 m-level steerable radio telescopes in Beijing and Kunming to obtain data from the lunar missions, and last year we completed a 70 m telescope in Tianjin for the Mars mission. I am also overseeing data processing related to various microwave-band scientific payloads. This includes the lunar penetrating radar on the Chang’e-3 mission and the low-frequency radio spectrometer on Chang’e-4, and it will also apply to the upcoming lunar regolith penetrating radar on Chang’e-5. In addition, I oversee radar data from the orbiter and rover of the current Mars mission, Tianwen-1.

Planetary science is thriving in China and the number of planetary scientists has increased exponentially over the past decade

Su Yan

How does that work differ for the lunar and Mars missions?

Mars is much more distant than the Moon so the technologies involved are quite different. For instance, we can use a 50 m telescope to receive data from the Moon, but it won’t work for Mars since the signals are extremely weak. That is why we built the new telescope in Tianjin and integrated the four telescopes so they now function as a 103 m-aperture telescope. This way, we can meet the massive data needs from a total of 13 scientific instruments onboard the orbiter and rover of Tianwen-1.
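The quoted 103 m figure is consistent with simple aperture arithmetic. As a quick plausibility check (the individual dish sizes below are our assumption, since the interview gives only “40–50 m-level” and 70 m), arraying antennas adds their collecting areas, so the effective diameter is the root-sum-square of the individual diameters:

```python
import math

# Arraying radio dishes adds their collecting areas, so the effective
# aperture is the root-sum-square of the individual diameters. The
# diameters below are an assumption consistent with "40-50 m-level"
# dishes plus the 70 m Tianjin telescope.
diameters_m = [40.0, 40.0, 50.0, 70.0]
effective_m = math.sqrt(sum(d**2 for d in diameters_m))
print(f"effective aperture: {effective_m:.0f} m")   # prints ~103 m
```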

How did you get involved in lunar exploration?

I was very lucky. My background is in electrical engineering and as a graduate student I joined the pre-research team of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) under the supervision of Nan Rendong. I spent a lot of time at the Miyun ground station in suburban Beijing developing receivers for pulsar signals. I also had the opportunity to spend a year at the Jodrell Bank Observatory in the UK, where I developed feeds and the “orthomode” transducer for a radio telescope array called MERLIN.

How important were those early days on FAST?

FAST was then in the design phase and the budget to construct it seemed astronomical. However, Nan never stopped advocating for the project even when he was diagnosed with cancer. I admire his devotion to FAST. He used to say “No matter what we accomplish, it’s going to be trivial compared to the vast universe out there. What’s important is to enjoy the process of life.” I will always remember that.

Speaking of enjoyment, you must have had some unforgettable moments in the lunar control room

I still remember when the Chang’e-1 orbiter was supposed to start transferring data and there was no signal on the screen. I was nervous, but a few minutes later the data came pouring in. Another intense moment was when we learned that the Chang’e-3 lander had just touched down safely on the Moon’s surface. We were thrilled! Our team was the first to confirm the success, and the landing images we received were so much better than what we had expected.

Which findings from the Chang’e missions excite you the most?

I was thrilled about the world’s first high-resolution radar image of the Moon’s far side, taken by the ground penetrating radar on the Chang’e-4 rover. For the first time, the image showed three distinct geological layers less than 40 m below the lunar surface (Science Advances 6 eaay6898). The rover has travelled nearly 700 m so far on the far side and we hope that it will continue to head west to “see” a different geological landscape that is mainly made up of basalts.

What is the most challenging part of your job?

I always enjoy field work, but sometimes it is difficult. For example, to test the ground-penetrating radar on the Mars rover, we went to the Laohugou glaciers in western China because the dielectric constant of the glaciers is similar to that of the Martian regolith. The site was more than 4400 m above sea level and I suffered from severe altitude sickness. I felt weak, but we had to get the work done. During one outing, the snow was so heavy that we had to carry all the instruments up the glaciers ourselves. It was very challenging physically and mentally for the whole team.

I sincerely hope more young people who are passionate about space exploration will join us

Su Yan

What are you working on next?

We need to complete the data processing for the Chang’e-4 radar and the instruments on Tianwen-1 to see if there are interesting findings. I am also involved in the development of radar devices on future missions such as China’s asteroid probe.

Planetary science is thriving in China and the number of planetary scientists has increased exponentially over the past decade. I sincerely hope more young people who are passionate about space exploration will join us.

Smart inflatable hand offers lighter, more affordable prosthetic

So many of the actions that we perform every day rely on the precise movements of our hands. For people who have had an upper-limb amputation, prosthetics could give back some amount of this function that most take for granted. However, these prosthetics are often heavy, rigid and expensive. Researchers from Massachusetts Institute of Technology (MIT) and Shanghai Jiao Tong University aim to restore function with their high-tech, inflatable neuroprosthetic hand, which they describe in Nature Biomedical Engineering.

More than a feeling

Neuroprosthetics are smart bionic limbs that not only look like the missing body part, but also use the person’s own remaining nerve signals to control movement in the prosthetic. This gives the user back some of the functional movements of their hand. But we don’t only use our hands as grabbing devices – hands also provide tactile feedback on whatever we are touching. To best replicate the human hand, an ideal neuroprosthetic would combine both of these features in a light and flexible package.

The research team explored a new direction when designing their prosthetic. They replaced rigid metal elements with a soft, stretchy elastomer, with the balloon-like fingers controlled by precise inflation from a simple pneumatic system instead of electrical motors. Guided by computer modelling of the required pressures, the pneumatic pumps can achieve five different common grips. Sensors detecting electrical signals from the user’s limb control these pumps to deliver whichever movement the user wishes to perform.
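A toy sketch of that control architecture (all grip names and pressures below are hypothetical, not values from the MIT/Shanghai Jiao Tong design): a decoded muscle signal selects a grip, which maps to pressure setpoints for the five inflatable fingers.

```python
# Toy sketch of the control architecture: a decoded muscle (EMG) intent
# selects a grip, which maps to pneumatic pressure setpoints for the five
# inflatable fingers. All grip names and pressures are hypothetical, not
# values from the MIT/Shanghai Jiao Tong design.
GRIP_PRESSURES_KPA = {
    "open":  [0.0, 0.0, 0.0, 0.0, 0.0],
    "pinch": [80.0, 80.0, 0.0, 0.0, 0.0],     # thumb and index only
    "power": [90.0, 90.0, 90.0, 90.0, 90.0],  # all five fingers
}

def pump_setpoints(decoded_intent: str) -> list:
    """Return per-finger pressure targets (kPa) for a decoded grip intent."""
    return GRIP_PRESSURES_KPA.get(decoded_intent, GRIP_PRESSURES_KPA["open"])

print(pump_setpoints("pinch"))
```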

And the user can do more than just control the position of the prosthetic fingers. Thanks to pressure sensors in the fingertips, an electrical signal sent back up the limb provides feedback about what each finger is touching. These electrical impulses enable the user to “feel” the pressure on the artificial fingers and to know, for instance, which finger on the hand is being touched.

Getting hands-on

When the researchers tested this new smart prosthetic hand with volunteers, it worked at least as well as traditional neuroprosthetics in typical tests of hand function. The users were able to handle food, objects and tools, and use them naturally, whilst also being able to interact with people, animals and their environment. Even delicate tasks, such as precisely inserting complex shapes into corresponding slots, were possible.

Going further than typical neuroprosthetics, the researchers also demonstrated that the tactile feedback worked in a blindfolded test, where the user could feel whether an object was in their grasp and lift it if they felt they were holding onto it. This step-change in prosthetic capabilities is made possible by the addition of the pressure sensors.

Importantly, the lightweight design doesn’t sacrifice durability. Despite weighing less than one third of a kilogram and being made of an elastomer, the prosthetic survived being hit with a hammer and run over by a car, recovering to remain functional.

Whilst further work is needed to make this new prosthetic a viable option for patients who have undergone amputation, the researchers are optimistic about its potential. “This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” says co-senior author Xuanhe Zhao. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

Were high-energy neutrinos from a supernova detected 34 years ago?

Data collected more than thirty years ago contain what could be evidence of high-energy neutrinos generated by a supernova. That is the claim of Yuichi Oyama, a physicist at the KEK research institute in Japan, who worked on one of two experiments that he says appear to have intercepted such particles from the SN1987A event but which did not release the relevant data at the time. The find could help explain the origin of the most energetic cosmic rays, but other experts reckon the evidence does not stack up.

Supernovae are enormous explosions occurring when heavy stars run out of nuclear fuel and implode, in the process creating shock waves that eject the star’s exterior. SN1987A was the closest supernova to be seen for well over 300 years, taking place about 170,000 light-years from Earth in the Large Magellanic Cloud. Its light arrived on 23 February 1987 and reached peak brightness three months later. It was also the first supernova from which physicists detected neutrinos, with 25 of the elusive subatomic particles being registered by two underground experiments – Kamiokande-II in Japan and IMB in the US.

Those neutrinos were all detected within the space of just 13 s, confirming existing models of supernovae and earning the then Kamiokande-II spokesperson, Masatoshi Koshiba, a share of the 2002 physics Nobel prize. However, the neutrinos all had relatively low energies – a few tens of millions of electronvolts. Theory predicts that such supernovae should also give off neutrinos with billions of electronvolts within about a year of the explosion. This is the result of the decay of pions produced when protons accelerated by the remnant star collide with the ejected material.

Formidable task

Observation of these high-energy neutrinos would confirm that at least some of the extremely energetic cosmic-ray protons reaching Earth are accelerated within supernovae. But identifying such particles is a formidable task. To detect neutrinos, scientists fill huge tanks with water or other liquids and measure the Cherenkov light given off when muons created by the interaction of neutrinos with nuclei in surrounding rock pass through the liquid. But to do so researchers must screen out detections of muons produced by cosmic-ray protons in the atmosphere (the cosmic rays themselves being susceptible to magnetic deflection on their journey to Earth and therefore unable to reveal their own points of origin).

The trick in the case of high-energy neutrinos from supernovae is to look only for those neutrinos reaching detectors from below, having passed through the Earth to get there. Any muon generated by cosmic ray protons in the atmosphere on the far side of the planet will penetrate a few kilometres at most into the Earth’s crust, and therefore cannot be confused with the muons of interest.

In a preprint he recently uploaded to the arXiv server, Oyama analyses the “upward-going muons” recorded by Kamiokande-II and IMB. To work out how many of these events can be tied to high-energy neutrinos from SN1987A, he whittles them down using two criteria. One, that they occurred between 11 August and 20 October 1987. And two, that they were no more than 10° away from the direction of SN1987A. Doing so, he finds four such events that fit the bill – two in each of the experiments.

As Oyama points out, these events might still be noise – specifically, neutrinos generated by cosmic rays on the far side of the Earth. But such background neutrinos are themselves rare. Indeed, he says the Kamiokande-II and IMB data show that not even one such neutrino would be expected in the spatial and temporal window he selected. Combining the individual probabilities that each of the four events is background, he calculates the odds of their not having originated in SN1987A to be 0.27%.
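The flavour of such a background estimate can be captured in a few lines (this is a generic Poisson sketch, not Oyama’s actual calculation; the expected background μ below is a placeholder consistent with “not even one” expected event):

```python
import math

# Generic Poisson estimate of how likely k or more background events are
# when mu are expected in the chosen time/angle window. This is a sketch
# of the kind of reasoning involved, not Oyama's actual calculation; mu
# is a placeholder consistent with "not even one" expected event.
def prob_at_least(k: int, mu: float) -> float:
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(k))

mu = 0.5
print(f"P(>= 4 background events | mu = {mu}) = {prob_at_least(4, mu):.4%}")
```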

“First evidence”

As such, he concludes, these events “might be the first evidence of high energy neutrinos from a supernova explosion”.

As to why he is only now publishing the analysis, Oyama says that neither the Kamiokande-II nor the IMB collaboration considered its detections to constitute statistically significant evidence, and that they only learned of their counterparts’ data when some of the IMB members went to work in Japan in 2004. But even at that point no announcement was forthcoming, and he says he then waited until the death of Koshiba – in November last year – to make the results public.

In doing so, Oyama is sticking his neck out. He released his paper only after having tried, and failed, to persuade the remaining members of the two collaborations to issue a joint publication. Indeed, one former colleague from Kamiokande-II does not agree with how the analysis was done, arguing that it relies on an a posteriori statistical calculation.

Blind analysis

These doubts are shared by Francis Halzen of the University of Wisconsin-Madison in the US, principal investigator of the IceCube neutrino detector at the South Pole. He points out that Oyama has not employed a “blind analysis”, in which the time period and angular window would be chosen before the data are known.

Indeed, Halzen contrasts the latest research unfavourably with work from his own collaboration, which in 2018 reported that it had identified a specific giant elliptical galaxy as being the source of one high-energy neutrino detected by IceCube. That study, he says, involved a blind analysis, yielded much higher-probability evidence and was corroborated by observations at various electromagnetic wavelengths.

Oyama acknowledges that he did not carry out a blind analysis but points out it was impossible to do so on data more than 30 years old. He says that in any case he tried to guard against making the choice of spatial and temporal windows “too intentional”, adding that very slightly smaller windows would have boosted the statistical significance considerably. The important thing, he maintains, is to release the data and let others decide for themselves on their significance.
