
Skin structures aid sidewinding snakes

Snakes that specialize in “sidewinding” – that is, travelling at an angle relative to the direction their head is pointing – have tiny pits on their stomachs that even out the friction they feel as they move. These pits are very different from the spiky structures found on the bellies of conventionally-slithering snakes, and the US-based scientists who discovered them suggest that they evolved to help sidewinders move more easily in desert habitats.

While most snakes keep their entire bodies on the ground as they travel, sidewinders repeatedly lift portions of themselves off the surface. Previous studies suggested that this pattern of movement helps the snakes spread their weight, reducing their chances of triggering avalanches as they cross steep, sandy slopes.

In the latest work, published in PNAS, physicists Jennifer Rieser of the Georgia Institute of Technology (Georgia Tech) and Tai-De Li of the City University of New York found further clues to the sidewinders’ success. Working with colleagues at Georgia Tech, the University of California Riverside and Zoo Atlanta, they used atomic force microscopy (AFM) to compare skins shed by three sidewinding species – the sidewinder rattlesnake (Crotalus cerastes), the Saharan horned viper (Cerastes cerastes) and the Saharan sand viper (Cerastes vipera) – with samples from non-sidewinding snakes.

Slip-sliding away

The AFM images showed that snakes that move in typical undulating fashion, with s-shaped waves travelling from head to tail, have micrometre-sized, rearward-facing spikes along their bellies. In the sidewinder rattlesnake, which lives in the southwestern US, these spikes are smaller and less prominent, while the two Saharan vipers have no spikes at all. Instead, the sidewinders’ undersides are studded with a uniform pattern of microscopic pits.

Images of skin from the belly of non-sidewinding and sidewinding snakes

To understand how these structures affect movement, the researchers developed a mathematical model of the friction the snakes experience as they move. The model showed that the spikes produce a frictional force that depends on the snakes’ direction of travel. “We think that these spiky structures act kind of like corduroy,” Rieser explains. “If you can imagine yourself running your fingers down the ridges of a corduroy pattern, it’s maybe easier than going across all the bumps.”

In non-sidewinding species, this directional friction works to the snakes’ advantage, as it reduces their chances of slipping backwards. For sidewinders, though, it’s a disaster. “If you’re trying to sidewind, the way that you’re applying forces to the ground would basically mean you’re almost fighting yourself to move forward,” Rieser tells Physics World. “You would be generating a backwards force.” Although the non-directional structure of the sidewinders’ pits is not as efficient for forward undulation, she adds, it does enhance sidewinding.
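The direction-dependent friction Rieser describes can be captured in a toy anisotropic Coulomb model, in which rear-facing spikes resist backward slip more strongly than forward slip, while pits give the same coefficient in every direction. The sketch below is illustrative only: the coefficients and the simple forward/backward switch are assumptions, not values or functional forms from the PNAS paper.

```python
import numpy as np

def friction_force(v, body_axis, mu_f=0.1, mu_b=0.3, mu_n=0.2, normal_load=1.0):
    """Directional (corduroy-like) Coulomb friction on a belly element.

    v           : 2D velocity of the belly element
    body_axis   : vector pointing head-to-tail along the body
    mu_f / mu_b : coefficients for sliding forward / backward along the spikes
    mu_n        : coefficient for sliding transverse to the body axis

    All coefficients are illustrative, not fitted values from the paper.
    """
    speed = np.linalg.norm(v)
    if speed == 0:
        return np.zeros(2)
    v_hat = v / speed
    t = body_axis / np.linalg.norm(body_axis)   # longitudinal unit vector
    n = np.array([-t[1], t[0]])                 # transverse unit vector
    vt, vn = v_hat @ t, v_hat @ n
    # rear-facing spikes resist backward slip (vt < 0) more than forward slip
    mu_long = mu_f if vt >= 0 else mu_b
    # friction opposes each component of the sliding velocity
    return -normal_load * (mu_long * vt * t + mu_n * vn * n)
```

Setting `mu_f == mu_b == mu_n` recovers the isotropic friction of a pitted, spikeless belly, which in this picture is what lets sidewinders push sideways without "fighting themselves".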

Biological inssssspiration

Apart from enhancing our understanding of snake biology, Rieser says the group’s findings could have applications in robotics. “This long, slender body plan has evolved many times, not just in snakes, so trying to understand more about the environmental pressures that have led to this body plan and how that might be conducive to movement in certain kinds of environments would be a very interesting path forward,” she says.

As for the differences between the sidewinder rattlesnake and the two Saharan species, Rieser, who is now at Emory University, credits her co-author Jessica Tingle with a plausible hypothesis. Because the Sahara Desert is geologically much older than the arid landscape of the American southwest, Tingle suggested that sidewinding behaviour may have evolved earlier in the Saharan species. “Maybe, given more time, these spikes would disappear in the North American sidewinders as well,” Rieser says.

Searching for signs of past life on Mars with NASA’s Perseverance rover

February 2021 is an exciting month for Mars exploration, with three separate missions arriving at the red planet. In this episode of the Physics World Stories podcast, Andrew Glester takes a closer look at one of those missions – NASA’s Perseverance rover. Equipped with sophisticated imaging devices, Perseverance will look for signs of ancient microbial life and will help pave the way for future human missions to our neighbouring planet.

Today, space exploration is an increasingly global pursuit, involving many nations and private companies, with Mars being an enticing destination. On 9 February the Emirates Mars Mission delivered the Hope probe into Martian orbit, which will provide the most complete picture yet of the planet’s atmosphere. That will be followed by China’s Tianwen-1 spacecraft, which arrives in orbit on 10 February ahead of landing a rover in a massive impact basin in May.

Completing the Mars trio is NASA’s Perseverance rover, landing on 18 February – the focus of this episode. Its destination is the Jezero Crater, a 45-km-wide basin in the Martian northern hemisphere, a landform carved by a river roughly 3.5 billion years ago. The mission will collect rock and sediment samples for future return to Earth, search for signs of ancient microbial life, characterise the planet’s geology and climate, and pave the way for human exploration beyond the Moon.

You will hear from Luther Beegle, the principal investigator for the rover’s SHERLOC instrument – a Raman spectroscopy device that can detect organic matter and minerals. You will also hear from Kelsey Moore, a geobiologist at NASA’s Jet Propulsion Laboratory, whose research has informed the mission’s search for traces of ancient life.

The podcast is sponsored by Teledyne Princeton Instruments. To learn more about how the company is changing scientific astronomy, sign up to their upcoming astronomy webinar.

Rudiments of reality

Frank Wilczek is that rare creature: a first-class scientist who is also an extremely talented communicator. His 2004 Nobel prize, shared with David Gross and David Politzer, was awarded for work elucidating the strong force that binds quarks into particles like those in the atomic nucleus. But he is known too for his creativity in other areas of physics, such as postulating the axion. A particle he confesses to naming after a laundry detergent, it was originally designed to explain charge–parity violation in particle interactions, but is now considered a candidate for dark matter. Wilczek has also championed the idea of quasiparticles called anyons in condensed-matter systems, which have properties intermediate between bosons and fermions (with, respectively, integer and half-integer spins). And he coined the notion of time crystals: dynamical systems that are periodic in time.

All these pet topics make an appearance in his latest book Fundamentals: Ten Keys to Reality, but it is no survey of Wilczek’s greatest hits (fascinating though that would be). Instead, he sets out to identify the core concepts underpinning modern physics: what they are, how they arose, and why we believe they are true. The latter aspect is one of the most valuable facets of this delightful book. Wilczek takes great pains to explain the empirical reasons why he and his colleagues believe what they do – a crucial issue, given how exotic and counterintuitive some of these ideas are.

While many of the topics on display here – from special and general relativity to the Higgs boson and inflation – are well-trodden, Wilczek constantly finds fresh ways to present such ideas, so that you emerge with new insight into what they mean. For example, he tells us that space–time can be regarded as an extremely stiff material: it takes something truly cataclysmic, like a collision of neutron stars, to shake it and generate gravitational waves. Or take his description of the theory of the Big Bang: “Fundamentally, [it] is a strange hybrid of two opposing ideas. It postulates complete equilibrium for the non-gravitational interactions, but maximal disequilibrium for gravity.” I, for one, had never thought of it that way before.

In general, his fundamentals come as no surprise: the idea of atoms and particles, say, and the economy of their description: “According to our present best understanding, the primary properties of matter are these three: mass; charge; spin. That’s it.” He points to the dizzying range of scales in space and time, and describes the emergence of complexity from simplicity. “From different perspectives, we are both small and large”, he writes. “Both perspectives capture important truths about our place in the scheme of things. To get a full and realistic understanding of reality, we must embrace them both.”

One of the virtues of Wilczek’s perspective is that, despite the title, there is a practical slant to his questions: science is a useful tool, not a way to know “the mind of God”. “ ‘What happens next?’ is a more approachable question, and proves to be a much more fruitful question, than ‘Why are things the way they are?’ ” he says. That grounded view makes it much easier to forgive the rare lapses into the language of the Romantic Sublime – “Atoms sing songs that bare their souls, in light.” – that seems so beguiling to much “big picture” science writing, especially in the US.

Behind it all is the physicist’s search for unifying principles: concepts and theories that let us carve the bafflingly complex into manageably small and comprehensible pieces. That this is possible at all – that reductionist science boils down to just a few fundamentals – is perhaps the greatest insight of all, and it’s clear that Wilczek is not just grateful but aesthetically moved by the fact. What is sometimes lost in that process is an acknowledgement that taking items apart doesn’t necessarily tell us how they work. Critics of reductionism are often pitching at a straw man, but in this respect at least they are correct. This is nowhere more true than in the question of why we are asking about fundamentals in the first place, and how we conceive of them: that is, the nature of the human body and mind.

The distance between the physicist’s view and the human one is illustrated in this claim by Wilczek:

“Here, in sixteen words, I will supply a simple algorithm for producing the complete works of Shakespeare, at least one proof of Fermat’s Last Theorem, and the paper that will win the Nobel Prize for Physics in 2025:
1. Choose an ASCII character – a letter, number, space, or punctuation mark – at random.
2. Record it.
3. Repeat.
…This outrageous thought experiment illustrates how a very simple – that is, easily described – structure can contain vast complexities within it.”
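Wilczek’s three steps translate almost verbatim into code. A minimal Python rendering, in which the printable-ASCII alphabet stands in for the full character set he describes:

```python
import random
import string

# the three-step recipe: choose an ASCII character at random, record it, repeat
ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def random_text(n_chars, seed=None):
    """Generate n_chars characters drawn uniformly at random from ALPHABET."""
    rng = random.Random(seed)
    return "".join(rng.choice(ALPHABET) for _ in range(n_chars))

# run this for long enough and any finite text over this alphabet appears
# somewhere in the output, though each occurrence is vanishingly improbable
print(random_text(40, seed=0))
```

The seed makes the run reproducible; left unset, every invocation samples a fresh stretch of the combinatorial haystack Wilczek has in mind.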

I’d argue that in fact none of those works would be produced by the algorithm. They would merely be incidental and meaningless repetitions, indistinguishable from all the junk. In the originals, minds inserted meaning at the outset. The works of Shakespeare are not permutations of symbols on paper, but mental constructs that assume certain types of observer: their information is not self-contained. Mind, agency and evolution all involve the construction of meaning – the very quantity that Claude Shannon’s information theory excluded. Fundamentals do not speak to that, because it is intimately connected to notions such as history, context and environment.

What elevates Wilczek’s book above other surveys of the bedrock of physical theory is that he recognizes this. He dismisses the fallacy that all can be understood by building up from a handful of physical fundamentals: “It is tempting to say that this is the ideal description, while other, high-level descriptions are mere approximations – compromises, which reflect weakness in understanding. That attitude…is superficially deep, but deeply superficial. In order to answer questions of interest, we often need to change focus.”

To do so, Wilczek enlists Niels Bohr’s notion of complementarity, which for him means holding onto multiple viewpoints at once. “The world is simple and complex, logical and weird, lawful and chaotic. Fundamental understanding does not resolve those dualities. Indeed, as we have seen, it highlights and deepens them.” That’s why, as well as consulting the canonical scientific pantheon – Galileo, Newton, Darwin, Maxwell – he says he often goes back to the likes of Plato, St Augustine and David Hume, “to converse with great minds, and to practice thinking differently”.

Fundamentals is, then, not only an exceptional piece of science communication but also a deeply humanistic book. It celebrates what we know without pretending that it is more than it is: “The world is complex beyond our ability to grasp, and rich in mysteries, but we know a lot, and are learning more. Humility is in order, but so is self-respect.”

  • 2021 Penguin 272pp £20hb

Microrobotic device steers a laser beam deep inside the body

A team of researchers at Harvard University have created an innovative microrobotic device capable of steering a laser beam at high speed and with a large range of motion, which could help to significantly improve the performance of minimally invasive surgeries.

A paper outlining the results of the research, published recently in the journal Science Robotics, describes how the advanced opto-electro-mechanical device – consisting of a laser-steering microrobot housed within a miniaturized package – can be integrated with existing endoscopic surgical tools, providing a key advantage over the relatively bulky laser-aiming technology currently available.

As lead author Peter York, from Harvard University’s Wyss Institute for Biologically Inspired Engineering and the Harvard Microrobotics Lab, explains, he and his colleagues became interested in the use of lasers for minimally invasive surgery when they learned how lasers are employed for vocal fold polyp resection in a procedure known as transoral laser microsurgery. During this procedure, the precision of the laser allows surgeons to make very fine dissections to retain healthy tissue and recover voice function.

The Harvard team then began investigating how the precision of that procedure could be brought into other surgical arenas such as laparoscopy and gastroenterology. They then made what York describes as the “unfortunate” discovery that the laser used for vocal fold resection is highly specialized for that procedure alone, and is controlled by mirrors outside the patient’s body and aimed through their airway down to the vocal folds.

Microrobotic laser-steering tool

“However, we realized that the laser steering elements could be miniaturized using the tools and techniques we have developed in the Wood Lab at Harvard,” he says. “The idea is to put the laser steering components onto the ends of surgical tools, such as flexible colonoscopes and laparoscopic manipulators, with the ultimate aim of providing dramatically improved incision quality relative to existing static laser, electrocautery and cold surgical tools.”

The key benefits of this approach, York explains, are additional dexterity (the ability to angulate the laser) and robotic control (the ability to move the laser with greater speed and precision than is possible with handheld tools).

MEMS technology

The device, which is just 6 mm in diameter and 16 mm in length, contains two mirrors, controlled by piezoelectric actuators, that direct the laser’s position. To generate enough motion within such a small device, the team used miniature compliant mechanisms that convert the linear motion of the actuators into rotational motion of the integrated mirrors.

“These compliant mechanisms are built using printed circuit MEMS (micro-electro-mechanical systems), which allow them to be built very compactly. It is the same fabrication technology that the Microrobotics Lab at Harvard has used for other advances, such as the Harvard Robobee,” York says. “An alternative to our approach would be, for example, to use miniature electric motors, but these simply aren’t made small enough to be suitably miniaturized for these applications.”
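The linear-to-rotational conversion can be pictured with a simple lever-arm model: an actuator stroke applied at some distance from a flexure pivot rotates the mirror, and the reflected beam turns through twice the mirror angle. Everything below is a back-of-the-envelope sketch; the 500 μm lever arm is hypothetical, since the article does not give the mechanism’s internal dimensions.

```python
import math

def mirror_rotation_deg(actuator_stroke_um, lever_arm_um=500.0):
    """Rotation of a mirror driven through a compliant lever.

    A linear stroke d applied at distance r from the flexure pivot
    rotates the mirror by theta = arctan(d / r). The 500 um arm is an
    illustrative figure, not a dimension from the paper.
    """
    return math.degrees(math.atan2(actuator_stroke_um, lever_arm_um))

def beam_deflection_deg(mirror_rotation):
    """A reflected beam turns through twice the mirror rotation angle."""
    return 2.0 * mirror_rotation
```

In this toy geometry, a 50 μm stroke on a 500 μm arm tilts the mirror by about 5.7°, steering the beam through roughly 11.4°, which illustrates how a compliant lever can trade a tiny linear stroke for a usefully large angular range.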

The team is currently focused on deploying the system in clinical settings and tackling the additional challenges of ensuring the robustness of the device in what York describes as the “tricky, constrained environments” found inside the human body.

“For example, the system must be robust to external forces, vibrations and fluids. This requires encapsulation and additional validation,” he says.

The researchers demonstrated that their laser-steering device could map out and follow complex trajectories at high speed, over a large range of motion, and repeat those trajectories with high accuracy. They also attached the device to the end of a colonoscope and used it to target lesions in an artificial colon model.

Laser steering

Although the system has so far only been validated using a consumer-grade pointing laser, York reveals that the team is currently working on integrating the device with the types of high-powered lasers used in surgery.

“There are two technical challenges that come from using surgical lasers: firstly, the alignment of all the optical components becomes even more important than with consumer-grade lasers. Misalignment causes energy to be dumped into the device itself instead of being transmitted through it, which leads to premature failure,” he says.

“Secondly, thermal loads, even if alignment is perfect, become important to manage. The laser energy that is not transmitted through the device due to inherent limitations of the optical materials must be absorbed as heat,” York adds.

Building a quantum-powered future

With an undergraduate degree in physics and music, a PhD in quantum technology and a number of years building “deep tech” start-ups from the ground up, physicist Ilana Wisby was the perfect candidate to lead and build Oxford Quantum Circuits (OQC). Founded in June 2017 by Peter Leek, OQC developed its first technology at the University of Oxford and today produces the UK’s most advanced quantum computers. Wisby talks here about her career pathway, what it means to be a leader and an entrepreneur, and how quantum technologies will change the world.

Were you always keen on physics and science?

I’ve been drawn to science and technology from a young age. That was really encouraged by my parents. I remember getting electronics toys when I was six or seven and experimenting with them for hours on end. I didn’t realize until much later that these weren’t the kind of toys that many girls got to play with at that age. It fuelled my curiosity very early on.

But my passion growing up was music rather than physics, and I initially attended a specialist school to study music. I wanted to be a concert pianist, but later decided that maybe a musician’s lifestyle wasn’t the right track for me. And then I found that I really loved A-level physics, especially the problem-solving and applied thinking side of it.

So how did you pick which pathway – music or physics – you wanted to follow?

I found an undergraduate course that allowed me to study both physics and music, because I was completely non-committal to making a decision at that stage. I did my BSc in physics, with a minor in music, at Royal Holloway, University of London. That was a good way of being able to do all of the physics and feel like I had that degree under my belt, but still get to play with music and write essays. I would work in the lab, and then play in the Gamelan orchestra behind the clock tower. It was so good, but sometimes a really strange experience, switching from lectures to banging gongs from the tower.

Royal Holloway had quite a small physics department at that point, with maybe around 60 people at most. It was also a Juno-accredited university and was pioneering in its efforts to attract more women into STEM. This really made a difference to me when making one of my first career choices. I went straight from my undergrad in 2012 to my PhD, working between the quantum detection group at the National Physical Laboratory (NPL) and the quantum devices group at Royal Holloway.

Can you tell me a bit about your PhD and how you came to pick quantum technology as your focus area?

Once I graduated from my BSc I immediately got a job in finance in the City of London. But I was eager to learn more. And when my former supervisor, with whom I had built a very good relationship, reached out to say that he had an available PhD position, it took me no time to decide. I ended up doing my industry-based project on hybrid quantum systems with NPL. My thesis was on coupling between locally doped spin ensembles and superconducting quantum circuits. I built international collaborations while at NPL – I was working with people in Russia and Sweden, as well as other teams at the institute.

I don’t think I really fell in love with physics until my PhD, because that’s when I got to be more hands-on and experimental. For me, the beauty of physics is when you look at the maths applied in realistic scenarios. I also really enjoyed being able to own a project and then see the maths align with something that happened in real life that I created.

You left academia soon after your PhD to work on a number of start-ups – how and why did that come about?

While I loved working on futuristic technologies in an academic setting, the pace and the lack of real-world applications got quite frustrating for me. That’s when I left academia and got into start-up ventures in the City. I worked in parallel as a data scientist at The Behaviouralist – a London consultancy that aimed to utilize behavioural interventions to reduce fuel usage in aviation – and as product director of Snap Out, a start-up working with the University of Surrey to deliver diagnosis of early-onset diabetes in remote regions in India, using diabetic retinopathy. Both these roles involved data science, but within the start-up environment, meaning everything was multidisciplinary. So along with working on data, I was also doing user experience (UX) design, managing projects for clients, creating apps, learning about finances and cybersecurity, and managing teams of people. In a start-up, you get that kind of task diversity, and you’re able to learn very quickly. I ended up as chief operations officer at Snap Out and was managing an international remote team within a few years. A common theme through my PhD and these jobs was building relationships with people, along with having the necessary technical skills.

So how did you find your way back into physics and a more technical role?

At Snap Out we were working with cutting-edge deep-tech AI, which is what I am most interested in. But irrespective of how cutting-edge this was, once you’ve done quantum technology, nothing else is quite as interesting and exciting. I also recognized that quantum, as an industry, was really starting to find its way out from academia. With my PhD and my start-up experience I realized that I was in a unique position. In 2017 I was headhunted by the lead investor of the company Oxford Sciences Innovation (OSI) to be the founding chief executive of Oxford Quantum Circuits (OQC).

You describe yourself as a “deep tech” entrepreneur – can you expand on that?

Deep tech is technology based on significant engineering shifts and new materials – it’s not simply something marginally new, it’s plain disruptive. It presents what the next revolution, or the next paradigm shift for the world, will be. Of course, it’s recognized that there’s a significant amount of R&D to do before the technology is going to be truly commercial and meaningful. From a patient-capital standpoint, that just means that there needs to be patient, longer-term investment in those types of areas.

Deep technologies have really long timescales, but will be completely revolutionary once they come through to market

With all my various roles, I’ve always been at the forefront of deep tech, and in that sense, I am a traditional deep tech entrepreneur. Within our portfolio at OSI, we’ve got deep tech for pharmaceutical companies that apply AI machine learning to drug discovery, and even nuclear fusion companies. All of these have really long timescales, but will be completely revolutionary once they come through and can be applied to a large number of market verticals that have a really high impact in the world. I really hope that in 15 years’ time, quantum is not deep tech anymore and there’s something else that has come up that is equally as exciting.

I think that was what was inspirational for me, looking at start-ups in the deep tech space. While I was interested in quantum technology because of my background, when I saw the portfolio of companies at OSI, I felt as though I could work for any of those companies and be satisfied, because they’re all so high impact and interesting. People who do science tend to be intellectually curious, but like everyone else they’re also led by what means something to them, and being able to have an impact and do something that interests them.

Entrepreneurship is primarily about leadership, and I’m very passionate about this. You can be a technical leader, as well as a leader of people. True leadership develops when you start to figure out how you can inspire others, how you can get people applying to your cause or connect dots in different ways, and do things that are slightly outside the traditional path. It’s about not being laser-focused, but instead encouraging a broad, strategic approach to things, which ultimately is all about curiosity. We can be curious beyond science; we can be curious about the world and about people; and then we can make things happen, rather than just talk about it.

What does Oxford Quantum Circuits do, what products and services do you offer?

At Oxford Quantum Circuits (OQC) we build quantum computers – we build the hardware itself, the core of the technology. When I first joined the company around three years ago, it was just a professor and a patent before I turned up with a laptop. Now, we’re a 20-person trading company, we’ve got significant government grants, we’ve just announced and launched our own independent commercial facility, and we’ve got the UK’s most advanced and only commercially available quantum computer. Indeed, we’ll be delivering quantum computers as a service to our partners and customers in the next year.

We know that quantum computers will have a huge impact on the world around us, and our vision is a brighter future for everyone that’s enabled by quantum technologies. It’s our mission to enable that by delivering hardware to the brightest minds. So we don’t build a full end-to-end system, we don’t do the algorithms, but we do the hardware and we build the core, and then we partner with the best people who are available to make it happen. Right now, we have incredibly high-quality devices that are small scale. This year is primarily about acceleration in our R&D to start scaling those systems significantly, which is all enabled by our beautiful new lab, which is incredible.


What does your role as chief executive involve?

Since I began at OQC, my role has changed continuously. And that’s true not just for me as chief exec, but for all of the team over the last few years. When we started, it was just me, and then I got a PhD student who had just graduated. We slowly started building the shape of the company and also what my role was. I was doing finance; I was reviewing legal documents; I was trying to develop strategy; I was thinking about what fundraising needed to look like; I was learning what having a board meant. I had to learn the full depth of my responsibilities, while trying to be both visionary and high level, but when it’s that size of company and a start-up, it’s 100% all hands on deck. No job is too small or too big for anybody. That’s what makes working in a start-up fun at that stage.

Our team members come from different backgrounds. Some of them are straight out of PhDs, some are still in university, some don’t have PhDs or university degrees at all, which is fantastic. I’m trying to build a diverse team from diverse backgrounds and especially in science, that can be really hard. One of the perks of doing this job and having this role is being able to have real influence, so that’s not a responsibility that I take lightly at all.

As the team grew and evolved, so did I. I wouldn’t recognize myself from where I was when I started. It’s been completely super-charged, and that’s enabled by the support of the people I have around me, but also the drive and the thirst to continue to learn and to be better. That doesn’t stop at university, it’s a mindset, this growth mindset, and understanding what goes into that as a leader is very important. So when I’m coaching my team, they’re not just technologists, they’re my team. One of our values is to cultivate love, and that’s important, right? It’s not fluffy – sometimes it’s tough love.

As a leader now, as a chief executive, it’s my job to set the strategy, to set the culture, to bring the money in, to make sure that I’m coaching and growing the company at the rate we all need to develop at in order to be successful and to achieve our milestones. And it’s not an easy job. I work very long hours and it’s a lot of responsibility. I’ve got this team and I’m continuously having to respond to the latest crisis, whether that’s caused by my team or a supply-chain breakdown, and whether it’s within my control or not. I’m continuously having to manage risk and respond, while staying visionary and keeping everybody confident, happy and fulfilled as best I can within a growing organization. But it’s ultimately a very rewarding job.

What do you know today, that you wish you knew when you were starting your career? And what’s your advice for today’s students?

Oh, that’s such a hard one, because you wouldn’t grow if you knew everything from the start! This may sound strange, but I wish I cared less and knew not to take things too personally, to be able to disconnect situations from myself so that I can look at them more objectively. I wish I had greater self-confidence and self-belief from the start. Impostor syndrome is real, and a PhD will often amplify it. It’s important to remember that everybody is trying their best.

The best advice I can offer is to be curious and to invest in relationships and people. You can always develop your technical skills, but you also require emotional intelligence. You can be the smartest, most technical person in the world, but if you can’t communicate it, or work with people, or have a good understanding of how to get the best out of yourself and others, then you’re going to be self-limiting. If you can work on anything, it should be your emotional intelligence because if you are self-aware and have a good view of how other people perceive you, then that’s going to be your most effective tool.

Dark-matter detector result is consistent with previous hint of exotic particles

New data from the PandaX-II particle detector in China leave open the possibility that the XENON1T experiment in Italy has found evidence of new physics. In June 2020 researchers working on XENON1T announced the detection of around 50 events above background levels and concluded that hypothetical solar axions or unusually magnetic neutrinos might be responsible. The new results from PandaX-II are consistent with these hypotheses but further work will be needed to settle the issue.

XENON1T was built to hunt for a type of dark matter known as weakly interacting massive particles (WIMPs). Housed under a mountain at Italy’s Gran Sasso National Laboratory, it contained 3.5 tonnes of liquid xenon and operated between 2016 and 2018. Like other experiments of its type, it was designed to pick up the tiny flashes of light generated when WIMPs in the “halo” of dark matter thought to envelop the Milky Way collide with xenon nuclei.

The events reported in 2020 involved electron, rather than nuclear, recoils. Elena Aprile of Columbia University in the US and colleagues reported 53±15 such recoils at low energy that they could not tie to other identifiable sources of background (these events themselves being considered noise in the search for WIMPs). Careful not to claim any discovery, they instead laid out several possible explanations for the observation.

Two novelties

These explanations included two novelties associated with particles arriving from the Sun – either hypothetical particles known as axions (postulated originally to fix a problem with the strong nuclear force) or neutrinos with a greater magnetic moment than previously observed. Another possibility, they said, was “bosonic dark matter”, which would be absorbed, rather than scattered, by the xenon nuclei and cause electrons to be emitted.

However, as Aprile and colleagues pointed out, the events could also have had a more mundane explanation – the beta decay of tritium nuclei. This would come about when the few neutrons liberated from surrounding rock by cosmic rays create tritium by splitting xenon nuclei. Unlike other background processes, this one remains a nuisance because its magnitude cannot be estimated reliably.

Aprile and colleagues calculated that the tritium could account for the excess events with a statistical significance of 3.2σ – compared to 3.4σ, 3.2σ and 3.0σ for solar axions, neutrino magnetism and bosonic dark matter, respectively.

Dimmer white dwarfs

Despite their cautious presentation, these results caught the attention of both the public and fellow physicists. For example, theorists put forward several ways to overcome one obvious sticking point with the Sun-based hypotheses – that the flux of the particles involved would make white dwarf stars dimmer than they appear.

In the latest work, Jianglai Liu of Shanghai Jiao Tong University and colleagues did an independent experimental check on the XENON1T results using the PandaX-II detector in the China Jinping Underground Laboratory in Sichuan, south-western China. Although PandaX-II contains just over half a tonne of xenon, the researchers ran their experiment for longer and acquired nearly half as much data as their XENON1T counterparts.

The Chinese group had the advantage of being able to better characterize their background spectra, thanks to direct measurement or calibration. In part, this was done by twice injecting methane with one of its hydrogen atoms replaced by tritium into the target. With the two injections carried out three years apart, they say they were able to measure the energy spectrum of the tritium contamination within the experiment.

By in effect subtracting the background spectra of tritium, krypton and radon, the group was able to quantify any signals from putative solar axions or a high neutrino magnetic moment – the two theoretical possibilities that Liu says the researchers used as a “benchmark” in their work. As they report in Chinese Physics Letters, they found that the remaining electron recoils were in fact consistent with the excess events seen by XENON1T. However, they say they could not fully endorse the earlier result because their data were also consistent with a “background-only hypothesis”.

Detector upgrades

To try to establish whether some new physical process really has been observed, the Chinese researchers are increasing their detector mass to 6 tonnes – meaning a sensitive target of 4 tonnes – while lowering background rates. The upgraded detector is called PandaX-4T and should start taking data this year. Also coming online are an upgraded 8.3-tonne “XENONnT” as well as the 10-tonne LUX-ZEPLIN detector currently being installed in the Sanford Underground Research Facility in South Dakota, US.

According to Liu, the new measurements should yield a verdict soon. “A year of low background data taking from PandaX-4T would be able to offer a definitive answer to the XENON1T excess,” he says, although he adds that it remains to be seen just how low they can make the background.

One group already has an explanation for the XENON1T excess – and it does not rely on exotic new physics. Matthew Szydagis, Cecilia Levy and colleagues at the State University of New York at Albany used what is known as the noble element simulation technique to model background interactions within the Gran Sasso detector and found that around 30 decays of the isotope argon-37 would generate the observed excess.

Levy says that their hypothesis could be investigated by carrying out a thorough calibration of the XENON detector, adding that her group does not know where the argon might come from. Beyond that, she agrees that the observed excess should be scrutinized by the new round of larger experiments. “If it is due to a new particle, it should predictably scale with the more massive detectors,” she says, “and a signal should be clear.”

United Arab Emirates’ Hope probe enters Martian orbit

The $200m Emirates Mars Mission successfully arrived in Martian orbit today, concluding its seven-month journey to the red planet. The arrival of the United Arab Emirates’ (UAE) probe – named Hope – marks the beginning of the science stage in the first interplanetary voyage undertaken by an Arab nation.

At 16:13 GMT, mission control at Dubai’s Mohammed bin Rashid Space Centre (MBRSC) received a signal, relayed from NASA’s Deep Space Network, confirming that the car-sized spacecraft had entered a stable orbit. That followed a nail-biting half hour as Hope fired its Delta V thrusters, slowing from over 121,000 km/h to approximately 18,000 km/h so as to be captured by Mars’ gravity. The 1500 kg craft will now undergo further manoeuvres and testing during the coming weeks before it begins returning science data from the Martian atmosphere.

“We congratulate our leadership and our people of all nationalities in the UAE,” said Sarah Al Amiri, chair of the UAE Space Agency via Twitter. “The science team has a lot of work to do, and we are all confident that they will make new, great, and tremendous discoveries about the red planet.”

Hope is the first of three separate missions arriving at Mars this month and will be swiftly followed by China’s Tianwen-1 spacecraft, which arrives in orbit on 10 February ahead of landing a rover in May. Then, on 18 February NASA’s Perseverance rover will land and begin searching for signs of ancient life in a Martian crater that was once flooded with water.

Diagram of the Hope probe's journey to Mars

Weather watcher

Hope’s main scientific objective is to study daily and seasonal weather changes, as well as observing how hydrogen and oxygen are lost into space. This data could help us better understand how Mars turned into the dusty barren planet we see today. Hope carries three main instruments: two spectrometers – one operating in the infrared and the other in ultraviolet – and an imager that will study the lower atmosphere at visible and ultraviolet wavelengths.

Sending a mission to Mars was a bold statement from the UAE, an Arabian state with a population of 9.8 million that gained independence from the UK in 1971. Today’s achievement comes just three years after Hazza al-Mansouri became the first Emirati in space, when he spent eight days on the International Space Station.

Having established a national space agency in 2014, the UAE quickly built up its space capacity by collaborating with established space nations. Launched from Japan’s Tanegashima Space Center in July, the Hope spacecraft was built in partnership with the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado, Boulder. The mission team comprises 200 staff from MBRSC and 150 from LASP, along with support from an international science team and roughly 100 partners.

“The UAE could have easily just purchased a spacecraft to go to Mars, but they had a goal to build it, not buy it,” says LASP engineer Brett Landin, who leads the mission’s spacecraft team. “I think the most fascinating part of this mission has been watching a nation decide to institute a meaningful change and then actually make it happen in a very short period of time.”

Building a ‘knowledge economy’

It may still be early days, but the UAE has grand ambitions for space exploration. Hope is just one project in the nation’s “Mars 2117” programme, which has the ultimate goal of establishing the first human settlement on Mars within the next century. Other key projects are to send an unmanned rover, dubbed Rashid, to the Moon in 2024 and to build a “Mars Science City” in the desert outside of Dubai – a research facility that will eventually host analogue “missions to Mars”.

By investing heavily in space exploration, the UAE hopes to kickstart its science and engineering sectors, to diversify its economy away from oil. Since the Hope mission was first mooted in 2014, mission leaders have spoken regularly about how it can foster interest in science among students. Under broader plans outlined in 2017, the UAE set the target that “knowledge workers” would make up 40% of its total workforce by the end of 2021.

“The UAE’s Mars Mission is a clear reflection of the UAE’s vision and ambition,” says Sanam Vakil, a Middle East researcher at Chatham House, an independent policy institute in the UK. “The project is designed to promote the knowledge-based economy while also inspiring Emiratis and attracting other regional nationals to the Emirati economic model.”

Laser-based autofocus unit transforms imaging and workflow outcomes

The PureFocus850 is a laser-based autofocus system that delivers enhanced imaging outcomes, sustained workflow efficiencies and reductions in capital spend across a range of applications in materials science, biological microscopy and industrial inspection. That’s the claim of Prior Scientific, a UK-based manufacturer specializing in the design and production of precision positioning devices, optical systems, automation solutions and microscope components.

The integrated autofocus unit – which comprises an infrared laser diode, precision optics, detector and on-board microcontroller – provides a real-time focusing capability for infinity-corrected optical systems and is suitable for both upright and inverted microscopes. “The PureFocus850 allows powerful autofocus functionality to be installed on the latest commercial microscopes or retrofitted as an upgrade to established imaging systems,” explains Simon Bush, sales engineer (UK and Ireland) at Prior Scientific. “The product is equally at home when integrated with a reflected-light optical system in a production-line context or when incorporated into an OEM imaging platform.”

In terms of operational specifics, the PureFocus850’s motorized offset lens adjusts the imaging depth into the sample, continuously holding a precise distance between the imaging focal point and a chosen reference boundary. This capability can be put to work in all manner of imaging applications: from simply scanning across a biological sample on a slide without the need to refocus manually – yielding significant efficiencies for the user – to high-resolution tile-scanning or time-lapse imaging, where focus stability and accuracy determine the success of an experiment or sample analysis.

The commercial roadmap

Right now, Prior Scientific is busy developing several discrete customer segments for the PureFocus850. Within the research community, for example, the hardware autofocus offers scientists a cost-effective upgrade path for their existing microscope facilities, rather than recurring capital spend on new optical systems. A case study in this regard is the Nanofabrication Laboratory at Chalmers University in Gothenburg, Sweden, which required a retrofittable autofocus solution that would work with a variety of samples and objectives while ensuring compatibility with brightfield and darkfield imaging.

For a research facility like Chalmers, where a variety of experiments are run on the same microscope, other advantages include one-time installation and the ability of the PureFocus850 software to specify and store a range of autofocus settings (for example, laser power, recovery speed, focus stability and focus confirmation parameters). “We released an expanded range of inverted microscope kits at the end of 2020,” adds Bush, “and our aim is to cater for researchers who either require an out-of-the-box autofocus solution or one with an accessible SDK [software development kit] that can be used to develop a novel imaging system.”

Beyond the laboratory, the PureFocus850 provides a versatile option for specialist OEMs developing next-generation microscopy and imaging systems. There are several features – including extra-long-range recapture and interface detection – that are only accessible to OEM customers in a fully automated context, while the motorized offset has the potential to allow dynamic offset recalculation. The latter is paramount for ensuring focus stability over long periods when imaging different layers in multilayer samples, such as during laser scanning confocal microscopy (LSCM) or fluorescence-lifetime imaging microscopy (FLIM).

“The scope for modification is extensive,” notes Bush. “Customers can engage with our R&D team to change the optics, offset mechanics and laser wavelengths – for example, to support long-wavelength imaging such as two-photon microscopy or Raman spectroscopy. The autofocus also integrates with OpenStand, our instrument development platform for building OEM solutions and one-off customizations.”

Industrial inspection

Out in industry, meanwhile, the PureFocus850 has generated significant interest among customers engaged in low- to medium-throughput materials analysis and inspection. A case in point is Top-Electech, a China-based electronics supplier, which last year integrated the hardware autofocus into an existing microscope to fast-track the inspection and analysis of its PCB components – delivering a 95% reduction in processing time for multisample component arrays.

The PureFocus850 provides a neat fit for this application because it’s not tied to any particular software and can be used as a standalone microscope add-on or integrated into custom protocols via Prior Scientific’s SDK. What’s more, the technical challenges are non-trivial for the Top-Electech quality-assurance team, with electronic components first mounted in resin and then filed down to allow imaging of their internal structure – a process that creates a subtly uneven surface comprising materials with variable reflectance and contrast.

“Traditional software or hardware autofocus systems may struggle to maintain focus on this variable surface, but the PureFocus850 averages the signal reflected by the sample across the microscope’s field of view,” notes Bush. “This ‘line-mode’ capability allows a consistent, reliable signal to be obtained while scanning across each sample, even where parts of the field of view are non-reflective, ensuring the sample is constantly in focus without user intervention.”

Another issue is variation in the amount of light reflected by the sample depending on the magnification of the objective. As such, it helps that Top-Electech engineers can store the optimal laser power for each objective on the microscope to ensure seamless switching between high and low magnification while keeping the sample in focus. By also storing a sample detection threshold for each objective, the engineers are able to avoid large, unnecessary refocusing steps when moving between samples. This protective feature allows the loading of multiple samples onto the microscope simultaneously and the ability to image in sequence without the risk of refocusing onto areas of the microscope stage that do not contain a sample. “In this way,” notes Bush, “the Top-Electech team is able to undertake batch processing rather than loading and imaging samples sequentially.”

Elsewhere, STMicroelectronics, the multinational electronics and semiconductor manufacturer, has combined the PureFocus850 with image recognition software to automate the analysis of silicon-carbide wafers at its manufacturing facility in Sweden. The firm’s engineers use defect-selective etching to assess wafer quality – a process that creates etch pits on the silicon-carbide wafer surface, with the frequency, morphology and distribution of these pits linked to the type and location of potential defects within a sample. This uneven surface would typically cause problems for laser autofocus systems, but the line-mode configuration of the PureFocus850 uses a weighted average of the reflected signal to find the optimal focal plane.
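The weighted-average idea behind line mode can be sketched in a few lines. This is a conceptual illustration only – the signal values and weighting scheme below are invented for the example, not Prior Scientific’s actual algorithm:

```python
import numpy as np

# Conceptual sketch of "line-mode" focusing: rather than trusting a
# single point, weight per-position focus readings by the reflected
# laser intensity across the field of view, so that dark etch pits or
# non-reflective regions contribute almost nothing to the estimate.
reflected = np.array([0.0, 0.1, 0.9, 1.0, 0.8, 0.0])  # per-position intensity
focus_err = np.array([5.0, 4.0, 0.5, 0.2, 0.6, 9.0])  # per-position focus error (um)

weights = reflected / reflected.sum()          # non-reflective regions get ~zero weight
estimate = float(np.dot(weights, focus_err))   # weighted focus-error estimate
```

The unreliable readings at the non-reflective ends of the line (5.0 and 9.0 µm) are ignored entirely, so the estimate stays close to the values measured on the bright, trustworthy parts of the surface.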

In this use case, sample interrogation focuses on the bottom of the etch pits, such that the imaging plane of interest differs from the main reflective surface of the silicon-carbide wafer. “The analysis is enabled by the motorized optics of the PureFocus850, which allows the imaging plane to be offset from the wafer surface,” explains Bush. Equally important for STMicroelectronics is the efficient acquisition of brightfield and darkfield images, with the user interface allowing engineers to optimize the autofocus settings for each imaging technique based on the intensity of illumination and to easily switch between them (rather than having to find a compromise between the two).

“All told,” concludes Bush, “the PureFocus850 enables STMicroelectronics to acquire higher-quality tile-scans of their silicon-carbide wafers while delivering a massive improvement in throughput – typically an 85% reduction in the wafer scanning time.”

Breath holds protect the heart during proton therapy for breast cancer

Radiation therapy plays an integral role in the management of breast cancer. Following breast-conserving surgery, in which the tumour is removed while leaving as much healthy breast tissue as possible, irradiation of the whole breast is a standard follow-up procedure. In some cases, irradiation of regional lymph nodes and/or a boost to the tumour cavity are also performed.

As survival rates improve for patients with early breast cancer, it’s important to consider potential long-term complications for those undergoing such radiation treatments. In particular, irradiation of the left breast is challenging due to the possibility of delivering dose to the heart and the subsequent risk of long-term cardiac complications.

One way to reduce this risk is via the deep inspiration breath-hold (DIBH) technique, which physically separates cardiac structures from the target volume and helps reduce cardiac dose. Proton therapy can also be employed to help reduce cardiac dose, as protons target the tumour with high conformality while delivering almost zero dose to distal structures.

Researchers from the Rutgers Cancer Institute of New Jersey propose that combining the two techniques could reduce cardiac dose further. As previous studies of proton therapy with DIBH are scarce, they have compared photon DIBH with proton DIBH in 10 patients, reporting their findings in the International Journal of Particle Therapy.

Dose comparisons

The study included 10 patients with left-sided breast cancer who received radiation as part of breast-conserving therapy. All underwent lumpectomy, nine also had sentinel lymph node biopsy and one underwent axillary lymph node dissection. All patients then received photon DIBH with two parallel, opposed beams used to irradiate the whole breast. Most patients also received a boost dose to the tumour cavity and, where required, nodal irradiation.

The researchers also created treatment plans for double-scatter proton therapy with DIBH, using clinical target volumes adjusted to match those of the delivered photon plans. Using each patient’s photon plan as a control for their proton plan, they investigated the doses to a number of cardiac subunits, including the entire heart, left ventricle (LV), left coronary artery (LCA), right coronary artery (RCA), left circumflex coronary artery (LCx) and left anterior descending coronary artery (LAD).

Both plan types provided adequate target coverage, but proton DIBH significantly reduced doses to cardiac structures compared with photon DIBH. This included reductions in: the mean dose to the heart (1.19 to 0.23 Gy); the mean dose to the LV (1.7 to 0.25 Gy); the mean, maximum and half-maximum doses to the LAD (5.54 to 1.15 Gy, 22.15 to 7.7 Gy, 4.42 to 1.61 Gy); and the maximum dose to the LCx (1.35 to 0.13 Gy).
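For context, the quoted mean doses translate into relative reductions of roughly 79–85%. A quick calculation using only the figures above:

```python
# Relative reduction in mean dose when moving from photon DIBH to
# proton DIBH, using the mean doses quoted in the study (in Gy).
doses_gy = {
    "whole heart":    (1.19, 0.23),
    "left ventricle": (1.70, 0.25),
    "LAD (mean)":     (5.54, 1.15),
}

for structure, (photon, proton) in doses_gy.items():
    reduction = 100 * (photon - proton) / photon
    print(f"{structure}: {photon:.2f} -> {proton:.2f} Gy ({reduction:.0f}% lower)")
```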

The team point out that the lower mean doses to the entire heart, LV and LAD could lead to an estimated reduction in long-term cardiac mortality of at least 7%. The mean doses to the LCx, LCA and RCA were already low for photon DIBH, thus the clinical significance of further dose reduction to these structures with protons is unknown.

Added lung protection

Radiation pneumonitis (lung inflammation arising from irradiation) is another area of concern when planning breast radiotherapy. The researchers observed that proton DIBH significantly lowered the dose to the left lung compared with photon DIBH. The mean left lung dose was reduced from 8.04 to 2.28 Gy, while volumes receiving 20 Gy and 5 Gy were reduced by 13% and 17%, respectively. Clinically, this lower lung dose from proton therapy may have long-term benefits.

The study also revealed that proton therapy reduced the maximum dose to the right breast, although the result was not statistically significant. The researchers note that skin dose was slightly higher with proton than photon therapy. While the difference was not statistically significant, they suggest caution in this regard.

“Proton DIBH significantly reduces dose to vital organs-at-risk in comparison to photon DIBH in patients requiring whole-breast radiation and/or nodal irradiation,” the researchers conclude. “This may be the new standard-of-care in the future because of its significant long-term clinical benefits.”

They point out, however, that understanding the clinical implications of this dosimetric advantage requires a randomized study with long-term follow-up. In addition, the expected toxicity profile from such proton treatment is difficult to anticipate because of the lack of adequate long-term clinical experience. “We hope that the ongoing RADCOMP study will be able to answer such questions in the coming few years,” they write.

Next-generation planetary missions could hunt for gravitational waves, say astronomers

Spacecraft heading to Uranus and Neptune in the next decade could be used to investigate gravitational waves as they venture into the outer solar system. That is according to a new study by a team of Swiss and Danish researchers, who say that examination of the radio signals from far-flung probes might reveal the signature of these subtle ripples in the fabric of space–time as they roll across our planetary neighbourhood (arXiv: 2101.11975).

The scientists say that gravitational waves would make themselves known through a Doppler shift in the transmissions from distant spacecraft. “When a gravitational wave passes through, it can slightly disturb the radio link by shifting its frequency. We can detect the small deviations in the carrier frequency we receive from the spacecraft and deduce that a gravitational wave has passed,” explains Deniz Soyuer from the University of Zurich, who led the work.
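To get a feel for the scale involved: the fractional frequency shift imprinted on the radio link is of the order of the gravitational-wave strain itself. A minimal sketch, assuming an X-band carrier of 8.4 GHz and an illustrative strain of 10⁻¹⁵ (both are assumptions for illustration, not mission specifications):

```python
# Order-of-magnitude sketch: a passing gravitational wave of strain h
# shifts the spacecraft carrier frequency by roughly delta_f ~ h * f.
# Both numbers below are illustrative assumptions, not mission values.
carrier_hz = 8.4e9   # assumed X-band downlink frequency
strain = 1e-15       # assumed gravitational-wave strain amplitude

delta_f_hz = strain * carrier_hz  # frequency deviation to be detected
print(f"frequency deviation ~ {delta_f_hz:.1e} Hz")
```

Even for this strain the deviation is at the microhertz level, which is why Soyuer stresses that the frequency changes to be detected are “extremely small”.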

Similar gravitational-wave hunts have been attempted – without success – by previous missions. In principle, it is even something that could have been tried with NASA’s New Horizons spacecraft, which is currently traversing the remote region of our solar system known as the Kuiper belt. What a future generation of outer planet explorers would have on their side, though, is time.

Overlapping observations

Proposed missions to Uranus and Neptune – which planetary scientists hope might launch around 2030 – would take many years to reach their targets, meaning they would have several opportunities to carry out searches for the elusive undulations. “There is one-and-a-half to two months of ideal time in a year to do these kinds of observations, when the Earth–Sun–spacecraft angle becomes favourable,” explains Soyuer. “So a 10-year cruise time would yield a total of 10 one-and-a-half-month-long observations.”
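The arithmetic behind that estimate is straightforward:

```python
# Usable Doppler-tracking time over the cruise, from the figures quoted:
# roughly 1.5 months per year of favourable Earth-Sun-spacecraft geometry.
cruise_years = 10
window_months_per_year = 1.5

total_months = cruise_years * window_months_per_year
print(total_months)  # 15.0 months of observations over the cruise
```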

The technique would also not require any dedicated on-board equipment to be fitted to the probes. “All missions already have Doppler tracking instruments on them, since that is how you track the spacecraft and also conduct gravitational field measurements of the planetary gravitational fields,” says Soyuer. “[It] sounds easy in principle, but the changes in frequency that we want to detect are extremely small.”

Another technical challenge is the noise in the data. Among the main contributors to this, adds Soyuer, is “the mechanical noise of the antenna” doing the listening back on Earth. If those hurdles can be overcome with advances in technology, the missions could potentially detect the gravitational waves given off by bodies – like stellar-mass black holes – whirling around gargantuan black holes, a phenomenon astronomers call “extreme mass ratio inspirals”. They may also be able to pick up the space–time ripples emanating from colliding supermassive black holes.

If detections do occur, they could provide a “nice overlap” with observations by Europe’s upcoming gravitational-wave mission, LISA, says Laura Nuttall from the University of Portsmouth, a gravitational-wave expert and member of the LIGO Scientific Collaboration. “[These missions] are more likely to see different events than LIGO/Virgo are sensitive to as they are probing a different part of the gravitational-wave spectrum,” she says. “So just like LISA, [they] would complement LIGO/Virgo.”

Copyright © 2026 by IOP Publishing Ltd and individual contributors