Autonomous vehicles (AVs) are supposed to be the future. So far, though, much of the work involved in building that future has focused on places like California and Arizona, where rain is rare and roads are wide and spacious. How, then, will AVs cope in the harsher conditions of northern Europe?
For Patrick Sachon, the question is more than hypothetical. As a scientist working in business development at the Met Office, Sachon is trying to understand whether the sensors that AVs use to navigate can do their job adequately in the UK’s famously unsettled weather. If the answer is “no” – or, more likely, a provisional “yes” that depends on weather conditions remaining within a certain safety envelope – then he and his colleagues aim to suggest improvements and explain to regulators what safe use looks like.
Speaking as part of a webinar series organized by the National Physical Laboratory, Sachon noted that the Met Office (the UK’s national weather service) has collaborated with partners in the road transport sector for years. “We’re always working on understanding what roads will be like to drive on,” he explains. While the Met Office’s publicly-funded forecasts are (usually) good enough for citizens to decide whether they need to pack an umbrella, the Highways Agency and local authorities need far more detailed information to help them deploy assets such as gritters, snowploughs and maintenance crews in an optimal way.
The Met Office’s experience of providing such information is useful for AVs, Sachon says, because such vehicles may require similar micro-forecasts to operate safely. As he puts it, the “structure of precipitation” varies a lot during thunderstorms or heavy showers. While some areas experience high rain rates, others nearby remain relatively dry. And if AV sensors work fine in ordinary rain, but not when it’s chucking it down, then the basic, publicly available weather forecasts won’t be enough to keep motorists safe.
As Sachon sees it, the key will be to develop vehicle sensors that work alongside weather sensors. The joint information from both types of sensors will then enable regulators and manufacturers to determine exactly how well – and how safely – an AV’s sensors will handle a given set of conditions. Such information will be helpful to the Met Office, too, Sachon adds, because vehicle-based weather sensors are a bit like miniature weather stations. For example, if Met Office scientists see that an AV’s windscreen wipers are on, they’ll know that moisture in the atmosphere is making it down to ground level – something that weather satellites don’t necessarily show. The scientists can then modify their forecasts accordingly, to more accurately reflect local driving conditions.
When asked which weather conditions are most challenging to AV sensors, Sachon began by reminding listeners that human-operated vehicles have drawbacks of their own. “[Artificial intelligence] won’t do the stupid things we do when we drive,” he said. “You can eliminate quite a lot of risks.” With that caveat, though, he acknowledged that AV sensors struggle when precipitation changes phase. While rain and snow are not so difficult on their own, he said, “the transition between phases can be quite a problem”.
Since the UK’s weather often hovers between fog, mist, drizzle, rain and snow, would-be AV purchasers in these islands may be in for a long wait. Another challenge is that hyper-local weather forecasts are not cheap, and it is far from clear who within the AV sector would pay for them. The equivalent service in aviation, Sachon notes, is paid for via a levy on all airlines that fly through UK airspace, with a small fraction of that levy going to the Met Office. In ground transport, however, the focus has traditionally been on keeping roads clear of ice and snow, and the burden of payment has fallen on local authorities rather than individual motorists. Whether any government – local or national – will take on those costs for the sake of an (initially) small number of AV users is, he suggests, an open question.
Simulated CT images of a COVID-19 XCAT phantom at three different radiation dose levels, representing (top left) 50, (top right) 25 and (lower left) 5 mAs settings. The lower right image shows a chest radiographic image of the same model. (Courtesy: American Roentgen Ray Society, American Journal of Roentgenology)
Medical imaging devices and methods are constantly evolving to meet rapidly changing clinical needs. CT and chest radiography, for example, could provide a means to detect lung abnormalities related to COVID-19, but the imaging parameters required to differentiate COVID-19 still need optimization. Clinical imaging trials, however, are often expensive and time-consuming, and can lack ground truth. Virtual, or in silico, imaging trials offer an effective alternative by simulating patients and imaging systems.
A group of researchers from Duke University took the first steps towards applying this approach to the current pandemic by developing the first computational models of patients with COVID-19. In addition, they showed how these virtual models can be adapted and used together with imaging simulators for COVID-19 imaging studies. Their proof-of-principle result, published in the American Journal of Roentgenology, paves the way towards effective and inexpensive ways of assessing and optimizing imaging protocols for rapid COVID-19 diagnosis.
Virtual COVID-19 patients…
Virtual imaging trials require accurate models of target subjects, also known as phantoms. For this study, the group modelled three distinctive features of the anatomy of a COVID-19 patient: the body; the morphological features of the abnormality; and the texture and material of the abnormality. For the first part, the researchers used the 4D extended cardiac-torso (XCAT) model developed at Duke University. These phantoms are constructed from patient data and include thousands of body structures, as well as anatomically informed mathematical models of texture and tissue-material properties.
To model the specific abnormalities found in SARS-CoV-2 patients, the group manually delineated the unique morphological features characteristic of COVID-19 in CT imaging data from 20 patients with confirmed infections. They then incorporated these features, known as ground-glass opacities, into the XCAT models. To make their phantom as realistic as possible, the researchers also modified the texture of each segmented abnormality to that of a material filled with fluid to match the observed pathologies.
… virtually imaged
The group produced three COVID-19 computational models with abnormalities of different shapes and locations within the lungs. The researchers then used these models together with a validated radiographic simulator (DukeSim) to obtain clinically realistic virtual CT and chest-radiography images. “Subjectively,” the authors explain, “the simulated abnormalities were realistic in terms of shape and texture.”
First author Ehsan Abadi (third row, centre) together with the research group (Courtesy: Ehsan Abadi, Department of Radiology, Duke University)
“We have developed strategies to adapt and use virtual imaging trials for imaging studies of COVID-19,” first author Ehsan Abadi concludes. “This will provide the foundation for the effective assessment and optimization of CT and radiography acquisitions and analysis tools to manage the COVID-19 pandemic.” Future work will focus on modelling different stages and manifestations of the disease, with the aim of optimizing screening and follow-up of patients.
Physicists in Germany say they have made the world’s most precise measurement of the deuteron mass by comparing it to the mass of the carbon-12 nucleus. The new work, which was carried out by confining deuterons (which are nuclei of deuterium, or “heavy” hydrogen) and carbon-12 nuclei with strong magnetic and electric fields, provides a crucial independent cross-check with previous measurements that yielded inconsistent values.
Knowing the precise masses of simple atomic nuclei such as hydrogen, its isotopes deuterium and tritium, and the molecular hydrogen ions H2+ and HD+ (a proton and a deuteron, bound by an electron) is crucial for testing fundamental physics theories such as quantum electrodynamics. The mass of the deuteron can also be used to derive the mass of the neutron, which has implications for metrology as well as for atomic, molecular and neutrino physics.
To make these precise measurements, physicists often turn to Penning traps, which use extremely strong magnetic and electric fields to trap charged particles such as single deuterons and other simple ions. Once trapped, the particles oscillate at a particular (cyclotron) frequency that depends on their mass, with heavier particles oscillating more slowly than lighter ones. Hence, if the oscillation frequencies of two different ions are measured in the same trap, one after the other, the ratio of their masses can be calculated to a very high precision (in this case, a relative precision of about 8.5 × 10^-12).
A cryogenic Penning trap mass spectrometer
In the new work, a team from the Max Planck Institute for Nuclear Physics, the Johannes Gutenberg University, the GSI Helmholtz Centre for Heavy Ion Research and the Helmholtz Institute in Mainz used a special cryogenic mass spectrometer that is dedicated to light-ion mass measurements. The setup of this spectrometer, which the researchers dubbed LIONTRAP, consists of a stack of Penning traps. This stack includes a highly optimized seven-electrode precision trap and two adjacent storage traps that sit within the homogeneous magnetic field of a superconducting 3.8 T magnet. The entire set-up is also kept in a near-perfect vacuum (better than 10^-17 mbar) at a temperature of about 4 K.
The researchers placed a deuteron in the storage trap before transferring it to the precision trap. There, they determined its oscillation frequency with high precision by measuring the tiny alternating currents (known as image currents) induced at the inner surfaces of the trap electrodes by the charge of the moving ion. Finally, they compared this frequency measurement with a similar one made on a carbon-12 ion (12C6+) in the same apparatus.
Adjustable superconducting electromagnetic coil
Study lead author Sascha Rau explains that the team chose 12C6+ because it serves as the mass standard for atoms – meaning that its mass, by definition, is equal to 12 atomic mass units. Hence, measuring the ratio of the deuteron and 12C6+ ion oscillation frequencies gives the mass of the deuteron directly, in atomic mass units.
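The arithmetic behind this frequency comparison can be sketched in a few lines. The snippet below is an illustration only: the frequency ratio used here is a hypothetical round number chosen to land near the published mass, and the real analysis also corrects for the six missing electrons of the 12C6+ ion and their binding energies, which are ignored in this sketch.

```python
# Sketch: deuteron mass from a Penning-trap cyclotron-frequency ratio.
# The free cyclotron frequency of a trapped ion is nu_c = q*B / (2*pi*m),
# so for two ions measured in the same magnetic field B, the frequency
# ratio equals the ratio of their charge-to-mass ratios.
# NOTE: illustrative numbers only; the real analysis also accounts for
# the six missing electrons (and their binding energies) in 12C6+.

def deuteron_mass_u(freq_ratio_c_to_d):
    """Deuteron mass in atomic mass units from nu_c(12C6+)/nu_c(d).

    nu_d / nu_C = (q_d / m_d) / (q_C / m_C),  with q_d = 1e, q_C = 6e
    =>  m_d = (1/6) * (nu_C / nu_d) * m_C,  where m_C = 12 u by definition
    """
    return 2.0 * freq_ratio_c_to_d  # (1/6) * ratio * 12 u

# Hypothetical ratio chosen so the result lands near the published value:
ratio = 1.00677660627
print(deuteron_mass_u(ratio))  # ≈ 2.01355321 atomic mass units
```

The factor of six in the charge ratio is what makes 12C6+ such a convenient reference: the deuteron’s mass drops straight out of the measured ratio, with no other measured quantities required.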
The precision of previous measurements made using this method was limited by deviations of the trap’s magnetic field from its ideal form. The German team overcame this problem by using an adjustable superconducting electromagnetic coil wound directly around the trap chamber, so that the trap can operate without unduly disturbing the magnets in the other coils. This set-up allowed them to measure the magnetic field deviations and suppress them by a factor of 100. In this way, they determined that the mass of the deuteron is 2.013553212535(17) atomic mass units, where the number in brackets indicates the statistical uncertainty of the last digits. The mass of the hydrogen molecular ion HD+, determined by the same method, is 3.021378241561(61) atomic mass units.
The new value for the mass of the deuteron is significantly smaller than the tabulated reference value. To validate their result, Rau and colleagues therefore also calculated the mass of HD+ using masses of the proton and the electron they previously measured. The new result for the deuteron is in excellent agreement with these values, they say, which suggests that the reference value for the deuteron needs to be corrected. The result, which is published in Nature, also agrees with a recent and precise measurement of the deuteron-to-proton mass ratio made by another group.
Nonlinear narrative Rosemary Teague left her PhD part-way through to investigate other careers in physics. (Courtesy: IOP/Lucy Kinghan)
Rosemary Teague is a trainee teacher on the physics PGCE programme at University College London
“What will you do with a physics degree?” asked my aunts and uncles when I excitedly told them I was going to study the subject as an undergraduate at Imperial College London. Well, I thought, I’m going to be a physicist. Back then, my understanding of what a career in physics entailed was not far beyond that of my family – I knew it as an academic pursuit and pictured myself, 30 years on, at the forefront of discoveries in renewable energy. Really, though, I wasn’t thinking that far ahead. I was thinking about the first four years: leaving home, making friends, and learning about the laws of the universe.
And that’s what I did. My undergrad years were filled with lectures, labs and dancing. As a self-proclaimed “country bumpkin” who grew up in Gloucestershire, I took to London life relatively well, making life-long friends and taking every opportunity the city presented. I spent my summer holidays in a similar fashion, either travelling or earning money to travel by teaching at summer camps. I spent my third year in Valencia, Spain, as part of the Erasmus programme, and surprised myself by choosing a computational research project.
At this point I noticed my friends and peers were already considering life after our degrees. We were on a four-year programme, so the upcoming summer was our last chance to get some work experience before applying for graduate jobs. I didn’t spend much time thinking about this myself as I was confident that I wanted to stay in academia, and follow that linear path. So, while my friends were off earning big bucks in the City, I crashed with a school friend for a couple of months and worked in a practical lab at the University of Bristol. While the city and research group were lovely, it confirmed that I’m more suited to a computer than optical mounts.
Always one to be prepared, I started looking into PhD programmes at the beginning of my final year. I soon discovered centres for doctoral training (CDTs), which tend to have a more interdisciplinary focus, and incorporate a Master’s qualification into a full PhD programme. CDTs also complement study with opportunities to develop outreach and other career skills. Among standard PhD applications, I applied to several CDTs including one on the theory and simulation of materials at Imperial. When I accepted this offer, I was excited about both the course and staying in London to take the next step in my personal life by moving in with my partner – he would be starting a more traditional PhD in ultrafast lasers, also at Imperial.
I struggled with impostor syndrome and would work long hours to prove that I deserved the place
Not long after starting the MSc, reality hit and my life started to get a bit less rosy. I struggled with impostor syndrome, as many postgraduate students do, and would work long hours to prove that I deserved the place. I also, crucially, wasn’t giving myself the time or space to grieve a close and unexpected loss in my partner’s family. I started to have panic attacks and increasingly depressive thoughts. Luckily, I have a strong network of family and friends who encouraged me to take a break, despite warnings from the university of the workload that would be waiting on my return. I escaped to my parents’ home, was prescribed antidepressants and arranged to start cognitive behavioural therapy (CBT) sessions back in London. These had an incredible impact and gave me a new perspective on my work–life balance.
I realized how much I hated the pressure and competition around working late and, instead of dismissing advice, I heeded it, making the decision to leave university before starting the PhD part of the course. This was an incredibly tough choice – while easier than if I was on a traditional PhD path, without my MSc graduation as an obtainable “end-point”, it felt like I was giving up, quitting on academia and failing to meet my ambitions.
So what then? I was still living in London and paying rent, so I desperately needed an income. I didn’t have the luxury of time to consider what sort of job I wanted and instead applied for anything and everything I could. A recruiter got in touch via LinkedIn and arranged an interview at Ocado Technology, the online supermarket. Two and a half months after handing in my MSc dissertation, I started working for Ocado as a cloud infrastructure engineer. It was a good job that paid well, but I wasn’t passionate about it and the hour-and-a-quarter commute quickly took its toll.
Exploring her options To help her decide what career pathway to follow, Rosemary Teague put together some data on her current expertise, as well as the proficiencies she would like to develop. She then matched these to three possible careers, with teaching emerging as the best path. (Click to enlarge)
After six months, in a move that I’m still not sure my parents understood, I sacrificed 30% of my salary for a job in the outreach and engagement department at the Institute of Physics [which publishes Physics World]. This was in May 2019, and it was the right decision for me – I hadn’t been this happy for a long time. More recently I have been considering what it is that I want out of a career. I know that collaborating and working with others is important to me, and whatever I do must be physics-focused.
By assessing my own skills, and thinking critically about what I truly enjoy, I narrowed down my options to a future career in one of three areas: public outreach and engagement; teaching; or academic publishing. Once I considered the skills these roles need and the opportunities they could offer (see graphic above) I realized that for me right now, a career in teaching is the most rewarding path, helping others to see the joys of physics while also developing myself. With this in mind, I applied for teacher training courses, and was incredibly excited to begin my PGCE last month.
By sharing my story, I hope it will show that there are many ways to start a career and that it’s okay to take time for yourself. My path recently has been far from linear and strongly supported by people I love. I am looking forward to seeing how it twists and turns in the years to come.
On the right track Amber Yallop needed to find a university set-up that worked for her circumstances. (Courtesy: IOP)
Amber Yallop is a new trainee on a graduate scheme with MBDA, after completing her BSc in physics at the University of East Anglia, UK
My degree pathway was not the most traditional or straightforward. It began in September 2016 when I left my hometown of Norwich, UK, to follow my dream of a degree in astrophysics, which has always been my specific area of interest. Although I loved my course, I struggled in the first few months, due to some personal circumstances that were further exacerbated by living away from home. By December 2016 I made the very difficult decision to drop out of my course and return home.
At this point I was unsure if I would ever return to higher education. I took a job at a local high school, where I supported students with special educational needs. The role was challenging, but also incredibly rewarding, and I thoroughly enjoyed my seven months at the school. During my time there, however, I heard that a local university – the University of East Anglia (UEA) – was going to launch a new physics degree, starting in September 2017.
I had previously considered teaching as a career, but in my time working at the school I realized that I much preferred working on a personal, one-to-one basis with the students and I wouldn’t get the same level of involvement and intervention as a class teacher. While I loved the job, I still had high aspirations and the physics course at UEA came along at the right time and in the right place. I chose to take the opportunity, and applied to the course. I was offered a place and decided to give university a second attempt, but this time living at home. This worked much better for me and despite further setbacks during my three years, including being diagnosed with epilepsy, I managed to stay on track and obtain a first-class degree.
I decided to give university a second attempt, but this time living at home. This worked much better for me
During my BSc, I volunteered at Institute of Physics outreach events, including a day at the Royal Norfolk Show, demonstrating and explaining hands-on physics experiments to the public, and also took part in the Norwich Science Festival. In my second year at university, I successfully applied to be the IOP campus ambassador for East Anglia. This role massively improved my confidence and organizational skills, including organizing events like a public lecture with BBC Stargazing Live’s Mark Thompson. This was a very popular event with both students and members of the public.
I was also lucky enough to represent East Anglia on the IOP’s Student Community Panel for the final two years of my degree. This involved travelling across the UK and Ireland to meet other regional representatives. We addressed issues facing physics students across the country, and supported the IOP with student events. My proudest achievement in this role was raising over £1000 in sponsorship for PLANCKS 2020, an annual physics competition for students around the world, hosted by the International Association of Physics Students. Each country holds its own preliminary round, with the winning teams progressing to the final competition, which takes place in a different country each year. The 2020 final was scheduled to be held in May in London, but was postponed as a result of COVID-19.
I have recently relocated to Stevenage to begin a “Hardware in the Loop” graduate scheme with the European defence company MBDA. This is a two-year programme with a permanent position on completion. It includes training, development, outreach and extracurricular opportunities. I am looking forward to applying the knowledge and skills learnt through my degree to the workplace, and I’m excited to see where this next chapter of my life will take me.
How the Institute of Physics can help you
Become a part of a vibrant student community and participate in a host of activities to develop your skills: iop.org/student-community
Network with physicists through our special interest groups – including our Early Career Members’ Group – and our local nations and branches network: iop.org/groups and iop.org/branches
Gain an advantage in the job market by attending our career-themed webinars iop.org/events and make use of the IOP Career Development Hub, which will support you in writing your CV, practising for interviews, delivering presentations and effective time management, and providing access to many other useful resources to support your future career choices: iop.org/member-services
Following graduation, join the Member grade and use the designatory letters MInstP after your name, to demonstrate your commitment and professionalism
Einstein’s general theory of relativity has triumphed once again after being tested in the strongest gravitational field to date. Dimitrios Psaltis at the University of Arizona and members of the Event Horizon Telescope (EHT) collaboration did their analysis using recent images of M87*, which is a supermassive black hole at the centre of a nearby galaxy. Their results set the stage for even more stringent tests of general relativity in the near future.
For over a century, general relativity has had an excellent track record in explaining observations of the universe. All the same, the theory leaves some big questions unanswered, including how to unify gravity with quantum mechanics and how to explain the surprise discovery in 1998 of the universe’s accelerating expansion. As a result, physicists are looking for subtle flaws in general relativity that could lead to the development of a more complete theory.
One way to study the theory’s limitations is to search for discrepancies in how it describes distortions of spacetime by the gravitational fields of massive objects. Initially, these tests used objects in the solar system – famously the motion of Mercury. More recently, gravitational waves created by merging black holes and observed by the LIGO–Virgo collaboration have enabled tests in the gravitational fields of objects as heavy as 150 solar masses. Yet despite the increasingly rigorous constraints imposed by these results, cracks have yet to show in Einstein’s theory.
Billions of Suns
M87* has a mass of about 3.5–6.6 billion Suns and its gravitational field is the strongest ever used to test general relativity. In 2019, the EHT released its celebrated image of the shadow of M87* – a dark silhouette, surrounded by bright emission from hot plasma. General relativity provides a precise prediction of the size of the shadow and in the case of M87*, the observed size is within 17% of general relativity’s prediction.
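The scale of that prediction is easy to estimate. For a non-spinning black hole, general relativity predicts a shadow diameter of about 2√27 GM/c², and dividing by the distance gives the angle on the sky. The sketch below uses an assumed mass of 6.5 billion Suns (within the range quoted above) and an assumed distance of 16.8 Mpc for M87*; it is a back-of-the-envelope estimate, not the EHT collaboration’s full calculation, which accounts for spin and the details of the surrounding emission.

```python
import math

# Sketch: rough angular size of a black-hole shadow in general relativity.
# For a non-spinning black hole the shadow diameter is ~2*sqrt(27)*G*M/c^2;
# dividing by the distance gives the angle on the sky.
# The mass (6.5e9 Suns) and distance (16.8 Mpc) below are assumed round
# numbers for M87*, not fitted values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec in metres
RAD_TO_MICROARCSEC = math.degrees(1) * 3600 * 1e6

def shadow_angle_microarcsec(mass_solar, distance_mpc):
    """Predicted shadow angular diameter in microarcseconds."""
    r_g = G * mass_solar * M_SUN / C**2      # gravitational radius GM/c^2
    diameter_m = 2 * math.sqrt(27) * r_g     # Schwarzschild shadow diameter
    return diameter_m / (distance_mpc * MPC) * RAD_TO_MICROARCSEC

print(shadow_angle_microarcsec(6.5e9, 16.8))  # ≈ 40 microarcseconds
```

A few tens of microarcseconds – roughly the apparent size of an orange on the Moon – which is why an Earth-spanning network of radio telescopes was needed to resolve it.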
It is possible, therefore, that a modified version of general relativity could do a better job of predicting the size of the M87* shadow. To test this, Psaltis and colleagues considered alternative models of gravity that modify the general theory of relativity. They focused on parameters of these alternative models that affect the models’ predictions of the size of the shadow.
By comparing these predictions to the observed shadow, they were able to constrain modifications to Einstein’s theory by a factor of almost 500 compared with earlier solar system tests. The new constraints are similar to those derived from gravitational wave observations. The EHT collaboration now hopes to impose even stricter limits by imaging the shadow of Sagittarius A* – the supermassive black hole at the centre of our own galaxy, whose mass is far more precisely defined than that of M87*.
In this episode we look at the ground-breaking research on black holes that led to Roger Penrose, Reinhard Genzel and Andrea Ghez winning the 2020 Nobel Prize for Physics. On hand are experts Laura Nuttall of the University of Portsmouth and the LIGO–Virgo–KAGRA collaboration, who studies gravitational waves from merging black holes, and Harvard University’s Shep Doeleman, who studies supermassive black holes using the Event Horizon Telescope.
On paper, perovskites make great building blocks for lasers. In their quasi-two-dimensional form, these organic–inorganic materials exhibit tunable colour and excellent stability. The fact that they can be fabricated from low-cost starting components in simple solution-based processes makes them attractive for manufacturers, too. There’s just one tiny flaw: perovskite-based lasers abruptly stop working after only a few minutes of constant operation at room temperature. Now, however, a team of researchers in China and Japan say they have overcome this so-called “lasing death” by suppressing long-lived energetic states known as triplet excitons.
Both 2D and quasi-2D perovskites are promising alternatives to silicon in optoelectronics devices. While 2D perovskites are made up of stacked sheets of alternating organic and inorganic layers, their quasi-2D variants contain small regions in which organic and inorganic materials alternate in all directions (as is the case in their 3D counterparts). The quasi-2D versions also contain two different types of organic materials.
In organic semiconductors such as those that make up quasi-2D perovskites, charge carriers – electrons and holes – come together to form an energetic state called an exciton. This entity may exist in a so-called singlet state (which has no net spin because the contributing electron spins point in opposite directions) or in a triplet state (in which the spins point in the same direction). In both cases, the energy in the exciton can then be released as light via a process known as radiative recombination. Triplets generally have a lower energy than singlets, however, and emit hardly any light.
Long triplet lifetimes
Recently, researchers led by Chuanjiang Qin of the Chinese Academy of Sciences and Chihaya Adachi at Kyushu University in Japan found evidence that triplet excitons have lifetimes of nearly a microsecond in these materials. These long lifetimes led them to focus on these excitons as a possible cause of lasing death.
As well as emitting very little or no light, triplet states also tend to interact with light-emitting singlets in a way that causes both to lose their energy without producing light, Qin explains. Eliminating any triplets in perovskites would therefore prevent interference with lasing.
In their latest work, the researchers studied FAPbBr3-based (where FA is formamidinium) quasi-2D perovskites with two different organic cations, phenylethylammonium bromide (PEABr) and 1-naphthylmethylamine bromide (NMABr). The PEABr-based perovskite contains an organic cation with high triplet energy, and the NMABr-based perovskite has an organic cation with low triplet energy.
Holding triplets in a low energy state
To eliminate triplets in these quasi-2D perovskites, the researchers incorporated an organic layer into the materials, which confines the triplets to a low-energy state. Since the excitons want to move to lower energies, the long-lived triplet excitons transfer from the active (that is, light-emitting) portion of the perovskites to the organic layer, they explain. This reduces light losses and allows for lasing under constant optical excitation (also known as optically pumped continuous-wave, or CW lasing) without interruption.
Qin, Adachi and colleagues also discovered that they could make their material lase continuously simply by placing it in air. This is because oxygen can destroy triplets – a result that further suggests that light losses caused by triplets may indeed be the culprits in lasing death.
Unchanged lasing intensity
The researchers quantified their material’s performance by measuring the amplified spontaneous emission (ASE) intensity of the two films as they were optically pumped. They found that the ASE, or lasing, remained virtually unchanged after an hour at room temperature in air with a relative humidity of 55%. The lasing spectrum also maintained its narrowness (full width at half maximum) without shifting. The team stress that these measurements were taken without the films being encapsulated in glass, and without a protective layer over their tops, as was the case in previous ASE stability measurements of 3D perovskites such as MAPbBr3.
According to the researchers, the excellent stability of their material comes from the protection provided by the larger cations on its surface. “We have demonstrated the key role of triplets in the lasing process of these types of perovskites and the importance of managing triplets to achieve continuous lasing,” Adachi states in a Kyushu University press release. “These new findings will pave the way for the future development of a new class of electrically operated lasers based on perovskites that are low cost and easily fabricated.”
How do you take a worm’s temperature? With a quantum thermometer of course. This is exactly what researchers have achieved using devices containing nanodiamonds with nitrogen-vacancy (NV) defect centres, the magnetic resonances of which change with temperature. The new technique could be important for a range of clinical applications.
You may ask, why take the temperature of a worm? One of the reasons is that the temperature within a living organism is a direct measure of the biological activities happening inside. Going down to the submicron-scale temperature range, as in this new work, should provide detailed information on cellular and molecular activities. This could be important for clinical applications such as imaging brain sub-tissue structures, visualizing tumour heterogeneity and mapping adipocytes, to cite just three examples. It is, however, no easy task to reduce the size of biocompatible thermometers down to this small scale.
Recent years have seen the emergence of light-emitting nanothermometers – such as thermoresponsive molecular probes and nanoparticles – that could overcome this limitation. Most devices made thus far, however, are not robust enough for long-term use and can only monitor temperature changes over relatively long periods (hours). They are also not completely biocompatible.
Nanodiamond quantum thermometers
The nanodiamond quantum thermometers employed in the new study are promising in many respects. The probes are made of nanodiamond, which naturally contains defects, known as NV centres. These occur when two adjacent carbon atoms in a diamond lattice are replaced with a nitrogen atom and an empty lattice site.
The nitrogen has an extra electron that remains unpaired and so behaves as an isolated spin. This spin can be “up” or “down” or in a superposition of the two. Its state can be probed by illuminating the diamond with laser light and recording the intensity and frequencies of the emitted fluorescence.
NVs in nanodiamonds are ideal as biological probes because they are non-toxic, photostable, have surfaces that can be functionalized and can easily be inserted into living cells. They are also well isolated from their surroundings, which means that their quantum behaviour is not immediately disturbed by thermal fluctuations, and they can detect the very weak magnetic fields produced by nearby electronic or nuclear spins. They can thus be used as highly sensitive magnetic resonance probes capable of monitoring local spin changes in a material over distances of a few tens of nanometres. And in contrast to conventional magnetic resonance imaging (MRI) techniques in biology, in which millions of spins are required to produce a measurable signal, NV defects can detect individual target spins with nanoscale spatial precision.
Healthy worms vs worms with a fever
In their experiments, Masazumi Fujiwara of Osaka City University in Japan and colleagues functionalized the surfaces of the nanodiamonds with polymer structures and injected them into C. elegans nematode worms (one of the most popular animal models in biology). The sensor began by reading the base “healthy” temperature of the creatures as a frequency shift of the optically detected magnetic resonance of the NV defect centres.
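To give a sense of how a frequency shift becomes a temperature reading: the NV centre’s zero-field splitting sits near 2.87 GHz and decreases as the diamond warms. The sketch below is illustrative only – the numerical values (2.87 GHz splitting, a temperature coefficient of roughly −74 kHz/K) are commonly quoted bulk-diamond literature values assumed here, not figures from this study:

```python
# Illustrative conversion of an optically detected magnetic resonance (ODMR)
# frequency shift into a temperature change for an NV centre.
D0_HZ = 2.870e9          # zero-field splitting at the reference temperature (Hz)
DD_DT_HZ_PER_K = -74e3   # assumed temperature coefficient (Hz/K)

def temperature_change(measured_d_hz: float) -> float:
    """Return the temperature change (K) implied by a measured ODMR resonance."""
    return (measured_d_hz - D0_HZ) / DD_DT_HZ_PER_K

# A warming of ~0.5 K would shift the resonance down by ~37 kHz:
print(temperature_change(2.870e9 - 37e3))  # → 0.5
```

In practice the precision of such a reading is set by how accurately the resonance dip can be located in the fluorescence spectrum, which depends on photon count rates and measurement time.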
Since the nanodiamonds move much more quickly inside a worm than in cultured cells, the researchers developed a fast particle-tracking algorithm. They also included an error-correction filter that takes into account the worm’s body structure, which can cause substantial fluctuations in the intensity of the fluorescent light emitted and can create temperature-measurement artefacts.
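The paper’s tracking algorithm is not described in detail here, but the basic task – re-identifying each nanodiamond from frame to frame while rejecting implausible jumps – can be sketched with a simple greedy nearest-neighbour linker. This is a hypothetical, minimal stand-in, not the authors’ method:

```python
import math

def link_positions(prev, current, max_jump=5.0):
    """Greedily match each particle in the previous frame to its nearest
    unclaimed detection in the current frame.

    Matches farther than `max_jump` are discarded, a crude analogue of
    filtering out tracking artefacts caused by intensity fluctuations.
    Returns a dict mapping previous-frame index -> current-frame index.
    """
    links, taken = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_jump
        for j, c in enumerate(current):
            if j in taken:
                continue
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links

prev = [(0.0, 0.0), (10.0, 10.0)]
curr = [(10.5, 9.5), (0.2, 0.1)]
print(link_positions(prev, curr))  # → {0: 1, 1: 0}
```

Real single-particle tracking pipelines add sub-pixel localization, motion prediction and globally optimal assignment, but the frame-to-frame linking problem is the same.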
Next, the team, who report their work in Science Advances, induced an artificial “fever” in the worms by stimulating their mitochondria with a chemical. Their sensor successfully recorded this temperature increase with a precision of around ±0.22°C.
“It was fascinating to see quantum technology work so well in live animals and I never imagined that the temperatures of tiny worms less than 1 mm in size could deviate from the norm and develop into a fever,” says Fujiwara. “Our results are an important milestone that will guide the future direction of quantum sensing as it shows how it contributes to biology.”
A co-ordinated national approach to materials science in the UK is needed to meet the country’s 2050 net-zero carbon targets. That is according to the Henry Royce Institute, which has released five roadmaps detailing how materials science and engineering can contribute to this energy transition.
Funded by the Engineering & Physical Sciences Research Council, the Henry Royce Institute is the UK’s national centre for advanced materials research and innovation. It is a partnership between six UK universities as well as the National Nuclear Laboratory and the United Kingdom Atomic Energy Authority.
The new roadmaps are the result of a series of workshops held earlier this year, in which more than 200 material scientists were brought together by the Henry Royce Institute and the Institute of Physics (IOP), which publishes Physics World. They explored the role of materials in meeting the UK’s target of bringing all greenhouse-gas emissions to net zero by 2050, which the UK passed into law last year.
The roadmaps cover materials for photovoltaic systems, low-carbon generation of hydrogen, thermoelectric energy conversion, caloric energy conversion and low-loss electronics. According to the Henry Royce Institute, in each area the UK will have to bring together the research community, industry and government to identify immediate and long-term requirements for the development of energy materials to replace fossil-fuel technologies.
“These important materials roadmaps demonstrate, in detail, how cutting-edge materials science and engineering will play a key role in this major energy transition,” says Julia King, chair of the Henry Royce Institute and a former chief executive of the IOP. “Novel materials will be essential to deliver the disruptive technologies that will bring about the energy-efficient applications and processes we urgently need.”
Showing potential
Oliver Fenwick, a materials scientist at Queen Mary University of London who led the thermoelectric energy conversion theme, says that the development of new or improved materials underpins most emerging technologies. “The transition to net-zero emissions presents significant opportunities for new materials, and this is particularly the case for thermoelectric technology,” he says. “The challenge is significant, but the opportunity for the UK in this sector is huge, with 17% of our CO2 emissions coming from space heating and cooling.”
The Henry Royce Institute calls for new national facilities to test technologies and ease their transfer from laboratories to prototypes. It also says the UK government must invest in the materials-science skills base and encourage, or legislate for, the use of low-carbon technology.
King adds that many of these next-generation materials are already showing great potential. “Our challenge is to deploy them at scale,” she says. “We now need to have a meaningful dialogue with our government partners to agree on what is technically feasible and economically viable.”
Researchers in the US, Denmark and Sweden have designed a novel microstimulation device that can attach to the gastrointestinal (GI) tract and electrically stimulate the stomach muscles to induce contractions. The new device, described in Science Advances, aims to treat disorders such as gastroparesis, which prevents the stomach from properly emptying.
Decades of scientific and medical research have shown that, because of the neural circuits present in the GI tract, electrical impulses that stimulate nerves and muscles can improve the health and quality-of-life of patients with GI-related disorders. For instance, the common approach to treating gastroparesis has been surgical implantation of gastric pacemakers that stimulate the outer muscular layer of the stomach.
To reduce cost barriers and address the safety concerns associated with surgical implantation, recent developments in orally administered therapies and ingestible robotic systems could prove of paramount importance. Yet currently available robotic sensors cannot deliver electrical microstimulation in a controlled manner. There is therefore a clear need for oral electrical stimulation systems that can be deployed on an as-needed basis.
The design of the new device was inspired by the way that a parasitic worm, the Taenia solium tapeworm, uses hooks to attach to the GI tract of its host. Between five and 15 minutes after the device is ingested, it autonomously aligns itself with the curved stomach lining and uses a compressed spring to insert tiny hooked stainless steel needles into the stomach muscle wall. The needles act as probes to provide timed electrical pulses, thereby improving the device’s controllability.
Capsule schematic and working principle. (Courtesy: Alex Abramson)
The electrical pulses can conductively signal the probe’s anchoring and detachment events and thus allow the device to communicate its position. Abramson and the team tested the new sensor ex vivo in human stomach tissue and in vivo in pig stomachs. In sedated pigs, the device repositioned itself along the stomach lining after being swallowed and remained in place for up to two hours. Whilst in place, it induced timed muscle contractions across the tissue.
The researchers also tested the device in awake pigs. The devices were orally ingested by pigs with empty stomachs and then – as designed – detached naturally after the pigs ate and digested their food. In the absence of digestion, the device localized to a single spot on the tissue and could deliver electrical microstimulation for an extended period of time.
“The STIMS [self-orienting technology for injection and electrical micro-stimulation] capacity for tissue wall anchoring and signalling could enable new applications for ingestible devices that focus on the tissue rather than the ambient environment,” the authors write.