Solar cells made from lead halide perovskites are good at converting solar power into electricity and relatively straightforward to manufacture. Unfortunately, they’re also unstable at room temperature and ambient humidity, which is something of a drawback for devices that tend to be located outside. Now, however, researchers at the US Department of Energy’s SLAC National Accelerator Laboratory and Stanford University may have found a solution. Their new technique involves pre-treating the material at high pressures and temperatures, and its developers say it could be scaled up for industrial production.
Perovskites are crystalline materials with an ABX3 structure, where A is caesium, methylammonium (MA) or formamidinium (FA); B is lead or tin; and X is chlorine, bromine or iodine. They are promising candidates for thin-film solar cells because their tuneable bandgaps allow them to absorb light over a broad range of wavelengths in the solar spectrum. Charge carriers (electrons and holes) can also diffuse through them quickly and over long distances. These excellent properties give perovskite solar cells a power conversion efficiency (PCE) of more than 18%, placing them on a par with established solar-cell materials such as silicon, gallium arsenide and cadmium telluride.
One of the most efficient perovskites for solar cell applications is composed of caesium, lead and iodine. This material, CsPbI3, has four possible phases: a yellow room-temperature non-perovskite phase (δ), plus three black high-temperature perovskite-related phases in which the crystal takes on a cubic (α), tetragonal (β) or orthorhombic (γ) structure. While the black phases are efficient at converting sunlight into electricity, heat and humidity quickly make them revert to the yellow phase, which is useless for photovoltaic applications.
High pressure squeeze
The SLAC-Stanford researchers have now shown that it is possible to nudge this yellow phase into an efficient and stable black configuration. Led by Yu Lin, Wendy Mao, Hemamala Karunadasa and Feng Ke, they did this by placing crystals of the yellow phase between the tips of a diamond anvil cell (DAC) and subjecting them to pressures of 0.1 to 0.6 GPa at temperatures of up to 450 °C. They then rapidly cooled the material and removed the sample from the DAC.
Synchrotron X-ray diffraction and Raman spectroscopy measurements showed that this treatment yielded a version of orthorhombic γ-CsPbI3 that was stable in the presence of moisture (at a relative humidity of 20-30%) and remained efficient at room temperature for 10 to 30 days. This is a significant improvement over earlier efforts to stabilize the bulk black phase at room temperature using, for example, applied strain, surface treatments and changes to the material’s chemical composition – all of which produced good results only when the environment remained moisture-free.
“This is the first time that pressure has been used to control this material’s stability [under ambient conditions],” Lin says. “Now that we’ve found this optimal way to prepare the material, there’s potential for scaling it up for industrial production, and for using this same approach to manipulate other perovskite phases.”
Stabilizing the γ-CsPbI3 phase
Theoretical studies show that the secret to the SLAC-Stanford team’s success lies in the way the high-pressure, high-temperature treatment affects distortion within the perovskite lattice. Other researchers had previously identified three types of distortion: distortions of the BX6 octahedral units making up the material’s crystalline structure; B-cation displacements within the octahedra; and a tilting of individual BX6 octahedra relative to one another to create rigid corner-linked units. Of these three types, the third, known as octahedral tilting, is the most common.
In the current study, Chunjing Jia and Thomas Devereaux of the Stanford Institute for Materials and Energy Sciences (SIMES) used first-principles density functional theory calculations to show that applying pressure to the perovskite affects its tilt. More specifically, the pressure treatment makes it possible to control the relative energy difference between the desired γ-CsPbI3 phase and a competing non-perovskite phase (δ-CsPbI3) – and, ultimately, to stabilize the former.
In August 2020 the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, unveiled the world’s brightest source of high-energy X-rays. An extensive upgrade costing €150m has transformed the facility into the Extremely Brilliant Source, or ESRF–EBS, a fourth-generation synchrotron that boosts the brilliance and coherence of the X-ray beams by around a factor of 100 over its predecessor.
Scientists from around the world started to use the machine on 25 August, for the moment via remote experiments, with the upgraded machine making it possible to study the structure of matter at the atomic level much faster and in greater detail than before. At this year’s ESRF User Meeting, a virtual event that will run from 8–10 February, attendees will be able to explore the technical capabilities of the ESRF–EBS as well as the new science it will enable.
Normally held at the ESRF, this year’s online event will enable scientists in all parts of the world to take part in tutorials, poster sessions and scientific symposia, and to interact with each other remotely to discuss new research ideas. A plenary session on Tuesday 9 February will feature five keynote lectures on different applications of synchrotron science – including industrial catalysis and research into the pathology of the virus that causes COVID-19 – as well as a facility update on the ESRF-EBS.
Suppliers to the ESRF will be taking part in a virtual commercial exhibition, and have also been working closely with the ESRF’s technical teams to deliver solutions that meet the needs of next-generation synchrotrons as well as other major scientific projects. A few examples are highlighted below.
Fast detectors deliver precision for high-energy applications
The ESRF has received eight EIGER2 detectors from DECTRIS as part of its EBS upgrade (Courtesy: DECTRIS)
DECTRIS, which specializes in producing high-performance hybrid photon-counting X-ray detectors, has developed two models for high-energy applications in next-generation synchrotron sources. The EIGER2 X and XE CdTe detectors combine high performance with easy integration and operation, and are equipped with a cadmium-telluride sensor to provide high quantum efficiencies for hard X-ray energies up to 100 keV. With a pixel size of just 75 µm, they also provide excellent spatial resolution.
The detectors’ readout eliminates any dead time, which ensures that no photons are lost between frames. Two adjustable energy thresholds make it possible to simultaneously determine the fluorescence background (the lower threshold) while also measuring the contribution of higher harmonics (the upper threshold). With a one-pixel point-spread function (PSF) and a count-rate capability of 10⁷ photons/s per pixel, the EIGER2 X and XE CdTe detectors deliver more precise measurements than ever before.
Different versions of the detectors are available, with active areas ranging from 77 x 38 mm2 to 311 x 327 mm2. The smallest detectors in the EIGER2 X series offer frame rates of 2 kHz, while the XE versions have been optimized to combine a large active area with a frame rate of up to 550 Hz – ideal for applications such as macromolecular crystallography, materials science, and small-angle X-ray scattering. In addition, both versions have been designed to deliver high efficiency at high energies, yielding the best possible resolution and frame rate for next-generation synchrotrons.
The ESRF, as part of its EBS upgrade, has received eight EIGER2 detectors, which are now being brought into operation. “The combination of the extremely brilliant source and the high dynamic range and high sensitivity of the EIGER2 CdTe detector will allow us to detect weak features in diffraction data,” says beamline scientist Carlotta Giacobbe. “Ultrafast 3D mapping and ultra-fine slicing will now be accessible for monitoring systems as they evolve during in situ experiments.”
ITER sets new challenges for leak detection
ITER’s vacuum vessel must be kept under ultrahigh vacuum to maintain the right conditions for fusion (Courtesy: ITER)
40-30, a French company that specializes in process and vacuum technologies for large engineering projects, has been a long-time supplier to the ESRF and other major experimental facilities. That includes ITER, the fusion reactor now being built in Cadarache in the south of France, where 40-30 is part of a consortium designing and building one of the most complex leak detection systems in the world.
Achieving and maintaining the right conditions for the fusion reaction between deuterium and tritium requires the vacuum vessel that contains the plasma, along with the 14,000 m3 cryostat and other core components, to be kept under ultrahigh-vacuum conditions. Any leaks or contamination must be detected as soon as possible, but that’s a challenging task when there are around 2000 entry points to the overall system.
“We spent a whole year working out the basic functional requirements of the leak detection systems for ITER, and translating them into technical specifications and contract documentation,” says Roger Martín, the project manager at Fusion for Energy (F4E) who was responsible for procuring the detection system.
The resulting design is a series of helium leak-detection subsystems that exploit high-resolution mass spectrometry to detect helium ingress under specific conditions – including a high-radiation environment with strong magnetic fields, complying with the nuclear safety standards applying to ITER, and even the possibility of seismic activity. Although mass spectrometry is widely used for leak detection, adapting the technology to ITER standards is particularly demanding.
After an extensive tendering process, the €17m contract for delivering the leak-detection system was awarded to the IG4 consortium, led by Spanish engineering consultants IDOM. 40-30 will be responsible for process and vacuum engineering, as well as equipment specification, while Gutmar will manufacture, assemble and test the equipment.
“We are really proud to be involved in this project,” says Charles Agnetti, CEO of 40-30. “For this mission, 40-30 has assigned its most experienced specialists in vacuum technology and processes, leak detection and gas analysis.” It will take more than three years to deliver the project, with a preliminary design review due in the first half of 2021.
Compact LAMBDA detectors shrink even further
The first generation of the compact LAMBDA 750k X-ray detector (left) seems large compared to the re-engineered version (right).
X-Spectrum GmbH specializes in developing high-resolution and high-speed X-ray detectors built around the Medipix3 RX detector chips, originally developed at CERN. The workhorse among the company’s line-up is the LAMBDA 750k, one of the most compact X-ray detectors on the market. In the latest development, the well-known wedge-shaped case of the LAMBDA 750k has been redesigned to create an even more compact box that measures just 235 x 100 x 42 mm3 – about half the length and height of the previous version.
The X-Spectrum engineers have worked hard to develop an efficient cooling system that fits inside this very small package. That will enable users to install the LAMBDA 750k X-ray detector in even the most crowded beamlines and tight experimental set-ups.
The LAMBDA 750k is one member of a family of next-generation pixel detectors for X-rays, all based on Medipix3 technology. The LAMBDA series are photon-counting detectors, effectively making them noise-free, and offer speeds of up to 23,000 frames per second (with no readout dead-time) along with a small pixel size of 55 µm. A variety of sizes and configurations are available for different applications, and each one can be equipped with different sensor materials to allow high detection efficiency even at high X-ray energies.
The LAMBDA system also has “colour imaging” capabilities, where X-rays hitting the detector can be divided into two energy ranges. Originally developed by DESY in Hamburg, Germany, for use at the PETRA-III synchrotron, the system is designed for high reliability, and has external triggering and gating capability for synchronization with the rest of the experiment. It can also be easily integrated into common beamline control systems.
In most types of medical imaging, the aim is to gain information on the internal structure of the body, looking for growths, tumours or other abnormalities. In doing so, medical practitioners gain critical information that can be used in treatment.
However, for many illnesses we must move beyond simple structure and learn about the way in which an organ functions. This is particularly important for assessing disorders of the brain – such as epilepsy, dementia or mental health problems – where a structural image, no matter how detailed, often looks “normal”, even in patients with profound difficulties. In such instances it is the function of the brain that is perturbed. To gain insight, we must therefore develop technologies that image what the brain is doing, which means developing ways to assess the electrical activity of its 90 billion or so neurons.
For many illnesses we must move beyond simple structure and learn about the way in which an organ functions
One method of imaging brain function is magneto-encephalography (MEG) – a relatively new technology in which we measure the magnetic fields generated by current flowing through neuronal assemblies. Mathematical modelling of these fields generates 3D images showing moment-to-moment fluctuations in neural current (figure 1a). In this way, MEG is a safe and non-invasive way of imaging brain networks as they form and dissolve to support cognition. By letting us probe brain function, MEG is used extensively in research to investigate and understand both the healthy brain, and its perturbation by disease (figure 1b).
1 The power of magnetoencephalography (MEG)
(a) When someone moves their fingers, clusters of neurons in the brain’s primary motor cortex are rendered active, and the resulting current passing through them generates magnetic fields. The component of this field that is radial to the head can be measured by MEG. In the visualization (left), red represents a field going away from the head, blue represents a field going into the head. Via a mathematical modelling process called source reconstruction, we can use these fields to construct 3D images showing moment-to-moment changes in neural current (right). The yellow blobs show areas of fluctuating neural current during the finger movement. (b) With MEG, brain activity can be resolved not only in space but also in time. Electrical brain activity can be broken down into oscillations at characteristic frequencies (known as neural oscillations or brain waves). These plots show modulation of the amplitudes of electrical oscillations at different frequencies over time, occurring in the primary motor cortex as someone moves their finger. Blue shows a decrease in a neural rhythm relative to baseline, red shows an increase. While a subject moves their hand there is a decrease in beta (~20 Hz) oscillations, which then returns to baseline, post movement, via a “rebound” (i.e. the signal goes above baseline). This rebound is abnormal in patients with schizophrenia and likely indicates a lack of communication between the primary motor cortex and other brain regions. (Courtesy: Elena Boto, University of Nottingham; CC BY 4.0, adapted from Robson et al. NeuroImage: Clinical 12 869)
Despite excellent promise, the current generation of MEG scanners are severely limited, preventing their widespread adoption. The fields generated by the brain are low amplitude (~100 fT – about a billionth of the Earth’s field) meaning high-sensitivity detectors are required. For many years the only viable option was the superconducting quantum interference device (SQUID) – a cryogenic sensor that relies on quantum tunnelling through an insulating gap between two superconductors (the Josephson effect). The tunnelling current is a function of magnetic flux through the SQUID.
Despite excellent promise, the current generation of MEG scanners are severely limited, preventing their widespread adoption
However, to maintain their superconductivity, SQUIDs must be cooled to –269 °C, which limits the design and deployment of MEG scanners. First, because they operate at such low temperatures, a thermally insulating gap must be maintained between the sensor and the patient’s head to prevent injury. Because magnetic field decays with distance squared, this gap limits sensitivity to the brain’s magnetic field. Second, cryogenics mean that sensors must be fixed in position above the head inside a cryogenic dewar, which means that if a patient moves their head relative to the scanner, the quality of the data goes down drastically. Just a 5 mm shift can render data useless, and many people cannot tolerate this environment. The fixed nature of the sensors also results in a one-size-fits-all helmet. This is a significant barrier to scanning young children and babies, since the helmet is much too large. Finally, the complex combination of SQUID sensors, control electronics and cryogenics makes MEG expensive (typically a price tag of more than £2m plus high running costs).
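The cost of that cryogenic stand-off can be put into rough numbers. The sketch below assumes the inverse-square fall-off of the field with distance described above, and uses hypothetical distances (a ~30 mm scalp-to-sensor gap for a cryogenic helmet versus ~6 mm for a scalp-mounted sensor); the exact figures vary between systems.

```python
def relative_signal(distance_m: float, reference_m: float) -> float:
    """Relative field amplitude, assuming the field falls off as 1/distance^2."""
    return (reference_m / distance_m) ** 2

# Hypothetical stand-off distances, chosen only for illustration
squid_gap = 0.030   # ~30 mm sensor-to-scalp gap imposed by cryogenic insulation
scalp_gap = 0.006   # ~6 mm for a sensor mounted directly on the scalp

gain = relative_signal(scalp_gap, reference_m=squid_gap)
print(f"Moving the sensor closer boosts the signal by ~{gain:.0f}x")
```

Even with these rough numbers, closing the gap yields more than an order of magnitude in signal, which is why sensors that sit directly on the scalp are so attractive.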
A quantum revolution
In recent years our research team has tackled MEG’s limitations by exploiting newly developed quantum technology. (This is a collaborative effort by the University of Nottingham’s Sir Peter Mansfield Imaging Centre and the Wellcome Centre for Human Neuroimaging at University College London, funded by the UK National Quantum Technologies programme and Wellcome.) In particular, the development of optically pumped magnetometers (OPMs) has been key. These are quantum-enabled magnetic field sensors that offer similar sensitivity to SQUIDs but without the need for cryogenics (figure 2).
2 Optically pumped magnetometers: the basics
Each optically pumped magnetometer (OPM) contains a vapour of rubidium-87 atoms enclosed within a glass cell (a). When a beam of circularly polarized light that is resonant with an atomic transition (the D1 line) is directed through the vapour, it “pumps” the rubidium atoms into a quantum state in which their angular momentum is aligned with the beam. Because each atom has a magnetic moment linked to its angular momentum, the spin-polarized atomic vapour has a net magnetization that is highly sensitive to external magnetic fields. With all atoms in the same state and polarization induced, no further absorption of photons can occur, and the intensity of light passing through the cell from the laser is maximized. However, if polarization drops due to, for example, the interaction with an external magnetic field, light is absorbed again (b), and the signal at the photodetector decreases. Unfortunately, polarization is fleeting since atoms naturally “relax”. In a zero-field setting, this is mostly due to spin-exchange collisions, which cause a loss of coherence between precessing spins. In order to maintain a magnetically sensitive state, the effect of this relaxation must be counteracted. Perhaps counterintuitively, this is achieved by increasing the density of the vapour, and thus the rate of spin-exchange collisions. With lots of collisions in a very low magnetic field, the spins do not have enough time to decohere between collisions, meaning that polarization can be maintained along with sensitivity to external magnetic fields. This is called the spin-exchange relaxation free (SERF) regime. In the SERF regime, the bulk magnetic moment of the polarized gas obeys the Bloch equations – a set of equations describing the evolution of the macroscopic magnetization of a system as a function of time.
The effect of an external field thus becomes well characterized, showing that polarization of the vapour – measured via light intensity passing through the cell – is a Lorentzian function of external field. Unfortunately, the symmetry of the Lorentzian means that positive and negative fields have the same effect on the vapour, but this is remedied by applying a second, known, external field. Specifically, applying an external oscillating field makes the polarization oscillate with an amplitude that is a linear function of small (<±1 nT) changes in the average field through the cell. Consequently, lock-in detection of the oscillation of the intensity of light passing through the cell provides a signal that varies linearly with field strength, producing a highly sensitive magnetometer.
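The detection scheme described above can be sketched numerically. The toy model below is not QuSpin’s implementation, and the linewidth, modulation amplitude and frequency are invented for illustration: it treats the transmitted-light signal as a Lorentzian function of the total field, adds a small oscillating modulation field, and demodulates at the modulation frequency. The lock-in output vanishes at zero field and varies linearly (with sign) for small static fields, removing the ambiguity of the bare Lorentzian.

```python
import numpy as np

LINEWIDTH = 30e-9  # hypothetical Lorentzian half-width in tesla (illustrative)

def polarization(total_field):
    """Transmitted-light signal: a Lorentzian function of the total field."""
    return 1.0 / (1.0 + (total_field / LINEWIDTH) ** 2)

def lockin_output(static_field, mod_amp=5e-9, mod_freq=1e3,
                  sample_rate=1e6, duration=0.05):
    """Apply an oscillating modulation field and demodulate at mod_freq."""
    t = np.arange(0, duration, 1.0 / sample_rate)
    reference = np.sin(2 * np.pi * mod_freq * t)
    signal = polarization(static_field + mod_amp * reference)
    # Lock-in detection: multiply by the reference and low-pass (here: mean)
    return 2.0 * np.mean(signal * reference)

# The output is ~zero at zero field and odd (sign-resolving) in the field
outputs = {b: lockin_output(b) for b in (-1e-9, 0.0, 1e-9)}
```

Sweeping the static field in this model traces out the dispersive, near-linear response around zero field that makes the device a magnetometer rather than a mere field-magnitude detector.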
Recent commercialization by the US company QuSpin has made OPMs robust, easy to use and readily available, while miniaturization has made the most recent generation small and lightweight (similar to a Lego brick in both size and weight). Based on this new design, our team has integrated OPMs into a working prototype MEG device. Because they are so small and don’t need cryogenics, OPMs can be mounted directly on the surface of a human head, increasing sensitivity by removing the thermally insulating gap and getting the sensor closer to the brain.
Based on this new design, our team has integrated optically pumped magnetometers into a working prototype MEG device
This also allows the sensor array to move with the head, making the MEG measurement resilient to subject motion. Similarly, flexibility of OPM placement means an array can adapt to any head size, enabling babies and children as well as adults to be scanned with the same system. The lack of complex cryogenics also means that OPM-based MEG systems are ostensibly cheaper to produce and run. This technology therefore allows MEG to evolve, making systems more practical, more powerful, significantly cheaper and consequently much more suitable for clinical use.
However, significant barriers had to be overcome before OPMs could be used in practice, and one of the biggest was controlling background magnetic fields. The field from the brain is much smaller than the time-varying fields that exist all around us, generated by laboratory equipment, computers, passing cars and even our bodies. Such sources create a cacophony of magnetic interference that makes measuring magnetic fields from the brain akin to hearing a pin drop at a rock concert.
Quantum toys
Miniaturization has made each optically pumped magnetometer (OPM) sensor similar in size and weight to a Lego brick. These sensors are manufactured by QuSpin in Colorado, US. (Courtesy: Matthew Brookes, University of Nottingham; Lisa Gilligan-Lee, University of Nottingham)
To complicate matters, there is also the Earth’s magnetic field. It has no impact on SQUIDs because they are sensitive only to time-varying fields – and of course the Earth’s field does not move. But with OPMs we want patients to move freely during a scan, and this means allowing magnetometers to move relative to the Earth’s field. The moving OPMs will therefore measure magnetic field changes that have nothing to do with the brain, because they are rotating in a uniform field (or translating in a field gradient). In some circumstances, this can even prevent OPMs from operating.
For these reasons, both the time-varying environmental magnetic fields and the Earth’s static magnetic field must be removed in the vicinity of the OPM measurement. We achieved this high-fidelity control through a combination of active and passive magnetic screening.
Passive screening involves placing the MEG system in a magnetically shielded room with walls made from multiple layers of high-permeability metal. However, even the most advanced passive shield cannot sufficiently suppress the remnant Earth’s field so that patient movement can be allowed, and for this reason active field compensation systems have been introduced.
3 Active field control
Helmholtz coils are devices that can create, or cancel out, a magnetic field in a region. But if we simply used them to control the external fields, the need to generate a complete 3D field vector (i.e. in three orthogonal orientations) would mean that the coils would have to completely surround and enclose the subject. Instead, we constrain coils to two planar systems, where both sets are made up of eight independent coils stacked together. Using techniques adapted from MRI gradient coil design, each coil is constructed to generate a different field or field gradient (e.g. Bx, By, Bz, dBx/dx, dBx/dy) over a central region. This produces a similar performance to that of a Helmholtz cage, but without restricting the subject. These ‘fingerprint’ coils (NeuroImage 181 760) are mounted on to two planes, each with an area of 1.6 x 1.6 m2, separated by 1.5 m. The complexity of the wire paths and the scale of the system means that fabrication of such a design is challenging. Yet easier access and less constraint on the subject makes the system much more practical for MEG studies. Importantly, these designs have other applications; similar techniques are being used to shield quantum gravimeters for advanced geophysical mapping of underground structures. (Courtesy: Niall Holmes, University of Nottingham; Lisa Gilligan-Lee, University of Nottingham)
These consist of intricate electromagnetic coil systems connected to a separate magnetometer array measuring the field close to the patient’s head. A feedback loop then uses these field measurements to generate an inverse field by controlling the currents in the coil array. In combination, these cancel the residual magnetic field almost entirely in the region of the patient’s head (figure 3). Working with industrial partner Magnetic Shields, we have built a unique hybrid active and passive shield in which the Earth’s field surrounding the patient is reduced from 50 μT to ~200 pT, a shielding factor of ~250,000. This technology generates a region of space around 0.5 m3 in which a patient can move freely during a scan.
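The quoted shielding factor follows directly from the numbers in the text, as a quick sanity check shows:

```python
earth_field = 50e-6       # ambient Earth's field: 50 uT, expressed in tesla
residual_field = 200e-12  # residual field inside the hybrid shield: ~200 pT

shielding_factor = earth_field / residual_field
print(f"Shielding factor: ~{shielding_factor:,.0f}")  # ~250,000
```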
This technology generates a region of space around 0.5 m3 in which a patient can move freely during a scan
To make the OPM-MEG system a reality we also had to solve a number of other problems. The exact arrangement of sensors on the head was optimized by a combination of analytical calculations and computer simulation. This allows an OPM array to efficiently sample the brain’s magnetic field while also minimizing “cross talk” – an effect where the measurement of field at one sensor is disrupted by the presence of other sensors. Meanwhile, physically holding an array of OPM sensors on the head was achieved by advanced 3D printing. Electronic control and data acquisition systems were also developed to enable synchronized measurements from up to 50 OPMs, while separate control systems were needed to integrate OPM outputs with coil systems and the control of patient stimuli. Finally, mathematical modelling packages were redeveloped to allow imaging of current density in the brain based on the OPM scalp-level field measurements. These developments have led to the world’s first wearable OPM-MEG system, giving complete coverage of the whole brain (figure 4).
4 The new generation of MEG
Conventional MEG scanners (left) are cumbersome, one-size-fits-all devices that require patients to keep very still. Via a pathway of critical developments, physicists at the University of Nottingham, collaborating with neuroscientists at University College London, have used optically pumped magnetometers (OPMs) to create a new generation of MEG – a wearable device, adaptable to anyone, in which subjects are free to move, and which gives better data quality. (Courtesy: University of Nottingham)
Wearable imaging
This unique combination of quantum technology and electromagnetic theory makes it possible to conduct previously unimaginable neuroimaging studies where subjects can move freely and interact naturally with the world. Even at this early stage, neuroscientific demonstrations have ranged from measuring brain activity as someone plays ping-pong, to making MEG recordings while a subject explores a virtual world (figure 5).
Laboratories around the world are trying to gain access to this new technology
With ongoing rapid exploitation, these examples are just the beginning; laboratories around the world are trying to gain access to this new technology, and a recently established spin-out company – Cerca Magnetics – is now bringing an integrated OPM-MEG system to the research market. Much of the ongoing technical development is now focused on imaging children’s brains, opening up the possibility of new studies on neurodevelopment. For example, imagine being able to see how a child’s brain function changes before and after they can walk, or talk. This opens a myriad of opportunities, prompting neuroscientists to conceive experiments in a completely new way.
5 Neuroscientific demonstrations
(a) Brain activity evoked when playing a ball game. Neural oscillations in the ~20 Hz band modulate, with the largest effect centred on the brain regions controlling wrist and arm movement (Nature 555 657, reused with permission of Springer Nature). (b) To test a virtual-reality environment, here the participant finds themselves in a virtual room and must lean around a post to see a chess board in the distance. If the chess board appears on the left, activity is seen in the right visual cortex (red overlay). If the chess board appears on the right, brain activity is seen in the left visual cortex (blue overlay) (NeuroImage 199 408, reused with permission of Elsevier; Ben McGeorge Henderon). (c) Brain activity measured in a two-year-old subject during a sensory task (CC BY 4.0, Nature Communications 10 4785).
Neuroscience aside, perhaps the ultimate marker of success for OPM-MEG will rest in its clinical applications. Conventional MEG can be used to diagnose epilepsy by identifying specific types of “spike and wave” activity. It is also already used when “resective” surgery (removing the region of the brain causing seizures) is required to treat epilepsy that cannot be controlled by drugs – a MEG scan locates the affected area, significantly increasing the chances of a successful surgery.
Understanding and managing human brain health is one of the major scientific challenges of the 21st century
MEG can also be used to map the “eloquent cortex” (areas of the brain that function normally) around the epileptic focus. This provides neurosurgeons with valuable information – for example, mapping the location of motor function in the brain allows a surgeon to avoid those regions, and so prevent paralyzing the patient. Now, however, the flexibility of OPM-MEG offers the chance for young children with epilepsy to benefit too, since it avoids some of the limitations of conventional imaging techniques such as EEG and MRI, which can struggle to localize where the problem lies. Its motion robustness should also make it possible to record brain activity during a seizure itself. For epilepsy patients, OPM-MEG is therefore extremely promising.
But it could be applied to other disorders too. For example, the same practicality provides a better means of assessing individuals with Parkinson’s disease, who can struggle to remain still enough for conventional systems. Moreover, one in four people suffer from a mental health condition at some point in their lives, and early recordings show that OPM-MEG can measure the connections in the brain that are thought to break down in severe mental health disorders. Meanwhile, characterization of “cortical slowing” (an effect where neural oscillations shift to lower frequencies) in the elderly may generate new and early markers of the onset of dementia. These are just some of the brain disorders that may benefit from this new device.
Ultimately, understanding and managing human brain health is one of the major scientific challenges of the 21st century, and the steps needed to meet that challenge are far from clear. However, from X-rays to MRI, ultrasound to nuclear medicine, physics has always been able to deliver technology that changes lives. As we look to the future, perhaps this early success story of the UK’s quantum technology programme will become a cornerstone of the next generation of healthcare technology.
Robotic platforms enable surgeons to perform complex – usually minimally invasive – procedures with more precision, flexibility and control than possible using conventional techniques. To maximize its effectiveness, minimally invasive surgery requires not only well-engineered robotic instruments, but also precise target definition, achieved via image-guidance technologies.
Prior to surgery in complex anatomical regions, nuclear medicine techniques such as SPECT/CT or PET/CT can be used to create a map of the patient showing the number and location of surgical targets, such as tumours or tumour-related lymph nodes. Such maps are generated by detecting injected radiotracer that localizes in tumour cells.
The same radiotracer also enables “radio-guidance” during robot-assisted, minimally invasive surgery. To detect the radioactive signal within the patient during the surgery itself, surgeons can use a new device known as a DROP-IN gamma probe, which was developed specifically for minimally invasive robotic procedures. The DROP-IN is a small tethered probe that detects gamma rays and generates a sound (and numeric readout) when it is close to a lesion containing radiotracer. The probe is positioned by the surgeon via the robot and is far more flexible than traditional laparoscopic gamma probes, allowing better localization of target lesions.
“The DROP-IN gamma probe is something we’ve been working on since 2014,” says Matthias van Oosterom from the Interventional Molecular Imaging laboratory at Leiden University Medical Center. “Since roughly 2018, we have been performing proof-of-concept studies in patients using this probe and have evaluated it in over 40 patients so far.”
Despite the availability of a patient map and intraoperative confirmation with the DROP-IN probe, it can still be challenging to find an efficient route through the patient during surgery, especially when the surgical targets lie at deep and anatomically complex locations. “In analogy to GPS navigation integrated in most modern cars, navigation is expected to take surgical localization to the next level,” says van Oosterom.
The challenge with surgical navigation, however, is that locating the exact position of the surgical instruments within the map of the patient (the SPECT/CT scan) is difficult, as many tool-tracking technologies aren’t suitable for use in a laparoscopic setting. To address this, van Oosterom and colleagues investigated the use of real-time fluorescence-based optical navigation of a DROP-IN gamma probe during robotic surgery. They tested the approach in phantoms and in pigs, reporting their results in the Journal of Nuclear Medicine.
Fluorescence tracking
To ease clinical translation, the team combined the DROP-IN gamma probe with two clinically approved technologies: a Da Vinci robot equipped with a Firefly Si fluorescence laparoscope; and a declipseSPECT navigation system.
To track the DROP-IN gamma probe, the researchers marked its housing with a three-ring pattern of fluorescein, which fluoresces in yellow. Automated detection of this fluorescence allowed the pose of the DROP-IN tip to be estimated with respect to the Firefly laparoscope. The declipseSPECT was then used to determine the pose of the Firefly and the patient within the operating room, based on reference targets attached to the Firefly and the patient.
The researchers first tested their navigation concept using a torso phantom containing bones, artery structures and pelvic lymph nodes filled with radioactive tracer. They fixed the navigation reference target on the phantom’s hip and acquired three SPECT/CT scans, using the reference target to register the scans within the operating room.
The view inside the phantom, as seen on the robotic surgical console, included an augmented reality overlay of the lesions segmented from the SPECT/CT scan. Maintaining a direct line-of-sight between the Firefly and the DROP-IN enabled real-time calculation of the distance between the targeted lesion and the DROP-IN tip, with audible feedback confirming effective navigation.
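Conceptually, this navigation amounts to chaining coordinate transforms: the lesion is known in the room frame from the registered SPECT/CT scan, the probe tip in the laparoscope frame from the fluorescein rings, and the laparoscope in the room frame from the declipseSPECT reference targets. A toy sketch of that geometry (the 4×4 pose and the positions below are invented for illustration, not values from the actual system):

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 so a 3-vector can be multiplied by a 4x4 transform."""
    return np.append(p, 1.0)

# Hypothetical pose of the laparoscope in the room frame, as a 4x4
# homogeneous transform (here a pure translation for simplicity)
T_room_cam = np.array([[1.0, 0.0, 0.0, 100.0],
                       [0.0, 1.0, 0.0,  50.0],
                       [0.0, 0.0, 1.0,   0.0],
                       [0.0, 0.0, 0.0,   1.0]])

# Probe-tip position in the laparoscope frame (estimated from the ring
# pattern) and lesion position in the room frame (from the SPECT/CT
# segmentation) -- all in millimetres, made-up values
tip_cam = np.array([10.0, 0.0, 200.0])
lesion_room = np.array([115.0, 50.0, 200.0])

# Chain the transforms to express the tip in the room frame
tip_room = (T_room_cam @ to_homogeneous(tip_cam))[:3]

# The distance that drives the real-time audible feedback
distance = np.linalg.norm(lesion_room - tip_room)
print(f"tip-to-lesion distance: {distance:.1f} mm")  # 5.0 mm
```

The key point is that every pose in the chain must be known at the same instant, which is why a direct line-of-sight between the Firefly and the DROP-IN markers is required.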
To investigate DROP-IN tracking in a real-life surgical setting, the researchers next performed robot-assisted laparoscopic surgery on pigs, in the surgical training facility of Orsi Academy. By depositing indocyanine green (ICG, which fluoresces in pink) in the abdominal wall they created surgical targets to which the DROP-IN probe was navigated.
In fluorescence-imaging mode, both the fluorescein markers on the DROP-IN probe and the ICG deposits were visible. Uniquely, fluorescein and ICG fluorescence could be excited simultaneously and the emissions distinguished using the raw signal output of the Firefly camera, allowing the surgeon to visually confirm the localized lesions during navigation. There was no obvious difference in intensity between the two fluorescence signals, and bleaching of the fluorescein rings was not observed during the hour-long experiments.
The researchers conclude that these first steps towards optical navigation of the DROP-IN gamma probe could further integrate interventional nuclear medicine into robotic surgery. The first application could be in prostate cancer treatments, where robotic surgery is already well established, says van Oosterom. He notes that radio-guided surgery using a radiotracer that targets prostate-specific membrane antigen (PSMA) is particularly promising. “The DROP-IN gamma probe allows the surgeon to operate on these patients with the robot, to specifically excise tumour cells using the PSMA-targeted tracer,” he explains.
The team has already demonstrated that the DROP-IN works well for robot-assisted radio-guided lymph node dissection in prostate cancer surgery, as described in European Urology. “Following our successful proof-of-concept studies, we are currently extending in-human trials that evaluate the PSMA-targeted robotic surgery,” van Oosterom tells Physics World. “Current findings suggest that our optical navigated DROP-IN gamma probe setup will provide additional value in these in-human studies.”
Testing times: all about a new quantum-enabled MEG scanning technique
In most types of medical imaging, the name of the game is to find out about the internal structure of the body, looking for growths, tumours or other abnormalities. But for many illnesses, simple structural information is not enough: you need to know how an organ functions too.
That’s particularly important when assessing disorders of the brain – such as epilepsy, dementia or problems with mental health – where a structural image, no matter how detailed, often looks “normal”, even in patients with profound difficulties.
One method of imaging brain function is magnetoencephalography (MEG), which traditionally uses superconducting quantum interference devices (SQUIDs) to measure the tiny magnetic fields created by neuronal assemblies. Trouble is, these devices have to be cooled to –269 °C, which is one reason why MEG scanners are so expensive.
In the February 2021 issue of Physics World magazine, Hannah Coleman and Matt Brookes from the University of Nottingham in the UK explain how “optically pumped magnetometers” could allow MEG to be more widely used. These quantum-enabled magnetic devices are as sensitive as SQUIDs – but don’t need any fiddly cryogenics. You can also read the article online here.
For the record, here’s a run-down of what else is in the issue.
• Radio offers view of gravitational waves – New limits on the size of primordial gravitational waves have been set that are several orders of magnitude lower than the most sensitive laboratory experiment, as Edwin Cartlidge reports
• Physicists welcome Brexit deal – While there is relief that the UK has reached a deal with the European Union, the true impact of the agreement will be difficult to unravel given the ongoing pandemic, as Michael Allen reports
• Protect the scientists of tomorrow – Karel Green says that funders must support all PhD students due to the devastating effects on research caused by the COVID-19 pandemic
• Let’s go green – James McKenzie believes the UK government’s ambitious 10-point-plan for a “green industrial revolution” can deliver – if we put our collective minds to the problem
• Very deep thinking – Robert P Crease discovers why a nuclear-waste programme in Finland can help us to envisage the world a million years from now
• Quantum sensing the brain – Novel technology in healthcare based on fundamental physics saves millions of lives every year, but while these machines have revolutionized medicine, the next generation must meet even greater challenges. Hannah Coleman and Matt Brookes are hoping that the University of Nottingham’s new quantum-enabled MEG scanner will herald a new dawn in the study of human brain function
• Improving nuclear fuel safety – A decade after the Fukushima disaster, Michael Allen investigates claims from academia and industry that the next generation of nuclear fuels could reduce the risk of similar accidents occurring ever again
• The beams at the edge of physics – Creating a new cutting-edge accelerator isn’t cheap or easy. But as Kit Chapman discovers, the upcoming Facility for Rare Isotope Beams (FRIB) in Michigan promises great things for nuclear physicists, especially those with applications in mind
• Rudiments of reality – Philip Ball reviews Fundamentals: Ten Keys to Reality by Frank Wilczek
• Reinventing the science museum – Margaret Harris reviews Idea Colliders: the Future of Science Museums by Michael John Gorman
• Building a quantum powered future – Quantum physicist and chief executive of Oxford Quantum Circuits Ilana Wisby talks to Tushna Commissariat about deep tech, entrepreneurship and quantum technologies of the future
• Islands in the stream – Logan Chipkin on how other galaxies were discovered
Intense “blue jets” that form during thunderstorms and extend into the stratosphere have been studied in detail by researchers in Denmark, Norway and Spain. Led by Torsten Neubert at the Technical University of Denmark, the team used thunderstorm observations taken aboard the International Space Station (ISS) to work out how the jets form, and how they influence processes higher up in the atmosphere.
Blue jets are short-lived, lightning-like electrical discharges that can appear in the upper reaches of storm clouds. They occur when the potential difference between positively charged upper cloud regions and the negatively charged boundary layer between the cloud and the stratosphere exceeds a certain breakdown voltage. At this point, previously insulating air begins to conduct intense electrical currents.
Within these environments, the jets begin their lives as channels of ionized air called “leaders”. These channels have two propagating tips, which travel in opposite directions due to their opposing charges. As the positive tip propagates upwards towards the negatively charged boundary layer, it transitions into branching discharges called “streamers”, which fan out to form the cone-shaped structures observed as blue jets.
Electric field pulses
Previous studies have associated blue jets with strong electric field pulses that last between 10 and 30 µs and are accompanied by intense bursts of radio waves. Until now, however, researchers had not fully characterized how this behaviour unfolds.
Neubert’s team studied this phenomenon using observations from the ISS, which offers an unimpeded view of the tops of thunderclouds. Measurements were made using the ISS’s Atmosphere-Space Interactions Monitor (ASIM) instrument, which contains three photometers that measure the intensity of electromagnetic radiation emitted by different atmospheric gases during lightning flashes. In addition, ASIM has two cameras that take images of flashes.
The team used ASIM to monitor a thunderstorm in the central Pacific Ocean in February 2019. During the event, they recorded five intense blue flashes, each lasting around 10 µs. From their measurements, they discovered that the flashes initiated pulsating blue jets between the stratosphere and the ionosphere – the region of Earth’s upper atmosphere where air molecules are ionized by solar radiation.
Neubert’s team concluded that these jets occur as branching streamers are initiated by short, localised leaders, like those found in more familiar cloud-to-ground lightning. As they fanned out, these streamers unleashed waves of ionization in the surrounding air. In addition, the researchers showed how blue jets are accompanied by “elves” in the ionosphere. These dim, expanding glows occur around 100 km above the ground, and can expand to hundreds of kilometres in diameter. Overall, the team’s results offer researchers a far greater understanding of how these dynamic events unfold.
The COVID-19 pandemic, which began early last year, continues to have a devastating impact on research, with the UK last month entering a third national lockdown. The coronavirus has forced labs and universities to close around the world, with experiments grinding to a halt. Any PhD student who conducts lab work or depends on experimental data will have found themselves in an indefinite state of limbo.
In such circumstances it may seem obvious to allocate sufficient funding to allow any PhD student who has had their work held up by the pandemic to extend their research if they need to do so. But funders in the UK – one of the worst-hit countries – think otherwise. They have declared that only those in the final year of their PhDs can be granted an extension. Yet all students are working at a far lower rate than in the pre-COVID days, which is why this decision to give extra time only to those in their final year desperately needs to change.
In the case of UK Research and Innovation (UKRI) – the umbrella body for the seven UK research councils – funding extensions for final-year PhD students were introduced following the first UK lockdown in March 2020. Students reported, however, that the application process was stressful and the questions vague and tedious. Applicants had to prove that they could not finish within their initial funding period and had to request the length they would like to extend for. But since this funding was only available to students who applied, those worst affected by the pandemic – including those literally sick with COVID-19 – were not able to take advantage of it. According to the UKRI, the average extension requested was just under five months, but given the mental and physical toll of the pandemic as well as subsequent lockdowns, this time would likely have increased for many.
In a statement on 11 November 2020 the UKRI’s best solution after seven months of planning was a blanket recommendation that PhD students should simply alter their project to finish within their initial funding period. For those doctoral students who will find it hard to adjust their project, the UKRI offered a measly £19m – or 0.32% of its £6bn budget – in financial support. The way this latest grant allocation is set up threatens to disproportionately affect the most vulnerable, with those from minority ethnic and/or working-class backgrounds, LGBT+ people, those with disabilities and those with dependents being most at risk. This policy threatens to further entrench the systemic inequalities already seen in many technical fields.
Many experimentalists chose their research area because they enjoy practical work or want to be part of a specific collaboration that fits their interest. PhD researchers are now expected to overhaul all their analysis – a task that is difficult, if not impossible – with projects on the brink of being destroyed. PhD supervisors have spent countless hours converting years of lectures and seminars to work in an online format. That is on top of continuing “world class” research and other day-to-day life and academic responsibilities that may have changed drastically due to the pandemic. By prompting academics and their PhD students to “act now” and essentially sort this massive issue out on their own, the UKRI is simply passing the buck, showing that it is not willing to provide competent support itself. Indeed, this policy led the University and College Union to publish an open letter – signed by more than 1000 academics – condemning the UKRI’s decision and revealing that the UKRI failed to engage with people on the frontline of research, instead relying on consultation with managers and administrators.
Meaningful support
I have been extremely lucky. My PhD research has easily translated online, and I have been able to work during the pandemic. Despite these privileges, it seems the only effort has been put into preserving work and work alone. The innate issues that existed in being a PhD student have been exacerbated. This includes being interchangeably treated as staff or student depending on what is most convenient and being expendable labour – both of which create an environment unconducive to quality research.
Despite all this, I do not regret doing a PhD. I have learnt many skills I would not have otherwise. I get to work for an incredibly welcoming department with supervisors and academics who personally respect their PhD students and have gone above and beyond to make this situation as easy as possible for us. For this I am extremely grateful, and I would encourage anyone who is thinking of undertaking PhD research to do so. How you are treated can be problematic at times, but that is not unique to the field and allowing it to dissuade you will further entrench problems.
But more support needs to be forthcoming from funders to protect PhD students. We are the scientists of tomorrow and our work is constantly used throughout all stages of academia. The UKRI policy, as it stands, could eliminate a generation of vulnerable people who would otherwise make excellent researchers. Real, meaningful support means a default, funded, blanket extension for all, so that those who wish to complete their doctoral studies as they want to, can do so.
Irregular heartbeat. Shortness of breath. These symptoms are often the first signs of infection in patients suffering from a cardiorespiratory disease. But variations in heart and respiration rate can be subtle, particularly in the early stages of infection. Therefore, sensors that can accurately track small changes in heart and lung activity are of great interest to clinicians – especially during the COVID-19 pandemic.
Yong Xu and his team at Wayne State University and Arizona State University have developed a wearable electrochemical sensor that can record heart and lung sounds with ultrahigh sensitivity. Housed in EcoFlex 00-20, a skin-safe silicone rubber used in prosthetics, the low-cost device could aid diagnosis and long-term management of many cardiovascular and respiratory diseases. The researchers describe their new sensor in Applied Physics Letters.
Recording the faintest sounds
The 28 mm wide sensor, designed to be worn on a patient’s chest, uses iodide/triiodide (I–/I3–) redox chemistry to measure mechano-acoustic signals (sounds) from the heart and lungs.
The I–/I3– redox couple is no stranger to motion-sensor technology. Some high-performance seismometers, for example, use iodine-based electrochemistry to convert low-frequency ground vibrations into electrical signals. This is because the current that arises from the redox reactions – the transduction mechanism used to generate the output signal – is extremely sensitive to even the smallest of mechanical motions.
Xu and his team applied the same transduction principles to their device. The sensor consists of a circular cavity filled with an electrolyte solution containing I– and I3–. The cavity connects to a channel containing two platinum anode–cathode pairs. The cavity and channel are topped with a 500 µm EcoFlex diaphragm, which is in contact with the patient’s skin.
When a DC voltage is applied to the two anodes, a current flows between each anode–cathode pair. Both currents are subsequently converted to voltages using a 10 kΩ feedback resistor and an 8.5 nF feedback capacitor. These voltages are then recorded by a data acquisition board and processed to obtain the differential voltage – the final output signal of the sensor.
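As a rough check on those feedback values: assuming the standard transimpedance stage with a parallel RC feedback network (the paper may use a different topology), the low-frequency gain is simply the feedback resistance and the –3 dB corner follows from the RC product:

```python
import math

def transimpedance_gain_and_cutoff(r_feedback, c_feedback):
    """Gain (V per A) and -3 dB corner frequency of a transimpedance
    stage with a parallel RC feedback network."""
    gain = r_feedback  # output volts per input amp, at low frequency
    f_cutoff = 1.0 / (2.0 * math.pi * r_feedback * c_feedback)
    return gain, f_cutoff

# Values quoted in the article: 10 kOhm resistor, 8.5 nF capacitor
gain, f_c = transimpedance_gain_and_cutoff(10e3, 8.5e-9)
print(f"gain = {gain:.0f} V/A, cutoff = {f_c:.0f} Hz")  # ~1872 Hz
```

A corner near 1.9 kHz comfortably passes the 20–300 Hz band where heart sounds live, while rolling off higher-frequency noise.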
Because EcoFlex is flexible, small movements in the patient’s chest (like their heartbeat, for example) cause the diaphragm to move the liquid electrolyte along the channel. The movement of the I–/I3– ions relative to each electrode creates a change in current. This change can be observed in the differential voltage signal.
Preliminary tests show promise
So far, the device has only been tested on one volunteer. Nevertheless, it consistently recorded their heartbeat with a signal-to-noise ratio of over 6:1.
The sensor detects heart (top graph) and lung (bottom graph) signals with excellent sensitivity. (Courtesy: Yong Xu)
To measure lung sounds, which are much quieter than heart sounds, the volunteer performed a set of five consecutive breathing cycles. Each period of inhalation, breath-holding and exhalation was easily seen in the waveform of the output signal.
The researchers also monitored the volunteer’s respiration rate by processing the raw data with either a high-frequency (20 to 300 Hz) or low-frequency (0.1 to 15 Hz) bandpass filter. The high-frequency filter was used to extract the amplitude of the heart sound signals, which varies as the patient’s chest volume changes during respiration. Meanwhile, the low-frequency filter was used to isolate the low-frequency component of the signal, which corresponds to deformations within the sensor as the chest expands and contracts during each breathing cycle.
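The same split can be sketched with standard signal-processing tools. Below, a synthetic chest signal (a 1 Hz respiration component plus a small 50 Hz “heart sound” tone, invented for illustration) is passed through two zero-phase Butterworth band-pass filters matching the bands quoted in the article; the exact filter design used by the researchers is not specified:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, low_hz, high_hz, fs, order=2):
    """Zero-phase Butterworth band-pass filter (second-order sections)."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Synthetic 10 s recording sampled at 1 kHz (illustrative values only)
fs = 1000
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.sin(2 * np.pi * 50.0 * t)

heart_band = bandpass(raw, 20, 300, fs)   # keeps the 50 Hz component
resp_band = bandpass(raw, 0.1, 15, fs)    # keeps the 1 Hz component
```

Filtering the one raw trace twice is what lets a single sensor report both heart activity and respiration rate.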
While preliminary, the results demonstrate how the sensor could be applied within wearable health monitoring. The researchers envision that the technology could be used for self-management of cardiac diseases like heart failure, as well as to diagnose respiratory diseases like COVID-19.
Highly ordered nanopatterns spontaneously form on the surface of alloys as the metal solidifies. This process, which is very different from those that occur in the materials’ bulk, could make it possible to create bespoke patterns on metallic structures and construct devices with applications in electronics and optoelectronics.
Patterns are ubiquitous in nature – think leopard spots and zebra stripes – as well as in human-made systems. Within metallic alloys, micron-sized layers and rod-like structures form when the materials leave the liquid phase and solidify into crystals. This phenomenon mostly occurs via metastable processes known as pattern nucleation and growth. Both processes have long been studied in the bulk of alloys, but researchers had previously paid less attention to their role in forming patterns on the alloys’ surfaces.
Enriched minority phase
Researchers led by Kourosh Kalantar-Zadeh of the University of New South Wales (UNSW) in Sydney, Australia used nanoscale infrared (nanoIR) and surface-enhanced Raman spectroscopy to observe a variety of organized nanopatterns forming on the surface of solidifying gallium-based alloys that contained a small fraction of bismuth (Bi0.0022Ga0.9978, EBiGa). The patterns were dominated by the minority bismuth phase and included alternating stripes, curved fibres, dot arrays and even stripe-dot hybrids.
The researchers observed the front of the bismuth-enriched solid propagating along the material’s surface, generating phase-separated patterns in its wake. The nucleation of this bismuth phase then triggered further growth of the phase along the surface – a surface-catalysed process quite different from conventional bulk solidification.
To understand more about these processes, Kalantar-Zadeh and colleagues turned to collaborators at the MacDiarmid Institute for Advanced Materials and Nanotechnology in New Zealand and RMIT in Australia. These collaborators used molecular dynamics simulations to better understand the UNSW group’s experimental findings. Their calculations revealed that the bismuth atoms appear to move around randomly in a sea (or solvent) of gallium atoms that accumulate at the alloy’s surface – something that classical metallurgy fails to predict.
“An exciting finding”
“This previously ignored surface solidification phenomenon improves our fundamental understanding of liquid metal alloys and the phase transition processes they undergo,” Kalantar-Zadeh explains. “The autonomous surface process we observed could be used as a patterning tool for designing metallic structures and creating devices for advanced applications in future electronics and optics.”
Study lead author Jianbo Tang adds that the surface enrichment of minority phases is “an exciting finding” for efforts to develop metallic structures containing rare or expensive materials for surface-based applications. “Given the diversity of metal species and their abundant combinations, the surface solidification effect could lead to energy-efficient nanoengineering of the surface structures of high-melting-point metals (as has been shown with silver and gold) by incorporating them into room-temperature or low-melting-point metal solvents (such as gallium, indium, tin, bismuth and their alloys),” the researchers write in Nature Nanotechnology.
The researchers also point out that the minority atoms can cover a large surface. This might be an advantage for applications that need such atoms to be accessible on the surface of fabricated structures, but it could also be a disadvantage if high-purity alloys are required. The researchers therefore recommend that this pattern-forming ability should be carefully considered in advanced manufacturing processes.
The researchers now plan to conduct further studies of liquid metals, believing that the underlying physics and chemistry of these materials – much of which remains unexplored – could have “fascinating” future applications. “Metals in liquid form can be considered as exotic solvents,” Kalantar-Zadeh tells Physics World. “They can thus be used as reaction media to make industrial-grade materials and to convert carbon dioxide with very little energy. They can also be used as the components of intelligent data processors, sensors and actuators, and for developing advanced soft electronic and optical elements.”
This webinar will present a fairly new technology for micro- and nanofabrication and characterization: focused ion beam (FIB) technology, a tool that has been available on the market since the early 2000s.
With a FIB it is possible to image and modify materials at the micro- and nanoscale by milling and deposition. Despite its complex architecture, it is a relatively user-friendly machine, especially when included in a dual-beam system with an electron microscope.
Giuseppe Firpo is a physicist and currently a technologist and head of the technical department at the Dipartimento di Fisica, Università degli Studi di Genova. He is an expert in vacuum science and technology and has published patents and several scientific papers in peer-reviewed journals on this subject. Since 2005, he has used FIB systems to fabricate nanostructures for biomedical sensing devices (lab-on-a-chip) and for materials science applications. His latest research is on the permeability of ultra-thin membranes for gas separation technology.