Innovation in action: (left to right) the electrical response from a multilayered ceramic capacitor (Courtesy: Park Systems); Measuring the Hall effect (Courtesy: Lake Shore Cryotronics); the power of confocal Raman imaging (Courtesy: WITec).
This time we are featuring webinars from Park Systems and WITec, as well as a white paper from Lake Shore Cryotronics.
Advances in microscopy
Piezoelectric force microscopy (PFM) is a form of scanning probe microscopy (SPM) that lets researchers image and manipulate piezoelectric and ferroelectric domains in materials. In this webinar, members of the technical-marketing team at Park Systems introduce the principles behind the technique, in which a sharp conductive SPM probe is brought into contact with a sample surface and an AC bias is applied to the probe’s tip, deforming the surface via the converse piezoelectric effect. The PinPoint variant of PFM, specific to Park Systems’ atomic-force microscopes, lets you image samples with higher spatial resolution and with no lateral force. The webinar, PinPoint Piezoelectric Force Microscopy, also reviews the technique’s potential for investigating the electric domain structures (such as polarity) of piezoelectric and ferroelectric materials.
Atomic force microscopy (AFM) is a powerful tool in nanotechnology that has been widely used to study collagen, which is the most common protein in mammals and is found everywhere in connective tissues, including bone, skin, and muscle. Conventional techniques to characterize collagen fibrils are mainly based on AFM force-volume spectroscopy, which collects force-distance curves at each pixel to calculate material properties. However, these techniques are slow and it can take hours to acquire an elasticity map, which is one reason why Park Systems has developed the PinPoint Nanomechanical Mode. At least 100 times faster than traditional techniques, it can create a map in minutes, with a correlated topography image that reveals the position and orientation of the sample. In this webinar, In-Air and In-Liquid AFM Imaging with PinPoint Nanomechanical Mode, staff at Park Systems discuss this new mode, which can create quantitative maps of the mechanical properties of various materials, ranging from hard disks to soft tissues, including collagen fibrils.
Scanning ion conductance microscopy (SICM) is a variant of scanning probe microscopy (SPM) that allows researchers to determine the surface topography of samples with nanometre resolution, non-destructively and under in situ conditions. Scanning electrochemical microscopy (SECM), meanwhile, is another SPM technique, this time letting you study local electrochemical phenomena at various material interfaces in liquids. SICM exploits the increase in access resistance at the tip of a nanopipette placed in an electrolyte solution: it monitors the ionic current flowing in and out of this probe – a flow that is hindered as the tip closes in on a sample surface. In SECM, in contrast, an electrode tip is used to acquire spatially resolved electrochemical signals over a region of interest, with the 2D raster-scan information yielding images of surface reactivity and information on the rates of chemical processes. In this webinar, Scanning Ion Conductance Microscopy (SICM) and Scanning Electrochemical Microscopy (SECM), applications staff at Park Systems explain the basics of both techniques and discuss their applications in analytical chemistry and electrochemistry.
Material matters
The Hall effect is not just a key phenomenon in condensed-matter physics – it’s also a vital method for characterising the material properties of semiconductors. In the 88-page Hall Effect Measurement Handbook white paper, Lake Shore senior scientist Jeffrey Lindemuth provides both new and experienced materials researchers with a comprehensive guide to the theory of Hall measurements. It covers methods for measuring the resistivity and Hall coefficient of materials, the major sources of measurement error, ways to minimize their effects, and more besides.
Seeing molecules in 3D
Confocal Raman imaging is a powerful, versatile and ever-more common microscopy technique that can quickly identify the molecules in a sample and visualize their distribution in 3D space. To find out more about this non-destructive and label-free chemical imaging method, which has big potential in many different fields, why not view the webinar The Analytical Power of Correlative Raman Imaging: New Developments, Tools and Applications by Miriam Böhmler and Ute Schmidt. As senior application scientists at WITec, they introduce the basic principles of Raman microscopy, explain the associated hardware and software, and describe several of its variations – including relevant application examples. They then introduce correlative microscopy and provide details of Raman-AFM and Raman-SEM (RISE) microscopy too.
An array of low-voltage LEDs that emit short-wavelength ultraviolet (UV) light could be used to kill harmful microbes on the skin and in the throat and nose – with minimal harmful side-effects for patients. The prototype device was developed by researchers at Germany’s Ferdinand Braun Institute (FBH) and the Technical University of Berlin (TUB), who are currently testing its safety and efficacy.
Treating disease with antibiotics has led to the rapid evolution of drug-resistant pathogens – which are believed to be responsible for about 700,000 deaths per year worldwide. One emerging solution to this global health problem involves killing harmful microbes on skin and other living surfaces using short-wavelength ultraviolet light. Unlike the longer-wavelength UV light currently used to kill microbes on inanimate surfaces, this UV light travels only a very short distance in skin. As a result, it cannot penetrate the dead upper layers of the skin to reach the living cells below. Any minor damage to skin that does occur is easily managed by the skin’s natural healing response – minimizing harmful side-effects.
While short-wavelength UV light can be produced by LEDs, such components are not yet widely available commercially because of various technological challenges. In their study, the FBH and TUB teams developed a prototype device that incorporates emerging LED technology. It features an array of 118 LEDs spread over an area of 64 cm² and emitting UV light at a wavelength of around 230 nm. The setup delivered a maximum irradiance of 0.2 mW/cm² over an area of 36 cm², with 90% uniformity.
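For a sense of what that irradiance means in practice, the dose delivered to a surface is simply irradiance multiplied by exposure time. The short Python sketch below works through that arithmetic for the quoted 0.2 mW/cm² output; the target dose used here is an arbitrary illustrative figure, not one taken from the study.

```python
# Illustrative dose arithmetic for a far-UVC LED array (example figures only).
# Dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s).

irradiance = 0.2          # mW/cm^2, maximum quoted for the prototype
target_dose = 20.0        # mJ/cm^2, hypothetical example dose (not from the study)

exposure_time = target_dose / irradiance   # seconds
print(f"{exposure_time:.0f} s (~{exposure_time/60:.1f} min) to deliver {target_dose} mJ/cm^2")
# -> 100 s (~1.7 min)
```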
Assessing DNA damage
The researchers will now assess the performance of their device using tissue samples of both skin and mucous membrane. The latter is the soft lining of the nasal cavity and the back of the throat, both of which are preferred habitats for many harmful pathogens. Through these tests, they hope to determine the extent of DNA damage caused by varying doses of light. They will also compare the pathogen-killing effectiveness of this radiation with that of UV light at other wavelengths.
The researchers believe their device promises significant safety improvements over previous ultraviolet-based approaches, since the LEDs involved operate at low voltages and impart little heat or strain to the skin. They hope their insights may soon lead to miniaturized devices that could be incorporated into endoscopes, allowing pathogens to be killed in small, hard-to-access orifices. Perhaps most importantly, such devices could also allow doctors to inactivate the SARS-CoV-2 virus responsible for COVID-19 inside the throat – the very spot where it first begins to replicate.
Hovering 3D images are the stuff of science fiction. Just think of R2D2 projecting a message from Princess Leia in Star Wars, or perhaps Iron Man designing his latest tech in Marvel movies. Classics like Blade Runner and Back to the Future even had them in the background of shots as giant advertisements. But in a Faraday cage at the University of Bristol in the UK, fiction is becoming reality with the help of some speakers and polystyrene balls.
As I step into the windowless box that is part of the Ultrasonics and Non-destructive Testing Laboratory, it feels like I’ve walked into a shipping container. Research associate Tatsuki Fushimi (who has since moved to the University of Tsukuba in Japan) explains that he works in here not because he needs to block electromagnetic fields – which is what a Faraday cage is normally used for – but because being inside the sealed box cuts down on drafts that might knock his polystyrene balls out of the air.
Fushimi then shows me what we have come to see: two horizontal grids of 30 miniature loudspeakers, spaced around 20 cm apart and facing each other. The 10 mm-diameter speakers are the same as the ultrasonic transmitters and receivers used in car parking sensors. “You can buy them from – maybe not Amazon – but from normal electronic shops,” Fushimi says. After switching on his laptop, he picks up a tiny polystyrene ball, places it between the two transducer arrays and lets it go. And the ball just stays there, hovering, suspended in mid-air.
It starts with a tractor beam
The group at Bristol, and the Interact Lab at the University of Sussex in the UK, are at the forefront of the new field of “acoustic levitation” – essentially using sound to lift objects by counteracting the force of gravity with the pressure of acoustic waves. In 2015 a collaboration between the two groups unveiled a device known as a sonic tractor beam that could levitate objects and rotate and move them in multiple directions (Nature Commun. 6 8661). While similar types of levitation had been demonstrated before, previous attempts required particles to be surrounded by speakers in all, or at least most, directions. The difference with this new device was that it used just a single array of 64 loudspeakers, operating at 40 kHz.
Follow the path Snapshots of a polystyrene bead as it is moved along a 3D path using a single-sided array of transducers, as achieved by researchers at the universities of Bristol and Sussex. (CC BY 4.0/Nature Commun. 6 8661)
Driven by programmable electronics, the array of transducers produced acoustic shapes out of high-pressure ultrasound waves that could surround and trap objects in mid-air, and be adjusted to rotate and move them. The researchers created three different acoustic shapes with the sonic tractor beam: tweezers, a vortex that trapped objects at its core, and a cage. In each case they demonstrated that they could control polystyrene particles ranging from around 0.5 to 3 mm in diameter.
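All of these devices rely on the same underlying trick: each transducer is driven with a phase offset chosen so that the emitted 40 kHz waves interfere to build the desired pressure pattern. The Python sketch below illustrates the simplest case – focusing a flat array at a single point by compensating for the different path lengths. It is an illustrative calculation only, not the Bristol or Sussex control code; real traps such as the tweezers, vortex and cage use more elaborate phase patterns built on the same principle.

```python
# Minimal sketch: phases that focus a flat ultrasonic array at one point.
# Illustrative only -- real levitation traps use richer phase patterns.
import numpy as np

c = 343.0                      # speed of sound in air, m/s
f = 40e3                       # drive frequency, Hz (as in the tractor beam)
k = 2 * np.pi * f / c          # wavenumber

# 8 x 8 array of transducers on a 10 mm pitch, centred on the origin
pitch = 0.01
xs = (np.arange(8) - 3.5) * pitch
positions = np.array([(x, y, 0.0) for x in xs for y in xs])

focus = np.array([0.0, 0.0, 0.10])   # focal point 10 cm above the array

# Each transducer fires "early" by the extra distance it must cover,
# so that all wavefronts arrive at the focus in phase.
distances = np.linalg.norm(positions - focus, axis=1)
phases = (-k * distances) % (2 * np.pi)

print(phases.reshape(8, 8).round(2))
```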
Since then, the field has progressed with various innovations. For example, in 2018 a team at the Interact Lab combined a larger array of 256 loudspeakers with an acoustic metamaterial – an engineered material with structural properties that do not usually occur naturally – to bend a beam of sound around an obstacle, and levitate and manipulate an object on the other side. This device was named SoundBender (UIST ‘18 10.1145/3242587.3242590).
Then later in 2018, Bruce Drinkwater, professor of ultrasonics in the Bristol group, and Asier Marzo, who is now based at the Public University of Navarre in Spain, unveiled an acoustic levitation device that used two arrays of 256 loudspeakers to levitate and individually manipulate up to 25 polystyrene balls (diameter 1–3 mm) at the same time (PNAS 116 84). The ability to create and adjust multiple acoustic traps simultaneously opened new possible applications for sonic tractor beams.
It’s no obstacle Gianluca Memoli and Mohd Adili Norasikin levitate a white bead around a LEGO minifigure using SoundBender – a device they developed at the Interact Lab at the University of Sussex. (Courtesy: University of Sussex)
The persistence of vision
An early idea the Bristol researchers had was to create mid-air, hologram-like visual displays by using multiple levitated and illuminated polystyrene beads as if they were pixels. But they found this approach didn’t work as well as they had hoped. The graphics created were poor and coarse because the beads had to be at least a wavelength apart (about 1 cm). And the more particles used, the less power there was to manipulate each of them individually.
The team therefore came up with another idea: trace the image with a single acoustically levitated and illuminated ball travelling at high speeds. Essentially, if you move the illuminated particle fast and precisely enough you can create the illusion of the picture. It’s all thanks to persistence of vision – the capacity of the eye to briefly maintain an image on the retina after it has disappeared, enabling successive images that follow rapidly after each other to be perceived as one.
If you move the illuminated particle fast and precisely enough you can create the illusion of the picture, thanks to persistence of vision
Fushimi says that the levitating particle is essentially a way of displaying the light needed to create the 3D image. “For light to be seen at each point you need something for it to be reflecting off. By placing this particle at a point in space you create like a voxel [a 3D pixel] in space. The question of how you make those voxels appear in mid-air was solved by using acoustic levitation,” he explains.
Back in the Faraday cage, Fushimi fiddles with his computer – which is connected to the arrays via a set of control boards and amplifiers – and the levitated polystyrene ball starts to trace out a circle in mid-air, changing colour as it is illuminated by a nearby multicoloured light. As the bead speeds up while circling continuously on the same path, it becomes blurred, and an image of the circle a couple of centimetres in diameter just about persists.
The polystyrene ball is now moving at 5 Hz, which, Fushimi explains, means that it travels around the circle five times a second. To achieve the persistence-of-vision effect “the minimum frequency that we need to move these particles at is 10 Hz”, he adds. If the ball was completing the circle 10 times every second, in theory a multicoloured image of a circle would appear in mid-air.
1 Speed limit To achieve the persistence-of-vision effect, levitated particles need to trace the desired shape at high speed. For a circle, figure-of-eight or square a couple of centimetres in size, that requires a frequency of 10 Hz. However, as the acoustic trap holding the particle is moved with increasing speed, it starts to resonate due to a lack of damping, and the shape loses its smoothness. To see the shape clearly, therefore, you currently need to use lower frequencies and a camera with a slow shutter speed. (Reused from Appl. Phys. Lett. with permission of AIP Publishing)
Unveiled last summer, this “acoustophoretic volumetric display” dramatically increases the speed and accuracy with which a 0.7 mm polystyrene ball can be manipulated compared with previous acoustic levitators (Appl. Phys. Lett. 115 064101). The particle can be positioned with an accuracy of 0.11 mm in the horizontal axis and 0.03 mm in the vertical axis, while moving at a speed of 60 cm/s. With this device, the researchers can accurately trace out images such as a 12 mm² replica of the University of Bristol logo, as well as simple shapes like circles, figure-of-eights and squares. They are not yet, however, able to create them fast enough to achieve the persistence-of-vision effect. To see the images, you need to use a camera with a slow shutter speed.
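To see why 10 Hz is demanding, note that the bead’s required linear speed is simply the path length multiplied by the repetition rate. For a circle a couple of centimetres across, that comes out close to the 60 cm/s quoted above, as this back-of-the-envelope sketch shows.

```python
# Back-of-the-envelope: bead speed needed for persistence of vision.
import math

diameter = 0.02        # m, a circle "a couple of centimetres" across
frequency = 10.0       # Hz, minimum repetition rate for persistence of vision

path_length = math.pi * diameter          # circumference of the circle
speed = path_length * frequency           # required linear speed, m/s
print(f"{speed*100:.0f} cm/s")            # -> ~63 cm/s, close to the 60 cm/s achieved
```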
When the polystyrene ball in the Faraday cage hits a frequency of 10 Hz it does produce a circle that can be viewed as an image, but it is not a smooth circle – it has a wavy, squiggly outline (figure 1). Still, it serves as a proof of concept.
Armed with two arrays of 256 transducers, the Sussex researchers have demonstrated that they can move a polystyrene bead at speeds of almost 9 m/s. This allows them to draw 2 cm images of torus knots, smiley faces and letters in less than 0.1 s, which is fast enough for them to be visible to the naked eye. They can also create more dynamic content, such as a number countdown, and still achieve the persistence-of-vision effect. More complex images like 3D globes and the University of Sussex logo can’t be viewed with the naked eye, however, and instead require long camera exposures to be seen properly.
Drinkwater says that the performance of the Sussex device is impressive. “The accelerations and speeds are so good, whereas the hardware is essentially the same [as ours], the only difference is it is bigger. The natural thing to think is that bigger might slow it down, but it doesn’t,” he adds.
He believes that the reason the larger device achieves higher speeds is that the particle is positioned further from the speaker arrays. This means, he explains, that the changes in the phase of the ultrasound required to shift the acoustic traps – and with them the particle – can be smaller. And if the phase changes are smaller, you only hit the limits of the loudspeakers at higher particle speeds. Fushimi says it is much like drawing an image on a wall with a laser pen. “If you are very close to the wall you have to move a lot to move the laser point from one end of the wall to the other, but if you are further away you only have to flick your arm a tiny bit.”
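That argument can be made semi-quantitative with a little geometry: the phase change a transducer needs in order to shift the focus sideways is proportional to the change in its path length to the focus, and that change shrinks as the focus moves further from the array. The sketch below – an illustrative calculation, not anything from either group’s software – compares the phase swing needed for the same 1 cm lateral move at two different heights above an edge transducer.

```python
# Illustrative: how much phase swing a lateral trap move costs at two heights.
import numpy as np

c, f = 343.0, 40e3
k = 2 * np.pi * f / c                        # wavenumber of the 40 kHz carrier

transducer = np.array([0.05, 0.0, 0.0])      # an edge transducer, 5 cm off-axis

def phase_swing(height, dx=0.01):
    """Phase change at this transducer when the focus moves dx sideways."""
    p1 = np.array([0.0, 0.0, height])
    p2 = np.array([dx, 0.0, height])
    return k * abs(np.linalg.norm(p2 - transducer) - np.linalg.norm(p1 - transducer))

for h in (0.05, 0.20):                       # focus 5 cm vs 20 cm above the array
    print(f"height {h*100:.0f} cm: phase swing {np.degrees(phase_swing(h)):.0f} deg")

# The farther focus needs a much smaller phase change for the same 1 cm move,
# so the transducer drive only hits its limits at higher particle speeds.
```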
Sriram Subramanian, of the Interact Lab at Sussex, says that this could be the case, as the distance between the acoustic traps in their setup is quite small. He adds, however, that the “devil is in the detail” and that he can’t speculate too much until he knows “exactly what [the University of Bristol team] tried and didn’t try, and how exactly they tried it”.
The Sussex researchers are now looking at how they can improve their hardware setup to build large and more complex displays, with faster moving particles. Currently, their acoustic levitation device is centrally controlled, with all the decisions being made on a computer before being sent to the control boards and loudspeakers. “We push the data to a USB port and on to these transducers,” Subramanian explains, “and quickly you start hitting the limits of how much data you can push through one USB port as you start increasing the number of transducers. You need to send amplitude and phase information for each transducer individually at very high speed.”
He sees solutions in making the system wireless – to enable faster data transfer – or less centrally controlled, with more computerization at the control boards and transducers. “So, you don’t send all the information from the PC, but you send some high-level information and each transducer board does the calculation locally,” Subramanian explains.
Subramanian hopes that introducing some intelligence in the transducers will also enable them to produce displays with multiple levitated particles acting as pixels, to create more complex and detailed images. “Our ambition over the next couple of years is to be able to have a talking head that is about the size of a normal human head,” he says (see box below).
Floating singing heads
Floating world A 3D globe created by the University of Sussex display. (Courtesy: Eimontas Jankauskis)
The Interact Lab at the University of Sussex, UK, has just started a project with the Shanghai Academy of Fine Arts in China, with funding from the UK’s Arts and Humanities Research Council. Their aim is to create public installations and displays in which sound is produced through acoustic levitation, with the focus initially on creating museum exhibits. “One of the first things we are trying to do is to create a gallery of talking heads,” says Sriram Subramanian.
He sees these as being heads of famous artists singing – such as a 3D moving image of Freddie Mercury’s face belting out the hits of Queen. And he thinks that the researchers will be able to get to that point in the next six to eight months, although they won’t be life-size singing heads, more likely 20% smaller.
With the current displays, the Sussex team can already produce audible sound. This is done by vibrating the levitating polystyrene bead at audible frequencies, which is possible because the device uses different aspects of the ultrasound signal for levitation and vibration: the phase information is used to create the levitation traps, while amplitude modulation is used to generate audible sound. This, Subramanian says, is a cheap way of producing directional sound: “You get it free”.
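In signal terms, each transducer’s 40 kHz carrier is doing two jobs at once: its phase is fixed by the levitation solver, while its amplitude is slowly modulated at audio frequencies so that the trapped bead itself radiates audible sound. The minimal sketch below illustrates the idea with made-up numbers; it is not the Sussex team’s actual drive scheme.

```python
# Minimal sketch: one transducer's drive signal combining a levitation phase
# with audio amplitude modulation (illustrative numbers only).
import numpy as np

fs = 400e3                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal

f_carrier = 40e3                 # ultrasonic carrier used for levitation
phi = 1.3                        # phase from the levitation solver (arbitrary here)

f_audio = 440.0                  # audio tone to be emitted by the bead
m = 0.5                          # modulation depth

amplitude = 1.0 + m * np.sin(2 * np.pi * f_audio * t)        # slow audio envelope
signal = amplitude * np.sin(2 * np.pi * f_carrier * t + phi) # phase sets the trap
# 'signal' is what would be sent, suitably amplified, to this transducer.
```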
“If you’re walking past these talking heads there would be no sound coming from them except when you stand and look at, say, the face of Freddie Mercury, then maybe he starts singing one of his songs,” Subramanian says. “And there is no separate speaker, it is the sound coming directly from the beads.”
Understanding the trap
Back at the University of Bristol, researchers are now working to develop more accurate models of the dynamics and shapes of the traps in the acoustic levitator. They hope that this will enable them to work out the limits of acoustic levitation and then explore its applications. “Levitating objects and moving them around is something that is useful for things other than acoustophoretic displays, such as holding samples, continuous production lines and all manner of other manufacturing concepts,” Drinkwater says. Manipulating pharmaceutical products, where contamination is an issue, would be a good example, he adds.
Drinkwater says that they have a simple model of the traps, which is not quite right but not far off. The system behaves like a dynamic trap, he explains: when the particle sits at a node in the acoustic field, whichever direction it moves in, the restoring force gets stronger, making the system stable and holding the particle in place. The particle moves with the acoustic trap, but as you move it faster the trap starts to resonate and, due to a lack of damping in the system, it becomes unstable. This is why the circle I saw in Fushimi’s lab loses its smoothness as its frequency is pushed to 10 Hz – the particle vibrates too much (figure 1).
Tom Hill, an expert in nonlinear dynamics at Bristol, says that you can imagine the polystyrene particle as a ball in a bowl. “If you have a ball in a bowl and you are trying to move the ball in a particular path, it is easy if you do it slowly, but as soon as you do it fast it starts sloshing around the bowl,” he explains. “It’s like we’ve got a bowl that is a really complicated shape and actually getting a model of what that shape is, is very, very difficult. Plus, as you move the bowl around it is changing shape and there are other complexities.”
However, the Bristol researchers believe they are nowhere near the inertia limit yet – the amount of force they can apply and the speed at which they can move the particle. Drinkwater says that part of the problem is that the loudspeakers they are using aren’t up to it and that is a big area for potential future research. He says that “the bigger display seems to be one way of sort of getting around that problem” – but adds that you will still eventually hit a limit and go back to “the wobbling around, the bowl problem”.
The aim, Hill says, is to understand the shape of the bowl and how it changes in space. Then you could model exactly how you need to move the bowl to move the particle along the desired path. “The interesting thing with that is it would give you a maximum speed – a fundamental limit to how fast these [levitated particles] can go,” Hill adds.
Drinkwater likens his acoustic traps to an invisible robot arm – they grab things and move them around. And, just like robot arms in factories, his traps could be used to manufacture things, as well as creating images. If he can improve his set-up so that lots of polystyrene balls can be packed into a small space and move quickly and accurately, who knows, perhaps we’ll start seeing the 3D levitating projections of science fiction in real life.
Entanglement – a purely quantum-mechanical effect that allows two or more particles to have a much closer relationship than classical physics permits – can survive high temperatures and chaotic environments. This unexpected finding from researchers at the ICFO in Barcelona, Spain could mean that entanglement-based quantum technologies, which were previously thought to function only in cold, low-noise conditions, may work in “hot and messy” environments too.
Quantum entanglement is the process by which particles such as photons become inextricably linked, such that if one is polarized in a vertical direction, then the other will always be polarized in a horizontal direction. Hence, by measuring the polarization of one photon in the pair, we immediately ascertain the polarization of the other, no matter how far apart they are. Once thought to be a quirky – or even nonsensical – aspect of the quantum world, this “spooky action at a distance”, as Albert Einstein called it, is now being exploited in quantum cryptography and quantum communications systems as well as sensors used to detect gravitational waves.
Entangled states are usually thought of as being extremely fragile. Even the tiniest disturbance (or noise) in their environment can cause entangled particles to “decohere” through random interactions, making the entanglement disappear. Current quantum technologies therefore typically operate at ultracold temperatures, and their designers go to great lengths to keep their quantum systems isolated.
The opposite strategy
The ICFO researchers, led by Morgan Mitchell, have now shown that the opposite strategy – actively promoting random interactions – can help generate and preserve entanglement too. In their experiment, they heated a collection of rubidium-87 (⁸⁷Rb) atoms to 450 K, creating a vapour of hot alkali atoms. They found that individual atoms in this vapour were not isolated but collided with each other every 20 microseconds. Each collision set their electron spins pointing in random directions, producing a magnetization.
Mitchell and colleagues used a laser to monitor this magnetization via a series of measurements that enabled them to detect entanglement between the atoms and study the effect of the atomic collisions. The measurement technique is known as optical quantum non-demolition (QND) because it can measure the electron spins without disturbing them. “If a regular measurement is like a biopsy, in which material is taken and analysed, then a QND measurement can be thought of as like MRI, in which we obtain information without damaging the system,” Mitchell explains.
Singlet-type entangled states
While many different types of collisions between atoms are possible, the most frequent are collisions in which the atoms exchange electron spin. The researchers observed that a huge number of rubidium atoms – at least 1.52 × 10¹³ out of 5.32 × 10¹³ participating atoms – were entangled via these spin-exchange collisions and entered singlet-type entangled states. “These are a curious state of two spins: each spin seems to be completely random, pointing in every direction simultaneously,” explains Mitchell. “Nonetheless, the spins always point in exactly opposite directions. It is thus a state that is completely coordinated while also being totally random.”
In the QND measurement, the contribution from a singlet is zero, he adds. “Since they point in opposite directions, what is contributed by one spin is exactly cancelled by the other spin. So, when we see the optical QND signal becoming very ‘quiet’, we know that many singlets have formed.”
And that was not all. As the ICFO team describe in their paper, which is published in Nature Communications, they also observed that the entanglement was non-local, meaning that it involves atoms that are not close to each other. Between any two entangled atoms there are thousands of other atoms, many of which are entangled with still other atoms, in a giant, hot and messy entangled state, they say.
Sensing technologies could benefit
When the measurement is stopped, the entanglement persists for about a millisecond, explains study first author Jia Kong. This means that a new batch of 15 trillion atoms is being entangled 1000 times per second. A millisecond is also long enough for each atom to undergo about 50 random collisions, clearly showing that the entanglement is not destroyed by these random events. “This is maybe the most surprising result of our work,” she says.
The findings will be important for sensing technologies based on hot, dense clouds of atoms. One such technology, known as vapour-phase spin-exchange-relaxation-free (SERF) media, operates at 450 K and is used in applications as diverse as magnetometers that can detect magnetic signals from the brain and instruments that look for signs of dark matter and physics beyond the Standard Model. “Whether or not entanglement could survive at these hot temperatures was an open question until now, but our experiments show that it indeed can,” Kong tells Physics World.
The graphical interface of the mobile-phone-based application shows infected and uninfected samples being diagnosed. (Courtesy: CC BY 4.0/Diagnostics 10.3390/diagnostics10050329).
A team of six engineers from North South University in Bangladesh has developed a program that can automatically diagnose malaria on a smartphone from a segmented blood-smear image. This approach has the potential to overcome the need for expensive equipment and highly trained personnel for malaria diagnosis in resource-limited settings. If patient samples – so-called blood smears – are imaged using a mobile phone and a microscope, the app can analyse the images for the presence of malaria parasites.
Writing in the journal Diagnostics, the developers explain that “the model can work independently in the mobile app without needing Internet connection and can help an individual without any technical expertise to detect malaria parasites from the blood smear”.
The research team developed 10 computational models and evaluated their computational requirements, as well as their accuracy, precision, sensitivity and specificity. Some of the models were trained using an autoencoder – a type of artificial neural network that learns, in an unsupervised manner, to detect patterns while disregarding noise. These autoencoder-trained models were the smallest, at just 73 KB (smaller than WhatsApp). Such computational efficiency enables automatic diagnosis to be performed on low-cost smartphones.
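The article does not spell out the team’s architecture, but the general recipe – pre-train a small convolutional autoencoder to compress cell images, then attach a tiny classifier to its encoder – can be sketched in a few lines of Keras. All layer sizes, the image size and the variable names below are assumptions for illustration, not the published model.

```python
# Illustrative sketch of an autoencoder-pretrained malaria classifier (Keras).
# Layer sizes and names are assumptions, not the published architecture.
from tensorflow.keras import layers, models

inp = layers.Input(shape=(32, 32, 3))                       # small cell image

# Encoder: compress the image into a compact feature map
x = layers.Conv2D(8, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(4, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D()(x)

# Decoder: reconstruct the image (used only for unsupervised pretraining)
x = layers.Conv2D(4, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D()(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D()(x)
decoded = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(cell_images, cell_images, ...)   # unsupervised pretraining step

# Classifier head on the encoder: infected vs uninfected
features = layers.Flatten()(encoded)
out = layers.Dense(1, activation="sigmoid")(features)
classifier = models.Model(inp, out)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```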
The engineers who developed the software (left to right): Faizullah Fuhad, Jannat Ferdousey Tuba, Rabiul Ali Sarker, Sifat Momen, Nabeel Mohammed and Tanzilur Rahman.
From computationally heavy to lightweight
In 2019 engineers in the US developed a neural network that could achieve automatic malaria diagnosis. However, their model was too computationally intensive to work on smartphones or in a web browser. The new model by Faizullah Fuhad and his colleagues requires over four million times less processing capacity than the previous model, while maintaining very high classification accuracy. The team from Bangladesh demonstrated its functionality both offline on mobile phones and online in a web application.
The researchers used a public dataset containing 27,558 images of red blood cells from 150 infected and 50 healthy patients to train the model. The images were taken by placing a smartphone on a conventional light microscope (available from approximately £100). Afterwards, the developers tested the model on a different public dataset, confirming the model’s robust performance.
The pros and cons of diagnosis methods
Currently, malaria can be diagnosed from clinical symptoms such as fever, or by polymerase chain reaction (PCR), rapid diagnostic test (RDT) or microscopy. As clinical diagnosis and PCR require laboratory settings, the other two methods, RDT and microscopy, are most commonly used for malaria diagnosis today.
RDT is a powerful tool, using a test strip similar to a pregnancy test. However, this method has some shortcomings compared with microscopy. It is less sensitive, more expensive and affected by heat and humidity. Also, RDT can neither quantify parasite density nor identify the species of parasite causing the infection. Microscopy is therefore the best available technique, but unfortunately requires highly trained personnel. The new smartphone app developed by Fuhad and colleagues should overcome this limitation.
A technique that reproduces the conditions of the Earth’s mantle at a depth of more than 2000 km could help researchers simulate our planet’s earliest days, when magma covered its surface. The technique, which combines laser-driven shock experiments with X-ray free-electron laser measurements, provides nanosecond-resolution information on the transformations that occur in silicate materials at ultrahigh pressures and temperatures. The work adds to our understanding of the present-day core-mantle boundary and may even shed light on conditions inside “super-Earths” – rocky exoplanets similar to Earth but larger in size.
Terrestrial planets like Earth have silicate-based mantles and iron-rich cores. This structure is thought to be the result of various material-differentiation processes that took place at an early stage of the planet’s development, including radiogenic decay of short-lived nuclides and numerous shock events. Together, these processes created temperatures high enough to sustain a planet-wide ocean of magma.
To better understand what happened during this epoch in the history of Earth and other rocky planets, researchers need to analyse the physical properties of liquid silicates under similar conditions. Such studies could help determine the composition and origin of the molten or partially molten domains of liquid silicates that exist in the Earth’s upper mantle today, and possibly also at the boundary between the mantle and the core – a vestige of those primordial days.
Doing away with extreme-condition apparatus
Temperatures as high as 6000 K and pressures of more than 100 GPa are, however, difficult to create in the laboratory. For this reason, researchers at Sorbonne University and the University of Grenoble-Alpes in France, together with colleagues at the US Department of Energy’s SLAC National Accelerator Laboratory, developed an alternative method that eliminates the need for ultrahigh-pressure/extreme temperature apparatus. The new technique involves first sending a shockwave through an amorphous magnesium silicate sample using an optical laser. This step, performed at SLAC’s Linac Coherent Light Source (LCLS) X-ray free-electron laser (XFEL), compresses the sample to pressures of up to 130 GPa and heats it to temperatures of 6000 K. The silicate glass thus transforms into a liquid.
Next, they bombarded the sample with femtosecond X-ray pulses from the LCLS at the precise moment when the shockwave reached the desired pressure and temperature. These X-rays produced two precise diffraction peaks as they scattered off the sample, enabling the researchers to monitor how the atoms in the sample rearranged themselves at such high pressures and temperatures. The resulting spectral fingerprint is related to the transition from four-fold to six-fold coordination of oxygen atoms around the silicon atoms.
Aside from this atomic rearrangement, the researchers saw no other major structural changes in the silicate melts at pressures of up to 130 GPa – a finding that should be important for better modelling these materials under the conditions present deep inside the Earth, they say.
The team backed up their results with measurements previously obtained in conventional diamond anvil analyses – in which a solid silicate sample is literally crushed to high pressures at room temperature – and molecular dynamics simulations.
Recreating Earth’s early days
“Through our experiments, we have been able to probe geophysical materials at the extremely high temperatures and pressures found deep inside the Earth, to characterize their liquid structure and learn how they behave,” explains study lead author Guillaume Morard. “These studies will allow us to recreate Earth’s early days and understand the processes that shaped our planet.”
The researchers now plan to repeat their experiments at higher X-ray energies. This should enable them to more precisely measure the way in which the atoms rearrange in the liquid silicates. They also hope to try out higher pressures and temperatures. “These latter studies will be important for better understanding how silicate liquids and glasses behave in super-Earth planets,” says Morard.
A new imaging technique has allowed researchers in the UK to create a 3D map that charts the flow of blood through a living zebrafish. Andrew Harvey and colleagues at the University of Glasgow used an optical setup that produces pairs of “Airy beams” corresponding to individual microscopic beads flowing in a fish’s blood. Their approach could lead to new and better ways to explore the characteristics of microscopic biological systems.
The spatial resolution of a conventional optical microscope is about 300 nm, roughly half the wavelength of visible light. While smaller structures can be observed, their spatial features are blurred. In simple 2D systems, the blurring can be compensated for by fitting the microscope’s “point spread function” (PSF) to each blurred spot and locating its centre. This reduces a blurred object to a single point of light whose position is known to within about 10 nm – which is important when tracking single, fluorescently labelled molecules in biological systems.
PSFs can also be used in simple 3D systems, since their shapes can indicate an object’s depth, or axial distance relative to the imaging apparatus. However, the technique becomes far less effective when imaging 3D groups of objects, which often produce overlapping PSFs that are far more difficult to analyse.
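The standard way to extract a sub-wavelength position from a blurred spot is to fit a model PSF – often approximated by a 2D Gaussian – to the measured intensity and take the fitted centre. The generic sketch below illustrates that idea with scipy on simulated data; it is not the Glasgow group’s analysis pipeline.

```python
# Illustrative 2D Gaussian fit to localize a blurred spot to sub-pixel precision.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amplitude, offset):
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

# Simulate a noisy image of a point emitter blurred by the PSF
yy, xx = np.mgrid[0:32, 0:32]
true_centre = (15.3, 16.8)
image = gaussian2d((xx, yy), *true_centre, 3.0, 100.0, 10.0)
image += np.random.normal(0, 2.0, image.shape)

# Fit the model and read off the centre
p0 = (16, 16, 3, 80, 5)                                  # rough initial guess
popt, _ = curve_fit(gaussian2d, (xx.ravel(), yy.ravel()),
                    image.ravel(), p0=p0)
print(f"fitted centre: ({popt[0]:.2f}, {popt[1]:.2f})")  # close to (15.3, 16.8)
```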
Curved light
Harvey and colleagues have been developing a better 3D method that uses imaging optics to transform the PSF into an “Airy beam” – a waveform that does not spread out as it propagates and appears to curve as it travels. The shape of the Airy beam depends on the axial distance to the object, allowing the object’s depth to be determined. While this technique is effective, it can be difficult to implement.
In their latest research, the team introduces an even more effective optical setup that is easier to calibrate. Their system converts the PSF into a twin Airy beam that appears as two spots on either side of the object. The separation between the spots increases with axial distance, so measuring the separation gives the depth of the object. This approach enabled the researchers to locate fluorescent nanocrystal beads to within 30 nm across an axial range of more than 7 µm.
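In practice, a separation-to-depth relationship like this is typically established by calibration: beads are stepped through known axial positions, the twin-spot separation is recorded at each step, and a smooth fit to the resulting curve is inverted to convert measured separations into depths. The sketch below shows such a calibration generically, using invented numbers rather than the Glasgow group’s data.

```python
# Illustrative separation-to-depth calibration for a twin-spot PSF.
# The calibration data here are invented for the example.
import numpy as np

# Calibration: known axial positions (um) and measured spot separations (pixels)
z_known = np.linspace(0.0, 7.0, 15)
separation = 4.0 + 3.2 * z_known + np.random.normal(0, 0.05, z_known.size)

# Fit a simple polynomial mapping separation -> depth, then invert a measurement
coeffs = np.polyfit(separation, z_known, deg=2)
depth_of = np.poly1d(coeffs)

measured_separation = 14.7
print(f"estimated depth: {depth_of(measured_separation):.2f} um")   # ~3.3 um
```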
As a proof of concept, they used the technique to observe the motions of 1 µm fluorescent beads injected into the blood of living zebrafish – tracking their twin Airy beam PSFs at 26 frames per second. With high enough bead densities, Harvey and colleagues could clearly map out the 3D shapes of the fish’s arteries over a depth range of 0.1 mm.
The technique could soon offer significant new opportunities for optically imaging microscopic structures, including living systems on a cellular level. Harvey’s team describes how nanobeads could be made to fluoresce in the presence of oxygen or acidic conditions; or loaded into soft gels that are deformed by growing cells, potentially yielding new insights into the mechanical forces they exert.
Single-layer cell cultures are widely used as an alternative to animal models for investigating the effects of drugs on the brain. But the advantages that such 2D models have in terms of simplicity and accessibility are balanced by some significant shortcomings. Neural networks in 2D cultures respond differently to stimuli compared with those in the 3D physiological structures that they aim to emulate. Cells in 2D cultures also remain viable for just a short time, as the nutrient medium cannot penetrate to the culture’s interior.
To address these problems, researchers in China and the US have demonstrated a 3D tissue construct that sustains a population of neuronal cells for four weeks. The team used a cell-laden bioink to print a stack of three grid-shaped layers on an array of electrodes. Spaces between and within each layer allowed nutrients to reach the cells, while the electrode array measured the cells’ electrophysiological signals. Such constructs reproduce in vivo neural circuits more accurately and over much longer periods than 2D cell cultures, making them a better model for drug testing and studying the dynamics of neural networks.
To fabricate the 3D model, Yu Song, Ting Zhang and colleagues, at the Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing at Tsinghua University, suspended rat neuronal cells in a solution of gelatin, alginate and fibrinogen. With the right combination of nozzle diameter and flow rate, they found that 85% of cells in the bioink survived the printing process. The researchers also varied the proportion of bioink components so that the structure that they built – a stack of three square grids, each 0.5 mm thick and 8 mm across – had mechanical properties similar to those of living brain tissue.
In 3D models, primary cells (collected from rats) are mixed with biocompatible materials to form bioink, and then printed on a petri dish for imaging or a 4×4 electrode array for electrophysiological recording. 2D samples are used as controls. (Courtesy: Biofabrication 10.1088/1758-5090/ab7d76)
A week after printing, nearly all of the cells in the structure were still alive and had started to extend neurites to form a network. A 2D control sample at the same point in time had already lost more than a quarter of its cells. The rate of cell death in the 2D culture continued steadily until the end of the experiment four weeks after printing, at which point less than 25% of its cells were still alive. In the 3D structure, in contrast, more than three-quarters of cells survived to the final measurement.
After cultivating the cells for four weeks, Song and colleagues quantified the neural activity in the two cultures by measuring excitatory postsynaptic potentials (EPSPs). They triggered and detected these electrical signals using a 4 × 4 array of electrodes that lay under each culture. The amplitude of the EPSPs in the 3D structure was larger than in the 2D culture, which the researchers attribute to the printed construct’s greater proportion of surviving cells and their better connectivity.
These EPSPs dropped to zero when the team perfused the 3D construct with tetrodotoxin (TTX), a neurotoxin that inhibits the transmission of sodium ions across cell membranes. The speed with which TTX spread through the culture and shut down the cells’ activity showed that the construct is highly sensitive to the effects of such substances, indicating its suitability as a model for drug-screening applications.
“Our ongoing work includes studying how neuro drugs with different molecular structures diffuse at different speeds in our printed models, and administering electrophysiological stimulation and/or pentylenetetrazole to our models to study epilepsy,” says Wei Sun, director of the Biomanufacturing Center at Tsinghua University. “We are also interested in printing stem cells to build a brain-like model to study neurodevelopment, and integrating the printed brain-like models with microfluidics to build a ‘brain-on-a-chip’ device.”
Full details of the work are reported in Biofabrication.
I got some strong reactions to my recent column about how to reduce carbon emissions from air travel. Don’t be so naïve, I was told: people will never stop flying for business – they don’t want to go on a “flight diet”. What a difference a few months makes. The entire global aviation industry has almost ground to a halt due to a virus (SARS-CoV-2) that emerged in China and isn’t especially deadly (in terms of the overall percentage death rate).
Now I don’t wish to make light of the tragic deaths of loved ones or the job losses and economic damage incurred by global efforts to contain this exponentially spreading virus. COVID-19 is indeed a global tragedy on many levels. But could there be a lesson from it in terms of how we deal with climate change? Could the virus be the event that shifts our habits on driving and flying forever?
Could the virus be the event that shifts our habits on driving and flying forever?
The last major killer pandemic – Spanish flu – led to the death of about 50 million people, or just under 3% of the world’s population at the time. However, it didn’t greatly affect the world economy, which was already on its knees after the First World War. The same was true of the 2009 swine flu pandemic: about 284,000 people died (compared with 250,000–500,000 deaths from seasonal flu annually) but, apart from a few travel advisories, it was mostly “business as usual”.
So why is COVID-19 so different and such a threat? The trouble is, it’s a totally new virus and we have no immunity to it. What’s worse is that you can be contagious before you show any symptoms. Critically ill people, meanwhile, need lots of doctors, nurses and equipment to be treated. No health service could cope if the virus were allowed to “let rip”, which is why most governments have decided to stop everyone from moving about and interacting.
But the economic impact of that decision has been huge. Many businesses are starved of sales and cannot last long without cash coming in. In many countries, companies are being supported by the state, but no government can hold back the tide for ever. Hardly surprising, then, that by 29 April the US S&P stock-market index of 500 top firms was 13.2% below its February peak. US unemployment was last month nearing 15%, while oil prices fell to a 20-year low and global production began to be cut as demand was expected to slump further.
Until the world finds a vaccine or we all build up enough “herd immunity”, things won’t return to how they were for years, if ever
No-one knows how long this situation will last so we can’t yet say what its ultimate impact will be. But until the world finds a vaccine (and we’re all inoculated) or we all build up enough “herd immunity” not to spread this virus to the vulnerable, things won’t return to how they were for years, if ever. So we’d better adapt and embrace the amazing power of communication and connectivity that we have at our disposal.
E-mail, video conferences, virtual meetings, Internet speeds – wow, with this tech and a bit of planning you can be really productive. Plus, they remove all the “dead time” that you’d waste travelling. We all knew that these tools existed, but was it habit or the expectations of others that stopped us from using them more? And now that lots of other people aren’t stuck in traffic jams or on trains either, they’re more available too.
Since the lockdown began, I’ve discovered that this communication technology works really well – the UK’s investment in broadband and mobile data has really paid off. My kids are managing school and lectures online too, which would have been impossible 10 years ago. And even if I paid the annual subscription costs for the “premium” versions, it would still cost less than one train or car trip into London.
Sure, I miss going out with colleagues and new acquaintances, but social mixing will come back in some form
Sure, I miss going out with colleagues and new acquaintances, but social mixing will come back in some form. I can, however, feel a new set of habits developing, especially as the lack of travel means that I have more time to do the things I want. And what a joy not to have to wear all those boring old work shoes, shirts and coats.
According to the UK Office for National Statistics, only 5% of the UK labour force worked mainly from home in 2019. I predict that this figure will rise as employers and employees drool over the prospect of reduced office costs, improved staff retention, a better work–life balance for staff and a wider talent pool to recruit from. Not all work can be done online but a surprising amount can be.
What’s more, the lockdown has made the air cleaner. Pollution levels in New York have almost halved since this time last year. Emissions in China fell by 25% at the start of the year, while coal use dropped by 40% at its six biggest power plants. The slowdown is sure to cut transport emissions, which made up 24% of global carbon emissions in 2016. With fewer vehicles on the road, I can now even hear the birds singing in my garden.
Yes, there will be winners and losers. In doing more online, businesses will save money on offices and travel costs. The big losers will be aircraft manufacturers, airlines, hotels, conference venues, commercial office-space providers and high-street shops. Pickpockets will also struggle, though I can live with that.
Will COVID-19 merely accelerate existing trends that would – and should – have happened anyway?
But for the survivors, will COVID-19 merely accelerate existing trends that would – and should – have happened anyway? Governments that are propping up their national economies will find that they can now consider much stronger action on climate change. After all, it’s easier to push change if you’re paying for it. And if they’re smart enough, most of those governments will not just consider those changes, they’ll implement them too.
The only question is whether, once the economy has recovered, we will return to our bad old habits.
Last week I was enthusing about how lidar has been used to discover a huge Mayan structure in Mexico – and this week, ground-penetrating radar (GPR) takes the spotlight in the Red Folder. The technique has been used by archaeologists at the University of Cambridge and Ghent University to map a complete Roman city that is still buried underground. Located near Rome, Falerii Novi was first occupied in 241 BC and was populated for over 900 years.
The extensive measurements were taken by Ghent’s Lieven Verdonck as part of an ongoing project to improve GPR technology. The city stretches over 30.5 ha and Verdonck took a GPR reading every 12.5 cm across the entire site. Vast amounts of data were collected, and it could be some time before it is all analysed. Writing in the journal Antiquity, the team says the improved technique “has the potential to revolutionize archaeological studies of urban sites”.
If Verdonck and colleagues ever want to excavate Falerii Novi, they might consider using a “mole-bot” that has been optimized for underground and space exploration. Created by researchers at the Korea Advanced Institute of Science and Technology, the mole-bot is a drilling biomimetic robot inspired by the African mole-rat and European mole. The mole-bot is 25 cm wide, 84 cm long and weighs 26 kg. The team says the robot is three times faster – and has six times higher directional accuracy – than conventional boring machines. You can watch it in action in the video above.
And if you prefer your robots to remain above ground, researchers in France have created a cable-driven robot to track and film flying insects.