Two microelectrode arrays in the “hand area” of the brain measure neural activity. A recurrent neural network (RNN) then converts the signals into probabilities for each character. These probabilities are either thresholded to provide real-time output or combined with a language model, which acts like an autocorrect. (Courtesy: Nature 10.1038/s41586-021-03506-2)
Locked-in syndrome, a neurological disorder that causes complete paralysis of nearly all voluntary muscles, leaves an estimated one in 100,000 people unable to communicate naturally. However, using tools that record and decode brain activity, together with specially designed software, researchers have enabled paralysed participants to communicate once again.
One method of computer-mediated communication uses flashing “mind spellers”, in which the participant looks at a screen with a keyboard of letters flashing at different speeds. Focusing on a given letter generates a neural response in the visual cortex that corresponds to the flashing frequency, and by measuring this response using electroencephalography, researchers can determine the target letter.
Alternatively, “point-and-click” brain–computer interfaces (BCIs) use implanted intracortical electrodes that measure neural activity to visualize and control the movements of a cursor over an on-screen keyboard.
To date, these two methods have reached communication speeds of 60 and 40 characters per minute, respectively. Now, researchers at Stanford University have designed a novel BCI, based upon the imagined act of handwriting, that has achieved an unmatched 90 characters per minute. They describe this new approach in a paper published in Nature.
Decoding the neural signals of handwriting
The researchers tested their BCI in a single participant selected from the BrainGate study. The participant, known as T5, was paralysed from the neck down following a spinal cord injury, leaving him with no functional hand movement. During five experimental sessions, the team recorded T5’s neural activity using two microelectrode arrays, which were surgically implanted before the study into the area of his brain corresponding to hand control. T5 was instructed to attempt to write as if his hand were not paralysed, imagining holding a pen over a piece of paper and writing characters on top of one another.
The team employed principal component analysis (PCA), a method of reducing the dimensionality of large data sets, to visualize the neural activity recorded while “writing” single characters, by displaying the top three neural dimensions with the greatest variance in the signal. The signals proved to be repeatable over trials but had variations in timing, thought to be due to different handwriting speeds, which had to be corrected for.
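The dimensionality reduction step can be sketched in a few lines of NumPy. The array sizes below (27 characters, 100 time steps, 192 electrode channels) are hypothetical stand-ins for the recordings, not the study’s actual dimensions:

```python
import numpy as np

# Hypothetical neural recordings: trials x time steps x electrode channels
rng = np.random.default_rng(0)
trials = rng.normal(size=(27, 100, 192))

# Pool time steps across trials, centre the data, then use the SVD to
# project onto the three directions with the greatest variance (PCA)
X = trials.reshape(-1, 192)
X -= X.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
top3 = X @ vt[:3].T    # each row: activity in the top three neural dimensions

print(top3.shape)      # (2700, 3)
```

Plotting the rows of `top3` trial-by-trial is what reveals the repeatable, character-specific trajectories described above.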
To reveal the handwritten characters, the researchers averaged across trials of the same character to decode the pen-tip velocity, producing distinctive and legible letters. To decode whole sentences, they trained a recurrent neural network (RNN) using thousands of characters written over multiple days. This network then converted the neural activity into probabilities describing the likelihood that the participant was writing a particular character at each moment.
The interpreted handwritten characters from the decoded neural signals. (Courtesy: F Willett)
The researchers investigated three methods to analyse the measured signals. First, they used thresholding of the probabilities to expose single characters, allowing for real-time on-screen feedback and resulting in an error rate of 5.4%. Second, they used a language model as an autocorrect feature, which decreased the error rate to only 0.89%. Following these experiments, they trained a new RNN on all the available data. This led to extremely high accuracy, with a low error rate of 0.17%, demonstrating the efficacy of the method and the high-performance ceiling that is attainable.
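The thresholding idea can be illustrated with a minimal sketch (not the authors’ actual decoder): emit a character whenever one class’s probability crosses a threshold, then wait for the probability to fall again before re-arming, so a sustained peak yields a single character:

```python
import numpy as np

def threshold_decode(probs, labels, threshold=0.8):
    """Emit a character each time one class's probability crosses the
    threshold; re-arm only after the winning probability drops again.
    Illustrative sketch only."""
    out, armed = [], True
    for p in probs:
        best = int(np.argmax(p))
        if armed and p[best] >= threshold:
            out.append(labels[best])
            armed = False
        elif p[best] < threshold:
            armed = True
    return "".join(out)

# Toy probability stream over two characters, "a" and "b"
stream = np.array([[0.9, 0.1], [0.95, 0.05], [0.3, 0.7], [0.1, 0.9]])
print(threshold_decode(stream, "ab"))   # ab
```

A language model would then rescore the emitted character stream, which is how the error rate drops from 5.4% to below 1%.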
Why is this technique more effective?
This novel BCI method has more than doubled the speed of communication, while maintaining accuracy similar to that of “point-and-click” BCIs, which employ similar neural recording methods. The researchers theorized that this rate increase is due to the variety in the spatial and temporal patterns of neural activity during attempted handwriting, compared with straight-line pointing movements. They tested this hypothesis with a model and confirmed that the variety of the temporal patterns (different movement speeds) increases the separability of each movement, enabling a faster BCI with sufficient accuracy.
BCIs based on movements such as “mental handwriting” also have notable advantages over visual-based interfaces, giving the participant the freedom to look around and communicate at their own pace. This system, and the results from participant T5, provide a proof-of-concept that a higher pace of communication is possible. While notable challenges remain, such as how this system works across different participants and lengths of time, this study demonstrates an important step-change in the development of BCIs for real-time communication.
A group of “ethical hackers” has obtained access to sensitive systems and proprietary online data hosted by the Fermi National Accelerator Laboratory in the US after accessing multiple unsecured entry points in late April and early May. The group – Sakura Samurai – discovered configuration data for the lab’s NOvA experiment and more than 4500 “tickets” for tracking internal projects.
The Sakura Samurai team has previous experience probing the vulnerabilities of scientific and educational organizations, which hold critical information that if leaked could put those institutions at risk. “Fermilab was no different,” Sakura Samurai member Robert Willis told Physics World. “Oversharing can be very dangerous, especially when it’s sharing credentials that could enable a malicious actor to take over a server with the potential to move across their network to access items that the organization wouldn’t even think of being vulnerable.”
Providing the wrong sensitive information can put not just one asset, but everything, at risk
Robert Willis
The hacking team targeted Fermilab because of the lab’s openness and size. The hack was performed with Fermilab management’s knowledge, so that critical information could be “locked down” beforehand. “[Fermilab] seemed interesting as it has a vulnerability disclosure programme and is also a physics lab with lots of machinery and a half-billion-dollar grant,” adds Willis. “That would make it very attractive to a threat actor looking to ransomware their assets to hold them hostage.” Indeed, the hacking team found its effort time-consuming owing to Fermilab’s basic openness. “Some findings were without a doubt critical and didn’t need verification from Fermilab. But other findings relied on communications with Fermilab to verify,” Willis says.
Nevertheless, the ethical hacking group found the hack to be relatively simple, with many of the findings emerging from manual methods and basic tools that allowed them to navigate the file structure and find open ports and services. “We may very well have saved Fermilab from a future ransomware attack, considering a set of credentials would have given us the proper access to infect a server, and go from there,” says Willis, who adds that once lab managers were informed of the security issues they responded quickly. “The lab handled the situation very well and fast,” says Willis. “From initial contact to their internal verification and remediations, the entire process was under two weeks.”
Culture of sharing
Fermilab spokesperson Tracy Marc notes that the lab “takes all reports of cybersecurity vulnerabilities seriously, and we are continuing to review the matter”. She denies any concern that experiments could be vulnerable to unethical hacking that could change results, because, she says, their data are “made available through controlled authorization and access methods”.
Willis claims that many of the hacks on large organizations happen because of a lack of understanding of what hackers can do. That can be problematic for managers of organizations like Fermilab that have a culture of sharing. “Treat all publicly accessible information as if someone wants to do something malicious with it,” says Willis. “Providing the wrong sensitive information can put not just one asset, but everything, at risk.”
A study of how water is forced into the nanoscopic pores of materials has led to the design of highly efficient energy absorbers that could find a range of applications including body armour.
The work was done by researchers in the UK and Belgium, who have also developed a new set of guidelines for designing reusable, tailorable and highly efficient energy absorbers.
Materials that are good at absorbing energy during mechanical impacts have a wide range of applications, from military body armour to industrial vibration damping. In state-of-the-art systems, these materials can rely on a range of processes, including plastic deformation, the buckling of rigid hollow cells and dissipation through viscoelastic polymers. However, these approaches often face a trade-off between reusability and energy absorption efficiency.
A promising route forward lies in the pressurized intrusion of liquid water into hydrophobic, nanoporous materials – including metal-organic frameworks (MOFs). As the pressure generated by a mechanical impact forces water into the hydrophobic nanopores, the liquid breaks up into nanoscale clusters. Due to the vast surface areas of these materials, this is an energy-intensive process, as the water’s mechanical energy is rapidly converted into surface energy as it enters the pores. After an impact, the water is forced back out of the material, which is then ready to use again.
ZIFs and MOFs
This latest research was led by Yueting Sun at the University of Birmingham and explored how this process occurs within zeolitic imidazolate frameworks (ZIFs). These are MOFs that contain transition metal ions arranged in tetrahedral patterns that are bridged by organic ligands. So far, ZIFs with hundreds of possible molecular arrangements have been characterized. Some contain nanoscopic cages of atoms, while others contain arrays of straight channels.
To harness the full potential of ZIFs, the researchers used a combination of experiments and molecular simulations to recreate the intrusion and extrusion of water following realistic, high-energy impacts. Their observations revealed that within ZIFs containing nanocages, energy absorption efficiency improves substantially at the higher strain rates produced by faster water intrusion. This advantage was not observed in ZIFs containing channels.
The team found that this behaviour occurs because clusters of water molecules have an extremely tight window of time – just a few nanoseconds – over which they can form within the hydrophobic nanocages. As a result, both water intrusion pressure and energy absorption density are driven up by the high strain rates caused by high-energy impacts – something that does not occur when water clusters form along channels. From these insights, Sun and colleagues set out new guidelines for designing effective and reusable energy-absorbing devices, which can be precisely tailored to specific applications.
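As a rough illustration of why intrusion absorbs so much energy, the energy absorbed per gram can be estimated as the intrusion pressure multiplied by the intruded pore volume. The plateau pressure and pore volume below are hypothetical figures, not values from the study:

```python
def absorbed_energy_density(intrusion_pressure_mpa, pore_volume_cm3_per_g):
    """Idealized energy absorbed per gram of material, E ~ P * dV,
    assuming a flat intrusion plateau. Since 1 MPa x 1 cm^3 = 1 J,
    the product of MPa and cm^3/g comes out directly in J/g."""
    return intrusion_pressure_mpa * pore_volume_cm3_per_g

# Hypothetical water/ZIF system: 30 MPa plateau pressure and
# 0.5 cm^3 of accessible pore volume per gram
print(absorbed_energy_density(30, 0.5))   # 15.0 (J/g)
```

Because the water is expelled after the impact, this stored energy is released reversibly and the absorber can be reused, unlike a plastically deformed crash structure.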
The two 110 kg combat robots squared off. One, known as Poison Arrow, was armed with a toothed spinning drum. Its adversary, Son of Wyachi (SOW), had whirling hammers. Poison Arrow smashed into SOW, sending it flying across the arena. SOW broke its radio receiver as it crash-landed, lying motionless as the referee declared a knockout.
The action took place in 2016 in BattleBots – a US “robot-combat” TV series aired by ABC in 2015–2016, and then by the Discovery Channel since 2018. BattleBots is inspired by the original Robot Wars events held in the US in the 1990s; these events also inspired the famed British TV series Robot Wars. Dubbed “the ultimate robot-fighting competition”, BattleBots features fights to the finish between remote-controlled “bots” that employ an array of destructive weapons.
Roared on by a crowd, the robots compete in physical bouts, divided – just as in boxing or wrestling – into different weight classes. While the sport at large features everything from fairyweight to heavyweight contests, BattleBots only involves heavyweight fighters. The 2016 scrap between Poison Arrow and SOW has been judged one of the most dramatic knockouts in BattleBots history.
Many colleges and universities in the US now have teams that compete in the smaller weight-class events – not just because these are held more often and in more places but also because they work well as teaching projects, extracurricular activities and an entry into the sport. BattleBots, though, has an international appeal, with notable British robots to have appeared on the show including Quantum, Bet and Monsoon (developed by engineers in Birmingham, Surrey and Bedfordshire, respectively).
How do you balance spinning a weapon and maintaining stability while driving?
Casey Kuhns, Ursa Major Technologies
One of Poison Arrow’s designers is Casey Kuhns, an avionics engineer at Ursa Major Technologies in Colorado who originally studied physics. As he explained to me over the phone, there’s much more to BattleBots than just spectacular crashes. Designing robots, he believes, is a terrific hook for engaging students and teaching them physics. “We all study the pool ball thing,” he says, referring to the way students first meet Newtonian mechanics through colliding pool balls, “but here’s a chance to study larger applications.”
Awed spectators at the event told Kuhns that they thought SOW had flown up to 5 m high before crashing. To check whether the audience was right, Poison Arrow’s team members analysed their robot’s fight frame by frame, showing that its opponent had spent 1.1 seconds aloft. Using elementary Newtonian physics, they then calculated its trajectory – finding that however awesome the crash looked, their robot had kicked its adversary 5 m horizontally but only 1.5 m vertically.
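The calculation is elementary projectile motion: a body aloft for time t (landing at the height it was launched from) spends t/2 rising, so its peak height is h = ½g(t/2)², while its horizontal speed is simply distance over time. A quick sketch with the article’s numbers:

```python
G = 9.81  # gravitational acceleration, m/s^2

def peak_height(time_aloft_s):
    """Peak height for a projectile that lands at its launch height:
    it rises for t/2 seconds, so h = (1/2) * g * (t/2)^2."""
    t_up = time_aloft_s / 2
    return 0.5 * G * t_up**2

def average_horizontal_speed(distance_m, time_aloft_s):
    """Horizontal motion is unaffected by gravity: v = d / t."""
    return distance_m / time_aloft_s

print(round(peak_height(1.1), 2))                    # 1.48 (i.e. ~1.5 m)
print(round(average_horizontal_speed(5.0, 1.1), 1))  # 4.5 (m/s)
```

The 1.1 s flight time thus pins the peak at about 1.5 m, matching the team’s frame-by-frame analysis rather than the spectators’ 5 m estimate.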
Poison Arrow’s creators had, in other words, turned a dramatic encounter between two robots on TV into a “teaching moment” that illustrates simple physics in a fun and accessible way. They also showed how to calculate the energy transfer of the collision, and pointed out that the reason SOW hadn’t tumbled in flight but flew flat like a drone was the gyroscopic stability provided by its spinning hammer.
One key physics principle in combat robotics is rotational inertia. Or, as Kuhns puts it: “How do you balance spinning a weapon and maintaining stability while driving?” Weapon bearings are another issue. Poison Arrow’s drum weighs 32 kg and spins at 9000 revolutions per minute – so how much force, Kuhns asks, can you apply to its bearings before they fail? Electronics plays its part too. “You’ve got high-powered motors in a small setting, lots of emf, and you have to think carefully about what sensors you use and how to route things.”
Poison Arrow’s other team members, who included Zach Goff, director of engineering at L&L Fabrication in Colorado, even developed a calculator to show how long it takes a spinning weapon to ramp up to its maximum allowable tip speed of 250 miles per hour, and to determine kinetic energy, energy draw and other properties. Kuhns and Goff still haven’t figured out how to estimate the aerodynamic drag of a flywheel – which appears to make their calculations underestimate the measured values at higher speeds – and would welcome assistance from Physics World readers.
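A rough version of such a calculator is easy to sketch. The 0.1 m drum radius, solid-cylinder inertia and 10 kW motor power below are illustrative assumptions, not Poison Arrow’s actual specifications:

```python
import math

MPH_PER_MS = 2.23694   # 1 m/s expressed in mph

def tip_speed(radius_m, rpm):
    """Tip speed of a spinning drum, v = omega * r."""
    omega = rpm * 2 * math.pi / 60   # rad/s
    return omega * radius_m

def drum_energy(mass_kg, radius_m, rpm):
    """Kinetic energy E = (1/2) I omega^2, modelling the drum as a
    solid cylinder about its axis (I = m r^2 / 2)."""
    omega = rpm * 2 * math.pi / 60
    inertia = 0.5 * mass_kg * radius_m**2
    return 0.5 * inertia * omega**2

def spin_up_time(energy_j, motor_power_w):
    """Idealized ramp-up time, ignoring drag and motor losses."""
    return energy_j / motor_power_w

# Poison Arrow's drum: 32 kg at 9000 rpm; radius and power assumed
print(round(tip_speed(0.1, 9000) * MPH_PER_MS))   # 211 (mph, under the limit)
print(round(drum_energy(32, 0.1, 9000) / 1000))   # 71 (kJ stored)
print(round(spin_up_time(drum_energy(32, 0.1, 9000), 10_000), 1))  # 7.1 (s)
```

The missing ingredient the team mentions – aerodynamic drag – would add a speed-dependent loss term, which is why a lossless estimate like this one increasingly overpredicts performance at high rpm.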
Cut-throat cat
In one of the most astonishing bouts of the most recent season, MadCatter – low to the ground and painted to look like a cat, its tail a hammer equipped with a flamethrower – inched towards Malice, whose weapon was a 65 pound (30 kg) horizontal spinning disc. MadCatter struck Malice, flipping it over in a shower of sparks and flame. A few seconds later, MadCatter struck again and Malice spun away wildly. MadCatter struck a third time, clipping Malice’s left wheel and knocking it on its back. Something extraordinary then happened.
Malice remained fully functional on its back, but was unable to rock itself over because of the angular momentum of its disc, which was now spinning like the wheel of an upside-down bicycle. Malice had wound up in a stable configuration that its designers knew about but hadn’t thought to guard against, because the chances of encountering just the right combination of forces to put it there were so freakish. As the referee counted down a KO, MadCatter sat with its two eyes gazing in astonishment at the hapless Malice.
The two match announcers also gaped, having never seen anything like it either. “It’s physics, Chris,” one said to the other, by way of explanation.
Heavyweight champ The team behind fighting robot MadCatter, headed by physicist Martin Mason (right). (Courtesy: Daniel Longmire)
Martin Mason, a physicist and head of the MadCatter team, is as much fun to watch as the matches themselves. Like a pro wrestling champion, he’ll adopt an aggressive persona to gee up the audience and inspire people to tune in. Eyes bulging, eyebrows raised, pointing his finger directly at the camera, he’ll shout in a deep gravelly voice something like: “I’m going to pulverize you!” When I phoned Mason in California, I said I was glad to be safely on the other side of the country from him.
I have to be concrete, practical and engage students quickly
Martin Mason, Mt San Antonio Community College
Turns out he’s really friendly and all he means is that his robot is about to pulverize yours, which is nearly always true. Mason’s day job is as challenging as robotics. He’s just one of a handful of physics professors in the engineering department at Mt San Antonio Community College, just outside Los Angeles, which serves about 50,000 mostly minority and lower-income students. Mason uses robotics to get those students to devise models that can be implemented in a short time in vastly different contexts.
“When I studied physics in graduate school I spent a lot of time at the computer,” Mason told me. “Here I have to be concrete, practical and engage students quickly.” Robotics also enables him to teach students how to make the most of limited resources. “We know we are not going to have the best motors and the best materials, but we want to have the robot run at 100% of what it can do.” Mason’s team finds, for instance, that maximizing a robot’s “punch” can be less important than drivability – the ability to immediately recover from contact and come back to hit the opponent again.
The bull
One of the most exciting and legendary battles in combat robotics, also on the 2016 season of BattleBots, featured Minotaur and Blacksmith. They began by circling each other warily. Blacksmith then suddenly slammed Minotaur into the railing, but Minotaur rallied and crushed its spinning drum into Blacksmith’s flank, sending it flying amid showers of sparks. Landing upside down, Blacksmith righted itself and pounded Minotaur with its 8 kg hammer, momentarily stopping Minotaur’s drum.
Restarting its weapon, Minotaur then ground off Blacksmith’s wedge, knocked off the head of its hammer, and pulverized Blacksmith until its motor exploded. Smoke and flames billowed from Blacksmith, which flapped the shaft of its hammer helplessly. As the referee counted down, Minotaur began gyrodancing, tilting itself onto one wheel and spinning in celebratory circles. The fight has so far been viewed nearly 20 million times on YouTube alone.
Minotaur was designed by Marco Meggiolaro, a mechanical engineer at the Pontifical Catholic University of Rio de Janeiro in Brazil. Meggiolaro has written extensively about the physics of combat robotics, and published Riobotz Robot Combat Tutorial – the most up-to-date textbook on the subject. Minotaur was a class project, and its team is composed of Meggiolaro’s current and former students.
He told me that the basic physics of combat robotics – energy storage and transfer – is simple. But incorporating those principles into massive, radio-controlled vehicles that can attack and defend successfully in duels against similar adversaries with an array of different weapons gave him enough educational material for entire courses in physics and engineering. Even a combat robot’s drive system alone poses complex physics issues, Meggiolaro explained.
We burn up a lot of motors
Marco Meggiolaro, Pontifical Catholic University of Rio de Janeiro
You need good acceleration, for example, so that you can reverse direction to dodge and get behind opponents. Gear ratio is vital too: it determines whether a stationary robot can be propelled to the other side of the arena in under two seconds. Also important is torque – what allowed Minotaur its victory dance – while motors and batteries have to be carefully designed. Minotaur draws 800 A of current in each motor, using brush motors for the drive and brushless motors for the weapon. “We burn up a lot of motors,” Meggiolaro admits.
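The crossing-the-arena requirement translates directly into a required acceleration and a minimum tyre friction coefficient. The 14 m arena length below is an assumed figure for illustration:

```python
G = 9.81  # m/s^2

def required_acceleration(distance_m, time_s):
    """Constant acceleration needed to cover a distance from rest:
    d = (1/2) a t^2, so a = 2d / t^2."""
    return 2 * distance_m / time_s**2

def min_friction_coefficient(accel_ms2):
    """The drive wheels can only push as hard as friction allows,
    so the tyres need mu >= a / g."""
    return accel_ms2 / G

# Hypothetical 14 m arena, crossed from rest in 2 s
a = required_acceleration(14, 2)
print(a)                                       # 7.0 (m/s^2)
print(round(min_friction_coefficient(a), 2))   # 0.71
```

An acceleration of around 0.7 g is near the grip limit of ordinary rubber on steel, which is why drive-train and tyre choices matter as much as raw motor power.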
Minotaur’s weapon poses especially difficult physics problems. It consists of a toothed drum designed to launch an opponent or rip off its armour, and its key parameters are inertia, strength, the number and height of its teeth, rotational speed, collision speed and “tooth bite”. This last parameter, the overlap between the weapon and the opponent, depends on the number of teeth, the angular velocity of the weapon and the relative speed of the two robots.
Lots of small teeth on a drum will chew away at the opponent like a wood chipper, but that takes too much time and causes little damage. Fewer teeth, on the other hand, will provide an uppercut that transfers lots of energy. A single-toothed drum has greater tooth bite but requires a counterweight; two symmetrically placed teeth make the drum more balanced but provide less tooth bite. One good counterweight material is a tungsten alloy so expensive that Meggiolaro explored asymmetrical shapes to eliminate the need for counterweights.
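The tooth-bite trade-off can be captured with a rough formula: the distance the opponent advances into the weapon’s path between successive tooth impacts is approximately the closing speed divided by (number of teeth × rotation rate). A hedged sketch, with a hypothetical 3 m/s closing speed:

```python
def tooth_bite(closing_speed_ms, teeth, rpm):
    """Rough upper bound on 'bite': how far the opponent advances
    between successive tooth impacts, bite ~ v / (N * f),
    where f is the rotation rate in revolutions per second."""
    revs_per_s = rpm / 60
    return closing_speed_ms / (teeth * revs_per_s)

# Closing at 3 m/s against a drum spinning at 9000 rpm
print(round(tooth_bite(3.0, 1, 9000) * 1000))   # 20 (mm with one tooth)
print(round(tooth_bite(3.0, 2, 9000) * 1000))   # 10 (mm with two teeth)
```

Halving the tooth count doubles the bite, and with it the energy delivered per hit – exactly the wood-chipper-versus-uppercut trade-off described above.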
The critical point
The televised broadcast of these fights is riveting, but I’m told it is even more thrilling in person, and not just because of the deafening screams of the spectators. In person at BattleBots you hear the uncanny sounds of the combatants – the scraping and grating of metal, the motors groaning under suddenly elevated loads, the crackle of sparks, the occasional bursting of the machines into flames, and the drama of seeing 110 kg robots send each other spinning several metres in the air, crash upside down on the ground, right themselves, and then turn and attack each other as handily as if they were heavily armoured cats.
In the first few seasons of BattleBots, one effective strategy for competitors was simply to build a robot that was reliable enough to survive in the ring, in the hope that the rival robot would fail, whether by blowing a motor, breaking a drive chain, cracking the chassis or perhaps losing its weapon. After five seasons of engineering trial and error, however, effective strategies rely more and more on physics. Competitors need to inventively incorporate things like bite, energy transfer and absorption, torque and voltage, making the competition ever more effective as a tool to learn and study physics.
At a time when robots, computers and other mechanical devices are forcing us to reconsider what it means to think, pitting robots against each other is, in turn, forcing us to think about physics with a fresh perspective.
The gold nanoparticle sensor next to a grain of rice and a one cent coin. The pink stripes contain rod-shaped nanoparticles, while the white stripes are nanoparticle free. (Courtesy: Katja Krüger)
Implantable biosensors that continuously monitor the concentrations of biomarkers in the body could transform the way we diagnose and treat chronic diseases. However, many existing technologies are not suitable for long-term use as they are either rejected by the body or their signal fades with time.
Researchers from the Johannes Gutenberg University Mainz have developed a novel sensor that can detect analytes in the bloodstream for several months without signal or functionality loss. The technology, which combines colour-stable gold nanorods with a tissue-integrating hydrogel scaffold, could offer a universal platform for round-the-clock monitoring of numerous target analytes in vivo. They describe the new sensor in Nano Letters.
Going for gold
Ensuring a steady stream of information is a top priority in medical sensing. For optical sensors like the one created by Carsten Sönnichsen’s research group, this means choosing a sensing element with excellent photostability (resistance to fading). Thanks to a phenomenon called the plasmon effect, rod-shaped gold nanoparticles absorb and scatter near-infrared light with indefinite photostability.
“We are used to coloured objects bleaching over time. Gold nanoparticles, however, do not bleach but keep their colour permanently,” explains first author Katharina Kaefer.
Importantly, gold nanoparticles are compatible with several molecular recognition elements. In their study, the researchers coated their gold nanorods with a special type of DNA receptor called an aptamer. When the aptamer binds to the target analyte, the optical absorption spectrum of the gold shifts – in other words, the nanoparticles change colour. The extent of this colour change, which is captured using an infrared camera, depends on the concentration of the analyte. By changing the type of aptamer used, the technology is not restricted to one specific analyte and can be easily adapted to measure a range of biomarkers.
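One simple way to picture the read-out step – not the authors’ actual calibration – is to invert a Langmuir-type binding curve relating the measured spectral shift to analyte concentration. The saturation shift and dissociation constant below are hypothetical calibration values:

```python
def concentration_from_shift(shift_nm, shift_max_nm=10.0, kd=5.0):
    """Invert an illustrative Langmuir binding curve,
    shift = shift_max * c / (Kd + c), to recover the concentration:
    c = Kd * shift / (shift_max - shift).
    shift_max_nm and kd are hypothetical calibration constants."""
    return kd * shift_nm / (shift_max_nm - shift_nm)

# A shift of half the saturation value corresponds to c = Kd
print(concentration_from_shift(5.0))   # 5.0
```

Swapping the aptamer changes only the calibration constants in a model like this, which is what makes the platform adaptable to different biomarkers.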
Sensing under the skin
Sönnichsen likens the sensor, which is less than 1 mm thick, to an invisible tattoo. The nanoparticles are embedded in a hydrogel scaffold that, when implanted under the skin, integrates with the surrounding tissue (in this case, the skin of hairless rats).
The team demonstrated the response of their sensor by injecting the rats with the antibiotic kanamycin. They found that the extent of gold nanoparticle colour change increased with increasing kanamycin dosage – a phenomenon that was not observed when the rats were injected with saline. What’s more, the sensor remained well-perfused and responded to kanamycin in the bloodstream for over two months. The researchers also checked the device for fibrous encapsulation – a tell-tale sign of implant failure – and observed minimal fibrous tissue formation.
The results highlight the sensor’s potential for long-term implantable biosensing. To develop the technology further, the team hopes to explore features that are useful in personalized medicine, such as sensor read-outs.
Quantum entanglement between two macroscopic vibrating drumheads has been demonstrated by two independent research groups. As well as being used to study the interface between the quantum and classical worlds, the systems could have practical applications in a range of quantum technologies.
Quantum mechanics was first developed to explain the behaviour of tiny objects such as subatomic particles and is still often described as the physics of the very small. Although quantum mechanics applies to everything regardless of its size, detecting quantum effects in large objects is difficult because of the blurring effects of both classical and quantum noise.
Always up for a challenge, physicists are keen to observe quantum phenomena on ever larger scales and now two independent teams have demonstrated the detection and manipulation of entanglement in vibrating aluminium membranes that resemble drumheads.
Quantum-free subspace
In one study, researchers in Finland and Australia used a trick first proposed by Eugene Polzik of the University of Copenhagen and others. They created a “quantum mechanics free subspace” in which they could make measurements that appeared to violate Heisenberg’s uncertainty principle. This famous principle says that, because one cannot measure the state of a system without disturbing it (a phenomenon called quantum backaction), it is never possible to know both the position and momentum of an object beyond a certain degree of accuracy.
The team achieved this by measuring a specifically chosen resonant frequency between a pair of vibrating membranes (each about 10 microns across) in two separate microwave cavities, such that the quantum backaction was not visible in the signal, explains Mika Sillanpää of Aalto University in Finland, who led the research. “We select the one part in which we are interested, but Heisenberg dictates that this backaction has to go somewhere,” says Sillanpää. “We put this backaction in the part of the system that we cannot see and which we don’t care about.” The researchers used this extraordinarily precise measurement to create and verify a stable entangled state between the two membranes. Sillanpää now hopes to use the technique to search for gravitational effects in quantum systems.
In the second study, researchers at the US National Institute of Standards and Technology (NIST) in Colorado created entangled states between two 10 micron-scale membrane resonators in one cavity by simultaneously applying pulses of two different microwave frequencies. If the drums had responded independently to the pulses, the first would have been heated, whereas the second would have been cooled by Doppler cooling (a process used to laser-cool atoms).
Leaking information
However, the simultaneous pulses could also be used to entangle the motions of the membranes using the fact that one membrane interacts with photons generated by the oscillation of the other. “The entangling operation is a little bit more delicate because we want the membranes to strongly interact with each other, but at the same time we don’t want information about that entangled state coming out to the rest of the universe,” explains NIST’s John Teufel. “During the state preparation, you apply microwave pulses to do the entangling and then, when you want to verify, you apply microwave pulses to do the readout.”
In the measurement phase, the researchers measured the positions and momenta of the two membranes at the same instant. After 10,000 repetitions, they found that the measurements were so well correlated that the two membranes must be entangled: otherwise, the predictability of the position and momentum of one membrane from those of the other would violate Heisenberg’s uncertainty principle.
“The position of one and the position of the other agrees much better than the diameter of a proton,” says team member Shlomi Kotler, now at the Hebrew University of Jerusalem.
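The logic of this verification can be illustrated with a Duan–Simon-type separability criterion: in one common normalization, any separable (unentangled) state obeys Var(x₁+x₂) + Var(p₁−p₂) ≥ 2, so a measured sum below that bound certifies entanglement. A toy numerical sketch with simulated two-mode-squeezed statistics (the squeezing parameter is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000   # number of repeated measurements, as in the experiment
r = 1.0      # hypothetical squeezing parameter

# In units where each joint quadrature has vacuum-level variance 1,
# two-mode squeezing reduces Var(x1+x2) and Var(p1-p2) to exp(-2r)
x_sum = rng.normal(scale=np.exp(-r), size=n)
p_diff = rng.normal(scale=np.exp(-r), size=n)

epr_variance = x_sum.var() + p_diff.var()
print(epr_variance < 2.0)   # True: below the separability bound
```

In the real experiment the correlated quantities are the membranes’ measured positions and momenta rather than simulated samples, but the test is the same: correlations this strong are impossible for any unentangled pair.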
Not just cute science
The researchers are now looking to build more complex networks of entangled resonators: “From a NIST technology perspective, these types of ideas are more than just cute science – they’re the technology you’d want to do quantum communication or quantum networking,” says Teufel.
Commenting on the two studies, Polzik describes them as “great work”. He adds, “To me the results are pretty similar and achieved by similar means in similar systems”. Polzik points out that the results complement work done by his team in 2020, which achieved photon-mediated entanglement between a millimetre-scale membrane and a distant atomic ensemble. He says that the spatial proximity of the mechanical resonators used in the two new studies means that perturbations to one are also likely to affect the other, which could limit research applications. He also points out that the systems could find use in local quantum information processing, perhaps as quantum memories.
Aashish Clerk of the University of Chicago is also impressed with the studies but sees a crucial difference between them: “The Sillanpää experiment does really leverage these really neat backaction-evading measurements to get entanglement, and when they do they’re able to stabilize an entangled state,” he says. “That’s a good thing because the entanglement is just there waiting for you; it’s a bad thing because the system has forgotten anything about its initial state”. He adds, “The Teufel group do something like a two-qubit gate: when they do the entangling operation, the form of the entangled state they get depends on the states of the resonators to begin with. These are two different ways that both generate entanglement, but they’re different ways and they have different possible utility.”
The studies are described in separate papers in Science.
China has become the second country after the US to successfully land a spacecraft on the surface of Mars. According to state media, the controlled touchdown of Zhurong – a six-wheel rover named after a fire god in Chinese mythology – occurred at around 7:18 a.m. Beijing Time on 15 May. It landed, as planned, in southern Utopia Planitia, a largely flat area between 25° and 30° north of the Martian equator.
Landing a spacecraft on the surface of Mars is one of the hardest things to do in planetary exploration
John Logsdon
Zhurong was launched in July 2020 as part of China’s first independent Mars mission, Tianwen-1. The mission arrived at Mars in February and orbited the red planet for three months, searching for the best location to descend. Tianwen-1 then released the lander – containing Zhurong – after 1 a.m. on 15 May. The lander first used a heat shield during entry into the Martian atmosphere, followed by a parachute during descent, before finally firing a “retro engine” to touch down safely. Tianwen-1, which remains in orbit, will now provide relay communication between Zhurong and Earth.
Only about half of all attempts to orbit or land on Mars have succeeded. Before Zhurong, the US had put eight landers and rovers on Mars, including most recently the Perseverance rover. “Landing a spacecraft on the surface of Mars is one of the hardest things to do in planetary exploration,” says John Logsdon, a space policy expert at George Washington University. “China is to be congratulated on its success on its first attempt at achieving such a landing.”
Next steps
According to state media, Zhurong is soon expected to roll off the landing platform and onto the Martian surface for scientific exploration. Onboard Zhurong are six scientific instruments, including a navigation and topography camera, a subsurface detection radar and a surface magnetic-field detector. As scientists have long suspected that Utopia Planitia is covered with ancient mudflows, the rover will aim to examine the distribution of water/ice and look for signs of past life. “We look forward to Zhurong’s scientific discoveries at Utopia Planitia by probing the surface and subsurface, so we can better understand the history of Martian water and habitability,” says Bernard Foing of the European Space Agency, who is also director of the International Lunar Exploration Working Group’s EuroMoonMars Initiative.
This year marks an intense period of launches and space activities for China. Just last month, the country sent the 22-tonne core module of the Chinese Space Station into low-Earth orbit. Later this month and early next, cargo and crewed missions will follow to make the station operational. When complete in 2022, it will join the US-led International Space Station as one of only two fully functional space stations.
Radiotherapy using rapid irradiation at high dose rates, known as FLASH, could be used to protect healthy tissues during cancer treatments. Studies in animals have shown that electron irradiation with dose rates above 40 Gy/s reduces normal tissue damage while maintaining the tumour control seen at clinical dose rates (around 2 Gy/min). Meanwhile, the first clinical trials of FLASH radiotherapy are now getting underway.
The mechanism behind this FLASH effect, however, remains unknown.
One popular theory proposed to explain the FLASH effect is that depletion of oxygen during irradiation creates a temporary hypoxic environment for both healthy and cancer cells. Hypoxic cells are 2–3 times more resistant to radiation than oxygenated cells. And as many cancers are already hypoxic (while healthy tissue is fully oxygenated), FLASH irradiation could provide a protective effect for healthy tissue without impacting the response of cancer cells.
The oxygen depletion is caused by radiolysis of water molecules, a process that creates reactive radicals that then react with oxygen molecules in the tissue. While simulations of such radiolysis processes have been published, there’s a lack of measurement data and systems that can evaluate oxygen consumption. To address these shortfalls, researchers in Germany have experimentally investigated radiolysis-induced oxygen consumption as a potential FLASH mechanism, publishing their findings in Medical Physics.
“We decided to measure the amount of oxygen being consumed by radiation with our oxygen sensors,” says first author Jeannette Jansen, a PhD student in Joao Seco’s group at the German Cancer Research Center (DKFZ). “The focus of the study was to quantify directly the amount of oxygen being removed by FLASH for different amounts of delivered dose.”
Irradiation investigations
Jansen, Seco and collaborators monitored oxygen consumption during FLASH irradiation of cylindrical water phantoms, using water with initial oxygen concentrations of between 0% and 21% atm (the concentration expected for water in contact with air containing 21% oxygen).
Elke Beyreuther (left) and Jörg Pawelke from OncoRay at the experimental proton beam line. (Courtesy: Katja Storch, OncoRay)
The team irradiated the phantoms with several radiation types at different dose rates: 225 kV photons at dose rates up to 52 Gy/s; 400 and 150 MeV/u carbon ions at dose rates up to 9.5 Gy/s; and 224 MeV protons at dose rates up to 340 Gy/s. Carbon ion irradiations were performed at the Heidelberg Ion Beam Therapy Center and proton irradiations at OncoRay Dresden.
The researchers employed TROXSP5 optical sensors to noninvasively measure changes in oxygenation during irradiation. They saw that oxygen levels in the water phantoms reduced linearly with increasing time and dose, and that the consumption rate was independent of the initial oxygen concentration.
The dose rate, however, did affect the rate of oxygen consumption, as well as impacting the dose level required for total depletion. Plotting the amount of oxygen removed per unit dose versus dose rate revealed a nonlinear relationship, with higher dose rates leading to less oxygen consumption for all radiation types.
The amount of oxygen consumed during irradiation also depended upon the particle type. For 10 Gy dose delivery, the oxygen consumption was 0.04–0.18% atm for photons, 0.04–0.25% atm for protons, and 0.09–0.17% atm for carbon ions, depending upon the dose rate.
Oxygen is not to blame
For a phantom filled with water with an initial oxygen concentration of 2% atm, and assuming linear oxygen depletion (as observed in the measured data), the team calculated that 10 Gy radiation cannot deplete oxygen completely in water, for any of the radiation types or dose rates studied. At higher dose rates, 10 Gy irradiation could not reduce the oxygen concentration below 1.75% atm, which is not low enough to induce radioresistance.
“As indicated by the oxygen enhancement ratio, cells must become hypoxic for the protective effect of FLASH to occur,” explains Jansen. “Therefore, the residual amount of oxygen should be less than 0.5% atm to allow a factor of two decrease in oxygen-based damage.”
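The arithmetic behind this conclusion is simple to check directly. The sketch below uses the per-10 Gy consumption ranges reported above, the 2% atm starting concentration and the 0.5% atm hypoxia threshold, and assumes linear depletion with dose (as the measurements showed); the worst case takes the top of each range.

```python
# Back-of-envelope check of the oxygen-depletion argument, assuming
# linear depletion with dose (as observed in the measured data).
consumption_per_10Gy = {            # (min, max) % atm consumed by 10 Gy
    "photons":     (0.04, 0.18),
    "protons":     (0.04, 0.25),
    "carbon ions": (0.09, 0.17),
}
initial_o2 = 2.0          # % atm, the phantom considered above
hypoxia_threshold = 0.5   # % atm: residual level needed for radioresistance

for particle, (_, worst) in consumption_per_10Gy.items():
    residual = initial_o2 - worst   # most oxygen-hungry case for 10 Gy
    hypoxic = residual < hypoxia_threshold
    print(f"{particle}: residual O2 after 10 Gy >= {residual:.2f}% atm "
          f"-> hypoxic: {hypoxic}")
```

Even in the worst case (protons, 0.25% atm consumed) the residual concentration stays at 1.75% atm, more than three times the hypoxia threshold.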
The team attributes the reduced oxygen consumption seen at higher dose rates to the lower number of reactive radicals available to react with oxygen. Although higher dose rates produce more radicals, which might be expected to increase oxygen consumption, many of these radicals are instead removed via self-interactions.
For example, the researchers calculated that solvated electrons (free electrons in solution) can diffuse far enough to interact with each other. Such radical recombinations actually remove radicals faster at higher dose rates. At FLASH dose rates, this results in a lower steady-state radical population and reduced oxygen consumption – as observed in the experiments.
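This competition argument can be illustrated with a toy rate equation (an assumption-laden sketch, not the model from the paper): radicals R are produced in proportion to dose rate and are lost either by reacting with oxygen, which consumes it, or by radical–radical self-interaction. All rate constants here are invented purely to expose the qualitative effect.

```python
def oxygen_consumed(dose_rate, total_dose=10.0, yield_per_gy=0.02,
                    k_o2=1.0, k_self=50.0, dt=1e-3):
    """Euler-integrate a toy kinetic model
        d[R]/dt  = G - k_o2*[R]*[O2] - k_self*[R]**2
        d[O2]/dt = -k_o2*[R]*[O2]
    where G = yield_per_gy * dose_rate, and return the oxygen consumed
    (arbitrary units) after delivering total_dose. Constants are
    illustrative, chosen only to expose the competition between the
    oxygen-consuming channel and radical self-interaction."""
    o2, r = 2.0, 0.0                 # initial oxygen; no radicals yet
    g = yield_per_gy * dose_rate     # radical production rate
    steps = int(total_dose / dose_rate / dt)
    for _ in range(steps):
        loss_to_o2 = k_o2 * r * o2
        r += (g - loss_to_o2 - k_self * r * r) * dt
        o2 -= loss_to_o2 * dt
    return 2.0 - o2

conventional = oxygen_consumed(dose_rate=0.1)   # conventional-like regime
flash = oxygen_consumed(dose_rate=100.0)        # FLASH-like regime
print(f"O2 consumed per 10 Gy: conventional {conventional:.3f}, "
      f"FLASH {flash:.3f}")
```

At the low dose rate the radical population stays small and nearly every radical ends up reacting with oxygen; at the high dose rate the large transient radical population is mostly removed by the quadratic self-interaction term, so less oxygen is consumed per unit dose.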
The researchers conclude that while FLASH irradiation does consume oxygen, their results imply that oxygen depletion alone is not a suitable mechanism to explain the FLASH effect.
“At clinical radiation doses, not enough oxygen is depleted to make a great difference in the resulting oxygen level in the cell, at least not enough to explain a difference in survival,” says Jansen. “And according to our results, less oxygen is depleted for higher dose rates, which is contradictory to what was postulated before.”
“We are currently investigating the FLASH effect in vitro, to test other mechanisms besides oxygen depletion that could explain the FLASH effect,” senior author Seco tells Physics World.
Child focused Researchers are working out how the brains of new-born babies develop by applying AI techniques to these magnetic-resonance imaging (MRI) scans taken as part of the Developing Human Connectome Project, led by King’s College London, Imperial College and the University of Oxford. (Courtesy: Developing Human Connectome Project)
What benefits does AI bring to medical physics that other tools cannot?
Artificial intelligence (AI) is a constantly evolving discipline that could be of great scientific value to medical physics in three main areas. First, with most modern hospitals relying heavily on imaging to diagnose patients, AI could help busy staff to screen patients earlier and so manage their workloads more efficiently. If we had well-trained deep-learning models, then someone going in for, say, a magnetic resonance imaging (MRI) or computed tomography (CT) scan could immediately be flagged as a priority by the algorithm in case something suspicious is found. In fact, a group led by Ryan McTaggart from Brown University in the US has already developed a practical tool to quickly identify and prioritize patients with potentially life-threatening blockages to their blood vessels (Radiology 297 640). Such a tool would be impossible without AI technologies; instead, a radiologist would have to laboriously go through each image one by one.
Second, research has shown that deep-learning models can often reduce medical errors. In a paper published last year in Nature (577 89), for example, an AI system studying X-ray mammograms was shown to be better than human experts when it came to predicting whether or not a patient has breast cancer. More specifically, the model was found to be as good as two doctors looking at the images, and better at spotting cancer than a single doctor, while also reducing the number of “false-negative” results. Such systems will never replace medical staff, but would serve as an extra set of eyes, while also being able to work 24/7 without getting tired or making mistakes.
Finally, I believe that AI tools will soon be used in self-diagnostic applications. I can imagine doctors in a local GP clinic, for example, helping patients to monitor their health using their smartphones and to keep track of their physical condition – even diagnosing themselves when worried. Quite simply, AI software can do things that would be impossible with other tools in medical physics.
What are the disadvantages of using AI?
AI techniques are still in their infancy. Yes, they have shown great potential, and yes, they can already solve lots of tasks. But AI is not yet fully “intelligent”. One critical example is a 2019 study in the journal JAMA Dermatology (155 1135), which showed that a deep-learning algorithm trained on skin-cancer images could be wrongly biased towards predicting melanoma. Such biases will have to be removed from our training datasets for the AI to become accurate enough for use in the real world.
Machine-learning techniques currently depend on high-quality datasets. Any bias introduced in the data could therefore be used as a “shortcut” for the algorithm to exploit. It’s a problem that extends beyond medical imaging. For example, with current technology, a self-driving car cannot accurately distinguish between a picture of a human and a real person on the road, which could lead to potentially disastrous outcomes.
The good news is that finding these vulnerabilities will let researchers maximize the capabilities of AI, and thereby improve our understanding of how they work and of how to develop new and better algorithms.
How do you use AI in your research?
As part of my PhD, we are seeking to improve our understanding of how a baby’s brain develops during the first few weeks of its life. In particular, we are trying to characterize the developmental trajectory of a healthy brain by applying AI to MRI scans of babies acquired via the Developing Human Connectome Project (dHCP), led by King’s College London, Imperial College and the University of Oxford. In my project, we are trying to create a “movie” of how a normal brain develops, but as we only have “snapshots” of different newborn babies at different ages, we first need to match similar anatomical areas of the human brain across all the different images and babies. It’s challenging work as the brain changes significantly during those vital first few weeks.
To achieve this goal, I am building a deep-learning framework that can take both structural and microstructural information of the brain and find these “anatomical correspondences” over time. By the end of my PhD in 2022 we hope to have a 4D model of brain development that can be used to detect abnormalities, predict developmental trajectories, and find if there are any anatomical regions in the brain that are important for normal development.
Smart thinking AI is being used to automatically localize the foetus and its heart in MRI images of a mother’s uterus as part of the iFind (intelligent Foetal Imaging Diagnosis) project. (Courtesy: Alena Uus)
Along with Alena Uus and Maria Deprez, I am also using AI to automatically localize the foetus in MRI images of a mother’s uterus. The dataset is part of the iFind (intelligent Foetal Imaging Diagnosis) project, led by clinicians from King’s College London, St Thomas’ Hospital, Imperial College, the University of Florence in Italy, the Hospital for Sick Children in Toronto, Canada, and Philips Healthcare. In this work, not only do we try and detect the foetus in the images, but also its different organs, which will ultimately help improve the information available for clinicians to perform their diagnosis.
In what specific areas of medical physics will AI be most useful and most crucial in future?
One key example that comes to mind is the fastMRI challenge – a collaborative research project between Facebook AI Research and the NYU Langone Health medical centre in New York City, which was launched in 2018 to make MRI scans faster. For most patients, MRI scans are uncomfortable if they take too long. They’re also expensive. But by acquiring “under-sampled” images and using AI methods to reconstruct the data, the hope is that we can make scans up to 10 times faster.
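The acquisition-side idea is easy to demonstrate. The toy sketch below (numpy only, with naive zero-filling standing in for the learned reconstruction, and a synthetic square phantom rather than real scan data) keeps one phase-encode line in four of a simulated k-space, for a notional four-fold speed-up, and shows that the naive reconstruction is degraded. That degradation is precisely the gap an AI reconstruction model is trained to close.

```python
import numpy as np

# Toy illustration of k-space undersampling, the acquisition-side idea
# behind fastMRI; the AI reconstruction is replaced by naive zero-filling.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                # simple square "phantom"

kspace = np.fft.fft2(image)              # fully sampled k-space

# Keep only 1 in 4 phase-encode lines -> a notional 4x faster scan
mask = np.zeros(64, dtype=bool)
mask[::4] = True
undersampled = np.where(mask[:, None], kspace, 0)

# Naive reconstruction: inverse FFT of the zero-filled k-space.
# The missing lines show up as aliasing artefacts in the image.
zero_filled = np.abs(np.fft.ifft2(undersampled))
speedup = mask.size / mask.sum()
error = np.abs(zero_filled - image).mean()
print(f"nominal speed-up: {speedup:.0f}x, "
      f"zero-filled reconstruction error: {error:.3f}")
```

The zero-filled image is badly aliased; a learned reconstruction would instead exploit prior knowledge of what MRI images look like to recover a faithful image from the same undersampled data.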
At the same time, I believe that AI will soon be used to help radiology departments with management and planning. More specifically, I can envisage having an AI algorithm that can decide which tests a patient should have, and how a treatment should be personalized for each individual. Our imaging protocols could also benefit from this by training algorithms to predict the most effective and cost-effective ways of optimizing our current imaging protocols.
AI is here to stay – at least for the foreseeable future – and I think it will eventually become part of standard clinical practice. But as with all new technology, there is still much work to be done – and it will be vital to develop models that are ethical, accurate and trustworthy.
Quantum computers can in principle solve certain problems faster than classical computers. Building quantum machines that actually outperform classical computers at specific tasks is a milestone termed “quantum advantage”. Boson sampling has been considered an intermediate step towards linear optical quantum computing, and a strong candidate for demonstrating quantum computational advantage. Since the proposal of boson sampling in 2013, however, a major experimental challenge has been scaling up to a non-trivial regime of more than 50 photons. The speaker will outline the three components of a boson sampling machine, and describe his path to building a 76-photon, 100-mode photonic quantum computer to demonstrate quantum advantage.
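As background (an aside not drawn from the abstract itself): the classical hardness of boson sampling is widely attributed to the matrix permanent, since each output probability is proportional to |Perm(A)|² for a submatrix A of the interferometer’s unitary, and the best known classical algorithms for the permanent, such as Ryser’s formula sketched below, scale exponentially with photon number.

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n * n) terms.

    Unlike the determinant, no polynomial-time algorithm is known for
    the permanent; this exponential cost is what makes classical
    simulation of boson sampling intractable as photon number grows."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 1, 1]] * 3))  # all-ones 3x3 matrix: permanent = 3! = 6
```

Even this best-known exact method doubles in cost with each added photon, which is why pushing experiments past 50 photons puts them beyond brute-force classical verification.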
Chao-Yang Lu was born in 1982 in Zhejiang, China. He obtained a Bachelor’s degree from the University of Science and Technology of China (USTC) in 2004, and a PhD in physics from the Cavendish Laboratory, University of Cambridge, in 2011. Since 2011 he has been a professor of physics at USTC. His current research interests include quantum computation, solid-state quantum photonics, multiparticle entanglement, quantum teleportation, superconducting circuits and atomic arrays. His work on quantum teleportation was selected by Physics World as “Breakthrough of the Year 2015”. He was the chair of Quantum 2020 and has served on the editorial boards of international journals including Quantum Science and Technology, PhotoniX, Advanced Photonics, Advanced Quantum Technology, Science Bulletin and iScience.
Speaker relationship with IOP Publishing
Editorial board member, Quantum Science and Technology and chair for IOP Publishing’s Quantum 2020 virtual conference that took place in October 2020.
Why not sign up for our other Quantum Week webinars? Even if you’re not able to join the live event, registering now enables you to access the recording as soon as it’s available.