“Marangoni surfers” can create and exploit surface tension gradients at fluid interfaces to propel themselves at ultrafast speeds, researchers in Switzerland, Germany and the UK have found. The team, led by Kilian Dietrich at ETH Zurich, demonstrated the effect by illuminating Janus spheres with a specially shaped laser beam, enabling them to reach speeds of up to 10,000 times their length per second.
Active particles are unique in their ability to convert energy from their surroundings into directed motion – a behaviour made possible by an in-built asymmetry in their shape or composition. Owing to their simplicity, particles called Janus spheres are a popular choice for these experiments.
These are silica microbeads with one side left bare and the other coated in gold. One mechanism that can propel the particles is asymmetric heating of the two surfaces by a light source, with the gold side absorbing more of the light – and therefore heating up more – than the bare silica. The resulting temperature gradient around the particle propels it along a straight path.
On its own, this mechanism is highly inefficient, propelling particles at speeds of only a few body lengths per second. In their study, Dietrich’s team sought to boost this efficiency by coupling the temperature gradient to surface tension gradients in the surrounding fluid. They achieved this by positioning Janus spheres at a flat oil-water interface, then illuminating them with a powerful, specially shaped laser beam.
Surface tension gradients
The resulting temperature asymmetry surrounding the particle lowered the surface tension at the oil-water interface on the warmer gold side, inducing the Marangoni effect – whereby flows arise along surface tension gradients at fluid interfaces. The Janus spheres could then “surf” along the resulting flows, in the direction of the surface tension gradients they created themselves.
In their wake, Dietrich and colleagues found that the Marangoni-surfing spheres left areas in which concentrations of surface tension-reducing surfactant molecules were depleted. This created a second surface tension gradient which acted in the opposite direction to the first – imposing a restriction on their motion. Therefore, the overall velocities of the spheres arose from a balance between these two components.
Dietrich’s team could easily control the velocities of their Marangoni surfers by varying either the power of the laser or the concentration of surfactants at the interface. Their demonstrated velocities spanned four orders of magnitude: from a few microns per second to several centimetres – around 10,000 body lengths – per second. This represented a vast increase on the speeds of particles that propel themselves along temperature gradients alone.
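As a rough consistency check – assuming a typical Janus-sphere diameter of about 2.5 μm, a value not quoted above – the top speed works out as follows:

```latex
% Assumed sphere diameter (illustrative only): d \approx 2.5~\mu\text{m}
v \;\approx\; 10^{4}\,\text{body lengths s}^{-1} \times 2.5~\mu\text{m}
  \;=\; 2.5~\text{cm s}^{-1},
% which sits at the "several centimetres per second" end of the reported range.
```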
The team’s discoveries offer new insights into systems whose dynamics are far from thermal equilibrium. Through further research, they could lead to new ways to manipulate active matter – in which large ensembles of suspended particles borrow energy from their surrounding environments to propel themselves and exert mechanical forces.
I’m not a machine (as far as I’m aware) but one day AI bots will probably replace at least some aspects of my job as a science journalist. Still, could these computer systems ever really be capable of doing “proper” science journalism? And what is science journalism anyway?
These were some of the questions at the heart of a session on AI journalism at the European Conference of Science Journalism (ECSJ), which took place on 1 September in Trieste, Italy, with most participants joining remotely.
“If we deploy this technology in a way that’s helpful for us, I think it can help science journalism, enrich it and make our jobs easier,” said Mićo Tatalović, news editor of Research Fortnight. Tatalović pointed to several examples where AI has already entered the newsroom and other forms of science communication.
They include (e) Science News, a website that employs an AI editor to curate its news across a range of disciplines. Elsewhere, there’s Science Surveyor, which can provide background information to a field, and SciNote, which can help you write your next scientific paper.
Tatalović spent a year working with computer scientists at Massachusetts Institute of Technology developing an AI bot to automate science news-writing. The group trained a neural network to write jargon-free text by feeding it with news articles from Science Daily along with the corresponding research papers behind the stories. Tatalović believes that the system would become increasingly skilled with additional data.
So could such systems eventually remove the need for science reporters like me?
Not likely, said Harry Collins, a sociologist from Cardiff University and another speaker on the ECSJ panel. “There is a carapace of public descriptions of what AI can do, which doesn’t necessarily reflect what AI can actually do.”
A case in point was the Guardian’s op-ed last week with the somewhat misleading headline “A robot wrote this entire article. Are you scared yet, human?”, which was produced by the GPT-3 language generator. A note following the reasonably well-written piece revealed that editors had cherry-picked and edited the best bits from eight different essays – giving readers an exaggerated view of GPT-3’s capabilities.
Interested in the sociological dimensions of big-science projects, Collins spent over four decades following the quest to directly detect gravitational waves. He believes the real key to that historic first detection in 2015 was human interaction, underlining that the LIGO-Virgo collaboration had previously rejected several apparent detections. “The real art of science is working out what’s good data and what’s rubbish,” he said.
Collins argued that if AI is of limited use in breakthrough science, then by extension the same is true for science journalism. “Serious journalism is where people are trying to get themselves to the frontier of science and develop a real serious understanding of science,” he said.
You can see the full discussion in this video, also featuring Fabiana Zollo from Ca’ Foscari University of Venice and Charlie Beckett of the London School of Economics.
A comparison of three commercially available artificial intelligence (AI) systems for breast cancer detection has found that the best of them performs as well as a human radiologist. Researchers applied the algorithms to a database of mammograms captured during routine cancer screening of nearly 9000 women in Sweden. The results suggest that AI systems could relieve some of the burden that screening programmes impose on radiologists. They might also reduce the number of cancers that slip through such programmes undetected.
Population-wide screening campaigns can cut breast-cancer mortality drastically by catching tumours before they grow and spread. Many of these programmes employ a “double-reader” approach, in which each mammogram is assessed independently by two radiologists. This increases the procedure’s sensitivity – meaning that more breast abnormalities are caught – but it can strain clinical resources. AI-based systems might alleviate some of this strain – if their effectiveness can be proved.
Fredrik Strand. (Courtesy: Martin Stenmark)
“The motivation behind our study was curiosity about how good AI algorithms had become in relation to screening mammography,” says Fredrik Strand at Karolinska Institutet in Stockholm. “I work in the breast radiology department, and have heard many companies market their systems but it was not possible to understand exactly how good they were.”
The companies behind the algorithms that the team tested chose to keep their identities hidden. Each system is a variation on an artificial neural network, differing in details such as their architecture, the image pre-processing they apply and how they were trained.
The researchers fed the algorithms with unprocessed mammographic images from the Swedish Cohort of Screen-Age Women dataset. The sample included 739 women who had been diagnosed with breast cancer less than 12 months after screening, and 8066 women who had received no diagnosis of breast cancer within 24 months. Also included in the dataset, but not accessible to the algorithms, were the binary “normal/abnormal” decisions made by the first and second human readers for each image.
The three AI algorithms rate each mammogram on a scale of 0 to 1, where 1 corresponds to maximum confidence that an abnormality is present. To translate these continuous scores into the binary system used by radiologists, Strand and colleagues chose a threshold for each AI algorithm such that the resulting binary decisions had a specificity (the proportion of negative cases classified correctly) of 96.6%, equivalent to the average specificity of the first readers. This meant that only mammograms scoring above the threshold value for each algorithm were classed as abnormal. The ground truth to which they were compared comprised all cancers detected at screening or within 12 months thereafter.
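The thresholding step can be illustrated with a short sketch. This is not the study’s code – the score distributions are simulated and the function names are hypothetical – but it shows how a threshold fixed by a target specificity then determines the sensitivity.

```python
# Minimal sketch: turning a continuous AI score (0-1) into a binary call
# at a fixed specificity, then reading off the resulting sensitivity.
import numpy as np

def threshold_for_specificity(neg_scores, target_specificity=0.966):
    """Threshold such that target_specificity of cancer-free exams score below it."""
    return np.quantile(neg_scores, target_specificity)

def sensitivity(pos_scores, threshold):
    """Fraction of cancer exams flagged as abnormal at this threshold."""
    return np.mean(np.asarray(pos_scores) >= threshold)

# Simulated scores standing in for the 8066 cancer-free and 739 cancer exams
rng = np.random.default_rng(0)
neg_scores = rng.beta(2, 8, size=8066)
pos_scores = rng.beta(6, 3, size=739)

thr = threshold_for_specificity(neg_scores)
print(f"threshold = {thr:.3f}, sensitivity = {sensitivity(pos_scores, thr):.1%}")
```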
Under this system, the researchers found that the three algorithms, AI-1, AI-2 and AI-3, achieved sensitivities of 81.9%, 67.0% and 67.4%, respectively. In comparison, the first and second readers averaged 77.4% and 80.1%. Some of the abnormal cases identified by the algorithms were in patients whose images the human readers had classified as normal, but who then received a cancer diagnosis clinically (outside of the screening programme) less than a year after the examination.
This suggests that AI algorithms could help correct false negatives, particularly when used within schemes based on single-reader screening. Strand and colleagues showed that this was the case by measuring the performance of combinations of human and AI readers: pairing AI-1 with an average human first reader, for example, increased the number of cancers detected during screening by 8%. However, this came with a 77% rise in the overall number of abnormal assessments (including both true and false positives). The researchers say that the decision to use a single human reader or high-performing AI algorithm, or a human–AI hybrid system, would therefore need to be made after a careful cost–benefit analysis.
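A plausible way to read the human–AI pairing – an assumption on my part rather than the published protocol – is as a double-reading rule in which an exam is recalled if either reader flags it, which naturally raises both the number of cancers caught and the total number of abnormal assessments:

```python
# Hypothetical combination rule for a human-AI reader pair (not the study's protocol):
# recall an exam if either the human reader or the AI flags it as abnormal.
import numpy as np

human_abnormal = np.array([False, True, False, True])   # illustrative first-reader calls
ai_abnormal    = np.array([True,  True, False, False])  # illustrative AI calls at threshold

combined = human_abnormal | ai_abnormal
print(combined.sum(), "abnormal assessments vs", human_abnormal.sum(), "for the human alone")
```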
As the field advances, we can expect the performance of AI algorithms to improve. “I have no idea how effective they might become, but I do know that there are several avenues for improvement,” says Strand. “One option is to analyse all four images from an examination as one entity, which would allow better correlation between the two views of each breast. Another is to compare to prior images in order to better identify what has changed, as cancer is something that should grow over the years.”
Full details of the research are published in JAMA Oncology.
A gamma-ray flare originating from a distant blazar was likely generated by magnetic reconnection within a black hole’s relativistic jet, a pair of researchers in India and Germany have proposed. Amit Shukla at the Indian Institute of Technology Indore and Karl Mannheim at the University of Würzburg used observations from NASA’s Fermi-LAT space telescope to reveal how “mini-jets” form within the blazar’s larger plasma jets, producing high-energy gamma rays. Their conclusions provide new insights into how the magnetic fields surrounding supermassive black holes dissipate their vast amounts of energy.
Powerful magnetized jets are common features of the spinning supermassive black holes that occupy the centres of large galaxies. Within these jets, plumes of accelerated matter can extend hundreds of thousands of light-years along the black hole’s rotational axis, dissipating their energy by emitting radiation across the entire electromagnetic spectrum. These emissions are thought to be boosted by shock waves travelling along the jets, which accelerate particles to highly relativistic speeds. However, Shukla and Mannheim propose that such shock acceleration would be too inefficient within a black hole’s magnetically dominated plasma to fully explain how the jets dissipate their energy.
The duo explored this idea using observations gathered by Fermi-LAT, a space-based gamma-ray detector. In 2018, Fermi-LAT observed a giant gamma-ray flare in the distant blazar 3C 279 that lasted for almost six months. Within this time, the flare displayed a distinct flickering, sometimes doubling in brightness on timescales of just a few minutes. The observations provided Shukla and Mannheim with an ideal opportunity to examine how energy is dissipated within the innermost parts of black hole jets.
Magnetic topologies
Based on the timescales of the flickering they observed, the researchers concluded that the regions of gamma-ray emission within the flare were limited in size. This suggested that the accelerations responsible were driven by structures far smaller than jet-spanning shock waves. Instead, Shukla and Mannheim argue that they can be better explained by the process of magnetic reconnection – which describes how the topologies of magnetic fields within highly conductive plasmas can be rearranged. This process converts the magnetic energy of the plasma into kinetic and heat energy, driving particle acceleration.
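The size argument rests on causality: an emitting region cannot vary coherently faster than light can cross it. A standard form of this bound – given here as general background rather than the authors’ exact expression – is:

```latex
R \;\lesssim\; \frac{c\,\Delta t_{\mathrm{var}}\,\delta}{1+z},
% where \Delta t_var is the observed brightness-doubling time, \delta the jet's
% Doppler factor and z the blazar's redshift. For doubling times of a few minutes,
% R comes out far smaller than the jet's cross-section, pointing to compact
% structures such as reconnection-powered mini-jets.
```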
In addition, Shukla and Mannheim found that gamma rays in the burst were not being attenuated by pair production – in which electron-positron pairs are created during collisions between gamma and ultraviolet photons. This would suggest that the responsible accelerations were taking place at light-year distances from the central black hole. This far away, kinks emerge within the jet’s thin, relativistic plasma columns, introducing turbulence. In these conditions, magnetic reconnection can readily occur.
The duo tested these ideas by incorporating them into a model black hole jet. They found that through turbulence-driven reconnection, the jet’s magnetic field fragments to form smaller clumps of plasma. These interact with each other and grow within the reconnection region, eventually forming mini-jets within the larger jet, which dissipate their energy through smaller-scale bursts of gamma rays. If correct, this conclusion could explain the characteristic flickering observed by Fermi-LAT, and may ultimately improve astronomers’ understanding of the complex, often mysterious physics of black hole jets.
On 23 March 2020 UK prime minister Boris Johnson announced a lockdown to tackle the spread of coronavirus, following the example of other countries around the world that had chosen this strategy to halt the virus’s progression. This decision came days after Johnson’s government toyed with the idea of letting the virus spread and infect up to 70% of the population, in order to develop so-called “herd immunity”. The stark policy shift left people wondering what had changed.
To many, the models produced by the physicist-turned-epidemiologist Neil Ferguson and his group at Imperial College London were critical. They predicted that should no action be taken, the death toll in the UK could reach 500,000, and may exceed 2 million in the US. As well as providing a shocking reality check about the pandemic, the work highlighted an increasingly popular new tool that is profoundly changing medical research.
While in vitro and in vivo experiments have long been a staple of medical research, the rapid increase in computational power in recent decades has enabled the emergence of a new experimental field: computational (in silico) modelling. From surgery to drug design, these numerical models are not only used to describe physiological phenomena, but also to derive useful information and even drive clinical decisions.
“Modelling is about encapsulating our knowledge into a set of rules or equations. It is thus at the core of any science,” says Pablo Lamata from King’s College London, UK. “We are currently experiencing a ‘computational boost’ in our modelling capabilities.”
Twin hearts Pablo Lamata and his group at King’s College London create digital twins of a patient’s heart to infer properties that are not readily available to doctors. They use a modelling blend to teach computers how to learn patterns from data (statistical modelling) and how to make predictions based on knowledge of the workings of the cardiovascular system (mechanistic modelling). (Courtesy: Pablo Lamata)
The ever-increasing amount of available data – from wearable sensors to digital medical images – has also sped up the applications of modelling. Lamata’s group, for example, combines in silico heart models with medical images of the heart to create patient-specific numerical heart models – so-called digital twins. Such models could in future provide doctors with vital information regarding cardiac properties that are currently unavailable, such as heart stiffness. This is important because when the heart fills up with blood (during diastole), stiffness can prevent the ventricle from filling properly, a phenomenon associated with heart failure in about 50% of patients. These models could also provide new insights into the mechanisms leading to this outcome.
“We obviously cannot touch a beating heart to know the stiffness, but we can use these models governed by the rules and laws of the material properties to infer that important piece of diagnostic and prognostic information,” Lamata explains. “The stiffness of the heart becomes another key biomarker that will tell us how the health of the heart is coping with disease.”
Reducing uncertainty
Similar approaches are used in other medical fields. Paul Sweeney, a mathematician from the University of Cambridge in the UK, for example, uses in silico models to predict perfusion – the passage of fluids through the circulatory or lymphatic systems to an organ or tissue – and drug delivery across whole tumours at the scale of the smallest blood vessels (approximately 3 μm). “Our models allow us to understand how a tumour’s micro-architecture influences the distribution of fluid and mass through cancerous tissue, which is important for engineering new anti-cancer drugs, or optimizing strategies of current therapeutics,” Sweeney says. As with heart stiffness, such data are otherwise unavailable through conventional experiments in isolation, which means that introducing modelling can inform the development of cancer therapeutics.
But getting to the point where information can be derived from these models is only the last stage in a long and thorough development process. From defining the problem to selecting the modelling strategy to address it, each step requires crucial choices, which are reassessed later to ensure they reflect reality. “It is only through many iterations of searching for the agreement between model and data that the weak links are revealed and addressed,” notes Lamata. This is why, just as with weather forecasting or death-toll projections in the COVID-19 crisis, models are constantly updated as more data become available.
Sometimes this means revisiting the assumptions that models are based on: they are necessary at the beginning to facilitate the modeller’s task, but their impact on predictions should be carefully handled. “The best solution to deal with this is the use of sensitivity analysis [assessing a parameter’s influence on the model prediction by varying it while keeping all other parameters constant] – but the limit is always going to be in those aspects that your model can’t account for,” Lamata says. It is important to keep this in mind when considering what can be inferred from the model output and what cannot, while also leaving room for improvement.
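The one-at-a-time sensitivity analysis Lamata describes can be sketched in a few lines of code. The toy model and parameter values below are hypothetical placeholders, not any of the groups’ actual models:

```python
# Minimal sketch of one-at-a-time sensitivity analysis: perturb each parameter
# in turn while holding the others at baseline, and record the output change.

def model(stiffness, wall_thickness, pressure):
    """Toy stand-in for a physiological model output (e.g. a filling volume)."""
    return pressure * wall_thickness / stiffness

baseline = {"stiffness": 10.0, "wall_thickness": 1.2, "pressure": 8.0}

def sensitivity(param, rel_change=0.1):
    """Relative output change when one parameter is increased by rel_change."""
    ref = model(**baseline)
    perturbed = dict(baseline, **{param: baseline[param] * (1 + rel_change)})
    return (model(**perturbed) - ref) / ref

for p in baseline:
    print(f"{p}: {sensitivity(p):+.1%} output change for a +10% input change")
```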
Modelling blood The tumour models that Paul Sweeney builds at the University of Cambridge use well-established mathematical models, such as those for microcirculatory blood flow (image at top of article), in combination with new approaches for predicting fluid distribution through the interstitial tissue (above). Both models are parameterized and validated against biomedical imaging data. (Courtesy: Paul Sweeney)
To Sweeney, there is always scope for increasing the accuracy of models, whether by supplying additional experimental data or incorporating more models of other complex biological mechanisms. “A potential trade-off is always between the accuracy of the model and the amount of experimental data available,” he says. “In other words, will additional data make the model more accurate or cause overfitting to the empirical data?”
Indeed, this is the caveat that all models face: the need to balance accuracy against simplicity. If a model relies too heavily on the data that it was developed from, its predictions for other datasets might not be accurate – the problem of overfitting Sweeney refers to. Conversely, a model that is as basic as possible, relying little on data or patient characteristics, can also yield unreliable predictions, as the “one-size-fits-all” approach fails to account for a patient’s individualities.
The purpose of the model dictates where to put the emphasis, although simplicity is usually favoured. The focus hence shifts back to the importance of accurately defining the problem that the model is addressing in the first place. “You want the minimum model to correctly address the problem,” concludes Lamata.
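The accuracy-versus-simplicity trade-off can be seen in a toy example – a simple polynomial fit, not one of the models described above – where an overly complex model chases noise in the training data and generalizes poorly:

```python
# Minimal sketch of under- and overfitting with polynomials of increasing complexity.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)  # noisy "data"
x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)                                       # unseen ground truth

for degree in (1, 3, 10):
    coeffs = np.polyfit(x_train, y_train, degree)          # fit model of given complexity
    mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"degree {degree:2d}: mean squared error on unseen points = {mse:.3f}")
# Degree 1 underfits; degree 10 fits the noise and predicts poorly away from the
# training data - the overfitting problem discussed above.
```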
Designing new technologies
Comparing a model’s predictions with reality remains the quickest method of validation, although this can be achieved faster in some fields than in others. Take biomaterials, where in silico models help us understand how molecules behave and interact with their environment – behaviour that can quickly be replicated in vitro.
Over at the University of Nottingham in the UK, for example, bioengineer Alvaro Mata and his group use modelling as a stepping stone for developing innovative materials and therapies for tissue engineering and regenerative medicine. “Molecular dynamic simulations are key to elucidate mechanisms by which molecules assemble together,” explains Mata. “This allows us to translate those assemblies at the molecular level into fabrication platforms capable of engineering functional structures.”
His group recently used this approach to understand how graphene oxide can exploit the flexible regions of a protein to create a new bioink material for 3D-printing tissue-like vasculature structures. Through the models, the researchers learnt how to guide their assembly at various size scales, from the cellular level to the final complex structure. “Simulations can dramatically facilitate testing and optimization of materials, structures and processes, saving both time and money,” Mata says.
Functional boost By using molecular dynamics simulations of the interaction between elastin proteins (gold/green flexible filaments) and graphene oxide (rigid gold with red and white balls), Alvaro Mata and colleagues at the University of Nottingham can understand and optimize the underlying mechanism of the assembly of tubular structures. These studies considerably boost the chances that the functional structure engineered from this interaction is viable. (Courtesy: Alvaro Mata)
Compared with in vitro and in vivo experiments, in silico simulations have the advantage of being fast, cheap, safe, easy to implement and free of experimental errors. Consequently, they are becoming increasingly helpful in designing new technologies and strategies.
A prime example of this is cardiac resynchronization therapy (CRT) in patients with heart failure. This treatment involves placing two pacing leads controlled by a pacemaker into the patient’s heart to augment the electrical activation and synchronize the beating of the two ventricles. Traditionally, the leads’ location and timings for stimulation are derived from electrocardiograms (ECGs) and medical images, but 30% of patients do not see a clear benefit from this strategy. By producing computational heart models from the patient’s scans and simulating various pacing strategies on them, Steven Niederer’s group at King’s College London can identify the best area to electrically stimulate the heart and investigate the effects of changing the pacing.
Such models are particularly complex, requiring multi-scale modelling to link cellular dynamics, blood flow, electrophysiology and tissue deformations within a common anatomical heart geometry. There is still a long way to go before the technology can be adopted as a support tool for clinical decision-making, but studies in a small number of patients have shown that such models can perform patient-specific predictions about the acute haemodynamic CRT response. This further demonstrates that clinical intervention guided by in silico models is no longer just a pipe dream.
In fact, in 2015 Alberto Figueroa and his team at the University of Michigan in the US helped perform the first surgical intervention that used computerized blood flow simulation. The team’s open-source software, CRIMSON, uses MRI scans and haemodynamic variables (blood pressure and flow) to produce 3D computer models of a patient’s circulatory system. This can be used to simulate different surgical alternatives and determine which yields the best prognosis before heading to the operating theatre.
The CRIMSON software has already been used to plan several complex cardiac operations, such as “Fontan procedures”, which involve rewiring the pulmonary circulation in patients born with just one functioning ventricle. The venous return – the flow of blood back into the heart – is rerouted to bypass the heart and connected directly to the pulmonary arteries for transport to the lungs. Simulations help surgeons decide where to make the surgical connections so that blood flow is ideally balanced between the lungs.
In silico trials in reality
With the surge in applications of computational modelling and the demonstration of its clinical relevance, regulatory bodies are taking note and have already acknowledged the benefit of these models.
For example, in 2011 the US Food and Drug Administration (FDA) approved the first in silico type 1 diabetes model as a possible substitute for pre-clinical animal testing of new control strategies for the disease. A few years later, the FDA went further by approving FFRCT software developed by the US medical firm HeartFlow to measure coronary blockages non-invasively from CT scans. This was the first clinical technology based on subject-specific modelling to get the green light. The software has also received CE marking and regulatory approval in Japan.
In specific cases, such as the assessment of drug toxicity on the heart, the FDA is now even sparking new collaborations between academia and industry to rise to the challenge.
One success story comes from the University of Oxford in the UK, where Blanca Rodriguez’s group developed Virtual Assay – software that can run in silico drug trials in populations of human cardiac cell models. Designed to predict drug safety and efficacy, the software simulates the effects of drugs on the electrophysiology and calcium dynamics of human cardiomyocytes – cells that control the heart’s ability to contract. Specific heart rhythm patterns can therefore be inferred for each drug compound and dose simulated, with the objective of spotting any drug-induced arrhythmias. An in silico trial of 62 drugs, led by Rodriguez’s team in collaboration with Janssen Pharmaceutica, showed that the software was more accurate at predicting abnormal heart rhythms than animal trials (89% accuracy versus 75% in rabbits).
These results captured the attention of other pharmaceutical companies – a sector in which developing a new compound costs several billion dollars, and 20–50% of drug candidates are abandoned due to cardiovascular safety issues. Eight major pharmaceutical companies are now evaluating Virtual Assay in the early drug development process to assess arrhythmic risk. This strategy is also backed by animal protection groups, as in silico models for pharmaceutical R&D could reduce the need for animal use by a third.
This conjunction of interest from regulatory bodies, industry, clinics, academia and even animal-welfare groups has led to the establishment of networks and initiatives around the world to promote the development, validation and use of in silico medicine technologies.
The Virtual Physiological Human Institute led the way when it opened to members in 2011. As part of a two-year EU-funded project starting in 2013, it produced a roadmap for the introduction of in silico clinical trials. Lamata, Rodriguez and Niederer are part of a network of universities, industries and regulatory bodies working to bring personalized in silico cardiology to the clinic. Just as research is becoming more and more interdisciplinary, this combination of expertise is vital to steer the use of models in the right direction and facilitate their clinical translation.
“Oddly, one challenge with our models is knowing where to begin analysing their predictions as they produce a vast wealth of data. The interdisciplinary expertise on our team makes this process far easier by providing unique perspectives on how to tackle this challenge,” says Sweeney.
As Lamata puts it, it’s easy to be tempted to “think big” and include a lot of variables and complexity to have a huge model prediction power. “But it’s important to keep things as simple as possible for validation, which is an incremental process. Working with clinicians helps to keep the end-goal in sight.”
Although incorporating in silico models to complement traditional in vitro and in vivo experiments represents a new paradigm, stakeholders seem to be adapting well. Sweeney did not perceive much resistance from biologists and clinicians, who are more used to working with statistics and correlations drawn from large cohorts than with equation-based, patient-specific simulations. For Lamata, the challenge of demonstrating the accuracy of in silico predictions remains the same as for any scientific advance: the need for evidence. And this takes time to generate. “The main cultural shift we require is one towards open science, where we make our data and tools available for the fast generation of the required evidence,” he notes.
The momentum is there: industrialists, policy-makers and clinicians are on board; personalized medicine is gaining traction as computational power keeps increasing and initiatives flourish; evidence of computational models’ capacity to enhance diagnosis, prognosis and treatment is mounting. The COVID-19 pandemic has shown that models can even influence government decisions. It might just be a matter of time before in silico models in medicine become as ubiquitous as the computers they are run on.
Ultimately, if all models are limited by their assumptions, the possibilities that they offer are limitless. You just need to know exactly what you are looking for.
When I was in high school I loved physics and hoped to study theoretical physics in college. However, I didn’t do very well in the national college entrance examination and ended up choosing cell biology. I became very interested in the physical aspects of living cells and did a Master’s in biophysics at what is now Peking University Health Science Center. I learned to use various biophysics instruments and techniques to study cytoskeletons and cell membranes. I also started to realize that biophysical signals could be as important as biochemical signals in understanding cells, tissues and organs.
How did physics help your PhD in cell biology?
I did my PhD with Michael Sheetz at Duke University Medical Center in the US. This involved using laser optical tweezers to precisely measure the tension of cell membranes. We did this by attaching a latex bead to the cell’s surface and then pulling the bead away to draw out a membrane tube – which is very elastic – from the cell. We then measured the force needed to pull the tube and calculated the cell membrane tension from it. It was the first time that laser tweezers were used to study living cells and the work involved a lot of physics.
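For background, membrane-tether experiments of this kind are often analysed with a standard relation between the tether-holding force, the membrane’s bending rigidity and its tension – quoted here as a general formula rather than the specific analysis used in that work:

```latex
f_{0} \;=\; 2\pi\sqrt{2\,\kappa\,\sigma}
\quad\Longrightarrow\quad
\sigma \;=\; \frac{f_{0}^{\,2}}{8\pi^{2}\kappa},
% where f_0 is the measured tether force, \kappa the bending rigidity of the
% membrane and \sigma the membrane tension.
```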
What do you currently work on?
My current research focuses on tissue and organ regeneration, especially through the design of collagen scaffolds. Our lab recently completed a clinical trial with about 400 patients for a “smart bone formation” product and now we’re waiting for official clearance from the National Medical Products Administration. We also did a clinical study on intrauterine tissue regeneration with about 100 patients who had Asherman’s syndrome – a rare condition in which scar tissue forms in the uterus. More than 60 healthy babies have been born as a result of that project.
What are some of the key research topics carried out at the Institute of Genetics and Developmental Biology?
A significant part of the institute’s research efforts are dedicated to agriculture-based molecular breeding. My colleagues use genome sequencing and other modern biotechnologies to cultivate crops such as rice, wheat and corn with high-quality yields and better resistance to disease. Our institute has also partnered with local governments to set up several breeding centres across the country.
What are some of the main issues facing researchers at your institute?
One issue is that about 10% of our research groups face funding shortages, in particular those that are not directly engaged in the institute’s major research projects. Another is that my colleagues and I don’t have enough graduate students or postdocs to work with. The quota cap on student admissions in China has favoured universities over research institutes. For instance, I can only take on one doctoral student each year. This has become a bottleneck for the institute. The mechanism for industrial translation also needs to be improved.
How has COVID-19 affected your lab and are you now starting to reopen?
There certainly have been interruptions. My lab was closed for a couple of months during the worst of the pandemic. Some students are now back in the lab, especially those who are about to graduate. We’ve recovered about 60–70% of our research productivity. But we still can’t get all the lab supplies we need because the suppliers are manufacturing at a reduced capacity. I believe the impacts of COVID-19 in our field are temporary rather than long term. Students might face delays, but they shouldn’t have problems finding a job or further training opportunities as they move ahead.
You recently became the editor-in-chief of Biomedical Materials (BMM). Why did you decide to take on this role?
I’ve worked on biomedical materials for two decades and I’m aware of the importance of high-quality publications for our field. Early on, I was thinking of starting my own journal, but that could be a time-consuming process. So, when IOP Publishing, which publishes Physics World, contacted me about the BMM appointment, I immediately realized it’s a great opportunity and accepted.
What are the strengths and challenges of the journal?
This year is the 15th anniversary of BMM and the editorial board has worked hard to process manuscripts in an efficient manner. However, the journal faces increasing challenges from both the rapidly evolving discipline itself and from rival journals. We plan to distinguish BMM from similar publications by highlighting special topics such as the study of physical and mechanical mechanisms of biomedical materials in tissue regeneration. We will also expand the journal’s scope to include the latest advances in industry.
How are you going to make it more appealing to researchers in the field, especially those from China?
China has an expanding biomedical materials research community. To attract more authors and readers from China, we’re working on assembling a team to promote BMM through Chinese social-media platforms and via online talks.
How do you see academic publishing in China?
On the one hand, the potential market for academic publishing in China is huge. On the other, there still needs to be an improvement in the quantity and quality of our publications. While research papers are essential, review articles, outreach and educational content are equally important in the world of academic publishing. For example, reviews play a major role in interpreting and synthesizing new research findings for a broader audience and they are particularly helpful for graduate students and postdocs who are interested in the overall development of their field.
Phosphine, which is a gas produced exclusively by microbes on Earth and considered to be a strong signature of life on other worlds, has been detected in the clouds of Venus. The discovery is perhaps the strongest evidence yet of life beyond Earth.
The idea to search for phosphine as a biosignature on other worlds is a recent one, developed in 2019 by astronomers led by Clara Sousa-Silva at the Massachusetts Institute of Technology, and independently by Jane Greaves of Cardiff University, who led the new observations. Phosphine is a molecule derived from phosphorus, which is an essential building block of RNA and DNA. On Earth it is produced by anaerobic bacteria – microbes that do not require oxygen – which absorb phosphate minerals and combine them with hydrogen, releasing phosphine in the process. Importantly, phosphine is not produced by any known geological process, at least not on Earth.
“In terms of the most distinctive biomarkers [in the solar system] where we can’t find a geological explanation, this is very strong,” Greaves tells Physics World.
A very different planet
Sousa-Silva’s work, which addressed how astronomers might detect phosphine in the atmospheres of Earth-like exoplanets, concluded that the presence of phosphine would act as an iron-clad biomarker since no other process on Earth-like planets is known to produce it. Venus, however, is a different type of planet to Earth. Its surface swelters at an average temperature of 460 °C, and is crushed under an atmospheric pressure of 93 bar – compared with 1 bar on Earth. The planet’s dense atmosphere is almost entirely made of carbon dioxide, laced with clouds of sulphuric acid. It is possible that some unknown chemical reaction in these extreme conditions could be producing the phosphine, but one problem is the lack of hydrogen.
Phosphine is formed from a phosphorus atom bonded to three hydrogen atoms. In the outer solar system, Jupiter and Saturn are able to produce phosphine via a non-biological process. These planets are hydrogen rich, however, and with so much hydrogen available in the high temperatures and pressures deep within their interiors, it is a relatively straightforward process for them to produce phosphine that is then dredged into their upper atmosphere by convection currents.
Venus, on the other hand, has very little hydrogen, having lost it to space long ago, along with most of the planet’s water. Instead, Venus is carbon rich. Without free hydrogen, it is difficult to conceive of a non-biological process to create phosphine. Furthermore, even if some geological reaction were taking place, adding up all the possible sources – such as volcanoes and the existence of favourable minerals – would still fall some 10,000 times short of the observed phosphine abundance of 20 parts per billion; that is, such sources could account for only around two parts per trillion.
No known geological process
“That doesn’t mean that the biological origin is the correct idea,” says Greaves. “It just means that we can’t find a really viable geological process.”
There is also little previous evidence for phosphorus on Venus – the only other detection was made by the Soviet Union’s Vega 2 lander in 1985.
“This new detection of phosphine is significant because it suggests the widespread presence of phosphorus in Venus’ clouds,” says Sanjay Limaye, an atmospheric physicist from the University of Wisconsin–Madison who was not involved in the phosphine discovery. Limaye is a former chair of NASA’s Venus Exploration Analysis Group.
Scientists have speculated about the existence of microbial life in Venus’ clouds, linking it to unidentified ultraviolet-absorbing particles present within the planet’s atmosphere. These particles are currently being mapped by the Japanese Aerospace Exploration Agency’s Akatsuki orbiter.
Venusian habitable zone
Despite Venus’ generally hellish conditions, some regions of the planet are more clement than others. “The altitude that we probed is the top end of what is sometimes called the Venusian habitable zone,” says Greaves. This extends about 47–60 km above the surface, where temperatures range between 0 and 100 °C and atmospheric pressures average about 1 bar. However, the clouds also pose a hazard to life: it is not clear how microbes could survive in conditions that are 95% sulphuric acid.
More observations are required, says Limaye: “Confirmation of the presence of phosphine by other means is very much needed.”
It has been suggested that future missions to Venus could incorporate balloons or winged craft to explore this potentially habitable region of the atmosphere. In the meantime, NASA is considering two future missions to Venus: VERITAS, which would study the planet from orbit, observing primarily with a synthetic aperture radar; and DAVINCI+, a probe that would dive through Venus’ atmosphere.
“The radar should be able to provide some clues about the presence of liquid water on the surface in the past, while more crucially, the probe may be able to sample the cloud composition and search for the presence of phosphine,” says Limaye.
If the phosphine does prove to be biological in origin, it would mean that, surprisingly, Venus would be the first planet beyond Earth found to harbour life. Given the planet’s harsh conditions, it would throw the idea of the habitable zone wide open.
The first part of this webinar focuses on the theoretical aspects of small field dosimetry, while the second part – to be held on 12 November – presents best practice in small field dosimetry based on the TRS 483 protocol.
This meeting has applied to CAMPEP for approval of one MPCEC hour, and to EBAMP for approval for one CME credit.
The participants of this webinar, presented by Dr Lutz Müller and Dr Hui Khee Looe, will get more insight into the following topics:
What is special about small radiation fields?
Perturbation effects – the interaction between detector and small photon fields.
Strengths and limitations of specific detectors.
Correction factors in SF protocols.
Application of codes of practice for small field dosimetry, mainly IAEA/AAPM TRS 483.
Lutz Müller (left) holds a PhD in nuclear physics from the Technical University of Munich. After his PhD, he worked in experimental nuclear physics research at the National Nuclear Physics Institute in Padua, Italy. In 2000 he joined IBA Dosimetry and is currently Director of IBA’s International Competence Center (ICC). Hui Khee Looe (right) is Deputy Head of the Department of Medical Physics, University Clinic for Medical Radiation Physics and Radiation Protection, Pius Hospital, Oldenburg, Germany, and holds a PhD from Oldenburg University. He leads a research group on Computational Methods in Dosimetry and has a strong interest in small field dosimetry.
Researchers in Switzerland and Italy have developed a method for generating currents of electrons with a known quantum spin without the need for large external magnetic fields. This could enable the development of devices that are compatible with superconducting electronic elements, paving the way for the next generation of highly efficient electronics.
Following the discovery of giant magnetoresistance as well as the observation of spin injection and detection in metals in the late 1980s, a field of research known as “spintronics” emerged dedicated to creating practical devices that exploit electron spin. Semiconductor-based spintronics systems have garnered particular research interest because semiconductors can be integrated within modern-day electronics, thus improving the efficiency and storage capacity of devices. But in order to make useful spintronics devices, researchers need to be able to control and detect the spin state of electrons with a high level of accuracy.
Controlling electron spin
One method for controlling spin currents is a device known as a “spin valve”, which usually consists of a non-magnetic material sandwiched between ferromagnetic materials. This configuration allows electrons with one spin to propagate through the device, while those with the opposite spin are reflected or scattered away, because spin propagation depends on the alignment of the magnetic moments in the ferromagnets. The result is a “spin-polarized current” – a flow of electrons that, in theory, are all in the same spin state (all spin-up or all spin-down).
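The degree of spin polarization of such a current is conventionally quantified as follows – a standard definition, and the sense in which the ±80% figure quoted below should be read:

```latex
P \;=\; \frac{I_{\uparrow} - I_{\downarrow}}{I_{\uparrow} + I_{\downarrow}},
% where I_up and I_down are the currents carried by spin-up and spin-down
% electrons; P = ±1 corresponds to a fully polarized, single-spin current.
```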
Spin-valve trio: Andreas Baumgartner (left), Arunav Bordoloi (centre) and Christian Schönenberger. (Courtesy: University of Basel Department of Physics)
However, these types of spin valve are either not very efficient or require very large polarizing magnetic fields, both of which impose severe limitations on experiments – for example, those involving material systems that are sensitive to magnetic fields. To overcome this and achieve a highly spin-polarized current, researchers are looking for alternative ways to create spin valves using semiconductor materials.
Tiny magnetic fields
Now, physicists at the University of Basel along with collaborators at the National Enterprise for nanoScience and nanoTechnology have created a device that can control electron spin currents without the need for large external magnetic fields and with a high efficiency. In a recent paper published in Communications Physics, they describe how a pair of coupled quantum dots formed in an indium arsenide (InAs) nanowire with nearby individual nanomagnets can be used as a spin valve with an electrically tuneable spin polarization of up to ±80%.
The team created the quantum dots by electrically defining two regions in which electrons in the nanowire are confined in all three spatial directions. They then employed ferromagnetic side gates to generate small local magnetic fields across each dot. This gate-based configuration means that only very small magnetic fields, of up to 40 mT, are needed to obtain a very high efficiency.
The device operates by generating a spin-polarized tunnel current using the first dot, which is then detected by the second dot. By magnetizing the ferromagnetic split gates in parallel or anti-parallel, the researchers can decide whether electrons of a certain spin can pass through each part of the device. The probability that an electron with that spin tunnels through both dots can be controlled using the ferromagnetic side gates, allowing a spin-polarized current to flow when they are aligned but, ideally, no current at all when they are anti-parallel. By experimenting with different applied fields and gate voltages, the researchers were able to “tune” the device, achieving a high spin-polarization efficiency with the potential to reach the theoretical limit of 100%.
New quantum technologies
This type of spin valve could be very useful in applications for which magnetic fields can have a drastic impact on material characteristics – such as suppressing superconductivity or altering electronic band structures. The ability to manipulate electron spin with such small magnetic fields may allow researchers to develop new quantum technologies that exploit spin-based quantum phenomena such as entanglement, to help confirm the existence of Majorana fermions in topological superconductors, and to facilitate the investigation of new, unexplored physics.
As a physics teacher, I am used to standing in front of students in a classroom, teaching and engaging them face-to-face. Now that the pandemic has forced schools and universities to reconsider face-to-face classes, two 21st-century concepts have become buzzwords: work from home (WFH) and online distance learning (ODL).
As the popularity of WFH and ODL increased, the popularity of Zoom, the virtual meeting app, increased as well. Suddenly, teachers all over the world found themselves presenting lessons in front of a camera hooked to a computer, “facing” students sitting behind screens many kilometres away.
This method of communication projects the complex physical self into a composed, quiet, virtual identity. For a full week in April, I attended daily four-hour Zoom meetings. During this time, my default device setting was a muted microphone. I turned off my camera for two reasons: to trade upload speed for download speed, and to save every precious bit of my tight data allowance. I was there to listen, so I spoke only when acknowledged, my physical self muted and turned into a meticulously handpicked profile picture. From a fun-loving jokester, I was turning into a faceless, lifeless and soulless being. In short, I was becoming a Zoombie.
Birth of the Zoombies
It is a cliché to say that technology gifts us connectivity, especially at a time when physical connections are impossible. But the trade-off is a lack of physical interaction, and even a reduction in the feeling of being alive. All I saw on my screen or heard from my speakers was a stream of ones and zeroes; all sights and sounds were converted to electrical currents. I sat in front of the laptop with my ears, eyes and brain glued to the device – not with eager focus, but with the sort of lazy attention manifested by hunched shoulders, drooped eyes, and uninspired fingers.
At first, I thought I might be the only one who felt this way. But as my Zoom meetings increased in number and frequency, I noticed that most participants were doing the same thing. Microphones muted, cameras off, sporadic chat messages: lifeless personae trudging in and out of meetings. Soon, I realized that students may be having the same experience with ODL delivered in synchronous sessions. Muted, unresponsive, answering only when asked, detached and bereft of life, the numbers of Zoombies were increasing at a rate that would put 10 seasons of The Walking Dead to shame. And I would not have any of it.
The fight to stay alive
As a classroom teacher I took it upon myself to keep my classes alive even if lessons had to be conducted via Zoom. By drawing on my more than a decade of experience as an educator, I knew I could find ways to bring my virtual classroom to life.
The first conclusion I drew is that synchronous class meetings must be as succinct and targeted as possible, to minimize time spent in the highly regulated environment of a Zoom meeting. This mode should be reserved for important and urgent matters, such as setting learning targets and deliverables for the week, explaining the most challenging concepts, deciding on a particular class issue, and – I believe this is of the highest importance – providing life and socialization to the virtual classroom.
My second conclusion is that there should be a checking-in routine. Letting students interact with each other, even for a brief period, can inspire attention and engagement. Assigning different hosts and “passing” the microphone whenever possible – all while encouraging candidness and candour in front of the camera, just as you would in a classroom – can also improve interaction.
Third, teachers should seek ways to elicit discussion and interaction among students. This can include integrating live polls or quizzes, asking thought-provoking questions, or demonstrating discrepant events (a favourite of science teachers) in front of the camera. In short, we must inject life into our virtual classrooms. The more the students feel alive, the less their chances of turning into Zoombies.
Fourth, as teachers, we should remember that some of our students may have limited connectivity in terms of Internet speeds and data caps. We must wear the hat of a connoisseur (choosing the best learning material), and a museum curator (laying out the virtual class like a museum with themed galleries) when designing our online courses. We should make each minute count and every moment meaningful. This not only enables students to use their time and data allowances efficiently, it also encourages their agency as learners. Let them explore the virtual classroom on their own in an asynchronous mode, moving and learning at their own pace rather than being herded or forced to follow others.
Fifth, and last, it is lovely to bid students goodbye through a synchronous interaction before they exit at the end of a week or a learning stage. Once again, this is a way to inject life and make them look forward to the next learning session.
World War Z(oom)
As we try to keep our students from becoming Zoombies, we must also consider the pandemic’s side effect: a general feeling of anxiety. Some of our students might be anxious about missing school and friends, about studying at home without a conducive environment or schedule, or about family members whose jobs or health have been affected. Leaving our students in such a worried state adds momentum to their Zoombie transformation.
The best weapon we can use in this war against Zoombies is to encourage positive student–student and teacher–student relationships in our virtual classes. For example, how can an online course evoke a genuine interest in everyone’s wellbeing and safety during the pandemic? Is it possible to elicit the contexts in which students are learning, and then use those contexts in designing classroom activities?
These actions may require extra effort, but if we wish to design our online classes in a truly human-centred way, such questions must be front and centre in our considerations. Instead of students feeling that they are listening to pre-recorded lectures, reading plain texts, or taking tests that are automatically checked and marked by an algorithm, a virtual classroom with a “beating heart” makes a potent weapon against Zoombification.
Finally, we need to remember that we are not teaching remotely in normal circumstances, but during a health emergency of global scale. When we accept this truth, it becomes easier to keep students feeling alive in our classes. Because if Zoom turns our students into lifeless and unresponsive beings, that would be scarier than all the world’s zombie movies combined.
A longer version of this article appears as a Letter to the Editor in the journal Physics Education, which – like Physics World – is published by IOP Publishing.