N95 masks are much more than a simple screen filter and use lots of interesting physics – including van der Waals forces – to stop you from breathing in nasty virus particles. The above video does a superb job of explaining how the masks work and also touches on efforts to make them reusable.
Some corals are very sensitive to light, which makes it tricky to obtain high-quality images of the organisms. Now, Philippe Laissue at the UK’s University of Essex and the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts, and colleagues have created a custom light-sheet microscope that allows the gentle and non-invasive observation of corals and their polyps over eight hours and at high resolution.
Coral see: image of a coral polyp (in cyan from reflected light) with algae (red from chlorophyll fluorescence). (Courtesy: Loretta Roberson)
They used their microscope to obtain this stunning image (left), which shows the re-infection of an Astrangia coral polyp (in cyan from reflected light) with algae (red from chlorophyll fluorescence).
PIP-II at Fermilab will be the first accelerator built in the US with significant contributions from other countries – with India, Italy, the UK, France and Poland building major components.
Described as the new heart of Fermilab, PIP-II will be an 800 MeV superconducting linear accelerator that delivers a high-intensity proton beam. It can operate in both steady-state and pulsed mode and some of its protons will be used to create an intense beam of neutrinos that will travel 1300 km underground to a detector in South Dakota. The video below describes how PIP-II will accelerate protons.
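As a rough relativistic estimate (a back-of-the-envelope figure, not one from Fermilab), an 800 MeV proton, whose rest energy is about 938 MeV, leaves the linac travelling at roughly 84% of the speed of light:

\[ \gamma = 1 + \frac{T}{m_p c^2} \approx 1 + \frac{800}{938} \approx 1.85, \qquad \beta = \sqrt{1 - 1/\gamma^2} \approx 0.84 \]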
Using sunlight to decompose water could be a clean and renewable way to produce hydrogen fuel, but the photocatalysts traditionally used to promote the process are relatively inefficient. Researchers in Japan have now developed a model system based on strontium titanate that has an external quantum efficiency of 96%, proving that almost perfectly efficient catalysts are possible.
Because the combustion of hydrogen yields only pure water as a waste product, it is often touted as an environmentally friendly alternative to fossil fuels. The caveat is that to be truly “green”, the hydrogen itself needs to be produced using renewable energy. Solar water splitting, in which sunlight is directed onto an aqueous suspension of light-activated semiconducting particles, is one way of cleanly producing hydrogen. When these particles absorb solar photons, the resulting electron-hole pairs catalyze the breakdown of water, liberating the hydrogen.
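For reference, the chemistry being driven is the standard splitting of water, with the photogenerated electrons producing hydrogen and the holes producing oxygen (these are general textbook half-reactions, not equations specific to the new study):

\[ 2\mathrm{H^+} + 2e^- \rightarrow \mathrm{H_2} \qquad \text{(reduction, uses electrons)} \]
\[ 2\mathrm{H_2O} + 4h^+ \rightarrow \mathrm{O_2} + 4\mathrm{H^+} \qquad \text{(oxidation, uses holes)} \]
\[ \text{overall: } 2\mathrm{H_2O} \rightarrow 2\mathrm{H_2} + \mathrm{O_2} \]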
Several roles
The drawback of this method is that the catalytic process is highly complex, requiring the semiconductor particles to play several roles at once. First, they must absorb light in the solar spectrum range, which means they need relatively narrow bandgaps so that they can capture photons around the 500 nm peak of the Sun’s emission. Second, they need to generate and then separate electron-hole pairs. Third, they must allow these electrons and holes to travel to the particle-water interface, where they catalyse the production of hydrogen (a process that requires electrons) and oxygen (a process that requires holes) from water. Last, but not least, they need to minimize the unwanted side processes (which can lower the overall efficiency of the system) that occur at each step along the way.
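To put a number on that first requirement (a back-of-the-envelope figure rather than one taken from the paper), the photon energy at the 500 nm solar peak is

\[ E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{500\ \mathrm{nm}} \approx 2.5\ \mathrm{eV}, \]

so a particle needs a bandgap of roughly 2.5 eV or less to harvest the most intense part of the solar spectrum.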
That is a long list, and although researchers have long been searching for efficient photocatalytic materials, typical photocatalysts have an external quantum efficiency (EQE) – that is, the fraction of photons impinging on the system that end up being used to produce hydrogen – of less than 10%.
Strategies for reducing loss mechanisms
In their work, a team of researchers led by Kazunari Domen of Shinshu University in Nagano and the University of Tokyo focused on strontium titanate (SrTiO3), a photocatalytic water-splitter that was discovered in the 1970s. Although SrTiO3 is impractical for making real-world photocatalysts (it produces electron-hole pairs by absorbing near-ultraviolet light rather than visible light), the researchers argue that it is nevertheless a good model system because the mechanisms responsible for its efficiency losses are well understood.
Domen and colleagues studied several ways of reducing loss mechanisms in SrTiO3. The first involved suppressing charge carrier recombination, which occurs when electrons and holes recombine before they can take part in the water-splitting reaction. Since defects in the crystal lattice act as potential recombination hubs, the researchers used a flux treatment to improve the crystallinity of the photocatalyst particles, thereby reducing the number of lattice defects. They then reduced the number of chemical defects in the lattice by aluminium doping.
The team’s second strategy was to further suppress charge recombination by taking advantage of the fact that electrons and holes in SrTiO3 crystals collect at different crystal facets. They did this by selectively depositing specific co-catalysts on the different facets to enhance hydrogen production at the electron-collecting facets and oxygen production at the hole-collecting ones. Although this approach is not new, and was developed and refined by other research groups, Domen tells Physics World that in the present work his team was able to demonstrate the approach’s effectiveness “more clearly than any former study”.
Finally, the researchers prevented an unwanted side reaction (the oxygen-reduction reaction) by encasing the rhodium co-catalysts for the hydrogen-producing reaction in a chromium-based protective shell.
Near-unity internal quantum efficiency
By combining these three strategies, the team demonstrated an EQE of up to 96% for their material when it was irradiated with light in the 350-360 nm range. This translates into an internal quantum efficiency (IQE), which is the fraction of absorbed photons that can be used to produce hydrogen, of near unity, which implies that the photocatalyst is almost perfect.
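In photocatalytic water splitting the EQE is usually evaluated by counting hydrogen molecules, each of which requires two electrons, and the two efficiencies are linked by the fraction of incident photons the particles actually absorb. A general way of writing this (not a formula quoted from the paper) is

\[ \mathrm{EQE} = \frac{2\,n(\mathrm{H_2})}{n_{\mathrm{photons,\,incident}}}, \qquad \mathrm{IQE} = \frac{\mathrm{EQE}}{A}, \]

where A ≤ 1 is the absorptance. An EQE of 96% therefore implies an IQE of at least 96%, and closer to unity once the small fraction of unabsorbed photons is accounted for.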
Domen and colleagues hope their strategies to improve the efficiency of SrTiO3 will also work for photocatalysts driven by visible light. They have published their results in Nature and Simone Pokrant of the University of Salzburg in Austria, who was not involved in this work, details their findings and their implications in a related Nature News and Views article.
From virus spikes to narwhal tusks, the stingers of many living organisms have the same basic mechanical design. Now a team of physicists led by Kaare Jensen has studied the mechanical properties of more than 200 natural and manmade stingers to discover why.
The researchers at the Technical University of Denmark found a clear relationship between the structural properties of stingers large and small – thereby solving a long-standing evolutionary mystery. As a bonus, their work could lead to artificial structures that better mimic the desirable properties of natural stingers.
A staggering variety of living organisms come equipped with pointed outgrowths for purposes ranging from catching prey to combating rivals. These stingers range in size from tens of nanometres to several metres, yet are remarkably similar in terms of their design and mechanical properties – even in very different organisms. This similarity also extends to artificial structures such as nails and pointed weapons.
Resistance to elastic deformation
To understand this ubiquity of design, Jensen’s team experimented with artificial stingers made from the polymer polydimethylsiloxane and compared their results with previous studies of natural stingers. For simplicity, the team define a stinger as being straight and rigid with a long, slender tapered shape and a roughly circular cross-section. Their definition also includes strong resistance to elastic deformation.
Within these constraints, they constructed a database of stingers from over 200 objects, including viruses, algae, and larger vertebrates and invertebrates: both marine and terrestrial. They also included manmade objects such as nails used in construction, hypodermic needles, and spears from ancient warfare.
The experiments revealed how stingers buckled and broke when subjected to a load. When combined with the results of previous studies described in the scientific literature, these data revealed an optimal relationship between the diameters, lengths, stiffnesses, and friction forces per unit area of different stingers.
A crucial insight revealed by the study is that stinger structures exist at the mechanical limits imposed by friction, elastic stability, and costs incurred by maintaining tissue. This constraint ultimately explains the universality found in stinger structures: no matter their size, living organisms evolve until their outgrowths reach these limits, causing their stingers to converge at the same shape.
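To give a flavour of the elastic-stability limit involved (a textbook estimate, not the team’s full analysis), a slender rod of circular cross-section with diameter d, length L and Young’s modulus E buckles when the axial load exceeds the Euler critical force

\[ F_{\mathrm{c}} = \frac{\pi^2 E I}{(KL)^2}, \qquad I = \frac{\pi d^4}{64}, \]

where the constant K depends on how the ends are constrained. Because the force needed to push a stinger in against friction grows with its contact area, a stinger of a given diameter has a maximum useful length beyond which it buckles before it can penetrate; it is this kind of trade-off between friction, stiffness and the cost of building tissue that sets the limits identified by Jensen’s team.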
Having explained this long-standing mystery, Jensen’s team will now explore how their findings could apply to biomechanics and the evolution of animal weapons, and potentially lead to new medical applications. Elsewhere, they could inform engineers as to how manmade structures which mimic natural stingers could be precisely manufactured on different scales to improve performance and reduce material costs.
A bipartisan group of US senators and representatives has introduced legislation in Congress that would significantly change the operation of the National Science Foundation (NSF). Proponents of the bill say that the proposal aims “to solidify the United States’ leadership in scientific and technological innovation through increased investments in the discovery, creation, and commercialization of technology fields of the future”. To do so, the so-called Endless Frontier Act would expand the NSF’s remit, rename the organization and provide more than $100bn in support. The proposal has gained approval from many, but some have objected that it may undercut the NSF’s main objective, which is to fund basic scientific research.
Those behind the bill – four prominent US congresspeople – say that its introduction stems from the perception that international competitors, and particularly China, threaten to overtake the US technologically. “To win the 21st century, we need to invest in technologies of the future,” says Ro Khanna, a Democratic congressperson from California. “That means increasing public funding into those sectors of our economy that will drive innovation and create new jobs.”
Chuck Schumer, a New Yorker who leads the Democratic minority in the Senate, says that the US “cannot afford” to continue to underinvest in science while still “lead[ing] the world” in advanced research. That view is backed by Republican senator Todd Young of Indiana. “By virtue of being the first to emerge on the other side of this pandemic, the Chinese Communist Party is working hard to use the crisis to its advantage by extending influence over the global economy,” he claims. The new act, adds Republican representative Mike Gallagher of Wisconsin, who is the fourth member of the group introducing the legislation, “is a down payment for future generations of American technological leadership”.
The group announced the bill shortly after the 70th birthday of the NSF on 10 May. The legislation is named after a report – Science: the Endless Frontier – by Vannevar Bush, who was at the time director of the US government’s Office of Scientific Research and Development. That report, published 75 years ago on 5 July, laid the foundations for the US’s postwar boom in science and technology.
Ringing the changes
The changes envisioned by the group start with the name. The NSF would become the National Science and Technology Foundation (NSTF), with the additional “T” reflected in a new technology directorate – with its own deputy director – that has “flexible personnel, programme management, and awarding authorities”. The new directorate would fund research in 10 specific areas, including: artificial intelligence and machine learning; high-performance computing, semiconductors and advanced computer hardware; quantum computing and information systems; robotics, automation and advanced manufacturing; and advanced energy technology.
The transformation, however, would not come cheap, costing $100bn over five years. That amount would finance a series of authorized activities, including increased research spending at universities, which could form consortia with private industry to create focused research centres and develop other ways to advance new technologies. It would also fund programmes to facilitate and accelerate the transfer of new technologies from the laboratory to the market, as well as increased spending on research collaborations with US partners. The bill additionally authorizes an extra $10bn over five years to enable the Department of Commerce – another science-related government agency – to designate at least 10 regional hubs across the country, which would act as global centres for the research, development and manufacturing of key technologies.
Leaders of research institutions have shown their enthusiasm for the bill. “To maintain global competitiveness and nurture future job creation, our country must prioritize research that will be fundamental to innovation and discovery,” says New York University president Andrew Hamilton.
Rafael Reif, president of the Massachusetts Institute of Technology, agrees. “Supporting fundamental research with an eye to real-world challenges is the kind of thinking that drove the Defense Advanced Research Projects Agency to develop what became the Internet,” he wrote in The Hill. “Such use-inspired basic research, funded by NSF…is what’s needed to retain US leadership in both science and technology, to keep us prosperous and secure.”
Yet the proposal has drawn some criticism. Former NSF director Arden Bement told Science of his concern that the bill could indicate to Congress – which appropriates agencies’ funds – that investments in the bill’s innovative technologies override the importance of the NSF’s core mission of funding fundamental, curiosity-driven research. But Bement’s successor France Córdova, who completed her six-year term as NSF director in March, argues that current-day science involves more seamless integration between fundamental and applied research.
The relevant committees of the House of Representatives and the Senate have yet to schedule hearings on the bill. Given the impact of the coronavirus pandemic and the Black Lives Matter protests, as well as the forthcoming presidential election in November, an early decision on the legislation looks unlikely. Nevertheless, the bipartisan group promoting the legislation has given legislators and the US scientific community an indication of a new approach to the relationship between government-funded research and its application.
An improved data-mining technique combines information from multiple imaging modalities and other clinical parameters to predict the risk of cancer metastasis. In the new approach, imaging data from cancer patients are fed into a deep neural network whose output is processed by three different classification algorithms. A novel way of fusing the results from the three classifiers then yields a prediction of whether, by the time of a follow-up consultation months or years later, the cancers will have spread. The US-based researchers who developed the method demonstrated its effectiveness using diagnostic and treatment-planning images acquired for 188 patients with head-and-neck cancer.
As with cancers elsewhere in the body, early-stage cancers of the head and neck are treated using radiotherapy, with increasing success. When treatment fails, it is often down to the growth of new tumours far from the site of the initial disease. Predicting which patients are most likely to develop distant metastasis (DM) is vital, so that low-risk patients can be spared the severe side effects that accompany the systemic treatments used to control cancer proliferation.
One method of categorizing patients in this way is by extracting DM risk indicators from large imaging datasets — an approach known as “radiomics.” But while this technique has been shown to make good use of quantitative features found in single-modality datasets (see CT-based radiomics reveals prostate cancer risk), multi-modality datasets have not yet been used to their best advantage, due to the relatively simple way that disparate features from each image type are combined.
The team – led by Zhiguo Zhou of the University of Central Missouri and Jing Wang of UT Southwestern Medical Center – compiled PET and CT images of 188 head-and-neck cancer patients who had follow-up consultations between six and 112 months after scanning. The images, which were acquired at various institutions, had already been studied by physicians, and 257 features – such as textural and geometrical characteristics – had been extracted for each patient. Also included in the dataset were other clinical parameters such as patient age and gender, and the extent to which the disease had already progressed at the time of imaging.
Zhou and colleagues fed a subset of these data points into a deep neural network, which fused them into a single feature set. They then used this combined feature set to train a predictive model that could be tuned to optimize both sensitivity (the likelihood of correctly predicting that a patient will develop DM) and specificity (the likelihood of correctly predicting that a patient will not) simultaneously, depending on clinical need. Usually, the outputs of such a model would be sorted by a single classification algorithm to predict the outcome: DM or no DM. The researchers note that combining three different classification algorithms improves the reliability of their approach.
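The published fusion rule itself is not reproduced here, but the general idea of pooling the probability outputs of three different classifiers can be sketched in a few lines of Python using scikit-learn. Everything in this snippet (the random stand-in data, the particular choice of classifiers and the simple averaging rule) is an illustrative assumption rather than the authors’ actual configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Random stand-in for the fused radiomics feature set:
# 188 patients, 50 features, binary outcome (1 = distant metastasis).
rng = np.random.default_rng(0)
X = rng.normal(size=(188, 50))
y = rng.integers(0, 2, size=188)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Three different classification algorithms, echoing the idea of
# combining classifiers rather than relying on a single one.
classifiers = [
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    make_pipeline(StandardScaler(), SVC(probability=True)),
    RandomForestClassifier(n_estimators=200, random_state=0),
]

# Fuse by averaging the predicted DM probabilities. This is the
# simplest possible rule; the paper describes its own, more
# sophisticated way of combining the three outputs.
probabilities = np.mean(
    [clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
     for clf in classifiers],
    axis=0)

print("AUC of the fused prediction:", roc_auc_score(y_test, probabilities))
```

A threshold applied to the fused probability then yields the DM/no-DM call, and moving that threshold trades sensitivity against specificity in the way described above.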
When tested against a separate subset of patient data, the M-radiomics approach outperformed versions of the technique that lacked either the deep neural network, the sensitivity–specificity optimization or the classifier-combination step. The researchers think that the method could be improved further by including image features outside of the physician-delineated tumour boundaries, and by developing a standardized, automatic feature-extraction procedure to minimize variability between images captured at different institutions.
Zhiguo Zhou (left) and Jing Wang. (Courtesy: University of Central Missouri/UT Southwestern Medical Center)
Zhou and colleagues plan to validate their method with a multi-institutional prospective study – in contrast to the retrospective dataset that they used for their demonstration – which should show its value for aiding clinical decision making in similar high-risk head-and-neck cancer patients.
“Once validated, we hope clinical adoption of the model can be realized in two to three years,” says Zhou’s co-author, Jing Wang, at UT Southwestern Medical Center. As M-radiomics is a generalized framework, the approach could also be extended to predict treatment outcomes for primary cancers in other anatomical sites.
In this episode of the Physics World Weekly podcast the philosopher of science Bob Crease chats about how physicists react when they have discovered something new – the topic of his latest column in Physics World: “The feelings you get when you discover something new”.
Murder and the interpretations of quantum mechanics feature prominently in the plot of the television series Devs, which is set on the campus of a fictitious quantum-computing company in Silicon Valley. Science writer Phil Ball joins Physics World editors for a lively discussion about how quantum physics is portrayed in Devs and whether it works as a physics thriller.
One way of treating people with COVID-19 is to prevent the virus from attacking healthy cells. We chat about how researchers in the US are developing nanosponges that do this by soaking up virus particles.
There have been enormous improvements in MRI over the last 20 years, and MR-only workflows and the invention of MR-guided linacs have brought new perspectives to radiation therapy. Are lasers still important for aligning patients in MRI? And what different workflows are currently practised in RT?
(Courtesy: Royal Philips)
The webinar presented by Raphael Schmidt and Michael Uhr will help the audience to:
Learn more about workflows in MR
Get to know the relevance of lasers for patient alignment
Get an overview on different solutions for MRI and MR-linac
Raphael Schmidt is responsible for the product management of laser systems for CT and MRI from LAP. During his studies at the Karlsruhe Institute of Technology (KIT) he gained broad experience analysing different workflows in radiation therapy, and his final thesis dealt with improving RT workflows through new information and assistance systems. Schmidt holds a degree in industrial engineering and management.
Michael Uhr is the product manager responsible for the APOLLO, APOLLO MR3T and ASTOR room lasers from LAP. Before switching to product management, he gained extensive experience in LAP service over many years at customer sites worldwide.
The first tetraquark made up entirely of charm quarks and antiquarks may have been spotted by physicists working on the LHCb experiment at the Large Hadron Collider (LHC) at CERN. The exotic hadron was discovered as it decayed into two J/ψ mesons, each of which is made from a charm quark and a charm antiquark. The particle appears to be the first known tetraquark composed entirely of “heavy quarks” – the charm and beauty quarks (but not the top quark, which is the heaviest quark but does not form hadrons).
“Particles made up of four quarks are already exotic, and the one we have just discovered is the first to be made up of four heavy quarks of the same type, specifically two charm quarks and two charm antiquarks,” explains Giovanni Passaleva, who is just stepping down as spokesperson for LHCb. “Up until now, the LHCb and other experiments had only observed tetraquarks with two heavy quarks at most and none with more than two quarks of the same type.”
The new tetraquark is dubbed X(6900), with the number referring to its mass of 6900 MeV/c² (6.9 GeV/c²). The X denotes the fact that LHCb physicists are not yet certain about key properties of the particle, including its spin, parity and quark content.
Hadrons are made of two or more bound quarks or antiquarks. Mesons comprise a quark and antiquark, whereas baryons such as protons and neutrons comprise three quarks. However, nature does not stop at three quarks and several tetraquarks (two quarks and two antiquarks) and pentaquarks (four quarks and an antiquark) have been discovered.
Predicted mass
Evidence for the X(6900) tetraquark comes as a bump in the mass distribution of pairs of J/ψ particles produced by proton-proton collisions at the LHC. The bump has a statistical significance of more than 5σ, which is considered a discovery in particle physics. It is centred at 6.9 GeV/c², well within the 5.8–7.4 GeV/c² mass range predicted for a tetraquark comprising two charm quarks and two charm antiquarks.
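To put those numbers in context (a back-of-the-envelope comparison rather than part of the LHCb analysis), each J/ψ has a mass of about 3.097 GeV/c², so the lightest possible J/ψ pair weighs in at

\[ 2\,m_{J/\psi} \approx 6.19\ \mathrm{GeV}/c^2, \]

meaning the X(6900) bump sits roughly 0.7 GeV/c² above the di-J/ψ threshold. A 5σ excess, meanwhile, corresponds to a chance of roughly three in ten million that a background fluctuation alone would produce a signal at least this large.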
An important question surrounding tetraquarks and pentaquarks is the nature of their internal structures. This is defined by strong-force interactions between quarks, which are extremely difficult to calculate. In a tetraquark, for example, the quarks and antiquarks could all be tightly bound together – or they could be arranged as two quark–antiquark pairs loosely bound in a molecule-like structure. Or indeed, a tetraquark could have a configuration somewhere between these two extremes.
“This new result provides further crucial evidence on the behaviour of quarks and how they interact through the strong force,” says Tim Gershon of the University of Warwick, who is spokesperson for LHCb-UK. “It will undoubtedly be of great interest to theorists working to better understand exotic hadrons,” he adds. “Data that we will collect with an upgraded [LHCb] detector in the coming years will allow us to widen the search for further such particles, and may resolve once and for all the debate over their substructure.”
Frazer-Nash Manufacturing is a company that’s always been intent on playing to its strengths. In this way, the specialist UK-based workshop has carved out a niche by providing design, precision engineering and low-volume manufacturing services to a range of international customers – most notably in the food processing and space industries. Now, diversification is on the agenda and Frazer-Nash is eyeing long-term growth opportunities as a supplier of custom engineering and manufacturing services to the scientific research community – whether that’s “big science” facilities, academic research groups or technology start-up ventures.
Put simply, Frazer-Nash is putting itself out there, supporting researchers in the physical sciences with custom parts requirements that cannot be sourced off-the-shelf or from a standard product range. The Hampshire-based outfit offers a broad portfolio of precision manufacturing capabilities – traditional milling, turning, grinding as well as wire electrical-discharge machining (EDM), surface treatments and 3D printing of custom metal parts – and, as such, is able to deliver one-off projects, prototype development and small-volume production runs.
Paul Mortlock: “The whole ethos here is based on long-term investment and year-on-year improvement.”
“We specialize in low-volume manufacturing for customers with high-quality expectations,” explains Paul Mortlock, managing director of Frazer-Nash. Whether it’s a complex metal part or something more straightforward, the formula remains the same: a rigorous approach to materials traceability (from raw material to finished part), work-in-progress product inspection, and delivery versus customer specifications and tolerances.
“We handle everything in-house,” Mortlock adds, “and that gives us 100% control over our production process and the quality of our final output.” Underpinning that operational model, the company is fully certified to the ISO9001:2015 and AS9100 Rev D quality-assurance standards as well as the ISO14001 standard for environmental management.
Pivoting for growth
Beyond the day-to-day operation, the strategic priorities are clear. Frazer-Nash’s manufacturing capabilities are characterized by a relentless focus on continuous improvement that spans materials quality, digital inventory management and ongoing investment in new machines, control software and staff training. “The whole ethos here is based on long-term capital investment and year-on-year improvement across our processes, our people and our workflows,” says Mortlock.
That mindset is mandatory given the exacting requirements of Frazer-Nash customers. A case in point is the food processing industry, for which Frazer-Nash produces a range of proprietary equipment – extruder heads, cutting systems, filling systems and the like. “Our food-industry solutions conform to the latest international food-safety specifications with open-access structures, sloping surfaces and crevice-free design,” explains Mortlock. “We also ensure that the correct grades of stainless steel or other approved materials are used for our EU and US customers.”
Yet while the food industry will remain a core revenue stream, Mortlock and his colleagues are increasingly pivoting their business-development efforts towards the scientific research market and the space industry. There’s plenty of momentum already, with Frazer-Nash routinely providing specialist metal parts to satellite manufacturers and rocket-engine developers – the likes of Skyrora and Reaction Engines in the UK, for example – while also making significant inroads with research customers across the physical sciences. The latter include big-science facilities like CERN, the particle physics laboratory in Geneva, and ITER, the next-generation fusion research facility in southern France, as well as smaller academic groups at the University of Southampton and the University of Surrey in the UK.
“They all need a reliable, trusted partner to make their system parts,” explains Mortlock. “We’re providing vacuum chambers, waveguide ports, beam targets and pressure vessels – typically metal parts and systems with demanding requirements for cleanliness, dimensional tolerances and high- or ultrahigh-vacuum capabilities.”
It’s all about dialogue
While every project is unique, Frazer-Nash’s research customers invariably kick off with a variation on the same theme: “This is what we want – can you help us make it?” From that starting point, the requirements-gathering evolves into a granular dialogue between supplier and customer, covering key metrics like manufacturability, cost, performance, reliability and even materials.
To a large extent, it’s then about prioritizing tolerances against those metrics – deciding which ones matter the most. The same goes for materials specifications, with the choice of US- or EU-grade steels, for example, often making a big difference to product lead-times. “Generally we’ll offer advice on manufacturing,” notes Mortlock, “but we don’t tend to be involved with the design of the customer’s parts. They are the design authority.”
Even so, all customers get to tap a deep seam of accumulated experience – a result of Frazer-Nash successfully tackling all sorts of complex manufacturing challenges across the food and aerospace industries over the years. This “assimilated savvy” could be as fundamental as the metallurgy (whether a generic stock material or a proprietary grade of stainless steel will deliver the required performance) or something more specific, such as the finishing and surface treatment needed if a part is destined for an abrasive environment.
“In a lot of cases,” says Mortlock, “customers are coming to us with a v1.0 prototype and they want an enhanced version that’s built to last. Our task is to make that part more reliable, more robust and more readily manufacturable.”
The answers, inevitably, lie in the technical capability and domain-knowledge of the Frazer-Nash engineering team. With this in mind, the company’s long-term investment programme doesn’t just cover plant and machinery, it prioritizes training, development, staff progression and, crucially, retention. The Frazer-Nash wellness programme is a case in point, with even a consultant dietician on hand to advise staff on all aspects of their nutrition, sleep and exercise.
“This is a high-performing team of engineers, technicians and support staff, and we take a holistic approach to their personal development and welfare,” Mortlock concludes. “After all, they’re why we don’t just promise high dependability in low-volume manufacturing – we deliver.”
Frazer-Nash Manufacturing: built to last
While its commercial focus today is realigning towards customers in scientific research and the space industry, Frazer-Nash Manufacturing can trace a diverse legacy of engineering innovation that stretches back more than a century.
Quality control: Frazer-Nash prioritizes 100% inspection on all manufactured items. (Courtesy: Frazer-Nash Manufacturing)
In 1910, English mechanical engineer and designer Archibald Frazer-Nash jointly developed a lightweight automobile, the GN Cycle Car.
Some 12 years later, the British inventor set up the Frazer-Nash sports car company to develop a successor to the GN.
Various Frazer-Nash models were introduced over the following decades, creating a classic sports car marque that’s still admired around the world.
In 1929, Frazer-Nash turned his attention to other areas of engineering, setting up a separate company to service the needs of the emerging commercial and military aerospace industry.
Further diversification followed in the second half of the 20th century, with Frazer-Nash businesses expanding into a wide range of sectors, including postal machines, special-purpose equipment and consultancy services.
In 1990, the wider Frazer-Nash group was rationalized into several smaller specialist companies, with Frazer-Nash Manufacturing comprising a significant portion of the manufacturing and design departments of the former group.
In 2011, Frazer-Nash Manufacturing moved to its current production facility in Petersfield, Hampshire.
The company’s workshop plant includes CNC milling machines (five-axis, four-axis and three-axis); CNC turning and mill turn machines; wire EDM systems; and an additive manufacturing system.
The COVID-19 pandemic has forced the cancellation of countless conferences and workshops across the globe since the start of the year. Some, such as the American Physical Society’s March and April meetings, managed to move online and others are now planning likewise. With the possibility that international face-to-face meetings could be at least a year away, online seems set to become the “new normal”.
Despite some initial technical hiccups, online tools such as Zoom have been shown to offer many advantages. They’re cheap to use and we can reduce our carbon footprint by not travelling unnecessarily to distant international locations for meetings that usually last several days. Online tools also let people with, say, child-care issues or visa restrictions take part in conferences that they may otherwise have found difficult to attend.
At first sight, it looks like the pandemic – together with increasing concerns over climate change – could lead to a new and more efficient form of doing science. But could a wholesale shift online turn out to be misguided? While there are many positives of such a move, there are many dangers too. A world based entirely on online interactions could hamper the progress of science and damage its relationship to society.
We need to be aware of these issues now before they potentially take hold. If we don’t, then it may be difficult, or perhaps impossible, to unravel the long-term effects given that they could take a generation to emerge.
Building trust
Before COVID-19, face-to-face communication at physical conferences was the norm. It is not only quick and efficient (once you’ve made your way to the location that is) but it also plays a fundamental role in the social aspects of science. At such meetings, scientists often develop a common “language” that is not only learned by students but also developed and improved upon by researchers.
It is this process of “socialization” that teaches values central to science and the distinctions between acceptable and unacceptable criticisms. And, of course, face-to-face meetings are where new topics, collaborations and communities emerge. This is because when people meet, they are – consciously or unconsciously – more likely to commit to fresh ideas or new approaches.
There is a kind of “spinning flywheel” here, based on the trust and mutual understanding that has already built up. This makes it possible to move online, at least in the short term, without much difficulty, so the early online experiences are likely to be good.
An obvious example was the confirmation of the discovery of gravitational waves, which was announced at a press conference in February 2016. All the previous five months’ worth of work had been done via teleconference and thousands of e-mails without any face-to-face international meetings. But there is one crucial element: it was all based on decades of trust that had been built up in hundreds of face-to-face meetings beforehand.
The dangers, then, of a wholesale shift to online meetings will only arise as science develops and changes. When that happens, old languages and relationships become irrelevant – older scientists leave the profession and new ones enter: the flywheel loses momentum. This means that new generations need to be inducted into the fundamental norm of integrity, which defines the social institution of science and the foundations of the scientific discipline. It starts with education, where at meetings the relationship between authority and truth in science is revealed, because a PhD student can, and sometimes does, criticize a Nobel laureate. The principle that truth must be seen to trump authority is another crucial value of science.
In the current wholesale shift to online we expect to see success in the short term but increasing difficulties for the development of new science in the medium term. In the longer term, if online becomes the new norm, the challenges could extend far beyond science itself. Such a move will erode the boundary between scientific expertise and online tools such as social media.
The aspects that make science special are all developed through face-to-face socialization, and they do not seem to develop naturally over the Internet. Indeed, if the boundary between information and misinformation comes to be fought out over the Internet rather than in the seminar room or workshop, science will lose the battle – as it nearly has, for example, in the revolt against the MMR vaccine or, in some locales, the acceptance of climate change.
The rise of populism in Western countries and the attack on scientific expertise have put society in danger. The way that science deals with decision-making in the face of uncertainty is a vital lesson for decision-making by elected governments. At the same time, scientific expertise acts as one of the checks and balances in pluralist democracies: respect for scientific expertise prevents a political leader from insisting on anything they like – about the reality of climate change, for example. Without science as an exemplar of how to make difficult decisions in an uncertain world – such as in the face of COVID-19 – we will find ourselves living in a dystopia. We have to get this right.
We need to travel less and, of course, take advantage of what online has to offer and seek to improve it. But the new enthusiasm for virtual meetings must not turn the current short-term successes into a long-term disaster. For areas of science that are well established there is perhaps less danger in a shift to an online world, but for those that involve much disagreement and passion, face-to-face will be the only way to propel the science forward.