Three is a crowd as physicists watch trapped atoms form molecules

Physicists in New Zealand have used optical tweezers to combine three atoms, with two of the atoms forming a molecule in the presence of the third. They were able to measure the rate at which this “three-body recombination” occurs and found it to be much lower than had been expected. The technique could be used to provide important information about how atoms combine to form molecules.

In atomic physics three is a crowd because it is notoriously difficult to calculate how three or more atoms will interact with each other to form a molecule. “Fundamentally, we think that, if we write up the equations of motion from quantum mechanics, these will describe our system,” explains atomic physicist Mikkel Andersen at the University of Otago in Dunedin. “But you can only do that and get an exact solution for a very, very simple system. The question then often becomes finding out, under different conditions, what’s important and what we can throw out of the equations.”

The question is not just of academic interest, as everything around us results from atoms combining. Previous researchers have studied atomic combination using Bose–Einstein condensates, but these contain many atoms, which makes it difficult to disentangle various effects. “We’re trying to develop the capability to build small quantum systems such as molecules atom by atom,” explains Andersen.

Scattered photons

In the new work, Andersen and colleagues at Otago and Massey University in Auckland cooled three atoms of rubidium to 17.8 μK in three separate optical traps positioned about 4.5 μm apart. They then carefully merged the three traps, allowing the atoms to interact. After about a second, the researchers irradiated the combined trap with light, counted the number of photons it scattered and inferred from this how many atoms it contained.

By repeating the experiment multiple times with the same time interval between merging and measurement, the researchers measured the probability of the trap containing various numbers of atoms after that time interval. By varying the time interval, the researchers measured how these probabilities evolved in time.
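To make the procedure concrete, the sketch below shows one way such data might be analysed – tallying repeated runs at each hold time into occupancy probabilities and fitting a simple exponential decay to extract a loss rate. The numbers and the fitting model are illustrative assumptions, not the Otago team’s actual data or analysis.

```python
# Illustrative sketch (not the Otago team's actual analysis): turn repeated
# runs at each hold time into the probability of keeping all three atoms,
# then fit a simple exponential decay to extract a loss rate.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: hold times and, for each, how many of 100 runs still
# contained all three atoms at the measurement stage.
hold_times = np.array([0.1, 0.3, 0.6, 1.0, 1.5])      # seconds
three_atom_counts = np.array([92, 78, 61, 45, 30])    # out of n_runs
n_runs = 100

p_three = three_atom_counts / n_runs                  # occupancy probability
p_err = np.sqrt(p_three * (1 - p_three) / n_runs)     # binomial uncertainty

def survival(t, rate):
    """Simple exponential survival model P(t) = exp(-rate * t)."""
    return np.exp(-rate * t)

popt, pcov = curve_fit(survival, hold_times, p_three, sigma=p_err, p0=[1.0])
print(f"fitted loss rate: {popt[0]:.2f} per second "
      f"(+/- {np.sqrt(pcov[0, 0]):.2f})")
```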

The results provide the first-ever observation of three-body collisions and combinations at an atom-by-atom level. Andersen explains that the results provide important information about how molecules form: “If you put two atoms in an optical tweezer they don’t usually form a molecule by themselves because something else needs to take away the binding energy. But if you put in three, two of them can form a molecule, with the third taking away the binding energy.”

Lower rate

The researchers were surprised that the rate of this “three-body recombination” was more than ten times lower than theoretical predictions and previous experimental work in Bose–Einstein condensates had suggested. Indeed, Andersen was so perplexed by the results that he had an independent group of people repeat the measurements. “I was convinced it could not be true!” he recalls.

Since describing the research in Physical Review Letters, he says, “We have been contacted by a number of people from the scientific community with suggestions as to what could be going on so, in the next year, we will be conducting a number of experiments to verify whether or not these suggestions are actually true.”

Cold-atom experimentalist Hans-Christoph Nägerl of the University of Innsbruck in Austria, who works with Bose–Einstein condensates, is impressed by the work. “I think this bottom-up approach to looking at few-body processes really has a future,” he says. “I think we’ll see a lot of results with similar systems.”

He is less convinced by the team’s conclusions regarding the recombination rate, however. “First of all, the temperature is not really ultracold, so I’m not too surprised that the rates are different from the zero-temperature expectations,” he says. “The second point – which they raise – is the role of confinement. A few years back my group published a paper – which they don’t cite – looking at what happens to correlations in confined dimensions. We only looked at the repulsive case, but the evidence clearly showed that confinement strongly modifies the three-body correlations.” His conclusion is that “it’s a very nice addition to what’s been done previously. Will one see new physics? It’s hard for me to judge.”

Physics joins the fight against the coronavirus, philosophical differences of physics and chemistry, escape to an exotic exoplanet

The novel coronavirus responsible for the current pandemic has only been known for a few months but scientists have already gained a vast amount of information about it. Some of this knowledge has been gained by structural biologists who use techniques first developed by physicists. In this podcast episode, the science journalist Jon Cartwright explains what X-ray crystallography and other techniques can tell us about how the virus reproduces.

Also featured in this episode is the philosopher Vanessa Seifert, who is fascinated by the relationships between chemistry and quantum physics. We also travel to exotic exoplanets and chat about our fantastic student contributors.

We are all working at home here on Physics World, so this podcast was recorded using laptop microphones rather than the usual professional equipment.

COVID-19: how physics is helping the fight against the pandemic

It probably originated in one of the several species of horseshoe bat found throughout east and south-east Asia. Possibly, a pig or another animal ate the bat’s droppings off a piece of fruit, before being sold at a wet market in Wuhan, China, and subsequently infecting one of the stallholders. Or maybe the first transmission to a human occurred elsewhere.

There is a lot we don’t know about the novel coronavirus now called SARS-CoV-2 and its resultant disease, COVID-19. What we do know is that Chinese authorities alerted the World Health Organization (WHO) to the first known cases in Wuhan at the end of last year. Less than a fortnight later, one of those infected people was dead. By the end of January, with more than 10,000 diagnosed cases and 200 fatalities in China alone, and with the virus cropping up far beyond the country’s borders, the WHO declared a global emergency.

As of this article’s publication (19 March), the WHO reports that the virus has spread to 166 countries, areas and territories, with over 205,000 confirmed cases worldwide and the number of deaths exceeding 8500. The status of “pandemic” was officially designated on 11 March and many countries have introduced social distancing, travel restrictions and quarantine methods to try to curb the spread. Festivals, sports events, parades and conferences are being called off due to the front-line support services they require and the concern that large gatherings of people could help spread the virus. The American Physical Society, for example, axed both its annual March meeting in Denver, Colorado, and April meeting in Washington DC.

When it comes to viruses, there is good reason to worry about novelty. Throughout its history, humanity has had to contend with new diseases springing up seemingly out of nowhere, spreading like wildfire and leaving scores of dead in their wake. In ages past, bacterial plagues were often the source of that terror. Since the birth of modern medicine, however, novel viruses have assumed the mantle of doom. Take Spanish flu, for example, which killed up to 100 million people a century ago, or, more recently, HIV, which has led to around 32 million deaths to date. It is only a matter of time before another devastating pandemic strikes, and though epidemiologists do not know what type of virus it will be, they do know that it will be different from anything witnessed before.

Whether or not SARS-CoV-2 is the next “big one”, there is something else epidemiologists are grimly aware of: today, disease travels fast. The Black Death that ravaged Europe, as well as parts of Asia and Africa, in the mid-14th century spread at an average of just 1.5 km a day – hardly surprising, since this was before ships could reliably cross oceans and the fastest mode of transport was by horse. Contrast that with the 2015 outbreak of Zika virus in South America, where the daily dispersion was on average 42 km, peaking at 634 km a day in the most densely populated parts of Brazil. Faced with more populous cities, more mobile people and more international travel, scientists must respond to the threat of viral pandemics faster than ever.

Structural biology has reached the stage where it’s fast enough for almost anything

Fortunately, those scientists now have much more efficient tools at their disposal. Structural biology – the study of the structure and function of biological macromolecules – has come a long way since it was first used as the basis of rational (as opposed to trial-and-error) drug design 30 years ago. Back in the early 1990s, viral structures deposited in the Protein Data Bank – an international repository for structures of biological macromolecules – numbered just a few dozen annually, but by the mid-2010s, there were well over 500 new additions a year. Modern techniques, such as automation and cryo-electron microscopy (cryo-EM), mean that viral structures can be identified almost instantly in many cases. “Structural biology has reached the stage where it’s fast enough for almost anything,” says Alexander Wlodawer, chief of the macromolecular crystallography laboratory at the US National Cancer Institute in Frederick, Maryland.

But is it fast enough to halt a pandemic?

The speed of physics

Physics-based techniques play a huge role in the field of structural biology. The vast majority of biological macromolecule structures are obtained by X-ray crystallography, going back to 1934, when John Desmond Bernal and Dorothy Hodgkin recorded the first X-ray diffraction pattern of a crystallized protein, the digestive enzyme pepsin.  Their work stemmed from that of physicists such as Wilhelm Röntgen, who discovered X-rays; Max von Laue, who discovered that X-ray wavelengths are comparable with inter-atomic distances and are therefore diffracted by crystals; and William Henry and William Lawrence Bragg, who showed how to use a diffraction pattern to analyse the corresponding crystal structure. Hodgkin went on to win the 1964 Nobel Prize for Chemistry for her determinations by X-ray techniques of the structures of important biochemical substances.

Single biological molecules also diffract X-rays, but only very weakly. Crystallization, as Bernal and Hodgkin employed for pepsin, is helpful because it results in the repetition of huge numbers of molecules in an ordered, 3D lattice, so that all their tiny signals reinforce one another and become detectable – by photographic plates in the early days and by active pixel detectors today. These signals are not images of the molecules, for there are no materials that can substantially refract, and thereby focus, scattered X-rays. Rather, the signals are merely the sum of the contributions of X-rays diffracted from different parts of the molecule. To pick apart these contributions, structural biologists rely on a mathematical tool – the Fourier transform. The calculated contributions are then equated with possible atomic structures by a lot of careful (and now largely computer-driven) interpretation.
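The toy calculation below illustrates that Fourier relationship in one dimension: a made-up “electron density” is transformed into diffraction amplitudes and then recovered by the inverse transform. It sidesteps the real-world “phase problem” – detectors record only intensities, not phases – so it is a sketch of the principle rather than of actual crystallographic practice.

```python
# Toy illustration of the Fourier relationship used in crystallography:
# the diffraction amplitudes of a periodic density are (up to the missing
# phases) its Fourier transform, so the density follows from inverting it.
import numpy as np

# A made-up 1D "electron density" for one unit cell: three Gaussian "atoms".
x = np.linspace(0, 1, 256, endpoint=False)
density = sum(np.exp(-((x - c) ** 2) / (2 * 0.01 ** 2)) for c in (0.2, 0.5, 0.7))

structure_factors = np.fft.fft(density)        # complex amplitudes (with phases)
intensities = np.abs(structure_factors) ** 2   # what a detector actually records

# With the phases known (the crux of the real "phase problem"), the density
# comes straight back from an inverse transform.
recovered = np.fft.ifft(structure_factors).real
print("max reconstruction error:", np.max(np.abs(recovered - density)))
```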

I04 beamline at Diamond Light Source

Of course, to obtain the signals in the first place requires X-rays. Nowadays, synchrotron radiation sources – large facilities that accelerate electrons in a continuous ring – are ideal for macromolecular crystallography because they produce high-intensity X-rays with a very narrow spread of wavelengths. At these machines, according to Wlodawer, diffraction datasets that would have taken months with X-rays from traditional rotating anode generators take just seconds to compile.

Technological developments such as these spurred the first forays into rational drug design, in which scientists study the structure and function of molecules in order to work out what drugs might bind to them – and in the case of viruses, prevent them from replicating. Antiviral drugs for HIV were an early success. When HIV protease was identified in 1985 as an essential enzyme – and therefore a potential drug target – in the virus’s life cycle, it took four years for its first crystal structures to be determined, and a further six years for the first licensed drugs to inhibit it. “That’s probably one of the best-documented cases of how quickly rational drug design can go,” says Wlodawer, who contributed to the international effort.

Today, it might have gone faster. The four-year delay in obtaining the structure of HIV protease was primarily due not to the brilliance or quality of X-rays, but to the lack of sizeable crystals of the enzyme. Current synchrotrons and ever newer free-electron lasers – which extract diffraction data from molecular crystals in the few femtoseconds before they are annihilated – employ techniques such as serial crystallography to build up a complete diffraction dataset from numerous partial datasets of crystals that would otherwise be too small. Moreover, both the crystallization and data collection are now automated, so that structural biologists need not even visit a light source themselves: they simply post their proteins to a facility and download the dataset when it is ready.

The analysis of SARS-CoV-2 is a prime example of this type of modern pipeline in action. On 5 February this year, a little over a month after the Chinese authorities disclosed the existence of the new coronavirus, a research team led by Zihe Rao and Haitao Yang at ShanghaiTech University in China uploaded the structure of the virus’s main protease to the Protein Data Bank (DOI: 10.2210/pdb6lu7/pdb), having obtained the dataset using X-ray crystallography at the Shanghai Synchrotron Radiation Facility. “A decade ago, that would have taken a year,” says Wlodawer. “At least.” The structure is already helping pharmaceutical companies to explore potential drugs, such as those used to tackle HIV.

The protein pipeline

Even if molecules refuse to be crystallized, there is still the chance of obtaining structures using cryo-EM, a technique pioneered by Jacques Dubochet of the University of Lausanne in Switzerland, Joachim Frank of Columbia University in New York City, US, and Richard Henderson of the MRC Laboratory of Molecular Biology in Cambridge, UK, for which they shared the 2017 Nobel Prize for Chemistry. In a cryo-EM experiment, a solution containing the biomolecule or complex of interest is applied to a sample holder, or “grid”, as a thin layer. The grid is flash-frozen in liquid ethane to vitrify the sample, which is then imaged by the electron microscope with low doses of electrons to minimize radiation damage. Because single molecules or complexes are imaged directly, there is no need for crystallization.

The coronavirus SARS-CoV-2

Thanks to cryo-EM, Daniel Wrapp and Nianshuang Wang of the University of Texas at Austin, US, and colleagues were able to obtain the structure of an outer “spike” protein of SARS-CoV-2 that is believed to enable the new virus to weasel its way into host cells. From harvesting the protein to submitting a paper to the journal Science on 10 February, the entire process took just 12 days (10.1126/science.abb2507). “Without cryo-EM,” says the University of Texas’s Jason McLellan, an author on the paper, “it may not have been possible at all.”

The structure of the external spike is more useful for creating coronavirus vaccines than drugs. If host cells are exposed to virus-like particles that brandish the same external features, while being hollow inside, those cells can still help the body build an immunity but without the risk of being exposed to a dangerous, fully fledged virus. David Stuart – a structural biologist at the University of Oxford in the UK and director of life sciences at the Diamond Light Source, a “third-generation” synchrotron – has used this synthetic trick to create a new vaccine for foot-and-mouth disease. This virus, which is still devastating livestock in large parts of Africa, the Middle East and Asia, is in a family of single-stranded “positive sense” RNA viruses that also includes polio and human rhinovirus – the latter being behind most cases of the common cold. “Only in the past few years have we been able to exploit structural biology to understand immunity to disease,” he says. The knowledge of viral structures can even be used to design synthetic “therapeutic antibodies” to directly attack diseased cells, he adds.

Stuart obtained the structure of the foot-and-mouth virus itself back in 1989 at the (now defunct) Synchrotron Radiation Source in Daresbury, UK. Indeed, it was one of the first viral structures ever deposited in the Protein Data Bank. For that reason, he knows first-hand how much the techniques have progressed. “Getting those first structures, that was a big deal!” Stuart recalls.

Beware the unknowns

It is too early to predict how long it will take to develop drugs or vaccines for SARS-CoV-2. A US biotechnology firm, Moderna Therapeutics, has already begun human trials for a vaccine candidate, but even if it is successful, it could still take up to 18 months for it to be available to the public. Borrowing the infamous terminology of the former US defence secretary Donald Rumsfeld, Stephen Cusack – the head of the European Molecular Biology Laboratory (EMBL) in Grenoble, France – puts the virus in a category of formidable “unknown unknowns”, which covers viruses that break out without precedent, such as HIV, Zika and the 2002 coronavirus, SARS-CoV. But Cusack says we should still beware the more familiar “known unknown” of pandemic influenza, which has struck three times since the Spanish flu of 1918. Its most recent incarnation in 2009, swine flu, is believed to have infected up to a fifth of the world’s population and killed up to half a million people – although that, relatively speaking, was not such a bad case. Similar numbers are met every year with seasonal flu – in Cusack’s terminology, the “known known”.

Though he graduated as a physicist, Cusack has spent much of his career studying influenza as a structural biologist, and in particular its polymerase – the enzyme behind the virus’s transcription and replication. Like other viruses, in order to replicate, influenza has to produce a code for its proteins known as messenger RNA (mRNA). This needs to match the mRNA of the healthy host cell that the virus is invading in order to trick the cell into producing more of the virus. Some viruses have their own enzymes to synthesize the matching mRNA from scratch; influenza does not, and instead steals a “cap” from the host-cell mRNA as a primer. Biochemists have known of this influenza “cap snatching” for many years, but in 2014 Cusack’s group used structural-biology techniques to understand its basic mechanism at an atomic level. In their most recent work, yet to be published, the researchers have employed cryo-EM to snapshot different stages of the entire polymerase transcription – in effect creating a molecular movie – in order to uncover weaknesses that can be targeted by drugs. “If you can stop this mechanism from working, you can stop the virus from replicating,” Cusack says.

All of which suggests that structural biologists are well-equipped to tackle the next pandemic, be it a known-unknown or unknown-unknown. Whether their techniques are sufficiently advanced to prevent some of the huge death tolls humanity has suffered in the past, however, is still an open question. Those who study complex networks believe they can now predict the rate at which pandemics spread in the modern world (see box below), although such predictions only stress the shortness of the deadlines on which scientists must act. Moreover, finding a drug is only the first step in a long regulatory process involving fabrication, initial toxicology testing and clinical trials.

Even then, there is no guarantee of success. In 2018 Xofluza became the first new antiviral drug for influenza to be approved by regulators in Japan and the US in decades, billed by the press as a “miracle cure” able to stop the virus dead in its tracks just 24 hours after a single dose. A year later, the Japanese company that developed it, Shionogi, discovered that, far from being killed off, the virus in patients taking the drug spontaneously mutated into a more resilient form. Working with Shionogi, Cusack and colleagues used structural-biology techniques to show that although the mutated virus bound to the drug less tightly, it was also less fit to replicate, leaving a question mark over the drug’s efficacy. “No-one knows whether the drug will be useless in a year or two,” he says.

And therein lies a lesson. “The virus is always cleverer than you are,” Cusack says. “But we knew that anyway.”

The global village

Experts from the World Health Organization visit Wuhan

Back in the 14th century, when the bubonic plague known commonly as the Black Death was advancing through Europe, geographical proximity to an infected town or village was a reliable indicator of how likely it was that a neighbouring settlement’s inhabitants would contract the disease. In today’s globalized world, however, with the ease of long-distance travel, that is no longer true – at least according to the physicist Dirk Brockmann of Humboldt University Berlin in Germany and the physicist-turned-social-scientist Dirk Helbing of ETH Zurich in Switzerland. In 2013, based on an analysis of air traffic between airports, the pair showed that computer models could better describe the progress of previous epidemics, such as the swine flu of 2009, if they spread through an “effective” distance between points – one that depends not on miles or kilometres but on the density of traffic flow between them (Science 342 1337).
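A rough illustration of the idea is sketched below, using one common formulation in which the effective distance from one airport to another shrinks logarithmically as the fraction of outbound traffic flowing between them grows. The traffic numbers are invented, and the formula is only a simplified stand-in for the full analysis in the Science paper.

```python
# Toy illustration of "effective distance" between airports: instead of
# kilometres, distance shrinks as the fraction of outbound traffic grows.
# One common formulation is d_mn = 1 - ln(P_mn), where P_mn is the fraction
# of travellers leaving airport n who fly to airport m (invented numbers).
import numpy as np

airports = ["Wuhan", "Beijing", "Bangkok", "Frankfurt"]
# Made-up outbound passenger counts from Wuhan to each destination.
outbound = np.array([0.0, 500_000, 150_000, 20_000])

p = outbound[1:] / outbound[1:].sum()          # traffic fractions from Wuhan
effective_distance = 1 - np.log(p)

for name, d in zip(airports[1:], effective_distance):
    print(f"Wuhan -> {name}: effective distance {d:.2f}")
# Heavily trafficked routes end up "closer", matching the intuition that an
# outbreak reaches them sooner regardless of geographic separation.
```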

Although their model – and others like it – could be important for predicting the spread of current viruses and targeting disease-control measures, population flow between cities is not always known to high accuracy. In a preprint posted online in February this year, however, Piet Van Mieghem and colleagues at the Delft University of Technology in the Netherlands showed that it is possible to make short-term predictions about the rate of progression of the latest coronavirus, SARS-CoV-2, from Wuhan to other cities in the Chinese province of Hubei if population interactions between cities are inferred from initial observations of the spread, rather than relying on prior knowledge. Comparing their model with real data, they found that its predictions for infections in Hubei cities three days ahead were within 7.5% of the actual numbers (arXiv:2002.04482). “The coronavirus pandemic has been a good case to demonstrate the power of our method,” says Van Mieghem.

Millimetre-scale transceiver boosts ingestible sensors

Researchers at Imec, a Leuven, Belgium-based centre for nanoelectronics and digital technologies, have developed a wireless receiver and transmitter small enough to fit inside a millimetre-scale capsule. The transceiver, which was presented at the International Solid-State Circuits Conference in San Francisco, US, last month, is 1/30th the size of today’s state-of-the-art systems and could be used in a broad range of so-called “ingestibles” – sensors that monitor health conditions from inside the human body.

Like their external, wearable cousins, ingestible sensors are designed to measure and track health parameters over a period of time. Unlike them, however, ingestibles need to be able to transmit data autonomously to a receiver station outside the body. “The development of such devices brings along a specific set of challenges,” says Imec’s Christian Bachmann. “They have to be extremely small, consume very little power and be able to connect wirelessly.”

A miniaturization revolution

Bachmann, who serves as programme manager for the Sensitive Networks project at Imec’s laboratory in Eindhoven, Netherlands, explains that these goals are only achievable thanks to an ongoing “miniaturization revolution” in nanoelectronics. This revolution has enabled researchers to develop smart, small and lightweight devices that combine minimal power consumption with maximal patient comfort.

Imec’s wireless transceiver occupies a volume of less than 55 mm³, with areas of 3.5 and 15 mm² on its sides. It supports transmissions at the medical 400 MHz frequency band and contains an on-chip tuneable antenna. According to the Imec team, the most significant achievement is that the module does not require a crystal-based oscillator. Such oscillators are commonly used to precisely stabilize the frequency of radio signals and network protocol timing, but the Imec researchers instead created an on-chip mechanism that, in effect, uses the wireless network to calibrate itself. The lack of an off-chip crystal device made it possible to achieve the extremely small “form factor” needed for an ingestible.

According to Bachmann, on-chip tuning for the miniaturized antenna does more than just guarantee reliable data transmission. It also offers considerable flexibility in impedance tuning – meaning that it works over the wide range of impedances found in the body, from a full stomach to an empty one. “We foresaw a tuneable impedance that can ‘tune’ itself to conditions seen in the digestive tract, so that the transceiver always makes a reliable link,” he says.

A range of uses

While ingestible devices are in their infancy, their potential benefits are considerable. As well as helping to monitor digestive processes and diagnose gastrointestinal diseases, ingestibles could also replace procedures like endoscopic inspections and stool sample analyses – which can be very uncomfortable and provide only one-time observations. Bachmann says the Imec transceiver has several potential use cases, including ingestibles designed to monitor health, nutrition or biomarkers that indicate the presence of disease. However, he cautions that clinical applications will require a further integration of electronics at the nano level.

“For the moment, there are some camera-integrated pills that can be used to study the digestive tract,” he says. “But these pills are still quite big today.” More advanced applications will probably arise as early demonstrators in the healthcare domain. Apart from cameras, ingestibles could also carry other sensors, which – like the transceivers – will need miniature batteries capable of powering them for several weeks.

Despite these obstacles, Giovanni Traverso, who studies ingestibles and implantable robotics at the Massachusetts Institute of Technology in the US, calls the Imec device “a welcome contribution to the field”. Traverso, who was not involved in the Imec project, adds that a smaller transceiver “certainly boosts the miniaturization of ingestibles” by making more room for other components such as sensors and batteries. “It’s key to make the devices smaller and smaller, as this maximizes the safety while transiting the gastrointestinal tract,” he says.

The end result

That raises an important question: what happens after an ingestible has done its job? Does it just follow the natural way out of the body? Bachmann’s response is that Imec is currently exploring ways of fixing ingestible sensors at certain locations along the gastrointestinal tract. “This would enable longer recordings in specific places of interest and keep patients comfortably outside the hospital while their health data is collected and sent in real time to a doctor,” he says.

Traverso notes that this question is particularly important in his work, which focuses on using ingestibles for drug delivery. For this application, it may be important to keep ingestibles in place for days or even weeks. “We’ve made a lot of progress during the past five years, keeping devices in place in the stomach or in the gastrointestinal tract,” he says. For example, a device could be swallowed as a pill, unfold in the stomach, and then dissolve in the acid environment once it has delivered the required drugs.

Hybrid infrared–optical microscope could improve cancer diagnostics

A novel hybrid microscope delivers the same information as standard optical microscopy without the need for detrimental tissue staining, while also providing molecular insight into tissue biopsies. Developed by researchers from the University of Illinois at Urbana-Champaign, the system adds infrared capability to the ubiquitous, standard optical microscope. The new system could have a profound impact on histopathology, both in the clinic and in research, by offering faster diagnosis, lower cost and wider availability (PNAS 10.1073/pnas.1912400117).

Histopathology is the microscopic study of tissues to spot and identify disease manifestations such as tumours. The gold standard technique requires the addition of dyes or stains to human tissue biopsies. This enables pathologists to see the shapes and patterns of the cells under a microscope and distinguish cancerous tissues from healthy ones. However, even for highly trained readers, such diagnostics can prove tricky and are subjective.

Moreover, the information given by optical microscopes is limited and superficial, as it does not shed any light on the underlying molecular changes driving cancer. Infrared (IR) microscopy, on the other hand, can provide such details by measuring the molecular composition of tissues. But while conventional optical microscopes are widespread and easy to use, IR microscopes are expensive and require the sample to undergo an extensive preparation – making this approach impractical in most settings.

A ready-to-build hybrid microscope

A team led by Rohit Bhargava bypassed the limitations of both techniques by coupling them. The feat was achieved by adding an IR laser and an interference objective to an optical camera, a combination that harnesses the strengths of both modalities.

Hybrid microscope

The hybrid microscope has the same high resolution, large field-of-view and accessibility of an optical system, while its software can use the IR data to compute an image that looks similar to a conventional stained sample. This combination allows researchers to use an all-digital approach to biopsies and derive information about tissue density, scattering, path length or visible absorption that exceeds that offered by standard microscopy.

“We built the hybrid microscope from off-the-shelf components. This is important because it allows others to easily build their own microscope or upgrade an existing microscope,” says first author Martin Schnell, a postdoctoral fellow in Bhargava’s group.

AI helps pathologists

The researchers tested the performance of the hybrid microscope on unstained breast tissue samples and compared the results with conventional techniques. They developed an iterative search algorithm to identify the cell types in each biopsy – such as healthy and malignant cells in the epithelium, stroma, red blood cells and some additional tissues – using the 22 available frequency bands of the IR spectrum. The team subsequently found that only seven bands were needed to obtain accurate classification and five to obtain an area under the curve (a metric assessing a tool’s performance) above 0.90, which could considerably speed up diagnosis.
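The snippet below gives a generic flavour of this kind of band selection – a greedy search that adds one spectral band at a time, keeping whichever addition most improves the cross-validated area under the curve. It uses synthetic data and an off-the-shelf classifier, so it is an illustration of the approach rather than the authors’ actual algorithm.

```python
# Generic illustration (not the authors' algorithm): greedily pick IR bands,
# one at a time, keeping whichever addition most improves cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for pixels described by 22 IR band intensities.
X, y = make_classification(n_samples=2000, n_features=22, n_informative=6,
                           random_state=0)

selected, remaining = [], list(range(X.shape[1]))
clf = LogisticRegression(max_iter=1000)

for _ in range(7):                      # stop at seven bands, as in the study
    scores = {b: cross_val_score(clf, X[:, selected + [b]], y,
                                 scoring="roc_auc", cv=5).mean()
              for b in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print(f"bands {selected} -> AUC {scores[best]:.3f}")
```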

More work needs to be done on the analysis of the hybrid images. The researchers are now working to optimize machine-learning programs that can analyse multiple IR wavelengths, create images that readily distinguish between multiple cell types, and integrate those data with the detailed optical images to precisely map cancer within a sample.

“It is very intriguing what this additional detail can offer in terms of pathology diagnoses,” Bhargava says. “This could help speed up the wait for results, reduce costs of reagents and people to stain tissue, and provide an ‘all-digital’ solution for cancer pathology.”

A stance against forced retirement

Making reasonable estimates is a core skill for a scientist. When I interviewed candidates to study physics at the University of Oxford, I’d ask them a variant of the “Fermi problem”. Enrico Fermi famously asked students to estimate the number of piano tuners in Chicago, whereas my version asked how many barbers there are in Oxford. Reasonably accurate answers can be obtained using sensible approximations and any available data (Oxford has a population of 120,000, half go to a barber and do so once every six weeks, etc).
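Written out, the estimate looks something like the few lines below; the haircuts-per-barber figure is an extra assumption of mine rather than a number from the interview question.

```python
# Fermi estimate of the number of barbers in Oxford, using the figures quoted
# in the article plus one extra assumption (haircuts per barber per week).
population = 120_000
fraction_using_barbers = 0.5          # half the population, per the article
visits_per_week = 1 / 6               # one haircut every six weeks
haircuts_per_barber_per_week = 80     # assumed: roughly two an hour, 40-hour week

haircuts_needed = population * fraction_using_barbers * visits_per_week
barbers = haircuts_needed / haircuts_per_barber_per_week
print(f"~{haircuts_needed:.0f} haircuts per week -> roughly {barbers:.0f} barbers")
# 60,000 customers / 6 weeks = 10,000 haircuts a week, i.e. of order 100 barbers.
```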

I found myself doing a similar calculation when faced with forced retirement as a physics professor at Oxford in 2015. Although the UK’s 2010 Equality Act outlaws fixed-age retirement, an employer can impose an Employer Justified Retirement Age (EJRA) but it must show that it is a proportionate means of achieving some legitimate aim. When my request for a further extension of employment to continue my active and funded research was refused, an employment tribunal upheld my claim of age discrimination. The university is appealing the judgement (bit.ly/2SssP9s).

Female academics data

Oxford claimed its EJRA policy, by creating vacancies, improved gender diversity and opportunities for younger academics. I questioned whether it was proportionate by doing a Fermi-style estimation – using reasonable approximations and available data – of the extent to which it could achieve these aims. An EJRA changes only the rate of vacancy creation by bringing forward some vacancies that would occur anyway – no-one works forever. Assuming, initially, that everyone works until retirement and extends their career by 10%, preventing such extensions by an EJRA changes the vacancy rate by 10%. However, data show that, at Oxford, only 40% of employees stayed until retirement and then, at most, only 50% of them wished to extend. The resulting change of 2–4% in the vacancy rate was judged “trivial” by the tribunal and not proportionate to the heavy discrimination involved.
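The arithmetic behind that estimate can be written out in a few lines, using only the figures quoted above.

```python
# The EJRA estimate from the text, written out: the policy only brings forward
# vacancies from staff who would have stayed to retirement AND wished to extend.
career_extension = 0.10        # careers extended by ~10% if allowed
stay_until_retirement = 0.40   # fraction of Oxford staff who stayed to retirement
wish_to_extend = 0.50          # at most half of those wished to extend

upper_bound = career_extension                   # if everyone stayed and extended
actual = career_extension * stay_until_retirement * wish_to_extend

print(f"idealized change in vacancy rate: {upper_bound:.0%}")
print(f"estimated change with Oxford's data: {actual:.0%}")
# Prints ~2%, consistent with the 2-4% range quoted in the article.
```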

When Oxford introduced its EJRA in 2011, it committed, but failed, to monitor its effectiveness by comparison with the rest of the Russell Group of UK universities, none of which – except Cambridge – operates forced retirement. Using data from the Higher Education Statistics Agency I was able to show that there was no evidence of any impact on gender diversity (see figure above). The effect on opportunities for younger people was similarly trivial as indicated by the proportion of academics aged over 67. These results, confirmed by rigorous statistical analysis by Oxford’s own statistics consultant, the late Daniel Lunn, are entirely consistent with the trivial size of the EJRA’s effect on vacancy creation.

Loss of opportunities

It cannot be right to dismiss active physicists, or indeed any productive academic, at some arbitrary age. It is traumatic to be forcibly retired, especially when one’s work is still in full swing and there are new ideas to be explored. The “emeritus” status offered by Oxford, instead of full employment, is of no use to experimental scientists who need a research team and principal-investigator status to apply for their own research funding.

It is simply ageism that underlies many of the arguments used to justify mandatory retirement. Age is often used as a proxy for competence and this lazy stereotype feeds off the myth that scientists have their best ideas when they are young. Indeed, studies have shown that a scientist’s most impactful work can occur at any stage in their career.

The argument that younger people gain from the “freeing up” of posts ignores the loss of opportunities for graduate students and postdocs provided by experienced, grant-winning, senior academics. It is ageism that sees a 40-year-old as “filling” a post whereas a 65-year-old is “blocking” a post. Apart from providing the dignity and fulfilment of employment, there are general imperatives for people to work longer, including the growing pension burden and increases in life expectancy. Recent studies by the World Health Organization also highlight the physical and mental health benefits of working longer. The International Standards Organization is currently conducting a project on the economic and social benefits of an “age-inclusive” workforce.

Ageism is endemic in our society and attitudes persist that would be recognized as shameful if they related to race, religion or sexual orientation. The University of Oxford’s claim that dismissing older academics is necessary to maintain its high standards is simply ageism, implying that academic performance deteriorates with age. If retirement policies are to be truly evidence-based, they need to be justified by reasoned estimates of proportionality that are consistent with the available data.

Transverse arch puts a spring in your step, biomechanics study reveals

The stiffness of the human foot is strongly influenced by an arch that spans its width, a new study suggests. An international research team, led by Madhusudhan Venkadesan at Yale University in the US, came to this conclusion by performing simulations and experiments that probe the physical mechanisms underlying the foot’s transverse tarsal arch (TTA). Their discovery could lead to new advances in medicine and biotechnology – and deliver new insights into how bipedalism first evolved in our distant ancestors.

Humans are unique among primates because the inherent stiffness of our feet enables us to efficiently push off the ground when walking and running. The medial longitudinal arch (MLA), which runs from the heel to the ball of the foot, is thought to play a critical role in this stiffness.

Stiffened by a bowstring-like arrangement of ligaments, the MLA not only keeps the foot rigid; it also stores and releases mechanical energy like a spring as we walk and run. Yet despite our detailed knowledge about the role of the MLA, the precise relation between midfoot curvature and stiffness is still widely debated among anatomists.

Elastic shells

Venkadesan’s team believe that previous analyses of the foot had overlooked the stiffening influence of the TTA, which spans the width of the foot perpendicular to the MLA. To understand the role of the TTA, the team subjected uniform elastic shells to curvatures in both longitudinal and transverse directions, stiffening each curve with ligament-imitating springs.

Measurements on the shells – and computer simulations – have revealed that the transverse bending contributes more to the stiffness of the shell than the longitudinal bending. This is independent of other factors including shell size and thickness. The team also tested the importance of the TTA theory using cadaver feet. This showed that cutting the transverse ligaments reduces overall foot stiffness by 40%, compared with just 23% for the longitudinal ligaments.

Evolutionary history

Venkadesan and colleagues are also exploring how and when the foot’s stiffness and curvature first appeared in the evolutionary history of our ancestors. They have studied a variety of fossils of extinct hominins – which were more closely related to humans than to chimpanzees. This analysis revealed that human-like TTAs predate our own genus, Homo, by over 1.5 million years. This suggests that both the MLA and TTA were critical for the emergence of human bipedalism.

Future studies could also help us better understand the role of the MLA. For example, the MLAs of individual feet have a range of curvatures that is not reflected in the range of foot stiffnesses. It is possible, therefore, that the TTA and MLA work together to create the optimum overall stiffness.

The researchers hope that their insights could lead to new treatments of flatfoot disorders, which can significantly reduce a person’s mobility. The research could also lead to more advanced artificial feet for prosthetic limbs and even robots that can walk and run like humans.

The research is described in Nature.

Open innovation meets the technology challenge of 5G networks

Supermicro’s SuperServer

Mobile operators around the globe are gearing up for a new era of 5G network services. The move to 5G promises higher transmission speeds and more bandwidth, allowing videos and other data-rich content to be uploaded and downloaded up to 20 times more quickly than with current 4G technology. Perhaps even more importantly, 5G networks promise to be much more responsive for time-critical applications: the latency, which measures the time taken for data entered at one point of the network to elicit a response, is set to plummet from 50 ms today to just 1 ms when the roll out is complete.

This improved responsiveness will be crucial for real-time consumer applications, such as self-driving cars, lag-free gaming, and live streaming without the annoyance of buffering. But it will also play an important role in delivering improved and more personalized healthcare services, allowing patients visiting their local clinic to be treated by the best specialists from all over the world via video links, with remote diagnosis and monitoring using systems powered by artificial intelligence (AI). At the same time, first responders with real-time access to sensor data and network-assisted AI will be able to make better informed decisions in the most challenging conditions.

But achieving such performance improvements is forcing mobile operators to rethink and redesign their networks. 5G will exploit higher frequencies to speed up network connections, but this has the effect of shortening the transmission range. More base stations will be needed to provide the same coverage as today, and more computing power will need to be available at the edge of the network – in local offices and branches, for example, and even at the radio tower itself.

“There’s a new wave of technology coming out at the edge to enable low-latency applications, such as those exploiting artificial intelligence and video technologies,” says Jeff Sharpe, director for IoT and embedded solutions at Supermicro, a leading developer of high-performance hardware solutions for datacentres and edge computing. “These technologies will allow network operators to optimize their networks and deliver better services to their customers.”

The new-look network will still have high-performance computing power in the core of the network. That high-end compute would be used, for example, to develop and train the models used for different AI applications. But intelligent edge computing will bring that power to wherever it is needed, allowing end users to exploit the AI algorithms to process and analyse incoming data in real time.

“Operators will also need to exploit cloud-based software solutions to support the move to edge computing,” comments Yaming Wang, director for IoT and embedded solutions at Supermicro. “To do that the operators are focused on adopting an open hardware architecture as well as open-source software.”

That will be a fundamental shift from today’s mobile networks, in which most of the equipment has been sourced from a small number of companies providing proprietary solutions. The effect, says Wang, has been to slow down the evolution of network technology, with many innovations relying instead on the development of improved software services.

As a result, the world’s leading network operators – including the likes of AT&T, Verizon and Deutsche Telekom – have come together to form the Open Radio-Access Network (O-RAN) Alliance. Its mission is to build an open 5G infrastructure based on virtualized network elements (which allow installed equipment to be used more flexibly), standardized interfaces and hardware sourced from multiple vendors.

“The O-RAN Alliance was created to accelerate the delivery of products that support a common, open architecture that we, as operators, view as the foundation of our next-generation wireless infrastructure,” explains Deutsche Telekom’s Alex Jinsung Choi. “It will also ensure that we have a broad community of suppliers driven by innovation and open market competition.”

That approach plays to the strengths of a company like Supermicro, which has focused on developing open-architecture hardware platforms and building virtualized solutions with different software partners. These virtual network elements – essentially a combination of hardware and software that performs a specific network function – will be distributed throughout the radio-access network to deliver high-performance computing to end users, and to support the more dynamic needs of 5G services.

“Supermicro sees the edge as different areas,” explains Sharpe. “We have equipment that’s specifically designed to be installed in a customer premise, something like a local banking office that needs high-end technology for security applications. We also have a high-performance server that’s designed to be used in a controlled environment, such as a micro data centre.”

Supermicro’s high-performance server, the 1019P, comes in a compact rackmount format – less deep than standard data centre equipment – that allows it to be deployed in branch offices and other network-oriented indoor locations such as repurposed telephone central offices. It can run many different applications, and has two expandable slots that can be used interchangeably to provide local computing power or to support network O-RAN applications.

For compute-intensive applications such as AI inferencing, one or both slots can be configured with 2nd Generation Intel® Xeon® Scalable processors, designed specifically for data-centric computing and offering built-in AI acceleration. Alternatively, it can be fitted with Intel®’s Programmable Acceleration Card N3000, a field-programmable gate array (FPGA) that supports site-to-site communications for an open 5G radio-access network.

“Intel® and Supermicro address this network transformation opportunity as partners,” says Allen Leibovitch, senior product marketing manager at Supermicro. “Intel® often supports us in developing hardware and software reference designs, including verified Intel® Select Solutions.”

Supermicro’s SuperServer

Another high-performance server, the SuperServer E403-9D-16C-IPD2, has been designed for installation on the radio tower itself. It also has expandable slots capable of running both FPGA and Xeon®-enabled processing technologies, and the whole package fits inside a standard IP65 environmental enclosure to enable it to operate in the harshest of weather conditions. “Our new outdoor SuperServer brings high-performance data centre capability to the cell site itself,” says Leibovitch. “This will be essential for network providers to deploy dynamic 5G networks and to implement advanced real-time applications and services for their customers.”

Visit the Supermicro website to find out more about the company’s open hardware solutions for 5G networks.

Seismic imaging technology sees deep inside the brain

Brain imaging

A computational technique developed to process seismic images of the Earth’s subsurface could allow for high-resolution human brain imaging, reports a new study by researchers from Imperial College London. Although presently in the simulated, proof-of-concept stage only, the development could pave the way towards a cheaper, portable and more universally applicable method for rapid diagnosis of stroke and head trauma, and continuous monitoring of a wide range of neurological conditions (npj Digit. Med. 10.1038/s41746-020-0240-8).

Both of the leading conventional techniques for imaging the brain come with inherent limitations. MR imaging is unsuitable for patients who have – or are suspected of having – metallic implants or other foreign bodies. It is also impractical for use on severely obese, claustrophobic or uncooperative patients. X-ray CT, meanwhile, involves exposure to harmful ionizing radiation, ruling out its use with young patients or for continuous monitoring. Both modalities also require large, expensive, high-powered machines that cannot practically be set up outside of hospital or laboratory settings.

In contrast, ultrasound imaging is universally safe for use and can be made portable – but traditional applications have not been able to scan within the human skull. This is because the bone attenuates, scatters and reflects the waves in complex ways that cannot be undone by simple algorithms.

In a new study, physicist Lluís Guasch and colleagues turned to a computational technique known as full-waveform inversion (FWI), which is used by geophysicists to extract three-dimensional images of the Earth’s subsurface from data that seismometers collect as waves pass underground. A nonlinear data-fitting procedure, FWI works by using real-world seismic data to create a rough model of subsurface conditions from which wave equations can be solved to produce mock data. The model is then iteratively improved until this output provides the best fit for the real-world data.
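The loop at the heart of FWI can be caricatured in a few lines of code: guess a model, simulate the data it would produce, measure the misfit against the observed data and adjust the model until the two agree. The one-dimensional toy below fits two wave speeds to a pair of simulated echoes; real FWI solves the full wave equation in three dimensions and uses far more sophisticated optimization, so this is only a sketch of the principle.

```python
# Caricature of the FWI loop (nothing like the team's 3D implementation):
# guess a two-layer wave-speed model, simulate the echo waveform it would
# produce, and iteratively adjust the speeds until the simulated trace
# matches the "observed" one.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 200e-6, 4000)              # time axis, seconds
depths = np.array([0.03, 0.07])               # reflector depths in metres (fixed)

def wavelet(t0):
    """A simple Ricker-like pulse centred at time t0."""
    s = (t - t0) / 5e-6
    return (1 - 2 * s**2) * np.exp(-s**2)

def simulate(speeds):
    """Synthetic trace: one echo per layer, delayed by its two-way travel time."""
    trace = np.zeros_like(t)
    t_accum, z_prev = 0.0, 0.0
    for z, c in zip(depths, speeds):
        t_accum += 2 * (z - z_prev) / c       # two-way time through this layer
        z_prev = z
        trace += wavelet(t_accum)
    return trace

true_speeds = np.array([1500.0, 2800.0])      # "observed" data uses these speeds (m/s)
observed = simulate(true_speeds)

def misfit(speeds):
    return np.sum((simulate(speeds) - observed) ** 2)

# Start from a deliberately wrong (but not wildly wrong) initial guess.
result = minimize(misfit, x0=[1450.0, 2600.0], method="Nelder-Mead")
print("recovered wave speeds:", result.x)      # should approach 1500 and 2800 m/s
```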

Transducer array

Instead of using seismometers across the Earth’s surface, the researchers envisage using a helmet-like mesh of 1024 ultrasound transceivers. Through simulation, they show that in such a set-up, FWI is indeed capable of reconstructing high-resolution, MRI-like images of the brain – ones in which grey matter, white matter, ventricles and other structures can be clearly seen. They also demonstrate in the lab that ultrasound transceivers are able to record signals from within a human skull with the required signal-to-noise ratio for processing with their FWI algorithm.

“This is the first time FWI has been applied to the task of imaging inside a human skull,” says Guasch. “In many ways, it is easier to apply FWI in medical imaging than in geophysics.” This, he explains, is because – unlike when dealing with the unique nature of different subsurface images – individual skulls have commonalities that can help guide the image reconstruction process.

“Neurology has been waiting for a new, universally applicable imaging modality for decades; FWI could well be the answer,” adds co-author Parashkev Nachev.

Furthermore, the researchers say that it should be possible to eventually realize a clinical version of their scanner that is portable – sized to fit on a motorbike or within an ambulance – that could allow scans to be undertaken on patients in advance of reaching hospital. Similarly, the device could be mounted on a frame to perform bedside imaging. The one drawback of the approach, however, is that it presently takes considerable time – as the helmet produces 1024 × 1024 individual ultrasound signals, which take around 32 hours to process on a conventional server.

“This is an important piece of work, as most imaging physicists would have assumed that, quite apart from the large signal attenuation that the skull produces, the multiple internal reflections and scattering of the soft tissue signals as they hit the bone interface would render any hope of reconstruction impossible,” says Stephen Williams, an imaging scientist from the University of Manchester who was not involved in the present study. “The paper provides compelling evidence that a physical realization of the concept should be possible, provided that the computer processing time can be reduced by around 200 times compared to the simulations reported in the article.”

The researchers note that three-dimensional ultrasound tomography using FWI could find particular relevance for rapid diagnosis and treatment of stroke. With their initial study complete, they are moving to further develop their prototype system with the goal of producing the first brain image of a live human subject – alongside improving the robustness of their image generation algorithm and lowering computational costs.

Suction forces enable precise bioprinting

A technique described by its creators as “like picking up a pea by placing a drinking straw on it and sucking through the straw” could make it easier to fabricate precise 3D patterns of biological tissues in the laboratory. The approach, dubbed aspiration-assisted bioprinting, could be used for applications such as regenerative medicine, tissue engineering and in vitro modelling of human diseases.

In 3D bioprinting, cell-laden hydrogels or “bioinks” are used to build biological structures layer-by-layer. Recent advances in the field mean that researchers can routinely fabricate patterned tissues and vascular-like networks and perfuse them with living cells and nutrients. The techniques employed vary depending on the viscosity and nature of the bioinks, and include ink-jet printing, microvalve- and extrusion-based bioprinting to name but three.

The great hope of 3D bioprinting is that it will enable patient-specific human tissues to be fabricated in the lab – perhaps even using a patient’s own cells. The problem is that current 3D bioprinting techniques cannot accurately position the densely packed aggregates of living cells that act as building blocks for functional human tissues and organs. These aggregates, known as “tissue spheroids”, can also be rendered non-viable if the printing process damages their biological, structural or mechanical properties. A further challenge is that most techniques cannot print spheroids of different sizes, or accommodate the scaffold-like structures that are the starting point for many tissue-engineering applications.

Aspiration-assisted bioprinting

A team of researchers at Pennsylvania State University in the US has now developed a new bioprinting technique that overcomes these difficulties by using suction to pick up and print different types of spheroids. The spheroids they tested were made of human or mouse mesenchymal stem cell aggregates and ranged in size from 80 to 600 microns. To avoid damaging them, the researchers kept the suction force to a minimum value, which they calculated based on the critical lifting pressure needed to overcome the thermodynamic barrier at the interface between the air, the tissue and the cell growth medium.

By holding the suction forces on the spheroids, team leader Ibrahim Ozbolat and colleagues demonstrated that they could move the spheroids to the proper locations before releasing them. They used this technique in conjunction with conventional micro-valve printing to build up tissues.

Collective capillary sprouting

By controlling the exact placement and type of spheroid, the Penn State team created tissues made from different types of cell, such as bone, as well as tissues that consist of a single cell type. This precise control also enabled them to create a matrix of spheroids with capillaries sprouting in specific directions. Since capillaries deliver oxygen and nutrients to cells, and are thus crucial for tissue growth and viability, controlling their spread is an important step towards creating viable tissues.

As well as bioprinting spheroids, Ozbolat says the team also printed tissue strands and single electrocytes – the modified muscle or nerve cells that generate electricity in fish such as electric eels. The bioprinted electrocytes might be used to fabricate biological batteries for various applications, including pacemakers, cochlear implants and brain chips, he tells Physics World.

The researchers, who report their work in Science Advances, say they are now focusing on improving their system so it can print spheroids at a higher rate, which would allow them to create larger tissue samples faster and with more intricate shapes.

Copyright © 2025 by IOP Publishing Ltd and individual contributors