Quantum computing in the cloud

Quantum computers – devices that use the quantum mechanical superposition principle to process information – are being developed, built and studied in organizations ranging from universities and national laboratories to start-ups and large corporations such as Google, IBM, Intel and Microsoft. These devices are of great interest because they could solve certain computationally “hard” problems, such as searching large unordered lists or factorizing large numbers, much faster than any classical computer. This is because the quantum mechanical superposition principle is akin to an exponential computational parallelism – in other words, it makes it possible to explore multiple computational paths at once.

Because nature is fundamentally quantum mechanical, quantum computers also have the potential to solve problems concerning the structure and dynamics of solids, molecules, atoms, atomic nuclei or subatomic particles. Researchers have made great progress in solving such problems on classical computers, but the required computational effort typically increases exponentially as the number of particles rises. Thus, it is no surprise that scientists in these fields view quantum computers with a lot of interest.

Many different technologies are being explored as the basis for building quantum processors. These include superconductors, ion traps, optical devices, diamonds with nitrogen-vacancy centres and ultracold neutral atoms – to name just a few. The challenge in all cases is to keep quantum states coherent for long enough to execute algorithms (which requires strictly isolating the quantum processor from external perturbations or noise) while maintaining the ability to manipulate these states in a controlled way (which inevitably requires introducing couplings between the fragile quantum system and the noisy environment).

Recently, universal quantum processors with more than 50 quantum bits, or qubits, have been demonstrated – an exciting milestone because, even at this relatively low level of complexity, quantum processors are becoming too large for their operations to be simulated on all but the most powerful classical supercomputers. The utility of these 50-qubit machines to solve “hard” scientific problems is currently limited by the number of quantum-logic operations that can be performed before decoherence sets in (a few tens), and much R&D effort is focused on increasing such coherence times. Nevertheless, some problems can already be solved on such devices. The question is, how?

First, find a computer

Within the research sector, scientists have taken the first steps towards using quantum devices to solve problems in chemistry, materials science, nuclear physics and particle physics. In most cases, these problems have been studied by collaborations between scientists and the developers, owners and/or operators of the devices. However, a combination of publicly available software (such as PyQuil, QISKit and XACC) to program quantum computing processors, coupled with improved access to the devices themselves, is beginning to open the field to a much broader array of interested parties. The companies IBM and Rigetti, for instance, allow users access to their quantum computers via the IBM Q Experience and the Rigetti Forest API, respectively. These are cloud-based services: users can test and develop their programs on simulators, and run them on the quantum devices, without ever having to leave their offices.
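
To give a flavour of what this cloud workflow looks like, here is a minimal sketch using Qiskit, one of the toolkits mentioned above. The exact entry points have changed between Qiskit releases, so treat the calls below (QuantumCircuit, Aer, execute) as illustrative of the workflow rather than a definitive recipe; the same circuit could be submitted to a cloud backend instead of the local simulator.

```python
# Illustrative sketch: build a small entangling circuit, run it on a local
# simulator, and read back measurement counts. Swapping the simulator for a
# cloud backend is what services such as the IBM Q Experience provide.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into a superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # project both qubits onto classical bits

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # roughly equal numbers of '00' and '11' outcomes
```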

As an example, we recently used the IBM and Rigetti cloud services to compute the binding energy of the deuteron – the bound state of a proton and a neutron that forms the centre of a heavy hydrogen atom. The quantum devices we used consisted of about 20 superconducting qubits, or transmons. The fidelity of their quantum operations on single qubits exceeds 99%, and their two-qubit fidelity is around 95%. Each qubit is typically connected to 3–5 neighbours. It is expected that these specifications (number of qubits, fidelities and connectivity) will improve with time, but the near future of universal quantum computing is likely to be based on similar parameters – what John Preskill of the California Institute of Technology calls “noisy intermediate-scale quantum” (NISQ) technology.

The deuteron is the simplest atomic nucleus, and its properties are well known, making it a good test case for quantum computing. Also, because qubits are two-state quantum-mechanical systems (conveniently thought of as a “spin up” and a “spin down” state), there is a natural mapping between qubits and fermions – that is, particles with half-integer spin that obey the Pauli exclusion principle – such as the proton and neutron that make up a deuteron. Conceptually, each qubit represents an orbital (or a discretized position) that a fermion can occupy, and spin up and down correspond to zero or one fermion occupying that orbital, respectively. Based on this Jordan-Wigner mapping, a quantum chip can simulate as many fermions as it has qubits.
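
For readers who want the mapping spelled out, here is the standard Jordan-Wigner transformation in one common sign convention (a textbook form, not necessarily the exact notation used in the calculation). The creation and annihilation operators for the fermionic orbital labelled $j$ become strings of Pauli operators acting on the qubits:

$$ a_j^{\dagger} = \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j - iY_j}{2}, \qquad a_j = \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j + iY_j}{2}, $$

so the occupation-number operator is $a_j^{\dagger} a_j = \tfrac{1}{2}(1 - Z_j)$: a qubit with $Z_j = +1$ (“spin up”) encodes an empty orbital and $Z_j = -1$ (“spin down”) an occupied one, exactly as described above.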

Another helpful feature of the quantum computation of the deuteron binding energy is that the calculation itself can be simplified. The translational invariance of the problem reduces the bound-state calculation of the proton and the neutron to a single-particle problem that depends only on the relative distance between the particles. Furthermore, the deuteron’s Hamiltonian becomes simpler in the limit of long wavelengths, as details of the complicated strong interaction between protons and neutrons are not resolved at low energies. These simplifications allowed us to perform our quantum computation using only two and three qubits.

Then, do your calculation

We prepared a family of entangled quantum states on the quantum processor, and calculated the deuteron’s energy on the quantum chip. The state preparation consists of a unitary operation, decomposed into a sequence of single- and two-qubit quantum logical operations, acting on an initial state. With an eye towards the relatively low two-qubit fidelities, we employed a minimum number of two-qubit CNOT (controlled-not) operations for this task. To compute the deuteron’s energy, we measured expectation values of Pauli operators in the Hamiltonian, projecting the qubit states onto classical bits. This is a stochastic process, and we collected statistics from up to 10,000 measurements for each prepared quantum state. This is about the maximum number of measurements that users can make through cloud access, but it was sufficient for us because we were limited by noise and not by statistics. More complicated physical systems employing a larger number of qubits, or demanding a higher precision, could, however, require more measurements.
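
The sampling step can be mimicked entirely on a classical computer. The sketch below is a NumPy stand-in (not the circuit or Hamiltonian used in the actual experiment): it prepares a one-parameter two-qubit state of the kind described above, then estimates Pauli expectation values from a finite number of simulated “shots”, just as one would from repeated measurements on hardware.

```python
# Illustrative NumPy sketch of estimating Pauli expectation values by sampling.
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def ansatz(theta):
    # one particle shared between two orbitals: cos(theta)|01> + sin(theta)|10>
    psi = np.zeros(4)
    psi[1] = np.cos(theta)   # amplitude of |01>
    psi[2] = np.sin(theta)   # amplitude of |10>
    return psi

def estimate(psi, pauli, shots=10000):
    # "measure" the observable: sample its eigenvalues with Born-rule probabilities
    vals, vecs = np.linalg.eigh(pauli)
    probs = np.abs(vecs.conj().T @ psi) ** 2
    return rng.choice(vals, size=shots, p=probs).mean()

psi = ansatz(0.3)
print(estimate(psi, np.kron(Z, I2)))   # close to cos(0.6) = 0.825
print(estimate(psi, np.kron(X, X)))    # close to sin(0.6) = 0.565
```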

To compute the binding energy of the deuteron, we had to find the minimum energy of all the quantum states we prepared. This minimization was done with a classical computer, using the results from the quantum chip as input. We used two versions of the deuteron’s Hamiltonian, one for two and one for three qubits. The two-qubit calculation involved only a single CNOT operation and, as a consequence, did not suffer from significant noise.
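
The classical half of this hybrid loop is a one-dimensional minimization. The sketch below uses SciPy and a closed-form stand-in for the measured expectation values; the coefficients are placeholders chosen for illustration, not the published deuteron Hamiltonian.

```python
# Sketch of the classical minimization step in the hybrid quantum-classical loop.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(theta):
    # stand-ins for the measured <Z_0> = cos(2*theta) and <X_0 X_1> = sin(2*theta),
    # weighted by placeholder Hamiltonian coefficients
    return 0.5 * np.cos(2 * theta) - 1.0 * np.sin(2 * theta)

res = minimize_scalar(energy, bounds=(0.0, np.pi), method='bounded')
print(res.x, res.fun)   # angle of the minimum-energy state and the estimated minimum energy
```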

However, the three-qubit calculation was considerably affected by noise, because the quantum circuit involved three CNOT operations. To understand the systematic effects of the noise, we inserted extra pairs of CNOT operations – equivalent to identity operators in the absence of noise – into the quantum circuits. This further increased the noise level and allowed us to measure and subtract the noise in the energy calculations. As a result, our efforts yielded the first quantum computation of an atomic nucleus, performed via the cloud.
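
The extrapolation itself amounts to a simple fit. The toy sketch below shows the idea with invented numbers: measure the energy for circuits in which the noise has been deliberately amplified by adding CNOT pairs, then extrapolate linearly back to the zero-noise limit.

```python
# Toy sketch of zero-noise extrapolation; the energies below are invented.
import numpy as np

r = np.array([1, 3, 5])               # noise amplification factor (1 = original circuit,
                                      # 3 and 5 = one and two extra CNOT pairs per CNOT)
E = np.array([-1.74, -1.61, -1.50])   # hypothetical noisy energy measurements

slope, intercept = np.polyfit(r, E, 1)
print(intercept)                      # linear extrapolation to r = 0: the noise-free estimate
```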

What next?

For our calculation, we used quantum processors alongside classical computers. However, quantum computers hold great promise for standalone applications as well. The dynamics of interacting fermions, for instance, is generated by a unitary time-evolution operator and can therefore be naturally implemented by unitary gate operations on a quantum chip.

In a separate experiment, we used the IBM quantum cloud to simulate the Schwinger model – a prototypical quantum-field theory that describes the dynamics of electrons and positrons coupled via the electromagnetic field. Our work follows that carried out by Esteban Martinez and collaborators at the University of Innsbruck, who explored the dynamics of the Schwinger model in 2016 using a highly optimized trapped-ion system as a quantum device, which permitted them to apply hundreds(!) of quantum operations. To make our simulation possible via cloud access to a NISQ device, we exploited the model’s symmetries to reduce the complexity of our quantum circuit. We then applied the circuit to an initial ground state, generating the unitary time evolution, and measured the electron-positron content as a function of time using only two qubits.
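
Conceptually, the computation tracks an observable under unitary time evolution. The short sketch below illustrates that idea on a classical computer with a generic placeholder two-qubit Hamiltonian and a “particle number” observable; it is not the Schwinger-model Hamiltonian or circuit used in the experiment.

```python
# Illustrative sketch: evolve a two-qubit state in time and track an observable.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))   # placeholder Hamiltonian
N = 0.5 * (2 * np.eye(4) - np.kron(Z, I2) - np.kron(I2, Z))   # counts qubits in state |1>

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                                 # start in |00>

for t in np.linspace(0.0, 3.0, 7):
    psi_t = expm(-1j * H * t) @ psi0                          # exact unitary evolution
    print(round(t, 2), np.real(psi_t.conj() @ N @ psi_t))     # expectation of N at time t
```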

The publicly available Python APIs from IBM and Rigetti made our cloud quantum-computing experience quite easy. They allowed us to test our programs on simulators (where imperfections such as noise can be avoided) and to run the calculations on actual quantum hardware without needing to know many details about the hardware itself. However, while the software decomposed our state-preparation unitary operation into a sequence of elementary quantum-logic operations, the decomposition was not optimized for the hardware. This forced us to tinker with the quantum circuits to minimize the number of two-qubit operations. Looking into the future, and considering more complex systems, it would be great if this type of decomposition optimization could be automated.

For most of its history, quantum computing has only been experimentally available to a select few researchers with the know-how to build and operate such devices. Cloud quantum computing is set to change that. We have found it a liberating experience – a great equalizer that has the potential to bring quantum computing to many, just as the devices themselves are beginning to prove their worth.

The rise and rise of cryogenic electron microscopy

Combining physics, biology and chemistry, structural biology investigates the anatomy of biological macromolecules, and proteins in particular, improving understanding of diseases and enabling drug discovery. Speaking at the 68th Lindau Nobel Laureate meeting in Germany last week, biophysicist Joachim Frank said that the field is entering a new era with a bright future thanks to advances in cryogenic electron microscopy (cryo-EM).

Frank, a German-American Nobel Laureate in chemistry based at Columbia University, New York, traced the history of cryo-EM to the present day in his Lindau lecture. A pioneer of the technique, Frank shared the 2017 prize with fellow biophysicists Richard Henderson and Jacques Dubochet for his significant contributions to its development.

Cryo-EM rapidly freezes molecular samples in solution and images them with an electron beam to reveal the molecular structure. Today, resolutions of 3–4 Å are routinely achievable, while at its limits the technique can achieve atomic resolution of 2 Å.

The technique’s particular strength is its ability to construct models of single, unattached molecules from images of the molecules in their natural state – encompassing a range of arrangements and binding states with other molecules.

X-ray crystallography, a mainstay of structure determination, in contrast demands that molecules be prepared as crystals – a form not typically taken by biomolecules. Crystallography can, however, take advantage of the regularly ordered molecules to obtain diffraction patterns from which their structures can be reconstructed.

In the early 1970s, researchers saw electron tomography of individual molecules as a solution. However, in a major drawback, images of intact molecules were not possible, as the electron doses required inflicted significant damage on the samples.

At around the same time, during his PhD, Frank had a new idea. It was to use computational and mathematical techniques to wrangle 2D images of hundreds of thousands of molecules in a sample. By aligning and averaging them, a single, clearer 3D picture of the molecule could be constructed, he proposed. The ribosome, a molecular machine found in large quantities in the cell cytosol that makes proteins, was to become his test object in developing the techniques.
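
The core of that idea can be illustrated with a toy calculation (a deliberately crude stand-in, not Frank’s actual algorithms): generate many noisy, randomly shifted copies of the same 2D “particle”, align them by cross-correlation and average them, and the average comes out far cleaner than any single image.

```python
# Toy sketch of single-particle alignment and averaging.
import numpy as np

rng = np.random.default_rng(1)

# a crude reference "particle": a bright square on a dark background
ref = np.zeros((32, 32))
ref[12:20, 12:20] = 1.0

def noisy_copy():
    # randomly shifted copy of the particle buried in noise
    shift = rng.integers(-4, 5, size=2)
    return np.roll(ref, shift, axis=(0, 1)) + rng.normal(0.0, 1.0, ref.shape)

def align(img, template):
    # translational alignment: locate the peak of the cross-correlation (via FFTs)
    corr = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(img, (dy, dx), axis=(0, 1))

stack = [align(noisy_copy(), ref) for _ in range(1000)]
average = np.mean(stack, axis=0)
print(np.abs(average - ref).mean())   # residual noise shrinks roughly as 1/sqrt(N)
```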

A workbench for molecular reconstruction

To process and reconstruct the cryo-EM images, Frank developed SPIDER, a modular image processing program and the first of its kind in electron microscopy, around the turn of the 1980s. The computational equivalent of a workbench, it comprised hundreds of operations.

Key techniques developed in Frank’s lab included a correlation-based averaging method that identified ribosomes facing in a particular direction and averaged them into a single structure. In another, multivariate statistical analysis was applied to tackle the structural heterogeneity that occurs across a population of molecules in a sample, classifying and grouping similar structures together. Bringing all the techniques together, Frank and post-doc Michael Radermacher completed their first molecular reconstruction, the large subunit of the ribosome, in 1986.

Around the same time, Dubochet and colleagues at the European Molecular Biology Laboratory in Heidelberg made an important advance. For the first time, Dubochet vitrified sample solutions into a glass-like state by rapid freezing using liquid ethane cooled to -196°C. The technique prevents the formation of ice crystals that would otherwise damage the molecules and diffract the electron beam, resulting in unusable images.

“This now gave the method I developed a very big boost, because now we could look at molecules in their native states,” said Frank. Exploiting this, Frank and collaborators went on to reconstruct several structures for the first time in the mid-1990s. They included a calcium release channel, octopus haemocyanin and the Escherichia coli ribosome. “These were all pioneering contributions at the time.”

Revealing ribosome movement

Another landmark finding followed in 2000. Frank and post-doc Rajendra Agrawal took advantage of further improvements in resolution to reveal the structure of the Escherichia coli ribosome in unprecedented detail. Their analysis revealed a ratchet-like movement of the ribosome’s two subunits relative to one another during translation, the process in which messenger RNA is used to synthesize proteins. The movement proved critical to the ribosome’s functioning.

Despite the breakthroughs, however, Frank still saw resolution as a limiting factor in the lab’s research. In 2013, the team achieved its best imaging on film after two years spent going through 260,000 images of the ribosome of Trypanosoma brucei, a parasite that causes African sleeping sickness. “We got stuck at 5.5 Å resolution. It was essentially a wall.” At this resolution, it was not possible to infer structures with high precision. Resolving side chains on the molecules, for example, requires a resolution of around 3 Å.

Cameras take cryo-EM to next level

The arrival of the first commercial single-electron detection cameras in 2012, however, has had a decisive impact. Their detective quantum efficiency (DQE) is significantly higher than that of film. Frank’s lab used them in 2016 to resolve the structure of the ribosome in Trypanosoma cruzi, a parasite responsible for Chagas disease, at a resolution of 2.5 Å. With the aid of the camera, his technique even allowed a single water molecule to be resolved. “I couldn’t believe it when I first saw it,” said Frank.

Trypanosoma cruzi ribosome large subunit

Combined with maximum likelihood statistical methods, cryo-EM imaging technology is now enabling the determination of multiple, co-existing structures in a given sample at near-atomic resolution. This often results in snapshots of molecules in different states. Dubbed a “story in a sample” by Frank, the information means cellular functions can be visualized indirectly. Through these capabilities, he predicts a boom in knowledge. “There’s going to be a huge expansion of [structural] databases relevant for molecular medicine.”

Joachim Frank’s full lecture, Single-Particle Cryo-EM of Biological Molecules – the Sky Is the Limit, can be seen below. (Courtesy: Lindau Nobel Laureate meetings)

Warmer world needs more protected habitat

Some time later this century, the world’s need for protected habitat will be even more acute than it is today.

The greatest danger to the wild vertebrates that roam the planet will not be the intruding humans, their livestock and their pesticides and herbicides. It will be human-induced global warming and climate change.

The conversion of wilderness – forest, grassland and swamp – to urban growth, agriculture and pasture has already caused losses of perhaps one species in 10 in the natural ecosystems disturbed by humankind.

But climate change driven by the profligate human burning of fossil fuels could by 2070 overtake the damage delivered by changes in the way land is used, with catastrophic consequences for birds, reptiles, mammals and other vertebrates.

Losses could reach 20% or even 40%, according to a new study in the journal Proceedings of the Royal Society B.

And a second, separate study in another journal spells out the challenge for governments, communities and conservators: the present targets for biodiversity conservation are simply inadequate. They leave 83% of the land surface unprotected, and 90% of the oceans not effectively conserved.

There have been calls to set at least half of the globe aside for the wild animals, plants and fungi that – until human numbers began to expand – dominated the planet. But the latest study, in the journal Nature Ecology and Evolution, suggests that even a half-share for nature might not be enough to save many species from extinction.

Researchers have been warning for two decades that climate change poses a real threat to the thousands of known species of wild creature, and millions of plants and animals yet to be identified and monitored.

They have argued that climate change will damage the forests that provide a natural home for countless forms of life; that global warming already presents dangers for known species; and that climate change may already have claimed more victims than anyone has so far realized.

Natural answer

They have also, in different ways, proved again and again that rich, biodiverse habitats, especially forests, are part of the natural machinery for limiting climate change – and in any case, in simple cash terms, forests are worth more to humankind as natural forests than as plantations, or cattle ranches.

And to drive home the message, a third study in the same week, in the Journal of Animal Ecology, highlights the direct dangers of warmer sea waters to the colonies of black-browed albatross in the Southern Ocean. Meticulous monitoring since 1979 has shown that the biggest variation in population growth depends simply on sea water temperatures as the juvenile birds set off for their first year of independence over the open sea.

The cold Antarctic waters are rich in dissolved oxygen and support enormous levels of plant and tiny animal life on which the birds, fish and sea mammals depend. As waters warm, food becomes less available.

“As our oceans are projected to warm, fewer juvenile albatrosses will manage to survive and populations are expected to decline at a faster rate,” said Stéphanie Jenouvrier, of the Woods Hole Oceanographic Institution in the US.

The albatross populations of the Southern Hemisphere are already vulnerable: climate change will put them even more at risk. And researchers have already pointed out that although great tracts of the world have been declared reserves, many of those protected territories have been systematically degraded by human invasion.

Heavy demands

“Humanity asks a lot of the natural world. We need it to purify our water and air, to maintain our soils, and to regulate our climate,” said Martine Maron of the University of Queensland, Australia, who led the Royal Society study.

“Yet even as we increase the extent of protected areas, they don’t necessarily prevent the loss of natural systems. They’re often located in areas that might not have been lost anyway – and the current target of protecting 17% of terrestrial systems will never be enough to protect species as well as provide the benefits humanity needs.”

And her co-author James Watson, of the Wildlife Conservation Society, who is also based at the University of Queensland, said: “We need a big, bold plan.

“There is no doubt that when we add up the different environmental goals to halt biodiversity loss, stabilise runaway climate change and to ensure other critical ecosystems services such as pollination and clean water are maintained, we will need far more than 50% of the Earth’s natural systems to remain intact.”

Watching planet birth, citizen science and a quantum love song

In the latest episode of Physics World Weekly, we’re joined by the astronomer and science communicator Chris Lintott. A presenter on the BBC TV show The Sky at Night, Lintott discusses the story that broke this week that astronomers have captured the first images of a planet still in formation. PDS 70b is a gas giant 370 light-years from Earth, which was imaged using an instrument on the Very Large Telescope (VLT) in Chile.

When Lintott last featured in a Physics World podcast back in 2015, he spoke about the Zooniverse – the citizen science platform he co-founded, which enables the general public to help scientists crunch through data sets. Lintott gives an update on the Zooniverse, which has expanded significantly during the past three years and currently offers nearly 90 projects, spanning the sciences, arts and humanities.  Lintott believes the combination of human insight and machine learning is emerging as a very powerful scientific tool.

Playing out this week’s show is a song inspired by quantum mechanics, sent by one of our listeners. If you enjoy the podcast then you can subscribe via iTunes or your chosen podcast app.

 

Mantis shrimp strikes again to inspire tougher composite materials

Famous for its great strength, the mantis shrimp pummels its prey with a dactyl club that moves at 80 km/h. But how the brittle material in the club survives repeated high-velocity impacts is something that has puzzled materials scientists.

In 2017, theoretical work by US-based researchers showed that the impact energy is dissipated through the twisting of micro-cracks around spiral fibres within the club and that this prevents fibre breakage and catastrophic damage. That work was led by Purdue University’s Pablo Zavattieri and David Kisailus of the University of California, Riverside – and their teams have now performed a combination of computational and experimental studies that back up this mechanistic theory. Their research also offers a new way of improving the toughness of composite materials.

The aerospace and other industries are keen to develop composite materials that are more resilient and lighter than those available today. Designing better composites is a challenge because there is no ideal tool for predicting the properties of such materials, and the slow process of trial and error can be the only route to success.

Some scientists are taking a different approach and examining materials that have emerged from nature’s own process of trial and error – that is, evolution. In this search, scientists have been drawn to the strength displayed by the mantis shrimp and its smashing dactyl club.

Strengthening fibres

The skeleton of the club is made of calcium carbonate and calcium phosphate – a composition that would be expected to be brittle, with properties similar to everyday ceramics. However, its strength seems to lie in the spiral (helicoidal) architecture of fibres within the club. These fibres are made of chitin, a long-chain polymer commonly found in the exoskeletons of crustaceans and insects. In previous studies, composites designed with a similar spiral architecture were found to be much tougher than the traditional quasi-isotropic composite geometry, in which the directionality of fibres is shifted by 45° between layers.
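
As a simple illustration of the difference between the two architectures (with an arbitrary 10° pitch chosen purely for illustration), here are the ply-angle sequences for a conventional quasi-isotropic stack and a helicoidal one:

```python
# Toy comparison of ply-angle sequences (angles in degrees).
quasi_isotropic = [(45 * i) % 180 for i in range(16)]   # fibre direction jumps 45 degrees per layer
helicoidal = [(10 * i) % 180 for i in range(16)]        # fibre direction advances by a small pitch per layer
print(quasi_isotropic)   # [0, 45, 90, 135, 0, 45, ...]
print(helicoidal)        # [0, 10, 20, 30, 40, 50, ...]
```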

Further microscopic analysis of the shrimp’s mighty club revealed that millions of micro-cracks appeared after an assault on a prey. However, these cracks are twisted around the spiral fibres without breaking the fibres themselves. “It’s okay to have multiple micro-cracks, as long as they don’t coalesce to form one big crack that splits the material,” explains Zavattieri.

Shrimp material

Using these observations, Zavattieri, Kisailus and colleagues delved further into the mechanism of how exactly a helicoidal architecture imparts strength to a material. They proposed the twisting cracks hypothesis – suggesting that the helicoidal architecture enabled the energy from an impact to dissipate within spiral micro-cracks, so preventing catastrophic failure.

To test this hypothesis, the scientists developed a theoretical model, backed up by 3D simulations, to examine the local stress intensity factors. This confirmed that as the twisting cracks dissipate energy within the shrimp’s dactyl club, the local strain is reduced.

3D printing

The scientists further validated the twisting cracks hypothesis by combining computational and experimental biomimetic models. Composites with spiral architectures were made from carbon fibres and an epoxy matrix using traditional composite processes and 3D printing.

“The advantage of 3D printing is that you can make composites exactly how you want them, with different angles between layers,” says Zavattieri. “Of course there are some differences, because you’re printing fibres 1 mm in diameter and in nature it is a nanometre, but it allows you to demonstrate the mechanism.”

Pressure was applied to the top of the 3D printed composite bars to bend them. In the helicoidal materials the resultant fractures were observed to twist. Camera and digital image correlation techniques were used to examine crack shape, stress distribution and energy dissipation mechanisms. These techniques confirmed the twisting cracks mechanism and demonstrated how it improved fracture resistance in composites.

“They’ve coupled all of the essentials – from examining the microstructure of the natural material to 3D printing experiments – and so clearly demonstrated a successful bottom-up design of materials,” says a materials expert, who wished to remain anonymous.

Zavattieri is excited that this basic work can be immediately implemented to create new composites using available technology. He envisions that the extrapolation of these studies will have numerous applications. For instance, Kisailus’ and Zavattieri’s groups are currently developing lightweight fibre-reinforced composites for aerospace, automotive and civil engineering.

“We are beginning to observe that borrowing ideas from nature is very beneficial,” says Zavattieri.

The research is described in the Journal of the Mechanical Behavior of Biomedical Materials and the International Journal of Solids and Structures.

Physical versus chemical paradigms play off in nanomedicine

Packaging requires specificity

Patrick Couvreur, professor at UMR CNRS 8612 in Paris, France, was publishing work on nanoparticle drug carriers as far back as 1979. His early work in the field included the development of biodegradable polyalkylcyanoacrylate (PACA) nanoparticles that could make doxorubicin (DOX) – a widely used anticancer drug – invisible in vivo, helping it get past the multiple-drug-resistant defences of hepatocarcinoma. Further developments in this technology led to Livatag®, which showed so much promise that it was fast-tracked by the US Food and Drug Administration (FDA). Speaking to attendees of Nanotech France 2018, Couvreur pointed out that in some ways Livatag has been a victim of its own success. In phase II clinical trials the Livatag survival curve was better than the “chemoembolization” control arm, in which anti-cancer drugs are injected directly into the blood vessel feeding a cancerous tumour. However, in the multicentric (conducted at more than one medical centre) phase III clinical trial, the survival curve overlapped with the control arm of “polychemotherapy” treatment involving several different drugs.

Couvreur has gone on to develop other medicines based on the same approach, such as a monoclonal antibody–streptavidin approach for targeting Alzheimer’s disease. Despite these successes and the time elapsed since the first demonstration of PACA-encapsulated DOX, Couvreur told attendees that the number of nanomedicines on the market, or even in phase III clinical trials, remains very low. He suggested that reasons for this include issues with loading and the fact that a fraction of the drug remains at the surface of the nanoparticles, where it is not protected and its release is not controlled. “What was needed was a move from a physical to a chemical encapsulation paradigm using linkers,” said Couvreur.

Chemistry takes over

Patrick Couvreur addresses attendees at Nanotech France 2018

Couvreur described “squalenoylation” as “a new platform for nanomedicines”, giving as an example the successful nanoassembly of the anticancer drug gemcitabine with squalene linkers – SQGem. In the case of doxorubicin linked to squalene nanoparticles, features that improve the medicine’s efficacy include the elongation of the nanoparticles by the blood flow along streamlines, which gives longer activity post-injection. In addition, interactions between SQGem and lipoproteins mean that it is readily transported by them, particularly cholesterol-rich lipoproteins. Tumour cells attract cholesterol to multiply, so this transport provides an indirect targeting mechanism.

Other applications of squalenoylation include nanoparticles of squalene with cis-diamminedichloroplatinum (CDDP) to increase intracellular delivery of platinum and the production of reactive oxygen species (ROS). Couvreur and his team have also investigated the possibility of combining squalene with adenosine to treat spinal cord injury and brain ischaemia. Here the blood-brain barrier can pose a challenge, but Couvreur and colleagues found that the nanoparticles interact with peripheral adenosine receptors, relaxing the brain vessels and inducing neuroprotection of the brain microcirculation. As a result, reperfusion improves while the nanoparticles themselves do not cross the blood-brain barrier.

Getting physical with viruses

Antibiotics – which revolutionized medicine in the 20th century – are among the most frequently administered drugs available, on account of their broad efficacy against a range of infectious bacterial diseases. However, no such drug exists to combat viral infections – yet. Following on from Couvreur, Francesco Stellacci, a professor at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, told attendees at Nanotech France 2018 about work using decorated gold nanoparticles that seem to mimic host cells. The nanoparticles lure viruses in to bind and attempt to infect them, but instead the nanoparticles apply a physical pressure that ruptures and disarms the virus.

The mechanism here is physical, which has the advantage that it is also non-toxic. “There are a lot of virucidal molecules out there, but they are toxic,” said Stellacci, highlighting alcohol as an example. Mild administration of alcohol may have other benefits, but its intake is not effective as a virucide.

The gold nanoparticle virucides also work at nanomolar concentrations – significant, since most FDA-approved drugs act at nanomolar concentrations. In addition, Stellacci and his team tested them against wild-type viruses extracted from patients and found that they are not only effective but that the effect is irreversible. This means that once the gold nanoparticle virucides have reduced the viral population in a sample by a 2-log difference, the depletion holds even after dilution of the whole sample – something that is not true of some alternative antiviral substances such as heparin.

Stellacci and colleagues demonstrated the approach on respiratory syncytial virus (RSV), which kills half a million people each year. They have also demonstrated the efficacy of the puncturing ligands without the gold nanoparticle, attaching them to cyclodextrin instead. His team are now attempting to apply the approach to rotavirus, which causes diarrhoea – one of the main causes of death in children under five across the world.

Catalysis plays a role

Developments in catalysis may also have spin-off benefits for drug delivery. Jean-Pierre Mahy, a professor at the Université de Paris Sud in France, spoke to Nanotech France attendees about some of the progress in designing artificial enzymes that “combine the robustness of chemical catalysts with the activity of enzymes in mild conditions.”

One of the primary goals of his research has been to mimic the cytochrome P450 hemoproteins for selective oxidation – no small feat. The heme moiety of P450s has been described as “responsible for the remarkable and often exquisite, catalytic prowess of these enzymes”. As natural approaches involve multiple electron-transfer processes and are very fiddly to reproduce, Mahy and colleagues turned to artificial hemoproteins (hemoartzymes) with monooxygenase activity. To provide robust structure, high loading, enzyme protection and potential recycling, their recent work has focused on metal-organic frameworks (MOFs).

Mahy and colleagues have shown that microperoxidase 8 (MP8) – a heme octapeptide obtained by hydrolytic digestion of horse heart cytochrome c – can have both peroxidase-like and cytochrome P450-like activities. They combined this with a MOF made of MIL-101 nanoparticles (where MIL stands for Materials of Institut Lavoisier) and were able to demonstrate charge-selective oxidation activity. They have also designed an artificial reductase based on a water-soluble polyimine polymer decorated with hydrophobic groups that allows the use of O2 as an oxidant. “This is the first entirely synthetic heme monooxygenase,” said Mahy.

Alongside these demonstrations of the bioactivity of metalloenzymes, the ability to compartmentalize them in MOFs has suggested potential compatibility with living cells, and it is here that possible therapeutic applications really come into play. Some of Mahy’s most recent work has demonstrated artzymes catalysing organic reactions at the surface of living cells. The cells can then enantioselectively catalyse the abiotic Diels-Alder cycloaddition of cyclopentadiene and azachalcone, and as Mahy told attendees, “This could be used to activate drugs.” In addition, there is potential for on-site synthesis of drugs and metabolites.

Nanotech France is an annual conference that took place this year in Paris on 27–29 June.

All-optical ultrasound delivers video-rate tissue imaging

Ultrasound is one of the most common medical imaging tools, but the electronic components in ultrasound probes make it difficult to miniaturize them for endoscopic applications. Such electronic ultrasound systems are also unsuitable for use within MRI scanners.

To address these shortcomings, researchers from University College London have developed an ultrasound system that uses optical, instead of electronic, components. The team has now demonstrated the first use of an all-optical ultrasound imager for video-rate, real-time 2D imaging of biological tissue (Biomed. Opt. Express 9 3481).

“All-optical ultrasound imaging probes have the potential to revolutionize image-guided interventions,” says first author Erwin Alles. “A lack of electronics and the resulting MRI compatibility will allow for true multimodality image guidance, with probes that are potentially just a fraction of the cost of conventional electronic counterparts.”

All-optical ultrasound systems eliminate the electronic transducers in standard ultrasonic probes by using light to both transmit and receive the ultrasound. Pulsed laser light generates ultrasound waves, scanning mirrors control the transmission of the waves into tissue, and a fibre-optic sensor receives the reflected waves.

The team also developed methods to acquire and display images at video rates. “Through the combination of a new imaging paradigm, new optical ultrasound generating materials, optimized ultrasound source geometries and a highly sensitive fibre-optic ultrasound detector, we achieved image frame rates that were up to three orders of magnitude faster than the current state-of-the-art,” Alles explains.

Optical components are easily miniaturized, offering the potential to create a minimally invasive probe. The scanning mirrors built into the device enable it to acquire images in different modes, and rapidly switch between modes without needing to swap the imaging probe. In addition, the light source can be dynamically adjusted to generate either low-frequency ultrasound, which penetrates deep into tissue, or high-frequency ultrasound, which offers higher-resolution images at a shallower depth.
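
A back-of-envelope calculation illustrates this trade-off: the acoustic wavelength in soft tissue (sound speed roughly 1540 m/s) sets the achievable resolution and shrinks as the frequency rises, while attenuation grows with frequency and limits the depth. The frequencies below are illustrative, not the probe’s actual operating points.

```python
# Rough wavelength estimates in soft tissue at a few illustrative ultrasound frequencies.
c = 1540.0                                    # approximate speed of sound in soft tissue, m/s
for f_mhz in (5, 15, 30):
    wavelength_um = c / (f_mhz * 1e6) * 1e6   # wavelength in micrometres
    print(f"{f_mhz} MHz -> wavelength ~ {wavelength_um:.0f} um")
```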

The team tested their prototype system by imaging a deceased zebrafish, as well as an ex vivo pig artery manipulated to emulate the dynamics of pulsing blood. The all-optical device exhibited comparable imaging capabilities to an electronic high-frequency ultrasound system, and captured the dynamics of the pulsating carotid artery. The system demonstrated a sustained frame rate of 15 Hz, a dynamic range of 30 dB, a penetration depth of at least 6 mm and a resolution of 75×100 µm.

To adapt the technology for clinical use, the researchers are developing a long, flexible imaging probe for free-hand operation, as well as miniaturized versions for endoscopic applications.

Static electric field suppresses superconductivity

A static electric field can be used to manipulate the superconducting state of metallic superconducting thin films, according to new experiments by researchers in Italy. The effect, which was first put forward by the London brothers, Fritz and Heinz, in their original formulation of superconductivity back in 1935, might be exploited in novel-concept devices such as supercurrent and Josephson field-effect transistors, as well as classical and possibly even quantum bits.

“It seems that we are realizing a novel phase of the superconducting state driven by electric fields,” says Francesco Giazotto of the Consiglio Nazionale delle Ricerche (CNR) and the Scuola Normale Superiore in Pisa, who led this research effort. “At the moment, we are unclear as to the type of phase transition we are inducing but our finding definitely represents something very intriguing from the fundamental physics point of view.”

Electrostatic fields should not affect either a metal or a superconductor

Superconductivity is the complete absence of electrical resistance in a material and is observed in many materials when they are cooled to below their superconducting transition temperature (Tc). In the Bardeen–Cooper–Schrieffer (BCS) theory of (“conventional”) low-temperature superconductivity, this occurs when electrons overcome their mutual electrical repulsion and form “Cooper pairs” that then travel unimpeded through the material as a supercurrent.

According to the theory of electrostatic screening, an electrostatic field should not have any effect on either a metal or a superconductor. Giazotto and colleagues have now turned this idea on its head and have found that an intense electric field can dramatically affect the superconducting state, be used to control the supercurrent, and, at sufficiently intense fields, quench the superconductivity altogether.

Spatial deformation of the Cooper pairing parameter?

The researchers obtained their results by applying intense static electric fields, on the order of 10⁸ V/m, through either the side or bottom gates of all-metallic supercurrent transistors made of two different BCS superconducting thin films (a titanium- and an aluminium-based one). The devices were made using standard lithography and simple metallic thin-film deposition techniques.

Giazotto and colleagues say that the superconductivity quenching might come from a spatial deformation of the Cooper pairing parameter by the electric fields localized at the surface of the superconductor. This leads to a reduced available area through which supercurrent can flow. More experiments will be needed to confirm this hypothesis, however.

“From the basic physics point of view, our results suggest that there are still some very important aspects of conventional superconductivity that need to be understood,” says Giazotto. “In this context, I’d say that we are still rather far from understanding the microscopic mechanism driving the phenomena we have observed. We believe that such a field-effect-driven phase transition in superconductivity could represent a valuable platform for developing new theories within the BCS model.”

A whole new area of research?

As for applications, it might be exploited to make new-concept devices, including all-metallic superconducting field-effect transistors and advanced quantum information architectures, he tells Physics World. Other possible devices include tunable Josephson weak links or interferometers and Coulombic and coherent caloritronic structures.

Reporting their study in Nature Nanotechnology 10.1038/s41565-018-0190-3, the researchers say they are now busy trying to better understand the microscopic origin of the field effect. “From the experimental side, we are looking at this effect in a wider range of metallic superconductors and investigating the impact of electric fields on Josephson interferometers,” says Giazotto. “In principle, our work may have opened up a whole new area of research focusing on the role of intense electric fields on superconductivity. Time will reveal whether this is the case or not.”

Fooled by time

In ordinary experience, time is permanently present in the world. Continuous and flowing, it moves in one direction – from past to future, the border being a momentary “now”. Thanks to this movement, humans remember, perceive, plan and act consciously and deliberately. Humans do so as individuals and in groups, transforming themselves and the world, creating culture, history and science. Even doing physics, in which you creatively use what you already know to make fresh discoveries, requires living time this way.

Yet many physicists declare such everyday experience of time a mirage. “For we convinced physicists,” Einstein wrote, “the distinction between past, present, and future is only an illusion, however persistent.” Brian Greene wrote in the New York Times that “the temporal categories of past, present and future” are “subjective” and that the “everyday conception of time appears illusory.” One chapter in theoretical physicist Carlo Rovelli’s book, Reality Is Not What It Seems, was even entitled “Time does not exist.”

The fog of time

Rovelli appears less dismissive in his new book The Order of Time (for more information on it see this interview). Time can be approached in two ways, he writes – either as something foundational to human experience or as foundational to the world itself. He describes the former as a “fog” or “blur” that results from us seeing nature at a distance. Sure, this fog is important in the sense that it opens up a space or dimension for us to be human – “we are this space”, he writes – and to encounter the cosmos. But time, he insists, is not a part of that cosmos. It is a mere “epiphenomenon”, like seeing the Sun “set”.

Rovelli admits that time is “perhaps the greatest remaining mystery”. But physics promises to dispel that mystery, for it sees the difference between the two approaches and studies “the nature of time free from the fog caused by our emotions”. To help us picture this dual view of time, Rovelli invokes the fool in Paul McCartney’s song The Fool on the Hill, who – as Rovelli puts it – “sees the Earth turn when he sees the setting sun”. Just as the fool can appreciate something deep – the Earth spins – from watching the Sun go down, so we ought also to be able to perceive the “profound structure of the world [even though] time as we know it no longer exists”.

Rovelli’s book seems to leave room for the everyday experience of time. Still, he does not call that kind of time fully real in the sense of belonging to the ultimate elements of the world that physicists study. Everyday time is not an illusion, he thinks, but not real either. One might compare his approach with that of the philosopher Immanuel Kant, who said that time is “empirically real”, encountered and measurable in the world we live in, yet “transcendentally ideal”, or part of that world only insofar as it is a precondition for having any experience of the world at all – a feature of the mind’s programming software, if you like.

But experienced time comes first, even before the distinction between it and physicists’ time. Without experienced time, humans lack any encounter with the world at all, from storms to supernovae. The world is disclosed in, and thanks to, experienced time, which therefore has a kind of priority over what appears.

For centuries, philosophers and scientists have been tempted to seek some seemingly permanent, unchanging stuff in experience – quantum fields, say – that gives rise to everything else, including human experience, and to name this the “real”. The trouble with the “reality trick” is not just that we keep changing our minds about the fundamental stuff. It also downgrades the importance of everything else, most notably our lived experience. Rovelli seems to recognize this in a chapter on how language can promote certain erroneous assumptions about reality and existence. Still, it is all too tempting to revert to language that suggests that storms and supernovae are not real but merely epiphenomena.

To study a melody, for instance, you have to experience it. Only after that experience can we break it down and use a clock to say that such and such a note occurred 17 or 92 seconds into it. Only because of a continuous qualitative movement is there a unified melody in the first place to which that note belongs. This upsurge of reality was dubbed “duration” by Henri Bergson, who regarded it as the font of the abstract, homogeneous time measured by clocks; Martin Heidegger described something similar under the name of “temporality”. This is not wordplay, but part of an attempt to describe fundamental features of the world evocatively – an attempt that is not merely “poetic” but indicative of qualities not otherwise accessible.

The critical point

You can’t explain time by putting physicists in charge of “what time really is” and then trying to stitch this together with experienced time. That inevitably results in experienced time having a secondary status – discussed only in humanities courses that get axed from the curriculum when the next budget crisis hits. The task for philosophers of time is to explain that physicists’ conceptions of time are highly selective, mathematized ideas that are useful, but grow out of human concerns that arise in experienced time.

“He never listens to them,” runs the final verse of McCartney’s song. “He knows that they’re the fools.” That’s where McCartney’s lyrics annoy me, for I hear the reality trick being played yet again. Yes, I know the guy on the hill can see simultaneously both the Sun setting and the Earth spinning, which is good. And I know I’m over-reading the song. But the lyrics claim superciliously that this guy sees deepest of all, and that those who see otherwise are deluded. To clear up the mystery of time, and many other issues dividing philosophers and physicists, we must stop insisting there is just one right way to see these things. That makes fools of us all.

Remote sensing reveals olive tree infection early

Europe’s olive trees are succumbing to a bacterium spread by sap-feeding insects. But remote imaging from planes or drones could detect infected trees before their symptoms appear.

Although common in the Americas, the Xylella fastidiosa bacterium spread to Europe only recently. It has destroyed many olive orchards in Italy’s Apulia region. There is no cure, so culling infected trees to prevent further spread of the disease is the only option. That means early detection is crucial.

“The spread of plant diseases is predicted to become an increasing problem with climate change, including for the UK,” said Rocio Hernandez-Clemente of Swansea University, UK. “International cooperation is essential for early detection, to control damage and prevent spread. This study demonstrates the possibility of detection of symptoms at an early stage, and may be adapted to drones and aircraft for widespread use”.

Xylella causes disease in more than 350 plants but olive trees are particularly vulnerable. Infection causes their branches and twigs to wither, and their leaves to scorch.

Infographic showing increasing severity of Xylella fastidiosa symptoms in olive tree crowns, in colours from green to red, as detected by thermal imaging. (Courtesy: Alberto Hornero/Swansea University)

The remote sensing technique detects the infection using cameras that perform hyperspectral and thermal image analysis. The team also tested trees on the ground to confirm that their findings were accurate.
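
To give a feel for the kind of indicators such airborne surveys rely on, here is a toy sketch combining a standard vegetation index (NDVI) from two hyperspectral bands with a crown-temperature anomaly from thermal imagery. The numbers and thresholds are invented for illustration; the actual study used a more sophisticated plant-trait and machine-learning analysis.

```python
# Toy sketch: flag possibly infected tree crowns from spectral and thermal indicators.
import numpy as np

# per-crown reflectance in the red and near-infrared bands, and crown temperature (invented values)
red = np.array([0.05, 0.06, 0.12, 0.05])
nir = np.array([0.45, 0.44, 0.30, 0.46])
temp_c = np.array([28.1, 28.4, 31.0, 27.9])

ndvi = (nir - red) / (nir + red)           # low NDVI suggests declining canopy health
temp_anomaly = temp_c - np.median(temp_c)  # warm crowns suggest reduced transpiration

suspect = (ndvi < 0.6) & (temp_anomaly > 1.5)
print(np.where(suspect)[0])                # indices of crowns flagged for ground inspection
```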

“Our study found that the effects of the bacterial infection can be remotely detected before any visible symptoms appear, allowing for rapid and accurate mapping of Xylella-infected olive trees across target orchards,” said Peter North of Swansea University, UK.

The team reported their results in Nature Plants.

 

 
