
Boron arsenide crystals could cool computer chips

Unwanted heat is a big problem in modern electronic systems that are based on conventional silicon circuits – and the problem is getting worse as devices become ever smaller and more sophisticated. Carrying away this heat is critical and researchers are developing efficient heat-conducting materials to meet this challenge. Three teams from around the US now say that crystals of the semiconductor boron arsenide (BAs) show promise in this context, having measured a thermal conductivity of more than 1000 W/m/K at room temperature for this material. This value is three times higher than that of copper or silicon carbide, two materials that are routinely employed for spreading heat in electronics.

“The value we measured (on crystal sizes of about 0.5 mm) is surpassed only by diamond and the basal plane value of graphite,” says Bing Lv of the University of Texas (UT) at Dallas, who led one of the research groups together with David Cahill of the University of Illinois at Urbana-Champaign.

The other two groups, led by Zhifeng Ren of the University of Houston (UH) and Yongjie Hu of the University of California at Los Angeles (UCLA), measured local thermal conductivities of 1000 W/m/K and 1300 W/m/K respectively, with Ren’s group also measuring a value of 900 W/m/K on large crystals of about 4 mm x 2 mm x 1 mm.

“The UH/UT Austin paper reported transport data of about 900 W/m/K across a length of at least 2 mm using Raman spectroscopy across the same distance and TDTR, finding about 1000 W/m/K locally on a spot less than 20 microns,” explains Ren.

Predicted thermal conductivity as high as that of diamond

Back in 2013, researchers predicted that BAs should have a theoretical thermal conductivity as high as that of diamond (2200 W/m/K), the best heat conductor known. However, to reach this high value, high-quality crystals are needed since defects and impurities dramatically degrade thermal properties.

Lv and colleagues, then at the University of Houston, made BAs crystals in 2015 but the material only had a thermal conductivity of 200 W/m/K. Since then the researchers have optimized their crystal-growing process using a modified version of a technique called chemical vapour transport. Here, they place boron and arsenic in a chamber containing hot and cold areas and the two elements are then transported (by different chemicals) from the hot end to the cooler end, where they combine to form crystals.

Heat in crystals is carried by phonons (which are vibrations of the crystal lattice). Lv explains that the large difference in the masses of boron and arsenic atoms creates a big frequency gap between acoustic and optical phonons, which allows phonons to travel more efficiently through the crystals. The researchers measured the thermal conductivity of their BAs using a method called time-domain thermoreflectance or TDTR, which was developed in Cahill’s lab in Illinois.

Ren’s team also used chemical vapour transport to make their large crystals (measuring 4 mm x 2 mm x 1 mm, as mentioned). These are a significant improvement on the ones they previously made, which were less than 500 microns across and were thus too small for certain measurement techniques. The researchers also measured the thermal conductivity of their BAs using TDTR as well as some other techniques.

Hu and colleagues, for their part, made BAs single crystals more than 2 mm in size with undetectable defects and measured their thermal conductivity using the TDTR technique. Their spectroscopy study, combined with atomistic calculations, reveals that the phonon vibration spectrum of BAs allows for “very long phonon mean free paths and strong high-order anharmonicity through a four-phonon process”.

First known semiconductor with ultrahigh thermal conductivity

The results from all three groups mean that BAs is the first known semiconductor with a bandgap comparable to that of silicon (around 1.5 eV) to have an ultrahigh thermal conductivity, and it could be a revolutionary thermal management material, according to the researchers.

And that is not all: “There is also a close match between the thermal expansion coefficients of BAs and silicon. This is a non-negligible advantage for minimizing thermal stresses and reducing the need for thermal interface materials when incorporating it into conventional semiconducting devices,” says Lv.

“Our team is now busy looking into other processes to improve the yield of this material for large-scale applications,” he tells Physics World. “We are also trying to control the types of defects that are present in these crystals and better understand how they affect its thermal conductivity.”

The research is detailed in three papers in Science.

Rising sea levels could cost the world $14 trillion a year by 2100

Failure to meet the United Nations’ 2 °C warming limit will lead to sea level rise and dire global economic consequences, new research has warned.

Published today in Environmental Research Letters, a study led by the UK National Oceanography Centre (NOC) found that flooding from rising sea levels could cost $14 trillion worldwide annually by 2100 if the target of holding global temperatures below 2 °C above pre-industrial levels is missed.

The researchers also found that upper-middle income countries such as China would see the largest increase in flood costs, whereas the highest income countries would suffer the least, thanks to existing high levels of protection infrastructure.

Svetlana Jevrejeva, from the NOC, is the study’s lead author. She said: “More than 600 million people live in low-elevation coastal areas, less than 10 metres above sea level. In a warming climate, global sea level will rise due to melting of land-based glaciers and ice sheets, and from the thermal expansion of ocean waters. So, sea level rise is one of the most damaging aspects of our warming climate.”

Sea level projections exist for emissions scenarios and socio-economic scenarios. However, no scenarios have covered limiting warming to below the 2 °C and 1.5 °C targets during the entire 21st century and beyond.

The study team explored the pace and consequences of global and regional sea level rise with restricted warming of 1.5 °C and 2 °C, and compared them to sea level projections with unmitigated warming following emissions scenario Representative Concentration Pathway (RCP) 8.5.

Using World Bank income groups (high, upper middle, lower middle and low income countries), they then assessed the impact of sea level rise in coastal areas from a global perspective, and for some individual countries using the Dynamic Interactive Vulnerability Assessment modelling framework.

Jevrejeva said: “We found that with a temperature rise trajectory of 1.5 °C, by 2100 the median sea level will have risen by 0.52 m. But, if the 2 °C target is missed, we will see a median sea level rise of 0.86 m, and a worst-case rise of 1.8 m.

“If warming is not mitigated and follows the RCP8.5 sea level rise projections, the global annual flood costs without adaptation will increase to $14 trillion per year for a median sea level rise of 0.86 m, and up to $27 trillion per year for 1.8 m. This would account for 2.8% of global GDP in 2100.”

The projected difference in coastal sea levels is also likely to mean tropical areas will see extreme sea levels more often.

“These extreme sea levels will have a negative effect on the economies of developing coastal nations, and the habitability of low-lying coastlines,” said Jevrejeva. “Small, low-lying island nations such as the Maldives will be very easily affected, and the pressures on their natural resources and environment will become even greater.

“These results place further emphasis on putting even greater efforts into mitigating rising global temperatures.”

Thermal imaging monitors radiotherapy efficacy


The ability to assess the impact of radiation on malignant tumours during a course of radiotherapy could help improve its effectiveness for individual patients. Based on tumour response, physicians could modify the treatment regimen, dose and radiation field accordingly.

Israeli researchers have now demonstrated that thermography may provide a viable radiotherapy monitoring tool for such treatment optimization. They have developed a method to detect tumours in a thermal image and estimate changes in tumour and vasculature during radiotherapy, validating this in a study of six patients with advanced breast cancer (J. Biomed. Opt. 23 058001).

Thermography had been rejected as a breast cancer detection tool, due to its suboptimal sensitivity and specificity. However, for an already detected tumour undergoing radiation or chemotherapy, it could prove a highly effective monitoring tool, when incorporating algorithms developed by the research team.

The multi-institutional team had conducted research on thermal imaging to understand tumour aggressiveness in animal models. They hypothesized that because malignant tumours are characterized by abnormal metabolic and perfusion rates, they will generate a different temperature distribution pattern compared with healthy tissue. By measuring skin temperature maps at the tumour location before and during treatment, the reaction of a tumour to radiotherapy can therefore be assessed.


Israel Gannot from Tel-Aviv University and co-authors developed a four-step algorithm to analyse the thermal images. First, images were converted from colour to grey scale and a fixed temperature range of 7 °C was set for all images, to enable comparison of the entropy (which characterizes the homogeneity of the image) between different images. Next, images were filtered using a Frangi filter designed to emphasize tubular structures. This filter highlighted blobs of heat (the malignant tumour) and long, narrow tubular objects (the blood vessel network). Images were then enlarged sevenfold to observe local temperature changes in the blood vessels.

In the final step, feature extraction, the algorithm calculates entropy in the cropped thermal image of the tumour area and in the filtered tumour image. It then estimates changes in tumour regularity and vasculature shape during radiotherapy.
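
The four steps can be sketched in a few lines of Python. The snippet below is an illustrative outline only, assuming the scikit-image library is available; the temperature window, function names and threshold are our own placeholders rather than parameters from the published algorithm.

```python
# Minimal sketch of the four-step analysis, assuming scikit-image and NumPy.
# Parameter values and names are illustrative, not those used in the study.
import numpy as np
from skimage.filters import frangi
from skimage.measure import shannon_entropy
from skimage.transform import rescale

def analyse_thermal_image(temperature_map, t_min=30.0, t_range=7.0):
    """temperature_map: 2D array of skin temperatures in degrees Celsius."""
    # Step 1: grey-scale conversion over a fixed 7 degC window, so entropy
    # values can be compared between images and imaging sessions.
    grey = np.clip((temperature_map - t_min) / t_range, 0.0, 1.0)

    # Step 2: the Frangi filter emphasizes tubular (vessel-like) structures.
    vessels = frangi(grey)

    # Step 3: sevenfold enlargement to inspect local temperature changes
    # along the blood vessels.
    vessels_large = rescale(vessels, 7)

    # Step 4: feature extraction -- entropy of the tumour-area image and of
    # the filtered (vessel-enhanced) image.
    return {
        "entropy_grey": shannon_entropy(grey),
        "entropy_vessels": shannon_entropy(vessels),
        "bright_vessel_pixels": int((vessels_large > vessels_large.mean()).sum()),
    }
```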

Patient imaging

The six patients were women with stage IV breast cancer and distant metastatic disease. None had undergone surgical resection of their tumours, which had a diameter larger than 1 cm at a depth of less than 1 cm. All patients received 15 radiation fractions of 3 Gy, administered over three weeks.

The patients underwent thermal imaging before each radiotherapy session and a day after the end of the session. Room temperature and humidity were controlled during image acquisition and fluorescent lights were turned off. The thermal camera, positioned 1 m from the patient, acquired images containing either 320×256 or 320×240 pixels.

The authors report that entropy was reduced in the tumour areas, for all patients, during radiation treatment. They described the appearance of the tumour vasculature as “a crab with many arms”. To quantify changes in the shape of vascular networks, they converted the images into binary images and counted the number of objects before and after radiotherapy. They saw a reduction in the number of objects, indicating a reduction in the vessels supplying nutrients to the tumour.

Expanding applications

The researchers selected breast cancer for the initial research because co-researcher Merav Ben-David, from Sheba Medical Center, specializes in breast cancer treatment. They are now looking at additional applications, such as the treatment of cervical cancer and head-and-neck cancer.

“We are collecting more data to run big data statistics,” Gannot tells Physics World. “We are also starting to implement the use of this technology as a tool for early warning of breast cancer by women at their home, using a thermal camera attached to a cell phone with our algorithms implemented in the smartphone app. This is intended for use in addition to mammography. It could fill the time span between mammography examinations when many cancers develop.”

In future research, the authors are planning to use thermal imaging devices with multiple angles and perform real-time analysis. They are planning larger studies to evaluate the efficacy of thermography to monitor radiotherapy, chemotherapy and immunotherapy treatments.

Neural networks, explained

What are neural networks?

Artificial neural networks are a form of machine-learning algorithm with a structure roughly based on that of the human brain. Like other kinds of machine-learning algorithms, they can solve problems through trial and error without being explicitly programmed with rules to follow. They’re often called “artificial intelligence” (AI), and although they are much less advanced than science-fiction AIs, they can control self-driving cars, deliver ads, recognize faces, translate texts and even help artists design new paintings – or create bizarre new paint colours with names like “sudden pine” and “sting grey”.

How do neural networks work?

Neural networks were first developed in the 1950s to test theories about the way that interconnected neurons in the human brain store information and react to input data. As in the brain, the output of an artificial neural network depends on the strength of the connections between its virtual neurons – except in this case, the “neurons” are not actual cells, but connected modules of a computer program. When the virtual neurons are connected in several layers, this is known as deep learning.

A learning process tunes these connection strengths via trial and error, attempting to maximize the neural network’s performance at solving some problem. The goal might be to match input data and make predictions about new data the network hasn’t seen before (supervised learning), or to maximize a “reward” function in order to discover new solutions to a problem (reinforcement learning). The architecture of a neural network, including the number and arrangement of its neurons, or the division of labour between specialized sub-modules, is usually tailored to each problem.
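
As a concrete (and deliberately tiny) illustration of supervised learning, the sketch below uses plain NumPy to train a one-hidden-layer network on the XOR problem by gradient descent. It is a generic toy example, not any of the systems described in this article.

```python
import numpy as np

# Toy supervised learning: a one-hidden-layer network learns XOR by trial and
# error, i.e. by repeatedly nudging its connection strengths (weights).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden connection strengths
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output connection strengths
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error
    grad_out = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # "Tune the connection strengths" a little at each step
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```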

Why have I heard so much about them?

The growing availability of cheap cloud computing and graphics processing units (GPUs) is a key factor behind the rise of neural networks, making them both more powerful and more accessible. The availability of large amounts of new training data, such as databases of labelled medical images, satellite images or customer browsing histories, has also helped boost the power of neural networks. In addition, the proliferation of new open-source tools such as Tensorflow, Keras and Torch has helped make neural networks accessible to programmers and non-programmers from a variety of fields. Finally, success begets success: as the value of neural networks in commercial applications becomes more apparent, developers have sought new ways of exploiting their capabilities – including using them to aid scientific research.

What are neural networks good at?

They’re great at matching patterns and finding subtle trends in highly multivariate data. Crucially, they make progress towards their goal even if the programmer doesn’t know how to solve the problem ahead of time. This is useful for problems with solutions that are complex or poorly understood. In image recognition, for example, the programmer may not be able to write down all the rules for determining whether a given image contains a cat, but given enough examples, a neural network can determine for itself what the important features are. Similarly, a neural network can learn to identify the signature of a planetary transit without being told which features are important. All it needs is a set of sample starlight curves that correspond to planetary transits, and another set of light curves that do not. This makes neural networks an unusually flexible tool, and the fact that neural network frameworks come in “flavours” specialized for tasks such as classifying data, making predictions, and designing devices and systems only adds to their flexibility.

Neural networks are also particularly well suited for projects that generate too much data to be easily sorted or stored, especially if the occasional mistake can be tolerated. Often, they’re used to flag events of interest for human review. In a 2017 study of exoplanet candidates, for example, software engineer Christopher Shallue of Google Brain and astronomer Andrew Vanderburg of the University of Texas at Austin used neural networks to search lists of candidate light curves for those most likely to correspond to true planetary transits. The results enabled them to reduce the number of candidates by more than an order of magnitude. In another astronomy application, a team from the Observatoire de Sauverny in Switzerland used a neural network to examine huge datasets of galaxy images, looking for those that might contain gravitational lenses. Other groups have used neural network classifiers to identify rare, interesting collision events in data from the Large Hadron Collider at CERN.

Another kind of neural network can generate predictions based on input data. Networks of this type have, for example, been used to predict the absorption spectrum of a nanoparticle based on its structure, after being given examples of other nanoparticles and their absorption spectra. Such networks are being used in chemistry and drug discovery as well, for example to predict the binding affinities of proteins and ligands based on their structures.

In combination with a technique called reinforcement learning, neural networks can also be used to solve design problems. In reinforcement learning, rather than trying to imitate a list of examples, a neural network tries to maximize the value of a reward function. For example, a neural network controlling the limbs of a robot might adjust its own connections in a way that, through trial and error, ends up maximizing the robot’s horizontal speed. Another algorithm might control the spectral phase of an ultrashort laser pulse, trying to maximize the ratio of two fragmentation products generated when the laser pulse hits a certain molecule.
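
A bare-bones version of this trial-and-error loop is sketched below. The “reward” here is a made-up stand-in for something like a robot’s walking speed, and the simple hill-climbing rule is a deliberate simplification of the full reinforcement-learning algorithms used in practice.

```python
import numpy as np

# Toy trial-and-error reward maximization: perturb the parameters of a
# controller and keep the change whenever the reward improves.
rng = np.random.default_rng(1)

def reward(params):
    # Hypothetical reward function with a known peak, standing in for
    # something measured from a simulation or experiment.
    target = np.array([0.7, -1.2, 0.3])
    return -np.sum((params - target) ** 2)

params = np.zeros(3)
best = reward(params)
for trial in range(2000):
    candidate = params + rng.normal(scale=0.1, size=3)
    r = reward(candidate)
    if r > best:           # keep changes that increase the reward
        params, best = candidate, r

print(params.round(2), round(best, 4))  # parameters drift towards the reward peak
```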

Sounds great! What’s the catch?

Because neural network algorithms solve problems in whatever ways they can manage, they sometimes arrive at solutions that aren’t particularly useful – and it can take an expert to detect how and where they have gone wrong. Hence, they are not a substitute for a good understanding of the problem. Below are a few possible pitfalls.

Black-box solutions
In general, neural networks (and other machine-learning algorithms) don’t explain how they arrived at their solutions. This can make it harder to understand whether these solutions are exploiting new physics, or are based on a bug or some simple effect that has been overlooked. Machine-learning research is full of anecdotes of algorithms arriving at seemingly perfect solutions that turn out to stem from problems with the algorithm itself. For example, in 2013 researchers at MIT Lincoln Labs tested a computer program that was supposed to learn to sort a list of numbers. It achieved a perfect score, but then the programmers discovered that it had done so by deleting the list. (According to the algorithm’s reward function, a deleted list yielded a perfect score because, technically, the list was no longer unsorted.) In another example, a machine-learning algorithm was used to shape laser pulses to selectively fragment molecules. Although the resulting laser pulses were very complex, in many cases the dominant effect turned out to be the overall change in laser pulse intensity rather than the pulse’s complex structure.

To combat this problem, researchers are working on algorithmic interpretability, developing techniques for discovering how algorithms make their decisions. For example, some image-recognition algorithms can now report which pixels were important in making their decisions, and individual layers of neurons can report which kinds of features (like a dog’s floppy ear) they have learned to find.

Solving the wrong problem
Users of neural networks also have to make sure their algorithm has actually solved the correct problem. Otherwise, undetected biases in the input datasets may produce unintended results. For example, Roberto Novoa, a clinical dermatologist at Stanford University in the US, has described a time when he and his colleagues designed an algorithm to recognize skin cancer – only to discover that they’d accidentally designed a ruler detector instead, because the largest tumours had been photographed with rulers next to them for scale. Another group, this time at the University of Washington, demonstrated a deliberately bad algorithm that was, in theory, supposed to classify husky dogs and wolves, but actually functioned as a snow detector: they’d trained their algorithm with a dataset in which most of the wolf pictures had snowy backgrounds.

Careful review of an algorithm’s results by human experts can help detect and correct these problems. For example, the abovementioned study on star transits flagged suspected exoplanets for human review, rather than simply generating a “We found a new planet!” press release. This was fortunate because most of the “exoplanets” turned out to be artefacts that the algorithm had not learned to detect.

Class imbalances and overfitting
When researchers try to train data-classifying machine-learning algorithms, they often run into a problem called class imbalance. This means that they have many more training examples of one data category than others, which is often the case for studies that are searching for rare events. The result of class imbalance can be an algorithm that doesn’t have enough data to make progress, yet “thinks” it is doing splendidly. To cite one recently reported example from the solar storm team at NASA’s Frontier Development Lab, if solar flares are very rare in the training dataset, the algorithm can achieve near-perfect accuracy by predicting zero solar flares. This is also a problem for planetary transit studies because true planetary transits are relatively rare.
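
The arithmetic behind this pitfall is easy to reproduce; the numbers below are invented purely for illustration.

```python
import numpy as np

# Class imbalance in a nutshell: if only 1% of examples are "flare" events,
# a model that always predicts "no flare" is 99% accurate and still useless.
labels = np.zeros(10_000, dtype=int)
labels[:100] = 1                      # 100 rare positive events
predictions = np.zeros_like(labels)   # a "model" that never predicts a flare

accuracy = (predictions == labels).mean()
recall = predictions[labels == 1].mean()   # fraction of real events caught
print(f"accuracy = {accuracy:.2%}, recall = {recall:.0%}")  # 99.00%, 0%
```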

To address class imbalance, the rule of thumb is to include roughly equal numbers of training examples in each category. Data-augmentation techniques can help with this. However, using data augmentation, or simulated data, can lead to another problem: overfitting. This is one of the most persistent problems with neural networks. In short, the algorithm learns to match its training data very well, but isn’t able to generalize to new data. One likely example is the Google Flu algorithm, which made headlines in the early 2010s for its ability to anticipate flu outbreaks by tracking how often people searched for information on flu symptoms. However, as new data started to accumulate, Google Flu turned out to be much less accurate, and its reported success is now thought to be due to overfitting. In another example, an algorithm was supposed to evolve a circuit that could produce an oscillating signal; instead, researchers at the University of Sussex and Hewlett-Packard Labs in Bristol, UK, found that it evolved a radio that could pick up an oscillating signal from nearby computers. This is a clear example of overfitting because the circuit would only have worked in its original lab environment.

The way to detect overfitting is to test the model against data and situations it hasn’t seen. This is especially important if the model was trained on simulated data (like simulated images of gravitational lenses, or simulated physics), to make sure the model hasn’t learned to use artefacts of the simulation.
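
A minimal numerical illustration of this check, using polynomial fits standing in for models of different flexibility: the more flexible model typically matches the points it was fitted to almost perfectly, while its error on held-out points is noticeably worse.

```python
import numpy as np

# Overfitting shows up as a gap between training error and held-out error.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=x.size)

idx = rng.permutation(x.size)
train, test = idx[:20], idx[20:]          # hold out 10 points the fit never sees

for degree in (3, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    model = np.poly1d(coeffs)
    train_err = np.mean((model(x[train]) - y[train]) ** 2)
    test_err = np.mean((model(x[test]) - y[test]) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, held-out error {test_err:.3f}")
```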

In conclusion

Neural networks can be a very useful tool, but users must be careful not to trust them blindly. Their impressive abilities are a complement to, rather than a substitute for, critical thinking and human expertise.

Ultracold atoms behave like a ferrofluid, say physicists

Collective spin oscillations have been spotted for the first time in an ultracold atomic gas. The discovery was made by Bruno Laburthe-Tolra and colleagues at the University of Paris 13.

Their experiment involves cooling about 40,000 chromium atoms to 400 nK, where all the atoms condense into a single quantum state called a Bose-Einstein condensate (BEC). All of the atoms are in their lowest-energy spin state, which has a non-zero spin magnetic moment.

The BEC is the shape of a rugby ball and is confined in an optical trap. The team’s experiment begins with the chromium spins aligned in a direction perpendicular to the long axis of the BEC. Then, a magnetic field gradient is applied along the long axis of the BEC, which creates an effective coupling between the spins that encourages each spin to point in the same direction as its neighbours.

Tilted spins

Then, a radio-frequency pulse is fired at the BEC, which applies a torque to the spins causing them to rotate. The team then measured the directions of the spins as they evolved over about 40 ms. In the absence of a coupling, the spins should rotate independently and the alignment would be lost.

Instead, the team found that the spins try to maintain their alignment and rotate collectively in a spin wave. Such waves have been seen in solids and liquids – where very short distances between neighbouring atoms can result in strong spin coupling – but this is the first time that the behaviour has been observed in a dilute quantum gas. Indeed, calculations done by Laburthe-Tolra and colleagues suggest that the system behaves very much like a ferrofluid – a liquid that becomes strongly magnetized when placed in a magnetic field.

The research is described in Physical Review Letters.

New app scopes-out neutrinos, hairstyle inspired by the physicist’s favourite shrimp, tracking euro coins

Described as a “new app to demystify the neutrino,” NeutrinoScope has just been launched by Cambridge Consultants and physicists at the UK’s Durham University. It can be downloaded free of charge from iTunes and provides key facts about the elusive particles – including how they are produced in both nuclear reactors and bananas. Augmented reality is used throughout to illustrate, for example, the neutrino flux through a user’s local environment.

This week we published a story about the physicist’s favourite crustacean, the mantis shrimp. This amazing creature has an incredibly strong club that it can use to smash its way out of an aquarium. It can also see circularly-polarized light and some species have spectacular coloration. Now, Bristol hair salon JamesB offers an amazing hairstyle reminiscent of the shrimp’s visual display.

Today more than 20 countries mint their own euro coins. If you receive change in Germany, for example, most of the coins will usually be German – but there will probably be French, Italian or other foreign coins in the mix. In “Euro-mixing in Slovenia: ten years later”, Mojca Čepič and Katarina Susman of the University of Ljubljana look at how foreign euro coins have mixed into the local currency since Slovenia joined the Eurozone in 2007. They found that the percentage of Slovenian coins in circulation stabilized at 28%, which is much higher than their initial prediction.

Double-sided microfluidic blood oxygenator makes artificial placenta

Preterm births account for about 10% of all births in the US and, according to the World Health Organization, this number is increasing rapidly. The survival rate for babies with a gestational age of 28 weeks or less is lower than 50%, with respiratory distress syndrome being the second major cause of death. This is because the lungs are among the last organs to fully develop. One of the main challenges here is to deliver oxygen to the new-borns using external devices, such as mechanical pumps, until their lungs are fully formed, but these ventilators can cause serious problems in themselves.

Researchers at McMaster University in Canada, led by P. Ravi Selvaganapathy, and at Paracelsus Medical University in Germany, led by Christoph Fusch, have now developed a passive lung device that is pumped by the baby’s own heart (in which the arterio-venous pressure differential is between just 20 and 60 mmHg). Such a device is known as an “artificial placenta” and consists of microchannels that efficiently exchange oxygen between the blood and outside air. It would be connected to the umbilical cord of the new-born.

Although the concept itself is not new, the design developed by Selvaganapathy and colleagues makes use of both sides of the microchannel network for gas exchange (as opposed to just one side as in previous devices). This significantly increases the surface area to volume ratio of the device.

343% better in terms of oxygen transfer

The highest-performing “double-sided single oxygenator units” (dsSOUs) that the researchers made were about 343% better in terms of oxygen transfer compared to single-sided SOUs with the same height. They used their design to make a prototype containing a gas exchange membrane with a stainless-steel reinforced thin (50-micron-thick) PDMS layer on the microchannel network. This design was based on a previous one developed in their lab (Biomicrofluidics 12 014107).

In the present work, they succeeded in incorporating a slightly thicker (150-micron) steel reinforced membrane on the other side of the blood channels to increase gas exchange. The new fabrication process means a 100% increase in the surface area for gas exchange while continuing to mimic the placenta by ensuring that the priming volume (how much blood is removed at a time for oxygenation) remains low.

“The key innovation here is developing a large-area microfluidic device,” says Selvaganapathy. “You want it to be microfluidic because a 1-kg baby, for example, might only have 100 ml of blood. You want a device to use only one-tenth of that volume at a time.”

Meeting 30% of the oxygenation needs of a preterm neonate

The team has already produced an optimized oxygenator to build a lung assist device (LAD) that could meet 30% of the oxygenation needs of a preterm neonate weighing between 1 and 2 kg. The LAD provides an oxygen uptake of 0.78-2.86 ml/min, which corresponds to an increase in oxygen saturation from around 57% to 100% in a pure oxygen environment.

Reporting on their work in Biomicrofluidics 12 044101, the researchers say that the design could be improved by coating the surface of the dsSOUs with antithrombin-heparin or polyethylene glycol to improve the anticoagulation properties of PDMS surfaces, which are in contact with blood. “Finally, new designs that have higher gas exchange can be used to provide the sufficient oxygenation in ambient air,” they write.

Jeffrey Borenstein of the Charles Stark Draper Laboratory in Cambridge, Massachusetts, who was not involved in this work, says that this new study “targets an extremely exciting and promising opportunity in artificial organs research, using microfluidics technology to overcome many of the current limitations of respiratory assist devices.

“Selvaganapathy and colleagues’ advance points to microfluidics as a means to reduce the incidence of clotting, miniaturize the device, and potentially operate the system on room air to enable portable and wearable systems,” he tells Physics World.

Record-breaking entanglement uses photon polarization, position and orbital angular momentum

Physicists in China have fully entangled 18 qubits by exploiting the polarization, spatial position and orbital angular momentum of six photons. By developing very stable optical components to carry out technically demanding quantum logic operations, the researchers generated more combinations of quantum states at the same time than ever before – over a quarter of a million. They say that their research creates a “new and versatile platform” for quantum information processing.

Hans Bachor of the Australian National University in Canberra describes the work as a “true tour de force” in photon entanglement. “This team has amazing technology and patience,” he says, having been able to generate entangled pairs of photons “significantly” faster and more efficiently than in the past.

The decades-old dream of building a quantum-mechanical device that can outperform classical computers relies on the principle of superposition. Whereas classical bits exist as either a “0” or a “1” at any point in time, quantum bits, or “qubits”, can take on both values simultaneously. Combining many qubits, in principle, leads to an exponential increase in computing power.

Physicists are now working on several different technologies to boost the qubit count as high as possible. Last year, researchers at the University of Maryland built a basic kind of quantum computer known as a quantum simulator consisting of 53 qubits made from trapped ytterbium ions. And in March, Google announced that it had built a 72-qubit superconducting processor based on the design of an earlier linear array of nine qubits.

Control and readout

However, quantity is not everything, according to Chao-Yang Lu of the University of Science and Technology of China (USTC) in Hefei. He says it is also crucial to individually control and read out each qubit, as well as having a way of hooking up all of the qubits together using the phenomenon of entanglement. Described by Einstein as “spooky action at a distance”, entanglement is what enables multiple qubits – each held in a superposition of two states – to yield the exponential performance boost.

Maximizing the benefits of entanglement involves not only increasing the number of entangled particles as far as possible but also raising their “degrees of freedom” – the number of properties of each particle that can be exploited to carry information. Three years ago, Lu was part of Jian-Wei Pan’s group at USTC, which entangled and teleported two degrees of freedom – spin and orbital angular momentum (OAM) – from one photon to another. In the latest work, they have gone one better by also entangling the photons’ spatial information.

The team begin by firing ultraviolet laser pulses at three non-linear crystals lined up one after another. This generates three pairs of photons entangled via polarization. They then use two polarizing beam splitters to combine the photons so each particle is entangled with every other, before sending each photon through an additional beam splitter and two spiral phase plates. This entangles the photons spatially and via their OAM, respectively. Finally, they measure each of the three degrees of freedom in turn. The last and most difficult of these measurements – the OAM – is achieved by using two consecutive controlled-NOT gates to transfer this property to the polarization, which, the researchers say, “can be conveniently and efficiently read out”.

Significant technical hurdle

Lu says that a significant technical hurdle was operating the 30 single-photon interferometers – one for the spatial measurement and four for the OAM of each photon – with sub-wavelength stability. This they did by specially designing the beam splitter and combiner of each interferometer and gluing them to a glass plate, which, says Lu, isolated the set-up from temperature fluctuations and mechanical vibrations. The researchers also had to find a way of simultaneously recording all the combinations of the 18 qubits – of which there were 262,144 (2¹⁸). To do this, they brought together 48 single-photon counters and a home-made counting system with 48 channels (2³ channels per photon).

By successfully recording all combinations at the same time, Pan and colleagues beat the previous record of 14 fully-entangled trapped-ion qubits reported by Rainer Blatt and colleagues at the University of Innsbruck in 2011. Among possible applications of the new work, Lu says it could finally allow demonstration of the “surface code” developed in 2012 for error correction – a vital task in a practical quantum computer. Implementing this code requires very precise control of many qubits – and in particular, the ability to entangle them all.

Bachor says that the latest work represents a “great step” in showing the advantages of quantum technology over conventional computing. He believes that single-photon technology could play an important role in quantum cryptography and in ferrying data between processors within quantum computers. But he reckons that other technologies – perhaps trapped ions, semiconductor or superconducting qubits – are more likely to yield the first roughly 50-qubit computer capable of demonstrating major advantages over classical devices in executing highly-tailored algorithms. As to when that might happen, “a number of years’ time” is the most he will venture. “I won’t make a prediction,” he says.

The research is described in Physical Review Letters.

Quantum computing in the cloud

Quantum computers – devices that use the quantum mechanical superposition principle to process information – are being developed, built and studied in organizations ranging from universities and national laboratories to start-ups and large corporations such as Google, IBM, Intel and Microsoft. These devices are of great interest because they could solve certain computationally “hard” problems, such as searching large unordered lists or factorizing large numbers, much faster than any classical computer. This is because the quantum mechanical superposition principle is akin to an exponential computational parallelism – in other words, it makes it possible to explore multiple computational paths at once.

Because nature is fundamentally quantum mechanical, quantum computers also have the potential to solve problems concerning the structure and dynamics of solids, molecules, atoms, atomic nuclei or subatomic particles. Researchers have made great progress in solving such problems on classical computers, but the required computational effort typically increases exponentially as the number of particles rises. Thus, it is no surprise that scientists in these fields view quantum computers with a lot of interest.

Many different technologies are being explored as the basis for building quantum processors. These include superconductors, ion traps, optical devices, diamonds with nitrogen-vacancy centres and ultracold neutral atoms – to name just a few. The challenge in all cases is to keep quantum states coherent for long enough to execute algorithms (which requires strictly isolating the quantum processor from external perturbations or noise) while maintaining the ability to manipulate these states in a controlled way (which inevitably requires introducing couplings between the fragile quantum system and the noisy environment).

Recently, universal quantum processors with more than 50 quantum bits, or qubits, have been demonstrated – an exciting milestone because, even at this relatively low level of complexity, quantum processors are becoming too large for their operations to be simulated on all but the most powerful classical supercomputers. The utility of these 50-qubit machines to solve “hard” scientific problems is currently limited by the number of quantum-logic operations that can be performed before decoherence sets in (a few tens), and much R&D effort is focused on increasing such coherence times. Nevertheless, some problems can already be solved on such devices. The question is, how?

First, find a computer

Within the research sector, scientists have taken the first steps towards using quantum devices to solve problems in chemistry, materials science, nuclear physics and particle physics. In most cases, these problems have been studied by collaborations between scientists and the developers, owners and/or operators of the devices. However, a combination of publicly available software (such as PyQuil, QISKit and XACC) to program quantum computing processors, coupled with improved access to the devices themselves, is beginning to open the field to a much broader array of interested parties. The companies IBM and Rigetti, for instance, allow users access to their quantum computers via the IBM Q Experience and the Rigetti Forest API, respectively. These are cloud-based services: users can test and develop their programs on simulators, and run them on the quantum devices, without ever having to leave their offices.

As an example, we recently used the IBM and Rigetti cloud services to compute the binding energy of the deuteron – the bound state of a proton and a neutron that forms the centre of a heavy hydrogen atom. The quantum devices we used consisted of about 20 superconducting qubits, or transmons. The fidelity of their quantum operations on single qubits exceeds 99%, and their two-qubit fidelity is around 95%. Each qubit is typically connected to 3–5 neighbours. It is expected that these specifications (number of qubits, fidelities and connectivity) will improve with time, but the near future of universal quantum computing is likely to be based on similar parameters – what John Preskill of the California Institute of Technology calls “noisy intermediate-scale quantum” (NISQ) technology.

The deuteron is the simplest atomic nucleus, and its properties are well known, making it a good test case for quantum computing. Also, because qubits are two-state quantum-mechanical systems (conveniently thought of as a “spin up” and a “spin down” state), there is a natural mapping between qubits and fermions – that is, particles with half-integer spin that obey the Pauli exclusion principle – such as the proton and neutron that make up a deuteron. Conceptually, each qubit represents an orbital (or a discretized position) that a fermion can occupy, and spin up and down correspond to zero or one fermion occupying that orbital, respectively. Based on this Jordan-Wigner mapping, a quantum chip can simulate as many fermions as it has qubits.
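
To illustrate how such a mapping works in practice, the sketch below builds Jordan-Wigner annihilation operators for three orbitals as explicit matrices and checks the fermionic anticommutation relations. This is a generic textbook construction in NumPy, not code from the work described here.

```python
import numpy as np

# Jordan-Wigner in miniature: fermionic annihilation operators on n qubits,
# built as Kronecker products of Pauli matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
lower = (X + 1j * Y) / 2          # |0><1|: removes the fermion from one orbital

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(site, n):
    # Z strings on the qubits before `site` keep track of fermionic signs.
    return kron_chain([Z] * site + [lower] + [I] * (n - site - 1))

n = 3
a = [annihilation(j, n) for j in range(n)]
# {a_i, a_j^dagger} should be the identity when i == j and zero otherwise.
cross = a[0] @ a[1].conj().T + a[1].conj().T @ a[0]
same = a[2] @ a[2].conj().T + a[2].conj().T @ a[2]
print(np.allclose(cross, 0), np.allclose(same, np.eye(2 ** n)))  # True True
```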

Another helpful feature of the quantum computation of the deuteron binding energy is that the calculation itself can be simplified. The translational invariance of the problem reduces the bound-state calculation of the proton and the neutron to a single-particle problem that depends only on the relative distance between the particles. Furthermore, the deuteron’s Hamiltonian becomes simpler in the limit of long wavelengths, as details of the complicated strong interaction between protons and neutrons are not resolved at low energies. These simplifications allowed us to perform our quantum computation using only two and three qubits.

Then, do your calculation

We prepared a family of entangled quantum states on the quantum processor, and calculated the deuteron’s energy on the quantum chip. The state preparation consists of a unitary operation, decomposed into a sequence of single- and two-qubit quantum logical operations, acting on an initial state. With an eye towards the relatively low two-qubit fidelities, we employed a minimum number of two-qubit CNOT (controlled-not) operations for this task. To compute the deuteron’s energy, we measured expectation values of Pauli operators in the Hamiltonian, projecting the qubit states onto classical bits. This is a stochastic process, and we collected statistics from up to 10,000 measurements for each prepared quantum state. This is about the maximum number of measurements that users can make through cloud access, but it was sufficient for us because we were limited by noise and not by statistics. More complicated physical systems employing a larger number of qubits, or demanding a higher precision, could, however, require more measurements.
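
To give a feel for the statistics involved, the toy calculation below simulates estimating a single Pauli expectation value from a finite number of ±1 measurement outcomes. The “true” value is invented; the point is simply that the statistical error shrinks as one over the square root of the number of shots.

```python
import numpy as np

# Estimating a Pauli expectation value from projective measurements.
rng = np.random.default_rng(4)
true_expectation = 0.6                 # hypothetical <Z> for some prepared state
p_plus = (1 + true_expectation) / 2    # probability of measuring +1

shots = 10_000
outcomes = rng.choice([+1, -1], size=shots, p=[p_plus, 1 - p_plus])
estimate = outcomes.mean()
std_error = outcomes.std(ddof=1) / np.sqrt(shots)
print(f"<Z> estimate = {estimate:.3f} +/- {std_error:.3f}")
```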

To compute the binding energy of the deuteron, we had to find the minimum energy of all the quantum states we prepared. This minimization was done with a classical computer, using the results from the quantum chip as input. We used two versions of the deuteron’s Hamiltonian, one for two and one for three qubits. The two-qubit calculation involved only a single CNOT operation and, as a consequence, did not suffer from significant noise.
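
The overall hybrid loop can be sketched as follows, with the quantum processor replaced by an exact state-vector calculation and the Hamiltonian coefficients replaced by placeholders (they are not the deuteron coefficients used in the actual work): a classical optimizer varies the parameter of a simple entangled ansatz and minimizes the resulting energy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hybrid loop in miniature, on a classical state-vector simulator rather than a
# cloud device. The two-qubit Hamiltonian coefficients below are placeholders.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
kron = np.kron
H = (0.5 * kron(Z, I) - 1.0 * kron(I, Z)
     - 0.7 * (kron(X, X) + kron(Y, Y)))     # placeholder Pauli decomposition

def ansatz(theta):
    # |psi(theta)> = cos(theta)|01> + sin(theta)|10>: one entangling parameter,
    # mimicking a state that a single CNOT-based circuit could prepare.
    psi = np.zeros(4, dtype=complex)
    psi[1] = np.cos(theta)   # |01>
    psi[2] = np.sin(theta)   # |10>
    return psi

def energy(theta):
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)

result = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
print(f"minimum energy {result.fun:.3f} at theta = {result.x:.3f}")
```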

However, the three-qubit calculation was considerably affected by noise, because the quantum circuit involved three CNOT operations. To understand the systematic effects of the noise, we inserted extra pairs of CNOT operations – equivalent to identity operators in the absence of noise – into the quantum circuits. This further increased the noise level and allowed us to measure and subtract the noise in the energy calculations. As a result, our efforts yielded the first quantum computation of an atomic nucleus, performed via the cloud.
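
One simple way to picture this “measure and subtract” step is a linear extrapolation of the measured energy against the number of noisy two-qubit gates, back to a hypothetical noise-free circuit. The sketch below uses made-up numbers purely to show the arithmetic.

```python
import numpy as np

# Extrapolating noisy energy measurements to the zero-noise limit.
# Gate counts and energies below are invented for illustration only.
total_cnots = np.array([3, 5, 7])                  # base circuit, plus 1 or 2 extra CNOT pairs
measured_energy = np.array([-1.66, -1.52, -1.38])  # hypothetical noisy results

slope, intercept = np.polyfit(total_cnots, measured_energy, 1)
print(f"noise per CNOT ~ {slope:.3f}, extrapolated noise-free energy ~ {intercept:.2f}")
```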

What next?

For our calculation, we used quantum processors alongside classical computers. However, quantum computers hold great promise for standalone applications as well. The dynamics of interacting fermions, for instance, is generated by a unitary time-evolution operator and can therefore be naturally implemented by unitary gate operations on a quantum chip.

In a separate experiment, we used the IBM quantum cloud to simulate the Schwinger model – a prototypical quantum-field theory that describes the dynamics of electrons and positrons coupled via the electromagnetic field. Our work follows that carried out by Esteban Martinez and collaborators at the University of Innsbruck, who explored the dynamics of the Schwinger model in 2016 using a highly optimized trapped-ion system as a quantum device, which permitted them to apply hundreds(!) of quantum operations. To make our simulation possible via cloud access to a NISQ device, we exploited the model’s symmetries to reduce the complexity of our quantum circuit. We then applied the circuit to an initial ground state, generating the unitary time evolution, and measured the electron-positron content as a function of time using only two qubits.

The publicly available Python APIs from IBM and Rigetti made our cloud quantum-computing experience quite easy. They allowed us to test our programs on simulators (where imperfections such as noise can be avoided) and to run the calculations on actual quantum hardware without needing to know many details about the hardware itself. However, while the software decomposed our state-preparation unitary operation into a sequence of elementary quantum-logic operations, the decomposition was not optimized for the hardware. This forced us to tinker with the quantum circuits to minimize the number of two-qubit operations. Looking into the future, and considering more complex systems, it would be great if this type of decomposition optimization could be automated.

For most of its history, quantum computing has only been experimentally available to a select few researchers with the know-how to build and operate such devices. Cloud quantum computing is set to change that. We have found it a liberating experience – a great equalizer that has the potential to bring quantum computing to many, just as the devices themselves are beginning to prove their worth.

The rise and rise of cryogenic electron microscopy

Combining physics, biology and chemistry, structural biology investigates the anatomy of biological macromolecules, and proteins in particular, improving understanding of diseases and enabling drug discovery. Speaking at the 68th Lindau Nobel Laureate meeting in Germany last week, biophysicist Joachim Frank said that the field is entering a new era with a bright future thanks to advances in cryogenic electron microscopy (cryo-EM).

Frank, a German-American Nobel Laureate in chemistry based at Columbia University, New York, traced the history of cryo-EM to the present day in his Lindau lecture. A technique pioneer, Frank shared the 2017 prize with fellow biophysicists Richard Henderson and Jacques Dubochet, for the significant contributions he made to its advancement.

Cryo-EM rapidly freezes molecular samples in solution and images them with an electron beam to reveal the molecular structure. Today, resolutions of 3-4 Å are routinely achievable, while at its upper limits, the technique can achieve atomic resolutions of 2 Å.

The technique’s particular strength is its ability to construct models of single, unattached molecules from images of the molecules in their natural state – encompassing a range of arrangements and binding states with other molecules.

A mainstay for structure determination, X-ray crystallography, in contrast, demands molecules are prepared as crystals, a form not typically taken by biomolecules. Crystallography can, however, take advantage of the regularly-ordered molecules to obtain diffraction patterns with which their structures can be reconstructed.

In the early 1970s, researchers saw electron tomography of individual molecules as a solution. However, in a major drawback, images of intact molecules were not possible, as the required doses inflicted significant damage on the samples.

At around the same time, during his PhD, Frank had a new idea. It was to use computational and mathematical techniques to wrangle 2D images of hundreds of thousands of molecules in a sample. By aligning and averaging them, a single, clearer 3D picture of the molecule could be constructed, he proposed. The ribosome, a molecular machine found in large quantities in the cell cytosol that makes proteins, was to become his test object in developing the techniques.
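
The core of that idea, aligning many noisy copies of the same motif and then averaging them, can be demonstrated in a few lines of NumPy. The example below is a deliberately simplified 2D toy (a Gaussian blob standing in for a molecule, with only translational shifts), not the actual pipeline developed by Frank.

```python
import numpy as np

# Toy "align and average": register noisy, randomly shifted copies of a 2D motif
# by FFT cross-correlation, then average them to boost the signal-to-noise ratio.
rng = np.random.default_rng(3)
size, n_images = 64, 500

yy, xx = np.mgrid[:size, :size]
motif = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 40.0)   # stand-in "molecule"

def shifted_noisy_copy():
    dy, dx = rng.integers(-5, 6, size=2)
    shifted = np.roll(np.roll(motif, dy, axis=0), dx, axis=1)
    return shifted + rng.normal(scale=1.0, size=motif.shape)  # heavy noise

reference = motif          # in practice a crude first average serves as reference
stack = np.zeros_like(motif)
for _ in range(n_images):
    img = shifted_noisy_copy()
    # The peak of the circular cross-correlation gives the shift to undo.
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(reference))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    stack += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)

average = stack / n_images
print(f"residual noise: single image ~1.0, {n_images}-image average ~ {np.std(average - motif):.2f}")
```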

A workbench for molecular reconstruction

To process and reconstruct the cryo-EM images, Frank developed SPIDER, a modular image processing program and the first of its kind in electron microscopy, around the turn of the 1980s. The computational equivalent of a workbench, it comprised hundreds of operations.

Key techniques developed in Frank’s lab included a correlation averaging technique to identify and average ribosomes facing in a particular direction into a single structure. In another, multivariate statistical analysis was applied to tackle the structural heterogeneity that occurs across a population of molecules in a sample, classifying and grouping similar structures together. Bringing all the techniques together, Frank and post-doc Michael Radermacher completed their first molecular reconstruction, the large subunit of the ribosome, in 1986.

Around the same time, Dubochet and colleagues at the European Molecular Biology Laboratory in Heidelberg made an important advance. For the first time, Dubochet vitrified sample solutions into a glass-like state by rapid freezing using liquid ethane cooled to -196 °C. The technique prevents the formation of ice crystals that otherwise damage the molecules and diffract the electron beam, resulting in unusable images.

“This now gave the method I developed a very big boost, because now we could look at molecules in their native states,” said Frank. Exploiting this, Frank and collaborators went on to reconstruct several structures for the first time in the mid-1990s. They included a calcium release channel, octopus haemocyanin and the Escherichia coli ribosome. “These were all pioneering contributions at the time.”

Revealing ribosome movement

Another landmark finding followed in 2000. Frank and post-doc Rajendra Agrawal took advantage of further improvements in resolution to reveal the structure of the Escherichia coli ribosome in unprecedented detail. Their analysis revealed a ratchet-like movement of the ribosome’s two subunits relative to one another during translation, the process in which messenger RNA is used to synthesize proteins. The movement proved critical to the ribosome’s functioning.

Despite the breakthroughs, however, Frank still saw resolution as a limiting factor in the lab’s research. In 2013, they achieved their best imaging on film after two years going through 260,000 images of the ribosome of Trypanosoma brucei, a parasite that causes African sleeping sickness. “We got stuck at 5.5 Å resolution. It was essentially a wall.” At this resolution, it was not possible to infer structures with high precision. Side chains on the molecules, for example, require a resolution of around 3 Å.

Cameras take cryo-EM to next level

The arrival of the first commercial single-electron detection cameras in 2012, however, had a decisive impact. Their detective quantum efficiency (DQE) is significantly higher than that of film. Frank’s lab used them in 2016 to resolve the structure of the ribosome in Trypanosoma cruzi, a parasite responsible for Chagas disease, at a resolution of 2.5 Å. With the aid of the camera, his technique even allowed a single water molecule to be resolved. “I couldn’t believe it when I first saw it,” said Frank.


Combined with maximum likelihood statistical methods, cryo-EM imaging technology is now enabling the determination of multiple, co-existing structures in a given sample at near-atomic resolution. This often results in snapshots of molecules in different states. Dubbed a “story in a sample” by Frank, the information means cellular functions can be visualized indirectly. Through these capabilities, he predicts a boom in knowledge. “There’s going to be a huge expansion of [structural] databases relevant for molecular medicine.”

