
How to achieve a scientific ‘Arab spring’

In the UK we often grumble when funding agencies ask how blue-sky research might have an impact, lead to an application or deliver some societal benefit. Spare a thought, then, for the state – and status – of science in the Arab world. The faltering Arab Spring – the wave of protests that began in December 2010 and has since swept through the Arab world – has shown the hunger for societal change among many of the 370 million people spread across the Middle East and North Africa.

But while the traditional powerhouses of the region – Egypt, Iraq and Syria – grapple with more immediate political crises, others such as the oil-rich Gulf States, like Saudi Arabia, are tentatively attempting a cultural renaissance. As with many nations in the developing world, Arab leaders understand that to get the most out of their natural resources, they need to invest in science.

However, what we have yet to see in the region is a scientific Arab Spring – a different sort of awakening that will transform attitudes towards the value of science and scientific research. Yes, government funding for science and education has grown sharply in recent years in many of these countries, but most Arabs are still disengaged from science and see it as a secular, even atheist, Western construct. Most have forgotten the many wonderful contributions – from astronomy to medicine and philosophy – that were made by Arab and Persian scholars during the height of the “golden age” of science that began in the first half of the ninth century and continued for several hundred years.

This was an age epitomized by a spirit of rational enquiry at a time when most of Europe was stuck in the Dark Ages. But that freethinking spirit gradually went into decline in the Middle Ages. Those bygone days are now long forgotten, as, sadly, is the culture of freedom of thought and the curiosity-driven quest for knowledge that so characterized the period.

Developing scientific research

The most obvious effect of this malaise is the poor quality of academic research at many Arab universities. I sometimes get to review papers submitted to physics journals and, on the whole – possibly owing to a lack of resources and infrastructure – the quality falls short of comparable work from the West. So should we encourage these researchers by allowing such work to be published anyway? No: there has to be a level playing field, with a single quality threshold for publication in the top journals.

An example of where there is a serious attempt to boost the quality of research in the Arab world is the fascinating story of the co-educational King Abdullah University of Science and Technology (KAUST), built on a brand new campus in the desert near Jeddah in Saudi Arabia (see November 2009 pp12–13). This vast new research institution has the third largest endowment of any university in the world after Harvard and Yale.

But it is not simply a matter of throwing money at the problem. Even more important is having the political will to ensure real freedom of thinking. To compete globally requires more than just the latest equipment. The whole infrastructure needs to be addressed – from laboratory technicians who understand how to use and maintain equipment to the exercise of real intellectual freedom on the part of the scientists who must have access to the best books and the latest research journals.

There also needs to be far better quality assurance within universities as well as higher levels of motivation and incentives and better salaries to stop the current brain drain of so many of the brightest minds. Currently, more than half of all Arab students who study abroad do not return home, and one can understand why. Even KAUST has been criticized for not educating enough local students, favouring those from overseas instead.

Finding the right balance

Despite a sharp increase in funding, there remains an overwhelming emphasis on applied research, innovation and technology in areas such as water desalination, energy and agriculture, although we are beginning to see a real shift towards investing in areas like biotechnology and nanotechnology. Even at KAUST, research is focused on supporting Saudi Arabia’s post-oil future in key areas such as exploiting solar energy and developing crops that can survive the country’s hot, dry climate. It is inevitable that such areas will remain a priority in that part of the world, but the right balance between pure and applied research has yet to be found.

If the Arab world hopes to develop a more enlightened culture of scientific research, it cannot afford to ignore curiosity-driven basic research in favour of applied research. Many have questioned whether properly funding blue-sky research in areas such as particle physics or astronomy is a luxury that can be put to one side. But while the standard case for basic research – that it leads to unexpected applications further down the line – always needs to be made, the real reason it is required is that basic research encourages the freedom of thought that is so lacking in the Arab region at the moment.

Indeed, KAUST is an isolated bubble within a still-conservative society. It is no good having such research institutions for the select few if there is no interaction with the wider community. One way of nurturing trust in science is to engage the general population through science festivals and other forms of public dialogue, which are only slowly beginning to take off in the Arab world. Festivals in cities such as Cairo, Doha, Abu Dhabi and Dubai have been greeted with remarkable enthusiasm and large audiences. But while political reform can happen remarkably quickly, I believe that ingrained negative attitudes towards science and research will take longer to change. I remain ever-optimistic, though.

Dwarf planet could illuminate the dark sector

A dwarf-planet candidate called UX25 and its tiny satellite could provide the first evidence of a new cosmological model that includes antigravity, say Alberto Vecchiato and Mario Gai of the Astrophysical Observatory of Turin in Italy. The model dispenses with concepts such as dark matter, dark energy and cosmic inflation, and the astronomers say that it could be tested by observing the motion of the two objects as they move through the outer solar system.

In 1915 Albert Einstein’s fledgling general theory of relativity received a major credibility boost when it was used to explain a discrepancy in Mercury’s orbit that could not be accounted for by Newtonian physics alone. Now, nearly a century later, Vecchiato and Gai have calculated that UX25 and its tiny satellite – which orbit the Sun in the Kuiper belt beyond Neptune – could be used as a “natural laboratory” to test an ambitious new model of the universe.

Gravitational charges

Developed by CERN physicist Dragan Hajdukovic, the model is based upon the concept that empty space – also known as the quantum vacuum – is not really empty at all. Instead, it consists of “virtual” matter and antimatter particles that constantly blink in and out of existence. Hajdukovic’s idea is that these particles have opposing gravitational charges, similar to positive and negative electrical charges. He further predicts that in the presence of a gravitational field, virtual particles in the quantum vacuum will generate a secondary gravitational field that has an amplifying effect. The end result is that galaxies and other objects will appear to have stronger gravitational fields than would be predicted by the mass of their stars alone – a discrepancy that most astronomers explain by invoking the hypothetical and mysterious substance known as dark matter.

In Hajdukovic’s new model of the universe, there is also no need for dark energy, the enigmatic force that scientists think is causing the universe to expand at an accelerated rate. The idea is that if virtual particles have gravitational charges, then space–time itself can exert a kind of pressure that causes objects to repel each other. His theory would also negate the need for cosmic inflation, a theorized rapid swelling of the early universe when space–time itself expanded faster than the speed of light. “My theory provides encouraging initial answers to many different fundamental questions in physics,” says Hajdukovic.

Distant elliptical orbits

Hajdukovic has previously suggested that his theory could be tested if a minor planet with a small satellite that has an elliptical orbit can be found. The system would need to be located far from the Sun and other massive bodies.

Now, Vecchiato and Gai suggest that Hajdukovic’s model can be tested by using existing ground and space telescopes to observe the UX25 system – which is about 43 times farther from the Sun than is the Earth. “The properties of quantum vacuums described in Hajdukovic’s theory would apply an additional [gravitational] force on UX25, perturbing the orbit of the system,” Vecchiato explained to physicsworld.com.

Wobbling moon

Hajdukovic’s model predicts that the wobble, or “precession rate”, of UX25’s tiny moon around the dwarf planet should be larger than classical physics predicts. Where Newtonian physics predicts a precession rate of 0.0064 arc seconds per period – too small to be observed with current methods – Hajdukovic’s theory predicts a rate of 0.23 arc seconds per period – just enough to be detectable by NASA’s Hubble Space Telescope and the soon-to-be-launched James Webb Space Telescope.
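The size of the gap between the two predictions is what makes the test viable; a quick check of the numbers quoted above (the rates are the only inputs, taken directly from the article):

```python
# Predicted precession rates of UX25's moon, in arc seconds per orbital
# period, as quoted above
newtonian = 0.0064    # classical (Newtonian) prediction
hajdukovic = 0.23     # Hajdukovic's quantum-vacuum model

# The modified prediction is roughly 36 times the classical one, which is
# what lifts the signal from "unobservable" to "marginally detectable"
ratio = hajdukovic / newtonian
print(round(ratio))   # → 36
```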

According to Vecchiato and Gai, a large ground-based telescope such as the Very Large Telescope might also be able to make the necessary observations of UX25.

Evidence for Hajdukovic’s theory would result in a dramatic change in perspective for physicists, says Gai. “Most scientists today think quantum physics is mainly restrained to the microscopic world…In this case, the natural microscopic behaviour of empty space would result in a cumulative, long-range effect acting up to cosmic scales.”

The proposal appears on the arXiv preprint server.

Nanoethical concerns

A colleague at Stony Brook who teaches an online engineering course recently asked me to help students acquire what the syllabus calls “an understanding of professional and ethical responsibility”. Deciding to use nanotechnology as a case study, I created a video about engineering ethics, got the students to read about nanotechnology and then asked them to write a piece on an issue involving “nanoethics”.

Science and engineering students, I have found, generally regard ethics training as a distraction. If they must have it, they want brief, clear instructions on how to identify and solve ethical problems. Unfortunately, ethics doesn’t lend itself to that. Instead, it involves the murkier process of learning how to become sensitive to aspects of a situation that you tend to neglect either because they make you uncomfortable or because you just don’t see them.

The three Is

More specifically, science and engineering students tend to find ethics discussions ineffectual, impossible and intrusive – reasons I think of as the “three Is”. Ineffectual, because the students see themselves as working on something whose applications – where the ethics lies – are somebody else’s business. Impossible, because how can you be expected to envision the consequences of a discovery or technology that does not yet exist? Intrusive, because students tend to think that ethical inquiry means scientifically illiterate outsiders setting limits on research, when scientists should be free to explore wherever it takes them.

To set the stage, I had the students read an article from 2005 by the science historian W Patrick McCray entitled “Will small be beautiful? Making policies for our nanotech future” (History and Technology 21 177). This article discusses the creation of the US National Nanotechnology Initiative (NNI) in 2000. McCray begins by quoting a senior adviser at the US National Science Foundation to the effect that the NNI brought about a “phase transition” so that “what had once been perceived as blue sky research…was now being seen as the key technology of the 21st century”.

McCray then discusses the rhetoric used to marshal public and congressional support for this phase transition. The Nobel prize-winning physicist Horst Störmer, for instance, promised that nanotechnology would provide the tools “to play with the ultimate toy box of nature”, noting that “the possibilities to create new things appear endless”. Other scientists argued that nanotechnology could “change the nature of almost every human-made object” and “alter our comprehension of nature and life” – thereby influencing “societal and international relations” and giving rise to a new world called the “nanocosm”.

The nanocosm was promoted with utopian visions that promised industrial competitiveness, medical breakthroughs and even immortality. Some people did make dystopian suggestions of dangers to human health and security – warning of the possibility of disasters that would turn ecosystems into “grey goo” – but the utopian rhetoric was instrumental in creating the excitement that led to the NNI.

McCray’s article makes it easy to counter the first two of the three Is. First, he makes clear that nanotechnology – like much science – intertwines scientific and visionary aspects, both for the scientists who do it and for the governments that fund it. The potential applications of nanotechnology are usually quite obvious, which is why discussing nanoethics is in no way an ineffectual exercise. Indeed, nanotechnology is a perfect counterexample to the “linear model” criticized by science-policy analysts – the assumption that science begins in the lab, detached from any social context, and that applications are thought about only later. As for the idea that ethics discussions are impossible, the likelihood of specific, transformative social impacts was clear to nanotechnology’s researchers and funders from the start, meaning that there are plenty of practical examples to pick from.

The third I – the assumption that ethical inquiry is intrusive to research – is harder to dispel. It springs from the misconception that ethics consists of rules that ethicists dream up. But ethics actually springs from values internal to the everyday practice of science and engineering, such as a desire for openness and avoiding harm. Ethical conflicts arise from clashes between those internal values and the desire of individuals or groups to advance their self-interest. Ethicists do not invent those values, but clarify why they may be compromised and how to head off temptations to do so.

I found that using nanotechnology to teach ethics has limits. For one thing, it creates the illusion that nanotechnology involves a special kind of ethics rather than being a new context for familiar ethical issues, an illusion that has been promoted by numerous books and websites on nanoethics. But as Paul Litton, a professor at the University of Missouri law school, notes in an essay entitled “‘Nanoethics’? What’s new?” (Hastings Center Report 37 22), “None of the ethical concerns associated with nanotechnology is unprecedented and none raises novel ethical issues or demands new ethical principles.” What nanotechnology does, Litton writes, is give us new contexts in which to weigh and balance reasons related to our long-held values: “autonomy, beneficence, fairness, efficiency and environmental preservation”.

The critical point

One student told me that he found the entire exercise frustrating. “I prefer calculus,” he wrote, for “there is always a right and a wrong answer.” People who go into science and engineering, after all, are drawn to problems with exact answers, which ethics does not have. Still, he admitted to being excited enough about nanotechnology to read about its ethical issues.

That illustrated the upside of using nanotechnology to teach ethics. Teaching ethics in the middle of a science and engineering class requires delivering a jolt of excitement and a sense of novelty – and discussing nanotechnology delivered both. Nanotechnology may pose the same old issues, but it held the students’ interest long enough for them to make the phase transition needed to follow through on the readings.

  • The latest Physics World Focus on Nanotechnology is now out in print and digital formats

Relaxation and repulsion help viruses pack DNA

The molecular motor that folds and packs DNA into a virus is at its most efficient when the DNA shows some self-repulsion. That is the surprising finding of researchers based in the US – it was previously thought that such repulsion would act as an obstacle in the packing process. The team also found that pausing the motor and allowing it to relax increased the rate of the whole packaging process. In addition to providing new insights into how viruses function, the work could benefit biotechnologies that enclose long polymers into nanoscale devices.

After invading its host cell, a virus reprogrammes the cell’s machinery to make copies of itself. As the virus replicates, a strand of DNA is pulled from the infected host cell and squeezed into a protein shell – known as a prohead – which then carries the DNA on to infect other cells. In some species the prohead is produced first, leaving only a small hole at one end through which a powerful molecular motor pushes the DNA, packing it at very high densities. The motor has to overcome three forces: the electrostatic self-repulsion that arises because DNA is negatively charged; the mechanical resistance of DNA to bending; and the entropic resistance of DNA to being crowded onto itself.

Increasing attraction

The DNA could, however, be made self-attractive: positive ions present in a cell (notably a polyamine called spermidine3+) can stick to the DNA and partially screen the repulsion, or even create attractive forces at high concentrations. In the past, some computational models suggested that such attraction might help the packing process. But according to biophysicist Douglas Smith of the University of California, San Diego, who led the new research, those earlier models assumed that the DNA could continuously relax to its lowest-energy state as it was packaged, which would reduce resistance.

To test this, Smith and colleagues used optical tweezers to manoeuvre two microspheres close to each other, attaching viral DNA to the first and the molecular motor to the second. Occasionally the motor managed to grab hold of the DNA and reel it in from the first microsphere, in a process that Smith compares to fishing. The team stopped the motor part of the way through the packaging process by depriving it of fuel. Upon restarting, packaging proceeded faster than before – and the longer the pause, the faster the restarted process went. The team believes this shows that the DNA is not instantaneously wound into a neat spool, but is instead packaged in a messier, higher-energy configuration that only later relaxes to the optimum. The findings were published in Proceedings of the National Academy of Sciences in May this year.

Repulsively efficient

The researchers then measured how the packaging rate varied with the concentration of spermidine3+, which they reported in Physical Review Letters last week. When they added just enough spermidine to slightly reduce the DNA’s self-repulsion, they saw an increase in the packaging rate. But when enough spermidine was added to make the DNA self-attractive, packaging initially proceeded very rapidly before stalling – in about 75% of cases the motor stopped part way through and the process never completed. “If the thing is sticking to itself,” explains Smith, “and it gets into a bad, disordered configuration, then it may be very hard for it to rearrange.” Curiously, therefore, some degree of repulsion appears to be necessary to allow the DNA to be packaged into a small space.

William Gelbart of the University of California, Los Angeles, who was not part of the current research but has worked on viral DNA packaging, points out that recent experimental work has shown that such systems are not equilibrating. “That’s what I see as really important about this work, namely that attractions in particular get the system stuck out of equilibrium. This is something that we have to contend with as a fact of life about viral packaging.”

Smith’s team is now looking at other viruses to see whether its findings hold more generally. Smith says the results may also interest the biomedical community, which could investigate drugs that target this assembly step as a way of halting viral infection.

The research is published in Proceedings of the National Academy of Sciences and Physical Review Letters.

Physicists seek to cut helium costs

The American Physical Society (APS) has kick-started a pilot programme that is designed to provide helium at affordable prices for US academic researchers who need only small amounts of the element. The APS plan will involve the Defense Logistics Agency (DLA) negotiating the cost for helium with suppliers for researchers who are funded by government grants. The DLA already buys helium on behalf of the Department of Defense, of which it is a part.

Physicists routinely use helium to cool lab experiments, and it is needed in large quantities to cool the superconducting magnets in particle accelerators. Helium also cools the magnets in magnetic-resonance-imaging machines and plays a critical role in the manufacture of microchips and optical fibres. Shortages of helium have become regular occurrences in recent years as uses for the gas have expanded.

While big laboratories and national labs can negotiate a good price for helium from suppliers, owing to the vast quantities that they need, smaller users – such as single principal investigators buying 100 litres at a time – find that suppliers can charge higher prices. “[Smaller buyers] don’t have the same purchasing power,” says Mark Elsesser, a policy analyst at the APS who will serve as a liaison between researchers and the DLA.

Indeed, researchers at Pennsylvania State University pay $7.50 per litre of liquid helium – almost half what Rutgers University in New Jersey pays. “The hope from this programme is that some universities in a poor position to negotiate with particular vendors will have access to helium,” says Moses Chan, a low-temperature physicist at Penn State. On top of this, users at the end of suppliers’ delivery routes might receive only 75 or 80 litres in a 100 litre Dewar flask, owing to evaporation.

The more the merrier

The plan between the APS and the DLA originated after APS members warned the society about their problems obtaining liquid helium at an affordable price. After hearing a presentation on the issue by Chan in March, two representatives from the DLA offered to help, and the programme was then set up. The American Chemical Society came on board the following month, helping to improve the programme’s reach. “We’re looking for a diverse set of users in geography and supply demands,” Elsesser says. “Chemists have a much more regular schedule of delivery.”

The team is now looking for research groups to participate in the programme and is publicizing it via newsletters, journal articles and webinars, as well as a dedicated page on the APS’s website that will offer information about the programme. “Starting with a pilot programme allows us to evaluate how it works, its potential benefits, and which type of academic user is a good fit for it,” says Elsesser. “Then we’ll look at where users are located, where supply needs are and other issues.” The consortium expects to review the helium-purchasing plan in December 2015. If successful, a full-scale roll-out should start in 2016.

Diamond defect images magnetic domain walls

Researchers in France have discovered a new way to image magnetic domain walls on the nanoscale in ultrathin ferromagnetic films – something that has been difficult to do until now. Using a point-like defect in diamond attached to a scanning atomic force microscope (AFM), they were able to map out the energy “landscape” for a domain wall and could even make the walls themselves move using the laser light from the microscope. The technique could help in the development of sophisticated spintronics devices such as racetrack memory.

Magnetic domain walls are narrow boundaries (about 10 to 100 nm in size) between regions in a material where the magnetic moments point “up” on one side of the wall and “down” on the other. At these boundaries, the magnetic moments do not rotate abruptly from one orientation to the other – rather, they shift gradually over the region. When these walls move through a material, they behave rather like a taut elastic band sliding over a rough surface. How they move depends on the potential energy landscape they encounter – energy troughs offer little resistance, but energy peaks act like barriers that are more difficult to overcome.
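The elastic-band picture can be made concrete with a toy model – entirely illustrative, and not the calculation the French team performed. A wall coordinate sliding over a one-dimensional pinning potential under a driving force either settles into a trough (weak drive) or depins and runs (strong drive):

```python
import math

def pinning_potential(x):
    # Toy landscape: a periodic row of defect-induced energy wells
    return 0.5 * math.cos(2 * math.pi * x)

def relax(x, force, step=1e-3, iters=100000):
    # Overdamped dynamics: the wall slides downhill on the tilted
    # potential U(x) - force*x until it is trapped in a well (or escapes)
    for _ in range(iters):
        grad = (pinning_potential(x + 1e-5) - pinning_potential(x - 1e-5)) / 2e-5
        x -= step * (grad - force)
    return x

pinned = relax(0.3, force=1.0)   # weak drive: the wall is trapped in a nearby well
running = relax(0.3, force=5.0)  # drive exceeds the maximum pinning gradient: the wall depins
```

The critical force here is just the steepest slope of the toy potential; in a real film the landscape is irregular, which is exactly why mapping it experimentally matters.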

Magnetic domain walls could be used to make new types of spintronics devices, such as racetrack memories in which data are stored as a sequence of magnetic domains along a nanowire. Individual bits are stored and retrieved by moving the sequence along the nanowire and across magnetic read and write devices. A typical racetrack chip would contain arrays of nanowires a few microns long and about 30 nm wide, and could store hundreds of gigabytes of data. In such devices, researchers would need to precisely control the position of a domain wall, as well as be able to move it along a nanostructure at will. Being able to characterize the magnetic “terrain” for these domain walls would be an important step in this direction, but this has proved difficult to do up to now, for lack of the right tools.

NV microscope

Vincent Jacques and colleagues at ENS Cachan, the French National Centre for Scientific Research (CNRS) and the Université Paris-Sud have now succeeded not only in imaging domain walls but also in observing the walls jumping between pinning sites along a thin ferromagnetic wire. They did this using a highly sensitive scanning magnetic microscope based on lattice imperfections in diamond known as nitrogen–vacancy (NV) centres. Such a defect occurs when two neighbouring carbon atoms in the diamond are replaced by a nitrogen atom and an empty lattice site, and it is capable of detecting weak magnetic fields.

The instrument employed by Jacques’ team consists of a 50-nm-sized diamond attached to the cantilever of the AFM. When stimulated with green laser light in the presence of an external radiofrequency field, the NV centre in the diamond emits light in the red part of the electromagnetic spectrum. The intensity of this light depends on the local magnetic field of the sample being imaged – in this case a 1-nm-thick CoFeB ferromagnetic nanowire.

“By detecting the NV defect emission with the optical microscope, we can precisely determine the magnetic field emanating from the magnetic film beneath the diamond tip,” explains Jacques. “As we move the diamond sensor across the film, we can image the stray magnetic field from the nanowire and determine its domain-wall profile.”

Dragging domain walls

Using their technique, the researchers were also able to observe domain-wall hopping (known as Barkhausen jumps) between two pinning sites spaced 50 nm apart along the wire. Pinning sites arise from structural or fabrication defects in a material, which locally modify the energy landscape and hinder the movement of the domain walls. The team managed to control these jumps using the heat generated by the microscope’s laser, which in turn allowed it to “drag” a domain wall along the wire and position it at any point on the structure.

“Our process allows us to calculate the energy landscape ‘seen’ by the domain wall along the wire,” says Jacques. “Such a quantitative understanding of this landscape could be important for future applications in data storage and information processing. For example, the racetrack memory device proposed by IBM involves storing bits of data with a sequence of domain walls that are shuttled back and forth along a magnetic wire (the track). As mentioned, a crucial step towards making these memories will involve characterizing the magnetic terrain for these domain walls, because how they move across the track will determine how well they actually perform as devices.” The technique is not just limited to studying domain walls either, he added. “It can also be used to study other magnetic objects, such as skyrmions (tiny magnetic vortices that could form the basis of future hard-disk technologies) – another subject of intense research at present.”

The technique is published in Science.

Art, physics and performance painting

Photo of artwork created by Adrian Pritchard

I was in London at the end of last week to attend a meeting on “Communicating physics through the arts” (PDF), which had been organized by the Physics Communicators Group of the Institute of Physics (IOP), which publishes Physics World.

Held at the IOP’s headquarters in London, the idea of the meeting was to “ask artists to explore how they use their knowledge of physics during the development of their work” and to see “how physics could be communicated to the public through their work”.


The story of neutrinos

The story of neutrinos began in 1930, when Wolfgang Pauli suggested that an unknown neutral particle could account for some puzzling behaviour in radioactive decay. At the time the idea was speculative at best, and Pauli knew it, joking to a friend: “I have done a terrible thing. I have postulated a particle that cannot be detected.” A pair of experimentalists, Clyde Cowan and Frederick Reines, would eventually prove Pauli wrong, but subsequent efforts to study the new particle – which Enrico Fermi dubbed the neutrino, or “little neutral one” – seemed to raise more questions than they answered.

These stories – and others from the neutrino’s rich history – are the subject of Ray Jayawardhana’s book The Neutrino Hunters. Currently an astronomer at the University of Toronto, Jayawardhana is due to join Canada’s York University as Dean of Science in July 2014. In this podcast he talks to reviews editor Margaret Harris about the history of neutrinos, the experiments being done to study them and what we might learn from these “pathologically shy” particles about the nature of our universe.

Extraterrestrial espressos, quantum card-games, misunderstood science and more

Cartoon of an astronaut enjoying an espresso in space

Most of us can’t get our day started without a fortifying cup of coffee, and astronauts are just the same. To help those on the International Space Station satisfy their caffeine cravings, Italian coffee king Lavazza has designed and built an espresso machine that will work in space! Called “ISSpresso”, the machine will be blasted into orbit in the care of astronaut Samantha Cristoforetti, who will also be the first Italian woman in space. You can read all about the ISSpresso and its supreme blends on the Wired website.


Is D-Wave’s quantum computer actually a quantum computer?

A team of quantum-computing experts in the US and Switzerland has published a paper in Science that casts doubt over the ability of the D-Wave Two quantum processor to perform certain computational tasks. The paper, which first appeared as a preprint earlier this year, concludes that the processor – built by the controversial Canadian firm D-Wave Systems – offers no advantage over a conventional computer when it is used to solve a benchmark computing problem.

While the researchers say that their results do not rule out the possibility that the processor can outperform conventional computers when solving other classes of problems, their work does suggest that evaluating the performance of a quantum computer could be a much trickier task than previously thought. D-Wave has responded by saying that the wrong benchmark problem was used to evaluate its processor, while the US–Swiss team now intends to do more experiments using different benchmarks.

Quantum insights

D-Wave Two is the second generation of quantum processors sold by D-Wave Systems and one of the devices is owned by NASA, Google and the Universities Space Research Association. The company has also sold a system – claimed to be the world’s first commercially available quantum computer – to Lockheed Martin. The tests on D-Wave Two were carried out by Matthias Troyer and colleagues at ETH Zurich, the University of Southern California (USC), the University of California Santa Barbara, Google and Microsoft.

Containing 512 quantum bits (qubits), the D-Wave Two processor was designed specifically to perform a process called “quantum annealing”, which is a technique for finding the global minimum of a complicated mathematical function. Unlike “conventional” quantum computers – which are kept in a fragile quantum state throughout the calculation – quantum annealing involves making a transition from a quantum to a classical system. As a result, D-Wave’s approach might be more immune to noise, which can destroy conventional quantum calculations. However, a quantum-annealing processor is not a universal computer like a PC and cannot be programmed to perform an arbitrary range of tasks.
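Quantum annealing itself needs dedicated hardware, but its classical cousin, simulated annealing – the algorithm D-Wave’s processor is often benchmarked against – captures the guiding idea in a few lines: start at a high “temperature” where almost any move is accepted, then cool slowly so the system settles into a low-energy configuration. Here is a minimal sketch on a toy Ising ring; the couplings, cooling schedule and constants are illustrative choices of ours, not anything specific to D-Wave:

```python
import math
import random

random.seed(0)

N = 30
# Toy problem: random +/-1 couplings around a 1D ring of spins.
J = [random.choice([-1.0, 1.0]) for _ in range(N)]

def energy(s):
    """Ising energy E = -sum_i J_i * s_i * s_{i+1} on the ring."""
    return -sum(J[i] * s[i] * s[(i + 1) % N] for i in range(N))

s = [random.choice([-1, 1]) for _ in range(N)]  # random start
T = 5.0                                          # start "hot"
while T > 0.01:
    i = random.randrange(N)
    before = energy(s)
    s[i] = -s[i]                   # propose flipping one spin
    dE = energy(s) - before
    # Metropolis rule: always accept downhill moves; accept uphill
    # moves with probability exp(-dE/T), so the system can escape
    # local minima while it is still hot.
    if dE > 0 and random.random() > math.exp(-dE / T):
        s[i] = -s[i]               # reject: undo the flip
    T *= 0.999                     # cool down gradually

print(energy(s))
```

Quantum annealing replaces the thermal fluctuations in this loop with quantum fluctuations, which can tunnel through energy barriers rather than climbing over them.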

Fiendishly difficult calculation

Troyer’s team tested the processor by using it to solve a particularly difficult task from condensed-matter physics involving “Ising spin glasses”. A spin glass is a magnetic material in which the individual magnetic moments – or spins – interact with each other and are also located randomly throughout the material. This is unlike conventional models of magnetic materials, in which the spins are arranged on a regular lattice and tend to all point in specific directions. Instead, the spin glass has an extremely complicated spin configuration that is fiendishly difficult to calculate for large numbers of spins. “Ising spin-glass problems are the ‘native’ problem that the [D-Wave Two] is designed for,” Troyer explained to physicsworld.com.
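To see why spin glasses are fiendish, consider the brute-force approach: with N spins there are 2^N configurations to check, so the search space doubles with every spin added. The toy sketch below (random ±1 couplings between every pair of spins, purely illustrative) enumerates all states of a 12-spin glass to find its ground state – already 4096 configurations, and hopeless long before 512 spins:

```python
import itertools
import random

random.seed(1)

N = 12  # small enough to enumerate all 2^N = 4096 states

# Hypothetical spin glass: a random +/-1 coupling between every pair.
J = {(i, j): random.choice([-1, 1])
     for i in range(N) for j in range(i + 1, N)}

def energy(spins):
    """Ising spin-glass energy E = -sum_{i<j} J_ij * s_i * s_j."""
    return -sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)

# Exhaustive search over every spin configuration.
best = min(itertools.product([-1, 1], repeat=N), key=energy)
print(energy(best))
```

Because the couplings are random and conflicting (“frustrated”), no simple pattern of spins minimizes every term at once, which is what makes large instances a natural benchmark for annealers.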

To evaluate the performance of D-Wave Two, the team measured how long it took the processor to solve an Ising spin-glass problem and compared this with the time it takes with a conventional, classical computer. This ratio, known as the “quantum speed-up”, is expected to be around one for small problems – meaning classical devices can do the job just as well – but it should grow in size as the problem becomes larger. In its test, the team carried out lots of quantum and classical simulations on different spin glasses, in which the number of spins and the interaction strengths were varied systematically.
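As a purely illustrative sketch of how a speed-up would show up in such data, suppose the classical and quantum solvers both scale exponentially with problem size N, but with different exponents (the constants below are made up for illustration, not measured D-Wave timings). The ratio then sits below one for small problems and grows rapidly for large ones:

```python
# Hypothetical runtimes in seconds: the quantum device has a large
# fixed overhead but a smaller scaling exponent than the classical
# solver. All numbers here are invented for illustration.
def t_classical(N):
    return 1e-6 * 2 ** (0.5 * N)

def t_quantum(N):
    return 1e-4 * 2 ** (0.3 * N)

for N in (8, 32, 128, 512):
    speedup = t_classical(N) / t_quantum(N)  # the "quantum speed-up"
    print(N, speedup)
```

This is why Troyer’s team varied the problem size systematically: a single data point cannot distinguish genuine scaling advantage from a constant-factor difference.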

No speed-up found

The results, however, reveal no clear evidence of speed-up. While the D-Wave Two processor was sometimes 10 times faster than the classical computer, it was also sometimes more than 100 times slower. Troyer and colleagues put forth several possible explanations for why speed-up was not seen. One is that although D-Wave Two functions as a quantum processor, quantum annealing offers no advantage over classical methods. Another possibility is that noise or other operational problems mean that the device is not operating as a quantum processor.

A third, intriguing prospect put forth by Troyer and colleagues is that speed-up could still be seen when D-Wave Two is used to solve other types of problems, even if it was not observed in the current test. Indeed, D-Wave itself claims that research done by a team led by Helmut Katzgraber at Texas A&M University suggests that Ising spin-glass problems cannot be solved any more quickly by quantum annealing than by classical methods. Moreover, Jeremy Hilton – D-Wave’s vice-president for processor development – points to more recent work done by Itay Hen of USC and colleagues, which – he says – shows that “a new benchmark has demonstrated better performance of the D-Wave 512 qubit processor over the simulated annealing algorithm developed by Troyer et al”.

Better-suited problems

“Katzgraber argues that 3D spin glasses may be better test cases that ‘might’ be better suited,” says Troyer. “We have found a way to implement such problems and are testing it now. Researchers at Google and NASA are also searching if there are problem classes that may show quantum speed-up.”

Troyer adds that the fundamental importance of the work presented in Science is that it describes a method for measuring quantum speed-up for devices with unknown potential, such as the D-Wave devices. With physicists worldwide focused on making larger and more complex quantum processors, such measurement techniques will become increasingly important.

The benchmark testing is described in Science.

Copyright © 2026 by IOP Publishing Ltd and individual contributors