
From Euclid to Einstein

Colin Pask is a very bossy author. “These chapters…should not be missed!” he instructs his readers. “You must read the final chapter,” he orders. Reflecting this self-confident panache, the preface of his Magnificent Principia: Exploring Isaac Newton’s Masterpiece is called “Why you should read this book”. However, the only reason he gives is that Isaac Newton was important for modern science, which is true but unhelpful. I would like to ask Pask a different question: “Who did you write this book for?” Or, to put it more bluntly, “Who is likely to buy it?”

Books about Newton have three main target audiences: the general reader (whoever that might be), historians and scientists. All publishers and many writers have a mass market in sight, and – as James Gleick demonstrated in 2003 with his book Isaac Newton – it is certainly possible for a well-informed and detailed account of a scientific icon to be a bestseller. But as Stephen Hawking might or might not have remarked, every equation halves sales, and Pask’s version of “Newton made lite” assumes (in an introductory section called “Fundamentals”) that all his readers will be familiar with calculus. Although his equations and diagrams are not too off-putting for anybody with university-level mathematics, many potential purchasers will blench on flicking through what resembles an advanced physics primer.

In that sense, the immediate impact of Pask’s book is not so very different from that of the original Principia. Pask reprimands Newton for using geometry and writing like Euclid, although this had nothing to do with Newton’s decision to make the Principia difficult: the famously reclusive Lucasian professor wanted to keep out of the limelight and “avoid being baited by little Smatterers in mathematics”. For Newton and many of his contemporaries, civilization had gone downhill since the time of the Greeks, and he was determined to recover the lost but pure knowledge of the ancients. Under his influence, British mathematicians spurned algebra until the early 19th century, mocking it as a fancy French practice of juggling with symbols devoid of real-world significance. It was only after a group of Cambridge undergraduates demanded to learn about the latest Laplace transforms and Legendre polynomials being used in Paris that the continental calculus originated by Gottfried Leibniz (or “Leibnitz”, as Pask would have it) was imported across the Channel.

So what about historians? Would they buy Pask’s book? The equations are not necessarily a deterrent, as a substantial proportion of science’s historians (myself included) have first degrees in science. But would they want to brush up their rusty integration techniques? Personally, I abandoned physics not because it was too difficult, but because I found it boring. Other lapsed scientists who share my passion for the past would already be familiar with the content of the introductory historical sections of Magnificent Principia and would have little incentive to work their way through 400 pages of technical explanation.

Historians would also be wary of Pask’s decision to present the Principia as a finished product. Like other historical inaccuracies, this reflects his preference for adulating Newton as a superhuman genius rather than appraising him as an extremely talented but fallible mortal. The Principia may well deserve Pask’s accolade of magnificence, but it was neither a single-authored book nor one that appeared at a single moment in time. Pask’s analysis is based on the Principia’s third edition, which was published four decades after the celebrated year of 1687. That might not have mattered too much had Newton himself been responsible for all the revisions, but it was Roger Cotes, a young astronomy professor at Cambridge, who forced Newton to confront his mistakes and corrected many of them for him. Newton’s attitude was surprisingly lackadaisical. “Such errors as do not depend upon wrong reasoning can be of no great consequence and may be corrected by the Reader,” he pontificated from his superior position as president of the Royal Society. Undeterred, the more perfectionist Cotes – dismissed in one short paragraph by Pask as irrelevant – bombarded Newton with letters for years, repeatedly challenging his experimental results as well as his theoretical calculations, and refusing to accept any attempts to fudge the evidence.

In contrast, I imagine many practising scientists will welcome this book, which is effectively a guided translation of the Principia’s geometrical arguments into modern mathematical language. Yet even here, Pask’s devotion to his hero sometimes tempts him to be misleading. An emeritus professor of mathematics at the University of New South Wales, Australia, he knows the differences between Galilean invariance and special relativity, yet he is so keen to claim Newton as the originator of everything that he implicitly elides them. Albert Einstein abhorred the meaningless cocktail-party phrase “Everything is relative” bandied about by artists and writers, but unwary readers could easily deduce from this book that special relativity is merely a modification of the classical theory.

People who are convinced that Newton was the first great scientist have to face the tricky truth that Newton was deeply religious, which didn’t just mean going to church on Sundays. Pask omits to point out that Newton could never have subscribed to modern relativity theory because for him, there had to be an absolute time and space: they were God Himself, and He was immanent throughout the universe. Today’s Newtonianism is deterministic, but that feature was introduced by Pierre Laplace, the self-styled French Newton, at the end of the 18th century. To the disdain of Leibniz and other critics, Newton posited a God who intervened from time to time by sending in comets with animated tails. Newton derived the concept of an active nature, a “perpetual worker”, from his alchemical studies – a crucial topic ignored in this study.

Newton remains one of science’s most revered figureheads. Yet paradoxically, he would have been appalled by modern Newtonian models of the cosmos, because they leave no place for spirit. He wasn’t even a scientist (a word not invented until 1833), but a natural philosopher who regarded the Bible, alchemy and experimentation as three related routes towards God. And he broke all the rules in the scientific code of behaviour by sometimes twisting the facts to fit his preconceptions – a tendency that is, regrettably, occasionally shared by Pask.

  • 2013 Prometheus Books $26.00 hb 528pp

Web life: Voices of the Manhattan Project

So what is the site about?

Voices of the Manhattan Project was launched in October 2012 with the aim of preserving the memories and experiences of scientists and other workers who participated in the US-led effort to build an atomic bomb during the Second World War. Backed by two non-profit organizations, the Atomic Heritage Foundation and the Los Alamos Historical Society, the site hosts a rich archive of audio and video interviews with Manhattan Project veterans, as well as written transcripts.

Who are the interviewees?

There are a few famous faces in the collection, including Roy Glauber, who won the Nobel Prize for Physics in 2005 for his work on quantum optics; John Wheeler, the theorist who coined the term “black hole” and influenced a whole generation of physicists; and Alvin Weinberg, who later became director of Oak Ridge National Laboratory. However, most of those interviewed are relatively obscure figures, and their stories are all the more fascinating for it. A good example is an interview (http://ow.ly/sz8kV) that the journalist Stephen L Sanger conducted in 1989 with a husband-and-wife pair, Vincent and Clare Whitehead. The Whiteheads met when they were both working in military intelligence at the plutonium plant at Hanford, Washington, and Vincent’s story, in particular, makes a useful counterpoint to Richard Feynman’s better-known tales of playing pranks on security personnel at Los Alamos. “There was a code name given to each piece of apparatus, and some of those professors, for Christ’s sake, would just in the clear say the description,” Whitehead recalls with disgust, before joking “Have you ever tried to get an egg back into a hen?”

How is the site organized?

As well as searching for the names of specific interviewees, it is also possible to filter the collection according to common themes such as “Security and secrecy” or “Environmental impact” and by the location where people worked. The location filter is interesting because it highlights the scale of the Manhattan Project. In addition to interviews with those who worked at the well-known facilities in Hanford, Oak Ridge and Los Alamos, the archive also incorporates the experiences of people employed at a chemical plant in Colorado, the University of Cambridge, UK, and at universities across the US. There is even an interview with one Robert Furman, who, as an assistant to the project’s military chief, General Leslie Groves, participated in the Alsos Mission to locate and capture German atomic scientists in Europe during the spring of 1945.

Why should I visit?

Most of today’s physicists are too young to have worked on the Manhattan Project, or even to remember the era in which it took place. In 2014, as people across Europe commemorate the centenary of the outbreak of the First World War – a conflict that is now almost entirely beyond the reach of living memory – it makes sense to listen, while we still can, to those who lived through this crucial aspect of the war that followed on the heels of that earlier fight. Indeed, several of the people whose oral histories can be found in the site’s archive have already died, making their interviews all the more precious.

‘Dropleton’ quasiparticle makes its debut

Plot showing the liquid-like correlations between electrons and holes in a dropleton

A new type of quasiparticle dubbed the quantum droplet, or “dropleton”, has been identified by researchers in the US and Germany. Created in semiconductor quantum wells using ultrashort laser pulses, the dropleton comprises a small number of electrons and holes that are bound together in a liquid-like drop.

A quasiparticle is a collective excitation within a material that behaves like a fundamental particle. Recently, physicists have identified quasiparticles called levitons, orbitons, phonitons and even wrinklons – which occur in wrinkled fabrics such as curtains.

Bound electrons and holes

The dropleton is related to a well-known quasiparticle called an exciton, which is formed when a semiconductor absorbs a photon. This action promotes an electron from the valence band to the conduction band, leaving behind a positively charged “hole” in the valence band. The electrostatic force binds the electron and the hole together to create an exciton, which moves through the semiconductor like a particle.
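To get a sense of the energy scale involved, the exciton can be treated as a hydrogen-like system, with its binding energy set by the reduced electron–hole mass and the semiconductor’s dielectric screening. This sketch is not from the article; it uses the standard Wannier-exciton textbook model with illustrative bulk-GaAs parameters:

```python
# Hydrogen-like estimate of the exciton binding energy (a standard
# textbook model, not taken from the article).
RYDBERG_EV = 13.606          # hydrogen ground-state binding energy (eV)

def exciton_binding_ev(m_e_eff, m_h_eff, eps_r):
    """Binding energy of a Wannier exciton, in eV.

    m_e_eff, m_h_eff: electron and hole effective masses in units of
    the free-electron mass; eps_r: relative permittivity.
    """
    mu = 1.0 / (1.0 / m_e_eff + 1.0 / m_h_eff)   # reduced mass
    return RYDBERG_EV * mu / eps_r**2

# Illustrative bulk-GaAs values: m_e* = 0.067, m_h* = 0.45, eps_r = 12.9
print(f"{exciton_binding_ev(0.067, 0.45, 12.9) * 1000:.1f} meV")
```

The answer comes out at a few millielectronvolts – thousands of times weaker than the binding of a real hydrogen atom, which is why such quasiparticles only survive at low temperatures and for short times.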

More complicated configurations of electrons and holes are known to exist, such as the biexciton, which has two electrons and two holes. Micron-sized droplets comprising many electrons and holes have also been seen in indirect-gap semiconductors such as germanium and silicon. However, these are formed by a thermodynamic process.

Now laser-induced droplets have been created in a direct-gap semiconductor by Steven Cundiff and colleagues at the University of Colorado and NIST in Boulder, and at Philipps-University in Marburg, Germany.

Pump and probe

The researchers create their quantum droplets by firing laser pulses at gallium-arsenide quantum wells. These are 10 nm-thick layers of semiconductor separated by an insulating material. Each pulse “pumps” electrons into the conduction band and is followed by a “probe” laser pulse that is used to measure the absorption spectrum of the quantum wells.

At first, the measurement revealed that the ongoing pumping merely created more excitons – just as expected. After a while, however, the team found that instead of pairing up as excitons, the electrons and holes formed unpaired configurations. The result is charge-neutral droplets typically composed of about five electrons and five holes.

Rising pressure

“This transition occurs because of the increasing density of electrons and holes injected by the pump pulses,” explains Cundiff. He adds that the increased numbers of electrons and holes act to screen the electrostatic Coulomb attraction, which normally keeps excitons bound together. Furthermore, he explains that boosting the number of electrons and holes “increases the ‘pressure’ – actually caused by the Fermionic nature of the electrons and holes, and the Pauli exclusion principle – which stabilizes the quantum droplet”.

Once in their new arrangement, the electrons and holes are not fixed in a rigid configuration. Instead, they are able to move around, much like particles within a liquid. It is this behaviour that inspired the team to dub the new quasiparticle a dropleton.

While the quantum droplets might not have any obvious practical applications, the researchers say their discovery could improve our understanding of how electrons interact in solids – and ultimately lead to better electronic devices. Even with their short lifetimes of about 25 ps, the droplets are stable enough to be studied. Cundiff and colleagues are now working to improve their quantum optical-spectroscopy technique, which will allow them to gain a better understanding of the behaviour of the droplets.

Theory and experiment

Jim Wolfe – a physicist at the University of Illinois who was not involved in this study – told physicsworld.com that “The existence of a stable plasma droplet in a photoexcited gallium-arsenide quantum well is a very interesting and novel idea.” He also commends the team for combining “compelling experimental evidence with detailed theoretical support”.

Patrick Parkinson of the University of Oxford also considers the discovery to be important. “This work reports a highly interesting application of their quantum-spectroscopy scheme, both confirming its utility and revealing a previously unexplored many-body quasiparticle within an extremely well-studied material system,” he says. Parkinson adds that he hopes further studies will more fully explore the dropleton’s properties. “In particular, properties arising from its liquid-like pair correlation function are likely to be of great interest,” he says.

The research is described in Nature.

Rebirth of the SSC?

Following the closure of Fermilab’s 1 TeV Tevatron particle collider near Chicago in 2011 – and with no similar facility planned to replace it in the US – many physicists in the country were, not surprisingly, concerned that America was losing its place at the “energy frontier”. That baton had already passed to the CERN particle-physics lab near Geneva when its Large Hadron Collider (LHC) fired up in 2008, and with collisions set to restart there next year at 13 TeV, the US’s day looked certain to have passed.

Indeed, as we first reported three weeks ago, researchers meeting in Geneva last week discussed plans that would keep Europe at the energy frontier for decades to come with options for an LHC successor – a machine that would be even bigger and bolder than the 27 km-circumference LHC.

One new design tabled at the conference would involve creating a huge 80–100 km tunnel near Geneva that would house a new collider to study the Higgs boson in great detail. In the future, this tunnel could then be used to search for new particles by colliding protons at 100 TeV – much greater than the LHC’s 13 TeV.

However, a group of US physicists from Texas A&M University and Michigan State University is now proposing to wrestle back the energy frontier by constructing a huge accelerator in the US.

In a paper posted on the arXiv preprint server today, the researchers outline plans to use the partly constructed tunnel of the axed Superconducting Super Collider (SSC) just outside Dallas, Texas. Conceived in 1983, the SSC was to be the next big particle collider with a circumference of 87 km and a maximum collision energy of 40 TeV. But 10 years later the all-American project was cancelled, largely on grounds of cost, leaving a few buildings on the surface as well as tens of kilometres of tunnels deep underground.

Most of the cost of a new collider would be in excavating the tunnel, but the researchers claim that around 46% of the SSC tunnel has already been bored and some facilities built, such as the linear accelerator that feeds particles into the collider. This, they argue, would make it much cheaper than the CERN proposal.

The physicists point out that if the SSC tunnel were finished off, it could be home to a 240 GeV “Higgs factory” that would collide electrons with positrons to study the new boson in unprecedented detail.

But their plans don’t stop there. The researchers say that given its “soft consolidated cretaceous rock”, the site in Texas is an “ideal medium for large-bore tunnelling”. The same location could accommodate a new 270 km-circumference tunnel housing a 100 TeV proton–proton collider, with the completed SSC ring serving as its injector. The authors add that in future the collider could even be upgraded to a 300 TeV machine.

There’s nothing like thinking big!

But if the plans do go ahead, there may be some clearing up to do first. During the 2011 March Meeting of the American Physical Society in Dallas, a group of “rogue” physicists took a break from the gruelling conference schedule to sneak into the SSC’s derelict site. They found that the tunnels lie well below the water table and are therefore flooded, while many unopened crates of electronic equipment are simply lying around.

Yet the authors seem serious about their plans: they have submitted the document to a subpanel of the Department of Energy’s High Energy Physics Advisory Panel and plan to discuss the proposal at a workshop at Fermilab in July.

Dark field illuminates X-ray imaging

Radiation that does not play a part in conventional X-ray imaging has been exploited by physicists in the UK to provide comprehensive snapshots of an object’s physical and chemical state. Potential applications of the new technique, known as “dark-field hyperspectral X-ray imaging”, include identification of stress build-up inside engineered structures, security scanning of illicit materials, and analysis of medical biopsies.

Normal radiography of the kind used in hospitals relies on the phenomenon of absorption. A beam of X-rays is fired at an opaque object and the radiation that emerges on the far side is captured by a photographic film or digital detector, with the image mapping variations in electron density inside the object. However, the image cannot be used to identify the materials that make up the object in question.

Bright and dark fields

That limitation has been overcome in the latest work, which has been carried out by Robert Cernik of Manchester University and colleagues at Manchester and the Rutherford Appleton Laboratory in Oxfordshire. Instead of recording what is known as an X-ray beam’s “bright field” – the radiation that passes through the sample – the new approach involves measuring a portion of the “dark field” – the radiation scattered or emitted by the object. “Usually great lengths are taken to remove the scattered radiation,” says Cernik, “but in fact that radiation contains all sorts of extra information not available in conventional imaging.”

The technique involves placing a sample in the path of a relatively wide polychromatic X-ray beam and then positioning a pinhole aperture a few degrees off the beam axis on the far side of the sample. A sensitive, multi-pixel detector then captures the radiation that emerges from the pinhole. Cernik explains that the set-up provides a new way of recording diffraction patterns from the sample. Conventional scattering experiments shine a monochromatic beam onto a crystal, which is rotated until the angle between the beam and crystal structure is such that the diffracted waves interfere constructively to produce a peak in output intensity. In the latest work, the sample and detector can remain fixed because each pixel is designed to record light intensity across a range of different wavelengths, producing what are known as “data cubes”. With data from any one pixel revealing diffraction peaks at specific wavelengths, the combined output from all the pixels allows the various chemical elements and compounds that make up the sample, as well as their crystal structures, to be identified.
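The energy-resolved detection described above can be sketched with the Bragg condition: with the detector held at a fixed off-axis angle, each lattice spacing d in the sample diffracts a different wavelength out of the polychromatic beam, so each material announces itself as a peak at a characteristic photon energy. The scattering angle and d-spacings below are illustrative assumptions, not values from the paper:

```python
import math

# Energy-dispersive Bragg condition at a fixed scattering angle: a
# spacing d produces a first-order peak at E = hc / (2 d sin(theta)).
# The angle (5 degrees off-axis) and d-spacings are illustrative only.
HC_EV_NM = 1239.842                    # h*c in eV nm

def bragg_peak_kev(d_nm, two_theta_deg):
    """First-order Bragg peak energy (keV) for spacing d_nm at a fixed
    scattering angle two_theta_deg."""
    theta = math.radians(two_theta_deg / 2.0)
    lam_nm = 2.0 * d_nm * math.sin(theta)   # Bragg: lambda = 2 d sin(theta)
    return HC_EV_NM / lam_nm / 1000.0

# Aluminium vs its oxide have different lattice spacings, so their
# diffraction peaks land at different energies in the same pixel.
for name, d in [("Al (111)", 0.2338), ("Al2O3 (104)", 0.2551)]:
    print(f"{name}: peak near {bragg_peak_kev(d, 5.0):.1f} keV")
```

This is why, as Cernik notes later in the article, corrosion around a crack shows up: aluminium oxide’s peaks sit at measurably different energies from those of the surrounding metal.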

Imaging the sample – or identifying the positions of the various materials within it – therefore involves colour coding the output of each pixel according to which peaks it contains. This can be done not only for a flat, 2D sample, but also for real-life 3D objects. Rotating the sample by successive small amounts around a vertical axis and imaging it at each step generates a series of slices through the object that allows a “tomogram” to be built up in a similar way to the production of medical CT scans. The difference, as Cernik points out, is that in the case of hospital scans it is the detector and source that rotate, rather than the person being scanned. “Unlike humans,” he quips, “our samples have no ethical rights.”

Diamond devices

Cernik’s group put its idea to the test using the Diamond synchrotron source – they placed a cylinder containing zinc oxide, aluminium and cerium oxide in the path of a square-shaped beam of white X-rays 8 mm wide for between two and five minutes at a time, and recorded the resulting images using a specially developed “High-Energy X-ray Imaging Technology” (HEXITEC) device.

Cernik told physicsworld.com that the technique could be used to monitor stresses inside objects, such as the complex welded components used in aircraft, with variations in crystal spacings across the object revealing any residual stresses. In addition, cracks in metals such as aluminium that are too small to image in any other way might be revealed by the corrosion that surrounds them – the diffraction pattern of aluminium oxide being different to that of aluminium.

Further, says Cernik, the ability to train a sensor to look for characteristic diffraction patterns could mean that the technique finds use both in security scanning – where the chemicals in question might be explosives or drugs – and in medicine. In the latter case, he explains, the technology might be best employed to reduce false-positive diagnoses: confirming the absence of, for example, a certain kind of breast cancer within a biopsy would help patients avoid unnecessary, unpleasant and costly treatment.

Quick scan

Cernik points out that 2D or 3D images showing the crystalline or chemical structure of complex opaque objects can already be produced, thanks to the combination of X-ray diffraction or fluorescence with tomography. But this approach is slow, he says, because it involves a narrow “pencil” beam of X-rays being scanned across or rotated around a sample to build up images bit by bit – as opposed to the direct imaging possible using his group’s technique.

However, critics say that, while this latest work does have significant potential, the dark field generates very low intensities, making the experiment harder to carry out using a conventional X-ray tube. They argue that the team must develop a practical solution – such as using multiple pinholes – for the technique to be commercially useful.

In fact, Cernik has set up a company to commercialize his group’s technology, and says that he and his colleagues are now working to reduce the cost and improve the spatial resolution of the semiconductor materials that make up the detector.

The research is published in Proceedings of the Royal Society A.

CERN creates new office for medical research

Earlier this month my colleague Tami Freeman was at CERN where she had a tour of what will soon be the Geneva-based lab’s first major facility for biomedical research. Called BioLEIR, the facility is now being created by modifying the existing Low Energy Ion Ring (LEIR).

LEIR is currently only used for several weeks each year to supply lead ions to the Large Hadron Collider (LHC). The idea behind BioLEIR is to make more use of the accelerator by creating beams of various types of ions and evaluating how they could be used to destroy tumours.

Of course, many medical applications have spun out of CERN over the years, including high-energy particle detectors for PET scanners and much of the technology used in dedicated accelerators for particle therapy. The lab also hosts experiments that investigate medical applications. These include ACE, which looks at how antimatter can be used for particle therapy.

But now, for the first time in the lab’s 60-year history, CERN has created a dedicated office for medical applications, with BioLEIR being one of its first major projects. The office is headed by Steve Myers, whom Freeman interviewed for her article “CERN intensifies medical physics research”.

Until recently Myers was CERN’s director of accelerators and technology, and last year I spoke to him about the ongoing upgrade of the LHC. That interview appears in our recent Focus on Big Science.

How to steer a qubit using sideways glances

Physicists in the Netherlands say they have manipulated the state of a quantum bit (or qubit) by simply adjusting the strength of the technique they used to measure it. The method, which involves using an ancilla (or helper) qubit and a new technique for “non-demolition” read-out, can be used to “steer” the qubit to the desired state. The work is not only of interest for fundamental physics, but could also find use in future quantum computers and for improving the sensitivity of magnetic-field sensors.

One of the fundamental principles of quantum mechanics is that an object can be in two or more states at the same time. This means that an electron can, for instance, be in two places at once. However, these “superposition” states are never seen in classical, macroscopic objects – as illustrated by the paradox of Schrödinger’s famous thought experiment involving a cat in a sealed box, which clearly could not be both dead and alive at the same time.

Indeed, the very act of trying to find out whether the cat is alive or dead actually changes its state. This act of measuring, through the so-called quantum-mechanical back-action, disturbs the state of a quantum object so that it collapses and behaves like a classical one.

Opening the box

Now, researchers at the Foundation for Fundamental Research on Matter (FOM) and Delft University of Technology say they have succeeded in “opening” the box in which Schrödinger’s cat finds itself by just a small amount. In this way, it is possible to take a “peek” at the cat without destroying its fragile quantum state.

The team, led by Ronald Hanson, replaced the cat with the nucleus of a nitrogen atom in diamond – part of a defect known as a nitrogen-vacancy (NV) centre. The nuclear spin can point up (equivalent to the cat being alive) or down (cat dead). In previous work, the same group showed that it is possible to measure the spin’s orientation by coupling the state of the nucleus to a nearby electron. By varying the strength of this coupling, the researchers were able to control the strength of their measurements.

They found that weaker measurements revealed less information but also had less effect on the spin. Analysing a nuclear spin after such a measurement showed that the spin remained in a superposition of two states – albeit a slightly altered superposition.
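The trade-off between information gained and disturbance caused can be sketched with a toy two-outcome measurement model. This is an illustrative Kraus-operator sketch, not the group’s actual NV-centre read-out scheme; the strength parameter p (with p = 1 projective and p = 0 extracting no information) is an assumption of the sketch:

```python
import math, random

# Toy model of a variable-strength qubit measurement.  The state is a
# pair of real amplitudes (a, b) in the up/down basis; a strength-p
# measurement uses Kraus weights that interpolate between a projective
# measurement (p = 1) and no measurement at all (p = 0).

def weak_measure(state, p, rng=random):
    """Apply a strength-p measurement to state = (a, b).

    Returns (outcome, post_measurement_state); outcome 0 favours 'up'.
    """
    a, b = state
    k_hi, k_lo = math.sqrt((1 + p) / 2), math.sqrt((1 - p) / 2)
    prob0 = (k_hi * a) ** 2 + (k_lo * b) ** 2       # Born rule, outcome 0
    if rng.random() < prob0:
        outcome, post = 0, (k_hi * a, k_lo * b)
    else:
        outcome, post = 1, (k_lo * a, k_hi * b)
    norm = math.hypot(*post)                         # renormalize
    return outcome, (post[0] / norm, post[1] / norm)

# A weak (p = 0.3) measurement only tilts an equal superposition
# slightly towards the observed outcome, rather than collapsing it.
s = (1 / math.sqrt(2), 1 / math.sqrt(2))
outcome, s_after = weak_measure(s, 0.3)
print(outcome, s_after)
```

Running the sketch repeatedly shows exactly the behaviour described: small p leaves the superposition nearly intact at the cost of a noisy, barely informative outcome, while p close to 1 pins the spin to a definite state.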

Steering a spin

Now, Hanson and colleagues have discovered that they can actually “steer” the nuclear spin by applying a series of measurements that vary in strength. Since the outcome of a measurement is not known beforehand, the researchers apply a feedback loop in their experiment. They adapt the strength of a second measurement depending on the outcome of a first, and in this way manoeuvre the nucleus towards a desired superposition state.

To do this, the team uses an “ancilla” qubit, which in this case was the spin of the electron associated with the vacancy of the NV centre. “This ‘ancilla’ qubit helps to partially measure the spin of the nitrogen atom of the NV centre,” explains Delft’s Machiel Blok. The researchers measure the electron’s spin orientation by applying a laser that excites the electron only if its spin is in the up state. The resulting fluorescence signal (bright or dark) reveals the state of the electron: bright means that the electron is in an up state and dark means that it is in a down state.

The measurements and feedback loops developed by the team could be useful for measurement-based quantum computing in the future, say the researchers. “In this particular scheme, a large entangled state is first created between many qubits,” explains Blok. “Actual computation is then performed by sequentially measuring the individual qubits while adjusting the measurement settings depending on the results of previous ones.”

Better magnetic measurements

Furthermore, the NV centre’s electron spin is extremely sensitive to small changes in magnetic field over tiny volumes. As a result, the team’s protocols could be used to make magnetic measurements of biological samples that are more sensitive than current methods based on superconducting quantum interference devices (SQUIDs) or magnetic resonance imaging.

The work is described in Nature Physics and is also available as a preprint on the arXiv server.

Golden-anniversary physics, flaming challenges, smart lists and more

It never rains but it pours, they say, and 1964 experienced quite a downpour of amazing “physics firsts”, as the first papers on quarks, the Higgs mechanism and Bell’s inequality (presented in a paper on the EPR paradox) were all published. Also, Arno Penzias and Robert Woodrow Wilson made their first measurement of the cosmic microwave background on 20 May 1964, detecting the whisper of the Big Bang. To celebrate 50 years since these world-changing discoveries, the Harvard-Smithsonian Center for Astrophysics has produced a webcast (you can watch the video on its YouTube page in a week’s time) featuring the leading cosmologists Alan Guth, Robert Woodrow Wilson, Robert Kirshner and Avi Loeb. You can read more about it in this fascinating article by David Kaiser on the Huffington Post website, in which he takes a deeper look at the eventful year of 1964.

Have you ever had a young child ask you what seems like a simple question – “Why is the sky blue?”, “What is a rainbow?” – only to find yourself completely lost while trying to give them a simple, uncomplicated answer that they actually comprehend? If so, then you might turn to the handy answers provided by people each year who participate in Alan Alda’s “Flame Challenge” – where he challenges scientists to satisfactorily explain a complicated scientific idea to a panel made up of 11-year-old judges. This year’s question is “What is colour?”. Take a look at this article on the Slate website to discover questions from previous years while you start working on your entry for this year.

In today’s technologically fast-paced world, it’s always good to know which companies and businesses are at the top of the innovations pile. The MIT Technology Review has made this its business – its motto states that it “identifies important new technologies – deciphering their practical impact and revealing how they will change our lives”. It has just published its list of the “50 Smartest Companies” – take a look at the nice interactive list and find out what exactly makes a smart company in today’s climes.

For some light weekend reading, take a look at Freeman Dyson’s rather long (eight pages!) review in the New York Review of Books of Brilliant Blunders: From Darwin to Einstein, Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe by Mario Livio; and then make sure to read our more succinct review too. And take a look at this essay series on the Slate website that explores today’s most intriguing puzzles in astronomy.

Ordering electron and nuclear spins in quantum wires

Nuclear and electron spins in a quantum wire may spontaneously form an ordered state at very low temperatures, according to work recently carried out by an international team of physicists. The team was studying the conductance of gallium-arsenide quantum wires and discovered that, at temperatures of 0.1 K and lower, the conductance of the wires dropped below the universal quantized value. This reduced quantization is explained using a theoretical model that proposes that the nuclear and electron spins order themselves in a helical formation at these temperatures.

A queue of electrons

A quantum wire confines electrons to a single direction of movement. As a consequence, and unlike in a regular wire, its conductance is quantized: rather than varying continuously, it takes discrete values in multiples of 2e²/h, where e is the elementary charge and h is Planck’s constant. The factor of two arises because the spin of each electron in an unordered state can take one of two values.
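Neither quantity is given numerically in the article, but the scale of the conductance quantum is easy to evaluate from the exact SI-defined constants; a quick sketch in Python:

```python
# Scale of the conductance quantum G0 = 2e^2/h exhibited by the wires
# at higher temperatures (e and h are the exact SI-defined values).
e = 1.602176634e-19  # elementary charge, in coulombs
h = 6.62607015e-34   # Planck's constant, in joule-seconds

G0 = 2 * e**2 / h    # universal quantized conductance, in siemens
print(G0)            # ~7.75e-5 S, i.e. about 77.5 microsiemens
print(G0 / 2)        # e^2/h, the halved value seen below 0.1 K
```

Each conduction channel in the wire thus contributes roughly 77.5 µS, and the drop to e²/h reported below corresponds to about 38.7 µS per channel.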

Dominik Zumbühl from the University of Basel in Switzerland, along with colleagues at Harvard and Princeton universities in the US, measured the conductance of gallium-arsenide quantum wires at temperatures ranging from 20 K to 0.01 K. They noted that, at the higher temperatures, the wires did indeed exhibit the universal quantized conductance in units of 2e²/h. At lower temperatures, however, they saw something new.

Bringing order to randomness

“Conductances in gallium-arsenide quantum wires had previously been measured only down to 0.3 K,” explains Zumbühl. “It is very difficult to cool such samples to temperatures as low as 0.01 K. A significant amount of work went into filtering and thermalizing both the electrical wires and the sample.” When the scientists reduced the temperature of the system to 0.1 K, the conductance of the wires dropped by a factor of two to e²/h. The conductance was unaffected by moderate magnetic fields and remained at this value as the temperature was lowered further to 0.01 K. This is interpreted as being caused by the spontaneous ordering of the electron and nuclear spins within the quantum wire.

According to a 2009 model developed by Daniel Loss of the University of Basel and colleagues, if the spins inside a quantum wire spontaneously align in the shape of a helix, the conductance should be lowered by a factor of two from its universal quantized value. After considering several alternatives, Zumbühl and colleagues concluded that such a helical spin arrangement was the most likely cause of the observed phenomenon.

Commenting on the paper for the Journal Club for Condensed Matter Physics, Leon Balents of the University of California, Santa Barbara writes, “This proposal [the helical interpretation] explains the insensitivity to [magnetic] fields, and roughly the right temperature scale for the experiment. While conceptually simple, the idea is audacious and I for one am amazed it might be true!”

Direct measurements of spin ordering needed

The spontaneous ordering of nuclear spins has been observed experimentally before, but only at even lower temperatures, typically in the microkelvin range, and never in quantum wires. Should Zumbühl and colleagues’ interpretation, that the drop in conductance is caused by spin ordering, be correct, writes Balents, “spontaneous nuclear magnetism is occurring at a temperature 50 times higher here”. He exercises some caution, though, explaining that the connection to nuclear ordering is circumstantial and that “nothing attributable to the proposed spin helix was observed”.

Zumbühl agrees. “Our data stem from electrical measurements alone with no direct nuclear spin data. More experiments are needed to investigate the nuclear spins,” he says. However, he further explains that the researchers have “data that agree with the helical model without contradictions, while other known models prove to be inconsistent with our observations. We show evidence for a novel state of quantum matter consisting of a system with helical electron spin tightly linked with a nuclear spin helix.”

This electron–nucleus coupling, according to Zumbühl, is at the heart of the arrangement that allows the reduced conductance to manifest at higher temperatures than before. Applying a gate voltage on the wire can expel the electrons, dissolving the electron–nucleus coupling and consequently the helical order. “The reason for the exceptionally high nuclear ordering temperature”, Zumbühl explains, “is the 1D nature of the wire and the resulting strong electron–electron interactions.”

Of course, Zumbühl admits, there remain unanswered questions. “In addition to probing the phenomenon magnetically, we also want to understand the timescales for electron and nuclear magnetization build-up and decay, upon admitting or removing the electrons with a gate,” he says.

“Regardless,” notes Balents, “the observation is a dramatic one, and it is an impressive experiment.”

The research is published in Physical Review Letters.

Researchers spin a yarn into a muscle

Photograph of muscles made by coiling different-sized fibres

An unusually simple approach to artificial muscles – based on high-strength polymer fibres – has been developed by an international team of researchers. Rather than needing sophisticated or expensive materials, the muscles can be produced from simple polymers that are used to make fishing-line or sewing threads. When heated, these fibres can shorten or lengthen far more than biological muscle, and could be used for applications as diverse as temperature-sensitive window shutters, “smart” clothing and robotics.

Synthetic sinew

Materials that expand and contract in response to some form of stimulus are useful for robotics, where they are used to make “actuators” or artificial muscle fibres, and on smaller scales, where they can serve as sensors for lab-on-a-chip devices. Numerous designs for such materials have been developed: shape-memory materials (metals or polymers that exist in two phases and can therefore suddenly shorten or lengthen at a specific temperature); electroactive polymers, which change shape in response to an electric field; and electrochemically stimulated carbon-nanotube yarns. All of these, however, have limitations. Shape-memory metals can degrade after a limited number of cycles and show varying amounts of hysteresis (a reluctance to change phase). Electric-field-driven polymers can require impracticably strong fields. Furthermore, all of these materials – especially carbon-nanotube yarn – are highly specialized and can be expensive.

In the new research, Ray Baughman and colleagues at the University of Texas at Dallas, together with collaborators in Canada, South Korea, Turkey, China and New South Wales in Australia, took a simpler tack. Unlike most materials, polymer fibres tend to shorten when heated, because the polymer chains become more disordered as the entropy of the system increases. On its own, this contraction is quite small – up to about 4% for a 250 K rise in temperature. More significant, however, is that as the chains become more disordered, the fibre becomes thicker.

Twisted tendons

Baughman’s team worked out a simple way to use this radial expansion to amplify or, alternatively, to reverse the thermal contraction. The researchers started with a fibre consisting of highly ordered, linear polymer chains and then twisted it repeatedly, turning the chains into helices. When the heated fibre expanded radially, this increased the lengths of the individual chains’ helical paths, thus creating a torque on the fibre as the increased tension in the polymer chains caused it to try to untwist.

Next, the researchers wound the twisted fibre into a coil. They showed that if the coil is wound in the same direction as the fibre’s twist, the untwisting torque from heating causes the coil to tighten up and shorten by up to 49%. Even more remarkably, if the coil is wound in the opposite direction – i.e. the twist and the coil have opposite handedness – heating will cause the coil to lengthen by up to 69%. This compares favourably with human skeletal muscles, which can expand and contract by a maximum of about 20%, although muscles use energy many times more efficiently.
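The geometry driving the torque can be illustrated with a toy calculation (the numbers below are illustrative assumptions, not figures from the paper): a chain of fixed contour length wound into a helix spans a shorter axial distance once the fibre swells radially, so on heating the twisted fibre must either untwist or, when locked into a like-wound coil, contract.

```python
import math

def axial_span(contour_length, radius, turns):
    """Axial length spanned by a helix of fixed contour length.

    The helix unrolls into a right triangle: the circumferential
    distance (2*pi*radius*turns) is one leg, the axial span the other.
    """
    circumferential = 2 * math.pi * radius * turns
    return math.sqrt(contour_length**2 - circumferential**2)

# Illustrative numbers only: a 10 mm chain wound into 20 turns.
cold = axial_span(10.0, radius=0.050, turns=20)  # fibre radius in mm
hot = axial_span(10.0, radius=0.055, turns=20)   # ~10% radial swelling
print(cold, hot)  # the axial span shrinks as the fibre gets thicker
```

Because the swollen helix can no longer span its original length at the same twist, the fibre develops the untwisting torque that, depending on the coil’s handedness, either contracts or extends the muscle.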

Pulling their weight

The fundamental mechanical properties utilized by the muscles – radial thermal expansion combined with axial thermal contraction – are found in many polymers that can be purchased in a hardware shop. To produce high-strength muscles, the researchers focused on strong polymers such as polyethylene fishing line and nylon sewing thread. Using these, they demonstrated their muscles in a variety of everyday applications, such as shutters that opened and closed in response to changes in temperature or textiles with coiled polymer threads interwoven so that the pores opened as the temperature increased. In the future, these could be used in clothing that allows more heat to escape if the wearer gets too hot. “We have a lot of improvements to make,” says Baughman, “but already these muscles are ready to be deployed commercially. They’re cheap to make – you don’t have to make the precursor and you can make a hell of a lot of muscle very easily.”

Richard James, an expert in shape-memory polymers at the University of Minnesota in the US, feels “it’s not a huge breakthrough but it is very clever and I enjoyed reading about it”. He believes the most valuable part of the research is the geometric effect, in which a coiled coil amplifies the strain produced, although he notes that a temperature change of more than 200 K is needed to produce the largest strains, which could be problematic in applications. “It’s an interesting method,” he concludes. “I can imagine people might combine this with shape memory as the fibres wouldn’t have to be polymers.”

The research is published in Science.

Copyright © 2026 by IOP Publishing Ltd and individual contributors