Flux pinning in action

By Hamish Johnston

The above video shows a very nice demonstration of flux pinning in a superconductor. This effect occurs in high-temperature superconductors, which, when exposed to a magnetic field, allow some magnetic field lines to penetrate their bulk. This is unlike most conventional superconductors, such as lead, which expel all magnetic field lines.

The field lines inside the superconductor don’t like to move around, which pins the magnetic field in place. The result is that the magnet and the superconductor don’t want to move relative to each other, which is demonstrated in the video.

Exceptions occur when there is a degree of symmetry in the magnetic field. This is illustrated nicely by showing that a superconductor will rotate on a magnetic disk but not on a rectangular magnet. Even better is when the superconductor is placed above – and then below – a magnetic track and given a shove.

Supercapacitor electrodes go for a dip

A new and simple “dipping” technique that can significantly improve the performance of supercapacitors has been developed by researchers at Stanford University in the US. The method, dubbed “conductive wrapping”, could be applied to a range of electrode materials. It might even be used to improve next-generation electrodes made from sulphur, lithium manganese phosphate and silicon for use in lithium-ion batteries.

Supercapacitors – more accurately known as electric double-layer or electrochemical capacitors – can store much more charge than a conventional capacitor. This is thanks to a double layer that forms at the electrolyte–electrode interface of such devices when a voltage is applied.

The conductive-wrapping technique can further increase the capacitance of a supercapacitor by boosting the conductivity of the electrodes – which enhances the device’s ability to store charge. Developed by Zhenan Bao, Yi Cui and colleagues, the process involves dipping a composite electrode made of graphene and manganese oxide into a solution containing either carbon nanotubes (CNTs) or a conductive polymer. The CNTs or polymer coat the electrode and boost its ability to store charge by more than 20% for the CNT coating and 45% for the polymer.

Higher specific capacitance

The specific capacitance obtained by the researchers (about 380 F/g) is comparable to that of other manganese-oxide-based electrodes, which typically have specific capacitances of 250–400 F/g. However, the hybrid electrodes also show good “rate capability” – which means that they maintain their high capacitance at high charging and discharging rates. This is in contrast to conventional metal-oxide-based electrodes, which usually have poor rate capability because they have low electronic and ionic conductivity.

As a result, the new electrodes can also be used for more than 3000 charge–discharge cycles while retaining more than 95% of their capacitance. When combined with the fact that the electrodes have a much higher specific capacitance than existing commercial carbon-based supercapacitors (150–250 F/g), the conductive-wrapping technique looks promising.
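To put these capacitance figures in context, the energy a supercapacitor electrode can store per gram follows from E = ½CV². The sketch below compares the hybrid electrode with a mid-range commercial carbon electrode; the 1 V cell voltage is an illustrative assumption (typical for aqueous electrolytes), not a figure from the paper.

```python
# Rough energy-density estimate from specific capacitance: E = 1/2 C V^2.
# The default 1.0 V voltage is an illustrative assumption, not from the study.
def energy_density_wh_per_kg(c_spec_f_per_g, voltage_v=1.0):
    """Specific energy in Wh/kg for a capacitance in F/g at a given voltage."""
    joules_per_g = 0.5 * c_spec_f_per_g * voltage_v**2
    return joules_per_g * 1000.0 / 3600.0  # J/g -> Wh/kg

print(energy_density_wh_per_kg(380))  # hybrid electrode, ~52.8 Wh/kg
print(energy_density_wh_per_kg(200))  # mid-range commercial carbon, ~27.8 Wh/kg
```

The quadratic dependence on voltage is why widening the electrolyte's stable voltage window matters as much as raising capacitance.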

Large-scale energy-storage applications

“The hybrid electrode system we have developed shows promise for large-scale energy-storage applications,” says team member Guihua Yu. “From the perspective of materials selection, both graphene and MnO2 are attractive electrode materials given that both carbon and manganese are cheap and abundant. From a processing point of view, our coating method is solution-based and easy to scale up.”

The researchers are now busy working on improving the performance of the electrodes in lithium-ion batteries using the method. “Our novel approach could be applied to a wide range of energy-storage electrode materials that have high energy density but that show limited performance because of their insulating nature,” says Yu.

The results are reported in Nano Letters.

Ergodic theorem passes the test

For more than a century scientists have relied on the “ergodic theorem” to explain diffusive processes such as the movement of molecules in a liquid. However, they had not been able to confirm experimentally a central tenet of the theorem – that the long-time average of the random motion of an individual molecule is the same as the average over the random motions of an entire ensemble of such molecules. Now, however, researchers in Germany have measured both parameters in the same system – making them the first to confirm experimentally that the ergodic theorem applies to diffusion.

The experiments developed from the work of Christoph Bräuchle and a team at Ludwig-Maximilians University in Munich, who developed a technique for tracking individual dye molecules dissolved in alcohol that then pass through a nanoporous material. Such diffusion is of more than just academic interest because it plays an important role in a number of technologies, including molecular sieves, catalysis and drug delivery.

Pinpoints of light

To confirm the ergodic theorem, Bräuchle’s team tracked the molecules by illuminating the sample with light. This makes the molecules fluoresce so that they appear as pinpoints of light when viewed using a high-powered optical microscope. By using dye molecules at very low concentration, the researchers ensured that each point of light corresponded to just one molecule. So, by measuring the intensity profile of a point and finding its centroid, the Munich team was able to determine the position of a dye molecule to within about 5 nm. Individual molecules could then be followed as they moved through the sample by taking a series of snapshots.
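The centroid step can be sketched in a few lines: it is a generic intensity-weighted average over a spot's pixels, not the Munich group's actual analysis code. The pixel size and the toy spot image below are illustrative assumptions.

```python
import numpy as np

# Sketch of centroid localization: a fit-free position estimate for a single
# fluorescent spot. The 100 nm pixel size is an illustrative assumption.
def centroid(image, pixel_size_nm=100.0):
    """Return the intensity-weighted centre of a 2D spot image, in nm."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)        # row and column index grids
    cy = (ys * img).sum() / total
    cx = (xs * img).sum() / total
    return cx * pixel_size_nm, cy * pixel_size_nm

# A symmetric 3x3 spot centred on the middle pixel:
spot = [[0, 1, 0],
        [1, 4, 1],
        [0, 1, 0]]
print(centroid(spot))  # (100.0, 100.0): the centre pixel sits at index 1
```

In practice the achievable precision scales with the spot's photon count, which is how a ~5 nm localization can be extracted from an optical image whose diffraction limit is two orders of magnitude larger.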

Meanwhile, a team led by Jörg Kärger at the University of Leipzig used a nuclear magnetic resonance (NMR) technique to track the diffusion of all the dye molecules in a similar sample. The pulsed-field-gradient NMR method is sensitive only to the collective motion of all the dye molecules and cannot resolve individual molecules. Comparing the results from the two groups showed that the average of many measurements of the diffusivity of individual dye molecules (as measured in Munich) was identical to the collective diffusivity of the dye molecules (as measured in Leipzig). Given that diffusion involves the random motions of molecules, the study therefore confirms the ergodic theorem.
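The idea being tested can be illustrated with a toy one-dimensional random walk: for an ergodic process, the mean-square displacement averaged over time along a single long trajectory matches the average over a large ensemble at the same lag. All step counts and parameters below are illustrative, not experimental values.

```python
import random

# Toy check of ergodicity for a 1D random walk: time-averaged MSD from a
# few long single-walker trajectories vs the ensemble-averaged MSD at a
# fixed lag. For unit steps both should approach <x^2> = lag.
random.seed(1)

def walk(n_steps):
    x, traj = 0, [0]
    for _ in range(n_steps):
        x += random.choice((-1, 1))
        traj.append(x)
    return traj

def time_avg_msd(traj, lag):
    """Time-averaged mean-square displacement of one trajectory at a lag."""
    diffs = [(traj[i + lag] - traj[i]) ** 2 for i in range(len(traj) - lag)]
    return sum(diffs) / len(diffs)

lag = 10
single = sum(time_avg_msd(walk(20000), lag) for _ in range(5)) / 5
ensemble = sum(walk(lag)[-1] ** 2 for _ in range(5000)) / 5000
print(single, ensemble)  # both close to 10 for this ergodic process
```

A non-ergodic system – for example a walker that gets trapped for heavy-tailed waiting times – would show a persistent gap between the two estimates, which is exactly the signature the researchers now hope to hunt for in living cells.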

Conflicting requirements

Bräuchle told physicsworld.com that the main challenge was to find a system that could be studied using both techniques. The fluorescence method works best when the dye concentration is extremely low and the molecules move very slowly – whereas the NMR measurements need much higher concentrations and faster motion. The compromise involved using a special microporous material that slowed down the molecules and constrained them to a plane so that they were easier to track with the microscope. In addition, the dye concentration in the NMR experiments was about 10 times greater than that used for the fluorescence measurements.

Now that the researchers have worked out a way to confirm the ergodic theorem, they are keen to use the technique to search for systems that do not obey the theorem. Bräuchle believes that this could occur when some molecules diffuse through living cells – something that could have important implications for how drugs are designed.

The research is described in Angewandte Chemie.

Which is the most significant popular-physics book?

By James Dacey

Last Friday we expanded our coverage of the literary world with the release of the debut Physics World books podcast. The programme looks at the topic of “women in science”, and it is the first in a series devoted to physics books and the issues they cover.

I personally believe that reading about the history, the personalities and the issues surrounding science can be just as inspiring as doing the science itself. But we want to know what you think. In this week’s poll, we are looking specifically at popular science and the books that may have inspired your interest in physics. The question is:

Which do you believe is the most significant popular-physics book?

A Brief History of Time – Stephen Hawking
The Elegant Universe – Brian Greene
A Short History of Nearly Everything – Bill Bryson
Longitude – Dava Sobel
The Physics of Star Trek – Lawrence Krauss

To cast your vote, please visit the Physics World Facebook page.

These five titles have been taken from a list drawn up in 2008 by the Physics World editorial team to celebrate the most significant popular-physics books of the past 20 years. As we acknowledged at the time, our criteria for selecting these books were, by necessity, highly subjective. So if your favourite book is not included then please let us know by posting a comment on the Facebook poll.

In last week’s poll we looked at the issue of carbon emissions and personal behaviour. My colleague, Tushna Commissariat, had recently attended a talk by James Hansen, the US space scientist who is also well known for his advocacy of action to limit the impacts of climate change. A member of the audience had challenged Hansen on his decision to fly to the UK to talk about the need to rapidly reduce fossil fuel consumption. Hansen replied that it is already too late for his minor sacrifice to make a significant difference, and that the more important thing is to communicate the message that urgent government action is required.

We asked you the following question: Would you consider not attending a conference because it would involve a flight? It seems that the majority of respondents share similar sentiments to Hansen, with 51% choosing the option “No. My sacrifice would have no useful impact.” A smaller number of people, however, may be inclined to take action, as 26% of respondents selected “Possibly. I try to significantly limit my air travel.” Another 18% said “I would take another means of transport, even if it drastically increased my travel time,” and just 5% said “Yes. I would not attend, even if it could hurt my career.”

And in a busy week on our Facebook page we also wanted to hear from you about a new development in astronomy. The Very Large Array, the famous bank of radio telescopes in New Mexico, is about to be renamed following an upgrade, and the National Radio Astronomy Observatory (NRAO) is asking the public to come up with ideas. We encouraged you to enter the NRAO competition, and then share your ideas on our Facebook page.

We’ve seen some creative suggestions! My two favourites were: the Eyes of Hope, suggested by Helmy Parlente Kusuma in Indonesia; and Contact, suggested by Velin Ivanov in Bulgaria. It appears that the facility’s biggest fan is Kyle Murphy in the US – he believes it should be renamed the Serious Gravitas Array because “everything about this scientific achievement is awesome”. Thank you for all your contributions.

Opportunities lost

Nancy Marie Brown’s The Abacus and the Cross is a book with a hero and a villain. The hero is Gerbert of Aurillac, a 10th-century shepherd boy who became a monk, schoolmaster, scientist, mathematician and abbot before reigning at the turn of the first millennium as Pope Sylvester II. Gerbert is the first person in the Latin world known to have used Arabic numerals and the place-value system of counting. His much-used textbook on geometry was not supplanted in the West until 200 years after his death, when full translations of Euclid became available. He designed his own abacus and constructed such instruments as armillary spheres, which were used to represent important celestial circles such as the ecliptic, along with astrolabes, which were used to tell the time and latitude, and to predict the positions of heavenly bodies.

The villain is Gerbert’s lifelong intellectual and political enemy, Abbo of Fleury. Like Gerbert, Abbo became monk, schoolmaster, scientist, mathematician and abbot. Though he was never pope, he has been named a saint, whereas Gerbert’s legacy has been complicated by his popular reputation as a sorcerer. For Brown, a science writer and journalist, the most crucial difference between Gerbert and his nemesis Abbo is that the former showed great creativity, introducing a whole new tradition in mathematics and science in a manner that was distinctively modern and “experimental”. Abbo, in contrast, was much less creative: Brown describes the copious written works he left behind as “disappointingly derivative”, involving merely “well-organized rearrangements of sources commonly used” to create a “fine and tidy summation”.

Brown’s approach in this book is more Abbo than Gerbert. While she provides a good, lively, readable synthesis of scholarly evaluations and translations of the primary source materials, her ample endnotes show little evidence of direct work with primary sources from Gerbert, Abbo or their contemporaries. As Brown observes, not much remains of Gerbert’s own writings, so evidence for his genius can be inferred only indirectly, by tracing how his knowledge spread to his students. She compensates for the lack of documentary evidence by describing what life would have been like for someone like Gerbert. For example, she discusses in vivid detail the typical diet, style of life and pattern of education of a monk; the manufacture and use of parchment, paper and ink; the construction and use of books; and the processes and dangers involved in travel.

Brown’s account of Gerbert’s accomplishments in mathematics and science whets the appetite, but it may not satisfy readers with a scientific background, who will expect to learn in greater detail what was distinctive about Gerbert’s abacus, and how he used it. Her discussion of Gerbert’s complicated political entanglements and ascent to the papacy may be difficult to follow for readers who lack previous familiarity with turn-of-the-millennium history and political intrigue, particularly because she focuses more on what happened (with many names and dates) than on why. A specific point of frustration is her account in chapter 9 of Gerbert’s “figurative poem”. Brown communicates that it was an extraordinary accomplishment, marvellously complex and well worth exploring, but she does not help her reader enter into the poem and its complexities: she sets the table, but she does not serve the meal.

Three themes run through this book. First, Gerbert’s work in mathematics and science serves to show that the European Dark Ages were not that dark after all: creative things were happening, knowledge was advancing, and mathematics and science were already rational and experimental. For Brown, Gerbert served as an important conduit to the West of mathematical knowledge and insight from the Arabic-speaking world. The second theme is that science and religion are not (and were not) really at war. Gerbert is an important example of a religious person who did first-rate work in mathematics and science. It was only much later that Petrarch, Washington Irving, William Whewell, John Draper and Andrew Dickson White popularized the whiggish notion of eternal war between science and religion, by circulating the charge that religious people persisted in the uncritical belief that the Earth is flat, not round – a version of history that Brown is at pains to refute.

The third theme of the book is the ways in which history could have been different. If only Gerbert’s hopes and ideals had been realized, Brown argues, religion and science would be more closely linked, and science would bridge the tensions and differences that separate Christianity, Islam and Judaism. But the death of Emperor Otto III in the year 1002 thwarted the plans and ambitions of Gerbert, who had relied on the emperor’s support to become pope in 999. After Gerbert’s own death in 1003, his “enlightened” dark age gave way to a world of deeper darkness: a world dominated by apocalyptic fear, religious intolerance and crusades; a world in which the idea of a scientist–philosopher pope was no longer thinkable; and a world in which Christian and Jewish scholars were no longer able to work together to translate Greek and Arabic scientific texts.

The Abacus and the Cross represents an intellectually honest, good-faith effort to portray Gerbert and his accomplishments for a popular audience. But both scientifically and theologically minded readers may echo Brown’s plaint of “what might have been”. If only she had delved more deeply into the scientific issues – it would have been interesting, for example, to hear in some technical detail how Gerbert’s abacus actually worked, or how his “figurative poem” played out on multiple levels of meaning. And if only she had entered into some of the relevant theology – it seems strange that a book that sets out to debunk the notion of a “war” between science and religion does not address any theological issues; instead, Brown deals with religion only as an institutional and sociological force. In this book, the only place the Cross shows up is in the title.

On the shoulders of eastern giants: the forgotten contributions of medieval physicists

We learn at school that Newton is the father of modern optics, Copernicus heralded the birth of astronomy and Snell deduced the law of refraction. But what debt do these men owe to the physicists and astronomers of the medieval Islamic Empire? What about Ibn al-Haytham, the greatest physicist in the 2000-year span between Archimedes and Newton, whose Book of Optics was just as influential as Newton’s seven centuries later? Or Ibn Sahl, who came up with the correct law of refraction many centuries before Snell? What of the astronomers al-Tusi and Ibn al-Shatir, without whom Copernicus would not have been able to formulate his heliocentric model of the solar system? In this lecture, Jim Al-Khalili recounts the stories of these characters and more from his new book Pathfinders: the Golden Age of Arabic Science.

Date: Thursday 20 October 2011

Speaker: Jim Al-Khalili
Jim Al-Khalili is a physicist, author and broadcaster. He is professor of physics and also professor of public engagement in science at the University of Surrey, UK. As well as his work on radio and television, he has written a number of popular-science books, the most recent of which is Pathfinders: the Golden Age of Arabic Science. His awards include the Royal Society Faraday Prize (2008), the IOP Kelvin Medal (2011), an OBE in 2008 and a Bafta nomination.

Moderator: Dr Margaret Harris, reviews and careers editor, Physics World

Virus helps build new materials

Scientists in the US have used a common virus to produce materials that resemble skin and bone. In addition to providing new insights into how such materials develop in the natural world, the work also brings synthetic production of tissue in the laboratory closer to reality.

In nature, completely different materials are often assembled from many copies of the same basic molecule, such as a protein. Collagen type I, for instance, is a protein molecule that can combine with various other chemicals to form skin, bone or even eye tissue. This process is called self-templating because individual molecules are not assembled according to an external template. Instead, thermodynamic factors such as temperature and solution concentration are controlled to ensure that the desired configuration is the one that is energetically favoured.

Scientists are keen to mimic these processes – but the extreme sensitivity to thermodynamic factors that drives self-templating makes such molecules extremely difficult to work with in the lab. Indeed, it remains a mystery how nature can achieve the precision control that has so far eluded the laboratory chemist.

Going viral

An elegant solution to this problem is to use the M13 phage as a base unit, rather than a molecule such as collagen. M13 is a virus that attacks E. coli bacteria but is harmless to humans. It is relatively easy to grow and control in the lab because its protein coat can be manipulated by genetic engineering – a trick discovered by Seung-Wuk Lee, Angela Belcher and colleagues at the University of Texas at Austin in 2002.

In this latest work, researchers led by Lee, now at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory, have looked at the physical conditions under which different molecular structures would form from genetically modified M13 viruses. They started with solid plates immersed in a virus-rich salt solution and carefully drew the plates out of the solution, allowing the salt solution to evaporate, leaving a thin film of viruses on the plates.

As the concentration of the virus solution increased, so too did the complexity of the patterns on the plates. At concentrations of 0.1–0.2 mg/ml, the researchers saw a simple alternating pattern of ridges and grooves. However, at a concentration of 6 mg/ml, the patterns on the plates showed much more complex, long-range order that the researchers say is reminiscent of dried ramen noodles.

Pulling noodles

The researchers also investigated the effect of extraction speed. They found that the optical properties of these “ramen-noodle-like” structures varied quite distinctly depending on how fast the plate was withdrawn from the solution. Increasing the pulling speed from 50 µm/min to 80 µm/min reduced the peak reflected wavelength of one film from 490 nm to 388 nm. The scientists point out that natural structures reflecting at such wavelengths are what give some bird feathers and beetle shells their dazzling structural colour.

The team produced bulk 3D structures by using their films as substrates on which to grow cells. The cells grew differently on different films. On one substrate, the scientists even managed to grow mineralized tissue similar to tooth enamel.

Lee underlines that his group does not claim to have replicated the processes used to produce these materials in the natural world – something that is still poorly understood. “We believe the actual process of how nature produces these materials is very far from our process, but the important thing is that we begin to test the importance of the kinetic factors and then begin to engineer, and very closely mimic, the structures that nature produces,” he says.

Belcher, who now works at the Massachusetts Institute of Technology and was not involved in the recent work, is impressed. “I think the most exciting aspect of this work is the ability to use a genetically tunable single molecular building block to finely control the self-templating and assembly of materials over multiple levels of organization. The researchers then exploit this ordering for a diverse set of applications. This system should be readily adapted to other applications and materials,” he says.

The research is described in Nature.

Subluminal neutrino news from Italy

By Hamish Johnston

Physicists were buzzing last month when scientists at the OPERA experiment in Italy hinted that neutrinos may move faster than the speed of light. If you missed all the excitement, the experiment measured the time it takes the particles to travel 730 km from CERN in Switzerland to Gran Sasso in Italy – and came up with a relativity-defying result.
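For context, a quick back-of-envelope calculation gives the light travel time over the 730 km baseline and the fractional excess speed implied by the roughly 60 ns early arrival that OPERA announced (the 60 ns figure comes from OPERA's own reports, not from this article):

```python
# Light travel time over the CERN-to-Gran Sasso baseline, and the size of
# the claimed anomaly. The ~60 ns early arrival is OPERA's reported figure.
c = 299_792_458.0        # speed of light, m/s
baseline_m = 730e3       # quoted baseline, 730 km

t_light_s = baseline_m / c
print(t_light_s * 1e3)   # light travel time, ~2.435 ms

early_ns = 60.0
speed_excess = early_ns * 1e-9 / t_light_s
print(speed_excess)      # fractional excess over c, ~2.5e-5
```

A few parts in 10⁵ sounds tiny, but over a 2.4 ms flight it demands nanosecond-level timing and metre-level baseline surveying, which is why systematic errors were the immediate suspect.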

Although there was much coverage of the “Was Einstein wrong?” sort in the popular media, I suspect that most physicists were quietly thinking “there must be something wrong with the experiment”. Others have been more vocal, with nuclear-physicist-turned-TV-presenter Jim Al-Khalili famously declaring that he will eat his boxer shorts if it turns out to be true.

Well, it looks like Jim won’t be tucking into his briefs any time soon because new data from OPERA’s sister experiment ICARUS have failed to yield any evidence for superluminal neutrinos. More precisely, ICARUS has shown that neutrinos travelling from CERN to Gran Sasso do not emit electron–positron pairs. Emission of such pairs is expected if the neutrinos travel faster than the speed of light, according to a preprint published recently by Andrew Cohen and Sheldon Glashow.

The emission of electron–positron pairs would have a noticeable effect on the energy distribution of neutrinos arriving at both OPERA and ICARUS, yet neither experiment has seen evidence of it.

OPERA’s superluminal result is based on the time it took for neutrinos to travel the 730 km – and now this speed measurement contradicts both the OPERA and ICARUS energy-distribution measurements.

I should point out that the Cohen–Glashow paper has yet to pass peer review (as far as I can tell). However, the preprint seems to meet with the approval of physicists who have blogged about it – and the ICARUS collaboration repeatedly uses the word “must” to describe the effect. And the fact that one of the authors is a Nobel laureate must give it additional kudos.

If you haven’t yet had your fill of superluminal neutrinos, the BBC will be airing a television programme on that very subject tonight. It will be hosted by the mathematician Marcus du Sautoy and you can find more details here.

UPDATE: The Cohen–Glashow paper has been accepted for publication in Physical Review Letters.

Famous physicists on the BBC

By Hamish Johnston

If you happened to be listening to BBC Radio 4 last night, you would have heard interviews with two famous American physicists. First up was Lawrence Krauss of Arizona State University, who is in London to give a lecture on Sunday about our relationship with the cosmos.

Krauss chatted with Quentin Cooper – host of Radio 4’s Material World science programme – about physics in the news, including dark energy, superluminal neutrinos and quasicrystals. You can listen to the programme here and Krauss appears after about 15 minutes.

Next up was cosmologist Lisa Randall of Harvard University, who spoke with Radio 4’s Andrew Marr on his Start the Week programme – which this week was entitled “God and science” and also featured Richard Dawkins and the Chief Rabbi of England, Jonathan Sacks.

Dawkins kicked off with a discussion of his new book entitled The Magic of Reality: How We Know What’s Really True, which aims to show children and their families that the myths surrounding phenomena such as earthquakes and rainbows pale in comparison with the magic and wonder of science. Particularly interesting was Dawkins’ argument that space aliens must have eyes.

Randall speaks about her latest book Knocking on Heaven’s Door about 25 minutes into the programme, which you can listen to here. Earlier this year, she also spoke to our very own Michael Banks, and you can listen to that interview here.

Three electrons for the price of one

Researchers have created a new material that can produce three or more free electrons every time it absorbs a single photon. This is unlike conventional semiconductors, which produce just one free electron per photon. Based on tiny semiconductor structures called quantum dots, the new material – developed by researchers at Delft University of Technology in the Netherlands and Toyota Europe in Belgium – could someday be used to make more efficient solar cells.

Solar cells work by absorbing photons, each of which liberates an electron and a positively charged hole that travel in opposite directions, thereby creating a voltage and a current that can do work. However, when an electron is liberated, much of its kinetic energy is lost to the semiconductor as heat, rather than being available as useful electrical energy. Researchers are therefore keen to develop new materials in which some or all of this energy is captured rather than wasted.

One way of capturing this energy is to use thin films of quantum dots in which the energy needed to liberate an electron can be fine-tuned by adjusting the size of the dots. An electron can therefore liberate more electrons as it travels through a dot in a process known as “carrier multiplication”. Unfortunately, this approach does not involve truly free electrons and holes – but rather excitons, which are bound pairs of electrons and holes. Although excitons can be separated into free charges by applying an electric field or connecting the dots to another semiconductor material, both techniques reduce the efficiency of the devices.
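The size-tuning mentioned above can be sketched with the simplest possible model: a particle in a box, whose confinement energy grows as 1/d² as the dot diameter d shrinks. This is a crude illustration of the trend, not a model of lead selenide dots; the effective mass used below is an illustrative value, not a fitted parameter.

```python
# Crude particle-in-a-box sketch of quantum-dot size tuning: the ground-state
# confinement energy scales as 1/d^2, so smaller dots have larger effective
# gaps. The 0.1 m_e effective mass is an illustrative assumption.
h = 6.626e-34       # Planck constant, J s
m_e = 9.109e-31     # electron mass, kg
eV = 1.602e-19      # joules per electronvolt

def confinement_energy_ev(d_nm, m_eff=0.1 * m_e):
    """Ground-state energy of a particle in a box of width d, in eV."""
    d = d_nm * 1e-9
    return h**2 / (8 * m_eff * d**2) / eV

for d in (3, 5, 10):
    print(d, "nm:", confinement_energy_ev(d), "eV")
```

Halving the dot size quadruples the confinement energy, which is the knob that lets researchers dial in the energy needed to liberate an electron.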

Now, Michiel Aerts and colleagues have made a film of quantum dots in which carrier multiplication occurs with free electrons, rather than excitons. The quantum dots are each about 5 nm in diameter and are made from the compound-semiconductor lead selenide. The films themselves are made by dipping a quartz substrate into a solution of the dots.

Stable, yet conducting

One challenge for Aerts was to make sure that electrons can move easily between individual quantum dots. This is normally a problem because the nanoparticles have to be coated with an electrically insulating organic layer to prevent them from degrading while the film is being made. Aerts and colleagues therefore worked out a way to remove the organic layer from the dots in the film so that conduction can occur.

The carrier-multiplication process begins when a photon is absorbed by a quantum dot, which liberates an electron and hole that can then travel into adjacent dots to liberate further electrons and holes. Using a technique called time-resolved microwave conductivity (TRMC) to measure the conductivity of the films, the team was able to show that – on average – about three free electrons are created per photon when the films are illuminated with 400 nm ultraviolet light. This wavelength is right on the edge of the visible spectrum and therefore abundant in sunlight.
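A rough energy budget shows why about three carriers per 400 nm photon is plausible: the photon carries roughly 3.1 eV, several times a quantum dot's effective bandgap. The ~1 eV dot gap used below is an illustrative assumption (bulk lead selenide has a much smaller gap, which quantum confinement raises), not a value from the paper.

```python
# Energy budget for carrier multiplication: a photon with energy n times the
# dot's effective bandgap can, at best, liberate ~n electron-hole pairs.
# The 1 eV dot bandgap is an illustrative assumption.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

wavelength_m = 400e-9
photon_energy_ev = h * c / (wavelength_m * eV)
print(photon_energy_ev)                    # ~3.1 eV per 400 nm photon

dot_bandgap_ev = 1.0                       # assumed effective gap of the dot
print(photon_energy_ev / dot_bandgap_ev)   # ~3: headroom for ~3 carriers
```

This is only an upper bound set by energy conservation; whether the excess energy actually produces extra carriers, rather than heat, is exactly what the TRMC measurement tests.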

Aerts told physicsworld.com that the team now wants to try to make solar cells from the films. In theory, such solar cells could achieve efficiencies of 44%, compared with the theoretical limit of 35% for conventional silicon cells. Although the quantum-dot films are relatively cheap and easy to produce, making devices out of them is not easy. Apart from lead selenide being a toxic material, the quantum dots deteriorate quickly when exposed to air.

The research is described in Nano Letters 10.1021/nl202915p.

Copyright © 2026 by IOP Publishing Ltd and individual contributors