
Hydrogel matrix makes superhydrophobic surface

A superhydrophobic thin film that can be coated onto virtually any substrate has been synthesized by an international team of researchers. The material, produced using a 3D nanotextured hydrogel matrix, is strong, very flexible and optically transparent. It might be used as a waterproof coating in applications such as self-cleaning windows and antifouling surfaces, or as a filter and sponge to separate oil from water after an industrial oil spill.

Superhydrophobic surfaces efficiently repel water in a phenomenon that is also known as the “lotus effect”. Now, a team led by Guihua Yu at the University of Texas at Austin in the US and Yi Shi at Nanjing University in China has made a new type of superhydrophobic surface comprising a 3D silica nanostructure replicated from a hydrogel template. The resulting hybrid coating consists of 3D interconnected nanofibres with uniform diameters of about 100 nm. Its morphology is like that of the upper surface of a lotus leaf, which contains micron-sized bumps that, in turn, are covered with nanoscale hair-like tubes. The nanofibres trap air under any water drops falling on them, creating a surface that repels water.
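
The air-trapping behaviour described here is commonly quantified with the Cassie–Baxter relation (a standard textbook expression rather than one quoted in the paper itself):

\cos\theta^{*} = f_{\mathrm{s}}\,(\cos\theta + 1) - 1,

where θ* is the apparent contact angle of a drop on the textured coating, θ is the contact angle on the flat material and f_s is the fraction of the drop's base resting on solid rather than on trapped air. The smaller f_s becomes, the closer θ* gets to 180° and the more readily drops bead up and roll off.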

Stretched to their limit

The films produced by these inherently 3D nanotextured hydrogel templates remain superhydrophobic, even when stretched to their limit – and after more than 5000 stretching cycles at 100% strain. This is a first, because most superhydrophobic surfaces made to date lose their hydrophobic properties when exposed to a strain of more than 30%.

The films can be coated onto virtually any substrate, including metals, cement, wood, fabrics and plastics, thanks to the good wettability of the precursor solution on these surfaces. They are also optically transparent (letting through 98% of light falling on them). They might come in handy as screen filters and sponges for separating oil from water, says Yu, because they can absorb up to 40 times their weight in oil.

The researchers made their superhydrophobic films using a polyaniline (PAni) hydrogel template. First, they mixed three precursor solutions together: an aqueous solution of oxidative initiator; an aqueous solution of aniline monomer and phytic acid; and tetraethoxysilane in isopropanol. The polyaniline hydrogel polymerizes and gels fairly quickly, forming a 3D structure within three minutes.

Thanks to the highly acidic, high-water-content hydrogel matrix, the silica layer preferentially coats onto the PAni nanostructured template. Next, the silica layer is chemically modified, or “silanized”, by depositing trichloro(octadecyl)silane onto the template to produce a superhydrophobic surface. The overall process is simple and can be scaled up to produce large amounts of superhydrophobic film, team member Lijia Pan told physicsworld.com.

The Texas–Nanjing researchers say that they are now looking at making super-oleophobic (oil-repelling) surfaces using the same hydrogel matrix template but a different version of their process.

The research is published in Nano Letters.

Molecular footballs, knotty headphones and a naming contest for exoplanets

An artist's impression of an exoplanet

With the final two matches of the FIFA World Cup to look forward to this weekend, I thought I would sneak one more football-related story into the Red Folder. Over on the arXiv blog, there is a nice commentary about the topological nature of World Cup balls through the ages. Why? Well, two chemists in Taiwan have worked out a way to create a carbon-based molecule with the same shape as the football currently being used in the tournament in Brazil. Called the Brazuca, the ball is made from six panels that each have a four-leafed clover shape. Together, they form a structure with octahedral symmetry.


Gluons get in on proton spin

For a quarter of a century, physicists have faced a paradox regarding the net spin of protons and neutrons – the spin of their constituent quarks accounts for only a small fraction of their overall spin. Now, new research carried out by physicists in Argentina and Germany, who have analysed data produced by the Relativistic Heavy Ion Collider (RHIC), suggests that the missing spin might come from the gluons that hold quarks together.

Misplaced spins?

Spin, an intrinsic angular momentum, is a property of both protons and neutrons (collectively known as nucleons). Until the 1980s, physicists had assumed that the spin-1/2 of both the neutron and the proton was simply the sum of the spin-1/2 of their three constituent quarks – with two quarks spinning in the opposite direction to the third. But a series of experiments found that the quark spins contributed only a small fraction to the nucleon spins, leading to what was known as the “spin crisis”. Those experiments involved firing spin-polarized beams of electrons or muons at targets containing spin-polarized nucleons. The idea was to compare how the beam particles scattered when their spins were aligned with those of the target nucleons with how they scattered when the two sets of spins were anti-aligned. The results of these scattering experiments showed that no more than about 25% of nucleon spin comes from the constituent quarks, meaning that physicists could not say where protons and neutrons get the rest of their spin.
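
A standard way of keeping the books on this problem (not spelled out in the article itself) is to write the nucleon's total spin of 1/2, in units of ħ, as

\tfrac{1}{2} = \tfrac{1}{2}\Delta\Sigma + \Delta G + L_{q} + L_{g},

where ΔΣ is the net contribution of the quark spins measured in those scattering experiments, ΔG is the contribution of the gluon spins, and L_q and L_g are the orbital angular momenta of the quarks and gluons.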

One possibility lay with the gluons that hold quarks together and are exchanged by quarks in strong-force interactions. As the experiments studying quark spin cannot measure the properties of gluons, which do not interact electromagnetically, researchers turned to RHIC. Situated at the Brookhaven National Laboratory near New York, it collides two beams of protons – a gluon from one proton can interact with a quark in the other via the strong force.

Gyrating gluons

In the latest work, a group of theorists – Daniel de Florian, from the University of Buenos Aires in Argentina, and colleagues – analysed several years’ worth of collision data from RHIC’s STAR and PHENIX experiments. De Florian and colleagues have now studied data collected up until 2009, and have compared those data with a theoretical model they have developed that predicts the likely spin direction of gluons carrying a certain fraction of the momentum involved in the proton collisions.

The researchers discovered, in contrast to a null result they obtained using fewer data five years ago, that gluon spin does tend to line up with that of the protons, rather than against it. In fact, they estimate that gluons could supply as much as half of a proton’s spin. “This is the first evidence that suggests gluons could make a significant contribution to proton spin,” says team member Werner Vogelsang of Tübingen University in Germany, who adds that, on theoretical grounds, gluons ought to supply the same amount of spin to neutrons.

Dizzy orbits

Vogelsang cautions that he and his colleagues cannot be sure of their result because they have not yet analysed the possible spin contribution of gluons with low momenta. Doing so, he says, will require data from higher-energy collisions at RHIC, where proton energies have recently been increased from 100 to 250 GeV, and potentially from a new generation of very-high-energy electron–proton colliders. These advanced machines might also allow physicists to study another possible source of nucleon spin – the orbital, as opposed to spin, angular momentum of quarks and gluons – an analysis that requires the measurement of extremely rare collision outcomes.

Robert Jaffe of the Massachusetts Institute of Technology in the US praises De Florian and co-workers for their “fine work”, saying that their research is an “important step” in understanding what makes up a proton’s spin. He adds that it makes it even more important for physicists to understand why the three-quark model of the proton works so well in describing properties such as the magnetic moment and yet falls so far short in the case of spin.

The research is published in Physical Review Letters.

Plasmons excite hot carriers

The first complete theory of how plasmons produce “hot carriers” has been developed by researchers in the US. The new model could help make this process of producing carriers more efficient, which would be good news for enhancing solar-energy conversion in photovoltaic devices, for making better photocatalysts and for applications such as water splitting to produce hydrogen, to name but a few.

Plasmons are quantized collective oscillations of conduction electrons on the surface of metallic nanostructures that interact strongly with light. Such enhanced interaction allows them to concentrate light into subwavelength volumes, well below the diffraction limit of light. The phenomenon could be put to good use in a range of technologies, such as light detection and modulation, optical communications, photovoltaics and spectroscopy.

Surface plasmons only live for a short while, after which they either decay radiatively by emitting a photon or non-radiatively by generating electron–hole (charge carrier) pairs, explains team leader Peter Nordlander from Rice University. In the non-radiative case, hot charge carriers are produced. These carriers are electrons and holes that have been excited by photons with high energies.

Capturing hot-carrier energy

In bulk materials, hot carriers quickly cool in a matter of picoseconds, releasing phonons (vibrations of the crystal lattice, or heat). Indeed, such wasted heat can account for up to 50% of the energy losses in present-day solar cells. If the energy of hot carriers could be captured before it converts into wasted heat, solar-to-electric power-conversion efficiencies might be greatly increased.

Hot carriers can also induce chemical reactions – that would otherwise be too energetically demanding – in molecules near the surface of plasmonic nanostructures. Such reactions might help in water splitting, for example. Here, water is separated into oxygen and hydrogen using sunlight, which is a clean and renewable way to produce energy. They might also be used to transfer electrons into molecules or structures nearby – and so act as dopants.

Simple model

To fully exploit these carriers for such applications, researchers need to understand the physical processes behind plasmon-induced hot-carrier generation. Nordlander’s team has now developed a simple model that describes how plasmons produce hot carriers in spherical silver nanoparticles and nanoshells. The model describes the conduction electrons in the metal as free particles and then analyses how plasmons excite hot carriers using Fermi’s golden rule – a way to calculate how a quantum system transitions from one state into another following a perturbation.
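
In its textbook form (quoted here for orientation, since the paper's treatment is more detailed), Fermi's golden rule gives the rate of transitions from an initial state |i⟩ to a set of final states |f⟩, driven by a perturbation H′, as

\Gamma_{i\to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f|H'|i\rangle\bigr|^{2}\,\rho(E_{f}),

where ρ(E_f) is the density of available final states. In this picture the perturbation is the plasmon's oscillating field and the final states are the electron–hole pairs that constitute the hot carriers.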

The model allows the researchers to calculate how many hot carriers are produced as a function of the light frequency used to excite the metal, as well as the rate at which they are produced. The spectral profile obtained is, to all intents and purposes, the “plasmonic spectrum” of the material.

Particle size and hot-carrier lifetimes

“Our analyses reveal that particle size and hot-carrier lifetimes are central for determining both the production rate and the energies of the hot carriers,” says Nordlander. “Larger particles and shorter lifetimes produce more carriers with lower energies and smaller particles produce fewer carriers, but with higher energies.”

The team says that it has also succeeded in characterizing how efficient the hot-carrier generation process is, thanks to a figure of merit that measures how many high-energy carriers are produced per plasmon.

“Our results could help provide strategies for making the hot-carrier generation process more efficient,” says team member Alejandro Manjavacas. “Indeed, we are now busy developing another theory for how hot carriers are produced in transition-metal particles and a third one that describes how the hot carriers evolve over time.” Identifying the timescales involved in carrier decay will be another essential element for optimizing the carrier-generation process, he adds.

The results are published in ACS Nano.

Physics World tackles the valley of death

Many academics believe that they have an idea in them that could lead to a nifty new technology – and make them some cash in the process. But there is a world of difference between discussing an idea in the departmental common room and actually launching a new product to fit into an unexploited niche in the market. One of the biggest challenges that start-up companies face is known as the valley of death, which we have illustrated for you here with this quirky animation.

The voice you hear is that of Stan Reiss, who works for the international venture capitalist firm Matrix Partners. He explains how the valley of death is a metaphor for the financial challenges faced by a spin-off company in the early stages of its development. In this phase, the firm may have a prototype for a product but it might not have the income or the capital to comfortably survive and grow. Often, the company simply runs out of money and falls by the wayside. “There’s a lot of dead bones and skeletons at the end of that valley,” says Reiss.

This video is part of a series we produced following a recent visit to the Boston area of the US, which is a hotbed of academic spin-offs thanks to a glut of world-leading universities and numerous sources of investment. We published a video profile of a company called MC10, which is developing flexible-electronics products that can conform to clothing and skin. We also featured the lab of Joanna Aizenberg at Harvard University, which is creating new materials inspired by biological materials and processes.

We also published a couple of video interviews with professionals involved in the financial aspects of developing spin-off companies. There is an in-depth interview with Stan Reiss, who explains what venture capitalists do on a day-to-day basis. Reiss also discusses the types of thing he is looking for when deciding whether to invest in a science-based spin-off. Then there is an interview with Leon Sandler, who works at the MIT Deshpande Center for Technological Innovation. Sandler explains how the centre was founded at the Massachusetts Institute of Technology (MIT) in order to support the commercialization of technology developed at the university. He talks about the types of innovation the centre nurtures and the different forms of support that it can provide.

If these videos have piqued your interest in the art of commercializing physics, then there will be plenty more for you to feast upon in the near future. In November we will be publishing a special issue of Physics World exploring the different themes involved in the process of taking physics research from the lab to the marketplace. You can also find out more about how research is commercialized, specifically in the materials-science field, via the TMR+ blog. It is published by IOP Publishing, which also publishes Physics World.

You never know, once you have ingested all these great stories, you may feel sufficiently nourished to have a go at tackling the dreaded valley of death yourself. If you do, then good luck with your journey. Just don’t forget to pack the November issue of Physics World!

What is the valley of death?

Many academics believe that they have an idea in them that could lead to a nifty new technology – and make them some cash in the process. But there is a world of difference between discussing an idea in the departmental common room and actually launching a new product to fit into an unexploited niche in the market. One of the biggest challenges that start-up companies face is known as the valley of death, which we have illustrated for you here with this quirky animation.

The voice you hear is that of Stan Reiss, who works for the international venture capitalist firm Matrix Partners. He explains how the valley of death is a metaphor for the financial challenges faced by a spin-off company in the early stages of its development. In this phase, the firm may have a prototype for a product but it might not have the income or the capital to comfortably survive and grow. Often, the company simply runs out of money and falls by the wayside. “There’s a lot of dead bones and skeletons at the end of that valley,” says Reiss.

To find out more, look out for the November issue of Physics World, which will be a special issue exploring the different themes involved in the process of taking physics research from the lab to the marketplace. You can also find out more about how research is commercialized, specifically in the materials-science field, via the TMR+ blog. It is published by IOP Publishing, which also publishes Physics World.

Watch more from our 100 Second Science video series.

Theories of the dark side

Some 13.8 billion years ago, the universe was a hot soup of particles. Since then, it has expanded and cooled, and as it did so, the matter in it clumped together under the action of gravity. The result is the web-like pattern of galaxies we observe today. Over the years, observations have enabled cosmologists to construct a model of exactly how this happened, starting from less than a second after the Big Bang. This is known as the Lambda Cold Dark Matter (ΛCDM) model and it is capable of accommodating an impressive array of astronomical data to high precision. Most notably, it accounts for the patterns that the galaxies make in the sky, the existence and fine details of the cosmic microwave background (CMB) and the synthesis of the light elements hydrogen, helium and lithium.

One of the distinguishing features of the ΛCDM model is that it makes relatively few assumptions about how the universe works. For example, it assumes that gravity behaves in the way articulated in Albert Einstein’s general theory of relativity. The other assumptions concern a mere handful of parameters, the values of which are fixed by observational data. The free parameters include the rate at which the universe is expanding today (the Hubble parameter) and two parameters that fix the very small amount of lumpiness present in the otherwise perfectly homogeneous hot soup.

This article concerns the theory’s other two cosmological parameters: the average energy densities stored in “dark matter” and in empty space. To heighten the sense of mystery, the latter is often referred to as “dark energy”. According to the ΛCDM model, the energy stored in the dark sector makes up 95% of the total energy in the universe (68% dark energy plus 27% dark matter). That leaves only 5% for ordinary matter, and it is certainly quite a striking idea that the visible stuff of the universe – the stuff that makes up things like planets and stars – should only account for a tiny fraction of its energy budget.

But perhaps we should not be too alarmed. Dark matter gets its name from the fact that it does not emit light, and we should not be too surprised that the universe appears to contain more of it than it does ordinary matter. After all, why should our “telescopically challenged” perspective on the universe entitle us to the whole story? The existence of energy in the vacuum should also not surprise us: Einstein’s equations of general relativity include it via the cosmological constant, and our theories of particle physics predict it. The trouble is that the particle-physics predictions appear to be at least 60 orders of magnitude too high; if the vacuum energy were really that big, the universe would expand so quickly that it would be impossible for stars to form. That heinous conundrum goes by the name of the “cosmological constant problem” and it is one of the most pressing problems in fundamental physics. Without a solution, we cannot be happy with our understanding of empty space and its effect on gravity. So although the ΛCDM model works well, it has nothing to say as to the origin of dark energy. It also does not explain the origin of dark matter.

We know it’s out there…

The existence of dark matter was first conjectured in the 1930s as a means of explaining the motions of the galaxies in the Coma cluster. Later, it was invoked to explain the motions of stars within individual galaxies. Today, the evidence for dark matter comes from a diverse range of phenomena, including the synthesis of the light elements in the early universe and “gravitational lensing” – a phenomenon that occurs when light from distant astronomical objects, such as galaxies, is distorted by the gravitational pull of intervening matter.

Perhaps the most important piece of evidence comes from the evolution of structure in the universe. At one time, not long after the Big Bang, the universe was very close to being perfectly uniform. On very large scales, it still appears smooth, like a gas of galaxies weakly linked by gravity. Yet from our perspective, matter in the universe seems far from uniformly distributed: it clumps together to make stars, planets and people. How did such structure evolve?

Remarkably, the tiny seeds of this structure can be measured directly by looking at the CMB. This background radiation originated when the universe was a mere 380,000 years old and it is almost perfectly uniform except for tiny deviations. Precise measurements of these deviations, most recently by the Planck satellite, provide the single most stringent test of the ΛCDM model and can be used to extract all of the model’s parameters to an accuracy of a few per cent or better.

Measurements of the CMB tell us that initial deviations in the universe’s uniformity were small. If ordinary matter were all there is, these perturbations could not have grown fast enough to form the cosmic web of filaments and clusters we see today: radiation pressure would have pushed them apart before gravity could pull them together. The presence of a substantial amount of dark matter, interacting weakly with radiation, made it possible for this dark matter to coalesce much earlier. In time, ordinary matter gathered around the dark-matter structures, eventually producing galaxies – the structure and distribution of which still bear the imprint of those early deviations from uniformity.

…but what is it made of?

When discussing what theorists have to say about dark matter, we should start with a caveat: what follows is not complete. Where ignorance reigns, theorists revel, and there is a mind-boggling array of weird and wonderful speculations as to what might be happening on the dark side of the universe. It would be impossible to cover them all in an article of this length. Instead, we will take a look at some of the leading contenders.

The simplest idea is that dark matter is ordinary matter that is too dim for us to see, such as planets or dead stars. But this idea does not add up, because these objects should be made from the same stuff that we already know only accounts for 5% of the universe’s total energy density. “Primordial” black holes, which may have formed from over-dense regions of matter before the time when the light elements were cooked up, would evade the 5% boundary. However, astrophysical and theoretical constraints appear to rule them out as the sole source of dark matter. (The exception would be black holes with a mass that is less than that of our Moon, but we would need to explain how such black holes could be produced.)

Photos of the Axion Dark Matter eXperiment, the Large Underground Xenon experiment and the SuperCDMS experiment

So, what else could the dark matter be? Fortunately, we are not entirely in the dark. For example, we know that dark matter should not decay to other particles on timescales smaller than the age of the universe. Otherwise, we would have detected it. It should also interact only weakly with particles of normal matter, because otherwise we would have experienced it via something other than its gravitational effects. We also know that it should be predominantly “cold”. Here, “cold” is a technical term that means dark-matter particles should not have been moving anywhere near the speed of light at the onset of galaxy formation, which happened when the universe was around 50,000 years old. This constraint is necessary because otherwise the weakly interacting dark-matter particles would not have hung around long enough to clump together and build structures as small as galaxies. For this reason, neutrinos – which have a very low mass, and travel very close to the speed of light – are not a viable cold dark-matter candidate.

Since there are no other dark-matter candidates within the Standard Model of particle physics, physicists have concluded that dark matter must be a genuinely new form of matter. However, the properties described above are very general and they do not narrow things down as much as we would like. How should we progress from here?

Well, one option would be to forget about theory and try to hunt the dark-matter particles down. Direct-detection experiments are predicated on the idea that dark-matter particles do occasionally interact with ordinary matter, and their goal is to intercept such particles as they pass through a detector. However, these experiments will be scuppered if the dark-matter particles interact with normal matter too rarely, or not at all. They would also fail if the dark-matter particles are too heavy (the number of dark-matter particles passing through the detectors should fall off in inverse proportion to their mass) or too light (in which case the dark matter’s “signature” – the recoil of a nucleus of normal matter – would be too feeble to be detected).
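
The mass dependence of the first failure mode is easy to see: for a fixed local mass density of dark matter, the heavier each particle is, the fewer of them stream through a detector. As an illustrative back-of-the-envelope estimate (using the commonly quoted local density of about 0.3 GeV cm⁻³ and speed of about 220 km s⁻¹, values that do not appear in this article), a 100 GeV particle would give a flux of roughly

\Phi \sim \frac{\rho\,v}{m_{\chi}} \approx \frac{0.3\ \mathrm{GeV\,cm^{-3}} \times 2.2\times10^{7}\ \mathrm{cm\,s^{-1}}}{100\ \mathrm{GeV}} \approx 7\times10^{4}\ \mathrm{cm^{-2}\,s^{-1}},

so heavier candidates mean fewer crossings per second, while lighter ones deposit recoil energies too small to register.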

An alternative approach is to try to detect dark-matter particles indirectly. For example, they might well be slowed down and captured by the gravitational pull of the Sun or the centre of the galaxy, where they could build up and, if they are able to annihilate each other, start to convert into other particles. The result of this dark-matter annihilation would look like an excess of high-energy particles coming from a particular source, and would in principle be detectable. Finally, we can look for the production of dark-matter particles in colliders like the Large Hadron Collider at CERN.

A theoretical approach

All of these options for detecting dark matter are currently being pursued, and there have been some encouraging hints (but nothing more) in the data. This leaves theorists with a choice: they can build models inspired by those hints, or they can be more guided by purely theoretical considerations.

The two most famous theoretical dark-matter candidates have been around for a long time now, and neither was introduced to solve the riddle of dark matter. The first of these candidates is called the lightest supersymmetric particle (LSP). As its name implies, the LSP is predicted in many “supersymmetric” (SUSY) extensions to the Standard Model of particle physics. There are many good reasons for supposing that the Standard Model requires extending, but for our purposes, we need only know that SUSY predicts that for every boson/fermion in the Standard Model there should exist a partner fermion/boson, and that the lightest of these predicted “super-partners” is typically stable, weakly interacting and sufficiently massive to make an excellent candidate for dark matter.

The downside is that supersymmetric extensions to the Standard Model come with a lot of unknown parameters, so there are many viable implementations of SUSY. Of those that have stood the test of time (including non-detection at CERN’s Large Hadron Collider), most predict that the LSP should be a particle known as a neutralino, which is a combination of the superpartners to the particles that carry the weak force (the W and Z bosons) and the Higgs boson.

The LSP is an example of a weakly interacting massive particle (WIMP). In the dark-matter search, WIMPs now constitute the main paradigm, and for good reason. The fact that the LSP is a WIMP is a strong motivator in itself, as is the fact that WIMPs are thought to interact weakly with atomic nuclei and to annihilate in pairs, producing high-energy photons and neutrinos. As we saw in the previous section, both behaviours provide opportunities for detection.

But perhaps the most compelling feature of WIMPs is something called the “WIMP miracle”. It might be overstating things to speak of miracles, but it is certainly intriguing that the thermodynamics of the nascent universe make it possible to predict the current-day abundance of WIMPs if we know the rate at which WIMP pairs annihilate each other. The “miracle” here is that the abundance of WIMPs would agree with the evidence for dark matter if the WIMP is weakly interacting and has a mass in the 10 GeV to 1 TeV range. These are exactly the properties expected of the LSP.
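
The estimate behind the “miracle” is a textbook relic-abundance formula, quoted here for context rather than taken from the article:

\Omega_{\chi}h^{2} \approx \frac{3\times10^{-27}\ \mathrm{cm^{3}\,s^{-1}}}{\langle\sigma_{A}v\rangle},

where ⟨σ_A v⟩ is the thermally averaged annihilation cross-section and h is the Hubble parameter in units of 100 km s⁻¹ Mpc⁻¹. A weak-interaction-strength cross-section of about 3 × 10⁻²⁶ cm³ s⁻¹ gives Ω_χ h² ≈ 0.1, essentially the measured dark-matter density, with no further tuning.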

The other main dark-matter candidate that arose out of a theoretical development that had nothing to do with dark matter is the axion. This hypothetical particle emerged in the 1970s as a consequence of something called the “strong CP problem” in quantum chromodynamics (QCD), which is the theory that governs the strong nuclear force. The axion is thought to be very light, with experiments and observations constraining its mass to lie somewhere between a micro- and a milli-electronvolt. Although light, it remains a good cold dark-matter candidate because it is predicted to be produced essentially at rest.

Graph showing the expected mass range and interaction strength of various dark-matter candidates

The axion is also predicted to interact only weakly with ordinary matter. Nevertheless, its small coupling to photons can be exploited to look for axions via their conversion into photons in the presence of a strong magnetic field. Observing such a conversion is the goal of the Axion Dark Matter eXperiment, and it should be able to cover a substantial portion of the theoretically favoured mass-coupling range over the next few years.

Well motivated as they may be, the LSP and the axion are but two of many possibilities (see above). For example, the question of why the densities of dark matter and visible matter just happen to be of the same order of magnitude has inspired the development of “asymmetric dark matter” models. We know that the current-day visible-matter density is the tiny residue left over after a process of matter–antimatter annihilation in the early universe. The idea is that the mechanism that produced a slight excess of matter over antimatter in the early universe also produced a slight excess of dark matter over dark antimatter. And, just as for ordinary matter, the dark matter and the dark antimatter subsequently annihilated to leave behind the residue of dark matter we infer today.

If this line of thinking is correct, there ought to be roughly the same number of dark-matter particles as there are protons in the universe. Since the dark-matter energy density is around five times that of the ordinary matter, this implies that a dark-matter particle should be around five times more massive than a proton. The precise value of the mass depends on the specific details of the model, but it is typically in the 5 to 15 GeV range – which just happens to correspond to the region where a number of direct-detection experiments have reported tentative evidence of a dark-matter signal. On the downside, at such low masses, the experiments lose sensitivity because the energy deposited by the dark matter is too small to be detected.
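
The arithmetic behind that mass estimate is straightforward bookkeeping (a rough sketch using the article's own numbers): if dark-matter particles and protons are roughly equal in number, the ratio of their energy densities fixes the ratio of their masses,

m_{\mathrm{DM}} \approx \frac{\Omega_{\mathrm{DM}}}{\Omega_{\mathrm{b}}}\,m_{p} \approx 5 \times 0.94\ \mathrm{GeV} \approx 5\ \mathrm{GeV},

with the details of individual models pushing the preferred value up towards the 15 GeV end of the quoted range.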

On to dark energy

Compared with dark matter, the history of dark energy is short. Its existence was first indicated in the late 1990s, when observations of distant type Ia supernovae – highly predictable and very bright exploding white dwarf stars – showed that the expansion of the universe is accelerating. This was an unexpected, even counterintuitive, finding: it seems to imply that a large part of the energy density of the universe arises as a result of something that exerts a negative pressure. This is certainly not something that ordinary matter does, so we call this mysterious negative-pressure stuff “dark energy”.

Within the ΛCDM model, the universe’s accelerating expansion (and hence dark energy) is attributed to a non-zero cosmological constant in Einstein’s equations of general relativity. The supernovae measurements give us some information about the size of this cosmological constant, and we can also extract its value by examining the fluctuations in the CMB. For example, using data collected by the Planck satellite, one can infer that dark energy accounts for 69% of the universe’s total energy density – a value that agrees well with the supernovae data. Studies of the growth of structure in galaxies point to a similar conclusion.

The concept of a non-zero cosmological constant is consistent with all of the observational data we have collected so far. However, the data do not prove that the vacuum energy really is constant. Indeed, the data may perhaps be hinting that there is something more interesting going on with dark energy than just a cosmological constant. It seems we are living at a very special time in the history of the universe. In the time since the light elements formed when the universe was a few minutes old, the energy density in matter has fallen (due to the expansion of the universe) by a factor of about 10²⁵. During this time, the energy density associated with a cosmological constant would have stayed constant, so it is quite a coincidence that today just so happens to be the time when the two have roughly equal values (5% and 68% are classed as “roughly equal” in this case). This “coincidence problem” could conceivably be avoided if the energy density in the vacuum has not remained constant over time. Maybe it is small today because the universe is old.
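
That factor of 10²⁵ is essentially the cube of the expansion since the light elements formed, because the matter density dilutes as the cube of the cosmic scale factor. As a rough consistency check (assuming, as is standard, that nucleosynthesis took place at a redshift of a few times 10⁸, a number not quoted in the article),

\rho_{\mathrm{m}} \propto a^{-3} \;\Rightarrow\; \frac{\rho_{\mathrm{m}}(\mathrm{then})}{\rho_{\mathrm{m}}(\mathrm{now})} = (1+z)^{3} \sim (4\times10^{8})^{3} \approx 6\times10^{25},

which is the order of magnitude quoted above.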

Another clue regarding the nature of dark energy comes from particle physics. If the universe were permeated by a spin-zero field – something rather like the field associated with the Higgs boson, which gives rise to the masses of the elementary particles in the Standard Model – then dark energy would arise as a natural consequence. In a nod to Aristotle’s mysterious “fifth element”, the spin-zero field associated with dark energy is known as “quintessence”.

The idea of using a spin-zero field to cause cosmic acceleration is not new. It is a cornerstone of most modern theories of inflation, which suggest that the expansion of the universe accelerated very rapidly for a very short time right at the birth of the universe. In contrast, the quintessence field needs to be active right up to the present day. The field will automatically generate a negative pressure if its energy is predominantly made up of potential energy. In the case that the kinetic energy in the field is zero, the dark energy behaves like a cosmological constant but, more generally, quintessence will lead to a dark-energy density that varies over time. Needless to say, current and planned observations have their sights set firmly on establishing to what extent the dark-energy density does vary with time.
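
A compact way to express this (standard quintessence bookkeeping rather than anything derived in this article) is through the field's equation-of-state parameter

w = \frac{p}{\rho} = \frac{\tfrac{1}{2}\dot{\phi}^{2} - V(\phi)}{\tfrac{1}{2}\dot{\phi}^{2} + V(\phi)},

where ½φ̇² is the kinetic energy of the field and V(φ) its potential energy. If the kinetic term vanishes, w = −1 and the field behaves exactly like a cosmological constant; any residual kinetic energy raises w above −1 and makes the dark-energy density evolve with time.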

The encouraging news is that it is possible to construct models of quintessence in which the dark-energy density is at no time much smaller than the energy density in matter or radiation. This would handily explain the coincidence problem. Unfortunately, there are currently no compelling theoretical models of quintessence: the new spin-zero field must be introduced without any external motivation and its potential energy chosen “by hand” to deliver the desirable evolution of the field’s energy density. But of course, the fact that we are ignorant does not make the idea of quintessence a bad one.

A ghostly 3D map showing where dark matter is found in the universe

Another option, and one that has a considerable amount of superficial appeal, is to drop one of the key assumptions of the ΛCDM model. Rather than invoking a new field to explain the observed cosmic acceleration, we could instead entertain the idea that Einstein’s theory of gravity needs modification on the largest scales. However, despite a good deal of effort, proponents of modified gravity theories have been unable to make a compelling case. In order to accommodate the twin requirements of highly constraining data (the general theory of relativity is very well tested) and theoretical consistency (it is easy to build models that end up making no sense), modified gravity theories are often forced into looking very much like Einstein’s theory with a cosmological constant.

Together, dark matter and dark energy represent possibly the most serious challenge to our understanding of the laws that govern the cosmos. Although our ideas about them are often well motivated, it is possible that none of them will turn out to be correct. As we so often learn, there are far more ways to be wrong than right. Moreover, the answers will not necessarily come quickly. Dark matter, for example, might be so weakly interacting that we can only ever learn about it through its gravitational influence. In that case, we might never be able to identify its particle properties.

It is also possible that dark matter and dark energy are just the tip of an iceberg, in the sense that the underlying theory is much more than just a simple extension of the Standard Model. Perhaps there is a whole “dark sector”, with its own equivalent of the Standard Model. In that case, we would have a lot of work to do before we could claim a full understanding. After all, the physics of the 5% of our universe that is not dark matter or dark energy is beautifully rich, and understanding it has been the work of many experiments and many lifetimes.

Static electricity helps geckos get a grip

The amazing ability of some geckos to scale smooth walls and cling to ceilings could be primarily a result of contact electrification. That is the claim of researchers at the University of Waterloo in Canada, who have made a new study of the electrostatic interactions between the lizard’s feet and two different surfaces. Their conclusion contrasts with conventional thinking, which attributes the stickiness of gecko feet to Van der Waals forces.

The exceptional climbing ability displayed by many geckos comes from their specially adapted toe pads. Each pad is covered in layers of microscopic, hair-like structures – or setae – that split into smaller, spatula-shaped tips. Being so small, the tips can get close to the surfaces on which the geckos walk, forming an intimate contact. Each seta contributes only a tiny attraction, but together they produce a combined adhesive force of about 10 N for each foot, which allows geckos to hang from a ceiling by a single limb. Letting go is not a problem because the adhesive effect is directional, allowing a gecko to detach by simply re-orientating its foot.

According to conventional theory, the attraction is a result of Van der Waals interactions. These are the weak dipole–dipole forces that act between adjacent atoms and molecules as a result of shifting electron concentrations.

Exchanging electric charges

In their new study, Alexander Penlidis and colleagues looked at how contact electrification could contribute to gecko adhesion. This effect occurs when two materials touch and exchange electric charges. The result is a net negative electrostatic charge on one material and a positive charge on the other, which causes an attractive force between the two.

To test whether these interactions could be contributing to the adhesive abilities of geckos, the researchers measured the electric charges and adhesive forces generated when gecko toe pads were stuck on two insulating polymer surfaces – one of Teflon AF and one of polydimethylsiloxane. In both cases, on contact, the geckos’ toe pads became positively charged and the surfaces negatively charged. Furthermore, the adhesion strength correlated with the magnitude of the electrostatic charge that was generated. Despite having a lesser potential for generating Van der Waals forces, Teflon AF was seen to have a much stronger adhesion than the other substrate. This, say the researchers, suggests that contact electrification plays a major role in gecko adhesion.

Still clinging in ionized air

The finding could overturn 80 years of conventional wisdom that electrostatic interactions are not involved in gecko adhesion. Penlidis and colleagues believe this dismissal can be traced back to an experiment described in 1934 by the German scientist Wolf-Dietrich Dellit. Ionized air – which would neutralize electrostatic interactions – was blown towards a gecko clinging to a metal surface and had no effect on its ability to hang on. Penlidis and colleagues explain that Dellit’s observation is consistent with their conclusion because the contact between seta and substrate is so close that ionized molecules in the air would not be able to get between the two to neutralize the interaction.

While contact electrification could play an important role in a gecko’s ability to scale smooth walls, it is not clear whether the force is any help on rougher surfaces. “The [research] clearly shows how electrostatic forces can play an additional role in enhancing adhesion in geckos, which is an aspect that had not been previously considered,” says Duncan Irschick, a biologist at the University of Massachusetts who is developing a synthetic, reusable adhesive based on gecko feet. Irschick questions, however, “whether such forces are relevant for natural surfaces that geckos have evolved to use, such as leaves, trees, etc”.

In 2002 Kellar Autumn of Lewis & Clark College in Oregon was the first to observe the Van der Waals interaction in gecko feet. Commenting on this latest research, he says that “This is a novel and important discovery, and suggests that electrostatic forces could contribute to adhesion in geckos on some surfaces, such as Teflon.” However, Autumn is not convinced that electrostatic forces are dominant, pointing out that this conclusion is not supported by the results of the study. “Moreover, the use of whole animals rather than isolated setae, and only one axis of force measurement, makes the results difficult to interpret,” he adds.

The research is described in the Journal of the Royal Society Interface.

Carbon nucleus seen spinning in triangular state

Physicists have obtained important new evidence showing that the structure of the carbon-12 nucleus – without which there would be no life here on Earth – resembles that of an equilateral triangle. The evidence was obtained by physicists in the UK, Mexico and the US by measuring a new rapidly spinning rotational state of the nucleus. The finding suggests that the “Hoyle state” of carbon-12, which plays an important role in the creation of carbon in red giant stars, has the same shape too. Recent theoretical predictions, in contrast, had suggested that the Hoyle state is more like an obtuse triangle or “bent arm”.

All the carbon in the universe is created in red giant stars by two alpha particles (helium-4 nuclei) fusing to create a short-lived beryllium-8 nucleus, which then captures a third alpha particle to form carbon-12. But exactly how this reaction occurs initially puzzled physicists, whose early understanding of carbon-12 suggested that it would proceed much too slowly to account for the known abundance of carbon in the universe. Then in 1954 the British astronomer Fred Hoyle predicted that carbon-12 had a hitherto unknown excited state – now dubbed the Hoyle state – which boosts the rate of carbon-12 production.

Three years later the Hoyle state was confirmed experimentally by physicists working at Caltech. However, the precise arrangement of the protons and neutrons in the carbon-12 nucleus remains a matter of much debate. While some physicists feel that carbon-12 is best thought of as 12 interacting nucleons, others believe that the nucleus can be modelled as three alpha particles that are bound together. The rationale for the latter model is that alpha particles are extremely stable and so are likely to endure within the carbon-12 nucleus.

Molecular inspiration

If carbon-12 is indeed well described as three alpha particles, molecular physics could provide important clues about how those particles are arranged. In 2000 Roelof Bijker of the National Autonomous University of Mexico (UNAM) and Francesco Iachello at Yale University suggested that the three alpha particles could arrange themselves in an equilateral triangle in which the three alpha particles are all in the same plane. Such a structure had already been spotted five years earlier in the triatomic hydrogen molecular ion, H3+.

Now, Bijker has joined forces with Martin Freer and colleagues at the University of Birmingham and Moshe Gai at the University of Connecticut to obtain the best experimental evidence so far that carbon-12 is indeed shaped like an equilateral triangle. The experiment was carried out at Birmingham’s cyclotron by firing a beam of alpha particles at a carbon target to produce carbon-12 nuclei in high spin states. These nuclei, which spin like tops, rotate so fast that they tear themselves apart by emitting alpha particles.

By measuring the energy and angular distribution of the alpha particles, the team observed a high spin state that had never been seen before. When analysed along with four lower spin states measured in previous experiments, the new data suggest very strongly that the carbon-12 nucleus resembles an equilateral triangle that has been set spinning like a three-pointed pinwheel.

Breathing nucleus

This description applies to the ground-state rotational band of carbon-12, but it also has significance for the Hoyle state. This is because the spectrum of the Hoyle-state rotational band appears to be similar to that of the ground-state band – with two of the five spin states measured already. However, the Hoyle state appears to have a larger moment of inertia than the ground state. This suggests that the Hoyle state is a “breathing mode” whereby the equilateral triangle expands. This expanded nucleus can itself be set spinning, resulting in a series of excited states similar to that of the ground-state band of carbon-12.
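
The rotational-band language used here is essentially that of a simple rigid rotor, a textbook simplification rather than the team's full analysis. A nucleus with moment of inertia I is expected to have rotational states of spin J at excitation energies of roughly

E_{J} \approx \frac{\hbar^{2}}{2I}\,J(J+1),

so a Hoyle-state band built on a larger moment of inertia – the expanded, “breathing” triangle – should show more closely spaced levels than the ground-state band.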

This evidence pointing towards an equilateral-triangle-shaped Hoyle state appears to be at odds with the recent calculations that suggested that it is more like an obtuse triangle. However, alpha-decay measurements do not give physicists the complete picture of the shape of a nucleus and the only way to be sure of the structure is to study the gamma rays that are given off when a spin state decays. While such studies are commonplace in nuclear physics, they are much harder for carbon-12 because the nucleus is much more likely to decay by emitting an alpha particle than a gamma ray.

Freer and colleagues are now, however, developing an experiment that will try to capture the gamma rays given off by the spinning carbon-12 nuclei and hope to be making measurements before the end of this year. So 60 years after the Hoyle state was predicted, we may finally know its shape.

The research is described in Physical Review Letters.

Modelling molecular magnets

The complete magnetic properties of the prototype molecular magnet Mn12 have been modelled, for the first time, by an international team of researchers. The calculations will be crucial for developing real-world devices from the material, as well as for studying fundamental nanoscale quantum phenomena such as magnetic tunnelling.

Single-molecule magnets, such as Mn12, Fe8, Mn4 and V15, are natural ensembles of identical, weakly interacting magnetic nanoparticles that can switch their magnetization between two states, from “spin up” to “spin down” for example. At low temperatures, the magnetic state of the molecule persists even in the absence of a magnetic field. Such a “memory effect” could be exploited to make high-density information storage devices for computing applications and in molecular electronics in general.

Mn12 is a prototype molecular magnet and as such, is also an ideal model object in which to study physical phenomena such as spin dynamics and quantum decoherence in nanoscale quantum systems. The molecule contains 12 magnetic ions with high spins, so the space taken up by these quantum states (known as the Hilbert space) is very large.
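
To get a feel for just how large, a quick count helps (assuming the usual composition of Mn12 – eight spin-2 Mn³⁺ ions and four spin-3/2 Mn⁴⁺ ions – details that are not given in the article). Each ion of spin S contributes a factor of 2S + 1, so the dimension of the Hilbert space is

\dim\mathcal{H} = \prod_{i}(2S_{i}+1) = 5^{8}\times4^{4} = 390\,625\times256 = 10^{8},

or about 100 million quantum states for a single molecule.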

No ‘adjustable parameters’

“Our calculations prove that modern quantum-physics models can be used to study magnetic interactions in such a complicated system, from first principles,” explains team leader Mikhail Katsnelson of Radboud University of Nijmegen in the Netherlands. “Thanks to methods that we developed previously, we have now managed to analyse all the magnetic interactions within this molecule – and without any ‘adjustable parameters’.” The calculations seem to back up results from inelastic neutron scattering experiments on Mn12, he further explains.

Until now, most theoretical work on molecular magnets mainly relied on the so-called rigid-spin model, in which the whole system of interacting spins is replaced by just one big spin, with some magnetic anisotropy being introduced “manually”. However, such a description is rather simplistic and largely ignores intermolecular interactions.

Interactions are important

From their earlier work, the researchers had suspected that special kinds of magnetic interactions, known as antisymmetric exchange or Dzyaloshinskii–Moriya (D–M) interactions, could, on the contrary, play a crucial role in the physics of molecular magnets. “We have now confirmed this assumption quantitatively,” says Katsnelson.

The D–M interaction is a consequence of the spin–orbit interaction, which couples electronic orbital and intrinsic spin magnetism. It is the part of this interaction that favours perpendicular coupling of magnetic moments, rather than the normal parallel/antiparallel coupling. The interaction causes a periodic (left, right, left, right) twisting (or “canting”) of the magnetic moments in weak ferromagnets. It also appears to be responsible for the magnetoelectric effect in multiferroics – materials with both magnetic and electric ordering. The latest calculations could perhaps now inspire new experiments for detecting a weak in-plane antiferromagnetic ordering originating from the D–M interactions.
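
In spin-Hamiltonian form (a standard way of writing these couplings, included here for context rather than taken from the paper), the D–M term sits alongside the usual Heisenberg exchange as

H = \sum_{ij}\bigl[\,J_{ij}\,\mathbf{S}_{i}\cdot\mathbf{S}_{j} + \mathbf{D}_{ij}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j})\,\bigr],

and it is the cross product that favours canted, perpendicular arrangements of neighbouring moments, in contrast to the collinear (parallel or antiparallel) ordering preferred by the Heisenberg term.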

Spurred on by their new results, Katsnelson says that he and his co-workers will now be applying their model – which also takes into account Heisenberg exchange interactions and magnetic anisotropy – to other molecular magnets and nanoscale magnetic clusters.

The research is published in Physical Review B.
