
Was the universe born spinning?

The universe was born spinning and continues to do so around a preferred axis – that is the bold conclusion of physicists in the US who have studied the rotation of more than 15,000 galaxies. While most cosmological theories have suggested that – on a large scale – the universe is the same in every direction, these recent findings suggest that the early universe was born spinning about a specific axis. If correct, this also means that the universe does not possess mirror symmetry, but rather has a preferred right or left “handedness”.

Led by Michael Longo from the University of Michigan, the team set out to test whether mirror symmetry, also referred to as “parity”, is violated on the largest scales. If a particle violates parity, its mirror image behaves differently, and such particles can be described as right- or left-handed. Parity is violated in nuclear beta decays, and nature shows a strong preference for left-handed amino acids over right-handed ones.

“To my knowledge, no-one had asked the question of whether the universe itself had a preference of say left-handed over right-handed. My idea was to test this by seeing if there was a preferred sense of rotation of spiral galaxies. At that time, I didn’t quite appreciate that, if so, it meant that the entire universe would have a net angular momentum,” explains Longo.

Galaxies in a spin

Longo and a team of five undergraduate students catalogued the rotation direction of 15,158 spiral galaxies using data from the Sloan Digital Sky Survey. They found that galaxies have a preferred direction of rotation – there was an excess of left-handed, or counter-clockwise, rotating spiral galaxies in the part of the sky toward the north pole of the Milky Way. The effect extended out to more than 600 million light-years away.

The excess is small, about 7%, and Longo says that the chance that it could be a cosmic accident is something like one in a million. “If galaxies tend to spin in a certain direction, it means that the overall universe should have a rather large net angular momentum. Since angular momentum is conserved, it seems it [the universe] must have been ‘born’ spinning.”
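To see why a 7% excess among roughly 15,000 galaxies is so unlikely to be a fluke, here is a minimal significance estimate in Python. It assumes the quoted excess means left-handed spirals outnumber right-handed ones by about 7%; the counts are illustrative and the paper’s actual dipole analysis is more involved, so the p-value below is indicative only.

```python
from scipy.stats import binomtest

# Illustrative counts only: suppose N spiral galaxies are classified by
# spin sense, and left-handed ones outnumber right-handed ones by ~7%
# (L/R ~ 1.07), as in the quoted excess.
N = 15158
L = round(N * 1.07 / 2.07)  # left-handed count under a 7% excess
R = N - L

# Null hypothesis: no preferred handedness, so each galaxy is left- or
# right-handed with probability 0.5 and L follows binomial(N, 0.5).
result = binomtest(L, n=N, p=0.5, alternative="greater")
print(f"L = {L}, R = {R}, one-sided p-value = {result.pvalue:.1e}")
```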

What impact would this have on the Big Bang and how the universe was born? Observers in our universe could never see outside of it, so in principle we cannot directly tell if the universe is spinning, explains Longo. “But if we could show that our universe still retains the initial angular momentum within its galaxies, it would be evidence that our universe exists within some larger space and it was born spinning relative to other universes,” he told physicsworld.com. “I picture the Big Bang as being born with spin, just like a proton or electron has spin. As the universe expanded, the initial angular momentum would be spread among the bits of matter that we call galaxies, so that the galaxies now tend to spin in a preferred direction,” he explained. When asked if the preferred spin on a large scale could be induced by some other means, he agrees that, while it may be possible, a net universal spin would be the simplest, and so probably the best, explanation.

Looking for ‘other manifestations’

Longo also points out that the axis of asymmetry that they found is closely related to the alignments observed in WMAP cosmic microwave background distributions. He feels that it would be interesting to see if we could find “other manifestations” of a spinning universe.

The Sloan telescope is in New Mexico, so the data that Longo’s team analysed came mostly from the northern hemisphere of the sky. However, the team did find a similar trend in the southern-hemisphere galaxy-spin data compiled by Masanori Iye and Hajime Sugai in 1991. Longo and his students are now looking through more data to check for a matching excess of right-handed spiral galaxies in the southern hemisphere.

Neta Bahcall, an astrophysicist at Princeton University in the US, feels that there is no solid evidence for a rotating universe. “The directional spin of spiral galaxies may be impacted by other local gravitational effects,” she said. She believes that this could result in small correlations in spin rotation over distances less than about 200 Mpc – whereas the observable universe is about 14 Gpc in size. She feels that the uncertainty quoted in the paper includes only the minimal statistical uncertainty and that no systematic uncertainties – such as local gravitational effects or the fact that galaxies are correlated with each other – have been considered.

A paper on the findings is published in Physics Letters B 10.1016.

New laser technique makes cold positronium

Physicists in the US have shown that positronium – a short-lived bound state of a positron and an electron – can be produced by firing a laser beam onto a silicon surface. Because the technique is highly controllable and operates over a wide range of temperatures, it could prove extremely useful in low-temperature experiments designed to look for tiny differences in the behaviour of matter and antimatter.

Positronium is of interest to physicists in part because it can be used to supply the positrons used to make antihydrogen atoms with well-defined quantum states. Antihydrogen is the antimatter version of hydrogen, consisting of a positron orbiting an antiproton. According to the Standard Model of particle physics, antihydrogen should have the same atomic spectrum as hydrogen. Any difference between the two would reveal an asymmetry between matter and antimatter that could explain why the universe we see is dominated by matter, even though equal quantities of matter and antimatter were thought to have been created in the Big Bang.

Positronium was discovered in 1951 by Martin Deutsch, who created it by stopping positrons in a gas. More recently, researchers have been studying how to create “atoms” of positronium in a controlled way in a vacuum by emission from various surfaces, including silicon.

In March this year David Cassidy and colleagues at the University of California, Riverside, published a paper (Phys. Rev. Lett. 106 133401) describing how they implanted a beam of positrons from a radioactive source into a silicon target and then heated the target in order to liberate positrons that had become bound to electrons from the silicon. However, they were surprised to find that the emitted positronium atoms did not have a broad range of energies, as would be expected if they had been thermally removed from a hot surface. Instead, they found that almost all of the atoms had the same energy of about 0.16 eV.

Binding with positrons instead of holes

The researchers concluded that electrons excited from the silicon’s valence band to its conduction band were being scattered into unoccupied surface states. Normally these electrons would bind with holes (electron absences) to form electron–hole pairs known as surface excitons, but if positrons are present then the electrons can instead combine with them to form exciton-like positronium atoms. It is this binding, say the researchers, which releases a well-defined quantity of energy and pushes the atoms away from the silicon surface.

Now, Cassidy and team have shown that the energy needed to generate these exciton-like states can be provided far more efficiently by laser light than by simply heating the sample. Just before implanting the positrons, the researchers shone green laser pulses onto the silicon, and this substantially increased the flux of emitted positronium. They also found that, with the laser switched on, this flux should remain significant even if the silicon is cooled close to 0 K – something they were not able to show directly but inferred by plotting how the flux rose as they increased the temperature to nearly 1000 K and then extrapolating back down towards absolute zero.
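The extrapolation can be pictured as fitting the measured flux with a two-component model – a constant laser-induced term plus a thermally activated term – and reading off the constant in the limit of zero temperature. The sketch below illustrates the idea; the model form, data points and fitted values are illustrative assumptions, not the group’s actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-5  # Boltzmann constant in eV/K

def flux_model(T, laser_term, thermal_amp, activation_eV):
    """Constant laser-induced emission plus thermally activated emission."""
    return laser_term + thermal_amp * np.exp(-activation_eV / (KB * T))

# Illustrative flux-vs-temperature data in arbitrary units (not the
# group's measurements).
T = np.array([300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0])
flux = np.array([0.50, 0.51, 0.56, 0.66, 0.82, 1.03, 1.30, 1.60])

popt, _ = curve_fit(flux_model, T, flux, p0=[0.5, 10.0, 0.2])
laser_term, thermal_amp, activation_eV = popt
# As T -> 0 the thermal term vanishes, leaving the laser-induced flux:
print(f"Extrapolated flux at T -> 0: {laser_term:.2f} (arb. units)")
```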

‘Imaginative piece of work’

According to Michael Charlton of Swansea University in the UK, this ability to generate significant quantities of positronium atoms even at very low temperatures makes the laser-based technique appealing for the production of antihydrogen, because the combining of antiprotons and positronium must take place in very cold traps. “This is a really imaginative piece of work,” says Charlton, who is a member of the ALPHA experiment at CERN that in May revealed that it had trapped over 300 antihydrogen atoms, some for longer than 15 minutes. “Researchers in the field will take up this technique straight away.”

One of Cassidy’s colleagues, Allen Mills, says that positronium itself could in principle be used to test fundamental physics, for example by dropping a sample of the atoms into the Earth’s gravitational field to find out if, owing to matter–antimatter asymmetry, antimatter actually falls upwards. He also points out that his group hopes to create a positronium Bose–Einstein condensate, within which all of the atoms would exist in the same quantum state, meaning that they would all self-annihilate at the same time and so should generate a gamma-ray laser beam. He says that such a laser could potentially be used to provide the very high energy densities needed to ignite fusion reactions, but cautions that the roughly 10 million positronium atoms generated inside their silicon target are still only about a millionth of the number needed to generate lasing.

The research is described in Phys. Rev. Lett. 107 033401.

Higgs cornered in Grenoble

The latest data from experiments at the Large Hadron Collider (LHC) leave significantly less room for the Higgs boson to hide – that is the take-home message from the Europhysics Conference on High-Energy Physics in Grenoble, France.

“It’s getting real!” remarked one particle physicist excitedly as delegates emerged from Friday afternoon’s session of the conference, which runs until 27 July. In the space of a few presentations, the possible hiding places for the Higgs boson have been shrunk dramatically thanks to data collected this year at the LHC at CERN.

The Higgs boson is a hypothetical particle whose existence would provide the last missing piece of the nearly 40-year-old Standard Model of particle physics. It is the simplest explanation for how the electroweak symmetry was broken in the very early universe, giving mass to elementary particles.

The new results even hinted that the infamous boson may already be rearing its head in particle collisions taking place at the LHC deep beneath the Franco–Swiss border near Geneva.

Nowhere to hide

At the Tevatron collider at Fermilab in the US, the CDF and D0 experiments have been excluding possible Higgs masses for the past several years. On Friday morning CDF and D0 physicists presented their latest results in Grenoble, ruling out mass regions of 157–174 GeV and 162–170 GeV, respectively. Direct searches at CERN’s previous collider, LEP, ruled out a Higgs lighter than about 115 GeV, while indirect constraints from precision measurements of other Standard Model parameters disfavour a Higgs heavier than about 180 GeV.

The LHC has been performing well in recent months, with its four giant particle detectors taking more data per day than were collected during the whole of last year. Although the LHC has not yet collected as much data as the Tevatron, its higher-energy collisions (7 TeV compared with 2 TeV) are much more likely to create Higgs bosons and so the LHC has greater power to exclude certain mass intervals.

“The LHC has finally entered the Higgs game, giving the first direct constraints in the high-mass region” – Dave Charlton, ATLAS

Indeed, the LHC’s ATLAS experiment has now excluded the regions 155–190 GeV and 295–450 GeV, while its sister experiment CMS rules out a Higgs in the ranges 149–206 GeV and 300–440 GeV. “The LHC has finally entered the Higgs game, giving the first direct constraints in the high-mass region ever reached by experiment,” ATLAS deputy spokesperson Dave Charlton told physicsworld.com. “The data were only collected up until three weeks ago so there’s much we still have to look into; but if the Higgs exists, then we’ve now got hints that we should be looking more carefully at the 130–150 GeV region.”

Higgs peeking out?

Intriguingly, both experiments saw slightly more events over background in this low-mass region than would be expected if the Higgs does not exist. The statistical significance of the excess is about 2.7 sigma, which means a roughly 8% chance that the excess was produced by random fluctuations of the data. That is nowhere near the 5 sigma “gold standard” for a discovery, but the fact that both ATLAS and CMS see the same pattern generated a distinct buzz at the Grenoble conference.
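To relate sigma values to probabilities, the minimal sketch below converts between the two using the Gaussian tail. Note that probabilities quoted in particle-physics searches often include a “look-elsewhere” correction for the many mass bins examined, so they can be considerably larger than a naive conversion of the local significance suggests.

```python
from scipy.stats import norm

def sigma_to_pvalue(sigma):
    """One-sided probability of a Gaussian fluctuation of >= sigma."""
    return norm.sf(sigma)

def pvalue_to_sigma(p):
    """Significance corresponding to a one-sided p-value."""
    return norm.isf(p)

print(f"2.7 sigma -> p = {sigma_to_pvalue(2.7):.2%} (local, one-sided)")
print(f"5.0 sigma -> p = {sigma_to_pvalue(5.0):.1e} (discovery threshold)")
print(f"p = 8%    -> {pvalue_to_sigma(0.08):.1f} sigma")
```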

“It is precisely what you would expect to see if a low-mass Higgs was starting to show itself” – Gigi Rolandi, CMS

“It could be an unlucky fluctuation of the background or it could be something common in the way the two experiments model the background, but it is also precisely what you would expect to see if a low-mass Higgs was starting to show itself,” CMS physics coordinator Gigi Rolandi told physicsworld.com. No excess events were seen at higher masses by either ATLAS or CMS.

With Fermilab’s Tevatron collider set to close down in the autumn, the transatlantic race for the Higgs prize is all but over, unless a Higgs with a very low mass exists. ATLAS and CMS will perform the all-important combination of their results in time for the Lepton–Photon conference in Mumbai, India, next month, by which time they will have doubled the amount of data. “Assuming the LHC keeps running as it is, we will know by the end of October whether the Higgs exists or not,” says Rolandi. Researchers from the Tevatron experiments will present their combined exclusion limits in Grenoble on Wednesday.

Some physicists, however, were expressing caution. “It’s too early to get carried away,” said theorist Matt Strassler of Rutgers University in the US. “It’s vital that the background, especially events where two W bosons are produced, is correctly understood.” A signature of the Higgs is that it decays into two W bosons – but such pairs can also be produced by decaying quarks. Distinguishing between the two processes is difficult and getting it wrong could make it seem like there is an excess of Higgs-like events.

Theories ruled out

The new Higgs limits rule out theories that predict a fourth generation of quarks, and they followed a day of presentations in Grenoble on Thursday during which the ATLAS and CMS teams cut swathes through models that attempt to describe nature at scales beyond the Standard Model.

A blizzard of plots displaying the latest LHC data showed no deviation from the Standard Model of particle physics. Black holes and other exotic states that could arise if there are extra dimensions of space now have much less room to hide, as do new force-carrying bosons and other heavy particles that would indicate the existence of physics beyond the Standard Model. Quarks also remain point-like at the energies probed so far.

Perhaps the most troubling finding of the LHC so far is that supersymmetric particles – heavy copies of the Standard Model particles that arise from new quantum dimensions of space–time – have not been seen. ATLAS and CMS independently exclude such “sparticles” with masses less than roughly 900 GeV.

Grappling with the unexpected

Meanwhile, Tevatron researchers are grappling with a number of anomalous features in their data that do not fit within the Standard Model. An unexpected bump at a mass of around 150 GeV that turned up at the CDF detector earlier this year, for example, is still present in a slightly larger data sample, yet it is not seen by its sister experiment D0 or by new analyses performed by the LHC experiments. ATLAS and CMS are also closing in on other anomalous Tevatron results.

“We are in discovery mode,” CMS spokesperson Guido Tonelli says. “This is just the beginning, but within the next year we will have, one way or another, a completely different view of nature which will have major consequences for the field.”

Aerosols must be considered by climate models

The cooling effect of aerosols in the stratosphere must be accounted for properly by computer models of Earth’s climate, say climate scientists in the US who have modelled how tiny particles in the stratosphere contribute to the planet’s heat balance. Otherwise, models will overestimate future global warming, assuming that the background concentration of these aerosols remains at least as high as it is at present.

It is well known that stratospheric aerosols cool the surface of the Earth by reflecting some of the Sun’s energy back into space. These aerosols are tiny solid or liquid particles suspended in the air; they include sea salt and dust, as well as sulphate droplets formed from the sulphur dioxide given off by volcanic eruptions or the burning of fossil fuels.

The concentration of stratospheric aerosols can rise enormously in the aftermath of major volcanic events, and this can lead to a notable drop in global temperatures. However, recent data show that the quantities of aerosols in the stratosphere can vary significantly even in the absence of major eruptions, particularly owing to the effects of moderate eruptions and probably also because of human activities.

Focusing on the background

The latest research, carried out by Susan Solomon of the National Oceanic and Atmospheric Administration’s Earth System Research Laboratory in Boulder, Colorado, and colleagues, focuses on this “background” level of aerosols and takes advantage of the fact that there have been no major eruptions, which obscure the background, since that of Mount Pinatubo in 1991.

Solomon and co-workers took data from a number of sources, including ground- and laser-based measurements recorded at the Mauna Loa volcano in Hawaii, as well as satellite observations. The satellite data reveal that stratospheric aerosols increased by about 7% a year between 2000 and 2010. This implies a change in the Earth’s radiative forcing – a measure of the imbalance between the Earth’s incoming and outgoing energy – of about –0.1 W m⁻². As the researchers point out, this compares with a rise in atmospheric carbon dioxide of about 0.5% a year and an overall increase in radiative forcing over the last decade of +0.28 W m⁻², making the contribution of the aerosols small but “significant”.
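Compounding those growth rates over the decade shows why the aerosol trend matters: 7% a year nearly doubles the aerosol load, while 0.5% a year adds only about 5% to CO2. A minimal check:

```python
# Compound the quoted annual growth rates over the 2000-2010 decade.
years = 10

aerosol_factor = 1.07 ** years   # stratospheric aerosol, ~7% a year
co2_factor = 1.005 ** years      # atmospheric CO2, ~0.5% a year

print(f"Aerosol increase over {years} years: x{aerosol_factor:.2f}")  # ~x1.97
print(f"CO2 increase over {years} years:     x{co2_factor:.3f}")      # ~x1.051
```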

To calculate the effect of this negative forcing on temperatures, Solomon’s group used the Bern 2.5cc climate model. This is not as complex as the “general circulation” models used to compute the overall climate but is, they say, better for investigating small temperature changes that might otherwise be hidden by larger climate trends. They found that the stratospheric aerosols reduced warming over roughly the last decade by 0.07 °C, or 20%, compared with a scenario with no aerosols – the assumption made by many models. They also calculated, based on more limited balloon data, that the presence of background aerosols reduced warming by about 0.05 °C between 1960 and 2000.

Flattening temperature rise

One of Solomon’s colleagues, John Daniel of the Earth System Research Laboratory, says that the group was motivated to model this cooling effect by a recent flattening out of the global temperature rise, although he cautions that this flattening appears to be quite a short-term feature. As previously highlighted by NASA’s James Hansen, who was not involved in the current research, the flattening is present when temperatures are averaged over periods of five years, but not when the average is taken across the whole of the 2000 to 2010 decade.

Daniel, however, emphasizes that whether or not there has been a real flattening of late is essentially irrelevant to the conclusion of the latest paper – that background levels of stratospheric aerosols must be taken into consideration in future climate projections. The researchers say that any climate models that neglect changes to these levels relative to the year 2000 are likely to overestimate warming if aerosol concentrations remain constant or continue to increase. On the other hand, they say, if concentrations were instead to drop back down to the levels last seen in 1960, then global average temperatures would be around 0.06 °C higher by 2020 than they would otherwise have been.

Predicting how these concentrations will in fact evolve is currently not possible, given our inability to predict volcanic eruptions and the large uncertainties that exist regarding future emissions of manmade sulphur dioxide, as well as our still limited understanding of how such emissions contribute to the total background. Daniel points out that if manmade emissions are a significant factor, then increasing sulphur-dioxide emissions could to some extent put a brake on warming. In any case, he points out, the relative importance of these different sources is currently being studied by a number of different research groups and should, he says, probably be much better understood within a couple of years.

Too small to matter?

Steven Sherwood of the University of New South Wales in Australia describes trends in stratospheric aerosol concentrations as “interesting and potentially important” in determining future climate, but he points out that these background levels are small compared to those associated with major volcanic eruptions and he believes that they have little to do with the “modest weakening” of global temperature trends since 2000.

Gavin Schmidt of NASA’s Goddard Institute for Space Studies in New York agrees that any effect is likely to be small compared with that from large volcanoes and intrinsic climate variability such as El Niño events, and he also emphasizes the difficulty of incorporating the contribution of volcanic aerosols into future climate projections. “We can either assume no volcanoes, or make some scenario based on the 20th century or on a random number generator,” he says. “Both have been tried, but neither is satisfactory because the real-world volcanoes are not going to cooperate!”

The research is described in Science 10.1126/science.1206027.

Has the Pioneer anomaly been explained?

Artist’s impression of Pioneer 10 (Courtesy: NASA)

By Hamish Johnston

For more than a decade physicists have known that the space probes Pioneer 10 and Pioneer 11 are following trajectories that cannot be explained by conventional physics – leading some to speculate that this is the result of new and exciting physics.

In what has become known as the “Pioneer anomaly”, both spacecraft seem to be experiencing an extra tug towards the Sun as they move through the solar system. Much has been written about the origins of the extra acceleration, which is 10 billion times weaker than the gravitational pull we feel at the Earth’s surface. Explanations have run the gamut from the gravitational attraction of dark matter, to modifications of Einstein’s general theory of relativity, to string theory and/or supersymmetry.

Now it seems that the answer could be much more mundane: heat generated onboard the spacecraft creates a thrust by escaping in an asymmetrical manner. That’s the conclusion of a paper to appear in Physical Review Letters and written by Slava Turyshev of the Jet Propulsion Laboratory in California and colleagues Viktor Toth, Jordan Ellis and Craig Markwardt.

The quartet’s painstaking analysis of telemetry data suggests that the anomalous acceleration of both spacecraft is decreasing with time. While the exact nature of this decrease is not certain, there is a good chance that it is exponential. This would be consistent with the decay of radioactive material with a half-life of about 27 years. Both spacecraft have radioactive power sources that are still running – so mystery solved.

Well, not quite. Both spacecraft are powered by plutonium-238, which has a half-life of about 88 years. However, the team believes that the more rapid drop in acceleration could be the result of degradation and changes to the thermal properties of the spacecraft over time. When these factors are considered, claim the researchers, a half-life of 27 years seems reasonable.
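The mismatch between the two half-lives is easy to quantify with the exponential-decay formula N(t)/N₀ = 2^(−t/T½). A minimal sketch, with the flight time chosen illustratively:

```python
def fraction_remaining(t_years, half_life_years):
    """Exponential decay: N(t)/N0 = 2**(-t / half_life)."""
    return 2.0 ** (-t_years / half_life_years)

# Illustrative flight time; Pioneer 10 launched in 1972, so the probes
# have been flying for roughly four decades.
t = 35.0
print(f"Pu-238 heat left after {t:.0f} yr (88 yr half-life): "
      f"{fraction_remaining(t, 88):.2f}")
print(f"Anomaly left after {t:.0f} yr (27 yr half-life):     "
      f"{fraction_remaining(t, 27):.2f}")
```

The anomaly thus falls off roughly twice as fast as the raw radioactive heat output, which is why the degradation of the spacecraft’s thermal properties is needed to close the gap.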

Studying the acceleration of Pioneer 10 and 11 has been a daunting task. Turyshev and others have sifted and processed decades-old data – and in some cases data have been found to be corrupt or missing. So it’s nice to see that the hard work is paying off, even if the answer is a bit dull.

Indeed, in 2004 Turyshev co-wrote an article for Physics World about the Pioneer anomaly that says “Dispassionately, the most likely cause of the anomalous acceleration of the Pioneer spacecraft is on-board systematics”.

A preprint of Turyshev’s latest paper is available here.

New heavyweight particle of the Standard Model seen

Baryons (Image courtesy: Fermilab)

By Tushna Commissariat

This past year has seen a fair amount of excitement in the particle-physics community, with bumps and jumps and leaks and debates, but sadly without any spectacular discoveries. In fact, since both the CDF and D0 experiments at the US Fermi National Accelerator Laboratory (Fermilab) reported the production of single top quarks in 2009, it’s been rather quiet on the particle front. So it was quite refreshing to hear that researchers at the CDF collaboration at Fermilab have announced the observation of a new particle – the neutral “Xi-sub-b”. This particle is basically a baryon – a Standard Model particle formed from a combination of three quarks.

Common examples of baryons are the proton – two up quarks and a down quark – and the neutron – two down quarks and an up quark. This new addition consists of a strange quark, an up quark and a bottom quark (s-u-b). While its existence was predicted by the Standard Model, the observation of the neutral Xi-sub-b is significant because it strengthens our understanding of how quarks form matter. The new particle belongs to the group of bottom baryons, which are about six times heavier than the proton and neutron because they all contain a heavy bottom quark. Such particles are produced only in high-energy collisions, and are rare and very difficult to observe.

Once produced, the neutral Xi-sub-b travels a fraction of a millimetre before it decays into lighter particles. Combing through almost 500 trillion proton–antiproton collisions produced by the Tevatron, researchers isolated 25 examples in which the particles emerging from a collision bore the signature of the neutral Xi-sub-b. The analysis established the discovery at a level of 7 sigma, clearing the 5 sigma threshold quite easily.

A paper detailing their results will be available on the arXiv preprint server soon.

Carbon nanotubes could store solar energy

Researchers at the Massachusetts Institute of Technology (MIT) have designed a new solar thermal fuel that could store up to 10,000 times more energy than previous systems. The fuel, which has been studied using computational chemistry but not yet fully tested in the lab, consists of carbon nanotubes (CNTs) modified with azobenzene. It is expected to provide the same energy storage per volume as lithium-ion batteries and to store solar energy almost indefinitely. It can also be recharged by simply exposing it to sunlight – no electricity required.

Solar thermal fuels work by storing energy from the Sun in the chemical bonds of molecules. A typical fuel molecule starts in its ground state, A, and absorbs light from the Sun, which transforms it into a second state, B. Only the geometry of the molecule changes; no chemical reaction occurs. Such molecules are said to be “photo-switchable”.

The molecule is less stable in state B because it has a higher energy. The difference between this energy and the energy of the molecule in state A is the amount of energy stored, ΔH. Even though the molecule is more stable in state A, once it is in state B it will stay there until a “trigger” provides enough energy for it to overcome the energy barrier between the two states.

Heat on demand

When this trigger – which can be in the form of heat, light or a voltage – is applied, the molecule switches back to state A and the stored energy is released as heat. “We can thus store an amount of energy equal to ΔH per molecule in the solar thermal fuel, and then access the energy as heat when and where desired,” explain team members Alexie Kolpak and Jeffrey Grossman. “For example, the heat can be used directly – to boil water, for instance – or to generate electricity.”
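To put ΔH in macroscopic terms, the sketch below converts a per-molecule storage energy into kilojoules per mole and watt-hours per litre. The ΔH value and the density are hypothetical placeholders (the molar mass is simply azobenzene’s, about 182 g/mol), so the output is an order-of-magnitude illustration rather than a figure from the MIT work.

```python
# Convert a per-molecule storage energy (delta H) into macroscopic
# energy density. All numbers are illustrative placeholders.
EV_TO_J = 1.602e-19   # joules per electronvolt
AVOGADRO = 6.022e23   # molecules per mole

delta_H_eV = 1.0          # hypothetical stored energy per molecule (eV)
molar_mass_g = 182.0      # molar mass of azobenzene (g/mol)
density_g_per_cm3 = 1.2   # hypothetical density of the fuel

energy_kJ_per_mol = delta_H_eV * EV_TO_J * AVOGADRO / 1e3
energy_Wh_per_litre = (energy_kJ_per_mol * 1e3      # J per mole
                       / molar_mass_g               # J per gram
                       * density_g_per_cm3 * 1e3    # J per litre
                       / 3600.0)                    # Wh per litre
print(f"{energy_kJ_per_mol:.0f} kJ/mol -> {energy_Wh_per_litre:.0f} Wh/L")
```

An answer in the hundreds of watt-hours per litre is the right ballpark for the lithium-ion comparison made below.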

After the heat is released, the solar thermal fuel can easily be recharged by exposing it to sunlight. In principle, the entire cycle could be repeated indefinitely without any loss of performance.

In the hybrid azobenzene/CNT system studied at MIT, the azobenzene is the molecule that stores and releases energy. The CNT acts to bring the azobenzene molecules into close contact with each other so that the molecules interact. “This interaction opens up an entirely new chemical phase space for tuning both the relative energies of states A and B as well as the reaction barrier between them,” explains Kolpak. “By taking advantage of these additional degrees of freedom, a much higher energy-storage capacity and a longer storage lifetime can be designed into the azobenzene system.” She believes that this may also hold true for a number of other well-known chromophores – chemical structures, such as azobenzene, that absorb light.

High energy density

Although many solar thermal fuels were developed in the 1970s and 1980s, they degraded rather quickly with each cycle. Apart from the new one created by Kolpak and Grossman, there is currently only one solar thermal fuel that can cycle many times without degradation, but it is based on ruthenium, which is rare and expensive. What is more, the volumetric energy density of that fuel is very low, whereas azobenzene/CNT has a value that is 10,000 times greater. “This value is comparable to that of lithium-ion batteries, and high enough for us to realistically envisage our solar thermal fuel in real-world applications,” Kolpak told physicsworld.com. “The fuel also has many other advantages, such as being emission-free and easily rechargeable – you don’t need to be near an electricity source to recharge.”

The team has just started to synthesize and test its compounds and says that it will soon be working with several other groups at MIT to begin developing prototype devices. “We are also continuing to use first-principles calculations to help us design other novel solar thermal fuels,” reveals Kolpak. “We are particularly excited by several systems based on the same concept of combining known photo-switchable molecules with nanoscale templates to enhance their properties.”

The researchers admit that there are still many challenges to overcome before they can even consider commercializing such a technology. On the science side, they need to develop a fundamental understanding of the relationship between the geometry of the hybrid nanostructure-based fuels and their solubility in water and other solvents, and then design and optimize highly soluble structures that still have a high energy density, are thermally stable and cycle well.

“On the experimental side, the first thing we need to do is adapt synthesis methods so that we can attach a large number of photo-switch molecules per CNT, or other ‘template’ molecules, that yield the desired properties,” adds Kolpak. “And finally, we must find ways to integrate solar thermal fuels with existing technologies or design novel and inexpensive new devices.”

The work is reported in Nano Letters.

Hunt for the Higgs explained

By Matin Durrani

With the world’s leading particle physicists meeting in Grenoble right now to discuss the latest results from the Large Hadron Collider at CERN and the Tevatron machine at Fermilab in the US, I couldn’t resist pointing out a great new video from Fermilab’s Don Lincoln about what the Higgs boson is and why it’s interesting.

Rather than launching straight into the nature of the Higgs boson, Lincoln begins quite rightly with the Higgs field – the energy field that permeates the entire universe and interacts with subatomic particles to give them their mass.

In doing so, Lincoln draws a comparison between a barracuda gliding effortlessly through water and Don’s rather rotund buddy “Eddie” who is “no stranger to doughnuts”.

The water serves the role of the Higgs field and the barracuda, “being supremely streamlined”, interacts – like a low-mass particle – only slightly with the field and can glide through it very easily. Eddie, in contrast, moves only very slowly through the water, being like a massive particle that interacts a lot with the water.

In other words, if the Higgs field didn’t exist, then neither the top quark nor the electron, for example, would have any mass at all.

To explain the Higgs boson itself, Lincoln continues his water theme – explaining how just as water is made of individual H2O molecules, so the Higgs field is made up of countless Higgs bosons.

Meanwhile, for the real news from the International Europhysics Conference on High Energy Physics in Grenoble, stay tuned to physicsworld.com, where our reporter is prowling the conference halls and lecture rooms for the latest news.

And finally, if you want more on the hunt for the Higgs, don’t forget this great article from Physics World magazine by prolific blogger and CERN particle physicist Tommaso Dorigo or our own video. It’s from March but still well worth watching.

How often do you use physics at work?

By Margaret Harris

How often do you use physics in your job?


Have your say by voting in our Facebook poll.

The poll allows you to choose from several options, from “practically every day” to “I obey the laws of physics, but I don’t use them at work”.

You’re also welcome to explain your answers further in the comments section or – if you’re a recent physics graduate – to e-mail us at pwld@iop.org to tell us more about how you’re using physics in your job. We’ll use the results in our October special section on careers for physics graduates.

For the moment, though, let’s return to last week’s poll. We asked whether a shift towards privatized space missions would be good for science, and readers who are simultaneously celebrating the safe return of Atlantis and mourning the last shuttle flight can take a little comfort from the result. Some 72% of respondents said “yes”, with 28% voting “no”.

Cloak could hide ships from flowing water

Ships of the future may be able to move through the water without creating a wake. That is according to a pair of physicists in the US, who have proposed a new type of material that lets water flow around an object as if it were not there at all. The design, which has yet to be built, could boost the energy efficiency of ships and submarines – and even prevent them from being detected. “The main function of [our] structure is to prevent fluid flowing around an object from ‘feeling’ that object,” says Yaroslav Urzhumov of Duke University.

The past five years have seen a flurry of research into invisibility cloaks. The first functioning cloak, which operated for electromagnetic waves in the microwave range, was demonstrated by a team led by David Smith at Duke University in 2006, and since then researchers have proposed and demonstrated cloaks that work for visible light, sound and even events in time.

Warping fluid flow

The latest design, put forward by Urzhumov and Smith in a paper due to be published in Physical Review Letters, could be called a water cloak, or more accurately a “fluid-flow cloak”. It is based on the same theory that gave us previous cloaks, namely transformation optics. In the same way that the equations of general relativity show how gravity can warp space–time, so the equations of transformation optics can show how materials with unusual properties can warp the path of light – or indeed other waves, such as sound or water. These exotic materials, known as metamaterials, can guide waves around an object, so that from a distance it appears as though the object is not really there.

Researchers have adapted invisibility cloaks to water before. In 2008 physicists at Liverpool University in the UK and the Ecole Centrale Marseille in France showed how a metamaterial could shield an object from surface water waves. But surface waves are different from fluid flow: in waves, the fluid itself does not go anywhere and so no mass is transferred. Urzhumov and Smith are the first to show how an object could be cloaked so it can move through water without leaving a trace.

One problem that the pair faced is how to make water flow around a vessel and meet up neatly at the stern. For this, Urzhumov and Smith suggest that the metamaterial surrounding the vessel would need to be not just porous, but also to have an anisotropic structure that exhibits a different resistance to the flow at different points around the hull. This could be a lattice of blades supported by wires, suggests Urzhumov.

Tiny pumps needed

Even if the metamaterial were able to steer water around the vessel, there is a bigger problem: the more the water is steered, the more it slows down. It is this change in velocity that is responsible for the frothy disturbance in a boat’s wake. Therefore, suggest the researchers, the metamaterial would need to actively pump water to counteract the loss of speed. Since this pumping would have to be done throughout the metamaterial, the pumps would have to be tiny.

Urzhumov has a couple of ideas in mind. One is a piezoelectric pump, which consists of a small crystal that bends when a voltage is applied across it. Another is an electro-osmotic pump, in which a voltage across a membrane creates a pressure difference, forcing water through. “Electro-osmotic micro-pumps have a much lower flow rate, so they may [only] be used to build a proof-of-principle, scaled-down, slow-moving prototype,” Urzhumov says. “Piezoelectric micro-pumps are the most likely candidates.”

If Urzhumov and Smith’s fluid-flow cloak were built, the researchers predict that one advantage would be efficiency. As a vessel moves, it drags nearby water with it, displacing more mass than it strictly has to. On the other hand, if the vessel were propelled only by the active metamaterial, then it would displace only the minimum water necessary.

Evading detection

Another advantage is silence: the turbulent wake of a vessel is where a lot of its acoustic noise is generated. By killing the wake, the metamaterial should make a vessel quieter. “Acoustic noise is definitely used by defence [agencies] for detection purposes,” says Urzhumov.

Sebastien Guenneau, a physicist at Liverpool University who helped develop the water-wave cloak in 2008, says the fluid-flow cloak could have “tremendous potential applications in aeronautics”, reducing the disturbed flow around boats, submarines and even aircraft. “There are obvious applications in civil engineering, but I guess the military would be interested too,” he adds.

Smith’s lab has built several electromagnetic cloaks in the past, but the Duke group is not planning to create the fluid-flow cloak anytime soon. “Our experimental strength is in electromagnetic metamaterials…we do not have a hydrodynamics testing facility,” says Urzhumov. “It would be much more efficient to build a collaboration with an organization that is already set up for such experiments.”

A preprint of the article is available at arXiv:1106.2282.
