
Now you see it, now you don't

MINOS


By Michael Banks

Blink and it’s gone.

No, it’s not the latest in the search for the Higgs boson at the Large Hadron Collider near Geneva, but instead a slight difference in mass between neutrinos and their antimatter counterparts, antineutrinos.

Neutrinos come in three “flavours” – electron, muon and tau – that change or “oscillate” from one to another as they travel through space.

It is generally thought that neutrinos and antineutrinos should have the same mass. Last year, however, results from the MINOS experiment at Fermilab, near Chicago, showed a 40% difference in the oscillation parameters of muon neutrinos and muon antineutrinos (converting into tau neutrinos and tau antineutrinos, respectively) as they travelled from the accelerator to the MINOS detector (shown above) some 735 km away in the Soudan mine, Minnesota.

The results were presented with a “confidence level” of around 90–95%, which in statistical terms is approximately “two sigma” (usually a “discovery” requires five sigma).
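
For readers who like to see the numbers, the sigma-to-confidence conversion is just the area under a Gaussian. Here is a minimal sketch, assuming the common two-sided convention (individual experiments can define their quoted confidence levels differently):

```python
from math import erf, sqrt

def two_sided_confidence(n_sigma):
    """Fraction of a normal distribution lying within ±n_sigma of the mean."""
    return erf(n_sigma / sqrt(2))

for n in (1, 2, 3, 5):
    print(f"{n} sigma ~ {100 * two_sided_confidence(n):.5f}% confidence")
# 2 sigma comes out at ~95.45%, matching the "around 90-95%" quoted above;
# the 5 sigma "discovery" threshold corresponds to ~99.99994%.
```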

Although the two sigma significance was small, the result was backed up three days later by a three sigma effect at MiniBooNE, another experiment at Fermilab. It saw a difference when muon neutrinos oscillate into electron neutrinos compared with the related process for muon antineutrinos.

Physicists noted that if the result turned out to be true it would not come as a surprise, but as an “overwhelming shock”.

But now it seems as though those fears have at least been partially allayed. After gathering twice as much data, researchers at MINOS announced yesterday at the Lepton Photon 2011 meeting in Mumbai, India, that they found the difference had dropped from 40% to 16%.

So it seems that there is still a disparity, but more data will be needed before we can be sure whether there is any mass difference between neutrinos and antineutrinos.

Milky Way stars born from intergalactic gas

Astronomers using the Hubble Space Telescope may have solved the mystery of how the Milky Way continues to spawn new stars at a consistent rate despite its diminishing gas reserves. They say the galaxy is being supplied by clouds of gas originating from outside of the Milky Way, and that these findings could help refine our knowledge of galaxy evolution.

The Milky Way currently converts 0.6–1.45 solar masses’ worth of gas into new stars every year, depleting the galaxy’s gas reserves. Yet the star-formation rate doesn’t seem to be dropping, which suggests that something must be replenishing the supply. Ionized high-velocity clouds (iHVCs) – conglomerations of gas that move too fast for their motion to be explained by the rotating disc of the galaxy – are a proposed culprit. One suggestion is that they could be remnants from the formation of the 30-plus galaxies in the Local Group, drawn in by the Milky Way’s gravity. If they do originate beyond the galactic disc, and then fall onto it, they could be bolstering the amount of gas in the galaxy.

It is not clear how large these clouds are, but they were first found when astronomers noted that some of the light from distant quasars was being absorbed by objects near the edge of the galaxy. However, the huge distances involved meant it was unclear whether the iHVCs were directly associated with the Milky Way’s halo – the diffuse sphere that surrounds the galaxy – or existed beyond it. In order to solve this problem, Nicholas Lehner and Jay Christopher Howk, of the University of Notre Dame, US, adapted the quasar technique.

“Instead of observing quasars, we observed stars within the Milky Way’s halo”, Lehner told physicsworld.com. The pair observed 28 halo stars with the Hubble Space Telescope, 14 of which showed absorption lines in their spectra similar to those seen in the original quasar observations – revealing the presence of an iHVC. The distances to these stars are well known, and so set a maximum possible distance for each iHVC – confirming that these clouds lie within the galaxy’s halo.

Why doesn’t the galaxy run out of gas?

Knowing the distance is the first piece in a jigsaw. “The mass of the iHVC is proportional to the distance squared,” explains Lehner. Lehner and Howk then used the original quasar observations to model the likely distribution of these iHVCs across the sky. Knowing where they are, how much gas they contain and how fast they are moving allowed the pair to estimate how much gas should fall on the Milky Way per year. “We predict that between 0.8 and 1.4 solar masses of material from iHVCs falls onto the Milky Way annually,” says Lehner. Compare that to the 0.6–1.45 solar masses consumed in star formation every year, and there is a potential answer to why the galaxy doesn’t run out of gas: it is commandeering it from intergalactic space.
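
To make the logic concrete, here is a toy calculation with placeholder numbers – these are not Lehner and Howk’s actual values – showing how the distance-squared scaling feeds into the gas budget:

```python
# Illustrative only: placeholder numbers, not Lehner & Howk's data.

def cloud_mass(ref_mass, ref_distance_kpc, distance_kpc):
    """The inferred mass scales as distance squared: the same angular
    size and column density imply a bigger, heavier cloud if it is
    assumed to lie further away."""
    return ref_mass * (distance_kpc / ref_distance_kpc) ** 2

print(cloud_mass(1e5, 10, 5))   # halving the assumed distance quarters the mass
print(cloud_mass(1e5, 10, 20))  # doubling it quadruples the mass

# The paper's bottom line is a comparison of two rates (solar masses per year):
infall_rate = (0.8, 1.4)           # estimated iHVC gas falling onto the galaxy
star_formation_rate = (0.6, 1.45)  # gas consumed in making new stars
# The two ranges overlap, so infalling iHVC gas can plausibly balance the books.
```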

“They found the magic number,” Filippo Fraternali, who researches iHVCs at the University of Bologna, Italy, told physicsworld.com. “It is not 0.1 or 100 solar masses infalling each year, but very close to one – this is an important result,” he adds. However, the result isn’t watertight. “It is the right approach, but there are big assumptions that may change that final number quite a bit,” Fraternali explains. He would like to see a much bigger sample than the original 28. “It is hard to get good statistics on a sample of that size,” he says.

Now that the iHVCs are confirmed halo objects, Lehner is planning just that. “We’re going to go back to the quasar database, which is much larger than our stellar sample,” he explains. “If we want to understand how galaxies evolve then we need to understand how this gas gets in and out of them,” he adds.

This research was published in Science.

The future of the James Webb Space Telescope


By Tushna Commissariat

With scientists and politicians debating the fate of the James Webb Space Telescope, this week we are asking your thoughts on the subject. Following cost over-runs, a US congressional committee has moved to cancel the $6.8bn James Webb Space Telescope, poised to be the successor to the Hubble Space Telescope. Should funding be reinstated or should NASA focus on other projects?

Do feel free to explain your position by posting a comment on the poll. You can vote on this poll on our Facebook page.

Results just in

Last week we asked you what you thought was the main benefit of studying physics at university. Options ranged from “Learning how the physical world works” to “Developing strong problem-solving skills”, “The wide range of career opportunities it can bring” and “The chance to play with some cool hi-tech equipment”. Among the 229 people who voted, “Learning how the physical world works” was the most popular with 137 votes, followed by “Developing strong problem-solving skills” at 63 votes. Interestingly, our “other” option, which encouraged people to let us know of reasons for studying physics that did not fall into any of the above categories, drew 16 votes, with a few people pointing out that they chose physics to have a career in military research labs or, in one case, to “make something go boom”.

For some, like reader Craig Levin, it was more about the type of course one signs up for. “If you’re taking ‘Physics for Poets’, you get a whizz-bang tour of the universe and how it works. If you’re taking a lab course, you’re getting a more in-depth picture and picking up some problem-solving skills and a little bit of project management,” he sagely pointed out. A tongue-in-cheek comment from reader Russell Davies read: “I abandoned physics at age 18, because it appeared fraught with problems of limited career opportunities, limited income potential and a distinct lack of babes.”

Thank you for taking part in the poll and for taking the time to provide your thoughts. And don’t forget to vote in this week’s poll on our Facebook page.

Peering into the past of near-Earth asteroids

Eight years after the Hayabusa mission was launched to retrieve material from a near-Earth asteroid, the first scientific analyses of these rocks have been released. Among the findings published by an international team of scientists is the discovery that common chondrites found on Earth derive from the same origins as this stony “S-type” asteroid. The implication is that meteorites scattered across the globe may contain preserved information about the early solar system.

The Hayabusa mission, launched by the Japanese space agency JAXA in 2003, was designed to land on the Itokawa asteroid – a 500 m-long body that lies around 300 million kilometres away from Earth – and return a sample to Earth by 2007. But having landed on the surface of Itokawa to collect samples in November 2005, the craft suffered technical glitches – including being hit by a solar flare – that delayed its return by an additional three years. The sample capsule eventually landed in the Woomera Prohibited Area in South Australia in June 2010, and the largely intact container of some 1500 extraterrestrial grains was sent back to Japan for examination.

Parent rock

The essential result of the analysis of the 40 particles examined so far is confirmation of the belief that the most common meteorites found here on Earth, known as ordinary chondrites, are born from S-type asteroids like Itokawa. The main goal of the Hayabusa mission was to demonstrate that S-type asteroids are primitive solar system bodies that “record” the long history of early solar system events.

Tomoki Nakamura from Tohoku University, Japan, and colleagues are the first to analyse the loose surface material, or regolith, that was brought back, using electron microscopes and X-ray diffraction techniques. “In our experiments, we analysed single particles by applying different techniques in order to know different aspects of the [same] particle. We needed to show asteroidal material is identical to chondrite meteorites, because we already knew that the chondrites are most primitive materials in the solar system,” explains Nakamura, who is one of the lead authors of a series of six papers published in the journal Science this week. All six papers are by the same team, with different lead authors looking at various aspects of the research that were conducted with different techniques.

The lead paper states that the regolith samples show signs of impact shocks and a large amount of heating. This suggests that the asteroid underwent a thermal evolution as the interior slowly heated up and reached a peak temperature of 800 °C, before it cooled down again very slowly. Because many of the particles studied had experienced this high temperature, this suggests that the particles on the surface of the asteroid were once at considerable depths within its body. This led the team to conclude that the asteroid was initially a much bigger body that suffered a large impact. “The size reduction occurred during a heavy impact that broke the parent body of 20 km into small pieces; some of which gathered again to form the [current] 0.5 km asteroid Itokawa,” says Nakamura. The researchers now believe that the formation time of Itokawa goes back to the early solar system 4.5 billion years ago.

Spectrum of results

The other papers published by the group cover a host of topics – one looks at the oxygen isotope ratios and minor element abundances in Itokawa particles, while another ties S-type asteroids with chondrite meteorites. Another paper investigates evolution processes of the asteroid’s surface. This led to the finding that the particles on the surface were first formed by fragmentation of larger rocks. In addition, exposure to solar winds has changed the colour of the particles and seismic activity in smooth terrain has reduced their size gradually.

“Through this analysis we now have a detailed understanding of the history of asteroid formation. So tiny particles gave us a big result.” Tomoki Nakamura, Tohoku University, Japan

Another study compared the dust with regolith sampled from the Moon, showing that there are chemical differences between lunar dust and the Itokawa samples. The researchers attribute these differences to chemical alteration by space-weathering and meteoroid impacts on the asteroid surface. A final paper looks at the noble gas isotopes helium, neon and argon to map a history of irradiation from solar wind and cosmic rays on the asteroid’s surface. The team found that Itokawa is continuously losing its surface materials at the rate of tens of centimetres per million years.

Nakamura says that he was rather surprised to see just how similar the asteroid particles were to chondrites. “When I analyse the particles I always feel I am analysing meteorites,” he said. He also said that the results have helped with better understanding of asteroid formation. “The particles recovered from the asteroid Itokawa are very small, mostly less than 0.1 mm. But through this analysis we now have a detailed understanding of the history of asteroid formation. So tiny particles gave us a big result.”

The quantum century

Books that try to explain quantum physics to the general reader have become a well established non-fiction (well, mostly non-fiction) genre. In this recent addition to the field, The Quantum Story: a History in 40 Moments, Jim Baggott plucks from the 20th-century history of quantum physics a set of “moments”, each one being the brief story of a significant discovery, tied to a single individual or a small group. He proceeds, largely in chronological order, from the year 1900 to just beyond 2000. It is an ambitious undertaking, and makes for a literally weighty book. But although Baggott goes at his task with unrelenting enthusiasm, the results are ultimately disappointing.

Baggott’s choice of “moments” is not the problem. His selections are relatively well balanced. Of the 40, a dozen are devoted to the development of quantum physics, another 12 to particle physics, 11 to the “meaning” of quantum physics (too many, in my opinion), and five to quantum cosmology and quantum gravity. The only missing moment, as I see it, is Fermi’s 1934 theory of beta decay, which set the stage for understanding that annihilation and creation of particles is the bedrock of all interactions, not just those involving the electromagnetic force. Apart from this omission, and the overemphasis on complementarity and “reality”, I cannot fault the selection.

More problematic is Baggott’s philosophical bent, which he reveals in the book’s preface. Here, he states that recent experiments “strongly suggest that we can no longer assume that the particle properties we measure necessarily reflect or represent the properties of the particles as they really are”. Baggott chooses to emphasize these words by repeating them on page 356 (except for the qualifying “strongly suggest that”). Indeed, questions about the nature of reality run throughout the book, and Baggott cannot quite free himself from the supposition that if an electron is emitted at A and absorbed at B, it must have existed as a particle at points between A and B. In fact, we cannot even say that it was always the same electron, much less that it had a definable existence or definable locations between A and B. All of which shows that opinion about what is reasonable and what is not reasonable, what is real and what is not real, remains part of what we call physics.

Another problem – although this is more a matter of taste – is the book’s style. If you love breathless prose, this is the book for you. Baggott’s account has physics lurching from crisis to crisis on a collision course with philosophy, and his leading characters burn with emotion as they are consumed by anger, ambition or bitter disappointment. For example, on page 158, I I Rabi is said to have been “incensed” when he famously asked about the muon, “Who ordered that?” Actually, he was joking. Similarly, in its early days, the quark model was not just questioned, it was, says Baggott, “treated with derision”.

Baggott’s end notes run to 29 pages and his bibliography to nearly 150 titles. He has obviously worked long and hard on this project, and many of his discussions reflect a depth of understanding. For instance, in “Moment 11” he explains well that waves in classical physics exhibit a kind of uncertainty. Similarly, his treatment of Bell’s theorem (Moment 31), although not easy reading, is accurate. And his discussion, in an “interlude”, of the famous meeting between Heisenberg and Bohr that took place in German-occupied Copenhagen in 1941 is balanced.

Despite this careful preparation, the book still contains quite a few physics errors, which is perhaps not surprising in a work by a non-physicist. Some are minor, such as a wrong definition of a mole (p17) or an incorrect explanation of the way that polarizing sunglasses work (p321). Others are a bit more substantive, such as the statement on page 28 that in classical theory an electron spiralling down toward a nucleus will lose speed, or the assertion on page 29 that electron energies in the Bohr atom increase in direct proportion to the quantum number n. Neither these nor any of the other errors are significant enough to undermine interest in the book.
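
For reference, the Bohr-model result the reviewer alludes to is that electron energies scale as −1/n², approaching zero from below, rather than growing in proportion to n. A quick illustration:

```python
# The Bohr-model energy levels of hydrogen:
# E_n = -13.6 eV / n**2, approaching zero from below as n grows -
# not increasing in direct proportion to n.
RYDBERG_EV = 13.6

def bohr_energy_ev(n):
    return -RYDBERG_EV / n**2

for n in range(1, 6):
    print(n, round(bohr_energy_ev(n), 2))  # -13.6, -3.4, -1.51, -0.85, -0.54
```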

A more serious problem, perhaps, is that most of the book’s physics is irrelevant. Even when the physics is correct – which is most of the time – does it matter? The discussions of physics in the book, although earnest, vary from barely comprehensible to completely opaque. Who, even among physicist readers, will understand the discussion of spontaneously broken symmetry groups in the 25th “moment” or mixing angles in the 28th? It matters little whether the physics is wholly accurate or not. The book doesn’t teach any physics. At best, it gives some kind of flavour of what it is all about. For some readers, that may be enough.

Basically, Baggott’s book is about people – from Max Planck, Niels Bohr and Albert Einstein in quantum theory’s early years, to Ed Witten, Lee Smolin and Anton Zeilinger in more recent times. On people, he does a good job (even if, as noted above, the descriptions are a bit breathless in places). If you read this book, you will learn about Pauli’s temper, Gell-Mann’s erudition, Bohr’s paternalism, Heisenberg’s ambition, Schrödinger’s dalliances, Glashow’s zest, Feynman’s playfulness, Hawking’s triumphs, Einstein’s stubbornness, Aspect’s determination, Zeldovich’s White Horse scotch whisky bet and more. But with 410 pages of text, there is a lot of book to wade through for the pleasure of these engaging profiles.

Probing the cosmic-ray–climate link


Best known for its studies of the fundamental constituents of matter, the CERN particle-physics laboratory in Geneva is now also being used to study the climate. Researchers in the CLOUD collaboration have released the first results from their experiment designed to mimic conditions in the Earth’s atmosphere. By firing beams of particles from the lab’s Proton Synchrotron accelerator into a gas-filled chamber, they have discovered that cosmic rays could have a role to play in climate by enhancing the production of potentially cloud-seeding aerosols. Describing their findings in this week’s Nature, the team has also found that our current understanding of the chemistry of these aerosols is inadequate and that manmade pollution could have a larger role in their formation than previously thought.

Aerosols are tiny liquid or solid particles suspended in the atmosphere that can warm or cool the climate directly by absorbing or scattering radiation. They can also act as surfaces on which water vapour condenses, leading to the formation of cloud droplets and so tending to cool the planet. Around half of all cloud droplets are thought to form on aerosols that are injected directly into the atmosphere, such as dust particles, sea spray or pollution from the burning of biomass. The other 50% form on aerosols that are produced by the clustering of molecules of trace gases found in the atmosphere. However, it is not well understood exactly how this clustering takes place and precisely which kinds of molecules are involved.

There has also been much debate about the possible role of cosmic rays in the formation of these aerosols. Henrik Svensmark of the National Space Institute in Copenhagen and colleagues hypothesize that the ions formed as (charged) cosmic rays pass through the atmosphere act as a kind of glue that makes it easier for molecules to stick together and form aerosols. This hypothesis has proved controversial because it suggests a role for solar variation, as well as human emissions of greenhouse gases, in climate change. The idea is that the stronger the Sun’s magnetic field, the more cosmic rays are deflected away from the Earth, resulting in the formation of fewer clouds and so a warmer Earth, with weaker solar magnetism having the opposite effect.

Cloud in a canister

The CLOUD collaboration, an international group led by CERN’s Jasper Kirkby, was set up to settle the question of whether or not there is a link between cosmic rays and climate. The experiment, which has been running since the end of 2009, consists of a 3 m-diameter stainless steel chamber containing humidified ultra-pure air and selected trace gases, which is placed in the path of a charged-pion beam that simulates ionizing cosmic rays. By varying the concentrations of the trace gases, adjusting the temperature and humidity inside the chamber, turning the beam on and off, and then measuring the concentration of aerosols inside small samples removed from the chamber, Kirkby and colleagues can establish how changing atmospheric conditions affect the rate of aerosol production. The fact that they can do this very precisely and with extremely low levels of contaminants means that they can make much cleaner and more controlled measurements than is possible in the real atmosphere.

To their surprise, the researchers found that when simulating the atmosphere just a kilometre above the Earth’s surface, sulphuric acid, water and ammonia – the components generally believed to initiate aerosol production – were not on their own enough to generate the quantities of aerosols observed in the real atmosphere, falling short by a factor of up to a thousand, even when the pion beam was switched on. They conclude that other molecules must also play a role, and say that an organic compound or compounds are most likely.

As Kirkby explains, if the missing substance is manmade, then human pollution could be having a larger cooling effect than is currently believed (emissions of sulphur dioxide are already known to generate the sulphuric acid that is vital for aerosol production). Otherwise, says Kirkby, if the missing substance comes from a natural source, the finding could imply the existence of a new climate feedback mechanism (possibly, he adds, higher temperatures increasing organic emissions from trees).

However, when simulating the atmosphere higher up, the researchers found a stronger cosmic-ray effect. They discovered that at altitudes of 5 km or more, where temperatures are below –25 °C, sulphuric acid and water can readily form stable aerosols of a few nanometres across and that cosmic rays can increase the rate of aerosol production by a factor of 10 or more.

More detailed experiments required

Svensmark welcomes the new results, claiming that they confirm research carried out by his own group, including a study published earlier this year showing how an electron beam enhanced production of clusters inside a cloud chamber. He acknowledges that the link between cosmic rays and cloud formation will not be proved until aerosols that are large enough to act as condensation surfaces are studied in the lab, but believes that his group has already found strong evidence for the link in the form of significant negative correlations between cloud cover and solar storms (which reduce atmospheric ionization). “Of course, there are many things to explore,” he says, “but I think that the cosmic-ray/cloud-seeding hypothesis is converging with reality.”

“I think that the cosmic-ray/cloud-seeding hypothesis is converging with reality.” Henrik Svensmark

Jeffrey Pierce, an atmospheric scientist at Dalhousie University in Canada, however, is more cautious. Modelling carried out by his group shows that a 10–20% variation in atmospheric-ion concentrations, roughly the variation associated with solar storms or across a solar cycle, produces less than a 1% change in the concentration of cloud condensation nuclei, with the diminishing returns resulting from more aerosols having to share a given quantity of molecular raw material and aerosols merging with one another. “This change is very likely too small to explain the effect on clouds reported by Svensmark,” he says. “We must continue to explore other potential physical connections between cosmic rays and clouds.”
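
Pierce’s damping argument can be pictured with a deliberately crude toy model. This is purely illustrative – the power-law form and the exponent are invented here, and this is not his group’s actual calculation:

```python
# A crude toy, not Pierce's model: suppose the cloud-condensation-nuclei
# (CCN) concentration responds to the atmospheric ion concentration
# through a strongly damped power law,
#   CCN ~ (ion concentration)**alpha, with alpha << 1,
# standing in for the saturation effects described above (aerosols
# sharing a fixed supply of condensable vapour, and merging).
alpha = 0.05  # illustrative exponent only

for ion_change in (0.10, 0.20):  # 10% and 20% ion variations
    ccn_change = (1 + ion_change) ** alpha - 1
    print(f"{ion_change:.0%} ion change -> {ccn_change:.2%} CCN change")
# With alpha this small, even a 20% ion swing moves CCN by under 1%.
```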

Kirkby shares Pierce’s caution. He argues that CLOUD’s results “say nothing about cosmic-ray effects on clouds” because the aerosols produced in the experiment are far too small to seed clouds. But he adds that the collaboration will have some “interesting new results” to present later this year regarding the role of organic molecules in aerosol formation. “What is needed now to settle this question are precise, quantitative measurements,” he adds.

Jasper Kirkby describes the broad aims of the CLOUD experiment.

Did Einstein discover E = mc²?

Who discovered that E = mc²? It’s not as easy a question as you might think. Scientists ranging from James Clerk Maxwell and Max von Laue to a string of now-obscure early 20th-century physicists have been proposed as the true discoverers of the mass–energy equivalence now popularly credited to Einstein’s theory of special relativity. These claims have spawned headlines accusing Einstein of plagiarism, but many are spurious or barely supported. Yet two physicists have now shown that Einstein’s famous formula does have a complicated and somewhat ambiguous genesis – which has little to do with relativity.

One of the more plausible precursors to E = mc² is attributed to Fritz Hasenöhrl, a physics professor at the University of Vienna. In a 1904 paper Hasenöhrl clearly wrote down the equation E = (3/8)mc². Where did he get it from, and why is the constant of proportionality wrong? Stephen Boughn of Haverford College in Pennsylvania and Tony Rothman of Princeton University examine this question in a paper submitted to the arXiv preprint server.

Hasenöhrl’s name has a certain notoriety now, as he is commonly invoked by anti-Einstein cranks. His reputation as the man who really discovered E = mc² owes much to the efforts of the antisemitic and pro-Nazi physics Nobel laureate Philipp Lenard, who sought to separate Einstein’s name from the theory of relativity so that it was not seen as a product of “Jewish science”.

‘Leading Austrian physicist of his day’

Yet all this does Hasenöhrl a disservice. He was Ludwig Boltzmann’s student and successor at Vienna, and was lauded by Erwin Schrödinger among others. “Hasenöhrl was probably the leading Austrian physicist of his day”, Rothman told physicsworld.com. He might have achieved much more if he had not been killed in the First World War.

The relationship between energy and mass was already being widely discussed by the time Hasenöhrl considered the matter. Henri Poincaré had stated that electromagnetic radiation has a momentum and thus effectively a mass, according to E = mc². German physicist Max Abraham argued that a moving electron interacts with its own field, of energy E₀, to acquire an apparent mass given by E₀ = (3/4)mc². All this was based on classical electrodynamics, assuming an ether theory. “Hasenöhrl, Poincaré, Abraham and others suggested that there must be an inertial mass associated with electromagnetic energy, even though they may have disagreed on the constant of proportionality”, says Boughn.
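
That 3/4 is the classical electron’s notorious “4/3 problem” written the other way round – a standard textbook identity, noted here as an aside rather than anything from Boughn and Rothman’s paper:

```latex
% Abraham's electromagnetic mass and its inverse form:
m_{\mathrm{em}} = \frac{4}{3}\,\frac{E_0}{c^2}
\quad\Longleftrightarrow\quad
E_0 = \frac{3}{4}\,m_{\mathrm{em}}c^2,
\qquad \text{compared with the relativistic } E = mc^2.
```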

Robert Crease, a philosopher and historian of science at Stony Brook University in New York, agrees. “Historians often say that, had there been no Einstein, the community would have converged on special relativity shortly”, he says. “Events were pushing them kicking and screaming in that direction.” Boughn and Rothman’s work, he says, shows that Hasenöhrl was among those headed this way.

Hasenöhrl approached the problem by asking whether a black body emitting radiation changes in mass when it is moving relative to the observer. He calculated that the motion adds a mass of 3/8 times the radiant energy divided by c². The following year he corrected this factor to 3/4.

A different style of scientific paper

However, no-one has properly studied Hasenöhrl’s derivation to understand his reasoning or why the prefactor is wrong, claim Boughn and Rothman. That’s not easy, they admit. “The papers are by today’s standards presented in a cumbersome manner and are not free of error. The greatest hindrance is that they are written from an obsolete world view, which can only confuse the reader steeped in relativistic physics.” Even Enrico Fermi apparently did not bother to read Hasenöhrl’s papers properly before concluding wrongly that the discrepant 3/4 prefactor was due to the electron self-energy identified by Abraham.

“What Hasenöhrl really missed in his calculation was the idea that if the radiators in his cavity are emitting radiation, they must be losing mass, so his calculation wasn’t consistent”, says Rothman. “Nevertheless, he got half of it right. If he had merely said that E is proportional to m, history would probably have been kinder to him.”

But if that’s the case, where does relativity come into it? Actually, perhaps it doesn’t. While Einstein’s celebrated 1905 paper, “On the electrodynamics of moving bodies”, clearly laid down the foundations of relativity by abandoning the ether and making the speed of light invariant, his derivation of E = mc² did not depend on those assumptions. You can get the right answer with classical physics, says Rothman, all in an ether theory without c being either constant or the limiting speed. “Although Einstein begins relativistically, he approximates away all the relativistic bits, and you are left with what is basically a classical calculation.”

A controversial issue

Physicist Clifford Will of Washington University in St Louis, a specialist on relativity, considers the preprint “very interesting”. Boughn and Rothman “are well-regarded physicists”, he says, and as a result he “tend[s] to trust their analysis”. However, the controversies that have been previously aroused over the issue of priority perhaps account for some of the reluctance of historians of physics to comment when contacted by physicsworld.com.

Did Einstein know of Hasenöhrl’s work? “I can’t prove it, but I am reasonably certain that Einstein must have done, and just decided to do it better”, says Rothman. But failure to cite it was not inconsistent with the conventions of the time. In any event, Einstein asserted his priority for the mass–energy relationship when this was challenged by Johannes Stark (who credited it in 1907 to Max Planck). Both Hasenöhrl and Einstein were at the famous first Solvay conference in 1911, along with most of the other illustrious physicists of the time. “One can only imagine the conversations”, say Boughn and Rothman.

Rothman told physicsworld.com that he had run across Hasenöhrl’s name a number of times but with no real explanation as to what he did. “One of my old professors, E C G Sudarshan, once remarked that he gave Hasenöhrl credit for mass–energy equivalence. So around Christmas-time last year, I said to Steve, ‘why don’t we spend a couple hours after lunch one day looking at Hasenöhrl’s papers and see what he did wrong?’ Well, two hours turned into eight months, because the problem ended up being extremely difficult.”

Hunt for the Higgs enters endgame

Tantalizing hints that the Higgs boson is rearing its head at CERN’s Large Hadron Collider (LHC) have become slightly less thrilling than was previously thought, reported physicists on the opening day of the Lepton Photon 2011 conference taking place in Mumbai, India, this week. Possible sightings of the famous particle had caused a stir at last month’s European Physical Society meeting in Grenoble, when data presented from both the ATLAS and CMS experiments showed a small excess of events consistent with the production and decay of Higgs bosons with a relatively low mass of about 144 GeV.

Now, having almost doubled their datasets since the Grenoble meeting, the researchers continue to see a small excess in the low-mass region, but it is one with a lower statistical significance (about 2–2.5σ compared with 2.8σ). If the excess really is the genuine signature of a new particle, rather than a statistical fluctuation of similar-looking background events, physicists would have expected its significance to grow – not to shrink – as more proton–proton collisions were analysed.
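
The expectation that a genuine signal should strengthen follows from simple counting statistics. In the crudest approximation – a sketch, not the experiments’ full statistical machinery – significance goes as S/√B, and both the expected signal S and background B grow linearly with the amount of data:

```python
from math import sqrt

# Simplified counting-experiment intuition: significance ~ S / sqrt(B),
# where signal S and background B both scale linearly with dataset size L.

def significance(s_rate, b_rate, dataset_size):
    s = s_rate * dataset_size
    b = b_rate * dataset_size
    return s / sqrt(b)

# Hypothetical rates, chosen only for illustration:
print(significance(5, 100, 1.0))  # ~0.50 with one unit of data
print(significance(5, 100, 2.0))  # ~0.71: a real signal grows as sqrt(L)
# So an excess whose significance *drops* from 2.8 sigma to 2-2.5 sigma
# after the dataset nearly doubles looks more like a fluctuation.
```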

“The fact that we’re introducing data collected up until two weeks ago is scary and wonderful,” Vivek Sharma, who presented the results of the CMS experiment, told physicsworld.com. “We don’t know if the excess is a statistical fluctuation as it seems to persist, but the picture will become much clearer when we add data collected during the next two months.”

CMS spokesperson Guido Tonelli cautions that a real Higgs signal could become weaker, despite extra data being included. “Some people got a bit too excited about the Grenoble excess so this latest snapshot of the data may therefore appear a let-down, but it’s simply too early to say,” he told physicsworld.com on Friday. “This is a historical time for particle physics and we have to be absolutely sure before we draw any conclusions.”

Keep calm and carry on

As well as using more data, the new Higgs results are based on improved analysis routines, says deputy ATLAS physics coordinator Richard Hawkings. “With more time, we’ve done a better job of handling the background, which gives us increased sensitivity,” Hawkings told physicsworld.com. “There’s still plenty of room for the Higgs to hide at lower masses – we just need more data.”

“Some people are starting to think ‘What if the Higgs isn’t there?’” James Gillies, CERN communications chief

Discovering the Higgs boson would complete the Standard Model of particle physics, providing an explanation for how electroweak symmetry broke a fraction of a second after the Big Bang to leave certain elementary particles with the property of mass. Not discovering the Higgs, or something else that performs this symmetry-breaking role, would leave a major hole in physicists’ understanding of nature’s fundamental constituents.

“Some people are starting to think ‘What if the Higgs isn’t there?’,” CERN’s head of communications James Gillies admitted to physicsworld.com. “Our job is to stay calm and to get the message out that a non-discovery of the Higgs, if that plays out, is a big scientific discovery in itself.”

Narrowing the range

Apart from a couple of narrow windows at mid-range masses, the LHC has now pretty much excluded Higgs bosons with masses between 145 and 466 GeV, and finds no significant excess of events across the region 110–600 GeV. Direct searches at CERN’s previous Large Electron Positron (LEP) collider, which shut down in 2000, excluded a Higgs lighter than 114 GeV, while fits to precision measurements of electroweak Standard Model parameters disfavour a Higgs heavier than 180 GeV.

Meanwhile, the latest results from Higgs searches at the Tevatron collider at Fermilab near Chicago – which is due to close down at the end of September – were also shown at the Mumbai meeting; they exclude the regions 100–109 GeV and 156–177 GeV.
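
Putting the quoted ranges together is simple interval bookkeeping. The sketch below uses only the numbers in this article and ignores the unspecified “narrow windows” at mid-range masses:

```python
# Combine the exclusion ranges quoted above (GeV).
search_band = (110, 600)                 # region examined by the LHC
excluded = [(0, 114),                    # LEP: Higgs lighter than 114 GeV
            (145, 466),                  # LHC
            (100, 109), (156, 177)]      # Tevatron

def subtract(band, cuts):
    """Remove a list of closed intervals from a band."""
    remaining = [band]
    for lo, hi in cuts:
        nxt = []
        for a, b in remaining:
            if hi <= a or lo >= b:       # cut misses this piece entirely
                nxt.append((a, b))
            else:                        # keep whatever survives each side
                if a < lo:
                    nxt.append((a, lo))
                if hi < b:
                    nxt.append((hi, b))
        remaining = nxt
    return remaining

print(subtract(search_band, excluded))   # [(114, 145), (466, 600)]
```

Folding in the electroweak-fit preference for a Higgs lighter than about 180 GeV then leaves essentially the 114–145 GeV window – which is why attention is focused on the low-mass region.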

“The mass regions in which to search for the Higgs boson are narrowing,” says Aleandro Nisati, who presented the ATLAS results. “I’m a Higgs enthusiast and I’m getting very excited by this!”

With the LHC delivering data faster than the researchers can analyse them, physicists have decided against presenting an official combination of the ATLAS and CMS Higgs results until the end of this year’s data-taking. The LHC is due to cease proton–proton collisions in early November, switching to heavy-ion collisions for a month before closing down until early 2012.

“I’m not particularly fond of the Higgs hypothesis, which seems ad hoc; so if we don’t find the Higgs, I’d be quite happy.” Vivek Sharma, head of the CMS Higgs group

“As head of the CMS Higgs group I can’t hold ‘religious’ views on whether or not the Higgs exists,” says Sharma. “But I’m not particularly fond of the Higgs hypothesis, which seems ad hoc; so if we don’t find the Higgs, I’d be quite happy.”

CERN theorist John Ellis says there is still everything to play for. “The region that is currently surviving the LHC’s onslaught is precisely the favoured region for the Higgs based on previous electroweak fits,” he explains from a sofa in the CERN theory department’s common room. “With just two inverse femtobarns of data [more than 20 trillion collisions] recorded by each of CMS and ATLAS, a Standard Model Higgs boson has been excluded at the 95% confidence level between 130 and 600 GeV, demonstrating the need for new physics in the electroweak symmetry-breaking sector.”

Test match physics

A cricket ball at rest

By Margaret Harris

Late yesterday afternoon, I was pottering around with the BBC’s Test Match Special on in the background when something in the cricket commentary caught my attention. In-between the usual chatter about English bowling (good), Indian batting (bad) and the latest cakes delivered to the TMS commentary box (excellent), the conversation suddenly turned to physics – specifically, to the question of whether a ball could gain speed after nicking the edge of a bat.

The matter was raised after an Indian batsman, V V S Laxman, edged a delivery from Jimmy Anderson, an English bowler. The ball spurted off towards England’s captain, Andrew Strauss, who couldn’t quite catch it. After lamenting the missed opportunity, one of the TMS commentators suggested that Strauss might have mistimed his catch because the ball gained speed after glancing off Laxman’s bat. The commentators then spent the next several minutes talking a load of old rubbish about whether this was physically possible.

Then, shortly after 6 p.m., a secondary-school physics student, Laurence Copson, sent a message to the BBC’s online commentary team claiming that no, it wasn’t possible. “Removing all external forces on the ball, under no circumstance would the ball gain speed after a nick…as [the] bat would be slightly hitting the ball in the opposite direction,” he wrote. However, he did add a caveat: “What may be deceiving is if the batsmen swipes, catches an edge and then the ball gains top-spin and seems to reach the ground quicker than usual.”

This analysis was quickly contradicted by Rob, a university astrophysics student, who pointed out that Copson was neglecting both the elastic coefficients of ball and bat, and (more importantly) “the spin on the ball before it hits the bat which, if very fine, may accelerate the ball…in the direction of spin (like a car with its wheels spinning hitting the ground goes forward)”.

This seemed fair enough, but Rob’s parting shot – “this is the real world, external forces on the ball can’t be discounted!” – struck me as rather snide, so I decided to do some analysis of my own.
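
As a warm-up, here is a toy model of an edge. All the numbers are invented, the bat is treated as much heavier than the ball, and the “spin kick” term is the crudest possible stand-in for Rob’s spinning-wheel argument:

```python
from math import cos, sin, radians, hypot

# A toy 2D model of a ball nicking the edge of a bat, for illustration
# only. The collision is resolved along the contact normal with a
# restitution coefficient e; the bat is assumed much heavier than the
# ball; friction converts some of the ball's spin into tangential speed.

def nick(v_in, angle_deg, e, bat_speed, spin_surface_speed, mu):
    """Outgoing ball speed after an oblique impact. angle_deg is measured
    from the contact normal; bat_speed is the edge's speed along the
    normal towards the ball; mu is a crude friction coupling."""
    vn = v_in * cos(radians(angle_deg))        # component into the bat
    vt = v_in * sin(radians(angle_deg))        # component along the face
    vn_out = e * (vn + bat_speed) + bat_speed  # heavy-bat bounce
    vt_out = vt + mu * spin_surface_speed      # frictional spin kick
    return hypot(vn_out, vt_out)

v = 35.0  # m/s, a fast delivery
print(nick(v, 80, 0.6, 0.0, 0.0, 0.0))  # still bat, no spin: ~34.7 (slower)
print(nick(v, 80, 0.6, 2.0, 8.0, 0.5))  # moving edge plus spin: ~39.1 (faster)
```

With a still bat and no spin, the nicked ball comes off slightly slower, as Copson argued; give the edge some speed towards the ball and the ball some spin, and it can indeed come off faster.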


Vacuum chief looks to new horizons

IUVSTA provides a global platform for the promotion of, and education in, vacuum science, techniques and applications. As an international federation of 30 national vacuum societies, it represents some 15,000 scientists, engineers and technicians worldwide. J J Pireaux was elected president of IUVSTA in 2010 for a tenure of three years. He is a surface scientist who for the last decade has been director of the Interdisciplinary Laboratory of Electron Spectroscopy (LISE) at the University of Namur in Belgium.

What is IUVSTA’s role within the international vacuum community?

IUVSTA aims to stimulate international collaboration in vacuum science, including related multidisciplinary topics such as the solid–vacuum interface. It focuses on educational activities (organizing technical training courses and publishing educational material), on scientific activities (organizing thematic workshops and international conferences) and on awarding prizes and scholarships. Its actions are all related to one of the research themes covered by its divisions, namely applied surface science, electronic materials and processing, nanometre structures, plasma science and techniques, surface engineering, surface science, thin films, vacuum science and technology, and the newly created biointerfaces section. IUVSTA focuses a significant part of its activities in developing countries, helping to educate technicians and scientists in those areas of the world about the modern technologies relevant to materials science.

How would you assess the current state of vacuum science and technology?

It is obvious that the areas of research covered by IUVSTA are extremely broad, from very fundamental science to applied science to research and development. Indeed, these fields are much broader than just vacuum science and technology. This includes the development of methods of fabricating new materials, and the techniques to characterize them. Without vacuum science and technology, there would be no transistors, microprocessors, mobile phones, hybrid cars or alternative-energy devices. As it is so intimately connected to materials science, vacuum science and technology is a central pillar of modern academic and industrial research.

What do you see as the main challenges facing IUVSTA?

The challenges are to keep abreast of research and development, and to help focus enough energy and resources on fundamental research in the mid to long term while devoting expertise to new applications and production processes based on existing knowledge. A significant amount of quality research, together with the production of materials and devices, is now carried out in Asia and the Far East, so contact and communication have to be improved with these scientists and engineers.

How is IUVSTA addressing those challenges?

IUVSTA has set up a working scheme with a dual approach: working (or better, thinking) groups organized in different committees and a scientific body based on the work of its thematic divisions. The committees strive to adapt and improve the work of the union. For example, we recently revisited our educational materials. We also created a task force to initiate schemes for contacting non-member societies in developing countries. We have worked on the (still ongoing) creation of a new field of interest – namely a biointerfaces group – to address the emerging research and technologies of this area. We also try to ensure that we organize our scientific events across different parts of the globe.

How does IUVSTA plan to develop?

There are a few actions in the pipeline. First is attracting new membership from developing nations. We also aim to provide better expertise for training and learning via our educational materials and technical training courses, while there is a plan to refocus on excellence within the selection of our thematic workshops. Soon we hope to launch the World Transfer Program. This will be a new grant scheme to help early-career scientists work in another laboratory for a short period of time.

What aspects of basic science will impact most on vacuum science and technology over the next 20 years?

In my opinion, there are at least three areas of research, each with a significant fraction of very basic science, that have the potential to influence our everyday lives in the near future. The first is biotechnology, including genetic engineering. Physics, chemistry and mathematics will all have a significant role to play in the development of this field. The second is energy, including the materials and processes for solar-energy capture and conversion, fuel cells and the materials required for fission and fusion reactors. Finally, humankind will resume exploration and begin the exploitation of space. There are still huge problems to solve for those working in vacuum science and technology.
