
On the road to discovery

The walls are plastered with banks of computer screens. Most show bland-looking information, constantly streaming in, about the status of the accelerator or the sub-detector elements. But if you walk in while the collider is taking data, what stands out most is the least useful piece of hardware: a large, colourful flat-screen display set up high in front of the shift leader’s seat, where snapshots of the reconstructed tracks and energy deposits of particles produced in the collisions are continually broadcast in a 3D orgy of colours (see “What the pixels show” below).

The screen refreshes every few seconds with a new collision, so it is tough luck if you wanted to spend more time examining the last one: it will have been recorded in a data file somewhere, but the chances are you will never see it again. Millions of such “events” – the term used to describe particle collisions, as well as the resulting few-hundred-megabytes-worth of data – are logged every day in the huge on-site centre known as “Tier zero”, where tape robots handle and store the precious raw data, and thousands of CPUs perform the first full reconstructions of particle trajectories and energies. The one you paid attention to on screen was nothing special in itself – it was merely raised to ephemeral glory by a random choice of the event-display program.

Welcome to the control room of the Compact Muon Solenoid (CMS), one of the four experiments running at the CERN particle-physics lab just outside Geneva (figure 1). Here, and in three other command centres, researchers work shifts, spending busy days and sleepless nights in front of computer screens running monitoring programs that take the pulse of detector components, data-collection hardware and electronics of all kinds. Nights are better than days for data taking: everybody is more focused; phones do not ring; and data quietly accumulate in the storage hardware.

The valuable strings of zeroes and ones do not lie undisturbed for long. A formidable network of computers continuously copies the reconstructed data to different regional centres around the world, where huge parallel sets of CPUs reprocess the information, producing skimmed datasets that are then broadcast around the globe to their final destinations – an array of smaller regional centres. There, the data files get sucked in by avid programs that researchers deploy in long queues. Like people politely queuing, the programs silently await their turn to spin the data disks and produce distilled information for the analysers who designed them.

The gigantic effort of machines and brains that converts hydrogen atoms into violent proton–proton collisions, and then turns these into data-analysis graphs, is surprisingly seamless and remarkably fast. There is overt competition between the Large Hadron Collider (LHC) experiments and those at the Tevatron, the US’s proton–antiproton collider at Fermilab in Illinois, despite the latter’s imminent demise. Having run for 25 glorious years and due to be decommissioned at the end of this year, the Tevatron is unwilling to leave the scene to its younger and more powerful European counterpart just yet, and is trying to catch a first faint glimpse of the Higgs boson before its CERN rival discovers it. Even more, there is in-family competition between the two main LHC experiments: ATLAS and CMS. The challenge for these two large collaborations is not only to find the signal of a Higgs boson; perhaps even more exciting, they will also try to figure out which of the “new physics” scenarios already on the blackboards of theorists is the follow-up to the “Standard Model” of particle physics. The quest is on to fill the blank pages of our fascinating story of the infinitely small.

Fundamental matter

Through a century of investigations and a string of experimental observations, particle physicists have amassed a precise knowledge of how matter at the shortest length scales consists of a small number of elementary bodies acted upon by four forces (figure 2). We know that matter is composed of two dozen fermions – six leptons and 18 quarks – interacting by the exchange of a dozen bosons; the odd player is a single additional particle, the Higgs boson that characterizes the excitations of the vacuum in which particles live. The LHC can generate enough energy to “shake” this vacuum and so could finally observe those “Higgs vibrations” that were hypothesized more than 40 years ago but which have so far escaped experimental confirmation.

The LHC experiments have been designed with the explicit aim of finding that one missing block. Yet even with a Higgs boson, as pleasing and tidy as the Standard Model looks, it is necessarily incomplete. Like Newton’s theory of classical mechanics, which we now understand to be the small-speed approximation of Einstein’s theory of relativity, the Standard Model is believed to be what we call an effective theory – one that works well only in a restricted range of energies. The energy at which the theory starts to break down and new phenomena become evident is unknown, but theoretical arguments suggest that it is well within reach of the new accelerator.

Acting like speleologists confined in a small corner of a huge unknown cavern, researchers have scrupulously explored all the territory they could light up with the available technology; their fantasies of what lies beyond, however, have never ceased. The LHC is a powerful new lamp, capable of illuminating vast swaths of unexplored land. Where light is cast, we hope we will finally see features at odds with our low-energy effective theory. These new phenomena should provide us with the crucial hints we need in order to answer some nagging questions and widen our understanding of nature. For example, why is it that there are only three generations of matter fields, and not four, or five, or 10? Or is there, perhaps, a host of “supersymmetric” particles mirroring the ones that we already know about? Maybe these particles have not been discovered yet only because they are too massive and thus impossible to materialize with the collisions created by less-powerful accelerators. And is space–time really 4D, or can we produce particles that jump into other dimensions? These and other crucial questions can only find an experimental answer if we continue to widen our search.

Casting new light

The new lamp is now finally turned on, but it was not a painless start. The celebrations for the LHC’s start-up on 10 September 2008 were halted only eight days later by a fault in an electrical connection between two of its 1232 dipole magnets: the heat produced vaporized six tonnes of liquid helium, the blast from which created a shock wave that damaged a few hundred metres of the 27 km underground ring and forced a one-year delay in the accelerator programme. A total of 53 magnets had to be brought to the surface, repaired or replaced by spares, and reinstalled in the tunnel. A full investigation of the causes of the accident was carried out, and safety systems were designed to prevent similar catastrophes in the future.

Since the LHC restarted in November 2009 at an energy of 0.45 TeV per beam, it has been working impeccably; but a cautious ramping up in stored energy was still required. Bit by bit, and with great patience, the physicists and engineers who operate the accelerator have raised the energy of the circulating protons, as well as their number, while painstakingly searching for the best “tunes”: orbit parameters that avoid electromagnetic resonances of the beams with the machine that would otherwise cause instabilities and decrease the lifetime of the beams.

Although the quality of the beams has consistently exceeded expectations, only about a tenth of the design number of protons per beam has so far been circulated, and the total collision energy of 7 TeV now being produced is half the design goal of 14 TeV. Still, 7 TeV is more than three times what has been achieved at the Tevatron, allowing the investigation of large swaths of new, unexplored territory. The latest schedule is for the LHC to remain at 7 TeV until the end of 2012, when an upgrade to 8 TeV or more will be possible. Then, after a year-long shutdown in 2013 to finalize the commissioning of extra safety systems, the machine will gradually be brought up to its 14 TeV maximum.

Particle physicists need higher energy to see deeper, but they also need more intense light and observation time to resolve what they are illuminating more clearly; for them, energy and intensity – or time if you think about how long it takes to build up an intense signal – are two sides of the same coin. In November 2009, as news of the first collisions was broadcast worldwide, it was easy to find curious non-physicists asking what the outcome of the experiment had been, but hard to explain to them why it is likely to last at least another two decades. The signal of a new particle or unknown effect is not expected to spew out as soon as a switch is flicked and collisions take place: it will appear at first as a small departure of the observed data from what the models predict, and only the accumulation of more data will turn it into clear evidence of a new phenomenon.

What is more, any evidence of new physics had better be rock solid if it is to be published. Despite being more than 40 years old, the Standard Model has only required tweaking once: it was initially thought that neutrinos (the partners of the charged leptons e, μ and τ, see figure 2) were massless, but in 1998 long-awaited experimental proof showed that they have a small but non-zero mass. Since its conception in the early 1970s, the Standard Model has withstood such detailed and precise tests that no physicist is going to take a claim of its inability to describe an observed phenomenon lightly. Indeed, the thousands of researchers working on the LHC experiments will provide a deep level of internal scrutiny to any scientific result claiming new physics. By the time they let it be submitted for final publication, the chosen journal’s peer-review process will be like the bored glance of a ticket inspector in comparison.

The search for evidence

The typical modus operandi of the search for a new particle signal or a new phenomenon involves several steps. First, it must be verified that the detector’s response to known phenomena is well understood and matches what is expected from computer simulations. Test particles include electrons, muons, photons and neutrinos, as well as the collimated streams of particles, or “jets”, that originate from the emission of energetic quarks or gluons by colliding particles (see “What the pixels show” below). The processes being sought may produce a combination of these objects, and simulations are needed in order to accurately predict what signal they will yield in the detector.

The second step involves selecting events that contain the specific signature being searched for; for example, if the goal is to find a massive particle believed to yield a pair of quarks when it disintegrates, then one may choose to only analyse events where exactly two energetic jets are observed (again, see “What the pixels show” below). Third, researchers usually impose some fine-tuning on the signature requirements: events are chosen for which the produced particles were emitted orthogonally to the beam, or thereabouts, as these are the most interesting events. Particles that are produced at a small angle to the beam do not undergo much of a momentum change and are therefore more likely to originate from background processes. The point of this step is to discard physical processes that we already understand with the Standard Model (which in effect constitute an unwanted background noise in the search) while retaining as many events as possible that may contain the new particle signal. The less background that remains in the final sample, the more likely it is that some small anomaly caused by the new process will become visible.
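To make the selection logic concrete, here is a minimal sketch in Python. It is purely illustrative: the event structure, the cut values and the helper function are invented for this example and bear no relation to the actual CMS analysis software.

```python
# Minimal sketch of a dijet event selection (illustrative only). Each "event"
# is assumed to carry a list of jets with transverse momentum pt (in GeV) and
# pseudorapidity eta.

def select_dijet_events(events, min_pt=100.0, max_abs_eta=1.5):
    """Keep events with exactly two energetic, central jets."""
    selected = []
    for event in events:
        jets = [j for j in event["jets"]
                if j["pt"] > min_pt and abs(j["eta"]) < max_abs_eta]
        if len(jets) == 2:          # exactly two jets pass the cuts
            selected.append(event)
    return selected

# Toy usage: two fake events, only the first passes the selection.
events = [
    {"jets": [{"pt": 250.0, "eta": 0.3}, {"pt": 230.0, "eta": -0.5}]},
    {"jets": [{"pt": 40.0,  "eta": 2.8}, {"pt": 35.0,  "eta": -3.1}]},
]
print(len(select_dijet_events(events)))   # -> 1
```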

In the final step of a search for new physics one typically uses statistics to infer whether a signal is caused by a real effect or just some random variation. The observed size and features of the selected data are compared to two different hypotheses: the “null” and the “alternate”. According to the null hypothesis, the data result exclusively from known Standard Model processes; according to the alternate hypothesis, the data also contain a new particle signal. If there is a significant disagreement between the data and the null hypothesis, and a much better agreement with the alternate one, researchers then estimate the probability that such a phenomenon occurred by sheer chance. They usually convert this into units of “standard deviations” – commonly labelled by the Greek letter sigma (σ). A “3σ significance” effect would be produced by background fluctuations alone (i.e. without any signal contribution) only once or twice if the whole experiment were repeated a thousand times. Such an occurrence is said to constitute evidence for a possible signal, though a statistical fluctuation usually remains the most likely cause. A “5σ significance” instead describes effects where the chance of random occurrence is smaller than a few parts in tens of millions, and is agreed to be enough to claim the observation of a new particle or phenomenon.
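The conversion between “sigmas” and probabilities quoted above follows from the one-sided tail of a Gaussian distribution. A couple of lines of Python (shown only as an illustration, with scipy assumed to be available) reproduce the numbers:

```python
# Convert a significance in standard deviations into the one-sided probability
# of a background-only fluctuation, the convention used in particle physics.
from scipy.stats import norm

for n_sigma in (3, 5):
    p = norm.sf(n_sigma)            # one-sided Gaussian tail probability
    print(f"{n_sigma} sigma -> p = {p:.2e}")
# 3 sigma -> p = 1.35e-03  (roughly once or twice per thousand repetitions)
# 5 sigma -> p = 2.87e-07  (a few parts in ten million)
```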

Unfortunately for Nobel-hungry particle seekers, most of the searches result in no new signal: the data fit reasonably well to the null hypothesis; standard deviations remain close to zero; and that flight to Stockholm can be put on hold. Still, even a negative result contains useful information: the consolation prize is then a publication in a journal. From the level of disagreement of the data with the alternate hypothesis one can in fact extract and publish a “95% confidence-level upper limit” on the rate at which an LHC collision may create the new particle being sought. This means that when no signal is found, physicists conclude that either the particle does not exist (and its rate of creation in LHC collisions is thus zero) or that it is produced too rarely: too few of them would then be contained in the data for their presence to be detectable. These limits are a useful guide for theorists, whose models need to avoid predicting new particles that are produced in collisions at a rate already excluded by experimental searches.
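As a toy illustration of how such a limit arises, consider the simplest possible “counting experiment”: given an expected background and an observed number of events, the 95% upper limit is the largest signal yield that would still give so few events at least 5% of the time. The sketch below uses invented numbers and assumes scipy is available; real LHC limits are derived with far more sophisticated statistical methods.

```python
# Deliberately simplified counting-experiment upper limit: find the largest
# signal rate s for which observing n_obs events or fewer would still happen
# at least 5% of the time, given an expected background.
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit_95(n_obs, background):
    """95% CL upper limit on the signal yield of a Poisson counting experiment."""
    f = lambda s: poisson.cdf(n_obs, background + s) - 0.05
    return brentq(f, 0.0, 100.0 + 10.0 * n_obs)

# Toy numbers: 3 events observed with 2.5 expected from background.
print(f"95% CL upper limit on signal: {upper_limit_95(3, 2.5):.2f} events")
```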

The LHC is now casting light further into the unknown. If there is anything to discover out there, many are betting that it will be reported by the CMS and ATLAS collaborations this year. The excitement for these new searches is as great as ever, and the internal meetings of the two collaborations, where the status of ongoing analyses is presented, are packed with researchers constantly balancing their primeval scepticism with their childlike enthusiasm for anything that looks like a potential new find. Will the LHC finally prove its worth, 20 years after its original design? A description of the discoveries that might hit the news in the next few months is offered in the accompanying article (“Signatures of new physics” pp26–30 in print and digital editions – see below).

What the pixels show

This event display, shown on one of the many CMS control-room screens, gives researchers a few-second snapshot of a randomly chosen collision. Once trained in how to interpret these figures, a quick glance is all it takes to understand their significance. Here the event was a proton–proton collision. The figure on the left shows the signals recorded as they would be seen by an observer looking along the beam line running through the CMS detector. Charged particles follow curved paths in the magnetic field, and their tracks (green) are reconstructed from the ionization deposits they leave in an instrument called the Silicon Tracker. The most energetic particles are bent the least, and here they form part of two particle “jets”. The jets are easily identified by their high momentum perpendicular to the beams (red and blue, labelled “pT”).

Jets are the cleanest manifestation of quarks and gluons, the constituents of the colliding protons. Hard collisions consist of two constituents being mutually kicked in opposite directions. As quarks and gluons get expelled from the protons, they are slowed by the strong force that keeps the protons together. The resulting radiation gives rise to streams of collimated particles.

On the right, the cylindrical detector has been “unrolled on a plane” along the azimuthal angle φ to show where the particles hit. High-energy collisions are the most interesting ones and they often produce particles nearly perpendicular to the beams; the emission angle is linked to a quantity called pseudo-rapidity, η, which is close to zero for near-perpendicular angles. Particles detected at an angle close to the beam have large positive or negative values of η. The height of the coloured bars in the graph corresponds to the transverse energy, ET. The two clusters of cells labelled “Jet 1” and “Jet 2” originate from two quarks or gluons emitted “back to back”, i.e. in opposite φ directions, orthogonal to the incoming protons. The event displayed here is no random choice – it is one of the highest-energy collisions recorded by the CMS detector in 2010.
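For readers who want the relation between emission angle and pseudo-rapidity made explicit, it is η = −ln tan(θ/2), where θ is the polar angle measured from the beam line. The short illustrative snippet below evaluates it for a few angles.

```python
# Pseudorapidity as a function of the polar angle theta measured from the beam
# line: eta = -ln(tan(theta/2)). It is zero at 90 degrees (perpendicular to the
# beams) and grows rapidly as the direction approaches the beam line.
import math

def pseudorapidity(theta_deg):
    theta = math.radians(theta_deg)
    return -math.log(math.tan(theta / 2.0))

for angle in (90, 45, 10, 1):
    print(f"theta = {angle:3d} deg -> eta = {pseudorapidity(angle):6.2f}")
# theta = 90 deg -> eta = 0.00; theta = 1 deg -> eta is about 4.7
```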

• The accompanying feature by Tommaso Dorigo, “Signatures of new physics”, which examines the exciting discoveries in store at CERN, appears in the digital issue of Physics World.

Nature’s building blocks brought to life

These colourful shapes are part of a project launched last week to create a periodic table of shapes to do for geometry what Dmitri Mendeleev did for chemistry in the 19th century. The three-year project could result in a useful resource for both mathematicians and theoretical physicists to aid calculations in a variety of fields from number theory to atomic physics. But those hoping to buy the wall chart may need to invest in a bigger house as there are likely to be thousands of these basic building blocks from which all other shapes can be formed.

“The periodic table is one of the most important tools in chemistry. It lists the atoms from which everything else is made, and explains their chemical properties,” says project leader Alessio Corti, based at Imperial College in the UK. “Our work aims to do the same thing for three-, four- and five-dimensional shapes – to create a directory that lists all the geometric building blocks and breaks down each one’s properties using relatively simple equations.”

The scientists are looking for shapes, known as “Fano varieties”, which are basic building blocks and cannot be broken down into simpler shapes. They find Fano varieties by looking for solutions to equations drawn from string theory, a theory that seeks to unify quantum mechanics with gravity. String theory assumes that in addition to space and time there are other hidden dimensions, and that particles can be represented by vibrations along tiny strings that fill the entire universe.

According to the researchers, physicists can study these shapes to visualize features such as Einstein’s space–time or subatomic particles. For the shapes to actually represent practical solutions, however, researchers must look at slices of the Fano varieties known as Calabi–Yau 3-folds. “These Calabi–Yau 3-folds give possible shapes of the curled-up extra dimensions of our universe,” explains Tom Coates, another member of the Imperial team.

Coates says that the periodic table could also help in the field of robotics. Robots operate in ever higher-dimensional spaces as they develop more life-like movements, and robot engineers could use the new geometries discovered for the project to help them develop the increasingly complicated algorithms involved in robotic motion.

The periodic table project is an international collaboration between scientists based in London, Moscow, Tokyo and Sydney, led by Corti at Imperial College London and Vasily Golyshev in Moscow. Given the large time differences involved, the team communicates using social media including a project blog, instant messaging and a Twitter feed. Team member Al Kasprzyk, based at the University of Sydney, says, “These tools are essential. With some of us working in Sydney while others are asleep in London, blogging is an easy way to exchange ideas and keep up to speed.”

Telescope team plans to track the whole sky

A European project that will allow astrophysical events to be tracked across the whole sky for the first time has begun and is already recruiting its personnel. Funded with €3m from the European Research Council over the next five years, the 4 Pi Sky project will use a combination of ground- and space-based telescopes to study rare events such as colliding neutron stars and exploding supernovae.

4 Pi Sky combines three separate terrestrial telescope systems. One is the Low Frequency Array (LOFAR), consisting of some 10,000 dipole antennas across Europe, which will be used to track objects at a frequency range of about 30–240 MHz. The others are the MeerKAT array in South Africa and the Australian Square Kilometre Array Pathfinder (ASKAP) in Western Australia, which will be used to track phenomena at higher frequencies of about 1 GHz.

Linking telescopes

When combined, the telescopes will be able to monitor the whole sky, as scientists will be able to link from telescope to telescope to follow transient phenomena as the Earth rotates. Using this technique, researchers are expected to find and track thousands of new events that would have previously been missed.

“This is really a project that will try and co-ordinate what we can do with these telescopes in terms of optimizing their performance and getting the software up and running,” says Rob Fender, an astronomer at the University of Southampton in the UK, who is leader of the 4 Pi Sky project. He adds that it will take at least another three years until all three telescopes come online and are fully networked, owing to the construction timelines for MeerKAT and ASKAP. Part of the network, however, will be working on LOFAR this year.

In addition to the ground-based telescopes, 4 Pi Sky researchers will also be able to use the Japanese Space Agency’s Monitor of All-sky X-ray Image (MAXI) telescope aboard the International Space Station, which studies events at energies between 0.5 and 30 keV. “A lot of transient phenomena that we study with the radio telescopes has an X-ray component and we will be able to study that with MAXI,” says Fender.

As well as studying the birth of black holes, the project might even help researchers to identify the sources of gravitational waves. Although 4 Pi Sky will not be able to detect such waves directly, if these ripples in space–time are spotted by another facility, then astronomers will be able to use 4 Pi Sky to see if the production of the waves was accompanied by an explosion of radio signals.

SKA on the horizon

Meanwhile, two rival teams of astronomers in South Africa and from Australia and New Zealand are continuing to compete for the right to host the Square Kilometre Array (SKA), which will combine the signals from thousands of small antennas spread over a total collecting area of approximately one square kilometre. Both MeerKAT and ASKAP are being built as technology demonstrators for SKA, which will be a radio telescope capable of extremely high sensitivity and angular resolution.

In the past week the South African campaign has claimed two major breakthroughs in its bid to host the telescope array. First, researchers have combined data from two radio telescopes at separate locations using a technique known as “very long baseline interferometry”, which had previously required assistance from foreign countries. Second, South African computer engineers have finished building the computer hardware, ROACH 2, that they believe will provide the data-processing capabilities for SKA.

“In 2011 South Africa in conjunction with its eight African-partner countries bidding communally for the SKA will pull out all the stops to show the world that Africa is the future as far as science and technology are concerned,” says Bernie Fanaroff, director of South Africa’s SKA Project. A final decision on who is to host SKA will be made in 2012 based on a number of criteria, including operating and infrastructure costs and the levels of interference from sources such as mobile phones and televisions.

Physicists couple oscillating ions

Two independent groups of physicists have built the first coupled harmonic oscillators using ions in two separate traps. Both teams were able to show that as little as one quantum of energy can be exchanged between the two oscillators. This ability to control the exchange of energy between ions at the quantum level could help in the creation of quantum computers that operate much faster than their classical counterparts.

The work has been carried out by researchers at the National Institute of Standards and Technology (NIST) in Colorado, US, and at the University of Innsbruck in Austria, who have coupled two quantum simple harmonic oscillators (SHOs) by trapping ions in adjacent parabolic potential wells. SHOs are ubiquitous in nature, playing a key role in everything from mechanical clocks to the transmission of light.

The NIST team – which includes Kenton Brown and David Wineland – carried out its experiment using two positively charged beryllium ions in separate wells about 40 µm apart. With both ions initially oscillating within their potential wells, the researchers fired laser beams at the ions, slowing their vibrational motion. The lasers were then directed at only one ion, further slowing it down until it was in its lowest vibrational quantum state. As both ions are positively charged, they naturally repel each other and it is this repulsive force that couples the motions of the two ions.

The team then “switched on” the interaction between the beryllium ions by adjusting the potential wells so that both ions had the same characteristic frequency of vibration. When this occurs, the ions can exchange quanta of energy with the slow ion speeding up and the fast ion slowing down. After a few hundred microseconds, most of the energy is transferred, and the energy flow reverses. This back-and-forth exchange of energy could continue indefinitely, but is eventually “washed out” as the ions absorb heat from their surroundings.
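The back-and-forth exchange has a simple classical analogue that may help with intuition: two identical oscillators joined by a weak coupling swap their energy at a slow “beat” frequency set by the coupling strength. The sketch below is not the quantum calculation the teams performed; it is an illustrative classical model with arbitrary units, assuming numpy is available.

```python
# Classical analogue of the ions' energy exchange: two identical oscillators
# with a weak mutual coupling. With all the energy initially in oscillator 1,
# the normal-mode solution makes the energy slosh back and forth at the beat
# frequency (arbitrary units throughout).
import numpy as np

omega0 = 1.0                                  # natural frequency of each well
kappa = 0.02                                  # weak coupling strength
omega_s = omega0                              # symmetric normal mode
omega_a = np.sqrt(omega0**2 + 2.0 * kappa)    # antisymmetric normal mode
beat = omega_a - omega_s                      # energy-exchange (beat) frequency

# Slowly varying share of the total energy held by each oscillator
# (weak-coupling limit of the normal-mode solution).
t = np.linspace(0.0, 2.0 * np.pi / beat, 9)
E1 = np.cos(0.5 * beat * t) ** 2
E2 = np.sin(0.5 * beat * t) ** 2

for ti, e1, e2 in zip(t, E1, E2):
    print(f"t = {ti:8.1f}   E1 = {e1:.2f}   E2 = {e2:.2f}")
# The energy starts in oscillator 1, is fully transferred at half the beat
# period, then flows back - just as observed with the two trapped ions.
```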

The team studied the energy flow by taking “snapshots” of the ions by firing a laser at them and measuring the amount of light they absorb, which depends on their motion. While this destroys the ions’ fragile quantum states, the team can map out the energy exchange by continually repeating the measurement, taking each snapshot at different times in the energy-exchange cycle.

Meanwhile, over at Innsbruck, Rainer Blatt and colleagues have performed a similar experiment, but this time using trapped calcium ions separated by about 54 µm. The Austrian group has, however, found that the coupling strength can be increased significantly if each potential well contains more than one ion. Indeed, the interaction strength increased by a factor of seven when both wells contained three ions. The reason, according to Blatt, is simply the greater amount of charge in each well.

Transforming information

As the exchange of energy is equivalent to the exchange of quantum information, such coupled oscillators could potentially be used to transfer information encoded in a photon to one trapped ion and then on to another ion. Indeed, Brown believes that the success of his group’s experiment suggests that information could also be transferred from an ion to a mechanical oscillator such as a tiny cantilever. This could be very useful because it would allow several different types of quantum system to be integrated within a quantum computer.

Trapped ions are an attractive technology for quantum computers because they allow quantum information to be held for a relatively long time. An important downside of ions, however, is that quantum processing is currently done using two ions interacting in the same well, which are then separated to interact with other ions. “Separating ions confined in one trap into two separate traps is costly because this process causes heating which must be counteracted with subsequent re-cooling,” Wineland told physicsworld.com.

The ability to couple separate wells could, however, do away with the need to transport ions, allowing arrays of coupled wells to be used to create quantum logic gates and other processing devices. Indeed, Blatt told physicsworld.com that his team is currently working on an array of coupled ion traps that could work in this way. Meanwhile in Colorado, the NIST team plans to boost the rate at which quantum information is exchanged between traps by shrinking the distance between the ions.

The results are both reported in Nature.

Dots deliver full-colour display

Researchers in South Korea and the UK say that they have produced the first large-area, full-colour display based on red, green and blue quantum dots. The technology could spur the launch of colour TV screens combining a vast colour range with an incredibly small pixel size.

Both attributes stem from the intrinsic properties of the quantum dots, which despite being just a few nanometres in diameter comprise several thousand atoms that form tiny compound-semiconductor crystals. When electrons inside the quantum dots recombine with their positively charged counterparts, known as holes, it can result in the emission of a narrow band of light.

Making a colour display with the dots requires their deposition onto a substrate in a well-controlled manner. Monochrome displays can be made by spin-coating – dropping a dot-containing solution onto a substrate and spinning this around to yield a thin film of material. This approach is unsuitable for making a full-colour display, however, because it would cross-contaminate red, green and blue pixels.

Patterned rubber stamps

In this new work, a team led by Tae-Ho Kim at the Samsung Advanced Institute of Technology in South Korea overcame this issue by spin-coating red, green and blue dots onto separate “donor” substrates, before transferring them in turn to the display with a patterned rubber stamp.

To make a 4-inch diameter, 320 × 240 pixel display, a pair of hole-transporting polymers was deposited onto a piece of glass coated in indium tin oxide. Red, green and blue dots were stamped onto this structure, which was then coated in titanium dioxide, a material with good electron-transporting properties.

Adding a thin-film transistor array allowed a different voltage to be applied to each of the 46 × 96 µm pixels. Increasing this voltage increases the brightness of the pixel, because more electrons and holes are driven into the dots, where they recombine to emit light.

Higher-resolution displays could be possible by reducing pixel size. “We showed an array of narrow quantum dot stripes of 400 nm width [in our paper], which indicates the feasibility of nano-printing quantum dots with extremely high resolution,” says Byoung Lyong Choi, one of the Samsung researchers. This demonstrates that the Korean team’s technology is more than capable of producing displays at the highest resolution that is useful for viewing with the naked eye, which cannot resolve pixels smaller than about 50 µm.

Improving efficiencies

One downside of the Korean display is its low efficiency – just a few lumens per watt, which is roughly half that of an incandescent bulb. But Choi says that far higher efficiencies should be possible by modifying their quantum dots. Samsung will continue to develop the technology, which it is trying to patent, before deciding whether to manufacture displays with this approach. “Transfer-printing can be scaled up to roll-to-roll systems for huge size printing onto flat or curved surfaces, such as rolled plastic sheets,” explains Choi.

John Rogers, a researcher at the University of Illinois at Urbana-Champaign, is very impressed by the Korean effort: “It is, by far, the most complete demonstration of this technology.” However, Rogers also believes that the technology will face stiff opposition in the commercial market. “The entrenched technology – backlit liquid-crystal displays – continues to get better and better, and cheaper and cheaper.”

The Korean team reports its work in the latest edition of Nature Photonics.

Will the LHC find supersymmetry?

The first results on supersymmetry from the Large Hadron Collider (LHC) have been analysed by physicists and some are suggesting that the theory may be in trouble. Data from proton collisions in both the Compact Muon Solenoid (CMS) and ATLAS experiments have shown no evidence for supersymmetric particles – or sparticles – that are predicted by this extension to the Standard Model of particle physics.

Supersymmetry (or SUSY) is an attractive concept because it offers a solution to the “hierarchy problem” of particle physics, provides a way of unifying the strong and electroweak forces, and even contains a dark-matter particle. An important result of the theory is that every known particle has at least one superpartner particle – or “sparticle”. The familiar neutrino, for example, is partnered with the yet-to-be discovered sneutrino. These sparticles are expected to have masses of about one teraelectronvolt (TeV), which means that they should be created in the LHC.

In January the CMS collaboration reported its search for the superpartners of quarks and gluons, called squarks and gluinos, in the detector. If these heavy sparticles are produced in the proton–proton collisions, they are expected to decay to quarks and gluons as well as a relatively light, stable neutralino.

SUSY’s answer to dark matter

The quarks and gluons spend the energy that was bound up in the sparticle’s mass by creating a cascade of other particles, forming jets in the detector. But neutralinos are supersymmetry’s answer to the universe’s invisible mass, called dark matter. They escape the detector unseen, their presence deduced only through “missing energy” in the detector.
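The “missing energy” that betrays an escaping neutralino is usually quantified as missing transverse energy: the magnitude of the negative vector sum of the transverse momenta of everything the detector did record. A minimal illustration follows; the jet values are invented for the example.

```python
# Illustrative only: missing transverse energy from a list of reconstructed
# objects, each given as (pt, phi) with pt in GeV and phi the azimuthal angle.
import math

def missing_et(objects):
    """Magnitude of the negative vector sum of the transverse momenta."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py)

# Two hard jets that do not quite balance each other: the imbalance is the
# kind of signature an escaping, invisible particle would leave.
jets = [(350.0, 0.1), (260.0, 0.1 + math.pi)]
print(f"missing ET = {missing_et(jets):.0f} GeV")   # -> about 90 GeV
```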

CMS physicists went hunting for SUSY in their collision data by looking for two or more of these jets that coincide with missing energy. Unfortunately, the number of collisions that met these conditions was no greater than expected from Standard Model physics alone. As a result, the collaboration could only report new limits on a variation of SUSY called the constrained minimal supersymmetric standard model (CMSSM) with minimal supergravity (mSUGRA).

ATLAS collaborators chose a different possible decay for the hypothetical sparticle; they searched for an electron or its heavier cousin, the muon, appearing at the same time as a jet and missing energy. ATLAS researchers saw fewer events that matched their search and so could set higher limits, ruling out gluino masses below 700 GeV, assuming a CMSSM and mSUGRA model in which the squark and gluino masses are equal.

Good or bad omens?

Many believe that these limits are not bad omens for SUSY. The most general versions of the theory have more than a hundred variables, so these subtheories simplify the idea to a point where it can make predictions about particle interactions. “It’s just a way to compare with the previous experiments,” says CMS physicist Roberto Rossin of the University of California, Santa Barbara. “No-one really believes that this is the model that nature chose.”

ATLAS collaborator Amir Farbin, of the University of Texas, Arlington, calls these first results an “appetiser” for the SUSY searches to be discussed at the March Moriond conferences in La Thuile, Italy. “At this point, we’re not really ruling out any theories,” he says.


Still, CMS scientists Tommaso Dorigo of the National Institute of Nuclear Physics in Padova, Italy, and Alessandro Strumia of the National Institute of Chemical Physics and Biophysics in Tallinn, Estonia, say that there is some cause for concern. Supersymmetry must “break”, making the sparticles much heavier than their partners. It stands to reason that this should happen at the same energy as electroweak symmetry breaking – the point where the weak force carriers become massive while the photon stays massless.

This is thought to occur in the vicinity of 250 GeV. “But the LHC results now tell us that supersymmetric particles must be somehow above the weak scale,” says Strumia.

Dorigo notes that although SUSY can allow for high sparticle masses, its main benefit of solving the hierarchy problem is more “natural” for masses near the electroweak scale. The hierarchy problem involves virtual particles driving up the mass of the Higgs boson. While supersymmetric particles can cancel this effect, the models become very complex if the sparticles are too massive.

John Ellis of CERN and King’s College London disagrees that the LHC results cause any new problems for supersymmetry. Because the LHC collides strongly interacting quarks and gluons inside the protons, it can most easily produce their strongly interacting counterparts, the squarks and gluinos. However, in many models the supersymmetric partners of the electrons, muons and photons are lighter, and their masses could still be near the electroweak scale, he says.

Benchmark searches

CMS collaborator Konstantin Matchev of the University of Florida, Gainesville, explains that new physics was expected between 1 and 3 TeV – a range that the LHC experiments have hardly begun to explore. In particular, he notes that of the 14 “benchmark” searches for supersymmetry laid out by CMS collaborators, these early data have only tested the first two.

“In three years, if we have covered all these benchmark points, then we can say the prospect doesn’t look good anymore. For now it’s just the beginning,” says Matchev.

But not everyone is optimistic about discovering SUSY. “We will get in a crisis, I think, in a few years,” Dorigo predicts, sceptical of the theory because it introduces so many new particles of which data presently show “no hints”. However, even though he would lose a $1000 bet, he says that he would still be among the first celebrating if the LHC does turn up sparticles.

The CMS and ATLAS results are available on arXiv.

Dating the universe

Carl Sagan famously said that if you wanted to make an apple pie from scratch, you first had to create the universe. The deceptively simple title of David Weintraub’s latest book invokes a very similar philosophy: if you really want to know the age of the universe, then you too have to start from scratch. How Old is the Universe? places the question in its proper historical context and explains what has gone into answering it. Although other astronomy books have explained some of the methods here, Weintraub’s brings everything together into one narrative. Such an approach is sorely needed, as the universe’s age lies at the heart of modern cosmology.

In the first chapter, some main sources of evidence – such as meteorite samples, globular clusters, Cepheid variables and white dwarfs – are introduced and briefly explained. This gives a nice overview, before each item is discussed in depth later.

Weintraub’s narrative begins in the 17th century with James Ussher, an Irish bishop who calculated the age of the Earth using biblical chronology (concluding that it began in 4004 BC). Today, his methods are often ridiculed, particularly with the resurgence of young-Earth creationism. However, Weintraub shows that he was neither the first nor the last person to use the Bible to date the Earth. Moreover, Ussher did the best that he could with the information available to him. Surprisingly, perhaps, the Copernican revolution two centuries before had contributed little to the 17th-century understanding of the universe’s age. But by attempting to calculate the Earth’s age, Ussher and his contemporaries were at least on the right lines: if you could know the age of the Earth with precision, it would serve as a vital “stepping stone”, a lower bound on the age of the universe.

The next figure to make an impact was Johannes Kepler, who, in his mathematically rigorous fashion, proposed a more astrophysical approach. But even he came up with a figure of 3993 BC. Surely Sir Isaac Newton could do better? No. Newton’s approach was unscientific, and the figure that he arrived at was close to those put forward by scientists, bishops, rabbis and other great thinkers of the time. In this entertaining way, Weintraub shows that scholarly consensus does not always equate to fact – an important lesson for all scientists. It was not until the discovery of radioactivity at the end of the 19th century that the Earth’s age could be estimated accurately.

The Sun provided the next stumbling-block. Weintraub shows how scientists in the 19th and early 20th centuries did their best to explain the Sun’s age, especially in relation to the Earth’s. Geological and evolutionary evidence suggested that the Earth had been around for at least a billion years, perhaps longer. But if the Earth was so old, countered the physicists, how could the Sun have remained luminous for so long without consuming all of its fuel? The big gap in their knowledge was nuclear fusion, and the “old Earth, young Sun” paradox could not be resolved until that particular breakthrough in physics had occurred.

The major headaches, however, were stars and star clusters. As Weintraub says, “Not all stars are the same.” He does a good job of conveying the interrelated problems of estimating a star’s apparent brightness, distance and luminosity. Some scientists found that their calculations were erroneous – again, because “not all stars are the same”. But some of these dissimilarities led to opportunities: the intrinsic brightness of Cepheid variables, for example, was found to be related to their period of variation. Hence, by combining observations of Cepheid variables in a galaxy with its redshifted spectrum, astronomers could measure values for the Hubble constant, and thus the universe’s expansion rate. The rate of white-dwarf formation was also calculated, so that observations of their total number gave another way of placing an age on the universe. The book describes all of this in detail but, curiously, the proof copy that I read never mentions the terms “standard candle” or “distance ladder” when discussing objects with well-established intrinsic brightnesses that are used to measure the universe’s size (and hence its age). These two terms crop up time and time again in astrophysics, but possibly their omission will be remedied in the final version.
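The connection between the measured expansion rate and an age can be made concrete with a back-of-the-envelope figure: ignoring the changing expansion history that a full cosmological model (and the book) accounts for, the “Hubble time” 1/H0 already lands close to the modern answer. A tiny illustrative calculation, with the value of H0 assumed for the example:

```python
# Back-of-the-envelope age estimate: the Hubble time 1/H0, ignoring the
# detailed expansion history of the universe.
H0 = 70.0                        # Hubble constant in km/s/Mpc (assumed value)
km_per_Mpc = 3.086e19            # kilometres in one megaparsec
seconds_per_Gyr = 3.156e16       # seconds in a billion years

hubble_time_Gyr = km_per_Mpc / H0 / seconds_per_Gyr
print(f"1/H0 = {hubble_time_Gyr:.1f} billion years")   # about 14 billion years
```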

The next challenge came with galaxies. As late as the early 20th century, no-one knew what they were. It would take the efforts of many scientists to work out their dynamics, structure, composition and nature. This work paid off: investigation of galactic spectra would produce a paradigm shift in our view of the universe, leading scientists to conclude that it is expanding – and far bigger and older than previously thought.

As an astrophysics graduate and as someone who writes astronomy articles, I found How Old is the Universe? to be a satisfying, necessary and timely book. It should appeal to anyone wanting to learn about cosmology and astronomy in its broad context, but it would be especially good for astrophysics undergraduates because it assumes some physics knowledge, and has a good smattering of graphs, spectrograms, diagrams and images. University departments should ensure that they have some copies to hand.

In the book’s final section, Weintraub brings the journey right up to date by discussing supernovae, the cosmic microwave background, dark matter, dark energy, the Big Bang, inflation and quantum physics. He pulls together all of the relevant facets of scientific investigation from a variety of different fields, including geology, palaeontology, astronomy and physics, to ultimately arrive at the current best estimate for the universe’s age: 13.7 billion years (give or take 100 million years or so). We often take this figure for granted, along with the fact that it is known to within 1%. What this book shows is how deduction, dedication, care and persistence in many fields have led to the figure we have today. It is the story of a scientific triumph.

Uncertainty hits SESAME project

A major scientific project designed to foster collaboration between countries in the Middle East is undergoing a period of difficulty following growing unrest in the region. The Synchrotron-light for Experimental Science and Applications in the Middle East (SESAME) is currently under construction and due to start up in 2015, but the toppling of governments across the region is putting a strain on the ability to guarantee funding to complete the synchrotron.

SESAME is a project that aims to create the region’s first major international research centre by building a synchrotron light source in Jordan. The founding members of SESAME are Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority (PA) and Turkey. The facility would produce X-rays that can be used to study materials in a range of disciplines from biology to condensed-matter physics.

The revolution in Egypt, which led to its president, Hosni Mubarak, stepping down on 11 February, and growing anti-government protests in Iran and Bahrain have put the project on an uncertain footing. “In the short term it is very worrying,” Chris Llewellyn Smith, president of the SESAME council, told physicsworld.com.

‘Moment of great uncertainty’

Although Llewellyn Smith says the unrest has yet to have a direct impact on the project, the former director-general of CERN is working with the SESAME members to put together a financial package that would guarantee the roughly $35m that is needed to complete and open the facility by 2015. “[The package] is now in jeopardy as ministers of member countries are changing,” says Llewellyn Smith. “It is a moment of great uncertainty for the project.”

Llewellyn Smith says that he was discussing SESAME with the Egyptian science minister, Hany Helal, only on Saturday, but yesterday [Tuesday] Helal was removed from office by the military-led government. Egypt was expected to contribute about $5m of the $35m, but no-one can now be sure what attitude a prospective new government will take towards SESAME. Similarly, just last week the PA government resigned en masse, and the growing unrest in Bahrain is adding to the uncertainty surrounding SESAME’s members.

However, Llewellyn Smith, who is currently in Washington, DC to discuss a possible US contribution to SESAME, says that he is “optimistic” that in the long term the facility will still be able to open by 2015. He also adds that the situation in the region could even turn out to be positive for the project and science in the region. “With more democratic governments, maybe we can get renewed and greater support for SESAME,” he says.

Think Canada

By Michael Banks in Washington, DC

The temperatures have been mild here in Washington, DC, for the 2011 American Association for the Advancement of Science (AAAS) meeting. But according to the latest forecast the snow is on its way just as delegates are heading off home.

The AAAS meeting was jam-packed with interesting talks. We had sessions on the search for exoplanets, storing antimatter, first physics at the Large Hadron Collider, an outline of the MESSENGER mission to Mercury, detecting traces of nuclear materials, the effect a nuclear war could have on the climate, and talks on adaptive optics. Even this breathless list represents only a tiny fraction of the complete programme of the 2011 AAAS conference.

The thing that caught my attention when I first entered the Washington Convention Center, which hosted the 2011 AAAS meeting, was the big red “Think Canada” badges some people were wearing. I was slightly confused at first, but their purpose quickly became apparent: to publicise the next AAAS conference.

That meeting will be in Vancouver, Canada, from 16 to 20 February 2012, so see you there (together with my pair of big red mittens).

Global challenges for science

Chris Llewellyn Smith speaking to delegates

By Michael Banks in Washington, DC

The 2011 American Association for the Advancement of Science meeting in Washington, DC had a slight winding-down feel to it today as the placards were being removed and the exhibitors packed their stalls.

But there was still a morning of talks to be had. So I headed to a session entitled “Can global science solve global challenges?” where Chris Llewellyn Smith spoke about past and future global science projects. He is an ideal speaker for the topic, given that he has been director-general of the CERN particle-physics lab and also served as chairman of the ITER council – the experimental fusion facility currently being constructed in Cadarache, France.

Llewellyn Smith went through some of the successes of global collaboration and consensus such as the eradication of smallpox in 1979 and the banning of CFCs in 1987, which successfully reduced the ozone hole.

The particle physicist also named a few examples of global collaborations that he felt had failed. These included the scientists who warned that a tsunami could occur in the Indian Ocean: the tsunami struck in 2004, killing 230,000 people, and Llewellyn Smith says that lives could have been saved if the warnings from scientists around the world had been heeded. He also cites the communication of climate change as a challenging area, one damaged by scientists “not keeping objectivity and turning to advocacy”.

Llewellyn Smith is now calling for a global endeavour to be set up to apply carbon capture and storage (CCS) to coal-fired power stations, which would include working out whether the technique is feasible at all and, if so, the best way to store carbon dioxide underground. “CCS is going to be crucial if we don’t stop burning coal,” he says.

Indeed, Llewellyn Smith is involved with a Royal Society report into global science, which will be released on 29 March. He didn’t want to give the report’s conclusions away but says the report will concern “where science is happening and who is working with who”. There will be no specific recommendations made in the report but “we hope that it will start a debate” he says.
