
CERN points to early September start-up date for LHC

The Large Hadron Collider — the biggest experiment in particle physics — will start in earnest on 10 September, according to officials at the European laboratory CERN. The officials say that on this date engineers will make the first attempt to circulate proton beams around the 27 km-long accelerator.

With all eight sectors of the LHC now cooled to 1.9 K, the first proton beam will be injected in the clockwise direction this weekend. The test will involve synchronising the LHC with the Super Proton Synchrotron (SPS), the final stage of the accelerator’s injector chain from which high-intensity proton beams are injected into the LHC ring.

Tests will continue until early September with engineers also testing proton injection in the anti-clockwise direction.

Assuming there are no major glitches, CERN is gearing up for first full circulation of protons on 10 September when they will be injected with an energy of 450 GeV. Once this has been established, the two proton beams will be made to collide, and the final step of the LHC commissioning — accelerating the protons to 5 TeV per beam — will begin.
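A quick back-of-envelope sketch (not from the article) shows just how relativistic the beams are at these energies. The proton rest energy below is the standard CODATA value; the rest follows from special relativity:

```python
# Lorentz factor and speed for LHC protons at injection (450 GeV)
# and at the planned commissioning energy of 5 TeV per beam.
M_P = 0.938272  # proton rest energy in GeV (CODATA value)

def gamma_and_beta(energy_gev):
    """Lorentz factor and speed (as a fraction of c) for a proton
    with the given total energy."""
    gamma = energy_gev / M_P
    beta = (1 - 1 / gamma**2) ** 0.5
    return gamma, beta

for e in (450.0, 5000.0):
    g, b = gamma_and_beta(e)
    print(f"{e:6.0f} GeV: gamma ~ {g:,.0f}, v/c ~ {b:.9f}")
```

Even at injection energy the protons are already travelling at more than 99.999% of the speed of light; acceleration to 5 TeV mostly adds energy, not speed.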

Dark-matter simulation reveals lumpy haloes

Cosmologists in the US and Switzerland have made the most detailed simulation yet of how gravitational interactions have led to the “haloes” of dark matter we see near the centres of galaxies today. The feat could help future experiments grasp the nature of the dark matter, an elusive substance thought to make up nearly a quarter of the universe.

The simulation, which took over a month to run on a supercomputer, charts the evolution of over a billion dark-matter particles since a few million years after the Big Bang. It reveals that the dark-matter haloes are less smooth than previously thought.

“This is the best resolved calculation of the Milky Way’s halo ever carried out, with a mass resolution five to sixty times better than the previous largest computations,” explains Piero Madau at the University of California, Santa Cruz. “Previously, the inner regions of the halo came out smooth but now we have enough detail to see dense clumps of dark matter.”


Small-scale structure

Physicists believe dark matter makes up the bulk of all matter in the universe, outnumbering normal matter by roughly five to one. Invisible to modern telescopes, it gives off no light or heat and interacts only through gravity. Its prevalence also means that dark matter has had a significant impact on the structural evolution of the universe.

Madau — together with colleagues at Santa Cruz, the Institute for Advanced Study, New Jersey, and the University of Zurich — performed his simulations using the widely accepted “cold” dark-matter model of the universe. According to this model, gravity acted initially on slight density fluctuations present shortly after the Big Bang to pull together the first clumps of dark matter. These merged and grew into ever larger clumps, creating “gravitational wells” that ordinary matter fell into and formed stars and planets, giving rise to galaxies in the centres of dark matter halos.

Although past simulations have had reasonable success at describing cosmological evolution, understanding the nature of dark matter would be helped by knowledge of its smaller-scale structure. To achieve this, Madau and colleagues had to use 3,000 processors in parallel for around a month to follow the gravitational interactions of 1.1 billion particles of dark matter. The simulation started 20 million years after the Big Bang and took into account 13.7 billion years of evolution, producing a halo the same size as the one in the Milky Way.
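A rough consistency check puts these numbers in perspective. The Milky Way halo mass used below is an assumed round figure, not one from the paper; the particle count and compute scale are from the article:

```python
# Splitting a Milky Way-sized halo over 1.1 billion simulation
# particles gives roughly the per-particle mass scale quoted later
# in the article (~a thousand solar masses).
HALO_MASS_MSUN = 1.0e12   # assumed halo mass, in solar masses
N_PARTICLES = 1.1e9       # simulation particles (from the article)

mass_per_particle = HALO_MASS_MSUN / N_PARTICLES
print(f"~{mass_per_particle:.0f} solar masses per particle")

# Compute cost: 3,000 processors running for about a month.
cpu_hours = 3_000 * 30 * 24
print(f"~{cpu_hours / 1e6:.1f} million processor-hours")
```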

Their results revealed a Russian-doll structure, with entire generations of progenitor dark matter clumps preserved as a series of nested substructures. The simulation showed that dense clumps should lurk in the inner regions of a halo (Nature 454 735).

Looking for evidence

“For the first time, a numerical simulation is now able to study the dark matter makeup of a typical galaxy down to mass scales of a thousand solar masses instead of a few hundred thousand as before,” says Asantha Cooray, a cosmologist at the University of California, Irvine. “This is equivalent to taking a picture of a cricket stadium and finally being able to see individual grass leaves, instead of just patches of green.”

Physicists believe dark matter particles, such as so-called weakly interacting massive particles (WIMPs), can collide and annihilate each other while emitting gamma rays. These could be detected by space-based telescopes, including the recently launched GLAST. According to Madau and his colleague Juerg Diemand, the denser clumps predicted by their simulation should emit lots of gamma rays, allowing for easy detection.
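Why should dense clumps emit more gamma rays than a smooth halo of the same mass? Because the annihilation rate scales with the density squared, a lumpy distribution outshines a uniform one with the same average density. The toy numbers below are purely illustrative, not from the paper:

```python
# Compare the annihilation signal (sum of density^2 over cells) of a
# smooth halo and a clumpy one with the same mean density.
import statistics

smooth = [1.0] * 8                 # uniform density cells
clumpy = [0.5] * 6 + [2.5, 2.5]    # same mean, mass concentrated in clumps
assert statistics.mean(smooth) == statistics.mean(clumpy)

def annihilation_signal(cells):
    """Relative gamma-ray output: annihilation rate goes as rho^2."""
    return sum(rho**2 for rho in cells)

print(annihilation_signal(smooth))  # 8.0
print(annihilation_signal(clumpy))  # 14.0
```

Same total mass, nearly twice the signal: concentrating matter into clumps boosts the predicted gamma-ray flux, which is why the simulated substructure matters for GLAST.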

Robert Caldwell at Dartmouth College, New Hampshire, thinks the team’s simulation should help understand the potential signals from GLAST and other particle dark matter searches. “It would be tremendously exciting to see astrophysical evidence,” he says, adding: “and imagine the timing, just as searches are pushing the limits of our conception of dark matter!”

Chinese particle collider turns up the charm

Twenty years after it smashed electrons and positrons together for the first time, China’s foremost particle-physics accelerator could soon be boosting the rate of its collisions a hundred times over.

The increase in output will be the result of a four-year-long upgrade at the Beijing electron–positron collider (BEPC) and a redesign of its detector, the Beijing spectrometer (BES), at a cost of ¥640m ($77m). Two weeks ago the overhauled facility saw a return to particle collisions, and researchers are now tweaking it for optimum collision luminosity.

“I think people are working hard to push the upgrade, to tune the machine for higher performance and to get data as soon as possible,” says Hesheng Chen, director of the Institute of High Energy Physics (IHEP), which runs the collider. Chen told physicsworld.com that he expects the first useable data from the collisions to be taken in the autumn.

‘Need more statistics’

BEPC is a dual-purpose facility, providing a testing ground for particle physics as well as hard X-ray synchrotron radiation for studies in materials and life sciences. In its original form, it had a single 240 m-long ring of magnets that accelerated bunches of electrons and positrons in opposite directions, colliding them with a luminosity of 10³¹ particles per square centimetre per second (cm⁻² s⁻¹).

The collisions produced particles containing a charm quark and an anticharm quark, which decayed rapidly into other mesons. Using BES to measure the energy and momentum of this debris, researchers have been able to measure various properties of the parent charmed particles, such as the J/ψ — a particle containing a charm and an anticharm quark. The researchers could also measure the so-called R value, which tells how often the collisions produce hadrons (particles containing quarks).

The upgraded collider, called BEPC-II, has added another ring to the outfit. This means that the electrons and positrons can be accelerated separately, so that up to 93 bunches — as opposed to just one — can be fit into each ring. Luminosity has so far been tripled, but Chen thinks it should reach 3 × 10³² cm⁻² s⁻¹ by the end of the year and 10³³ cm⁻² s⁻¹ in two years, which would mark a hundred-fold improvement. The detector redesign, called BES-III, uses stronger superconducting magnets to measure the energy and momentum of the debris particles with higher precision.
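To get a sense of what those luminosity figures mean, multiply by a cross-section to get an event rate. The cross-section below is an assumed illustrative value, not a figure from the article:

```python
# Event rate = luminosity x cross-section.
# 1 nanobarn = 1e-33 cm^2, so a luminosity of 1e33 cm^-2 s^-1
# produces one event per second per nanobarn of cross-section.
NB_TO_CM2 = 1e-33  # one nanobarn, in cm^2

def events_per_second(lumi_cm2_s, sigma_nb):
    """Collision event rate for a given luminosity and cross-section."""
    return lumi_cm2_s * sigma_nb * NB_TO_CM2

SIGMA = 10.0  # assumed charm-region cross-section, in nb
for label, lumi in [("original BEPC ", 1e31),
                    ("end of year   ", 3e32),
                    ("design BEPC-II", 1e33)]:
    print(f"{label}: {events_per_second(lumi, SIGMA):.2f} events/s")
```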

“During the past five to eight years there have been interesting results in the charm quark region — the pentaquarks, and the pp(bar) bound state, which was observed in the BEPC,” explains Chen. “But so far we do not really understand those particles; we need more statistics to pin them down.”

‘Best machine’ for charmed particles

Chen thinks that the increase in data will help in the search for rare decay events. It should also give better error bars for the R value in the charm energy region, which are currently around 6% but which BEPC-II should reduce to around 1–2%.
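The link between luminosity and error bars is the textbook counting-statistics scaling, sketched below (this is a generic illustration, not a BEPC calculation):

```python
# For a counting measurement, the statistical uncertainty scales as
# 1/sqrt(N), so collecting k times more events in the same running
# time shrinks the statistical error by sqrt(k).
import math

def improved_error(old_error, luminosity_factor):
    """Statistical error after a `luminosity_factor` increase in data."""
    return old_error / math.sqrt(luminosity_factor)

# A hundred-fold luminosity gain cuts a purely statistical 6% error tenfold.
print(f"{improved_error(6.0, 100):.1f}%")
```

The quoted 1–2% target is more modest than this naive tenfold gain, presumably because systematic uncertainties do not shrink with more data.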

In terms of electron–positron colliders, BEPC-II sits somewhere close to the middle of collision energies, at around 3–5 GeV. DAFNE — an accelerator based at the Frascati National Laboratories near Rome that has also recently installed an upgrade — smashes electrons and positrons at about 1 GeV to produce phi-mesons. At higher energies, there’s KEKB at KEK in Japan and the recently closed PEP-II at SLAC in the US, both of which have operated at about 10 GeV to produce B-mesons. There are also higher-output versions of these latter “B-factories” in the pipeline. “The BEPC-II is the best machine in the charm energy region,” says Chen.

Tiny microscope aims for Third World market

When the Dutch scientist Antonie van Leeuwenhoek first demonstrated the potential of the microscope in the 17th century, medicine was without vaccination, anaesthetic or antiseptic. Three centuries later, medicine has become far more specialized, but the microscope — crucial to many sub-disciplines like cell biology and pathology — has remained largely unchanged.

For physicians in the West this may not be a significant problem, but in developing countries diagnostic labs in the field struggle to equip themselves with conventional microscopes, which are both large and costly.

Now, scientists at the California Institute of Technology (Caltech) have developed a microscope the size of a penny piece that matches the resolution of its larger counterparts. What’s more, they claim it could be produced for as little as £5 a pop. Bioengineer and study leader Changhuei Yang told physicsworld.com that his new invention was inspired by the “floaters” in our eyes.

Direct projection

Floaters are small clumps of cells that have broken loose from the eye’s inner lining and drift through the watery gel known as the vitreous humour. We glimpse floaters as dark dots or scratches that occasionally enter into our field of vision.

Normally we see the world because light reflected from objects is focussed through the eye by a lens onto the retina. However, we see floaters by a different mechanism. These cells sit behind our lens and settle close to the retina, so what we perceive is a shadow, or “direct projection”, of them rather than a focused image. Because the dots appear larger than the cells themselves, our body has a natural microscope that needs no lens.

In the Caltech replica of this effect, the specimen to be magnified is placed directly onto a complementary metal-oxide semiconductor (CMOS) sensor, which converts optical images into an electrical signal. Direct projection using this method was first demonstrated in 2005 by Dirk Lange of Stanford University, but so far resolution has not competed with conventional microscopes. At best the resolution has been the size of the pixels, or around three microns.

Yang and his colleagues get around this limitation by laying a thin film of aluminium over the sensor and then piercing it over the centre of each pixel. This simple idea restricts pixel sensitivity to the areas directly beneath the holes — effectively creating a smaller pixel that can match conventional microscope resolution. The researchers then suspend the specimen in an “optofluid” and let it flow across the line of holes.

‘Immediate application’

So what do the medics make of the pocket-sized microscope? “For diagnosis or screening of samples which contain macro parasites — such as worm eggs — this [microscope] has an almost immediate application,” says Dr Chris Drakeley of the London School of Hygiene and Tropical Medicine. Unfortunately, Drakeley warns, the microscope’s magnification factor of just 20 means it is not yet suitable for detecting malaria parasites, which requires magnification five times greater.

Yang recognises the need to further improve resolution but says his next plan is to pack several sensors into the same chip. If an array of thousands of microscopes could return data to the same interface, users could view a large sample but easily switch to a focussed view.

The tiny microscopes could also be inserted under a patient’s skin to monitor the spread of cancer. Some forms of cancer spread by tumour cells entering the blood stream, a process known as metastasis. Chips would be inserted in the appropriate region to “look out” for these cells. Yang points out, however, that this idea would be very difficult to implement, and at best the technology would be 10 to 15 years away.

Yang says that he has secured a patent and is “in talks with various multinational biotech companies”. He hopes his microscope will soon benefit developing countries, but does not want to take a major role in promoting the invention.

Read all about it


(Credit: Amazon)

By Michael Banks

When I was a PhD student, I remember going through a few rounds of thesis revision, each greeted with a painful moan at the prospect of once again ploughing through 200-plus pages of dry, technical language, with a few equations thrown in as well. But I never thought anyone other than a physicist would really want to read it — even my mum only got as far as the abstract.

Well for all those Queen fans out there, guitarist and astronomer Brian May, who has recently completed his PhD in astronomy at Imperial College London, has now had his PhD thesis published as a book by Springer and Canopus Publishing Ltd.

May’s thesis, and the book too for that matter, is snappily entitled “A survey of radial velocities in the zodiacal dust cloud” and covers the Zodiacal light — a faint diffuse cone of light seen in the west after sunset and the east before sunrise.


Ring out the old

The Daresbury laboratory (Credit: STFC Daresbury Laboratory)

By Matin Durrani

Reporting the opening of new facilities is grist to the mill for us on Physics World. That’s why we ran a long article in last month’s print issue about the opening of the new “second target station” at the ISIS pulsed-neutron source at the Rutherford Appleton Laboratory near Oxford in the UK.

The £145m upgrade to the ISIS facility, which is used for a wide range of neutron-scattering experiments, moved a step closer to completion today when the first neutrons were created in the new station.

But spare a thought for the Synchrotron Radiation Source (SRS) at the Daresbury Laboratory in Cheshire, in the north-west of England, which officially closes today after 28 years of operation and two million hours of science.


Nobel-prize trivia

By Matin Durrani

Who’s the only physicist to have won a Nobel Prize for Literature?

It’s one of those tricky questions that you either know or don’t. And obviously because I know the answer, I couldn’t resist raising it today.

His death last night at the age of 89 has been reported in most media outlets, including the New York Times, which has published a lengthy account of his life.

I’ll drip-feed you a few clues to help you along, if you haven’t got the answer already.

He was born in Kislovodsk in the Caucasus on 11 December 1918, graduating from Rostov University in 1941 with a degree in physics and mathematics.

In February 1945 he was arrested by the Soviet spy agency Smersh and was banged up for eight years in a labour camp.


Ballistic breakthrough could lead to molecular logic gates

The first highly conductive connection between a single organic molecule and a metal electrode has been made by an international team of physicists. This achievement could lead to the development of ‘molecular electronics’ devices with the potential to be smaller and faster than conventional transistors and logic gates.

The majority of electronic devices are made from just a handful of semiconductor materials — the most common being silicon. However, some organic molecules such as DNA appear to have electronic properties similar to traditional semiconductors and some researchers believe that some types of molecules could be used to make electronic devices.

A potential benefit of such devices is that molecules are extremely small compared to semiconductor structures, which could help manufacturers pack more and more circuits onto a chip.

However, it has proven very difficult to connect single molecules to a metal electrode such that electrons are conducted easily between the two. These junctions are essential for making real-world devices like transistors and logic gates.

Significant barrier

Previous attempts at making single-molecule junctions involved using “anchoring groups” such as thiols to bind organic molecules to metal electrodes like gold. However, the metal-molecule link creates a significant potential barrier across the junction and electrons end up tunnelling across the molecule when a voltage is applied between the electrodes. This inevitably leads to a low conductivity, and thus poor performance in the finished devices.

A higher conductance would be possible if electrons were allowed to travel ‘ballistically’ across the metal-molecule junction – whereby every electron that enters the junction travels straight through more or less unhindered. This occurs in many carbon nanotube devices or single-atom contacts, which reach the quantum of conductance — the maximum conductance possible for a single electron channel. However, this has never been achieved in single-molecule junctions before.

Now, Jan van Ruitenbeek of the University of Leiden in the Netherlands along with colleagues in Australia, Germany and Spain may have solved this problem by making the first highly conductive molecular junctions. This involved binding benzene molecules directly to platinum metal electrodes, and the team found that the conductance of these devices reaches the maximum value possible for a single electron channel.

Direct coupling

The physicists showed that it is possible to couple the metal electrodes, in their case platinum, ‘directly’ to the carbon backbone of an organic molecule (benzene), which allows the electrons to travel more easily across the junction. Indeed, the conductance of the junction has a value of G₀ = 2e²/h, where e is the charge on the electron and h is Planck’s constant. This is the maximum conductance possible for a single electron channel — around 7.7 × 10⁻⁵ Ω⁻¹ (Phys. Rev. Lett. 101 046801).
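The quoted figure follows directly from the fundamental constants; a quick sanity check, using CODATA values:

```python
# The conductance quantum G0 = 2e^2/h, the maximum conductance of a
# single (spin-degenerate) electron channel.
E = 1.602176634e-19   # electron charge, in coulombs
H = 6.62607015e-34    # Planck constant, in joule-seconds

G0 = 2 * E**2 / H
print(f"G0 = {G0:.3e} S")  # ~7.75e-05 ohm^-1, matching the article
```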

The researchers say that they regard benzene as a “starting point” for building such molecular junctions and they will soon be investigating more advanced organic compounds. Benzene is ideal for such early work because it is a simple system that can be easily studied using techniques like vibration mode spectroscopy, isotope substitution, shot noise measurements and local density functional computations.

The team achieved its results using mechanically controllable “break junctions”, which allowed them to produce atomic-sized junctions of any metal at liquid helium temperatures. Once the scientists had verified that the junction was clean, they introduced benzene vapour through a capillary tube onto the structures. They observed the molecules arriving at the junction by measuring the change in conductance properties of the device. The junction was then stretched to nearly breaking point so that a molecular bridge spontaneously formed across it.

Detailed investigations

“The most important property of these junctions is that their conductance is at least an order of magnitude higher than comparable junctions made using thiol anchoring groups,” van Ruitenbeek told physicsworld.com. “Under our experimental conditions, the junctions can be held stable for a very long time, which, when combined with the cryogenic temperatures used, allows us to make detailed investigations.”

The team now plans to study metals other than platinum, which is expensive, for use in molecular electronics applications.

“What makes this work stand out is that [the scientists] have presented a new way to attach organic molecules to metal electrodes, by forming a direct metal-carbon bond, and have proven conclusively that their devices have a strong metal-molecule link,” commented Latha Venkataraman of Columbia University in an American Physical Society Viewpoint article on the research. “This enables them to overcome a major barrier in molecular based devices,” she said.

Cold atoms could help build ‘spintronics’ transistor

Engineers can make semiconductor devices smaller and pack more of them in, but there will always be a limit to how fast those devices can be made to perform.

One way to improve this limit, and broaden applications, is to design “spintronic” devices that exploit electron spin as well as electron charge. Now, physicists in the US and Lithuania have come up with an idea for a test bed that could help in the realization of one of the most important spintronic devices — the so-called Datta–Das transistor (DDT).

Like normal transistors, the DDT would control a current passing between two of its electrodes. But because the DDT is a spintronic device, it would control a “spin polarized” current — one in which most of the electrons have the same spin orientation: up or down.

The key to controlling this spin-polarized current is a spin filter, which forms part of the second electrode. The filter is set to one spin orientation — say, up — which means that a current of spin-up electrons flowing from the first electrode is always let through. To control the size of this current, the DDT has a third electrode, which applies an electric field that “twists” the spins of the electrons downwards. Depending on the extent of the twist, more or less current is blocked by the filter.
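The control principle can be captured in a one-line toy model. This is the standard textbook idealisation of spin precession against a filter, not the NIST proposal itself: rotating a spin by an angle theta gives a probability cos²(theta/2) of passing an aligned filter.

```python
# Toy Datta-Das transistor: fraction of a spin-up-polarised current
# that passes a spin-up filter after the gate field rotates the
# spins by `theta` radians.
import math

def transmission(theta):
    """Quantum-mechanical pass probability for a spin rotated by theta."""
    return math.cos(theta / 2) ** 2

print(transmission(0.0))       # no twist: all current passes -> 1.0
print(transmission(math.pi))   # spins flipped to down: essentially zero
```

Sweeping theta with the gate voltage smoothly modulates the current between fully on and fully off, which is exactly the transistor action described above.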

Putting it into practice

That’s the theory, anyway. Since US physicists Supriyo Datta and Biswajit Das proposed the DDT in 1989, experimentalists have not had much success making a working version. But Charles Clark and Jay Vaishnav from the National Institute of Standards and Technology (NIST), together with Julius Ruseckas and Gediminas Juzeliunas of Vilnius University, think they have an idea that could help experimentalists on their way — an analogous system in which basic parameters can be tweaked.

Vaishnav and colleagues’ system would be a beam of ultracold atoms, such as rubidium. These atoms would effectively have two possible states, which are analogous to the “up” and “down” spin states of electrons.

To control these atoms, say the researchers, the system would need the light from three laser beams that slightly overlap. As the atoms pass through the first laser beam — like electrons flowing from the first electrode in a DDT — they would be put in the same atomic “spin” state. But as the atoms pass through the region where the three laser beams overlap, their atomic states would begin to shift — like the twisting effect of the DDT’s third electrode. By controlling the manner in which the laser beams switch on and off, Vaishnav and colleagues think they should be able to replicate the entire function of a DDT (arXiv:0807.3067). “It may help understand problems in real systems,” says Vaishnav.

Although the cold-atom analogue is just an idea for now, Vaishnav told physicsworld.com that it uses “common experimental techniques”, which some of her colleagues would be well-equipped to implement. “There are people at NIST who are working on similar experiments,” she notes.

Postmodernism, politics and religion

Alan Sokal really likes footnotes, which may have made him uniquely qualified as a hoaxer of “science studies”. The original hoax, a purposely and wonderfully nonsensical paper about the epistemology of quantum gravity, appeared in 1996 in the cultural-studies journal Social Text, with the enthusiastic endorsement of its editorial board of eminent postmodernists. There were 107 footnotes.

The first chapter of Sokal’s new book Beyond the Hoax revisits the original Social Text paper, adding 142 “annotations” in which Sokal explains at length much of the complex fabric of in-jokes and bouleversements that made it so exquisitely wacky to anyone with even a modest knowledge of physics. The remainder of the first part of the book contains four well-footnoted essays on his reasons for undertaking this exercise in foolery, and on the various responses he has received.

Sokal maintains that, at the time of his 1996 paper, a serious assault against rationality by postmodernists was under way, led by a relatively small number of left-leaning academics in humanities departments. He felt that this would be self-defeating for the Left (whom he identifies with, describing it as “the political current that denounces the injustices and inequalities of capitalist society and that seeks more egalitarian and democratic social and economic arrangements”) while opening up great opportunities for the Right to employ obfuscatory tactics. Indeed, as Chris Mooney’s recent book The Republican War on Science amply testifies, the “faith-based” administration of George W Bush has done its best to obscure a variety of “inconvenient” scientific truths, although Sokal has found little confirmation that it has borrowed this obfuscation from the postmodern relativists and deconstructionists in the leftist fringes of academia.

The second part of the book, which is co-authored with his collaborator the Belgian physicist Jean Bricmont, is a serious philosophical discussion of epistemology (the theory of knowledge). Its first chapter condemns the cognitive relativism of the postmodernists — the idea that fact A (for instance the Big Bang) may really be true for person A but not for person B — while the second chapter makes a trial run at a reasonable epistemology for science. I was delighted to find as part of their vision “the renormalization group view of the world”, according to which one sees every level of the hierarchical structure of science as an “effective theory” valid at a particular scale and for particular kinds of phenomena but never in contradiction with lower-level laws. This leads the authors to emphasize emergence as well as reductionism. I have seen few better expositions of how thoughtful theoretical scientists actually build up their picture of reality.

On the other hand, the Sokal/Bricmont view of science as a whole may be a bit idealized, and is perhaps best suited to relatively clean fields like relativistic field theory. In the murkier and more controversial field of materials science, for example, reality is not so cleanly revealed, particularly when it contradicts the personal interests of the investigators.

Part three of the book encompasses more general subjects. For example, one very long chapter explores the close relationships between pseudoscience and the postmodernists. It is easy enough to find ignominious stories about pseudoscience; some striking and important ones that Sokal picked, for example, are the widespread teaching of “therapeutic touch” (a practice with its roots in theosophy, and not involving actual touch) in many estimable schools of nursing, and, going farther afield, the close ties between conservative Hindu nationalism and the teaching of Vedic astrology and folk medicine in state schools in India.

Whether or not postmodernism has any causal relation to pseudoscience, proponents of such pseudosciences, when attacked, are seen to defend themselves by referring to the postmodernist philosophers. And the postmodernists have been known to be supportive of such views — often, for example, favouring Vedic myths or tribal creation stories over the verifiable truths of modern science.

Finally, Sokal enters into the much-discussed intertwined fields of religion, politics and ethics. His essay takes the form of a long, discursive review of two recent books on religion: Sam Harris’ The End of Faith and Michael Lerner’s Spirit Matters. He promises a critical review, but I found him to be rather more critical of Lerner than of Harris. He supports Harris in considering Stephen Jay Gould’s description of science and religion as “non-overlapping magisteria” to be a cop-out.

Sokal is an implacable enemy of fuzzy-mindedness, and makes the point that religion cannot avoid inventing factual but unlikely claims about actual events. Even if one abandons young-Earth creationism or reincarnation, or those fascinating inventions heaven and hell, the idea that there is an actual personal God listening to one’s prayers and responding is not that far from believing that He is talking to me in Morse code via the raindrops tapping on my windowsill (which Harris suggests would be considered a sign of mental illness). Lerner’s book addresses the conundrum of religion as “spirituality”, incorporating, for instance, the sense of wonder that we scientists feel at the marvels that are revealed to us (I think a more interesting book in this vein is Stuart Kauffman’s Reinventing the Sacred). Sokal, though he lets Lerner get away with dubious claims about studies of the efficacy of prayer, rather dismisses this view.

He then moves on into the political. He asks, if we want voters to actually vote for their true economic and social interests, should we take away from them what are correctly known as the “consolations of religion”? Do we not then risk their perceiving the political Left as condescending and elitist? How do we attempt to break through misperceptions about the true values of the conservative elite? This is not a problem to which anyone, Sokal included, has a good answer.

I too cherish long explanatory footnotes crammed with extra ideas. But even skipping all the footnotes (which would be a great loss) this book is not a page-turner. The author is not one to drop a line of argument just because it wanders “off message”. Nonetheless, Sokal writes lucidly; and one must not forget that his main targets — the postmodern theorists in English, philosophy, sociology or “science studies” departments — are still doing well in even the most respected of our universities, and command enough respect for election into such august bodies as the American Academy of Arts and Sciences (I count two of Sokal’s prime targets in as many years). They aim to persuade the elite among our students that scientific rationality is just the invention of a few white males eager to hang on to positions of power, whereas Sokal (and he may be right) sees such rationality as our main hope.

Copyright © 2025 by IOP Publishing Ltd and individual contributors