Single spins flipped in optical lattice

Physicists in Germany are the first to flip individual atomic spins in an optical lattice. The researchers, who are based at the Max Planck Institute for Quantum Optics in Garching, used a combination of laser light and microwaves to address individual rubidium atoms arranged in a state known as a “Mott insulator”. Their method could be used for making quantum computers and also for simulating the behaviour of electrons in solids – especially superconductors.

This newfound ability is just the latest example of the progress that physicists have made in understanding quantum interactions by studying ultracold atoms in optical lattices of crisscrossing laser beams. By adjusting the laser light and applied magnetic fields, scientists can “tune” the interactions between atoms and simulate the behaviour of electrons in crystalline solids. Although an atom in an optical lattice can normally tunnel from one lattice site to a neighbouring site, in a Mott insulator all the sites are occupied, which means that the energy cost of tunnelling is too great and the atoms are frozen in place.

Each of these frozen atoms, however, could make an excellent quantum bit (qubit) in a quantum memory because it is highly isolated from the surrounding environment. And as each atom has a magnetic spin, optical Mott insulators could be used to simulate the effect of spin on electronic properties such as conduction. However, physicists had been unable to adjust the value of individual spins, limiting the usefulness of optical Mott insulators.

Flipping a spin

What Stefan Kuhr, Immanuel Bloch and colleagues have now done is to devise a way of flipping the spin of an individual atom without affecting the rest of the lattice. The team began with a cloud of about one billion rubidium-87 atoms, which were cooled to less than 100 nK. As this process involves atoms continually leaving the cloud, the team was left with just a few hundred individual ultracold atoms. The crisscrossing lasers were then switched on to create a 2D square lattice and the parameters of the lattice were tweaked to transform the system from a conducting superfluid to a Mott insulator with a lattice spacing of 532 nm.

All of the atoms in the lattice are initially in the “0” spin state. To flip an atom, it is first illuminated with laser light. The beam is tightly focused so that nearly all of the light falls on one lattice site, where it modifies the energy difference between the “0” and “1” spin states of that atom alone. The entire lattice is then bathed in microwaves at the modified energy difference, which flips the spin of the illuminated atom but no others.
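
The addressing logic boils down to a resonance condition: only the atom whose transition has been light-shifted responds to the global microwave pulse. The short sketch below illustrates that idea in Python; the light-shift and linewidth values, and the function names, are illustrative assumptions rather than figures from the Garching experiment.

```python
# Minimal sketch of single-site addressing: a focused beam light-shifts the
# "0"->"1" transition of one target site, and a global microwave pulse tuned
# to the shifted frequency flips only that atom. The numbers are assumptions
# for illustration, not values from the experiment.

LIGHT_SHIFT_KHZ = 70.0          # assumed shift of the addressed site's transition
MICROWAVE_LINEWIDTH_KHZ = 10.0  # assumed width over which the pulse still flips a spin

def flip_addressed_site(spins, target, shift_khz=LIGHT_SHIFT_KHZ,
                        linewidth_khz=MICROWAVE_LINEWIDTH_KHZ):
    """Flip the spin at `target` in a dict {site: 0 or 1} representing the lattice."""
    # Only the addressed site has its transition shifted by the focused beam.
    detunings = {site: (shift_khz if site == target else 0.0) for site in spins}
    # The microwave bath is tuned to the shifted transition, so only sites whose
    # transition lies within the pulse's effective linewidth respond.
    for site, detuning in detunings.items():
        if abs(detuning - shift_khz) < linewidth_khz:
            spins[site] ^= 1  # resonant: flip 0 <-> 1
    return spins

# Example: write a single "1" into an otherwise uniform 3x3 Mott insulator.
lattice = {(x, y): 0 for x in range(3) for y in range(3)}
flip_addressed_site(lattice, target=(1, 1))
print(lattice[(1, 1)], lattice[(0, 0)])  # -> 1 0
```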

The process can then be repeated at different lattice sites and the team managed to write several patterns in the Mott insulator, including the Greek symbol ψ (see figure). While it is not possible to measure the spin of individual atoms without destroying the Mott insulator, the physicists verified the technique by firing a laser that is tuned to only eject atoms in the “1” state. An optical microscope image is then taken of the remaining “0” atoms, revealing the pattern.

Crucial step

Peter Zoller, a quantum physicist at the University of Innsbruck in Austria, thinks the work is “a seminal step forward in experiments with optical lattices”. Henning Moritz of the University of Hamburg, meanwhile, says that the team’s ability to address individual atoms “is a crucial step on the path toward quantum computing with ultracold atoms”. Indeed, Zoller thinks that quantum-computing devices based on atoms could soon catch up with those using trapped ions – which currently lead the pack in terms of performance.

However, there is more work to be done. If the Mott insulator is to be used as a quantum computer, the atoms need to be put into a quantum superposition of the “0” and “1” states. Kuhr told physicsworld.com that this requires better control over the experimental parameters and is on the team’s “to do list”.

Quantum computers also require quantum gates (and entanglement) between pairs of atoms. This could be realized by putting atoms into “Rydberg states” – in which the atom’s outer electron shell extends a great distance from the atomic nucleus and overlaps with that of its neighbour.

The physicists are also interested in flipping a spin and then watching how the spin excitation moves through the lattice, thus simulating quantum magnetism and transport phenomena in solids.

The work is reported in Nature 471 319.

Reading discs with fewer photons

The intensity of light required to read data from an optical disc can be reduced dramatically by using entangled photons – according to a physicist in the UK. The concept, which has yet to be verified by experiment, could allow more data to be stored on CDs and DVDs and lead to new types of rewritable optical storage media.

Entanglement is a quantum-mechanical property that allows particles to have a much closer relationship than permitted by classical physics. A famous example of this is the Einstein-Podolsky-Rosen (EPR) correlation between the position and momentum of pairs of photons. This is unlike the laser light used to read conventional optical discs, which does not have such strong correlations between photons.

EPR light can be created in the lab and Stefano Pirandola of the University of York, UK, has calculated that it could offer a new way of reading data from optical discs. Pirandola came up with the idea when considering a memory comprising a collection of cells, each with two possible reflectivities. Higher reflectivity represents a “1”, and lower reflectivity a “0”.

Measuring intensities

In his proposed system, light hits the memory cell and a detector records the intensity of reflected light. Light is also sent directly from source to detector, creating ancillary “idler” modes that may improve the reading of cells by exploiting possible correlations within the signals (see figure below). “We do not know if the idler mode is necessary or not,” admitted Pirandola.

He believes that the gain in information – the difference between the information extracted by an EPR source and that extracted by the best classical source – can be almost 100%. “When it is equal to 100%, it means that the EPR source retrieves all the information perfectly, while all classical sources cannot read the memory.”
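
To see why reading with very little light is hard in the first place, the rough Monte Carlo below estimates the error probability of the classical baseline – a weak laser probe with photon counting – for a single two-reflectivity cell. The reflectivities, photon number and decision rule are assumptions chosen for illustration; Pirandola’s EPR-light analysis itself is not modelled here.

```python
# Classical baseline only: probe a cell of reflectivity r0 ("0") or r1 ("1")
# with a weak coherent beam and threshold the reflected photon count.
import math
import random

def poisson_sample(mean):
    """Draw a Poisson random variate (Knuth's multiplication method)."""
    limit = math.exp(-mean)
    k, product = 0, 1.0
    while True:
        product *= random.random()
        if product <= limit:
            return k
        k += 1

def read_error_probability(r0=0.4, r1=0.9, mean_photons=20, trials=20000):
    """Estimate the error rate of deciding '0' vs '1' from the reflected count."""
    threshold = 0.5 * (r0 + r1) * mean_photons   # midpoint decision rule
    errors = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        reflectivity = r1 if bit else r0
        counts = poisson_sample(reflectivity * mean_photons)  # Poissonian statistics
        guess = 1 if counts > threshold else 0
        errors += (guess != bit)
    return errors / trials

print(read_error_probability())                # error rate of order 10% for these assumed numbers
print(read_error_probability(mean_photons=5))  # noticeably worse with fewer photons
```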

In most situations, creating a practical system based on entangled light is extremely difficult because interacting with the environment destroys the entanglement. According to Pirandola’s analysis, his system should not suffer that fate. Calculations reveal that measurements of cell reflectivities are not impaired by stray photons within the system that hit the detector after being scattered by the environment.

Putting theory into practice

The biggest challenge to building a real system based on Pirandola’s calculations is making a suitable EPR source. But this hurdle is not overwhelming, given that such sources have already been created in many quantum optics labs. This is done in a process called parametric down conversion, whereby light from a pump laser hits a special “nonlinear” crystal to yield entangled photon pairs.
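
Energy conservation fixes the wavelengths of the photon pair produced in down-conversion: the pump photon’s energy is shared between the “signal” and “idler” photons. A minimal sketch, using an assumed 405 nm pump that is common in quantum-optics labs but not mentioned in the article:

```python
# Energy conservation in parametric down-conversion:
#   1/lambda_pump = 1/lambda_signal + 1/lambda_idler
# The 405 nm pump below is an assumed example, not a detail from the article.
def idler_wavelength_nm(pump_nm=405.0, signal_nm=810.0):
    """Idler wavelength fixed by energy conservation for a given pump and signal."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

print(idler_wavelength_nm())          # degenerate case: 810 nm signal -> 810 nm idler
print(idler_wavelength_nm(405, 780))  # non-degenerate pair: idler near 842 nm
```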

Pirandola thinks that a practical system could employ just a few tens of photons to read each cell. However, this does not mean that expensive, single-photon-counting detectors are needed. Instead, the signal reflected from the cells can be combined with that from the pump laser, before being separated into two parts, each impinging on their own photodetector. “Thanks to this set-up, the input signal is amplified into a macroscopic one before being measured,” explains Pirandola.

However, even if such a system is shown to be far better at reading CDs and DVDs, its size and cost make it impractical for this application. The biggest barrier is the realization of small, efficient sources of EPR light. “One promising technology is two-photon emission from semiconductors,” says Pirandola. “[Such a source] can generate correlated photons at a very high rate, and be miniaturized as well.”

Unexpected results

One researcher who is surprised by the results of Pirandola’s calculations is Seth Lloyd from MIT: “The scheme considered is very close to quantum illumination, and we verified that quantum illumination could not do significantly better for detection than classical schemes.”

He says that Pirandola’s work is very important, providing a rare example of a quantum mechanical measurement that is significantly superior to a classical one.

Pirandola’s work is described in Phys. Rev. Lett. 106 090504.

The flip-flop world of research

By Louise Mayor

Life in research involves a turbulent rollercoaster of emotions. But often the only glimpse we get is the success and jubilation when things work out and results get published.

This new video report (below) offers a behind-the-scenes look into the whole research process, from the long hours spent working in the lab to that day when the results finally get accepted for publication in a journal. It features researchers at Nottingham University achieving a breakthrough in part of their broader aim: to construct 3D objects on surfaces, atom by atom, using scanning probes. “The novel aspect of this video is not so much the science but the fact that we’ve filmed the entire research process over the course of a year or so,” says Philip Moriarty, the main protagonist in this adventure.

The joy that results when experiments go well comes across nicely when, while being filmed in the lab, Moriarty breaks off mid-sentence to throw his fists in the air and exclaim “yes!” However, he reveals that the groundwork preceding what looks so effortless has been more than 18 months in the making and has sometimes involved 24- and even 36-hour shifts.

But research is rarely over once you’ve got that crucial result: there are then the highs and lows of trying to get the work published in as prestigious a journal as possible. Moriarty highlights that there’s a definite hierarchy of journals to which physicists submit papers. In this case their work was rejected from both Nature and Science before finally being accepted in Physical Review Letters.

Film-maker Brady Haran really digs deep with a frank set of questions that would make many less-composed subjects squirm, such as: “Why is this impressive?”; “What you’ve written…looks really hard to read and really boring – who’s this for?”; and “If only you and a select number of people in the world can understand that paper, how is it doing the world any good?”

The up-and-coming Haran highlights this video on his blog as a great example of what he hopes to achieve with science films. Haran is the mastermind behind the Test Tube project where this video is featured alongside a veritable trove of other gems, as well as The Periodic Table of Videos and Sixty Symbols.

Tevatron tightens its grip on the Higgs

The latest results from the Tevatron collider at Fermilab near Chicago suggest that the Higgs boson is on the light side – which means that it could be harder to detect than a heavier particle. Predicted by the Standard Model of particle physics, the Higgs, if discovered, would provide an explanation for how elementary particles acquire mass. The Standard Model does not, however, say anything about what the exact mass of the Higgs boson ought to be and its eventual detection would be a massive achievement in high-energy physics.

Also in the race to detect the Higgs is the Large Hadron Collider (LHC) at CERN in Switzerland, which yesterday announced its first 7 TeV proton–proton collisions of the year. Although the LHC is expected to run continuously until the end of 2012 before a year-long upgrade, researchers at Fermilab are keen to steal a march on their European rivals. Time is running out, though, as the Tevatron is due to close for good in September.

The new analysis of data from Tevatron’s CDF and D0 experiments – along with earlier results – adds spice to that race, ruling out a Higgs mass of 156–183 GeV/c². Much of this region is excluded to 95% confidence, with some excluded to 90%. The new analysis extends Tevatron’s previous Higgs exclusion zone of 158–175 GeV/c² (95%), which was reported in July 2010. “This makes the Tevatron the frontrunner in the hunt for the Standard Model Higgs boson,” claims Fermilab physicist Rob Roser, who works on the CDF experiment.

When combined with previous searches for the Higgs and constraints imposed by the Standard Model, the Higgs mass is most likely to be in either a small sliver at about 183–185 GeV/c² or somewhere between 114 and 156 GeV/c² (see figure).

Lurking in the background

The ease with which the LHC can look for the Higgs depends partly on the particle’s mass. If the Higgs is heavier than about 140 GeV/c², it is more likely to decay into pairs of Z or W bosons, which would cause a distinct signal in the LHC’s detectors. A lighter Higgs, in contrast, would favour a decay to b-quarks, which would be harder to see against the background of other events. Indeed, this difficulty is the reason why the Tevatron has not yet been able to extend its exclusion zone to lower Higgs masses.

If the Higgs weighs in towards the bottom of the theoretical range it could prove very difficult for the LHC to find the particle. However, Greg Heath of the University of Bristol in the UK, who works on the LHC’s CMS experiment, points out that the collider is equipped for the job.

“In the LHC experiments we have a range of strategies for looking in the low-mass region, not all of which are available at the Tevatron because the LHC detectors are more powerful,” he says. “Although it’s the toughest region, the experiments have been designed to do the job there and we have a good chance of seeing at least the first signs with this year’s data sample.”

As for a higher-mass Higgs, Heath points out that the LHC will also be looking at masses above 180 GeV/c².

Colliders race for the Higgs

Despite the collider’s imminent closure, researchers at Fermilab will continue operating the Tevatron until September 2011 in what is shaping up to be a race for the Higgs. “In the coming months our collaborations will focus on both the high-mass and low-mass scenarios and optimize our analysis techniques for the entire Higgs mass range,” says CDF physicist Giovanni Punzi, of the University of Pisa and the National Institute of Nuclear Physics in Italy.

Revelations of a golden age

For roughly 700 years, many of the greatest scientists lived in the Islamic world. The Western narrative, however, has often neglected the contributions of major figures such as the chemist al-Jabir, the mathematician al-Khwarizmi and the medic al-Razi, preferring instead to jump directly from Aristotle, Euclid, Archimedes and Ptolemy to Copernicus and Galileo in reporting scientific development over the ages. Yet the fact is that between the eighth and 15th centuries AD, the scientists of the Islamic world developed original theories in mathematics, astronomy, physics, medicine and engineering – frequently with the help of works translated into Arabic from Greek, Sanskrit, Pahlavi and Syriac sources.

Pathfinders: The Golden Age of Arabic Science is brimming with examples of scientific breakthroughs from this period, assembled with enthusiasm and written in a style that makes it compelling reading. The author, Jim Al-Khalili, was born in Baghdad and his book blends well his passion for his illustrious birthplace with his family history and a desire to engage a larger audience with not just the facts of science, but its history too. His knowledge of Arabic and physics provides the book with an authority that he is careful not to exceed by making unsubstantiated claims for “Arabic science”. Indeed, he initially defines “Arabic science” rather narrowly, as science “carried out by those who were politically under the rule of the Abbasids, whose official language was Arabic, or who felt obliged to write their scientific texts in Arabic”. However, he also discusses scientific activities in other dynasties and at different periods, such as the Andalusian Umayyad caliphate in Spain (929–1031) and the Fatimid Caliphate in Egypt (909–1050).

One way round the difficulty of encompassing such a wide canvas would be to refer to “Islamic science” instead, but Al-Khalili provides plausible reasons for not doing so. Among them is the fact that the three scientists whose careers feature most prominently in the book – the polymaths al-Biruni and Ibn al-Haytham, and the physician-philosopher Ibn Sina – all viewed their scriptures as religious guides and not as scientific manuals. Indeed, al-Biruni, who measured the circumference of the Earth to within an accuracy of 1%, once warned that “the extremist would stamp the sciences as atheistic and would proclaim that they led people astray, in order to make ignoramuses of them, and to hate the sciences. For this will help him conceal his own ignorance”.

However, Al-Khalili’s wise choice of “Arabic” rather than “Islamic” science as his theme makes it all the more frustrating when he proceeds to label ancient Indian science as “Hindu science”. It is, in fact, a weakness of this book that its author seems preoccupied with the transmission of knowledge through Greek texts, to the neglect of contributions from the East, notably India and China. This is particularly so in the chapters on numbers and algebra. It is not sufficient just to state that the mathematical activities of these traditions have been covered well elsewhere. They are integral to any discussion of transmissions to and from the medieval Islamic world – as I have shown in my own book on the non-European roots of mathematics, The Crest of the Peacock (2010, Princeton University Press). It might have helped the author to recognize that “Arabic science” went through three stages, not necessarily chronologically: first, a period of growing awareness of other scientific traditions and the emergence of the translation project; second, a period of assimilation of scientific knowledge of different cultures (including Mesopotamian, Iranian, Indian and, of course, Greek); and last, a period of creativity and original scientific endeavours, culminating in the transmission of knowledge to Europe and elsewhere.

Aside from this flaw, the book provides a very readable account of many developments, including what the author describes as “the world’s first state-funded large-scale science project”. During the caliphates of Harun al-Rashid (786–809) and his son al-Ma’mun (809–833) an ambitious programme of construction was carried out in Baghdad that included an observatory, a library, and an institution for research named Bayt al-Hikma (“House of Wisdom”). This project brought groups of scholars together to address issues such as determining the Earth’s curvature and the coordinates of the world’s major cities and landmarks. The seminal influence of mathematical models developed by Islamic scientists on Copernicus is likewise well summed up by the author, who suggests that Copernicus should be seen as the last of the Maragha school of astronomers rather than the first modern one – a reference to an astronomical tradition that began in the 13th century at the Maragha Observatory in modern-day Iran, where scholars attempted to produce alternatives to the Ptolemaic model.

A notable point made in the book is that despite the impressive list of Arabic scientists and their achievements (a useful glossary of which is found at the back of the book), what is more important is the scientific method that they championed. They were the first group of scientists who relied on experiment and observation as well as theory, and if the data they gathered did not support the theories of Aristotle, Galen or Ptolemy, they went with the empirical results. The spirit of Mutazilism, or critical thinking and rationalism, prevailed in this culture at a time that predated the European enlightenment by about a thousand years.

A question then arises: why did Arabic science falter instead of creating a full-blown scientific revolution? Al-Khalili’s answer is interesting. He is dismissive of the usual arguments, which blame the victory of religious orthodoxy, the wars between different caliphates, or the destruction of Baghdad by the Mongolian army in 1258. Instead, the author suggests that the reason should be sought in the Islamic world’s “intense aversion” towards printing, which lasted well into the 17th century both because of the aesthetic value attached to calligraphy and because of the technical problems of typesetting Arabic script. This would have constrained the spread of ideas, preventing them from travelling as fast as they did in Europe a few centuries later. As for other technologies that might have enabled a scientific revolution, it is worth pointing out that although Al-Khalili discusses paper production (a technology of Chinese origin) and its role in the diffusion of ideas in the Islamic world, his book is short of any discussion of other technological advances made in the Islamic world.

The first book printed in Arabic (the Koran) was so riddled with errors that the Ottomans refused to buy copies from its Venetian printers. There are errors in Pathfinders as well, although they are not as serious. For example, in the chapter on numbers, there is no such thing as the “Bakhshali Theorem” referred to in endnote 2, and the earliest example of Indians working out square roots is not the “Bakhshali Manuscript” (now dated to the 7th century AD) but the Sulbasutras (dated 800–500 BC), which are also the earliest source of the Pythagorean result (i.e. a² = b² + c² for a right-angled triangle). Also, the author of the Chinese text The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven is not Zhou Bi Suan Jing (this is in fact the text’s Chinese title), but an unknown scholar.

For the most part, such errors are not enough to confuse the reader. However, the errors in the illustrations in the book are irritating, if not misleading. Among other mistakes, the reader is led to expect on Plate 15 a crater on the Moon named after the Córdoban polymath ibn Firnas, only to find instead a drawing of the muscles of an eye from an ophthalmological treatise by ibn Ishaq. Also, the two maps provided at the beginning of the book do not show the locations of some places, such as Khurasan, even though these places are frequently mentioned in the text; in one map, the name of the river Ebro is spelt incorrectly. Careless editing has done a great disservice to a book that otherwise has much to recommend it, as it excavates and illuminates a hidden history of Islamic science and creativity of which we are also the inheritors.

Physicists take stock of quake damage

Physicists in Japan are assessing the state of the country’s research facilities in the aftermath of Friday’s major earthquake and tsunami. The magnitude-8.9 earthquake, with an epicentre around 130 km off the eastern coast of Japan, has wrought untold devastation on the country’s eastern coastline. As the clean-up begins, scientists are starting to evaluate how much damage has been caused to the country’s research infrastructure and facilities.

Currently the massive new $1.5bn Japan Proton Accelerator Research Complex (J-PARC), which opened two years ago, remains closed and will stay shut for at least another three days while safety inspections are carried out. Lying on the eastern coast of Japan around 200 km south of Sendai – one of the worst hit areas of the quake – the facility currently has electricity but is without running water. J-PARC produces a range of particles including neutrons, muons, kaons and neutrinos from three accelerators: a 200 MeV linear accelerator; a 3 GeV proton synchrotron; and a 50 GeV proton synchrotron.

According to J-PARC director Shoji Nagamiya the lab has, however, been unaffected by the tsunami because the facility was built with enough defences to withstand a 10 m wave. “Fortunately, no-one from J-PARC has had any injuries,” says Nagamiya. “There are also no radiation problems.”

A preliminary inspection by researchers who battled for hours to reach the facility on Sunday also revealed that the earthquake has done little damage to buildings at J-PARC thanks to strict building codes. However, roads around the facility have been severely damaged, with cracks up to 50 cm wide. Nagamiya says he is unsure how long it will take before the facility is fully back up and running again.

Masatoshi Arai, deputy director of the Materials Life Facility (MLF) at J-PARC, which operates the facility’s neutron spallation source, adds that no MLF personnel have been harmed during the earthquake. However, he says that the mercury target used to produce neutrons has moved around 30 cm and that although the extent of the damage is not yet known, it could take more than six months for the MLF to return to normal.

Delayed results

Meanwhile, the Tokai to Kamioka (T2K) experiment, which involves generating neutrinos at J-PARC’s 30 GeV proton synchrotron and sending them to the vast SuperKamiokande detector that lies 300 km away in an underground mine in the city of Hida, also seems to have remained unscathed. David Wark, from Imperial College London and former international co-spokesperson of T2K, told physicsworld.com that the experiment was running at the time of the earthquake, but was shut down immediately and has not restarted since.

“The condition of the experiment and accelerators is unknown,” he says. “There is some superficial damage to the buildings and some damage to roads and services caused by ground failure. However, the major facilities are supported by pillars reaching the bedrock so hopefully subsidence will be less.” Researchers have not yet entered any of the buildings, so cannot currently assess any damage to experiments.

Wark says the earthquake struck last Friday just a few minutes before T2K researchers were going to present their first results from the facility. Those results will now be revealed on Wednesday at a neutrino telescopes conference in Venice.

Meanwhile, the KEK high-energy physics lab, which lies around 50 km north-east of Tokyo in Tsukuba, has established an earthquake emergency response team. Led by Atsuto Suzuki, director general of KEK, the team will start an investigation of the facilities this week. In a statement the lab says there has been some damage to the buildings and facilities, although there are no reports of casualties at the site.

Elsewhere in Japan, Hitoshi Murayama, director of the Institute for the Physics and Mathematics of the Universe (IPMU), which is based at the University of Tokyo, says there has been no damage to the university’s campus or the IPMU’s building. Murayama is, however, concerned for researchers at Tohoku University in Sendai, which lies around 10 km inland from the coast. “While I was told there has been no collapse of buildings, the lack of power, water and gas, and shortage of food is becoming serious.”

Finally, three researchers from J-PARC are among those to have been sent to the Fukushima reactor to undertake radioactivity surveys. Since Friday’s earthquake there have been two explosions at the facility where engineers are pumping seawater into the reactor to stop a potential meltdown of the nuclear fuel.

If you are a physicist working in the region with any information about how your institute or university has been affected by the earthquake then please contact michael.banks@iop.org

Update: 14:00 GMT 15 March. In an e-mail to members of the Japanese Physical Society (JPS), Miyanaga Masaharu, JPS chairman, says that the society’s board has decided to cancel its annual meeting. The conference was to be held at Niigata University on 25–28 March. More details will be posted on the JPS website on 18 March.

Update: 16:20 GMT 15 March. The Japan Society of Applied Physics (JSAP) has also cancelled its spring meeting, according to Tatsuzo Dazai, publishing and business-development director at the JSAP and the JPS. The meeting was due to have taken place at the Kanagawa Institute of Technology in Atsugi, Kanagawa Prefecture, on 24–27 March.

Update: 08:50 GMT 16 March. Soichi Wakatsuki, director of the Photon Factory – a national synchrotron radiation facility based at the KEK particle-physics lab – wrote in an e-mail that the facility’s linear accelerator has suffered “substantial damages”, including the displacement of three radio frequency modules by about 10 cm and one magnet that fell onto the floor. Wakatsuki says engineers will need to turn the facility back on before knowing the true extent of the damage and it will be “at least two to three months” before the facility is back to normal again.

Star-hungry black hole could blow galactic ‘bubbles’

Giant bubbles of gamma-ray-emitting material surrounding the Milky Way are created by our galaxy’s central black hole – and its appetite for stars – according to an international team of astronomers.

Back in November 2010 astronomers using the Fermi Gamma-ray Space Telescope released details of a colossal but previously unseen structure burgeoning out from the core of the Milky Way (see figure). Stretching some 25,000 light-years above and below our galaxy’s main disc, the well-defined edges of these two gamma-ray-emitting “bubbles” hint that they were created by a sizeable and rapid release of energy.

Some astronomers have suggested that our galaxy’s central super-massive black hole is powering the mysterious bubbles, but the exact process remains unclear. Now, a team of researchers led by K S Cheng at the University of Hong Kong has created a model that makes the connection.

Capturing stars

“I’d been working with Vladimir Dogiel, based at the P N Lebedev Institute of Physics in Moscow, on the link between the unusually high-energy phenomena at the galactic centre and star capture by the central black hole,” Cheng told physicsworld.com. “When we saw the discovery of the bubbles last year we realized it too was a phenomenon that could be included in our model,” he added.

The black hole at the centre of the Milky Way is a behemoth with a mass four million times that of the Sun. It has long been known to consume anything that ventures too close, and Cheng’s model suggests it chews up stars at a rate of 100 every 3 million years. Astronomers believe that only 50% of the mass of the disrupted star gets swallowed by the black hole; the other half gets “burped” back out into space before it reaches the point of no return.
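
Taken at face value, those figures imply a modest average feeding rate. A back-of-the-envelope sketch, assuming (purely for illustration) that each captured star has roughly one solar mass:

```python
# Rates implied by the figures quoted above, assuming ~1 solar mass per star
# (an assumption made here for illustration only).
stars_per_period = 100
period_years = 3e6
swallowed_fraction = 0.5            # the other half is "burped" back out

capture_rate = stars_per_period / period_years        # stars per year
accreted = capture_rate * swallowed_fraction          # solar masses per year swallowed
ejected = capture_rate * (1.0 - swallowed_fraction)   # solar masses per year returned to the halo

print(f"{capture_rate:.1e} stars/yr, ~{accreted:.1e} M_sun/yr swallowed, "
      f"~{ejected:.1e} M_sun/yr returned to the halo")
# -> about 3.3e-05 stars/yr, i.e. roughly 1.7e-05 M_sun/yr in each direction
```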

This regurgitation blasts very hot plasma – with energies of around 10 keV, according to Cheng – out into the surrounding galactic halo, raising the ambient temperature. Having been heated, the halo then expands. Cheng’s model describes how, as the black hole continues to devour stars, shockwaves are created as hot plasma is repeatedly and periodically injected into the halo. “We used the analogy of the Sun ejecting the solar wind into the solar system,” says Cheng. “When the solar wind blows out plasma it too causes a bubble: the heliosphere,” he explains.

Particle accelerators

Each shockwave acts as a particle accelerator, increasing the speed of electrons within the plasma to near that of light. Indeed, Cheng expects that the energy of the shock front at the galactic centre is nearly 100 times greater than that created by a supernova explosion. These speedy electrons interact with photons in the galactic halo, boosting some of them up to the gamma-ray energies observed in the bubbles.

However, Cheng and colleagues are not the only scientists with a theory. Stefanie Komossa, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics in Germany, told physicsworld.com: “More continuous accretion onto the black hole from interstellar matter or molecular clouds, like we see ongoing in other active galaxies, could be responsible too.” But she does recognize strength in Cheng’s argument. “The disrupting of stars by the central black hole is still a very plausible explanation as we know that these events are unavoidable,” she added.

Confirmation of Cheng’s theory could be around the corner. “We’re currently running our next simulation and then we expect to have a theoretical map of the distribution of gamma rays in the galactic halo. Our map can then be compared to those constructed through telescope observations,” Cheng explains. “The work is ongoing but we expect to have some results in the next six to nine months,” he adds.

Cheng’s model is described in a paper submitted to Astrophysical Journal Letters and a preprint is available on the arXiv preprint server.

Michio Kaku looks to the physics of the future


What benefits will science bring to the average person in the future?

Today a conventional MRI machine occupies a space about the size of this office, limiting where it can be installed and used. This is because huge coils are needed to make the magnetic field as uniform as possible in order to get those gorgeous pictures of the inside of the body. Using computer technology, which in turn is applied physics, you can now compensate for inhomogeneities of the magnetic field. The world’s smallest MRI machine, made by physicists in Germany, is about one foot tall. Eventually it will be the size of a mobile phone and could be used anywhere.

We will also benefit from DNA chips that use Silicon Valley technology to locate cancer colonies decades before they form a tumour. The cancer will then be cured using nanotechnology. I had lunch recently with one of the world’s leaders in research in nanoparticles. She’s at the National Institutes of Health in Washington and uses molecules like smart bombs to zero in on cancer cells. We are talking about a revolution in cancer research. In the future, chemotherapy will seem as primitive as leeches and bloodletting.

You write that Moore’s Law – the observation that computing power doubles roughly every two years – is not going to hold much longer for silicon devices. Could you explain?

We are seeing the beginning of the end of Moore’s Law for two reasons. One is heat build-up as a result of doing so many electronic operations in a very tiny space. The second is quantum leakage – Heisenberg’s uncertainty principle eventually catches up with chip designers. In tiny circuits the uncertainty principle means that you can’t know exactly where the electron is. And if you shrink a transistor to the size of a few atoms, the atoms themselves can leak out. We physicists are desperately trying to create the post-silicon era: quantum computers, atomic computers, molecular computers.

Given all the technical and financial constraints, what do you see as the future of Big Physics?

In 1993 Big Physics took a huge blow because plans to build the Superconducting Super Collider in the US were cancelled. The Europeans are now benefiting from a much smaller machine, the Large Hadron Collider. Physicists want to go to the next generation beyond that and build the International Linear Collider, but ultimately society as a whole has to make the decision – and unfortunately physicists don’t interact with the larger society.

Science is getting more expensive and the public may simply pull the plug. That’s why we have to interact with the rest of society. That’s one reason why I write books. Even though we physicists created the architecture of the 20th century, the public doesn’t know that. The public only looks in terms of those who massage money. Those who create wealth through things like the transistor or the laser, their names are mostly unknown.

How do you see the interplay of science, politics and society in the future?

Science is a double-edged sword. The positive side can cut against ignorance, poverty and disease. The negative side can be very destructive when wielded by dictatorships, evil monarchies, governments that want to take other people’s resources and subjugate them. Take a look at the two world wars; out of those came poison gas, saturation bombing and nuclear weapons. Scientists create the sword and we are the ones who have to interact with society and explain both sides – that is where I think we have been negligent. Which I think is very sad.

Being a tireless science popularizer must exact demands. How does it affect your research work?

I am a theoretical physicist. If I was an experimental physicist and my vacuum pump broke I would have to drop everything and fly back to New York to repair it. My laboratory is my own mind and I have chunks of equations in my head. If they don’t fit properly into the right form, I have to massage them, manipulate them, take them apart and put them back together again.

Travelling does not interfere so much with this process – I can work while I stare out of an aeroplane window or a hotel window. An analogy would be a musician. A musician has partial melodies dancing in their head. When the melodies start to come together they go to a piano and plunk out a few notes, then they go back to daydreaming about melodies. Most of what a musician does is not with a piano at all.

Are you optimistic about the future?

I think we are headed for a type I civilization, a planetary civilization where humans can do things like control the weather and harness all the light from the sun. Type II is a stellar civilization that can control the power of an entire star. Type III is a galactic civilization that controls the output of 100 billion stars, plays with black holes and zips around the galaxy.

Today we are type 0 and get our energy from dead plants, oil and coal. When I open the newspaper I see the birth pangs of type I. For example, the Internet is the beginning of a type I telephone system. We are privileged to be alive to witness the birth of a type I technology – a truly intelligent planetary communications system. Overall I’m pretty optimistic. I think we’ll get to type I. The danger point is between type 0 and type I; that’s when you have the power to destroy all life on your planet.

Physics of the Future will be published in the US on 15 March by Doubleday.

Phase-change memory becomes more portable

Phase-change materials are already used to store data on rewritable discs, but their relatively high power requirements make them impractical for use in mobile phones and other portable devices. Now, researchers in the US have found a way to decrease the volume of phase-change material in the memory bit, cutting power requirements 100-fold compared with the best devices on the market today.

Phase-change materials are the active material in rewritable DVDs and are usually made of chalcogenides like germanium antimony telluride – GST for short. Using voltage pulses to produce heat, the materials are switched between an amorphous state (“off”) and crystalline state (“on”). The amorphous state has a very high resistance and the crystalline state a very low resistance.

Faster than Flash

These states endure once the power is turned off, so the materials are ideal for making nonvolatile memory similar to Flash or hard drives. What is more, the phases can be switched in just a matter of nanoseconds, which is much faster than Flash. However, the snag is that relatively high power levels are usually required to switch between the amorphous and crystalline states in GST memory bits.

To get around this problem, Eric Pop and colleagues at the University of Illinois at Urbana-Champaign used carbon nanotubes to “house” nanometre-scale GST memory bits. They began by creating tiny gaps within the nanotubes using a method called electrical breakdown. This simple technique produces gaps that vary in size from 20 to 300 nm – usually in the middle of a nanotube. Next, the researchers filled the nanogap with a small amount of GST.

The devices are initially in the off state because the as-deposited GST bits are amorphous, with a high resistance of around 50 MΩ. When a voltage is applied across the nanotube (which effectively acts as a contact or interconnect), an electric field is created across the nanogap and switches the GST bit to the crystalline phase. The resistance of the crystalline phase is around 100 times lower, at roughly 0.5 MΩ.
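
Reading such a bit back is conceptually simple: apply a small voltage, measure the current and decide which resistance state the cell is in. The sketch below illustrates that thresholding step using the resistance values quoted above; the read voltage and the choice of threshold are assumptions for illustration, not details from the Illinois experiment.

```python
# Minimal sketch of reading a phase-change bit by thresholding its resistance.
# The two resistances come from the article; the read voltage and the
# geometric-mean threshold are illustrative assumptions.

R_AMORPHOUS_OHM = 50e6     # "off" state, ~50 Mohm
R_CRYSTALLINE_OHM = 0.5e6  # "on" state, ~0.5 Mohm
READ_VOLTAGE_V = 0.1       # assumed small, non-destructive read voltage

def read_bit(measured_current_amps):
    """Return 1 ('on', crystalline) or 0 ('off', amorphous) from a read current."""
    resistance = READ_VOLTAGE_V / measured_current_amps
    # Geometric mean of the two resistances puts the threshold midway on a
    # log scale (~5 Mohm), well away from both states.
    threshold = (R_AMORPHOUS_OHM * R_CRYSTALLINE_OHM) ** 0.5
    return 1 if resistance < threshold else 0

print(read_bit(READ_VOLTAGE_V / R_CRYSTALLINE_OHM))  # -> 1
print(read_bit(READ_VOLTAGE_V / R_AMORPHOUS_OHM))    # -> 0
```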

‘Extremely low power dissipation’

The switching only occurs in the small amount of material contained within the nanogap. “This means extremely low power dissipation compared to state-of-the-art devices that use much larger metallic wires to contact the phase-change material,” explained Pop.

The results are very important, say the researchers, because phase-change materials are the most promising technology for replacing Flash memory in laptops, cellphones and many other portable applications. “A 100-fold power reduction could go very far in extending battery life and portability, and could also ultimately lead to many novel applications,” says the team.

Although the Illinois researchers have reduced power dissipation by two orders of magnitude, the technology may not yet have reached its fundamental lower limit. “We will now seek to further reduce the programming power (we think another factor of 10 is possible) and also improve the long-term reliability of the memory bits,” Pop told physicsworld.com.

The work was described in Sciencexpress doi:10.1126/science.1201938.

Laser heats up fusion quest

Physicists at the $3.5bn National Ignition Facility (NIF) say they have taken an important step in the bid to generate fusion energy using ultra-powerful lasers. By focusing NIF’s 192 laser beams onto a tiny gold container, researchers have achieved the temperature and compression conditions that are needed for a self-sustaining fusion reaction – a milestone that they hope to pass next year.

Located at the Lawrence Livermore National Laboratory in California and officially opened last year, NIF will provide data for nuclear weapons testing as well as carry out fundamental research in astrophysics and plasma physics. The facility will also aim to fuse the hydrogen isotopes deuterium and tritium in order to demonstrate the feasibility of laser-based fusion for energy production.

These hydrogen isotopes will be contained within peppercorn-sized spheres of beryllium, which will be placed in the centre of an inch-long hollow gold cylinder – known as a hohlraum. By heating the inside of the hohlraum, NIF’s laser beams will generate X-rays that cause the beryllium spheres to explode and, due to momentum conservation, the deuterium and tritium to rapidly compress. A shockwave from the explosion will then increase the temperature of the compressed matter to the point where the nuclei overcome their mutual repulsion and fuse.

One of the main aims of NIF is to achieve “ignition”, which means that the fusion reactions generate enough heat to become self-sustaining. Researchers hope that by burning some 20–30% of the fuel inside each sphere the reactions will yield between 10 and 20 times as much energy as supplied by the lasers.

Hotter than the Sun

NIF first began testing the laser beams last year and now two groups at Lawrence Livermore have shown that they can obtain the desired conditions inside the hohlraum. They did this by using plastic spheres containing helium, rather than actual fuel pellets, since these were easier to analyse. By combining their experimental measurements with computer simulations, the researchers found that the hohlraum converted nearly 90% of the laser energy into X-rays and that it heated up to some 3.6 million degrees Celsius. They also found that the sphere was compressed very uniformly, its diameter shrinking from around two millimetres to about a tenth of a millimetre.
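
Those numbers imply a striking convergence. A quick check, assuming for illustration that the capsule stays roughly spherical and conserves its mass:

```python
# What the quoted shrinkage implies, under the simplifying assumptions that the
# capsule stays spherical and its mass is conserved during the implosion.
initial_diameter_mm = 2.0
final_diameter_mm = 0.1

linear_ratio = initial_diameter_mm / final_diameter_mm  # ~20x convergence in radius
volume_ratio = linear_ratio ** 3                        # ~8000x smaller volume, so ~8000x denser
print(f"linear convergence ~{linear_ratio:.0f}x, density increase ~{volume_ratio:.0f}x")
```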

“These results are better than we were hoping,” says NIF boss Edward Moses. “People were concerned that we wouldn’t be able to achieve the desired temperature and implosion shape, but those fears have proved unfounded.” Moses says that the next step will be to replace the plastic spheres with beryllium ones containing unequal quantities of deuterium and tritium, in order to study how hydrodynamic instabilities might lead to asymmetrical implosions. The final step will then be to switch over to actual fuel pellets, which will contain equal quantities of the two hydrogen isotopes, and which, it is hoped, will ignite.

Moses says he hopes that ignition will take place in 2012. But he is keen not to raise expectations, having had to deal with many technical problems since construction started on NIF back in 1997. Indeed, he and his colleagues had predicted last January that ignition would be achieved by the end of 2010. “We might be able to reach ignition around spring or summertime next year,” he says. “But there’s a lot of physics that can run us off course in the meantime.”

David Hammer, a plasma physicist at Cornell University in New York, says that the latest results are encouraging. However, he warns that the study was done without fully understanding the interactions taking place between the laser beams and plasma inside the hohlraum and that such interactions could wreck the very precise symmetry of the implosion needed for ignition.

The work is described in Phys. Rev. Lett. 106 085003 and 106 085004.
