Atom laser makes its first measurement

Physicists in Australia have for the first time performed a measurement task with an atom laser. The achievement opens up the possibility of manipulating an atom-laser beam so that it may be used to process quantum information.

An atom laser is made from a Bose–Einstein condensate, a collection of ultracold atoms that have all fallen into the same quantum state. BECs generally have to be trapped — for example, with magnetic fields — but if some of the atoms are allowed to escape the confining potential they can produce a travelling matter wave.

Just like the light from a conventional laser, the matter wave from an atom laser is coherent and therefore has a well-defined quantum field that can be manipulated for processing and transmitting quantum information or for making measurements. Indeed, because atoms have a greater momentum than the photons emitted by conventional lasers, they have a smaller de Broglie wavelength, which in principle means an atom laser can make more precise spatial measurements.
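The momentum argument can be made concrete with a back-of-the-envelope comparison. This is a rough sketch: the atom speed below is an illustrative value for atoms that have fallen a short distance under gravity, not a figure from the Canberra experiment.

```python
# Compare the de Broglie wavelength of a rubidium-87 atom in an
# atom-laser beam with the wavelength of a typical laser photon.
# lambda = h / p, with p = m*v for a massive particle.

H = 6.626e-34              # Planck constant (J s)
M_RB87 = 87 * 1.6605e-27   # mass of a Rb-87 atom (kg)

def de_broglie_wavelength(mass, speed):
    """de Broglie wavelength (m) of a massive particle."""
    return H / (mass * speed)

# Illustrative speed: an atom that has fallen ~1 cm under gravity,
# v = sqrt(2*g*h) ~ 0.44 m/s.
v_atom = (2 * 9.81 * 0.01) ** 0.5

lam_atom = de_broglie_wavelength(M_RB87, v_atom)
lam_photon = 780e-9   # rubidium resonance light, a typical optical wavelength

print(f"atom:   {lam_atom:.2e} m")   # tens of nanometres
print(f"photon: {lam_photon:.2e} m")
```

Even at this sedate speed the atomic wavelength comes out tens of times shorter than the optical one, which is the sense in which atom lasers promise finer spatial resolution.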

“In my opinion the experiment we have done really shows an important new direction that we are following,” John Close of the Australian National University in Canberra told physicsworld.com. “We and several other groups have spent many years developing the atom laser to be a useful tool.”

Two condensates

The measurement performed by Close and colleagues was of the interaction between an atom laser and another BEC, both made of rubidium–87 atoms but in different “hyperfine” states. The researchers positioned the atom laser above the second BEC so that its matter wave would fall through the lower collection of atoms and scatter. Then, by shining light at right angles to the plane of motion and recording the absorption distribution, they measured the scattering length — the single parameter that characterizes the strength of low-energy collisions between atoms — to be 94 times the Bohr radius (arXiv:0805.0477).

“All other measurements that we know of in the field of atom lasers have been used to characterize the properties of the atom-laser beam itself rather than to use an atom laser to make a measurement of another quantity,” says Close.

Although the measurement of scattering length itself for those hyperfine states is not new, the fact that Close and colleagues have performed it with an atom laser is important because it will help physicists to understand how to manipulate atom lasers so they can be used in quantum information systems. This can already be achieved with conventional lasers by sending the light through non-linear media, leading, for example, to entangled photon beams.

Wolfgang Ketterle, a physicist at the Massachusetts Institute of Technology, US, who demonstrated the first atom laser in 1997, says that, although the Australian group’s experiment has not produced any new results or realized new concepts, it is imaginative and “uses a few nice tricks.”

Close explains that it is still “very early days” for applying atom lasers to measurement. “We have plans to produce high flux, tuneable, continuous and squeezed atom lasers that we think will be applicable to precision measurement in a variety of fields from surface science to metrology.”

Who cares if it’s not even wrong?

“So what would you do if string theory is wrong?” asks string theorist Moataz Emam of Clark University, US, in a paper posted on arXiv yesterday. It’s obvious, you might think. String theorists would briefly mourn the 40 years of misspent speculation and leave furtively through the back door, while anti-string theorists would celebrate in light of their vindication.

Not so, says Emam — string theory will continue to prosper, and might even become its own discipline independent of physics and mathematics.

Oddly, the reason Emam gives for this prediction is precisely the same reason why many physicists despise string theory. For example, in reducing the 10 dimensions of string theory to our familiar four, string theorists have to fashion a “landscape” of at least 10^500 solutions. Emam says that such a huge number of solutions — only one of which describes our universe — may make string theory unattractive, but that in studying them physicists are gaining “deep insights into how a physical theory generally works”:

So even if someone shows that the universe cannot be based on string theory, I suspect that people will continue to work on it…The theory would be studied by physicists and mathematicians who might no longer consider themselves either. They will continue to derive beautiful mathematical formulas and feed them to the mathematicians next door. They also might, every once in a while, point out interesting and important properties concerning the nature of a physical theory which might guide the physicists exploring the actual theory of everything over in the next building.

Peter Woit, author of the string-theory polemic Not Even Wrong, notes on his blog that physicists looking to pursue string theory for its beauty should “go and work in a maths department”:

The argument Emam is making reflects in somewhat extreme form a prevalent opinion among string theorists, that the failure of hopes for the theory, even if real, is not something that requires them to change what they are doing. This attitude is all too likely to lead to disaster.

Is this the youngest professor ever?

Sabur.jpg

According to the Guinness Book of World Records, and what appears to be most major media outlets, Alia Sabur (pictured above) has broken the record for the world’s youngest professor.

Sabur, 19, will begin teaching physics next month at the Department of Advanced Technology Fusion at Konkuk University, Korea. It will be just another entry on the teenager’s laden CV, which reveals that she received a bachelor’s degree at 14 and a master’s in materials science at 17.

Something might be awry here, though. There’s nothing wrong with the media adopting the American English definition of “professor” (i.e. any university teacher) — after all, Sabur was born in New York. But it appears that the previous record holder was Scottish physicist Colin Maclaurin, who was appointed professor of mathematics at the University of Aberdeen when he was a few months over 19 in 1717.

I might have to explain to our international readers that in the UK “professor” is a more distinguished title, reserved for heads of department and the like. (At least it has been as far back as any of us at Physics World can vouch for.) Sabur, I note, has yet to defend her PhD.

Does this mean the titles of Sabur and Maclaurin are being confused? Does Maclaurin, who is credited with the mathematical “Maclaurin series”, deserve to keep his accolade?

Of course, science was a considerably narrower discipline back in the 18th century, and achieving a professorship might have taken a little less time than it does today (it certainly wouldn’t have required a PhD). But Maclaurin can’t defend his honour, and offhand I don’t know enough about science in the early 1700s to cast a vote either way.

Do any of you have any thoughts? Feel free to comment below.

Prospect of US science debate wanes

Organizers of ScienceDebate 2008 are “disappointed” but “not surprised” that the three main US presidential candidates have ignored invitations to participate in a public debate on science that was scheduled to take place today.

Friday 2 May was one of three possible dates this month that had been put forward to the candidates after the original date — 18 April — had to be cancelled, also because of a poor response.

John McCain, the likely Republican nominee, declined the invitation for a debate at any time in early May. Hillary Clinton, one of the two remaining Democratic candidates, told the organizers that the invitation had gone to her “scheduling” department, while Barack Obama, the other Democratic candidate, acknowledged receipt of the invitation but did not confirm whether he would attend.

“I believe the candidates have left it so late that it is now virtually impossible that the debate will happen,” Matthew Chapman, president of ScienceDebate, told physicsworld.com. “For the candidates to show such disdain of the academic, science and technology community should be a matter of concern for every voter in [the US].”

Although McCain is the only candidate to have formally rejected the two remaining dates — 9 May and 16 May — it seems unlikely that either Clinton or Obama will accept at this late stage. The organizers are now pinning their hopes on new invitations that will be sent out shortly. “We are making progress in bringing this to the attention of voters, and if everyone continues to keep the pressure up, ultimately the candidates will have to respond,” says Chapman.

Growing support

ScienceDebate 2008 was formed towards the end of last year by a group of six people who wanted science policy to be debated by the presidential candidates in the run-up to the November election. Since then the organizers have gathered the signatures of some 37,000 supporters including university presidents, the representatives of scientific institutions and Nobel laureates.

After the original date in April was rejected, several signatories — including the Nobel laureates David Gross, John Mather, David Politzer and Leon Lederman — sent an independent letter urging the candidates to respond to the new May invitations. But this too appears to have gone unheeded.

Lawrence Krauss, co-organizer of Science Debate, says that resounding “gossip and innuendo” between the Clinton and Obama campaigns is to blame for the lack of interest in a serious public debate on science. “At this point McCain has no real reason to appear on stage with the Democrats, and the focus of the Democratic debates has devolved into issues of personality and gossip.”

Krauss now believes it is more likely that the debate will happen after the primaries are over in June. “I am only hoping that after the primary season ends that real issues may come to the fore, but this will depend in part on the media actually focusing in on these issues,” he says, adding: “We are in this for the long haul, and are hoping to set up an infrastructure that will impact not just upon this election but future elections.”

How lasers really work

Researchers in Switzerland and the US have developed a new theoretical framework for describing a wide range of lasers — including unconventional systems called diffusive random lasers, which had hitherto defied a complete explanation. Their theory could open the door for unconventional lasers to be used in a wider range of commercial applications such as document security, remote sensing, ultra-fast displays and diagnostic imaging.

A conventional laser comprises an optical gain medium, such as a gas, sandwiched between two mirrors in an optical cavity. The gain medium is “pumped” using an external light source or electric current so that most of its atoms or molecules are in higher-energy excited states.

When these states decay, they emit light that bounces back and forth in the cavity. This feedback stimulates the emission of similar light from other atoms in excited states. The result is a cavity filled with unidirectional light at the same wavelength, some of which is allowed to escape to form a laser beam.
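The pump–gain–loss balance described above can be caricatured with a minimal set of laser rate equations. This is a pedagogical sketch in dimensionless units, with placeholder numbers chosen only to show threshold behaviour, not the parameters of any real laser:

```python
# Minimal laser rate-equation sketch (dimensionless units):
#   dN/dt = R_p - N - N*s     (excited-state population N, pump rate R_p)
#   ds/dt = N*s - s           (photon number s: gain N*s versus cavity loss s)
# In these units the threshold pump rate is R_p = 1: below it the photon
# field dies away; above it the cavity fills with light.

def run_laser(pump, t_end=200.0, dt=0.01):
    """Crude forward-Euler integration of the rate equations."""
    n, s = 0.0, 1e-6   # tiny spontaneous-emission seed for the field
    t = 0.0
    while t < t_end:
        dn = pump - n - n * s
        ds = n * s - s
        n += dn * dt
        s += ds * dt
        t += dt
    return n, s

n_below, s_below = run_laser(pump=0.5)   # below threshold
n_above, s_above = run_laser(pump=2.0)   # above threshold

print(f"below threshold: photon number ~ {s_below:.1e}")
print(f"above threshold: photon number ~ {s_above:.2f}")
```

Below threshold the seed decays to essentially nothing; above threshold the photon number settles at a steady value set by how far the pump exceeds threshold, which is the feedback-driven lasing described in the text.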

Random multiple scattering

The random laser — which does not have an optical cavity — was born in the mid-1990s, when Nabil Lawandy of Brown University in the US fired a laser beam at a beaker filled with dye that is normally used as a gain medium in a conventional laser. Lawandy found that when tiny particles of metal were added to the beaker, the dye began to lase. The laser is random in the sense that the feedback for the photons generated in the dye is provided by the random multiple scattering of light from the particles.

But exactly why this was happening was a mystery — particularly because the addition of scattering particles to a conventional dye laser was known to reduce its performance. As physicists began making random lasers utilizing different lasing media they discovered that there were actually two different kinds of random lasers.

One is the localized random laser, in which light is believed to be confined to “hot-spots” (a situation closer to that of a conventional cavity laser). The other is the diffusive random laser, or DRL, in which the particles are not very reflective. In a DRL the light rapidly seeps out of the medium instead of being confined. This rapid escape of light is very different from what occurs in an optical cavity, leaving physicists wondering how a DRL could function as a laser.

Researchers have had some success in developing a theory of localized random lasers, but an understanding of DRLs had proved difficult to pin down. Now, however, Hakan Türeci of the Swiss Federal Institute of Technology in Zurich and Douglas Stone and colleagues at Yale University have come up with a new general theory of lasers that explains the operation of both types of random laser as well as more conventional lasers (Science).

Extreme leakiness

The team used their theory to create computer simulations that can model the extreme leakiness of a DRL and determine a number of key parameters of the laser including its output power and the wavelengths of the laser light emitted. Inputs to the simulation include the distribution of scattering particles in the medium and the pumping power.

Their model considers all possible ways that light can reflect back and forth in the medium (its resonant modes) and works out how these modes interact with each other to define the wavelengths of the light that is produced by the laser.

Using the simulation, the team was able to reproduce a key feature of DRLs that had eluded previous theories: the wavelengths of the laser light emitted are always the same, no matter how the DRL is pumped.

The team is currently using their theory to understand the stability of the output wavelengths. Türeci is also confident that the theory will be used to boost the performance of other unconventional lasers such as those based on chaotic resonators or photonic crystal-based cavities.

Random lasers have already been used by Nabil Lawandy to create a document security system whereby a liquid containing reflective particles is “painted” onto a piece of paper. When the material dries, it can be made to emit laser light by firing a laser at it. The precise wavelengths of the light emitted are defined by the exact locations of all the reflective particles in the dried paint — something that is different for each daub. The result is a unique signature that cannot be reproduced.

“Pine tree” nanowires do the Eshelby twist

Frost, icicles and snowflakes are all too familiar to the people of Wisconsin, especially on winter mornings. But the icy-looking structures in the image above, created by scientists at the University of Wisconsin–Madison, US, didn’t form in the cold — they are nanowire “pine trees” grown via chemical vapour deposition (CVD) of lead sulfide at temperatures verging on 650 °C.

CVD is often used to grow nanowires, but it usually requires a catalytic nanoparticle “seed” to get the structure started. Song Jin and colleagues have found that by modifying the flow of hydrogen gas used in CVD, the nanowires do not need a catalytic seed. Instead, growth can be driven by a type of defect known as a screw dislocation, which creates a spiral step for atoms to settle on. When the researchers follow up the CVD with another deposition technique, known as vapour–liquid–solid growth, horizontal nanowires grow outwards from the steps to form tree-like structures (Science doi: 10.1126/science.1157131).

Although Jin’s team have created branched nanowires before, this is the first time they have created structures with such intricacy. In fact, they think their nanowire pine trees are so intricate that they might be the best evidence yet for a theory of dislocations called the Eshelby twist. This theory, put forward 55 years ago by materials scientist John Eshelby, then at the University of Illinois at Urbana, proposes that the stress created by an axial screw dislocation generates a torque at either end of a thin rod, forcing the rod to twist.
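Eshelby’s result for a screw dislocation running along the axis of a thin cylindrical rod is a twist rate of b/(πR²) radians per unit length, where b is the Burgers vector of the dislocation and R the rod radius. A quick sketch with illustrative numbers follows; the values are typical nanowire scales, not measurements from Jin’s paper:

```python
import math

def eshelby_twist(burgers_b, radius):
    """Twist per unit length (rad/m) of a thin rod containing an axial
    screw dislocation (Eshelby, 1953): alpha = b / (pi * R**2)."""
    return burgers_b / (math.pi * radius ** 2)

# Illustrative values only: a ~0.6 nm Burgers vector (of the order of
# the PbS lattice constant) in a 50 nm-radius nanowire trunk.
alpha = eshelby_twist(burgers_b=0.6e-9, radius=50e-9)

deg_per_micron = math.degrees(alpha) * 1e-6
print(f"twist rate: {alpha:.2e} rad/m  (~{deg_per_micron:.1f} deg/um)")
```

The inverse-square dependence on radius is the key point: the twist is negligible in bulk crystals but becomes appreciable, a few degrees per micron here, once the rod shrinks to nanowire dimensions, which is why the branches can reveal it.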

Eshelby twists have been observed in nature before, but the horizontal branches in nanowire pine trees could serve as markers that highlight the extent of the twist. Jin’s team think Eshelby’s theory will help them to understand the layout of the branches of their nanowire pine trees, thereby providing the “clearest demonstration” of the theory’s validity. “Lying beneath these beautiful nanostructures is a beautiful and fundamental science that goes back to the heart of crystal growth theory,” says Jin.

Nickel-based compound joins a new class of superconductor

The high-Tc superconductivity community has been abuzz lately with the discovery of a growing number of iron-based materials that remain superconducting at temperatures as high as 55 K.

The first such material (fluorine-doped LaOFeAs) was reported by physicists in Japan earlier this year and has a transition temperature (Tc) of 26 K. Since then, researchers in China replaced the lanthanum (La) with samarium (Sm) and boosted Tc to 55 K. The Japanese team, meanwhile, put their material under pressure and increased Tc to 43 K.

Now, just as physicists are beginning to understand the mechanism behind these iron-based materials, scientists in Russia have come up with a new twist by replacing iron with nickel. They found that fluorine-doped LaONiBi is a superconductor with a Tc of 4 K.

While this Tc is much lower than that of the iron-based materials, the team reports that LaONiBi has structural and electronic properties very similar to those of its iron-based cousins. This suggests that with a bit of fiddling with doping levels and other properties, the Tc could be boosted considerably.

Once a physicist: Ian Leigh

Why did you originally choose to study physics?

I always found the sciences much more satisfying than the arts — it seemed far tidier and more rewarding to be able to reach a solution to a problem rather than to interpret situations where there was no right answer. I studied maths, physics and computer science at the University of Edinburgh but chose to specialize in physics because it seemed to be so fundamental and to have so much relevance to everyday life.

How much did you enjoy the subject?

I enjoyed the practical side of physics far more than the theoretical side, but even found the theory fascinating when it could be related to real-life situations. That said, I developed a deep admiration for those of my tutors — such as Peter Higgs — who appeared to be so comfortable with theoretical concepts I could never grasp.

What did you do next?

After I graduated in 1979 I took a job at the National Physical Laboratory (NPL), where I researched the development of standards for micro-indentation hardness. The problems encountered in getting accurate and repeatable results on a microscopic scale were enormous because there were so many sources of error in the equipment, the measuring system and the materials themselves. I submitted this research for my PhD thesis, which was examined by the late David Tabor, the undisputed master of this subject.

Why did you move away from physics?

From very early in my career I was given opportunities to develop administrative and policy skills as well as practical scientific expertise. I rapidly became a “jack of all trades” — more politely known as a “technological generalist” — and before long I found myself doing scientific administration and programme formulation rather than working at the lab bench. For example, in the late 1980s I established the LINK programme in nanotechnology, which provided UK government support for projects undertaken jointly by industrial and academic partners. This laid the foundations for some of the work that is taking place in this field today.

How did you end up working for Postwatch?

After many years of technological generalism, I eventually transferred to a purely administrative job in the Department of Trade and Industry (DTI), dealing with their regulatory programme and its impact on business. From there it was a short step to Postwatch, which was sponsored by the DTI.

What does your role there involve?

My job is to try and ensure that consumer needs are at the heart of the debate over the transformation of postal services in the UK and beyond. The postal industry worldwide is evolving very rapidly in response to changing methods of communication and customer behaviour, and traditional monopolists such as Royal Mail are having to adapt. They not only need to improve efficiency by cutting labour costs and introducing new technological solutions, but also understand the requirements of consumers and provide products that will encourage these customers to continue to value postal services. I also manage the organization’s programme of research on consumer needs. As the only country with a statutory body dedicated to understanding and articulating the needs of postal consumers, the UK is well placed to contribute to current international debates — such as the full liberalization of the European postal market.

How does your physics training help the way that you work?

I’m really good at fixing the photocopier when it jams or needs the toner replacing! More seriously, the disciplined way of thinking and analysis that a physicist develops, and the metrologist’s attention to detail, are useful in any field of work, as is the experience of presenting complex concepts and ideas to a sceptical audience. And although it’s not the result of cutting-edge scientific research, people are often surprised to learn how much technology is actually involved in modern postal operations. That said, the accuracy with which postal operators measure size and weight are not quite up to the standards of NPL!

Do you still keep up to date with any physics?

Apart from trying (and often failing!) to help my son with his sixth-form physics homework, I still take a keen interest in developments at NPL. I also read the science pages in my daily newspaper and look forward to receiving Physics World every month, which is now my most regular contact with the world of physics.

Passing of a legend

John Wheeler, who died last month at the age of 96, was one of the few true giants in physics (p7 print version only). Best known for his work on nuclear fission and general relativity — and for introducing the terms black hole, wormhole and quantum foam — Wheeler was one of the last surviving links with Niels Bohr and Albert Einstein, with whom he famously debated the meaning of quantum mechanics. Wheeler counted Richard Feynman among his PhD students and, like all the best physicists, never turned his back on the enthusiasm of youth. As he wrote in his 1998 autobiography Geons, Black Holes, and Quantum Foam, “Throughout my long career…it has been interaction with young minds that has been my greatest stimulus and my greatest reward.” Helping those minds to achieve great things is sure to be one of Wheeler’s lasting legacies.

The Bohr paradox

Niels Bohr

In his book Niels Bohr’s Times, the physicist Abraham Pais captures a paradox in his subject’s legacy by quoting three conflicting assessments. Pais cites Max Born, of the first generation of quantum physics, and Werner Heisenberg, of the second, as saying that Bohr had a greater influence on physics and physicists than any other scientist. Yet Pais also reports a distinguished younger colleague asking with puzzlement and scepticism “What did Bohr really do?”.

We can sympathize with that puzzlement. In history books, Bohr’s chief contribution to physics is usually said to be “the Bohr atom” — his application in 1912–13 of the still-recent quantum hypothesis to overcome instabilities in Rutherford’s “solar-system” model of the atom, in which electrons travelled in fixed orbits around a positively charged nucleus. But this brilliant intuitive leap, in which Bohr assembled a coherent model from several puzzling features and insufficient data, was soon superseded by more sophisticated models.

Bohr is also remembered for his intense conversations with some of the founders of quantum mechanics. These include Erwin Schrödinger, whom Bohr browbeat into a (temporary) retraction of his ideas; Heisenberg, who broke down in tears under Bohr’s relentless questioning; and Einstein, with whom Bohr debated for years. Bohr is remembered, too, for “complementarity” — an ordinary-language way of saying that quantum phenomena behave, apparently inconsistently, as waves or particles depending on how the instruments that measure them are set up, and that we need both concepts to capture the phenomena fully.

He has, though, been mocked for the supposed obscurity of his remarks on this subject and for extending the idea to psychology, anthropology, free will, love and justice. Bohr has also been wrongly blamed for mystical ideas incorrectly ascribed to the “Copenhagen interpretation” of quantum mechanics (a term Heisenberg coined), notably the role of the subjectivity of the observer and the collapse of the wave packet.

Now Bohr is back in focus. Publishing giant Elsevier is this month putting the massive 12-volume Niels Bohr Collected Works online for the first time and has also created a print index for the entire set, which contains Bohr’s extensive correspondence and writings about various aspects of physics and society. Most of volume 10 and much of volumes 6 and 7, for instance, are about complementarity. The time is therefore ripe for re-evaluating Bohr and clarifying the “Bohr paradox”: why is he both revered and underappreciated?

What Bohr did

Bohr practised physics as if he were on a quest. The grail was to fully express the quantum world in a framework of ordinary language and classical concepts. “[I]n the end,” as Michael Frayn has Bohr’s character say in the play Copenhagen, “we have to be able to explain it all to Margrethe” — his wife and amanuensis who serves as the onstage stand-in for the ordinary (i.e. classically thinking) person.

Many physicists, finding the quest irrelevant or impossible, were satisfied with partial explanations — and Heisenberg argued that the mathematics works: that’s enough! Bohr rejected such dodges, and rubbed physicists’ noses in what they did not understand or tried to hide. However, he did not have an answer himself — and he knew it — but had no reason to think one could not be found. His closest approximation was the doctrine of complementarity. While this provoked debate among physicists on the “meaning” of quantum mechanics, the doctrine — and discussion — soon all but vanished.

Why? The best explanation I have heard is advanced by the physicist John H Marburger, who is currently science advisor to US President George Bush. By 1930, Marburger points out, physicists had found a perfectly adequate way of representing classical concepts within the quantum framework using (infinite-dimensional) Hilbert space. Quantum systems, he says, “live” in Hilbert space, and the concepts of position and momentum, for instance, are associated with different sets of coordinate axes that do not line up with each other, thereby resulting in the situation captured in ordinary-language terms by complementarity.

“It’s a clear, logical and consistent way of framing the complementarity issue,” Marburger explained to me. “It clarifies how quantum phenomena are represented in alternative classical ‘pictures’, and it fits in beautifully with the rest of physics. The clarity of this scheme removes much of the mysticism surrounding complementarity. What happened was like a gestalt-switch, from a struggle to view microscopic nature from a classical point of view to an acceptance of the Hilbert-space picture, from which classical concepts emerged naturally. Bohr brokered that transition.”

Thus while Bohr used the notion of complementarity to say that quantum phenomena are both particles and waves — somewhat confusingly, and in ordinary-language terms — the notion of Hilbert space provided an alternative and much more precise framework in which to say that they are neither. Yet the language is abstract, and the closest outsiders can come to grasping it is Bohr’s awkward and imperfect notion.

The critical point

In the first generations of quantum theory, Bohr was revered for leading the quest to keep the field together within a single framework expressible in ordinary language and classical concepts. The Bohr paradox arises because the results of the quest were manifested not in citation indices linked with Bohr’s name, but in an increased integrity of thought that pervaded the entire field, which has proved hard for subsequent generations of physicists — and even historians — to appreciate.

If Bohr’s quest had a specific result, it was the idea of complementarity. But physicists soon found a more effective and satisfying way to represent quantum phenomena in a technical language using coordinate systems in Hilbert space. Scientists need to pursue possible and important paths even if they do not pan out. Bohr’s greatness was to recognize the importance of this quest, and to relentlessly carry it out with insight and passion. If it did not succeed, and if in the end he would not be able to explain it all to Margrethe — for whom it would have to remain esoteric — that was nature’s doing and not Bohr’s failing. It should not diminish our appreciation of his achievement.

Copyright © 2025 by IOP Publishing Ltd and individual contributors