
Blog life: Michael Nielsen

Blogger: Michael Nielsen
URL: www.michaelnielsen.org/blog
First post: August 2003

Who is the blog written by?

Michael Nielsen is a theoretical physicist who, up until June, was based at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. He is a pioneer of quantum computation, contributing significantly to the theory of entangled quantum states. Last year, however, he announced a major change in the focus of his research, which now centres on the development of new tools for scientific publication and collaboration.

What topics does the blog cover?

The blog strongly reflects Nielsen’s interest in scientific publishing, and in particular in how scientists can take advantage of the Internet. He is currently writing a book about how science will change over the next few years as a result of the Web, and many of his posts consist of notes and comments on articles or books he has read on this topic. Twice a week Nielsen also publishes a list of links to interesting websites, articles or posts on other blogs that he has come across; these links are much more general in scope. For example, he has recently linked to an article from the New York Times suggesting that bonuses actually reduce workplace productivity and to a post on the Galaxy Zoo blog about a mysterious blue blob spotted in one of the images analysed by the project.

Who is it aimed at?

Given its focus on the process of science, this blog will primarily be of interest to other scientists. The content is not strictly science-related, however, and the discussion is usually non-technical.

Why should I read it?

If you are at all interested in how science is done and communicated, and what part the Internet plays in this process, then this blog is for you. Nielsen’s twice-weekly collections of links provide a great round-up of interesting Web articles related to these topics. And his other posts, although infrequent, are always insightful and well written.

How often is it updated?

Nielsen puts up his twice-weekly collections of interesting links every Friday and Monday, with other posts interspersed at irregular intervals.

Can you give me a sample quote?

Harvard’s amazing Berkman Center for Internet and Society had their tenth anniversary celebration last week. During one of the talks, the founder of the Berkman Center, Charles Nesson, asked the following question about the relationship between universities and Wikipedia: “Wikipedia is the instantiation of the building of the knowledge commons. Why didn’t it come out of a university?” I think it’s an important question. It bothers me a lot that Wikipedia didn’t come out of a university. Academics make a big song and dance about their role in advancing human knowledge, yet they’ve played only a bit part in developing one of the most important tools for the advancement of human knowledge built in my lifetime.

Fame or footnote?

In 1968 the late sociologist Robert Merton published a study in the journal Science in which he addressed how various “psychosocial” factors can affect the attribution of merit in science (Science 159 56). Drawing on insights provided by interviews with Nobel laureates, Merton realized that, contrary to popular perception, good things do not always come to those who scream Eureka! “They [Nobel-prize winners] repeatedly observe that eminent scientists get disproportionately great credit for their contributions to science,” he wrote, “while relatively unknown scientists tend to get disproportionately little credit for comparable contributions”.

Merton singled out the Nobel prize for perpetuating this scientific form of “the rich get richer, the poor get poorer”. Given that each prize cannot be awarded to more than three people, according to Alfred Nobel’s original terms, many worthwhile contributors miss out — not only in terms of prize money, but also in the history books. As Merton pointed out, the prize is blind to the fact that many of those who are snubbed “have contributed as much to the advancement of science as some of the recipients, or more”.

A stark example of this concerns the tangled story of the three physicists who discovered superfluidity: Peter Kapitza, Jack Allen and Don Misener (see “Superfluidity: three people, two papers, one prize”). Working in Moscow in 1938, Kapitza found that the viscosity of helium-4 dropped almost to zero when cooled to below 2.18 K — a clear sign that this material had become a superfluid. Kapitza, who was Russian-born, had originally made his name after moving to Cambridge University in the UK in 1921, but he was then forced to stay in the Soviet Union after being detained by the authorities during a regular trip back home in 1934.

Records reveal, however, that Allen and Misener — two Canadian researchers who had moved to Cambridge after Kapitza’s enforced detainment — also discovered superfluidity at the same time. But four decades later, it was Kapitza who was awarded one half of the 1978 Nobel Prize for Physics; Allen and Misener remained empty-handed. While physicists gathering at this month’s 25th International Conference on Low-Temperature Physics in Amsterdam are likely to be aware of the Canadian pair’s contributions, it is Kapitza who is best known to the wider world.

The truth is that a Nobel prize, rightly or wrongly, confers a special status on its recipients. Just ask Douglas Osheroff, who shared the 1996 prize with David Lee and Robert Richardson for discovering superfluidity in helium-3. As Osheroff points out in an interview given during this year’s annual meeting of Nobel laureates in Lindau, Germany (see “Inspired thinking”), winning the prize gives him a platform to talk about issues — like climate change — that lie well beyond his research expertise.

That is not to say that Osheroff’s discovery of superfluidity in helium-3 was not worth a Nobel prize. It may not yet have any obvious practical applications, but then neither did the discovery 100 years ago by the Dutch physicist and future Nobel laureate Heike Kamerlingh Onnes that helium could be liquefied. His lab techniques led to him discovering superconductivity and kick-started the global cryogenics industry. It is Onnes, Osheroff and Kapitza who will be remembered, while the likes of Allen and Misener may well end up, perhaps unfairly, as mere footnotes to history.

Inspired thinking

Before 1996 Douglas Osheroff did not know much about the picturesque island of Lindau on Lake Constance in southern Germany. But that year he was awarded the Nobel Prize for Physics with David Lee and Robert Richardson for the discovery that helium-3 undergoes a dramatic change to a strange frictionless “superfluid” when cooled to below about 2 mK.

The following year, Osheroff was invited to attend the 47th meeting of Nobel laureates in Lindau — an annual event at which 20 or 30 laureates hold discussions with several hundred young researchers from around the world to help inspire the next generation of scientists. Osheroff has now attended this elite gathering five times, most recently last month. This year’s event was devoted to physics and was attended by 19 Nobel-prize winners, including particle physicist Carlo Rubbia, cosmologist George Smoot and solid-state physicist Peter Grünberg, who shared last year’s prize, along with five laureates from other disciplines.

Osheroff, who is based at Stanford University in the US, takes his role as a mentor very seriously. “Even before I won the prize, I spent a lot of time giving talks to students,” he told Physics World shortly after emerging from one of the meeting’s press conferences. But after receiving his Nobel prize, he soon found himself more in demand than ever. Osheroff estimates that he started flying about 120,000 miles annually after winning his prize; now, he says, he is clocking up more like 150,000 miles a year.

Like many Nobel laureates, Osheroff has found that winning the prize offers a platform from which to tackle issues that lie beyond the research lab. “As a laureate, I have an ability to say something that will have more impact than it probably deserves,” he says. Osheroff has taken a particular interest in global warming, having spoken out on the topic during the 2004 US presidential election, although he admits it is ironic that his extensive travelling has given him a pretty large carbon footprint.

Career advice

Osheroff spoke about climate change when he was last in Lindau in 2005 and this year he took part in a panel discussion about the subject. But he attended the event first and foremost to meet the 567 young researchers who had been selected from about 21,000 applicants from 66 nations (15,000 of whom were from India and China alone). “It’s mostly a matter of talking to students, getting them stimulated and giving them as much good advice as you can about careers in physics,” he says when asked what he hopes to achieve by attending.

In previous years, Osheroff has talked about his role in the panel set up to investigate the 2003 space shuttle Columbia disaster and his discovery of superfluidity in helium-3. Superfluids are quantum-mechanical states of matter that have zero viscosity and can therefore do strange things like flow uphill. This year, however, he discussed strategies that he thinks are key to making Nobel-prize-calibre discoveries in physics. “By their very nature, those discoveries that most change the way we think about nature cannot be anticipated, so I take a historical approach and review what made certain discoveries possible.”

He starts with the example of the Dutch physicist Heike Kamerlingh Onnes, who won the 1913 Nobel prize for his investigations into the properties of matter at low temperatures. Having been beaten by the Scottish physicist James Dewar in the race to liquefy hydrogen gas, Onnes managed to liquefy helium by chilling it to below about 4.2 K. This achievement, which took place 100 years ago last month, was a feat in itself, but Onnes then tried to cool the helium further to see if it would solidify.

After two whole years he eventually reached the unprecedented temperature of 1.04 K, but the helium refused to budge from its liquid state. “This is no wonder,” says Osheroff. “I personally have cooled helium to 0.0001 K and it stays happily as a liquid.” Instead of quitting, Osheroff explains, Onnes looked around for other interesting questions that could be tackled using the cryostat he had built.

With physicists at the time debating how the electrical conductivity of metals would change when cooled to near absolute zero, Onnes secured a pure sample of mercury and told one of his students to find out what would happen in this regime. They discovered by accident that the electrical resistance of the mercury almost vanished at about 4 K, making this the first observation of superconductivity — the flow of electric current without resistance. The moral, says Osheroff, is that a failure in experimental science may be an invitation to try something new that ends up reaping big rewards.

But there is more to the story. Although Onnes had probably cooled his cryostat below 2.17 K hundreds of times during his lifetime, it was not until 1938 that the Soviet physicist Peter Kapitza in Moscow, along with the Canadian researchers Jack Allen and Donald Misener at Cambridge in the UK, discovered that below this temperature the helium becomes a superfluid. Kapitza was awarded one half of the 1978 Nobel prize for the finding, although it is still not clear why Allen and Misener were not recognized (see “Superfluidity: three people, two papers, one prize”).

As Osheroff points out, Onnes had even remarked that at about 2 K the liquid helium in his apparatus ceased to boil, but he never went back to try to understand what the origin of this behaviour was. So the lesson this time, according to Osheroff, is to always be aware of unexplained behaviour. “Nature usually doesn’t knock with a loud bang,” he says. “She whispers very softly.”

Community effort

Based on his forays into the history of physics, Osheroff does not think there is much point in trying to direct fundamental research towards specific applications — as governments often like to try to do. He cites the example of nuclear magnetic resonance (NMR), which underpins the now widespread medical technique of magnetic resonance imaging (MRI). Felix Bloch and Edward Purcell developed NMR independently in 1946 in order to study the magnetic properties of atomic nuclei, for which they shared the Nobel Prize for Physics six years later.

NMR has since won a further three Nobel prizes, not only in physics but also in chemistry and in physiology or medicine, and Osheroff thinks it will quite possibly win a fifth for functional MRI, which allows the activity in the brain to be monitored in real time. “If you say ‘Oh my god, we need some way of monitoring brain function non-invasively’ and set out to do so without knowing anything about NMR, it just would never have happened,” he says.

Osheroff’s third message for young researchers echoes a famous quote attributed to Newton: if I can see further than anyone else, it is because I have stood on the shoulders of giants. He cautions that it is wrong to think of advances as being made by brilliant individuals in isolation and that, in reality, progress almost invariably results from the scientific community asking questions, developing new technologies and sharing the results. “I show a picture of my apparatus and put the names of scientists whose contributions were essential in making the discovery I made, such as Purcell. There are 14 names up there, but I could easily have put up twice that number.”

Osheroff also uses the discovery of the cosmic background radiation by Arno Penzias and Robert Wilson, who shared the other half of the 1978 Nobel prize, to highlight the importance of fully understanding your experimental equipment. The pair only realized the origin of the very weak signal being picked up by their giant horn-shaped receiver when they had ruled out every other explanation — including the possibility that the noise was from pigeon droppings.

Osheroff offers one final piece of advice for students wishing to make a breakthrough in experimental physics: avoid too many commitments, particularly those that require you to be out of the lab at fixed times. “You’ve got to sometimes back off from what you’re doing to make you a better researcher, but language lessons or dance classes are not advisable,” he says. “Research does not adhere to a schedule.”

In person

Born: Aberdeen, Washington, US, 1945
Education: California Institute of Technology (undergraduate); Cornell University (graduate)
Career: AT&T Bell Laboratories (1972–1987); professor at Stanford University (1987–present)
Family: married, no children
Other interests: photography

www.lindau-nobel.de

Once a physicist: Al Powell

Why did you choose to study physics?

I was quite good at physics at school and, to be honest, I didn’t know what else to do. I applied to the University of Leeds because they offered a standard physics degree with astrophysics added on, and I was interested in doing some astronomy. Leeds was also a good place to be based for climbing and running.

Did you enjoy the course?

I was a fairly typical undergraduate sportsman in that I probably spent as much time doing sport as I did studying. When I got to the final year and I could choose exactly what options I wanted to do, then I got into the course a lot more. I remember doing a lot of solar–terrestrial physics.

Did you meet many other physicists with similar sporting interests?

There were several postgraduates in the physics department who were keen climbers. Also, one of my lecturers at Leeds encouraged me to enter The Fellsman, a 67-mile race across high moorland in the Yorkshire Dales. He gave me the entry form and said “You’ll enjoy this”. He was right.

You weren’t tempted to do a PhD?

No, not really. To achieve at a high level in climbing you really have to commit yourself. I did short-term contract work, things that I could drop easily to go on an expedition. You can’t do that if you are a postgraduate because you are still in a relatively structured environment. You have to work hard year-round and can’t just disappear for a couple of months at a time.

You then went into teaching. How did that work out?

I taught physics and general science to 11–18 year olds for five or six years. The department I worked in was very good at providing specialist science teaching, even at the lower end of the school. They generally always had a physicist teaching physics, a biologist teaching biology, and so on. Children might have three or four different science teachers throughout the year on rotation. I enjoyed that way of working. It meant I could actually concentrate on teaching what I knew and what I was interested in.

Why did you stop teaching?

I am probably the only guy who gave up a teaching job because I wanted more holidays! I just found that I wanted to spend more and more time in the mountains.

Does your physics training help you at all in your current role?

As a mountain guide, you are always thinking ahead. You have to analyse situations very carefully, be observant and extrapolate information to make decisions. That’s perhaps something you get from studying any science to degree level. You also need a good understanding about the way snow behaves, for example how it evolves throughout the day, so that you can ensure that your clients have a good day in the mountains and that everyone stays safe.

Postmodernism, politics and religion

Alan Sokal really likes footnotes, which may have made him uniquely qualified as a hoaxer of “science studies”. The original hoax, a purposely and wonderfully nonsensical paper about the epistemology of quantum gravity, appeared in 1996 in the cultural-studies journal Social Text, with the enthusiastic endorsement of its editorial board of eminent postmodernists. There were 107 footnotes.

The first chapter of Sokal’s new book Beyond the Hoax revisits the original Social Text paper, adding 142 “annotations” in which Sokal explains at length much of the complex fabric of in-jokes and bouleversements that made it so exquisitely wacky to anyone with even a modest knowledge of physics. The remainder of the first part of the book contains four well-footnoted essays on his reasons for undertaking this exercise in foolery, and on the various responses he has received.

Sokal maintains that, at the time of his 1996 paper, a serious assault against rationality by postmodernists was under way, led by a relatively small number of left-leaning academics in humanities departments. He felt that this would be self-defeating for the Left (with which he identifies, describing it as “the political current that denounces the injustices and inequalities of capitalist society and that seeks more egalitarian and democratic social and economic arrangements”), while opening up great opportunities for the Right to employ obfuscatory tactics. Indeed, as Chris Mooney’s recent book The Republican War on Science amply testifies, the “faith-based” administration of George W Bush has done its best to obscure a variety of “inconvenient” scientific truths, although Sokal has found little evidence that it has borrowed this obfuscation from the postmodern relativists and deconstructionists in the leftist fringes of academia.

The second part of the book, co-authored with his collaborator the Belgian physicist Jean Bricmont, is a serious philosophical discussion of epistemology (the theory of knowledge). Its first chapter condemns the cognitive relativism of the postmodernists — the idea that a fact (for instance the Big Bang) may really be true for person A but not for person B — while the second chapter makes a trial run at a reasonable epistemology for science. I was delighted to find as part of their vision “the renormalization group view of the world”, according to which one sees every level of the hierarchical structure of science as an “effective theory” valid at a particular scale and for particular kinds of phenomena but never in contradiction with lower-level laws. This leads the authors to emphasize emergence as well as reductionism. I have seen few better expositions of how thoughtful theoretical scientists actually build up their picture of reality.

On the other hand, the Sokal/Bricmont view of science as a whole may be a bit idealized, and is perhaps best suited to relatively clean fields like relativistic field theory. In the murkier and more controversial field of materials science, for example, reality is not so cleanly revealed, particularly when it contradicts the personal interests of the investigators.

Part three of the book encompasses more general subjects. For example, one very long chapter explores the close relationships between pseudoscience and the postmodernists. It is easy enough to find ignominious stories about pseudoscience; some striking and important ones that Sokal picked, for example, are the widespread teaching of “therapeutic touch” (a practice with its roots in theosophy, and not involving actual touch) in many estimable schools of nursing, and, going farther afield, the close ties between conservative Hindu nationalism and the teaching of Vedic astrology and folk medicine in state schools in India.

Whether or not postmodernism has any causal relation to pseudoscience, proponents of such pseudosciences, when attacked, tend to defend themselves by appealing to the postmodernist philosophers. And the postmodernists have been known to be supportive of such views — often, for example, favouring Vedic myths or tribal creation stories over the verifiable truths of modern science.

Finally, Sokal enters into the much-discussed intertwined fields of religion, politics and ethics. His essay takes the form of a long, discursive review of two recent books on religion: Sam Harris’ The End of Faith and Michael Lerner’s Spirit Matters. He promises a critical review, but I found him to be rather more critical of Lerner than of Harris. He supports Harris in considering Stephen Jay Gould’s description of science and religion as “non-overlapping magisteria” to be a cop-out.

Sokal is an implacable enemy of fuzzy-mindedness, and makes the point that religion cannot avoid inventing factual but unlikely claims about actual events. Even if one abandons young-Earth creationism or reincarnation, or those fascinating inventions heaven and hell, the idea that there is an actual personal God listening to one’s prayers and responding is not that far from believing that He is talking to me in Morse code via the raindrops tapping on my windowsill (which Harris suggests would be considered a sign of mental illness). Lerner’s book addresses the conundrum of religion as “spirituality”, incorporating, for instance, the sense of wonder that we scientists feel at the marvels that are revealed to us (I think a more interesting book in this vein is Stuart Kauffman’s Reinventing the Sacred). Sokal, though he lets Lerner get away with dubious claims about studies of the efficacy of prayer, rather dismisses this view.

He then moves on into the political. He asks, if we want voters to actually vote for their true economic and social interests, should we take away from them what are correctly known as the “consolations of religion”? Do we not then risk their perceiving the political Left as condescending and elitist? How do we attempt to break through misperceptions about the true values of the conservative elite? This is not a problem to which anyone, Sokal included, has a good answer.

I too cherish long explanatory footnotes crammed with extra ideas. But even skipping all the footnotes (which would be a great loss) this book is not a page-turner. The author is not one to drop a line of argument just because it wanders “off message”. Nonetheless, Sokal writes lucidly; and one must not forget that his main targets — the postmodern theorists in English, philosophy, sociology or “science studies” departments — are still doing well in even the most respected of our universities, and command enough respect for election into such august bodies as the American Academy of Arts and Sciences (I count two of Sokal’s prime targets in as many years). They aim to persuade the elite among our students that scientific rationality is just the invention of a few white males eager to hang on to positions of power, whereas Sokal (and he may be right) sees such rationality as our main hope.

Secure your future

I first began to contemplate moving from physics to the world of policy at the American Physical Society’s March Meeting in 2004. At the time I was a PhD student at Cambridge University in the UK and was at the meeting in Montreal to give a short talk on my condensed-matter research. I have always had a strong interest in international affairs, and proliferation issues in particular, so I was pleased — if slightly surprised — to see a session on “non-proliferation and counter-proliferation” at the conference. Four years later, I can recall very little of that session’s content, but I do have two abiding impressions: first, that there is a useful role for physicists to play in the policy process; and second, that it is even possible to get paid for doing it.

After my PhD, a brief spell as a physics postdoc, endless speculative applications and three miserable months of unemployment, in January 2006 I started work as a science fellow at a small London-based nongovernmental organization (NGO) called VERTIC (the Verification Research, Training and Information Centre). There I worked on projects analysing technical measures to make the International Atomic Energy Agency (IAEA) more effective at preventing nuclear proliferation. In November 2006 I started a part-time postdoc at the Centre for Science and Security Studies (otherwise known as CS3), part of the Department of War Studies at King’s College London, before taking up a full-time lectureship there last year.

The core of my work involves using science to help inform policy debates on international security, particularly nuclear non-proliferation, arms control and disarmament. Most people assume I must therefore be a nuclear physicist. In fact, it is a job in which you need to be a jack of all trades; everything from mechanics (for modelling the debris cloud produced from a blown-up satellite, for instance) to solid-state physics (for understanding the effectiveness of radiation detectors, say) is relevant.

Nuclear uncertainty

One project I tackled last year was an independent analysis of how effective the IAEA is in its efforts to safeguard Iran’s nuclear programme (the IAEA keeps its own analysis of those efforts confidential). Part of the IAEA’s job is to track fissile material to make sure that none is diverted for nefarious purposes. As all measurements have an associated error, there is always some uncertainty in the IAEA’s findings. Working out that uncertainty involves scouring the Internet to find details of the measurement techniques used by the IAEA and then creating a model to combine the errors — conceptually identical to (if somewhat more complicated than) what I did 10 years ago in first-year undergraduate practicals!
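The arithmetic behind such a model is, at bottom, standard propagation of independent uncertainties, combined in quadrature. A minimal Python sketch of the idea, applied to the safeguards concept of “material unaccounted for” (MUF), might look like the following; the stream names, quantities and error figures are entirely hypothetical, not real IAEA data:

import math

# Hypothetical material balance for a safeguarded facility, in kg of uranium.
# Each entry: (stream, amount, relative measurement uncertainty).
# Negative amounts denote material leaving the balance.
measurements = [
    ("feed", 1000.0, 0.002),
    ("product", -950.0, 0.003),
    ("waste", -48.0, 0.05),   # waste streams are typically measured less precisely
]

# Material unaccounted for (MUF) is the net sum of the balance terms...
muf = sum(amount for _, amount, _ in measurements)

# ...and, assuming independent errors, its uncertainty is the quadrature
# sum of the absolute uncertainties of the individual terms.
sigma_muf = math.sqrt(sum((amount * rel) ** 2 for _, amount, rel in measurements))

print(f"MUF = {muf:.1f} kg, sigma = {sigma_muf:.1f} kg")
# Here MUF = 2.0 kg with sigma of about 4.2 kg, so the imbalance
# sits comfortably within measurement error.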

Just as an understanding of science is vital to my work, so is an understanding of policy. It is important to stay well versed in current affairs, to be able to identify the key policy questions and to demonstrate how and why scientific considerations are relevant to them. These are not innate skills, nor are they ones that most physicists pick up during their education. A good place to start is to read foreign-policy journals like Foreign Affairs, The Nonproliferation Review and Science and Global Security. Attending talks and discussions is also important. In my case, years spent taking part in and coaching debating while at university certainly helped.

Currently I am finishing a publication on the abolition of nuclear weapons. I have been working with a politics expert to analyse the technical and political challenges to complete nuclear disarmament and to suggest what countries can do to surmount them. Again, much of this work has focused on verification: the process of determining whether a state is complying with an agreement, in this case an agreement to abolish nuclear weapons. It is important to recognize that even this issue is not purely technical. While questions such as “how effective can verification be?” might be technical, questions like “how effective does verification need to be?” certainly are not.

The life span of most projects in the policy world is much shorter than in academic physics (my first contract was a terrifyingly brief three months). Moreover, a much greater emphasis is placed on disseminating the results of research. If your aim is to influence a policy debate, it is not enough just to publish your results in a journal. You also need to sit down with officials from governments and international organizations and to get them interested in what you have done. A willingness to talk to journalists is also important. The public level of knowledge about proliferation issues is generally low, so events like North Korea’s nuclear weapons test in October 2006 or the Israeli bombing of a suspected nuclear reactor in Syria in September 2007 present an opportunity to raise awareness and highlight your research to a wider audience.

Intellectual breadth

In addition to research (and the administration that is the curse of all academics), CS3 also has an active teaching programme. At its heart is an MA in Science and Security, now in its third year, which is the only one of its kind in the world. This course is characterized by its intellectual breadth and attracts students from both the sciences and the social sciences (last year just over half of the students had technical degrees in disciplines such as physics, astrophysics and engineering). Moreover, a number of the students have backgrounds in, for example, the military, government or the defence industry, which makes for particularly vibrant and interesting class discussions.

In October we will be launching a new Masters course in Nonproliferation and International Security, and we have two studentships of £8500 to offer. Our students have an excellent record of going on to get jobs in the field, ranging from government positions to science journalism and the think-tank sector.

Physicists have a long history of involvement in arms control and disarmament. A number of those involved with the Manhattan Project (including J Robert Oppenheimer himself) felt a special responsibility to advise on the control of atomic energy after the Second World War and believed themselves uniquely qualified to do so. Today, even more so, technological developments help shape the international security environment; it is vital that physicists become involved in helping to understand and manage those developments.

Plasmons put laser light on the straight and narrow

Researchers in the US and Japan have devised a simple way to generate a nearly parallel beam of light from a semiconductor laser — without the need for bulky and expensive lenses. Instead, a patterned metallic film is used to absorb divergent light from the laser and re-emit it in one direction. The team says that the technique — which relies on collective electronic excitations called “plasmons” — could make semiconductor lasers cheaper, smaller and more efficient.

Semiconductor lasers, which emit light when electrons combine with holes at a p-n junction, have been around for almost 40 years and can be found in everything from the sophisticated optical-communications systems that power the Internet to five-dollar laser pointers. Although semiconductor lasers are very small (some are micrometer-sized) and can be integrated within electronic devices, the light-emitting region of the laser is about the same size as the wavelength of the laser light. This means that the light emitted from the laser is diffracted, often by as much as several tens of degrees.
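A back-of-the-envelope estimate (mine, not the paper’s) shows why the spread is so large. Treating the emitting region as the waist of a Gaussian beam of radius w_0, the far-field divergence half-angle is

\theta \approx \frac{\lambda}{\pi w_0}

so when w_0 is comparable to the wavelength \lambda, \theta is of order 1/\pi radians, or roughly 18 degrees: a beam that fans out over several tens of degrees once both sides of the lobe are counted.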

As most laser applications require a collimated beam with a much smaller divergence, the light is usually collimated by placing a high-quality lens with a large collection angle (or numerical aperture) at the laser output. Unfortunately, that makes semiconductor lasers more expensive and relatively bulky. Now, however, Nanfang Yu, Federico Capasso and colleagues at Harvard University in the US, along with researchers at the optical-equipment maker Hamamatsu Photonics, have collimated laser light by placing a thin, patterned metal film on the output facet of a semiconductor laser (Nature Photonics doi:10.1038/nphoton.2008.152).

Parallel grooves

The film contains an aperture, a slit about 2µm wide, adjacent to a series of parallel grooves that are 0.8µm wide, 1.5µm deep and separated by 8.9µm. The collimator was fabricated on the surface of a semiconductor laser that emits infrared light at a wavelength of 9.9µm.

Some of the laser light travelling through the aperture is absorbed, creating surface plasmons — collective excitations involving large numbers of electrons on the surface of the metal film. As the plasmons propagate across the film, they are scattered by the grooves, which results in the plasmons being converted back into light with the same wavelength as the laser.

The sizes of the aperture and grooves are chosen so that the re-emitted light undergoes constructive interference, forming a beam that propagates outward, perpendicular to the laser facet.
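The underlying condition is essentially that of a grating coupler. For the contributions scattered from successive grooves to add in phase along the facet normal, the groove period \Lambda must match the surface-plasmon wavelength,

\Lambda \approx \lambda_{\mathrm{SPP}} = \frac{\lambda_0}{n_{\mathrm{eff}}}

where \lambda_0 is the free-space wavelength and n_{\mathrm{eff}} is the effective refractive index of the plasmon mode. With \lambda_0 = 9.9µm, the 8.9µm groove spacing quoted above corresponds to an n_{\mathrm{eff}} of about 1.1, a plausible value for a plasmon on a metal film. This is a textbook estimate offered for orientation, not the authors’ own design calculation.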

The divergence of the plasmon-collimated laser was about 2.4°, whereas the divergence of the same laser before being fitted with the collimator was about 63°.

New dimensions

Although this collimator only works in one dimension, the team is now working on a 2D design involving a circular aperture surrounded by concentric rings of grooves. “Preliminary results in our group have shown that this scheme works very well: a divergence of a few degrees in the horizontal and vertical planes has been achieved in a quantum cascade laser,” said Yu.

So far, the team has made their collimators using two different methods. One technique involves depositing a 1.7µm layer of gold on the flat surface of the laser facet and then using an ion beam to cut the grooves into the metal. The second involves using the ion beam to cut grooves into the facet of the laser itself, which is then coated with a 400 nm-thick layer of metal. Both techniques resulted in similar improvements in collimation.

Ion-beam milling, however, is an expensive and time-consuming technique that is not suitable for mass production. The team is now looking to develop new ways of making the collimators that are more in line with commercial semiconductor-fabrication techniques.

The team believes that their collimator design — which is the subject of a patent application — could someday be used to create compact, low-cost semiconductor lasers for use in optical sensors that use beams of laser light to detect chemicals in the environment, for example. The technology could also lead to lower-cost optoelectronic devices. “A very narrow spread of the laser beam can greatly reduce the complexity and cost of optical systems,” said team member Hirofumi Kan of Hamamatsu Photonics.

Simulation points to lightweight first stars

Physicists in Japan and the US have performed a simulation of structure formation in the early universe that suggests the first baby stars or “protostars” were hundreds of times less massive than previously thought.

Led by Naoki Yoshida of Nagoya University, the researchers say that their simulation provides the clearest picture yet of how tiny fluctuations in the early universe grew into protostars, and could be a stepping stone to explaining how the structures evolved into the stars and galaxies we see today.

“It is exciting because we have been unable to look at the structure of a protostar and its surroundings at the time a star is about to be born,” astrophysicist Ling Gao of Durham University in the UK, who was not involved in the research, told physicsworld.com. “This gives a significant input to understanding the formation of the first-generation stars.”

Cosmic dark ages

Although popular representations of the Big Bang usually involve the sky becoming awash with light almost instantaneously, in reality the first few hundred million years of the universe were starless. From measurements of the cosmic background radiation we know that electrons and nuclei started combining into atoms, mostly of hydrogen, after about 380,000 years. Over the next 300 million years these hydrogen atoms accumulated into vast gas clouds.

Eventually, tiny density fluctuations in these clouds began to draw in matter and grow. Although definitions vary as to when enough matter has accreted to form a protostar, Yoshida says the crucial stage comes when the gravitational forces driving the accretion balance the matter’s thermal pressure.
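That balance is, in essence, the classic Jeans criterion: a cloud of gas at temperature T and density \rho collapses under its own gravity once its mass exceeds the Jeans mass, which scales as

M_J \propto \frac{T^{3/2}}{\sqrt{\rho}}

This schematic scaling, rather than the simulation’s detailed criterion, is the key to what follows: the hotter the gas remains, the more mass it must gather before gravity can overcome thermal pressure.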

Past simulations of protostars have predicted that the mass of the first protostars would have been as much as 500 times that of our Sun. This is because simple elements like hydrogen and helium retain their heat for a long time, which produces high thermal pressures. To balance these pressures, the protostars would need high gravitational forces, and hence a lot of mass.

However, Yoshida’s team claim that these previous simulations have not fully taken into account the process of “radiative transfer”, which redistributes energy in the protostar. By including radiative transfer in their simulation, the researchers have found that protostars can start out just 1% as massive as our Sun (Science 321 669) — although they would quickly grow into stars a hundred times as big.

Rosetta stone or stepping stone?

Rachid Ouyed, an astrophysicist at the University of Calgary, points out that the nature of the simulation meant it had to be terminated after the formation of the protostar, leaving the fate of the structure uncertain. “This is to be taken seriously, since in past work it was shown that the [process] they see tended to quickly dissipate the so-called proto-stars they found.”

Nevertheless, Yoshida’s team would now like to extend their simulation into later periods of star formation. In the next period, gravity would overwhelm thermal pressure at the centre, leading to compression and temperatures of around a million kelvin. At this point, deuterium would begin to fuse — but Yoshida and his team have not yet found a way to model nuclear fusion reactions in three dimensions.

Describing this as his “life’s work”, Yoshida predicts that we’ll see a full simulation of complete star formation within a decade and galaxy formation within 20 years.

World's largest ever geological mapping project launched today

By James Dacey

Vatnajökull, Iceland — Europe’s largest glacier (Credit: OneGeology)

OneGeology has been dubbed ‘the biggest mapping project ever’ and its basic premise is to create the first ever digital geological map of the world and make it universally available via a web portal.

I have a background in geophysics, and I am on a Science Communication course at the University of Bath, so I was naturally intrigued to see how this project will improve how scientists interact with each other and how they communicate with society as a whole.

In case you are worried that I have hi-jacked this blog, I am working as an intern at Physics World. If the lack of dark matter hasn’t scared you away then I’ll tell you a little bit more…


Room-temperature ice – déjà vu?

By Hamish Johnston

In May 2006 we ran the news story Ice freezes at room temperature, which explains how K B Jinesh and Joost Frenken came to the startling conclusion that water will form ice at room temperature if it is placed between a tiny tungsten tip and a graphite surface.

So it was with a great sense of déjà vu that I read a new paper in Physical Review Letters by Jinesh and Frenken called Experimental Evidence for Ice Formation at Room Temperature.

I emailed Frenken (who is at Leiden University in the Netherlands) to ask what was new about the PRL work.

He told me that the original work (also published in PRL) was concerned with the forces between the tip and the ice on the surface. Essentially, the water under the tip had the same elastic constant as ice, and the same maximum shear stress as ice — allowing the physicists to conclude it was ice.

In their latest work, they concentrate on the structure of the water itself — which (you guessed it) is very similar to ice.
