Pristine relics of the Big Bang spotted

For the first time, astronomers have discovered two distant clouds of gas that seem to be pure relics from the Big Bang. Neither cloud contains any detectable elements forged by stars; instead, each consists only of the light elements that arose in the Big Bang some 14 billion years ago. Furthermore, the relatively high abundance of deuterium seen in one of the clouds agrees with predictions of Big Bang theory.

Just after the Big Bang, nuclear reactions created the three lightest elements – hydrogen, helium, and a tiny bit of lithium. Stars then converted some of this material into the heavy elements such as carbon and oxygen that pepper the cosmos today.

But no-one has ever seen a star or gas cloud made solely of these three Big Bang elements. Instead, all known stars and gas clouds harbour at least some “metals”, the term astronomers use to describe any element, even carbon and oxygen, that is heavier than helium.

Minutes after the Big Bang

Now, Michele Fumagalli and Xavier Prochaska of the University of California, Santa Cruz and John O’Meara of Saint Michael’s College in Vermont have found two pristine gas clouds. “Their chemical composition is unusual,” says Fumagalli. “This gas is of primordial composition, as it was produced during the first few minutes after the Big Bang.”

One gas cloud resides in the constellation Leo, the other in Ursa Major. The Leo cloud has a redshift – a measure of its distance – of 3.10, which means it is 11.6 billion light-years from Earth. The Ursa Major cloud is slightly farther away, with a redshift of 3.41 and at a distance of 11.9 billion light-years. We therefore see both clouds as they were about two billion years after the Big Bang.
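
The quoted distances can be roughly cross-checked by integrating the Friedmann equation to get the lookback time for each redshift. The sketch below assumes representative flat-ΛCDM parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) – illustrative values, not figures given in the article – and lands within about 0.2 billion years of the quoted distances, the small residual reflecting the choice of parameters.

```python
import math

# Illustrative flat-ΛCDM parameters; an assumption of this sketch,
# not figures quoted in the article.
H0 = 70.0                      # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7    # matter and dark-energy density parameters
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)

def lookback_time_gyr(z, steps=100_000):
    """Lookback time t(z) = (1/H0) * integral from 0 to z of
    dz' / [(1 + z') E(z')], with E(z) = sqrt(Om (1+z)^3 + OL),
    evaluated with the trapezoidal rule."""
    def integrand(zp):
        e = math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
        return 1.0 / ((1.0 + zp) * e)
    h = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    total += sum(integrand(i * h) for i in range(1, steps))
    return HUBBLE_TIME_GYR * total * h

print(round(lookback_time_gyr(3.10), 1))  # Leo cloud (z = 3.10)
print(round(lookback_time_gyr(3.41), 1))  # Ursa Major cloud (z = 3.41)
```

With these parameters both clouds come out at a little over 11 billion years, consistent with seeing them about two billion years after the Big Bang.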

The clouds are far too faint to observe directly. Fumagalli and colleagues discovered them only because the clouds happen to lie in front of even more distant quasars, which are luminous galaxies that were much more common long ago. Atoms in the gas clouds absorb some of the light from the background quasars, and the wavelengths at which absorption is evident reveal important information about the composition of the clouds.

Hydrogen only

Despite using the mammoth Keck I telescope atop Mauna Kea in Hawaii, the astronomers failed to find any element except hydrogen in the two clouds. While the researchers also expect helium and lithium to be present, their technique is not sensitive to those elements. However, if oxygen, carbon or silicon were present, it should have been easy to spot. From this, the researchers deduce that the metal-to-hydrogen ratio (or metallicity) in the Leo cloud is less than 1/6000th of that of the Sun and in the Ursa Major cloud is less than 1/16,000th of the Sun’s metallicity.

In comparison, ancient stars in the Milky Way’s most primitive population – the stellar halo – typically have metallicities that are 1/50th of that of the Sun. The most metal-poor halo star known has a metallicity 1/22,000th that of the Sun, which is similar to the upper limit of the two gas clouds.
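
These fractions translate directly into the logarithmic [M/H] notation astronomers commonly use, where 0 is solar and each step of –1 is a factor of ten lower. The short conversion below is illustrative, not taken from the paper:

```python
import math

def to_dex(fraction_of_solar):
    """Convert a linear metal abundance relative to the Sun into the
    logarithmic [M/H] notation (0 = solar, -1 = a tenth of solar)."""
    return math.log10(fraction_of_solar)

# Upper limits on the two clouds, and the comparison stars
print(round(to_dex(1 / 6000), 1))    # Leo cloud:            < -3.8
print(round(to_dex(1 / 16000), 1))   # Ursa Major cloud:     < -4.2
print(round(to_dex(1 / 50), 1))      # typical halo star:      -1.7
print(round(to_dex(1 / 22000), 1))   # most metal-poor star:   -4.3
```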

“It’s a very interesting discovery,” says Nick Gnedin, an astronomer at Fermilab in the US, who is not affiliated with the discovery team. Gnedin says it has been very difficult to understand why all other gas clouds – even those at greater distances – contain metals. These newfound exceptions should help astronomers answer that question, he says.

Agrees with Big Bang predictions

In addition to ordinary hydrogen, Fumagalli and colleagues detected the hydrogen isotope deuterium in the Ursa Major cloud. Physicists believe that the Big Bang produced deuterium but that stars then destroyed it – so the universe once had more deuterium than it does today.

The high deuterium-to-hydrogen ratio in the gas matches Big Bang predictions. “The fact that we see deuterium that is comparable to what is expected from theory is giving us more confidence that this gas is actually primordial in its composition,” says Fumagalli.

“Nice connection”

Rob Simcoe of the Massachusetts Institute of Technology says that the two clouds show that pockets of the universe remained free of stars and their ejecta for some two billion years after the Big Bang. “This is a nice connection between work that is being done on the early universe using these gas clouds and work that is being done in our backyard, in the stellar halo of the Milky Way, where people have discovered stars that have comparably low chemical abundances,” he says.

Each gas cloud has only about a millionth of the Milky Way’s mass. Simcoe suspects that each will eventually fall onto a galaxy and form stars. If one of those galaxies now has astronomers, they may be peering at the nascent Milky Way and seeing primitive gas clouds that spawned stars in our own galaxy’s stellar halo.

The observations are reported in Science.

What makes a great physics teacher?

By James Dacey

This week, the UK government has announced a £2m-a-year scholarship programme to help persuade 100 graduates to become physics teachers. Each graduate who wins a scholarship will be awarded a £20,000 (roughly $32,000) tax-free bursary provided they have got a place to study for a postgraduate certificate in education (PGCE) in England.

It seems that the architects of the new scheme are keen to attract the brightest physics graduates into teaching. Applicants for the scholarships will need a first-class or upper-second degree and must intend to complete a physics or physics-with-maths PGCE. The new scheme may also help to persuade older graduates to move into teaching from other professions, in the knowledge that they will no longer need to study for a year without a salary.

The scheme is designed to address the lack of specialists teaching physics in English high schools. According to the UK Institute of Physics (IOP), about 1000 new specialist physics teachers in England will be needed every year for the next 15 years to ensure that the subject is taught entirely by specialists. Last year 275 fewer trainees were recruited to physics teacher-training courses than were needed to start plugging the gap. More information about the scheme is included in Michael Banks’s news article from Tuesday.

In this week’s poll, we want you to draw on your own experiences of studying physics at high school by answering the following question:

What do you believe is the single most important quality of a great physics teacher?

A deep knowledge of the subject
Prior experience working as a professional physicist
An enthusiastic and entertaining teaching style
A proven track record of their students getting good grades
An ability to maintain classroom discipline

To cast your vote, please visit our Facebook page. And feel free to explain your choices and share your own school experiences – by posting a comment on the Facebook poll.

In last week’s poll, we appealed strongly to your inner geek by asking you to select your favourite from a list of the most familiar physical constants. And it seems that the issue is close to the hearts of our Facebook followers, as the poll attracted more responses than any previous poll.

The clear winner, collecting 42% of the votes, was Planck’s constant. In second place was the speed of light in a vacuum, which received 21% of the responses, and in third place was the gravitational constant with 14%. Avogadro’s number, Boltzmann’s constant and the charge on an electron took 4th, 5th and 6th places, respectively.

The poll also attracted a lot of comments. For instance, Lulú Hernández, who studied at the Escuela Superior De Física Y Matematicas (IPN) in Mexico, stands firmly beside her choice. “Planck’s constant is the best constant of nature, used to measure the energy of the photon, define the limits of quantum phenomena, among many other applications,” she says [translated from Spanish]. Another respondent, Kate Scaryboots Oliver, wrote simply: “Good ol’ Planck. never lets you down. Unlike c [the speed of light in a vacuum].”

Thank you for all of your responses and we look forward to hearing from you again on the Physics World Facebook page.

The brave new-media world

On 23 September scientists at the Gran Sasso underground laboratory in Italy announced a discovery that could potentially revolutionize physics. The OPERA collaboration based at the lab found evidence that muon neutrinos produced by the CERN particle-physics lab near Geneva move slightly faster than the speed of light while on their way to Gran Sasso, where they are detected. If confirmed, the finding would put into question Einstein’s special theory of relativity, which forbids superluminal travel.

The result quickly turned into one of the most covered physics stories of the year, with numerous articles in magazines, newspapers and on television asking whether “Einstein was wrong”. Just as quickly came numerous physicists denouncing the media frenzy, with Lawrence Krauss from Arizona State University and Cambridge University cosmologist Martin Rees both calling the coverage “an embarrassment”.

“A press conference on a result, which is extremely unlikely to be correct, before the paper has been refereed, is very unfortunate – for CERN and for science,” Krauss told Scientific American. “Once it is shown to be wrong, everyone loses credibility.”

A closer look, however, suggests that the OPERA researchers behaved exactly as scientists should. They did not write a press release, but a technical preprint on the arXiv preprint server; and they did not schedule a press conference, but a seminar at CERN. While some of the media coverage has been regrettable, OPERA scientists are not the architects of overhype, but the victims of a radically changed media landscape.

Ahead of the game

Every sub-field of physics has its stories about how responsible physicists deal with anomalous results. In my own speciality of atomic, molecular and optical physics, for example, Bill Phillips from the National Institute of Standards and Technology (NIST) discovered in the mid-1980s that laser-cooled sodium atoms had a temperature of 40 µK – well below the theoretical limit of 240 µK. Confronted with the extremely odd situation of an experiment working better than expected, Phillips’ team re-checked its results, in the process inventing five new ways of measuring the temperature. When all of their measurements gave the same results, they quietly contacted other researchers in this relatively small community and also gave seminars at other labs. Phillips asked these labs to check their own atoms’ temperature and only after receiving independent confirmation did the NIST group publish the result (Phys. Rev. Lett. 61 169).

The discovery spurred other labs to work on explaining the lower temperatures and groups led by Steve Chu at Stanford University and Claude Cohen-Tannoudji at the Ecole Normale Supérieure in Paris soon worked out the correct theory. A decade later Phillips, Chu and Cohen-Tannoudji shared the 1997 Nobel Prize for Physics for their work on laser cooling. This case is often cited as an example of how to handle surprising results: first re-check all the measurements; then seek independent confirmation; and only after confirming your results, schedule a press conference.

Yet this is essentially what the OPERA collaboration did. Their paper on arXiv (1109.4897) shows that they considered most of the obvious sources of error in their results and re-checked the key elements by, for example, having national standards labs, such as the Swiss Metrology Institute and the PTB in Germany, verify the synchronization of their clocks and the distance between the neutrino source and the detector. There is some debate about whether they exhausted all possible checks, and around four senior members of the 160-strong collaboration removed their names from the paper as a result (see p12, print edition only). But they did also have some support from an earlier measurement in 2007 when the US-based MINOS experiment reported a similar anomaly in the apparent speed of its neutrinos, though with larger experimental uncertainty.

With the analysis showing an anomalously high speed, the appropriate next step was to present the result to other physicists for scrutiny. In this case that meant posting the preprint on arXiv and scheduling a seminar at CERN to present the results to leading particle physicists. Thanks to blogs, Twitter and other forms of social media, however, posting a preprint and scheduling a seminar are more or less equivalent to calling a press conference. Indeed, science journalists routinely monitor social media and arXiv for stories. Just 15 years ago, posting a paper on arXiv was a quiet way to disseminate results to other physicists; today, it is as good as e-mailing it to every science reporter.

The new-media landscape

Research is increasingly being done in the open. Some scientists do this deliberately, by posting all their data on freely available websites; but for others, the new openness is not by choice. And while physicists pioneered open-access science with arXiv, in many other ways we have been slow to adapt. Indeed, the recent brouhaha is not the first time physicists and social media have collided, with, for example, particle physicists Tommaso Dorigo and John Conway sparking controversy in 2007 by discussing preliminary data from Fermilab’s Tevatron collider on their blogs. Rumours of preliminary data spreading through social media have also led to some false alarms, such as the inconclusive findings of the CDMS dark-matter search in late 2009 being blown up into major events.

The problem of exciting results being released early will not go away and if anything is only likely to get worse. Physicists, therefore, need to adapt to the brave new-media world, in which it is nearly impossible to keep exciting results completely under wraps. Physicists are justly proud of having created the World Wide Web; now we have to get used to doing science in full view of the Internet.

Four-wheel nanocar takes to the road

A “four-wheel drive car” less than one billionth the length of an average SUV has been built and operated by researchers in the Netherlands and Switzerland. The molecular machine is about 1 nm long and uses electrons as fuel as it navigates across a copper surface. The device could find use in nanometre-sized robotics or as a tiny transporter that shifts molecules around.

Molecular machines are common in nature. Motor proteins, for example, can move along a surface to transport molecular-sized cargo and are often used to build structures within living cells. Scientists would like to make their own versions of motor proteins, and indeed they have already designed and demonstrated single molecules that can move across surfaces. But these have been mostly passive: to ensure that they travel in a certain direction, they have had to be pulled or pushed.

Now, Ben Feringa of the University of Groningen and colleagues have demonstrated a truly active single-molecule vehicle. Constructed around an organic, carbon-based frame, it has four “wheels” or rotor parts, connected to the body via carbon–carbon double bonds. When the tip of a nearby scanning tunnelling microscope fires electrons at these bonds, they break and re-form the other way round. This process is known as isomerization and causes the wheels to turn, and the vehicle to move forward.

Steering by symmetry

Feringa and colleagues could make their molecular vehicle move in two ways, by adjusting the symmetry or “chirality” of the rotor parts. In one, the vehicle moves along a random path, something that has been performed before with active molecular machines. However, the researchers could also make the vehicle drive in a nearly straight line.

“The important step taken, in my opinion, is that we have shown that we can propel a single molecule along a surface and control directionality,” said Feringa. “This is exactly what happens with protein nanomotors that ‘walk’ along filaments with control of directionality,” he added.

‘Milestone’ reached

Ludwig Bartels at the University of California at Riverside, US, agrees that the ability to control direction is a major step forward. “This work is a milestone towards controlled transport of molecular species across surfaces,” he says. “But much work remains – most importantly, the replacement of the energy source away from the tip of a tunnelling microscope (which could in the first place just drag any molecule along, irrespective of its nature), and the achievement of concerted motion of the substrate linkers so that the motion becomes really straight.”

James Tour of Rice University in Texas, US, thinks the demonstration brings scientists closer to the goal of using synthetic molecular machines to assemble structures, rather like enzymes do inside the body. “This is an important and fundamental milestone in the quest for nanomachines that will one day do useful work,” he says.

The research is described in Nature 479 208.

Russian mission to Mars fails

Russia’s first interplanetary mission in over 15 years has failed after launch due to an engine mishap that prevented the unmanned spacecraft from being sent on its proper course toward Mars. The craft – dubbed Phobos-Grunt – is now trapped in orbit around Earth with engineers having around three days to save the craft before its batteries run out.

Phobos-Grunt, costing $163m, is a Russian Space Agency mission that was designed to travel to Mars’ moon Phobos and return up to 200 g of its soil to Earth in a three-year trip ending in 2014. The craft is also carrying a Chinese probe – Yinghuo-1 – that would separate from Phobos-Grunt to orbit Mars for one year and study the planet’s atmosphere. The 115 kg probe is China’s first to venture to another planet.

Phobos-Grunt (which means Phobos-soil) was launched at 20:16 GMT on Tuesday 8 November from the Baikonur launch pad in Kazakhstan on a Zenit-2SB rocket. The launch itself went smoothly and the craft separated from its Zenit launch vehicle as planned. However, Phobos-Grunt then failed to establish its correct orientation in space, which prevented the command to fire its own propulsion system from being carried out, and the craft began to veer off course.

“The engine did not fire, neither the first nor the second burn occurred,” Russian space agency chief Vladimir Popovkin told the Interfax news agency. “I would not say it is a failure, it’s a non-standard situation, but it is a working situation.”

Ground controllers now have three days to upload new instructions to the spacecraft before the on-board batteries are fully discharged – a tight deadline that suggests Phobos-Grunt cannot currently use its solar panels to recharge the batteries.

The malfunction of Phobos-Grunt comes after Russia’s last mission to Mars – Mars 96 – also failed on its way to the red planet when the probe re-entered the Earth’s atmosphere, breaking up over the Pacific Ocean.

Earlier this year Ziyuan Ouyang, chief scientist of China’s lunar programme, told Physics World that the country now “has all the technology and know-how to eventually explore Mars on its own”.

Plasmonic absorbers turn a corner

A new nanostructure that can absorb light at any polarization and across the entire visible spectrum has been made by physicists in the US. The “plasmonic” structure has been used to convert absorbed light into heat and might be able to improve the efficiency of solar cells.

Solar cells may be a tempting green-energy technology, but they remain much less cost-effective than fossil-fuel energy. Most of the high cost of solar cells resides in the production cost of silicon – the most commonly used semiconductor. For this reason, industry is interested in solar cells made of far thinner films – around 1 µm, rather than 300 µm – so that less material is required.

The problem is that thinner conventional solar cells are not efficient. Longer-wavelength light must travel farther through silicon before being absorbed; so the thinner the cell, the less light at the red end of the spectrum is absorbed. To combat this, scientists have begun to investigate so-called plasmonic nanostructures, which are excellent at scattering light. By placing such nanostructures on top of a solar cell, light rays travelling downwards can be turned 90° so they travel horizontally. As a result, the entire width of the cell can be used for absorption.

Wavelength and polarization are limited

The plasmonic nanostructures created so far, however, have not fitted the bill because they are able to scatter light only at a narrow range of wavelengths – or for just a single polarization. Now, Harry Atwater and colleagues at the California Institute of Technology have come up with a plasmonic design that can scatter and absorb light independent of polarization and for all wavelengths of visible light.

The design consists of rows of silver trapezoids, each 300 nm long and joined together to look like the teeth of a saw. As each trapezoid tapers in width from 40 nm to 120 nm, the absorption works over a range of wavelengths: from blue light at 400 nm to red light at 700 nm. On its own, though, this design would only work for a single polarization; to make it polarization independent, Atwater and colleagues cross the trapezoidal rows with another set of trapezoidal rows at 90°.

At just 260 nm thick, the nanostructure has the potential to be a good plasmonic layer for an ultrathin solar cell. However, this is not possible yet because the metal and dielectric construction converts the absorbed photons to heat. This is in contrast to a semiconductor, which could convert the light into electricity. However, group member Koray Aydin, who is now at Northwestern University, says he and his colleagues have ideas for enhancing light absorption in semiconductors using the design principles they have learned.

Semiconductors are next

“The next step is obviously [to] demonstrate enhancements in semiconductors,” says Aydin. “First looking at how we can increase the light absorption in semiconductors, [and] eventually we [will be] interested in actual solar cells.”

Jeremy Baumberg, a nanophotonics expert at the University of Cambridge in the UK, is sceptical that applications in solar cells will be straightforward. “The designs [would have to be] changed so that the light is then absorbed not in the silver but in the semiconductor,” he says. “[Therefore the] effectiveness remains completely unclear. So, the results take us further in extending the colour range that gets absorbed, but [not] whether it is useful.”

Still, Aydin notes that there could be other applications in addition to solar cells – perhaps simply as a material that is very thin and very good at blocking the transmission of light.

The research was published last week in Nature Communications.

Cash hand-out to boost physics teachers

The UK government has announced a £2m-a-year scholarship programme to help persuade 100 graduates to become physics teachers. Each graduate who wins a scholarship will be awarded a £20,000 tax-free bursary provided they have got a place to study for a postgraduate certificate in education (PGCE) in England.

The new scholarships will be awarded by the Institute of Physics (IOP), which publishes physicsworld.com, as part of a pilot scheme that will initially last for one year. Students applying for a scholarship will need a first-class or upper-second degree and must intend to complete a physics or physics-with-maths PGCE.

The UK already offers a handsome bursary scheme for graduates in certain subjects to train as teachers. Currently, graduate students in physics are awarded £5000 by the Training and Development Agency if they enrol and complete a PGCE course. This existing bursary will remain for students with lower-second or poorer degrees, or for those who fail to get on the new IOP scheme.

“In physics, teaching is sometimes seen as a profession for those with a 2:2 degree, while those who have a 2:1 or a first go into research,” says Peter Main, director of education and science at the IOP. “This initiative is about raising the status of teaching as a profession so that the top students are attracted to it.”

According to the IOP, about 1000 new specialist physics teachers in England will be needed every year for the next 15 years to ensure that the subject is taught entirely by specialist teachers. Last year 275 fewer trainees were recruited to physics teacher-training courses than were needed to start plugging the gap.

Keith Taber, senior lecturer in science education at the University of Cambridge, says the new initiative is important in a number of ways. “First, these bursaries will be awarded to high-quality candidates; and as the IOP is administering the process, successful candidates will need to show that they have the qualities required to be good teachers,” he says. “The second important feature is that after many years when physics was not recognized or even mentioned as an explicit discipline within the school curriculum, it is now being accepted that while physics teachers are science teachers, they bring a particular expertise as physics specialists to complement other science specialists in a school.”

Taber adds that a teaching career is now an “attractive” option for graduates who wish to use their mathematics expertise but are not keen on teaching biology and chemistry topics. “The initiative allows new teachers to choose to prepare to teach physics with maths rather than prepare to teach science as a physics specialist,” he says.

Students looking to get their hands on the first round of scholarships will have to act quickly as the deadline for online applications is 6 December 2011. Successful applicants will then attend an interview at the IOP’s London office, expected to be held on 20 December. Five further opportunities to apply will be available next year.

For Newton, a right Hooke

Shortly after lunchtime on Wednesday 26 June 1689, Robert Hooke began delivering one of his regular lectures at the Royal Society, London. These were dramatic performances in which he would entertain his philosophically minded peers with experiments, often using instruments he had developed himself. But in this lecture, Hooke digressed. “Many of those things I here first discovered could not find acceptance,” he protested. “Yet I find there are not wanting some who pride themselves in arrogating of them for their own.”

Quite how many developments in science and engineering should be credited to Hooke rather than to his contemporaries – especially Isaac Newton – had long been a matter of debate. Then, in 2006, a stash of mysterious papers turned up at a Hampshire country house that threw new light on the subject. They turned out to be Hooke’s long-lost Folio – the minutes of meetings at the Royal Society during his tenure as curator of experiments. The Folio revealed that, as Hooke had always maintained, it was he and not the Dutch astronomer Christiaan Huygens who was the first to demonstrate a portable timepiece based on a balance spring. More importantly, it showed that Hooke was first to state that gravity causes the elliptical motion of the planets – an idea that Newton later honed into the famous inverse-square law.

These revelations have impelled the British playwright Siobhán Nicholas to create Hanging Hooke, which seeks to restore Hooke to his rightful place as one of history’s finest natural philosophers. It is an intimate, low-budget play, set on a makeshift stage and performed in monologues by a single actor, Chris Barnes, who switches between two different characters. One of the characters is John Hoskins, an amalgam of John Hoskins senior, one-time portrait painter of Charles I, and his son John “Jack” Hoskins, about whom much less is known. The other, naturally, is Hooke himself.

We see Hoskins first, wracked with guilt in his studio as Hooke lies dying somewhere in London. Hoskins recollects how the young Hooke approached him as he painted on the Isle of Wight, even sitting down to copy his seaside paintings. They became good friends, so much so that Hoskins made him his assistant in London, where he learned of Hooke’s myriad ideas. “Had I discovered the English Leonardo?” he asks, using a now-common epithet. But Hoskins, a Royalist, had to travel to Saxony to escape the forces of Oliver Cromwell and, while there, he joined a secretive society, the Rosicrucians. In the play, this allies him with other Rosicrucians in the Royal Society, including Newton, and it is this allegiance that finally forces Hoskins to betray his friend – “My shameful task,” he says – by taking the Folio that proved Hooke’s prior claims to many innovations.

Despite his betrayal, Hoskins does a grand job of praising his friend’s lost achievements. So we learn that Hooke coined the term “cell” for biological organisms, after peering at pieces of cork through a microscope; that he was first to publish a wave theory of light; that he developed the first portable timepiece; that he built the vacuum pumps used by Robert Boyle in his gas-law experiments; that he designed telescopes that could reveal the rotation of planets; and that, like Leonardo, he had come up with dozens of blueprints for flying machines. In short, Hooke was a master polymath.

In the play’s second half, Hoskins transforms himself into Hooke – broad London accent, straggly hair, hunchback and all. Here, Barnes gives a commanding performance. At one point, we see Hooke give a lecture in which he uses one of his self-made pumps to demonstrate the effect of a vacuum on the human body. On stage, though, there is no pump – props are minimal – so Barnes creates the effect by standing on his head, describing the creeping asphyxiation that results. This is an impressive feat, and powerfully conveys the sacrifices Hooke made to advance knowledge.

Later, Barnes portrays Hooke as an aging hypochondriac, turned paranoid from overuse of quack remedies and bitterly resentful of his treatment within the Royal Society. In one scene, Hooke describes his most significant realization: that the planets move elliptically around the Sun, held in orbit by an attraction at a distance, namely gravity. He writes to Newton, explaining his revolutionary idea, but for years hears nothing in return. “What medicine do I take for this heavy, prescient spirit?” he asks. “What for melancholy?”

Hooke had good reason to be distraught. His achievements should have made him a scientific superstar, yet somehow he faded into obscurity, becoming famous only for Hooke’s law – a rather boring relation that states that the extension of a spring is proportional to the force applied. Why didn’t he become a household name? Nicholas offers several explanations, one being Newton, who, we are told, ordered the destruction of both Hooke’s manuscripts and his portrait. (No surviving portraits of Hooke are known to exist.) Another is Hoskins, who embodies the failure of Hooke’s friends and family to preserve his legacy.

It was Newton, of course, who ultimately took credit for gravity. People can judge for themselves whether this was justified. Hooke had the initial idea, and may have suggested that the force between two objects followed an inverse-square law. But the play is one-sided, portraying Newton as a scheming, self-interested man who, as president of the Royal Society, could tweak history as he saw fit. This is a bit unfair. For all his brilliance, Hooke himself was wont to exaggerate his achievements, speculating about phenomena for which he had little empirical data – and it was almost certainly Newton who first thrashed out the mathematical details of gravitation.

Still, Hooke deserves his time in the limelight, and both Nicholas and Barnes have done a tremendous job of balancing explanations of his discoveries with the politics of the time, ensuring that the production never seems clunky. Around the tercentenary of Hooke’s death in 2003, several biographers re-explored his life, offering a much kinder portrait than those given in the past. Together with the uncovered Folio and Hanging Hooke, they have generated a resurgence of interest: a stone memorial now sits beside Christopher Wren’s Monument in London, which Hooke helped design, and a plaque has been placed in the crypt at St Paul’s Cathedral. Meagre offerings perhaps, but a good start in remembering the man about whom so much has been forgotten.

Norman Ramsey: 1915–2011

The US physicist Norman Ramsey, who shared the 1989 Nobel Prize for Physics, died on 4 November at the age of 96. Ramsey’s work on probing the structure of atoms with high precision was instrumental in the later development of the atomic clock, as well as in medical applications such as magnetic resonance imaging (MRI), which is now widely used to image the inside of the body by probing atomic nuclei.

Ramsey’s pioneering work in the 1940s followed that of his PhD supervisor – the Nobel laureate Isidor Isaac Rabi of Columbia University. In 1937 Rabi invented a technique called atomic-beam magnetic resonance to study the structure of atoms by driving transitions between their energy levels with radiation. The technique involves passing a beam of atoms through a homogeneous magnetic field before subjecting it to a single oscillating electromagnetic field, set to a frequency that induces transitions between particular energy levels in the atoms. The absorbed or emitted radiation has a characteristic frequency, or wavelength, that depends on the energy difference between the two levels.

Any inhomogeneity in the magnetic field, however, broadens the resonance line and therefore degrades the accuracy of the experiment. In 1949 Ramsey modified Rabi’s method by introducing two separated oscillatory fields. The atoms can then be excited in either of the two regions, producing an interference pattern whose sensitivity depends on the distance between the two oscillatory fields but is independent of the homogeneity of the magnetic field between them. This made it possible to greatly improve the accuracy of Rabi’s method – reducing the width of the transition spectral line by as much as 35%.
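The scaling behind this narrowing can be sketched with a textbook relation (not spelled out in the obituary itself): for atoms of speed \(v\) crossing two oscillatory-field regions separated by a distance \(L\), the width of the Ramsey fringes is set by the free-flight time between the fields, not by the time spent inside either one.

```latex
% Resonance condition: the oscillating field is tuned so that
%   h \nu_0 = E_2 - E_1 .
% The central Ramsey fringe then has a width (a standard result)
\[
  \Delta\nu \approx \frac{1}{2T}, \qquad T = \frac{L}{v},
\]
% so lengthening the free-flight region L narrows the line -- without
% any need to keep the magnetic field homogeneous over that distance.
```

This is why the method pairs so naturally with atomic clocks: the interrogation region can be made long (or, in fountain clocks, the flight time long) without the field-homogeneity penalty that limited Rabi’s original scheme.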

Ramsey’s breakthrough allowed more precise measurements of atomic-energy spectra and has subsequently been used in caesium clocks, which have provided our standard of atomic time since 1967. His technique also provided the basis for nuclear-magnetic-resonance spectroscopy and MRI. In 1960 Ramsey, together with Daniel Kleppner of the Massachusetts Institute of Technology and other colleagues, also began to develop the hydrogen maser – a device that produces coherent electromagnetic waves through amplification by stimulated emission.

It was for this work – the “invention of the separated-oscillatory-fields method and its use in the hydrogen maser and other atomic clocks” – that Ramsey was awarded half of the 1989 Nobel Prize for Physics. The other half was shared by Hans Dehmelt from the University of Washington and Wolfgang Paul from the University of Bonn, Germany, “for the development of the ion-trap technique”.

Born on 27 August 1915 in Washington, DC, Ramsey studied mathematics at Columbia College in New York, graduating in 1935. He then moved to Cambridge University in the UK, where he obtained a second Bachelor’s degree – this time in physics – before heading back to Columbia to do a PhD, supervised by Rabi, in the new field of magnetic resonance. Ramsey stayed at Columbia until 1947, when he moved to Harvard University; he spent the remainder of his career there, retiring in 1986.

Laser puts a new spin on light

A new type of pulsed laser that modulates the polarization of its emitted light very rapidly has been created by researchers in Germany and Scotland. The polarization modulation occurs much faster than the intensity modulation normally used in optical telecoms systems – and the resulting polarization pulses could dramatically increase the speed of fibre-optic communications.

Modern telecoms systems encode information in pulses of laser light that are then sent along optical fibres. This is an incredibly efficient method of transmitting information, but its speed is ultimately limited by the rate at which the intensity of the laser can be modulated, since this dictates how long it takes to encode data into a train of pulses. With a traditional, intensity-modulated laser, the maximum achievable modulation rate is about 40 GHz.
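To put that ceiling in context, here is some back-of-the-envelope arithmetic (mine, not the article’s): with simple on–off keying, each modulation cycle carries at most one bit, so the modulation rate translates directly into a per-channel data rate and a pulse-slot duration.

```python
# Back-of-the-envelope link arithmetic for an intensity-modulated laser.
# Only the ~40 GHz modulation ceiling comes from the article; the rest
# (on-off keying, one bit per symbol) is an illustrative assumption.

def slot_duration_ps(mod_rate_hz: float) -> float:
    """Duration of one modulation slot, in picoseconds."""
    return 1e12 / mod_rate_hz

def peak_bit_rate_gbps(mod_rate_hz: float, bits_per_symbol: int = 1) -> float:
    """Peak data rate in Gbit/s (on-off keying carries 1 bit per symbol)."""
    return mod_rate_hz * bits_per_symbol / 1e9

MOD_RATE = 40e9  # ~40 GHz, the quoted limit for intensity modulation

print(f"slot duration: {slot_duration_ps(MOD_RATE):.0f} ps")      # 25 ps
print(f"peak bit rate: {peak_bit_rate_gbps(MOD_RATE):.0f} Gbit/s") # 40 Gbit/s
```

On these assumptions, pushing the modulation rate to 100 GHz would lift a single channel from 40 to 100 Gbit/s – which is what makes faster modulation schemes attractive.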

Now Nils Gerhardt and colleagues at the Ruhr-Universität Bochum, together with a colleague at the University of Strathclyde in Glasgow, have worked out a way to use electron spin to boost the modulation speed of a semiconductor laser – something that physicists have been working on for several years.

Lowering the energy threshold

In a standard semiconductor laser there are equal numbers of spin-up and spin-down electrons, so spin plays no part in its operation. However, physicists know that if the relative proportion of charge carriers in either spin state can be increased, this lowers the amount of energy that must be put into the laser before it starts emitting light – called the lasing threshold.

But sustaining this spin imbalance – or polarization – at room temperature has proven extremely difficult because thermal energy will randomize the spin in a few picoseconds. So Gerhardt and colleagues created waves of spin polarization in the semiconductor by blasting it with very short pulses of polarized light from another laser. While the electrons themselves still lose their spin polarization rapidly, some is passed on to photons, which then re-polarize the charge carriers. Such spin oscillations between photons and electrons last about 200 times longer than the electron spin polarization itself.

There was, however, a more interesting feature to the Bochum group’s laser. In contrast to the light from a standard semiconductor laser, which has no net polarization, the polarization of the emitted light oscillated rapidly because of the coupling of the photon spins with the electron spins. While such switching had been demonstrated before, the polarization modulation rate had always been pegged to the intensity modulation rate.

To 100 GHz and beyond

Gerhardt and colleagues used their technique to modulate the polarization of the light from a 4 GHz laser at 11.6 GHz. This is still slower than state-of-the-art intensity-modulated lasers, but the researchers believe that it should be possible to improve on this. “Principally, you can go to over 100 GHz,” explained Gerhardt. “We’ve shown it theoretically, but first we have to develop a device and that’s what we’re currently doing.”

Physicist Igor Zutic of the State University of New York at Buffalo is impressed. “I would say this is a very exciting proof of principle and probably showing us the tip of the iceberg of what may be possible with these spin lasers,” he says.

However, Zutic and Gerhardt both agree that before such a spin laser can be commercially viable, it must be possible to excite the spin oscillations without another laser. This would involve injecting spin-polarized electrons into the laser – a process so far realized only at cryogenic temperatures. “Some of the advances that are now pursued in very different areas – such as magnetic hard drives and magnetic random-access memory – are based on better magnets and better methods of electrical spin injection,” concludes Zutic. “A broader view of these developments may allow useful transfer of knowledge.”

The work is described in Appl. Phys. Lett. 99 151107.

Copyright © 2026 by IOP Publishing Ltd and individual contributors