
Bright ideas and their architects

Stephen Wolfram was a child prodigy, receiving his PhD in particle physics at the age of 20. His subsequent achievements include significant work on cellular automata, the creation of the computer-algebra system “Mathematica” and the computational knowledge system “Wolfram Alpha”. He also wrote a large and controversial book, A New Kind of Science (2002), which argues that computation is key to understanding our universe. But you don’t need to know about his achievements to read Wolfram’s latest book, Idea Makers: Personal Perspectives on the Lives and Ideas of Some Notable People, not least because they are rather frequently mentioned in the text.

Idea Makers is exactly what the subtitle suggests: it presents a very personal view of a number of people who have interested Wolfram, who is himself a significant scientific thinker. Some are major past figures from the history of science, such as Gottfried Wilhelm Leibniz, Ada Lovelace and Alan Turing. Others the author himself knew and worked with, including the likes of Richard Feynman – one of Wolfram’s PhD examiners, whom he later came to know well – Steve Jobs and Solomon Golomb. I knew of Golomb as the inventor of “pentominoes” (a generalization of dominoes, which involves joining five equal squares edge-to-edge) but his work, I found out from the book, is of fundamental importance in electronic engineering. Most of the people featured in the book only appear briefly, as their entries are written to mark an anniversary, an event or a death. Many of the entries do focus on Wolfram’s personal connections with his subjects, but this is understandable given the premise of the book.

While most of the subjects are well known, the name of Russell Towle was new to me. Towle, a US mathematician who died in a road accident in 2008, used Mathematica to explore zonohedra, a special kind of polyhedra, and corresponded with Wolfram about his work. The chapter on Towle includes some of his fascinating images, and I am delighted to have learned about Towle and his work.

Wolfram does not attempt to provide a complete portrait of his subjects: instead, he is interested in particular aspects of their work (naturally, those which relate to his own ideas), and there is a diversity of approach. The very interesting chapter on Feynman is largely personal reminiscence, while the section on Kurt Gödel (written on the centenary of his birth) argues that Gödel’s “abstruse theorem of mathematics” has set the agenda for 21st-century science.

Wolfram’s first-hand research into archival material, coupled with his enthusiasm for investigating the past, comes to light in the three more comprehensive chapters, on Lovelace, Leibniz and Srinivasa Ramanujan. These extended essays were, for me, particularly intriguing. Wolfram quotes extensively from original documents in arguing for Lovelace’s importance. When I first heard of Lovelace, the prevailing view was that, while she undoubtedly played an important role in helping Charles Babbage develop his calculating engines, doubts remained as to the extent of her personal technical contribution. Wolfram, along with other recent researchers, convincingly dispels any such doubts. Happily, following last year’s bicentenary of her birth, Lovelace’s reputation has never been higher, and Wolfram adds yet more persuasive evidence of her abilities.

The chapter on Ramanujan delves into his unique way of doing mathematics, as well as his extraordinary insights into mathematical patterns. Wolfram goes as far as recommending the reader adopt a similarly adventurous and experimental attitude in mathematics. Personally, I’m enough of a traditional “pure” mathematician to want proof to go along with experimentation, and in some ways, I found this the least convincing section of the book.

In writing about Leibniz, Wolfram analyses Leibniz’s thinking about computation. He concludes that Leibniz didn’t take discrete systems seriously enough to anticipate modern ideas of universal computation; but he is interested in Leibniz’s use of binary and in his calculating machine. Although Wolfram’s approach is not entirely that of a historian – he is comfortable drawing conclusions from the perspective of our present-day thinking – the chapter is nevertheless illuminating and thought-provoking.

Wolfram writes very well: he is always entertaining and his ideas are interesting. While it is somewhat natural that he relates his subjects’ ideas to his own, reading the frequent speculations about how these great thinkers might have appreciated Wolfram’s work became slightly overwhelming for me. There are sections with headings such as “What if Ramanujan had Mathematica?”, many comments along the lines of “I’m sure he [John von Neumann] would have been a big Mathematica user today”, and “perhaps long ago he [Turing] would have campaigned for the creation of something like Wolfram|Alpha”. The book discusses why John von Neumann did not anticipate Wolfram’s insight that cellular automata with simple rules can generate very complex behaviour. Despite this, I don’t think Wolfram intends to be self-congratulatory. To his credit, in the chapter on Benoit Mandelbrot, he quotes the fractalist’s highly unfavourable opinion of A New Kind of Science. Reading this book, one wonders whether his description of Mandelbrot as “constantly seeking validation and constantly fighting to get his due” might also apply to Wolfram himself.

Because of the personal nature of the book, Wolfram provides a very partial view of his subjects, in more than one sense of the adjective. The reader will meet some fascinating characters who deserve to be better known, and will gain insights into some of the major figures in the history of science and technology, both recent and from the more distant past. They will also learn a lot about the author and his own thoughts – more explicitly than most scientific biographers would allow themselves. Whatever you think of Wolfram’s big ideas, his thoughts and perspectives are illuminating and are worth careful consideration.

  • 2016 Wolfram Media $22.95/£17.99hb 250pp

Galaxies and auroras and planets, oh my!

Photograph of the Milky Way and bioluminescent phytoplankton by Arwen Dyer in Tasmania

For the past eight years, astrophotographers from all corners of the globe have sent in their best and most exquisite images of our cosmos to the Astronomy Photographer of the Year award, run by the Royal Observatory in Greenwich, UK. This year’s submissions were a bumper crop, with nearly 4500 images entered into the competition from photographers in 80 countries.

In Astronomy Photographer of the Year: Collection 5, the Royal Observatory has put together all the winning and shortlisted images from the 2016 competition. The coffee-table book contains 140 large glossy photographs of a variety of celestial objects, all photographed from the Earth. While the Royal Observatory hosts a free exhibition of the winners each year, this book offers readers the world over a chance to view these breath-taking images.

Photograph of the Moon by Dani Caxete at Cadalso de los Vidrios in Madrid

The competition has eight different categories including skyscapes, auroras, galaxies, people and space. There is also a separate competition for entrants aged 15 and under, where the subject can be anything astronomical. The competition now also boasts two special prizes – the Sir Patrick Moore Prize for Best Newcomer, for first-time entrants who have taken up astrophotography only within the past year; and the Robotic Scope Prize, which is given to photographs taken with a remotely controlled telescope.

Photograph of city lights and star trails by Wing Ka Ho in Hong Kong

The entries are judged by a panel that includes astronomers, photographers and artists, with the book containing a poignant foreword penned by judge and Turner-prize-winning fine-art photographer Wolfgang Tillmans. “Astronomy transcends borders and cultures,” he writes, recounting how European astronomers in the 18th century were given special “safe passage” to observe a transit of Venus, despite the fact that France and Britain were then at war. “In today’s divided world, maybe astronomy can still help bring us together.”

  • 2016 Collins Astronomy £25hb 192pp

Tracing the path towards totality

A retro style poster for the August 2017 total solar eclipse in the US, created by book author Tyler Nordgren

On 21 August 2017 a total solar eclipse will cast its sweeping shadow across the US. Starting around 10 a.m. in Oregon on the west coast, it will end all too soon, a mere 90 minutes later, in South Carolina in the east. An awe-inspiring cosmic display awaits those who are either lucky enough to live along that swath of land, or who make the effort to get there. Depending on where it is observed from along its path in the US, totality will last anywhere between 1 and 2.5 minutes, but for the rest of North and Central America, the eclipse will be partial. Nonetheless, the sight of the new Moon carving a dark crescent into the solar disc will be a celestial spectacle for any viewer. In his book Sun Moon Earth: the History of Solar Eclipses from Omens of Doom to Einstein and Exoplanets, author and astronomer Tyler Nordgren charts the evolving history and science of the natural phenomenon that is a solar eclipse.

Through a narrative that flows effortlessly between personal experiences and scientific facts, Nordgren – a professor of physics at the University of Redlands in California, US – engages the reader with the fascinating scientific discoveries that this cosmic coincidence makes possible. Eclipses are a good reminder that planetary motions follow well-defined laws of physics – for the most part. It is the exceptions, when the laws appear to break down, that have driven some of the most significant discoveries of our time, including the modification of Newton’s law of gravitation in the form of Albert Einstein’s general theory of relativity, which was substantiated thanks to Arthur Eddington’s observations of starlight during the eclipse of 1919.

Nordgren also describes just how addictive it can become once you witness a total solar eclipse – see one and you are already planning for the next. Eclipse chasing can become a costly affair as you often travel to remote locations. As a scientist who has been leading a team to observe total solar eclipses since 1995, I understand all too well the eclipse-addiction syndrome. For my group – the Solar Wind Sherpas – an eclipse offers us the unique opportunity to probe a small section of the sun’s corona (of just a few solar radii) that is closest to its surface, which currently cannot be observed by any other instrument. With each eclipse, opportunities arise for testing new ideas and new instrumentation.

For the most part, it is the fleeting beauty of this event that makes the experience so compelling. With totality lasting a maximum of about seven minutes, the magic is always over too soon. The prospect of bad weather – which sometimes looms as a literal dark cloud – is omnipresent and makes observing solar eclipses even more of a challenge. The Solar Wind Sherpas have experienced the entire spectrum of emotion, from utter disappointment when the view was clouded out, to extreme joy when clear skies prevailed. But the addiction persists as we continue to strive to uncover some of the secrets of the Sun and its corona.

Astronomers and non-scientists are often equally obsessed with eclipses. But for the former, it is the unique opportunities that a solar eclipse offers – to test certain theories or trial new technologies – that are tempting. Beyond the sheer visual awe of an eclipse, the celestial setting that comprises the Sun, Moon and Earth serves as an excellent laboratory tool. Light is a ubiquitous astronomical signal that can be detected using everything from a telescope to a spectrograph to the naked human eye and has been studied through the centuries.

According to Nordgren, the world’s first eclipse-chaser happened to be a scientist – Jacques d’Allonville, or Chevalier de Louville, a member of the Royal Academy of Sciences in Paris – who travelled to London to see the total solar eclipse of 22 April 1715. This eclipse had been predicted by astronomer and mathematician Edmond Halley, using his friend Isaac Newton’s laws of motion. Incidentally, this event was also one of the first times that the public had been asked to engage in the science being done. Halley distributed posters across the country asking people to record the time and duration of the event using their pendulum clocks and to mail him their results. Halley’s aim was to make better measurements of the Moon’s orbit, thereby improving his ability to predict future eclipses.

A strikingly similar example of citizen science followed some 200 years later, when a total solar eclipse was predicted to pass over New York City on 24 January 1925. A number of scientists urged the public to witness totality and record the eclipse’s time and duration, in what the New York Times deemed “cosmic detective work.”

The book ends by offering the author’s insight into the evolution and the ultimate fate of our Sun and solar system. Nordgren teases us with the basic question, and worry, of whether eclipses will “forever” be present for us to marvel at. Currently, no other planet in our system has the privilege of experiencing a total solar eclipse. Our Sun, Moon and Earth can continue to boast of their unique alignment, but for how long? We can breathe a sigh of relief for now, as this fortuitous combination of sizes, distances and orbits that allows for a total solar eclipse to occur will last for at least the next few hundred million years.

Nordgren captures the scientific significance of total solar eclipses in a manner that is readily accessible to most readers. Certain concepts mentioned in the book – including the rather complicated story behind the modification to Newtonian gravity, which could not account for discrepancies in the orbit of Mercury, and eventually led to Einstein’s theory of relativity – might be difficult for some to follow. However, this does not deter the reader from carrying on. Nordgren’s book is extremely timely, and hopefully many of its readers will be compelled to witness the beauty of the corona next year.

  • 2016 Basic Books $26.99hb 256pp

Waking up to a gravitational wave


By Hamish Johnston

Yesterday we announced the winner of the Physics World 2016 Breakthrough of the Year, which went to the LIGO Scientific Collaboration for its revolutionary first-ever direct observations of gravitational waves. I caught up with six LIGO scientists in a video Hangout and asked them what it was like when they first realized that they had detected gravitational waves emanating from two coalescing black holes 1.3 billion light-years away.


Quantum free fall at 8500 m

“Weightless” experiments that compare the gravitational acceleration of two different quantum objects have been performed by physicists in France. Carried out in free fall on board an aircraft flying a parabolic trajectory, the tests were far too insensitive to challenge the long-held idea that all bodies fall at the same rate (in a vacuum) in a given gravitational field. However, the research could lead to far more powerful space-based experiments and might also result in the development of new navigational aids.

The universality of free fall is a consequence of the equivalence principle, which lies at the heart of Einstein’s general theory of relativity. It states that inertial and gravitational mass are equal, which means that a body’s mass – or indeed its internal structure – has no bearing on its acceleration in a gravitational field. Therefore two bodies with different masses or compositions will accelerate at the same rate.
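
The cancellation at the heart of this argument can be made concrete in a few lines: setting Newton’s gravitational force GMm/r² equal to ma, the test mass m drops out, so the computed acceleration is the same whatever is dropped. A minimal sketch using standard values for the Earth:

```python
# Equating F = G*M*m/r**2 with F = m*a cancels the test mass m,
# so a = G*M/r**2 is independent of what is dropped.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
r = 6.371e6          # m, Earth's mean radius

accelerations = []
for m in (1e-3, 1.0, 1e3):   # a gram, a kilogram, a tonne
    a = G * M_earth / r**2   # m has already cancelled
    accelerations.append(a)

print(accelerations)  # the same ~9.82 m/s^2 for every test mass
```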

Universality has been tested to ever greater precision since Galileo’s mythical experiment at the Leaning Tower of Pisa – and, so far, has never failed. The most precise experiment to date was carried out in 2008 by researchers at the University of Washington in Seattle, who found that universality held to one part in 10^13.

Microscope in space

Physicists would like to boost precision by at least a factor of 100, since it is at this level that some theories beyond the Standard Model of particle physics predict that the universality of free fall will break down. In fact, one space mission already in orbit around the Earth – the Micro-Satellite à traînée Compensée pour l’Observation du Principe d’Equivalence (Microscope), developed by the French National Centre for Space Studies (CNES) – is designed to reach a sensitivity of about 10^–15 and could produce its first significant results early next year.

Microscope takes advantage of the fact that orbiting satellites are in free fall towards the Earth. Therefore objects inside the satellite are themselves in free fall for far longer than any mass dropped on Earth. This means that acceleration measurements can in principle reach very high sensitivities.

Microscope, like the University of Washington experiment, studies the free fall of large “classical objects”. In contrast, the latest work, carried out by Philippe Bouyer and Brynle Barrett of the LP2N laboratory in Bordeaux and colleagues, uses “quantum objects”. These are extremely cold clouds of two types of atom: rubidium-87 and potassium-39. Atomic systems have a number of advantages over macroscopic objects, according to Bouyer, including the fact that there is no possibility of contamination by unknown quantities of impurities. Also, spin and other quantum-mechanical properties of the atoms can be varied to see if this causes a violation of the equivalence principle.

Struck by lasers

The rubidium and potassium atoms are allowed to fall under the influence of gravity. As they drop they are struck by laser pulses, which act as beam splitters for matter waves, causing each atom’s wave packet to split and follow two vertical paths at the same time. At the end of their trajectory, the two states interfere with one another, producing interference fringes. Comparing the positions of the fringes produced by the rubidium and potassium then allows the researchers to establish whether the two different types of atom have undergone different relative phase shifts and hence experienced very slightly different accelerations.
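
For a Mach–Zehnder-type atom interferometer, the fringe phase is commonly written as φ = k_eff·a·T², where k_eff is the effective wavevector of the beam-splitting light and T the time between pulses. The sketch below uses purely illustrative parameters (the wavelengths are the usual Rb and K D2 lines, but k_eff, T and the pulse geometry are assumptions, not the experiment’s actual values) to show how comparing the two species’ recovered accelerations yields an Eötvös-style ratio:

```python
import math

def acceleration_from_phase(phi, k_eff, T):
    """Invert the Mach-Zehnder phase relation phi = k_eff * a * T**2."""
    return phi / (k_eff * T**2)

# Illustrative parameters (assumptions, not the experiment's values):
# two-photon effective wavevectors near the Rb-87 and K-39 D2 lines,
# and a 1 ms pulse separation.
k_rb = 2 * (2 * math.pi / 780e-9)   # m^-1
k_k  = 2 * (2 * math.pi / 767e-9)   # m^-1
T = 1e-3                            # s

# Simulate both species falling at exactly g, then recover a from phi
g = 9.81
phi_rb = k_rb * g * T**2
phi_k  = k_k * g * T**2
a_rb = acceleration_from_phase(phi_rb, k_rb, T)
a_k  = acceleration_from_phase(phi_k, k_k, T)

# Eotvos ratio: normalized difference of the two accelerations
eta = 2 * (a_rb - a_k) / (a_rb + a_k)
print(eta)  # zero by construction here; the flight tests bounded it at ~3e-4
```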

Physicists have previously used cold-atom interferometers to investigate the universality of free fall, achieving sensitivities of around 10^–8, but these experiments were performed on the ground. As with classical tests, the ultimate aim is to go into space. Bouyer and colleagues haven’t yet managed that, but have instead taken advantage of the near-weightless conditions on board a specially adapted Airbus aeroplane owned by the French company Novespace. The “zero-G” aircraft undergoes free fall for about 20 s at a time by climbing at an angle of around 45° and then throttling its engines back just enough to cancel air drag, so that it traces out a parabola as it accelerates downwards under gravity. The plane then pulls out of the dive, noses up again and traces out another free-fall parabola, repeating the cycle many times over.
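
The quoted 20 s of free fall fixes the rough geometry of each parabola: under gravity alone, the vertical speed at entry must reverse by the end of the manoeuvre, which also gives the height climbed to the apex. A back-of-the-envelope estimate from the article’s numbers, ignoring residual drag:

```python
# From the ~20 s of free fall per parabola we can estimate the
# manoeuvre's geometry using simple projectile kinematics.
g = 9.81
t_freefall = 20.0             # s, quoted duration of one parabola
v_z = g * t_freefall / 2      # vertical speed at entry: ~98 m/s
height_gain = v_z**2 / (2*g)  # apex height above entry: ~490 m
print(v_z, height_gain)
```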

Bouyer and colleagues have carried out nearly 10 years of painstaking work on many parabolic flights to stabilize their complex equipment in the noisy environment of the aircraft. This allowed them to perform tests on rubidium-87. Now, the team has compared the behaviour of two different types of atom over the course of six flights last year.

Suitable for space

Barrett says that the work relies on a number of technical innovations to reduce the effects of on-board vibrations, which can reach about 0.01 g, and the aircraft’s rapid rotation, which can get up to one revolution per minute during a parabola. Noting that he and his colleagues tested the universality of free fall with a modest sensitivity of just 3 × 10^–4, he says that the importance of the work was in showing the suitability of their set-up for space-based tests. “The techniques we developed here could be exploited by many experiments over the next few years,” he predicts.

The team’s next step is to carry out new tests early next year to show how single atoms could be used for “inertial” navigation, which involves continually monitoring a body’s acceleration and rotation over time. Beyond that, some group members are also working to exploit the interferometer technology on a mission known as the Space-Time Explorer and QUantum Equivalence Principle Space Test (STE-QUEST). But according to Bouyer, the roughly €500m satellite will not launch until at least 2025. “It is a big, long-term project,” he says.

The research is described in Nature Communications.

Optics sharpen view of extremely large telescopes

Astronomers around the world are attempting to outsmart the random nature of the Earth’s atmosphere – the very part of our planet that allows all of us to survive and thrive. A cool evening breeze may bring welcome relief after a long day in the suffocating summer heat, but that same gust of wind can wreak havoc on the feeble light signals that travel from distant stars and galaxies to reach telescopes on Earth.

The problem stems from turbulent eddies in the Earth’s atmosphere, which in turn are caused by changes in air density as the temperature fluctuates. Such atmospheric turbulence generates local variations in the refractive index of the air, which means that light from a cosmic object is refracted many times as it travels through the Earth’s atmosphere towards a telescope.

Unfortunately, telescopes that are designed to magnify starlight also amplify these atmospheric irregularities. If you were to take multiple photos of a star in quick succession, turbulence would cause a distorted image of the star to move around its position in the night sky. The buffeting effect of the wind also causes the telescope itself to vibrate – which is only made worse by imperfect mechanical couplings to the drive motors, the enclosure and other sources of vibration. As a result, long-exposure photos show a blurred stellar image rather than a sharp point source, with the size of the blurred image known by astronomers as “atmospheric seeing”.

Telescope designers have introduced optical techniques to correct for the effects of atmospheric turbulence, and these techniques have become ever more important and sophisticated as the light-collecting mirrors have become larger. The Thirty Meter Telescope (TMT) – now being developed in a collaboration between the US, Canada, Japan, China and India – is pushing this technology to new limits, with the ultimate aim of building a ground-based observatory with a spatial resolution more than an order of magnitude greater than that of the Hubble Space Telescope.

The first telescopes to correct for atmospheric effects exploited so-called active optical systems, which provide real-time control of the secondary mirror’s position relative to the main mirror. These active systems incorporate computer-controlled actuators positioned under the main mirror to constantly adjust its shape, helping to control the defects and deformations in the telescope structure, mirror and enclosure that are caused by temperature changes, wind buffeting and mechanical effects.

Such active optical systems greatly improve the quality of the resulting images, but telescope mirrors can only be adjusted at low frequencies, typically around 1 Hz. In contrast, atmospheric turbulence fluctuates on millisecond timescales.

New routes to sharper images

More responsive error correction can be achieved with adaptive optics (AO) systems, which use a reference star in the telescope’s field of view or an artificial laser “guide star” to continually sample the conditions that the light passes through. An optical device called a wavefront sensor analyses the incoming starlight, and prompts a computer to send corrective commands to small, deformable mirrors close to the telescope’s focus. Most telescopes with mirror diameters in the 8–10 m range are equipped with Shack–Hartmann wavefront sensors, in which the focal planes are covered with arrays of small lenslets. In this case, the wavefront distortion is measured for each lenslet, which in turn is fed back into the deformable mirror loop.
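
The geometry behind a Shack–Hartmann sensor is simple: a local tilt of the wavefront across one lenslet displaces that lenslet’s focal spot, and the local slope is just displacement divided by focal length. A minimal sketch, with hypothetical lenslet and spot values:

```python
def local_slopes(ref_spots, spots, focal_length):
    """Convert spot displacements (m) into local wavefront slopes (rad)
    for each lenslet: slope = displacement / focal_length."""
    slopes = []
    for (x0, y0), (x, y) in zip(ref_spots, spots):
        slopes.append(((x - x0) / focal_length, (y - y0) / focal_length))
    return slopes

# Hypothetical numbers: 2 mm lenslet focal length; spots displaced by a
# few micrometres from their flat-wavefront reference positions.
f = 2e-3
reference = [(0.0, 0.0), (1e-4, 0.0)]
measured  = [(2e-6, 0.0), (1e-4 + 1e-6, -1e-6)]
slopes = local_slopes(reference, measured, f)
print(slopes)  # first lenslet sees a 1 mrad tilt in x
```

In a real system these per-lenslet slopes are fed into a reconstructor that computes the commands sent to the deformable mirror.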

Most AO systems in operation today exploit a single guide star, but they can only correct the distortions over small patches of the sky. As an example, the bright spot that can be generated by exciting sodium atoms at a height of around 90 km cannot sample atmospheric turbulence at greater heights. More importantly, the cone-shaped volume defined by the telescope mirror and the laser spot does not allow the AO system to sense the outer portions of the laser’s wavefront, which can cause differential stretching effects when using the laser guide star to correct the target’s wavefront.

The obvious remedy is to exploit multiple laser guide stars, which together can sample the atmosphere above the telescope more homogeneously. As a result, all current designs for the next generation of large telescopes include versions of such “multi-conjugate” AO systems.

Side by side comparison of Milky Way images with and without adaptive optics

Meanwhile, the TMT is taking the AO concept one step further. Just like any other telescope, the TMT can be used as a huge “light bucket” to observe the night sky without any atmospheric correction. But it is also on track to become the first telescope designed with AO as an integral system element, part of a low-risk design strategy that will enable advances in the technology to be incorporated into the telescope as they are developed.

This integrated AO approach is crucial for the TMT to achieve its design goal, which is to routinely provide a diffraction-limited spatial resolution that will be 12.5 times sharper than that of the Hubble Space Telescope. At the diffraction limit the gains in sensitivity scale with the fourth power of the main mirror’s diameter, which means that the TMT will be two orders of magnitude more sensitive than the current generation of large telescopes.
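
Both quoted figures follow directly from the scaling laws: angular resolution improves as the ratio of mirror diameters, and diffraction-limited point-source sensitivity as the fourth power of that ratio. A quick check, taking 8–10 m for today’s large telescopes:

```python
# Angular resolution scales as 1/D; at the diffraction limit,
# point-source sensitivity scales as D^4.
d_tmt, d_hubble = 30.0, 2.4
resolution_gain = d_tmt / d_hubble
print(resolution_gain)  # 12.5, as quoted

for d_current in (8.0, 10.0):        # today's largest optical telescopes
    print((d_tmt / d_current) ** 4)  # ~198 and 81: roughly two orders of magnitude
```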

Inaugural system

When the TMT produces its first images – most likely soon after 2027 – it will be equipped with a multi-conjugate AO set-up incorporating six laser guide stars and two deformable mirrors. This inaugural system, called the Narrow Field Infrared Adaptive Optics System (NFIRAOS), will provide diffraction-limited images over the 0.8–2.5 μm wavelength range, where AO offers optimal results.

The NFIRAOS is expected to reduce wavefront errors to well below 190 nm over a 10–30 arcsecond field of view, in so-called “median seeing”. Given the usual atmospheric conditions at the preferred construction site, Mauna Kea, the highest summit of the Hawaiian islands, this median seeing requirement implies excellent, highly competitive performance. Astronomers have been considering alternative sites as a result of sustained opposition from native Hawaiians, and have just confirmed La Palma in the Canary Islands as their second choice – which promises world-class performance, although not quite as good as on Mauna Kea.

In practice, the NFIRAOS is expected to improve the typical size of a point source from around 0.5 arcseconds in “seeing-limited” mode to about 0.01 arcseconds. A defined upgrade path is set to further reduce the wavefront errors to approximately 133 nm within five years, significantly better than the performance of any current AO system in operation. The TMT will then be blessed with an AO system that can deliver competitive Strehl ratios – a quantitative measure of how much of the light collected by the detector is delivered into the diffraction-limited point source – of 40–75% at wavelengths from 1.2 to 2.2 μm.
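
The quoted Strehl range is consistent with the Maréchal approximation, S ≈ exp(−(2πσ/λ)²), which relates a residual wavefront error σ to the delivered Strehl ratio. A sketch using the 190 nm requirement (the approximation, not any TMT design document, is the source of these numbers):

```python
import math

def strehl(sigma, wavelength):
    """Marechal approximation: S = exp(-(2*pi*sigma/lambda)**2)."""
    return math.exp(-(2 * math.pi * sigma / wavelength) ** 2)

sigma = 190e-9                 # m, the NFIRAOS wavefront-error requirement
for lam in (1.2e-6, 2.2e-6):   # m, the quoted wavelength range
    print(lam, strehl(sigma, lam))
# ~0.37 at 1.2 um and ~0.74 at 2.2 um, close to the quoted 40-75% range
```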

NFIRAOS will mostly be used to obtain near-infrared spectra – either single observations using a “long slit” or multiple targets simultaneously in “integral field unit” mode. Additional AO systems are also being developed for deployment within the first decade of TMT operations: one optimized for spectroscopic observations at mid-infrared wavelengths (small-field, diffraction-limited mid-infrared AO: MIRAO), and another for near-infrared correction of multiple small sky areas over a 5 arcminute field of view (multiple-object AO: MOAO).

Such diffraction-limited operation of 30 m class telescopes will enable a step change in our knowledge of the universe. But advances in technology must proceed in tandem with our scientific understanding for the TMT and its class of next-generation large telescopes to make ground-breaking discoveries well into the future.

LIGO’s gravitational-wave discovery is Physics World 2016 Breakthrough of the Year

Almost exactly 100 years after they were first postulated by Albert Einstein in his general theory of relativity, gravitational waves hit the headlines in 2016 as the US-based LIGO collaboration detected two separate gravitational-wave events using the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO). The first observation was made on 14 September 2015 and was announced in February this year. A second set of gravitational waves rolled through LIGO’s detectors on 26 December 2015, and this so-called “Boxing Day event” was announced in June this year. Gravitational waves are ripples in the fabric of space–time, and these observations mark the end of a decades-long hunt for these elusive undulations.

The measurements also herald the start of the era of gravitational-wave astronomy and multi-messenger astronomy, whereby gravitational-wave observations are combined with those made by optical and radio telescopes and other detectors observing the cosmos. Indeed, LIGO’s twin detectors will soon be joined by a global network of gravitational-wave detectors.

Cataclysmic events

The gravitational waves from both events were produced by cataclysmic events in the distant universe – the collision and eventual merger of two black holes. In the first event, two black holes of 36 and 29 solar masses merged to form a spinning, 62 solar-mass black hole, some 1.3 billion light-years away, in an event dubbed GW150914.

The gravitational waveform was picked up by the then newly upgraded aLIGO detectors – one in Hanford, Washington, and the other in Livingston, Louisiana. In fact, when the signal reached the observatories, both detectors were still being calibrated. Despite this, the signal from GW150914 was so strong and clear that it could be “seen” in the data by eye and was measured to a statistical certainty of 5.1σ.

The waves in the Boxing Day event – dubbed GW151226 – were also generated by colliding black holes. These weighed in at 14 and 8 solar masses, and merged to form a single, spinning 21 solar-mass black hole, some 1.4 billion light-years away. In October 2015 LIGO recorded a third possible event, dubbed LVT151012. Although not statistically significant enough to be a discovery, the team believes this event also arose from two coalescing black holes.

Three in four months

LIGO detected three events in its four months of observation – no mean feat. The instruments are sensitive enough to register a change in the length of their interferometer arms of less than one-thousandth the diameter of a single proton, an incredible feat of engineering.
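
That sensitivity translates into a dimensionless strain that is easy to estimate: taking a proton diameter of order 10^–15 m (an order-of-magnitude assumption) and aLIGO’s 4 km arms:

```python
# Order-of-magnitude strain estimate from the quoted sensitivity.
proton_diameter = 1e-15            # m, order of magnitude only
delta_l = proton_diameter / 1000   # "one-thousandth of a proton": ~1e-18 m
arm_length = 4e3                   # m, the length of each aLIGO arm
strain = delta_l / arm_length
print(strain)                      # ~2.5e-22 dimensionless strain
```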

LIGO has already changed our view of the universe – its observations are the first direct evidence for the existence of black holes. What is more, the stellar-mass black holes that merged in both events do not fit our current understanding: astronomers had thought that such binaries would either not form at all or, if they did, would be too far apart to merge within the age of the universe. The LIGO collaboration had also expected that its first detections would come from binary neutron-star mergers rather than merging black holes, which were thought to be rare. But the data from the recent discoveries suggest that the rate of binary-black-hole mergers is higher than expected.

  • In the video below, LIGO scientists at Cardiff University talk about the first-ever detection of gravitational waves. Looking to the future, they speak about the prospect of a new era of gravitational-wave astronomy.

The top 10 breakthroughs were chosen by a panel of four Physics World editors and reporters, and the criteria for judging included:

  • fundamental importance of research;
  • significant advance in knowledge;
  • strong connection between theory and experiment; and
  • general interest to all physicists.

Now for our nine runner-up breakthroughs, which are listed below in no particular order.

Schrödinger’s cat lives and dies in two boxes at once

Illustration of Schrödinger’s cat in two boxes

To Chen Wang, Robert Schoelkopf and colleagues at Yale University in the US and INRIA Paris-Rocquencourt in France for creating a Schrödinger’s cat that lives and dies in two boxes at once. In this new twist on a much-loved quantum paradox, the boxes that hold Schrödinger’s cat are two entangled microwave cavities. The cats are represented by large ensembles of photons, which exist in each cavity. These ensembles can be in one of two quantum states – alive or dead – and the team managed to put the entire system into a state in which both cats (in both boxes) are both alive and dead until a measurement is made. Besides providing a novel illustration of how Schrödinger’s cat can be in two places at once, the large numbers of photons in such “cat states” could provide a robust way of storing quantum information using error-correction protocols.

Elusive nuclear-clock transition spotted in thorium-229

To Lars von der Wense, Peter Thirolf and colleagues at Ludwig Maximilian University of Munich, GSI Helmholtz Centre for Heavy Ion Research, Helmholtz Institute Mainz and the Johannes Gutenberg University Mainz for detecting the elusive thorium-229 nuclear-clock transition. It has long been a goal of some in the metrology community to produce a “nuclear clock” by locking a laser to a rare low-energy nuclear transition. Such a clock would, in principle, be much more stable than a conventional atomic clock because the nucleus is much less susceptible to interference from stray electromagnetic fields. The predicted 7.8 eV transition in thorium-229 is seen as an ideal candidate – except that physicists had been unable to actually detect it. By doing experiments involving atoms and ions of thorium-229, the team showed that the transition does indeed exist and has energy in the 6.3–18.3 eV range. The next step for the researchers is to improve their measurements so that the energy is known to millielectronvolt precision. This would then allow the transition to be studied using laser spectroscopy.
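For a sense of why laser spectroscopy of this transition is challenging, the quoted photon energies translate into vacuum-ultraviolet wavelengths via λ = hc/E. A small sketch, using the energies from the article and the standard value of hc:

```python
# Convert the thorium-229 transition energies quoted in the article
# into photon wavelengths, using lambda = h*c / E.
HC_EV_NM = 1239.84  # h*c in eV*nm (standard value)

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength (nm) for a transition energy (eV)."""
    return HC_EV_NM / energy_ev

print(f"7.8 eV -> {wavelength_nm(7.8):.0f} nm (vacuum ultraviolet)")
print(f"6.3-18.3 eV -> {wavelength_nm(18.3):.0f}-{wavelength_nm(6.3):.0f} nm")
```

The predicted 7.8 eV transition corresponds to a wavelength near 159 nm, deep in the vacuum ultraviolet where tunable lasers are hard to come by.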

New gravimeter-on-a-chip is tiny yet extremely sensitive

Photograph of the gravimeter in its fused silica support structure

To Giles Hammond and colleagues at the University of Glasgow for building a highly sensitive gravimeter that is both inexpensive and compact. Their tiny device can make very precise measurements of Earth’s gravity and could be deployed in drone aircraft or in multi-sensor arrays to perform a range of tasks, including mineral exploration, civil engineering and monitoring volcanoes. While the gravimeter is not quite as sensitive as the best available sensors, it could be produced for a 1000th of the cost and is also significantly smaller and lighter than current devices. The device is based on a “proof mass”, which is a piece of silicon about 10 mm long that sits on top of two flexible struts. The mass, struts and frame are all made using standard semiconductor-manufacturing processes.

Negative refraction of electrons spotted in graphene

To Cory Dean, Avik Ghosh and colleagues at Columbia University, the University of Virginia, Cornell University, the Japanese National Institute for Materials Science, Shenyang National Laboratory for Materials Science and IBM for measuring the negative refraction of electrons in graphene. Negative refraction is a property of some artificial metamaterials and can be used to create novel optical devices such as a perfect lens. Electrons in materials can behave as waves and negative refraction should also occur at the interface between an n-type and a p-type semiconductor (a p–n junction). It has proven impossible to see this effect in conventional semiconductors because most electrons are reflected at p–n junctions. Dean and colleagues created a p–n junction in graphene and ensured that the interface was very smooth to minimize reflections – allowing them to measure the negative refraction of electrons. Negative refraction could be used to bring a diverging electron beam to a sharp focus and this could form the basis of an electronic switch that consumes very small amounts of energy.

Rocky planet found in habitable zone around Sun’s nearest neighbour

This artist's impression shows a view of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri

To the Pale Red Dot collaboration for finding clear evidence that a rocky exoplanet orbits within the habitable zone of Proxima Centauri, which is the nearest star to the solar system. Dubbed Proxima b, the exoplanet has a mass about 1.3 times that of the Earth and is therefore most likely a terrestrial planet with a rocky surface. Our newly found neighbour also lies within its star’s habitable zone, meaning that it could, in theory, sustain liquid water on its surface, and may even have an atmosphere. Proxima Centauri is a red-dwarf star that is just 4.2 light-years away from the Sun. While Proxima b could be subject to ultraviolet and X-ray radiation that is far more intense than that experienced on Earth, the team says that this does not exclude the existence of an atmosphere. Whether the planet contains liquid water, and ultimately life, depends upon exactly how it formed – according to the team.

Physicists take entanglement beyond identical ions

To Chris Ballance and colleagues at the University of Oxford and Ting Rei Tan and colleagues at NIST in Boulder, Colorado, for creating and measuring quantum entanglement between pairs of two different kinds of ions. The work – which was done independently by the two groups – is an important step towards the creation of ion-based quantum computers based on two or more different kinds of ion. Such hybrid systems would take advantage of the fact that some ions are better than others at performing specific quantum-computing tasks. The Oxford team entangled ions of two different isotopes of the same element – calcium-40 and calcium-43 – whereas the NIST group used beryllium-9 and magnesium-25 as their ions.

‘Radical’ new microscope lens combines high resolution with large field of view

Image of a mouse embryo taken using the new mesolens

To Gail McConnell, Brad Amos and colleagues at the University of Strathclyde for creating a new microscope lens that offers the unique combination of a large field of view with high resolution. Called a mesolens, the device allows a confocal microscope to create 3D images of much larger biological samples than was previously possible – while providing detail at the sub-cellular level. The ability to view whole specimens in a single image could assist in the study of many biological processes and ensure that important details are not overlooked. The researchers used the lens in a customized confocal microscope to image 12.5-day-old mouse embryos. They were able to image single cells, heart-muscle fibres and sub-cellular details, not just near the surface of the sample but throughout the depth of the embryo.

Quantum computer simulates fundamental particle interactions for the first time

To Rainer Blatt and Peter Zoller of the Institute for Quantum Optics and Quantum Information Innsbruck and the University of Innsbruck, and colleagues, for simulating fundamental-particle interactions using a quantum computer. The team used four trapped ions to model the physics that describes the creation and annihilation of electron–positron pairs. While the result can be easily calculated using a conventional computer, problems that are beyond the reach of even the most powerful supercomputers could be solved by the quantum computer if it could be scaled up to include about 30 ions. The team has already built a system with that many ions, but its performance must be improved significantly before it can do practical simulations – something that could be possible within a decade.

The single-atom engine that could

An image of the ion trap inside the vacuum chamber

To Kilian Singer, Johannes Roßnagel and colleagues at the University of Mainz for creating an engine based on just one atom. The team’s heat engine converts a difference in temperature to mechanical work by confining a single calcium atom in a funnel-shaped trap. The researchers then heated the atom using electrical noise, and as its temperature increased, its oscillations in the radial direction became larger, causing it to sample regions of higher potential, sending the particle towards the larger end of the trap. By turning the noise on and off periodically, the researchers caused the atom to oscillate between the two ends of the trap. This motion is damped to prevent the atom from escaping the trap – and the energy required to keep the atom in the trap is the work done by the engine. Their next research goal is to cool the atom further and confine it more tightly, so that it no longer behaves as a classical particle but rather as a quantum wavepacket. This could open the door to studies of the interface between thermodynamics and quantum mechanics.

Freeman Dyson on the physics dream team, Tycho Brahe's heavy metal, Tintin bags an astronomical sum

Freeman Dyson: “so lucky” not to have a PhD. (CC BY-SA 2.0/Jacob Appelbaum)

By Hamish Johnston

What would it be like to have known Hans Bethe, Wolfgang Pauli, Robert Oppenheimer and Richard Feynman? One person who can tell is the theoretical physicist Freeman Dyson, who recounts his extraordinary life in an interview in Nautilus entitled “My life with the physics dream team”. Born in the UK, he got a degree in mathematics at the University of Cambridge before embarking on a PhD with Bethe at Cornell. Remarkably, Dyson did not complete his doctorate – something he seems rather pleased with: “I was so lucky. I slipped through the cracks.”


Flash Physics: Graphene meets Silly Putty, new linear-collider bosses, Majorana pairs spotted

Graphene and Silly Putty make an excellent strain sensor

Extremely sensitive measurements of deformation and impact have been made using a sensor that is a combination of graphene and Silly Putty. Graphene is a layer of carbon just one atom thick that has a number of very useful properties including high electrical conductivity. Silly Putty is a children’s toy that is essentially a lump of a viscoelastic polysilicone material. When mixed together by researchers at Trinity College Dublin and the University of Manchester, the resulting “G-putty” is a good conductor of electricity. However, when the material is subject to even a tiny strain or impact, its electrical resistance increases sharply – before relaxing to its original value as the material “self-heals”. This inspired Trinity’s Jonathan Coleman and colleagues to make a sensor from G-putty that, when mounted on the neck and chest of a subject, could measure breathing, pulse and blood pressure. The sensor was even able to detect the footsteps of a small spider. “The behaviour we found with G-putty has not been found in any other composite material,” says Coleman, adding: “This unique discovery will open up major possibilities in sensor manufacturing worldwide”. The material is described in Science.

International Linear Collider Collaboration appoints new associate directors

The International Linear Collider Collaboration (LCC), which promotes the planning and construction of a new linear collider to complement CERN’s Large Hadron Collider (LHC), has appointed two new associate directors. Shinichiro Michizono of the Japanese particle-physics lab KEK will become associate director responsible for the International Linear Collider (ILC) design effort – taking over from Michael Harrison of Brookhaven National Laboratory in the US. James Brau of the University of Oregon will become associate director for physics and detectors for the LCC – replacing Hitoshi Yamamoto of Japan’s Tohoku University. Both appointments will take effect in January 2017. The ILC and the Compact Linear Collider (CLIC) are currently the two most popular proposals for a next generation of linear colliders. If built, such a facility would smash together electrons and positrons to make very precise measurements of the Higgs particle and other phenomena that occur at collision energies of a few teraelectronvolts.

Pairs of Majorana fermions seen by physicists

Atomic force microscope image of a Majorana fermion

Majorana fermions have been spotted at the end of an atomically thin iron wire by Ernst Meyer and colleagues at the Swiss Nanoscience Institute and the University of Basel. First hypothesized in 1937 by the Italian physicist Ettore Majorana, the fermions are their own antiparticles. While fundamental Majorana fermions have never been detected, they do exist as quasiparticles – collective excitations of electrons in some solids. Meyer and colleagues created their Majorana fermions by growing tiny iron wires (just one atom thick and up to 70 nm long) on the surface of a superconductor. According to calculations by team members Jelena Klinovaja and Daniel Loss, a pair of Majorana fermions should exist in the nanowire – one at each end. Using scanning-tunnelling and atomic-force microscopes, the team was able to see clear evidence of the quasiparticles (see figure). Despite being separated by tens of nanometres, the Majoranas form a quantum state that can either be occupied or unoccupied by an electron. As such, the nanowire could form the basis of a robust quantum bit (or qubit) of information. The research is described in Quantum Information.

 

  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com on Monday when we will reveal the Physics World 2016 Breakthrough of the Year.

Sonic Lamb shift detected in ultracold atoms

The tiny influence that quantized sound waves called phonons have on atomic energy levels has been measured for the first time by physicists at the University of Heidelberg in Germany. Known as the “phononic Lamb shift”, the effect is predicted by the theory of quantum electrodynamics (QED), which describes how charged particles interact with quanta of light. Markus Oberthaler and colleagues say that their experimental techniques – which use excitations in Bose–Einstein condensates (BECs) to simulate phonons, rather than using conventional phonons in a crystal lattice – could be extended to test other predictions of QED.

The phononic Lamb shift is a variation on the original Lamb shift, a minuscule energy shift found between two atomic-hydrogen energy levels in a vacuum. First measured in 1947 by the American physicist Willis Lamb, this shift defied the classical understanding of empty space, which predicted that these two levels should have the same energy.

QED later explained why the classical understanding was wrong: in a vacuum, virtual electron–positron pairs pop in and out of existence. “The word ‘vacuum’ sounds empty, but it’s not,” says Oberthaler. These virtual particles perturb the hydrogen’s single electron, resulting in the small shift in energy levels.

Perturbing phonons

In the phononic Lamb shift, phonons perturb the atom instead of virtual electrons and positrons. In their experiment, the energy shift due to the phonons is about 10,000 times smaller than the spacing between the atom’s principal energy levels, says Oberthaler.

To make this measurement, the group did not use actual phonons in a crystal lattice to perturb their atoms. Rather, they used excitations in a BEC, which is an ultracold ensemble of atoms that are all in the same quantum state. These excitations behave analogously to conventional phonons. The group used lasers to trap and mix several thousand lithium atoms with a BEC made of about one million sodium atoms, all at near-absolute-zero temperatures.

The lithium atoms interact collectively with the BEC excitations to create a quasiparticle called a polaron. Using a technique known as Ramsey spectroscopy, the team measured the polaron’s lowest two motional energy states and compared these to the same energy states of lithium atoms in the absence of the BEC.

Feasible experiments

Mathematically speaking, the interactions between the lithium atoms and the excitations in the BEC are equivalent to electrons interacting with crystal-lattice phonons. But studying ultracold atoms and BECs is experimentally much easier than studying phonons in a crystal lattice, Oberthaler says. “To measure the effect, we either put lithium atoms into a sodium condensate or not,” he says. “And then we compare the results of the two cases.” The analogous experiment in a solid-state system would be to trap an electron within a crystal while turning the lattice vibrations on and off at will – which is not currently feasible.

Oberthaler says that the work paves the way for more experimental tests of QED using BECs. It has been difficult to experimentally confirm predicted QED phenomena, he explains, because many of its predictions rely on physical mechanisms that experimentalists cannot control. For example, one cannot experimentally turn off quantum vacuum fluctuations. BECs offer an experimental alternative, Oberthaler says. Instead of studying the fluctuations themselves, physicists can simulate analogous behaviour in BECs, where these behaviours can be controlled and manipulated to test the theory.

The research is described in Physical Review X. Writing in an accompanying commentary, Vera Guarrera of the University of Birmingham suggests that similar experimental methods could be used to study the Casimir effect, a QED phenomenon in which two uncharged plates placed nanometres apart feel a slight force due to quantum fluctuations. In addition, the techniques could be used to study other many-body phenomena such as superconductivity, she points out.

Copyright © 2026 by IOP Publishing Ltd and individual contributors