Pondering the power law

Hamish Johnston

It’s a long-running joke in the physicsworld.com newsroom that physicists see power laws everywhere. Indeed, a quick scan of the arXiv preprint server reveals physics papers that apply power-law analysis to a wide range of topics from cosmology to geography.

But how many of these studies actually produce useful results? Not many, argue two applied mathematicians in the UK.

A power-law description of nature says that a physical quantity or probability distribution is proportional to some power of another quantity. A simple example is the inverse-square law that describes the gravitational attraction between two masses. A more statistical example is the Gutenberg–Richter law, which describes the number of earthquakes experienced in a location as a function of their magnitude.
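In symbols, a power law simply says that one quantity varies as some power of another. Written out for the two examples above (these are standard textbook forms, not taken from the Science paper):

```latex
% Newton's inverse-square law: the force falls off as the -2 power of separation
F = G\,\frac{m_1 m_2}{r^{2}} \;\propto\; r^{-2}

% Gutenberg-Richter law: N is the number of earthquakes of magnitude >= M in a
% given region and period; a and b are empirical constants (b is typically ~1)
\log_{10} N(M) = a - b\,M
```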

But what does power-law analysis actually tell us about the physical properties of a system? Its proponents argue that if different things – say earthquake frequency and measles outbreaks – share the same power law, then there must be something similar about the fundamental dynamics that drives both systems. This train of thought has already proved very useful in the study of thermodynamic phase transitions, for example, where seemingly unrelated systems change phase in exactly the same manner.

But should physicists expect the same success when power laws are applied to other systems? Writing in today’s issue of Science, Michael Stumpf of Imperial College London and Mason Porter of the University of Oxford argue that, so far, the track record is not very promising.

They argue that many power-law studies have poor statistical underpinnings and don’t shed much light on the underlying mechanisms of the systems of interest. Indeed, they write that “even the most statistically successful calculations of power laws offer little more than anecdotal value”. That is fighting talk, so expect a robust response from the power-law community in the letters pages of Science.
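As a rough illustration of the kind of statistical rigour being called for, here is a minimal sketch (not from Stumpf and Porter's paper) of the maximum-likelihood estimator for a power-law exponent, following Clauset, Shalizi and Newman. It is generally preferred to fitting a straight line through a log–log histogram. The lower cut-off x_min is assumed known here; in practice it must itself be estimated from the data.

```python
import math
import random

def power_law_mle(data, x_min):
    """Maximum-likelihood exponent and standard error for a continuous power law, x >= x_min."""
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    if n == 0:
        raise ValueError("no data above x_min")
    log_sum = sum(math.log(x / x_min) for x in tail)
    alpha = 1.0 + n / log_sum
    sigma = (alpha - 1.0) / math.sqrt(n)   # standard error on alpha
    return alpha, sigma

# Quick self-test on synthetic data drawn from p(x) ~ x^(-2.5), x >= 1,
# generated by inverse-transform sampling.
random.seed(1)
true_alpha, x_min = 2.5, 1.0
sample = [x_min * (1.0 - random.random()) ** (-1.0 / (true_alpha - 1.0)) for _ in range(10000)]
print(power_law_mle(sample, x_min))   # should return roughly (2.5, 0.015)
```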

Hmm, I wonder if the time gaps between letters will follow a power law? You can read all about that particular effect in this Physics World article by Albert-László Barabási, who is one of the discipline’s leading exponents.

You can read Stumpf and Porter’s article here, but it may require a subscription.

Japanese cosmology centre secures long-term future

The future of one of Japan’s leading cosmological research centres could be safe after it was awarded a massive $7.5m cash boost from the US-based Kavli Foundation. The Institute for the Physics and Mathematics of the Universe, which is based at the University of Tokyo, becomes the first centre in Japan to be supported by the foundation. There are now a total of 16 Kavli institutes around the world, including 10 in the US, three in Europe and two in China.

Set up in 2007, the centre will now be known as the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU). It seeks to tackle some of the biggest questions in physics, such as the origin, evolution and fate of the universe as well as the nature of dark matter and dark energy. The work is carried out on an interdisciplinary basis by more than 200 researchers, including theoretical and experimental physicists, mathematicians and astronomers.

The new money is a vital boost for the institute, which was set up as part of a Japanese initiative to attract scientists from abroad to work in the country. A total of five institutes were founded under the country’s World Premier International (WPI) programme, each of which was promised $10m a year for a decade and told to recruit 30% of its researchers from overseas. However, in early 2010 the Japanese government cut the WPI’s budget by 22%, putting question marks over the IPMU’s long-term future. Plans for two new WPI institutes were then axed, leaving the IPMU with a smaller, but still problematic, 3.6% budget cut.

Hitoshi Murayama, director of the IPMU, says that the support from the Kavli Foundation will now help the institute to keep going even when the WPI funding runs out. “The return [from the endowment] is nowhere close to the current funding level, but it is a start,” he told physicsworld.com. Murayama is confident that the Kavli cash will also bring “prestige and international visibility [that] should help the institute to attract and recruit more scientists”. Murayama himself was lured back to Japan to run the IPMU after almost 15 years in the US at the University of California, Berkeley. Currently, some 56% of the IPMU’s staff are non-Japanese.

The Kavli Foundation, based in California, was set up in 2000 by the Norwegian-born physicist and philanthropist Fred Kavli. It sponsors research in astrophysics, nanoscience, neuroscience and theoretical physics. It also awards three prestigious $1m prizes each year as well as funding workshops, symposia, Kavli professorships and a programme for science journalists. “I hope that our support of science in Japan can demonstrate that the quest for knowledge has no boundaries, and that finding the answers to some of science’s biggest and most fundamental questions itself requires international collaboration,” says Kavli.

Who is the most inspiring of the current physics communicators?

By James Dacey

Some have dubbed it the “Brian Cox effect”, others cite a whole raft of reasons, but all concerned agree that physics in the UK has undergone something of a popularity transformation in recent years.

Indeed, applications to study undergraduate physics (including astronomy) increased by 34% between 2004 and 2009, rising year on year. And the trend appears to be continuing unabated.

According to the nuclear physicist Jim Al-Khalili of the University of Surrey – himself something of a media darling these days – there have been 320 applicants for 60 physics places this year at his institution alone, a 40% increase from last year. And this increase has occurred despite a 10% overall decline in applications at the university – blamed on the nationwide rises in tuition fees introduced this year.

To me, it is impossible to attribute the recent resurgence in physics to one specific reason. But I believe it is clear that the likes of Brian Cox and Jim Al-Khalili have helped to rebrand physics, thanks to their passionate communication of science in the popular media and their knack for explaining difficult ideas using simple, everyday concepts.

In this week’s Facebook poll question, please give us your opinion on the following.

Who is the most inspiring of the current physics communicators?

Brian Cox
Brian Greene
Michio Kaku
Lisa Randall
Neil deGrasse Tyson
Stephen Hawking

And, of course, feel free to explain your choice or suggest an alternative communicator by posting a comment on the poll.

In last week’s poll we asked you a question related to particle physics. We wanted to know where you think the International Linear Collider (ILC) – a proposed successor to the Large Hadron Collider (LHC) – should be built.

Some 52% of respondents opted for CERN, the home of the LHC on the Franco-Swiss border. 30% went for Fermilab in the US, which hosted the LHC’s former rival accelerator, the Tevatron, which shut down towards the end of last year. 11% believe it is time for Japan to have its turn, following recent speculation in the Japanese press that the ILC could be built on the island of Kyushu. The remaining 6% believe that the ILC should never be built.

Thank you to everyone who responded, and we look forward to your votes in this week’s poll.

Neutrinos point to rare stellar fusion

Neutrinos captured under a mountain in central Italy have provided the first direct evidence for a nuclear reaction involved in the conversion of hydrogen to helium inside the Sun. The observation was made by the Borexino collaboration, which next hopes to ensnare as-yet-unseen neutrinos from fusion reactions taking place in stars heavier than our own.

Most of the Sun’s heat is generated in fusion reactions that form what is known as the “proton–proton chain”. This begins with the fusion of two hydrogen nuclei (protons) to form heavy hydrogen (deuterium), which then fuses with a third proton to form helium-3; various pathways then lead to the creation of extremely stable helium-4.
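Written out, the dominant branch of the chain looks like this (standard textbook reactions, shown together with the much rarer “pep” branch that becomes relevant below):

```latex
p + p \;\rightarrow\; {}^{2}\mathrm{H} + e^{+} + \nu_{e}          % main pp branch
p + e^{-} + p \;\rightarrow\; {}^{2}\mathrm{H} + \nu_{e}          % rare "pep" branch (1.44 MeV neutrino)
{}^{2}\mathrm{H} + p \;\rightarrow\; {}^{3}\mathrm{He} + \gamma
{}^{3}\mathrm{He} + {}^{3}\mathrm{He} \;\rightarrow\; {}^{4}\mathrm{He} + 2p
```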

Physicists can learn about this chain by intercepting the chargeless, nearly massless particles known as neutrinos that are produced in many of the constituent reactions. In fact, by measuring the fluxes of these particles, they can learn not just about the structure and dynamics of the Sun, but also about the properties of neutrinos themselves. To date, however, most neutrino detectors have been sensitive only to the highest-energy solar neutrinos – those with energies between about 5 and 18 MeV. Yet the vast majority of solar neutrinos have energies below 5 MeV, and Borexino was built specifically to study these particles.

Detection is demanding

Detecting any kind of neutrino is difficult because the particles interact extremely weakly with all other kinds of matter. But capturing the low-energy neutrinos from the Sun is particularly demanding as natural radioactive processes here on Earth generate particles with energies up to about 3 MeV, which can therefore obscure the low-energy neutrino interactions. Like other neutrino experiments, Borexino is located deep underground to protect it from interference from cosmic rays, being housed in the laboratory of Italy’s National Institute of Nuclear Physics at Gran Sasso. And, like other experiments, it contains a large mass of detecting material, in this case about 280 tonnes of a liquid scintillator, which generates flashes of light when neutrinos scatter off electrons inside it. What sets the experiment apart, however, is the extreme purity of the materials used to create it, such as the scintillator itself and the stainless-steel sphere that holds the scintillator – with levels of radioactivity inside each one reduced by up to 10 or 11 orders of magnitude.

In data collected between 2007 and 2010, the Borexino collaboration, made up of physicists from Italy, the US, Germany, France and Russia, had already identified solar neutrinos from the conversion of beryllium-7 into lithium-7. Having a very well-defined energy of 0.86 MeV, these neutrinos were detected at a rate of about 50 a day for every 100 tonnes of scintillator. In the latest analysis, which uses data obtained since January 2008, the researchers observe even rarer events – the detection of solar neutrinos with a precise energy of 1.44 MeV that are generated by the fusion of two protons and an electron in “pep” reactions. Using a new data-analysis technique to mask interference from nuclei of carbon-11, which are produced by the few cosmic-ray particles that make it down to the experiment, the researchers found that, on average, pep neutrinos collide with 100 tonnes of detector material 3.1 times a day.

First direct evidence

According to Borexino spokesman Gianpaolo Bellini, this is the first direct evidence of pep reactions taking place in the Sun, and he says that the observed flux matches well with the predictions of astrophysicists’ “standard solar model”. But he points out that further data will be needed to fully exploit Borexino’s potential as a probe of neutrino “oscillations”. Results from many different experiments over several decades have revealed that neutrinos oscillate from one kind (electron, muon or tau) to another as they travel through space, but physicists would like to know exactly how the strength of these oscillations varies with neutrino energy. Other experiments have shown that theoretical predictions agree well with the data at higher energies, while Borexino’s beryllium-7 result shows that there is also a good fit at the lowest energies. But, says Bellini, more pep neutrinos will have to be detected in order to gather sufficient data at intermediate energies.
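For context, the textbook expectation here (the “MSW-LMA” picture, not a Borexino result) is that the probability of an electron neutrino arriving at Earth still as an electron neutrino interpolates between two plateaus, with the 1.44 MeV pep neutrinos sitting in the transition region:

```latex
P_{ee} \;\approx\; 1 - \tfrac{1}{2}\sin^{2}(2\theta_{12}) \;\approx\; 0.6
   \qquad (E \lesssim 1\ \mathrm{MeV},\ \text{vacuum-averaged oscillations})

P_{ee} \;\approx\; \sin^{2}\theta_{12} \;\approx\; 0.3
   \qquad (E \gtrsim 5\ \mathrm{MeV},\ \text{matter-dominated oscillations})
```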

In fact, the Borexino researchers are currently overhauling their detector to reduce levels of radioactivity still further and then hope to start three more years of data-taking in March or April. These new data might also confirm the existence of neutrinos from a completely different set of fusion reactions that are believed to fuel massive stars and also provide a small fraction of the helium inside the Sun – the “carbon–nitrogen–oxygen” (CNO) cycle, which fuses hydrogen into helium via a cycle of reactions involving those three heavier elements. These neutrinos should interact with Borexino’s detector at a similar rate to the pep neutrinos, but they have a less distinctive energy spectrum that makes them harder to tell apart from the background, although the latest analysis did place a stringent new upper limit on their flux.

Bellini says that detecting CNO neutrinos might also solve the “metallicity puzzle” regarding the composition of the Sun’s atmosphere. Scientists have created a 3D model of the atmosphere that agrees well with spectroscopy data, and which predicts about 30–40% less carbon, nitrogen, oxygen, neon and argon on the Sun’s surface than does an alternative, less sophisticated, 1D model. But it is this latter model that is more consistent with data from helioseismology – the study of the Sun’s interior via the pressure waves that propagate through it. According to Bellini, the observation of CNO neutrinos should settle the matter, since their predicted flux is quite sensitive to the abundance of the various elements in the solar atmosphere.

The work is described in Physical Review Letters.

Listening with a ‘quantum ear’

Physicists are very good at making measurements with single photons of light. Soon, however, they may also be doing routine studies of single phonons – single quanta of sound. That is the claim of physicists in Sweden and Germany, who say they have detected acoustic waves that are so weak they are – almost – at the quantum limit.

Recent years have seen a great effort to work with mechanical oscillations in the quantum regime. In such a regime, a mechanical device would be able to both emit and detect single phonons – just as optoelectronic devices are already able to emit and detect single photons. In 2010 a group at the University of California, Santa Barbara, US, demonstrated that it could create single phonons using a cryogenically cooled mechanical oscillator, thereby taking the first step on the quantum road.

Approaching the quantum limit

Now, Martin Gustafsson of Chalmers University of Technology in Sweden and colleagues have studied the echoes of near-quantum-limited acoustic waves, using a device they call a quantum microphone. In contrast to the Santa Barbara group’s mechanical oscillations, acoustic waves are propagating waves that travel over a surface, like ripples spreading in water. These particular acoustic waves are not quantum mechanical in behaviour, although they are almost weak enough to be at the quantum limit. “You could say that we have shown the way to quantum acoustics, and I think others would agree that this is a very exciting prospect,” says Gustafsson.

The experiment consists of a long, thin chip of the semiconductor gallium arsenide, at the ends of which are transducers that generate acoustic waves. Gallium arsenide is piezoelectric, so any deformations of its structure caused by the acoustic waves generate changes in electric polarization. This polarization, a subtle movement of electrons, is detected by a single-electron transistor – the quantum microphone – which sits midway along the gallium-arsenide chip. The entire apparatus is cooled to 200 mK.

Test for echo

Gustafsson and colleagues at Chalmers and the Paul Drude Institute in Berlin used one of the transducers to generate acoustic waves at a frequency of 932 MHz. These waves travel to the other end of the chip, then bounce back again. Indeed, the waves echo back and forth several times, all the while shifting electrons through the transistor. Using this electron movement as a proxy, and averaging over millions of experimental runs, the transistor is effectively able to detect acoustic waves at the single-phonon level, claim the researchers. The amplitude of the wave is just a few per cent of the diameter of a proton.
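To put “almost at the quantum limit” in perspective, here is a rough back-of-envelope comparison (not a calculation from the paper) of the energy of a single 932 MHz phonon with the thermal energy at the experiment’s 200 mK operating temperature. It suggests a mean thermal occupation of a few phonons, so the waves are near, but not in, the single-quantum regime.

```python
import math

h = 6.626e-34        # Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
eV = 1.602e-19       # joules per electronvolt

f = 932e6            # acoustic-wave frequency, Hz
T = 0.2              # operating temperature, K

E_phonon = h * f                                     # energy of one phonon
E_thermal = k_B * T                                  # characteristic thermal energy
n_thermal = 1.0 / math.expm1(E_phonon / E_thermal)   # Bose-Einstein occupation

print(f"phonon energy        : {E_phonon / eV * 1e6:.1f} micro-eV")
print(f"thermal energy (kT)  : {E_thermal / eV * 1e6:.1f} micro-eV")
print(f"mean thermal phonons : {n_thermal:.1f}")      # roughly 4
```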

Konrad Lehnert, an expert in the quantum behaviour of electromechanical circuits at the University of Colorado at Boulder, US, believes the work has potential. But he thinks studies of true quantum acoustics are still some way off. “The claim of single-phonon sensitivity is frankly overblown,” he says. “To say that one can detect single phonons after averaging is to say that one cannot detect single phonons.”

Coupling qubits

Gustafsson agrees that his group’s experiment is still firmly in the classical regime. For one, he says, the researchers must average the signal from millions of acoustic waves to exclude noise, and each wavepacket itself often contains not one but several phonons – most of the wavepacket passes alongside the transistor undetected. Still, his group has ideas for generating single acoustic phonons by using a superconducting “qubit” that couples to the waves via charge movement, much as the transistor does.

“We compare such an experiment with experiments that have been done very recently with single microwave photons, and find that it should be feasible to do acoustic versions of those,” says Gustafsson.

The research is described in Nature Physics.

The dream of open science

Imagine logging on to your computer in the morning and being presented with a list of 10 requests for your help. The requests have been filtered for you by specialized software from millions of applications filed overnight by scientists around the globe. There is no obligation to reply, but one particular question from a materials scientist in Hungary catches your eye. That researcher is seeking to develop a new crystal but is facing an unexpected hitch concerning the way particles diffuse on a particular lattice structure.

This just happens to be a subject that you – a condensed-matter theorist in California – know like the back of your hand, so you reply with the outline of a possible solution. In breaking the materials scientist’s log-jam, you feel gratified in helping to move science forward, having tackled a problem that would have taken them weeks to solve. Meanwhile, there is also the enticing prospect of developing a long-term collaboration with your new-found colleague and perhaps even writing a paper together.

This not-too-unlikely vision of the future is painted in Reinventing Discovery by Michael Nielsen as one of the exciting possibilities of “open science”. Nielsen, a physicist by training, originally worked in quantum computing and information before quitting research to develop new tools for scientific collaboration. He and other advocates of open science believe that researchers could benefit enormously if only they used the power of modern communications technology to share ideas, data, papers, results – everything, in fact. By collaborating more often and more creatively, scientists could crack problems faster and gain unexpected new insights through what the author dubs “designed serendipity”.

Nielsen’s idea of parcelling out requests for help to others with more time or expertise has already proved its worth as the mechanism underpinning a number of successful “citizen-science” projects. These include the massively popular Galaxy Zoo, in which members of the public are invited to view images of galaxies obtained by the Sloan Digital Sky Survey (SDSS) and classify them as either elliptical or spiral – a task at which computers are notoriously bad. So far more than 250,000 people have contributed to the mammoth task of analysing the SDSS’s 930,000 images, and their efforts have spawned some 25 genuine scientific papers.

Unfortunately, collaborative projects such as Galaxy Zoo are very much the exception in science, rather than the rule, because scientists are a notoriously conservative bunch when it comes to adopting new communications technologies. Nielsen is aware of this problem. He describes, for example, how one “well-known physicist” told him that Paul Ginsparg – the physicist who set up the popular arXiv preprint server used in many branches of physics – had “wasted his talent”. What Ginsparg was doing, the anonymous physicist complained, was “like garbage collecting” and beneath someone of Ginsparg’s abilities.

But as Nielsen points out, an even bigger obstacle to scientists ever fully adopting open science is that there is often no incentive to do so. Why bother sharing your data in an online forum if it is only going to hand your rivals the answers? Why spill the beans on your hard-won ideas in a blog if it will merely let your competitors write those key papers that should rightly be yours? Why, in other words, should you sacrifice your career for some nebulous goal of openness?

Nielsen is honest enough to admit all this, pointing out that “networked science” will not make much progress until scientific papers stop being the currency by which all scientific careers are judged. Publishing papers is, after all, what counts in science – it gets you grants, wins you research professorships and earns you kudos among your peers. As long as scientific papers continue to be the decisive output in science, there can be little hope of any true revolution.

Transforming science, Nielsen suggests, will be a 50-year enterprise and will only properly happen once we as a community learn to value openness and data sharing as much as we value research papers. What we need, for example, is some form of standardized way of allowing experimental data to be cited, just as we have an accepted way of allowing papers to be cited. Journal citations are a valuable and widely used way of judging research quality in quantitative terms – and a similar scheme for data is appealing and intriguing.

It is a shame, however, that the author fails to really develop this concept of a “data-citation-tracking service”. The same goes for the requesting-help concept mentioned above – it is an intriguing idea, but one that is not fully fleshed out. Nielsen’s get-out clause is that, well, we are just at the start of a long process, and how can anyone possibly know how things will develop or predict what different communities of researchers will want? But given that he has spent several years writing the book, Nielsen ought to be as qualified as anyone to make a few predictions – and he could have been more clear-cut in saying exactly what scientists should do next.

Indeed, on several occasions, just as the book appears to be coming to the boil and some great insight seems imminent, Nielsen pulls his punches. No, scientific blogging is probably not going to transform the world. Citizen-science projects such as Galaxy Zoo are important, but “it’s not obvious whether they’re curiosities or harbingers of a broader change”. Creating an open scientific culture seems to require “an impossible change” in how scientists work.

Nielsen also misses an opportunity by not saying more about how online tools can change the very nature of scientific explanation and the ways in which knowledge is constructed. Some areas of science, particularly astronomy and biology, are amassing so much data that computers can extract information that would be impossible for humans to deduce. If we can obtain purely statistical models of complex phenomena, do we still need traditional theories and hypotheses? Can computers reveal deeper truths than humans can? Nielsen raises these questions but does not really offer answers.

One thing that Nielsen is clear about, however, is that open science will never fully progress until funding agencies insist on it. After all, they control the cash that researchers need and pretty much have scientists eating out of their hands. “If the grant agencies decided that as part of the granting process, grant applicants would have to dance a jig downtown,” Nielsen jokes, “the world’s streets would soon be filled with dancing professors.”

Funding agencies have already made some progress in forcing researchers to deposit preprints and new papers in open-access databases, but Nielsen complains that further efforts at opening up the scientific literature are being hampered by dastardly journal publishers. However, in criticizing publishers for charging subscription fees, he (like many scientists) largely ignores the fact that the filtering, archiving and peer-reviewing that publishers perform or manage are not cheap. In particular, much time, energy and money goes into dealing with papers that do not meet a particular quality threshold and so will never appear as a “product”. Any open system that seeks to replace traditional subscription models will have to address this challenge. Nielsen, by and large, does not.

The author concludes by saying that he wrote this book to light “an almighty fire under the scientific community”. I am not quite sure, though, that it will really set the community ablaze. Indeed, one could argue that things have not moved on significantly since the author discussed open science in an article he wrote for Physics World in 2009 (May pp30–35), and which forms the basis for chapters eight and nine of the current book. So although this is a well-written book – Nielsen’s descriptions of citizen-science projects are particularly lucid – it could have been stronger on telling scientists what to do next. Nielsen’s dream of open science is likely to remain just that for some time.

Physicists create new slow-light technique

A physical phenomenon that is widely used to slow and store pulses of light in clouds of atoms has been seen for the first time in a system of nuclear-energy levels. The breakthrough has been made by a team of physicists in Germany that has seen evidence for the phenomenon, known as electromagnetically induced transparency (EIT), as X-rays pass through nanometre-scale layers of iron. The researchers think their method, which is also the first to achieve EIT using just two energy levels rather than the usual three, could lead to the development of devices for controlling X-rays, which is currently very tricky to do.

EIT occurs in special media that do not usually transmit light at a certain wavelength but can be made transparent by applying a second “control” beam of light at a slightly different wavelength. If this control beam is switched on and off at just the right time, EIT can be used to slow down a pulse of light so it is effectively stored in the medium for a second or more.

EIT requires the atoms within the medium to have a specific configuration of three energy levels in which transitions between one specific pair of levels are forbidden. While such three-level configurations can be found in many atomic systems, they are not readily available in nuclear systems. To get round this problem, Ralf Röhlsberger and colleagues at the DESY lab in Hamburg have devised a way to make a two-level nuclear system behave like a three-level system suitable for EIT. The researchers also hope that similar techniques could be applied to other two-level systems such as quantum dots.
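For comparison, in conventional atomic EIT the three levels form a “Λ” configuration: a probe beam couples a ground state |g⟩ to an excited state |e⟩, a control beam couples a second lower state |s⟩ to the same excited state, and the direct |g⟩↔|s⟩ transition is forbidden. Transparency arises because the two fields drive the medium into a “dark” superposition that no longer absorbs the probe – a standard textbook picture rather than the two-level nuclear scheme described here:

```latex
|D\rangle \;\propto\; \Omega_{c}\,|g\rangle \;-\; \Omega_{p}\,|s\rangle ,
\qquad
\langle e|\,\hat{H}_{\mathrm{int}}\,|D\rangle = 0
```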

Iron-57 layers

The experiment comprises two 2 nm-thick iron layers sandwiched between two platinum mirrors. The mirrors themselves are 45 nm apart and the cavity they create supports a standing wave of X-rays. One of the iron layers is positioned at a peak of the standing wave and the other at a trough. The layers are made of the isotope iron-57, which has a two-level nuclear transition at an energy of 14.4 keV. This corresponds to the absorption and emission of a hard X-ray photon. In the standing wave, however, the upper level of the nuclei in the peak is shifted relative to the upper level of the nuclei in the trough – thereby providing a three-level system.
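As a quick back-of-envelope check (not a number quoted in the paper), a 14.4 keV photon has a wavelength of roughly 0.09 nm, firmly in the hard-X-ray regime:

```python
# Convert the 14.4 keV iron-57 transition energy into a photon wavelength
# using lambda = h*c / E, with h*c ~ 1239.84 eV nm.
hc_eV_nm = 1239.84      # Planck constant times speed of light, eV nm
E_eV = 14.4e3           # iron-57 transition energy, eV

wavelength_nm = hc_eV_nm / E_eV
print(f"wavelength: {wavelength_nm:.4f} nm")   # about 0.0861 nm
```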

The team confirmed that the system is suitable for EIT using 14.4 keV X-ray pulses generated by the PETRA III synchrotron at DESY. The researchers looked at two different configurations. In the first, the pulses enter the cavity and then pass through the iron layer in the trough of the standing wave, followed by the layer in the peak. In the other configuration, the pulses pass through the peak layer first. In the latter situation, much of the intensity of the pulse is reflected back from the system – which is what is expected from a process called nuclear resonant scattering that is related to the transition at 14.4 keV.

However, in the configuration where the pulse strikes the trough layer first, there is a sharp dip in the reflectivity – which means that, together, the two iron layers have become transparent to X-rays that they would normally reflect. This, according to Röhlsberger and colleagues, is evidence of EIT.

No control pulse needed

Although the creation of a transparent window shows that EIT is occurring in the system, it is unlike conventional EIT because it does not involve the application of a control pulse. Instead, the electromagnetic interaction between the two iron layers plays the role of the control pulse.

Creating “slow X-rays” using their system should also be possible, according to Röhlsberger. Indeed, he told physicsworld.com that the system could be used to create delay lines for X-rays – something that is currently very difficult to achieve. This new ability to manipulate X-rays could lead to the radiation being used in quantum-information systems, believes Röhlsberger. X-rays are attractive for this application because, unlike visible photons, X-ray photons can be detected at near 100% efficiency. One application could be creating experiments that test possible loopholes in quantum entanglement that are related to detector efficiency.

Röhlsberger also believes that the technique could be used to achieve EIT in optical systems that do not readily have the appropriate three levels – such as quantum dots.

The research is described in Nature.

How supercontinents are born

Geophysicists in the US believe they have finally solved the riddle of how supercontinents form. According to their model, each supercontinent assembles within the “Ring of Fire” subduction zone located 90° from the previous supercontinent. Projecting this into the future, the next supercontinent is forecast to be “Amasia” – a merger of the Americas and Asia.

The supercontinent cycle

The collision of continents into one huge landmass – and their subsequent drifting apart – is thought to follow a cycle of 300–500 million years. The last supercontinent, Pangaea, began to disintegrate about 200 million years ago, and a new supercontinent is expected to form in the future. Two competing hypotheses have previously been put forward to explain how this will happen.

One hypothesis says that the continents will continue to drift apart as they do today, with the Atlantic Ocean continuing to widen – eventually bringing together North America and Asia. In this “extroversion model”, the new supercontinent is an “inside out” version of Pangaea situated on the opposite side of the globe to its predecessor.

Alternatively, the continents may at some point perform a U-turn and drift back towards their starting position. This hypothesis – the “introversion model” – relies on new subduction zones opening up that would allow the Atlantic oceanic crust to sink back beneath the continents. This would close off the Atlantic Ocean, forming a new supercontinent in the same location as Pangaea.

Third time lucky

However, neither of these models successfully explains all of the features of bygone supercontinent transitions. Now, geophysicists at Yale University have developed a third model, which they say provides a better fit to past data.

In their “orthoversion model”, after a supercontinent breaks up, the continents initially drift apart but become trapped within a north–south band of subduction – a relic of the previous supercontinent (on our present-day Earth, this is the Pacific Ring of Fire). The new supercontinent forms in this band, one-quarter of the way around the globe (90°) from the centre of its predecessor.

In order to test their model, the researchers used paleomagnetic data – records of the Earth’s magnetic field preserved in rocks – to study variations in the rotation of the Earth with respect to its spin axis. These variations, known as “true polar wander”, are caused by changes in the planet’s mass distribution; they are the Earth’s attempt to maintain rotational equilibrium – a re-adjustment that takes place over millions of years.

By combining these data with knowledge of how supercontinents affect the Earth’s motion, the researchers were able to calculate the angles between successive supercontinents. Their analysis reveals an angle of 87° between Pangaea and its predecessor Rodinia, and an angle of 88° between Rodinia and its predecessor Nuna. From these two independent measurements, the researchers inferred that the orthoversion model best describes supercontinent transitions.

Waiting for Amasia

If the same mechanism is applied to our current continents, the orthoversion model predicts that the next supercontinent will be Amasia, the union of the Americas with Asia. The Americas will remain in the Pacific Ring of Fire region and the Arctic Ocean and Caribbean Sea will be closed off. This model therefore paints a very different picture to the introversion and extroversion models, which forecast closures of the Atlantic and Pacific oceans, respectively.

“Our model is somewhere in-between the two previous models,” says Ross Mitchell, lead author of the study. “However, we don’t propose a fuzzy combination of these two models; rather, we say that 90° seems to be the answer for every supercontinent cycle historically. It’s nice that the geological record is finally compatible with a larger tectonic model.”

Peter Cawood, a geologist at the University of St Andrews in the UK, says that the study is important because it explains “how we get from one supercontinent to another”. “In the past, we wondered whether there is ‘method in the madness’ of continental reconstructions and the position of continents through time. If this paper is correct, the answer is yes – there is indeed a method (orthoversion) and it is driven by true polar wander,” he adds.

So when can we expect to see this new supercontinent? “Amasia is most likely anywhere between 50 to 200 million years away,” says Mitchell. “I’d be surprised if humans lasted that long!”

The research is described in Nature.

George Smoot on the current state of cosmology

In 2006 George Smoot shared the Nobel Prize for Physics for his studies of the cosmic microwave background (CMB). Smoot famously likened his findings to seeing God, and Stephen Hawking was quoted at the time as saying it was the greatest discovery of the century, if not of all time.

In an exclusive interview filmed at the Lawrence Berkeley National Laboratory (LBNL), Smoot gives his thoughts on the current state of cosmology and reveals whether winning the Nobel prize has altered his approach to science.

“The Nobel prize is both an asset and a weight,” Smoot says. “It’s funny, there are some people who consistently underestimate what I can do. And then there is another set that overestimate – ‘You have a Nobel prize you can do everything’. The truth is somewhere in between.”

Branching out

Since winning the prize, Smoot has continued his studies of the CMB through his role as one of the founders of the Planck satellite that launched in April 2009. But he has also branched out into other areas of cosmology, including the study of gamma-ray bursts – energetic explosions observed in distant galaxies that can reveal details about the early universe.

In the interview, Smoot is also asked to share his thoughts on the state of science in the US and he admits that President Obama has struggled to deliver on the promises he made to science when he came to power. “I think he really would like to see science grow up; he sees that as a way to have innovation and jobs growth and for the economy to grow,” says Smoot.

“The issue you have is that you have to be careful about spending because you’ve been borrowing too much money. But on the other hand, you have to make sure that your growth and innovation continue to be competitive in the world, and that you can eventually pay down your debts,” he cautions.

At LBNL, Smoot works on the same floor as Saul Perlmutter, who shared the 2011 Nobel Prize for Physics for the discovery of the accelerating expansion of the universe through observations of distant supernovae. Indeed, success is a familiar feeling at the Berkeley lab. Since the LBNL was established in 1931, no fewer than 13 of its scientists have won Nobel prizes (across all categories).

The story of success

In a separate video, a number of researchers who work alongside Smoot and Perlmutter discuss what it is like to be at a lab with such a strong tradition of success. The researchers also speculate about where they believe the next Nobel-prize-winning discoveries might lie. And given the history of this lab, you would not bet against some of these cosmologists becoming the Nobel laureates of tomorrow.

Raman technique peers into cabin baggage

Every seasoned flyer knows better than to carry a large bottle of shampoo, perfume or even champagne in their hand luggage. But all that might change, thanks to researchers in Europe who have developed a scanner to be used at airports to screen liquids in opaque or translucent bottles. The scanner is currently on trial at several European airports and might allow the ban on liquids of more than 100 ml in hand luggage to be lifted as early as 2013.

The scanner uses a technology known as spatially offset Raman spectroscopy (SORS). This is a variation on conventional Raman technology that allows a chemical analysis deep within a sample and that can be used to scan everything from bone beneath skin and drugs in plastic packs to liquids in opaque bottles. SORS was invented and developed by Pavel Matousek and collaborators at the Rutherford Appleton Laboratory in the UK in 2004. The new scanner is known as the INSIGHT100 and was developed by Cobalt Light Systems – a company founded by Matousek in 2006 to develop technologies that utilize SORS.

Random photons

Conventional Raman spectroscopy relies on the inelastic backscattering of photons as light interacts with matter. Normally, the scattered photons are detected from the same spot on the sample that has been illuminated. The problem there, explains Matousek, is that Raman signals from surface layers tend to dominate those signals from within the sample. “If you apply Raman spectroscopy to, say, biological tissue or plastic pill bottles, you only get a surface signal…you can’t see deeper,” he explains. To get round this issue, the researchers use SORS to collect photons from a spot a few millimetres away from the illuminated area – a “spatially offset” spot. This works because photons migrate from the illuminated spot and travel through the body of the sample. Thus, SORS delivers a smaller surface signal and a sharper signal from deeper within the sample.

“The [photons] diffuse through the body of the sample…moving sideways and deeper, and then re-emerge at the surface,” says Matousek, and so the measured signal is clear and not restricted to the surface. He goes on to say that a simple analogy for SORS is looking at the sky during the day. “Although we know that there are stars in the sky throughout the day, we cannot see them thanks to the bright light of the Sun. But if an eclipse blocks the light of the Sun, the stars become visible. So with SORS, we are blocking out the surface signal to see the pure signal from the sample body.” The other salient feature of using SORS is that it is always non-invasive.
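One standard way of seeing how the offset measurement “blocks out” the surface is scaled subtraction: subtract a suitably scaled copy of the zero-offset (surface-dominated) spectrum from the offset spectrum, leaving an estimate of the subsurface signal. The toy sketch below illustrates the idea with invented Gaussian bands; it is not Cobalt Light Systems’ actual processing.

```python
import numpy as np

wavenumber = np.linspace(200, 1800, 1601)     # Raman shift axis, cm^-1

def band(centre, width, height):
    """A simple Gaussian Raman band."""
    return height * np.exp(-0.5 * ((wavenumber - centre) / width) ** 2)

surface = band(1001, 8, 1.0)       # e.g. a band from a plastic bottle wall
subsurface = band(880, 10, 1.0)    # e.g. a band from the liquid inside

# Zero-offset measurement: dominated by the surface layer.
zero_offset = 1.0 * surface + 0.1 * subsurface
# Spatially offset measurement: relatively more subsurface signal.
offset = 0.4 * surface + 0.3 * subsurface

# Scaled subtraction: choose the scale factor so the surface band cancels.
scale = 0.4
recovered = offset - scale * zero_offset

i_surf = np.argmin(np.abs(wavenumber - 1001))
i_sub = np.argmin(np.abs(wavenumber - 880))
print("residual surface band :", round(float(recovered[i_surf]), 3))   # ~0
print("recovered liquid band :", round(float(recovered[i_sub]), 3))    # ~0.26
```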

Quick scan

There are myriad applications for SORS. This current one – the INSIGHT100 scanner – uses SORS to scan the products that people carry with them while travelling. The current ban on items of more than 100 ml in hand baggage can only be lifted when airports are able to screen liquids quickly and without opening containers. While X-ray scanners currently do that job, they have a few disadvantages: the main one is that they produce high false-alarm rates, which slow the screening process. The false-alarm rate with the INSIGHT100 is considerably lower, at 1% or less. The INSIGHT100 can screen individual bottles in less than 5 s and also provides high chemical specificity. It can be used for all types of containers in a variety of sizes. The scanner has already passed the stringent testing procedure necessary to allow it to be trialled and is now being used at many major European airports. Matousek also points out that the scanner is to be used in parallel with X-ray scanners. “It complements the existing technology. If something sets off the X-ray scanner, the INSIGHT100 can quickly check and verify it,” he says.

Cobalt Light Systems is also developing SORS technology to be used in many other types of scanners. For example, it is already used in the pharmaceutical industry to test and identify raw materials as they come into a processing plant without needing to open the packaging, and to check in a non-invasive manner if the concentration of active chemical ingredients in a drug is accurate. Other long-term plans include using SORS for in vivo diagnosis of bone disease and cancer detection.
