
Early supermassive black holes could grow it alone

Astronomers know that supermassive black holes at the centres of galaxies existed in the early universe, but how these objects managed to accumulate such heft in a short cosmological timespan is a mystery. Now, a team of researchers in Germany and the US has used a humongous computer simulation to show that cold streams of gas from outside a young galaxy could have fed its central black hole fast enough for the hole to grow rapidly.

Supermassive black holes are furnaces at the centres of galaxies. They suck in vast amounts of matter – which releases energy that causes the gas that surrounds them to glow. Astronomers call these glowing galactic centres quasars, and the UK Infrared Telescope Deep Sky Survey (UKIDSS) has found light from a quasar that was emitted as little as 800 million years after the Big Bang. This quasar and several picked up by the Sloan Digital Sky Survey are considerably brighter than expected. Indeed, they emit so much light that the black holes at their centres must have been enormous, at least a billion times the mass of the Sun.

Assuming that a supermassive black hole begins life as a relatively small black hole at the collapsed core of a massive star that has gone supernova, Volker Springel of the Heidelberg Institute of Theoretical Studies in Germany says that it would need to have fed at its maximum rate from birth onwards to reach a billion solar masses in the time available. “It seems possible, but it’s a bit contrived,” he says. This is because the rate at which a black hole accumulates matter is proportional to its mass, so small black holes grow very slowly.
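A rough back-of-the-envelope sketch makes the timing problem concrete. Growth at the maximum (Eddington) rate is exponential, with an e-folding time of roughly 45 million years for a typical radiative efficiency of about 10% – these numbers are standard textbook values, not taken from the paper.

```python
import math

# Minimal sketch (assumptions, not the authors' calculation): Eddington-limited
# growth gives M(t) = M_seed * exp(t / t_efold), with an e-folding ("Salpeter")
# time of roughly 45 Myr for a radiative efficiency of ~10%.
T_EFOLD_MYR = 45.0      # assumed e-folding time in millions of years
TARGET_MASS = 1e9       # solar masses, as quoted for the brightest early quasars

def growth_time_myr(seed_mass):
    """Time (Myr) for a seed to reach TARGET_MASS growing at the Eddington limit."""
    return T_EFOLD_MYR * math.log(TARGET_MASS / seed_mass)

print(growth_time_myr(10.0))   # stellar-mass seed: ~830 Myr, only just fits
print(growth_time_myr(1e5))    # direct-collapse seed: ~410 Myr, comfortable
```

A stellar-mass seed only just makes it within the 800 million years available, which is why Springel calls the scenario contrived; a 100,000-solar-mass seed has plenty of time to spare.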

Direct collapse

An alternative explanation is that a very large amount of gas – roughly 100,000 solar masses – collapsed directly to form the black hole. Now, Springel and colleagues – including team leader Tiziana Di Matteo at Carnegie Mellon University in the US – have used a computer simulation to show that this scenario is possible.

The team modelled the universe in a virtual box 2.4 billion light-years to a side – a volume that is roughly 1% of the visible universe today. This size of simulation was chosen in order to increase the chances that extremely massive quasars would emerge from the model. Inside the box, gas and dark matter, a form of matter that interacts through gravity alone, were represented by 65.5 billion particles.

“It’s a remarkable achievement to be able to simulate such a huge volume of space to the precision needed to say something about a single black hole,” says Daniel Mortlock of Imperial College London. While the resolution of the study was good enough to look at individual black holes, it had to be coarse enough to make the simulation feasible. As a result, each gas “particle” had the mass of 57 million Suns, while dark matter weighed in at 280 million solar masses per particle.

Billion-year simulation

The simulation covered the timespan from 10 million years after the Big Bang to about 1.3 billion years later. As time progressed, gravity caused the particles to gradually clump together. Once a congregation of gas particles reached a density associated with black-hole formation, the program introduced a particle of 100,000 solar masses into the middle of the clump to represent a black hole. This “seed” could then begin accreting gas particles according to a model of black-hole growth.
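The article does not spell out the accretion law used, but the standard recipe in such simulations is to plant a seed once the local gas density crosses a threshold and then grow it at a Bondi-like rate capped at the Eddington limit. The sketch below illustrates that recipe in arbitrary units; the GasClump structure, the density threshold and the rate normalizations are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

SEED_MASS = 1.0e5        # solar masses, as in the article
DENSITY_THRESHOLD = 1.0  # illustrative trigger for planting a seed

@dataclass
class GasClump:
    gas_density: float
    sound_speed: float
    black_hole: Optional[float] = None

def accretion_rate(m_bh, rho, c_s, eddington_rate):
    """Bondi-like rate, proportional to M^2, capped at the Eddington rate."""
    bondi = rho * m_bh**2 / c_s**3    # physical constants absorbed into the units
    return min(bondi, eddington_rate)

def step(clump: GasClump, dt: float, eddington_rate: float) -> None:
    # Seed a black hole once the clump is dense enough, then let it accrete.
    if clump.black_hole is None and clump.gas_density > DENSITY_THRESHOLD:
        clump.black_hole = SEED_MASS
    if clump.black_hole is not None:
        clump.black_hole += dt * accretion_rate(
            clump.black_hole, clump.gas_density, clump.sound_speed, eddington_rate)

clump = GasClump(gas_density=2.0, sound_speed=1.0)
for _ in range(100):
    step(clump, dt=1.0, eddington_rate=1.0e3)
print(clump.black_hole)
```

Because the uncapped rate scales with the square of the black-hole mass, the supply of cold, dense gas – rather than the accretion physics itself – quickly becomes the factor that limits growth.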

After 800 million years, one black hole had reached 3 billion solar masses, while nine more were close to the billion-solar-mass mark. To find out how they had grown, the team zoomed in on them, finding that those growing the fastest appeared to be fed by dense streams of gas. This picture supports the idea of “cold gas flows” penetrating directly to the black hole without warming up through interactions with the hot gas already in the vicinity. Although black-hole mergers have been proposed as a route to supermassive black holes, merged black holes were not among the largest in the simulation.

“[The simulation] is the first to quantitatively estimate that cold gas flows can deposit large quantities of fresh ‘fuel’ to the centre of galaxies, possibly feeding supermassive black holes even in absence of mergers,” says Lucio Mayer of the University of Zürich, Switzerland. “However, the resolution of the simulations is still too low to ascertain if such gas would directly feed the central black hole.” He suspects that it would be more likely to settle into the disc of gas surrounding the black hole, feeding it more slowly, but this detailed behaviour must be explored with higher-resolution simulations.

The research is reported in The Astrophysical Journal.

Voyagers of discovery

The two Voyager spacecraft have long since earned their place in the space “Hall of Fame”. Since they were launched in 1977, Voyagers 1 and 2 have investigated all the giant planets in our outer solar system, nearly 50 of their moons, and the distinctive systems of rings and magnetic fields that those planets possess. Without a shadow of doubt, their discoveries rewrote several textbooks. Some of the original analyses from these planetary fly-bys have yet to be surpassed by subsequent missions, and scientific papers involving analysis or re-analysis of Voyager science data are still being published.

On a more personal note, I can also say that Voyager 1 changed my life. Its flight past Saturn’s largest moon, Titan, in November 1980 revealed a world so intriguing that more than 20 years later another NASA spacecraft, Cassini, paid Titan a return visit. Cassini carried a project that consumed me for some 15 years: the European Huygens probe, which landed on Titan’s icy surface in 2005.

But perhaps the most astonishing thing about the two Voyagers is that their mission is not yet over. Although they were originally designed with a five-year lifespan, and were only supposed to study Jupiter and Saturn, Voyager 1 currently holds the record for being the furthest man-made object from the Earth, and some of its scientific instruments (and those of its slightly nearer sister craft) are still returning data. The Voyagers were – and are – truly heroic achievements. They form part of a series of great exploration missions undertaken by NASA, starting with the Mariner probes in the 1960s and continuing with remarkable accomplishments at Mars and Venus, as well as Jupiter and Saturn.

Stephen Pyne acknowledges all of this in his book Voyager, which is partly a history of the missions and partly a meditation on their wider significance. Pyne, a historian at Arizona State University, has meticulously researched the Voyager project and has certainly uncovered new and intriguing perspectives on this heroic undertaking. But he also suggests that his book is a study or meditation about context, one that strives to generate new understanding by emphasizing comparisons, contrasts and continuities. To this end, Pyne argues that the Voyager adventures were very much a continuation of humanity’s exploration – making their era, in the words of the book’s subtitle, the “Third Great Age of Discovery”.

Whether Pyne’s book succeeds in making this case must be left to the individual reader to decide, but its breadth and depth cannot fail to entertain, perhaps even to captivate. What is less compelling to space scientists like me – though maybe not to others – is his continual insistence on comparing, contrasting and connecting the Voyager epic to the earlier explorations of Cortés, Columbus, Cook and others. It is true that such missions – whether on ancient wooden ships, overland through the uncharted interiors of Australia or the US, or indeed with the robotic explorers of our solar system – share common threads. As Pyne copiously demonstrates, all of them, Voyager included, are rooted in the economic, political, cultural and technological landscapes of their times. But his decision, at many turns, to relate the Voyager adventure to events in the previous millennium of exploration sometimes seems contrived and forced, rather than demonstrating a genuine association. This is not to deny that any grand undertaking such as this is a product of its wider context, but the parallels drawn in this book do seem overstretched.

Pyne does do several things well, though, and one that I found particularly intriguing was his exposition of the history of the Jet Propulsion Laboratory (JPL) prior to Voyager and how the project established the JPL as the US’s (and in fact the world’s) leading centre for the exploration of our solar system. Many of the leading individuals involved in the JPL’s work are portrayed, demonstrating forcefully that such undertakings are as much about people as they are about science and technology. I can attest to this sentiment based on my own experience; the Cassini–Huygens mission that I was involved in was indeed, like Voyager, a “very human creation”. The most public face of the Voyager missions was Carl Sagan, who was a member of the imaging team. It was he, perhaps above all, who saw that this was more than just a mission of scientific discovery – it was a truly cultural endeavour and, in the broadest sense, one that could be conveyed to a very wide audience through the media.

Pyne argues that golden ages of exploration generally transition into silver ones and he suggests that this has happened in space exploration. After the grand and open-ended Voyager missions, he argues, came “complex single missions”, such as Cassini to Saturn, Huygens to Titan and various sophisticated rovers and orbiters to Mars. If we follow Pyne’s reasoning, we should probably not hark back to the golden age represented by the Voyagers. Instead, we should strive to make sure that we do not sink into a bronze age. There is a danger, of course, of this happening: neither the Voyagers nor even single-mission craft such as Cassini have the immediate utilitarian appeal of missions to monitor the Earth or to provide ever-more bandwidth for our data-hungry world. Yet as we space scientists continually argue to our political masters, these grand projects of exploration do indeed drive our technology, and they also provide inspiration, especially to the young.

In the words of Nobel laureate Steven Weinberg, “The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce.” As Pyne shows, the Voyagers admirably played their part in this effort.

Cell softening could promote tumour growth

Changes in the mechanical properties of living cells may be responsible for the growth of cancerous tumours, according to computer simulations carried out by scientists in the US. Their work suggests that a softening of cells can change the rate at which cells divide and also make cancer cells survive for longer. Taken together, these two factors could lead to the rapid growth of malignant tumours.

Scientists have long known that most cancers occur when genetic or environmental factors cause changes in how living cells behave. In many cancers these changes seem to occur in the mechanical properties of cells. In particular, cancer cells tend to be softer than healthy cells, with some researchers believing that this greater flexibility can lead to rapid tumour growth.

What Roger Bonnecaze and Parag Katira of the University of Texas at Austin and Muhammad Zaman of Boston University have done is to create a 3D computer model that quantifies the connection between cell softness and tumour growth. The researchers model individual cells as shells of viscoelastic material with liquid cores. In isolation, each cell would simply be a sphere, but the cells can also pack together to form a tissue-like material.

Live, die or divide

The researchers modelled the change in mechanical energy involved in creating such a 3D conglomerate in terms of the elastic energy of the shell; the energy released when neighbouring cells stick to one another; and the work done by the osmotic pressure of the cell as its volume changes. These parameters were derived from experimental data for both healthy and cancerous cells.
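The functional forms below are assumptions used purely to illustrate this kind of energy bookkeeping, not the authors' equations: the cost of packing a cell into the aggregate is written as shell elastic energy, minus the adhesion energy gained at cell–cell contacts, plus the osmotic work done as the cell's volume changes.

```python
# Illustrative energy balance for one cell in the aggregate (assumed forms).
def cell_energy(shell_modulus, strain, contact_area, adhesion_strength,
                osmotic_pressure, volume_change):
    elastic = 0.5 * shell_modulus * strain**2      # elastic energy of the shell
    adhesion = -adhesion_strength * contact_area   # released when cells stick together
    osmotic = osmotic_pressure * volume_change     # p dV work as the volume changes
    return elastic + adhesion + osmotic

# Softer (mutant) cells have roughly half the shell modulus of healthy ones,
# so the same deformation costs them less elastic energy.
print(cell_energy(1.0, 0.2, 3.0, 0.1, 0.5, -0.05))   # healthy-like parameters
print(cell_energy(0.5, 0.2, 3.0, 0.1, 0.5, -0.05))   # mutant-like parameters
```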

Bonnecaze and his team ran the simulation until the 3D structure of healthy cells reached its minimum energy configuration, which they used as the initial state for their investigation. The result is a mass of cells that stick to one another at flat interfaces as each cell adopts the shape of a “Voronoi polyhedron”. Such Voronoi polyhedra are objects that can fill 3D space to create a structure that is similar to a tissue of close-packed living cells.

The simulation then allows individual cells to do one of three things: live, die or divide. The probabilities of these events are given by a set of rules that mimic real-life processes – for example, a cell is more likely to die if it is compressed and its surface area shrinks, but more likely to divide if it spreads out and its surface area grows.
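The sketch below shows one way such a rule could be coded; the linear dependence on surface strain and the rate constants are assumptions for illustration, not the authors' actual probabilities.

```python
import random

def choose_fate(area, rest_area, k_death=2.0, k_divide=2.0):
    """Pick live/die/divide based on how far the surface area is from its rest value."""
    strain = (area - rest_area) / rest_area
    p_die = min(1.0, max(0.0, -k_death * strain))      # grows as the area shrinks
    p_divide = min(1.0, max(0.0, k_divide * strain))   # grows as the area expands
    r = random.random()
    if r < p_die:
        return "die"
    if r < p_die + p_divide:
        return "divide"
    return "live"

print(choose_fate(area=0.8, rest_area=1.0))   # compressed cell: elevated chance of dying
print(choose_fate(area=1.3, rest_area=1.0))   # spread-out cell: elevated chance of dividing
```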

Multiplying mutants

When parameters for healthy cells were used, the team found that the simulation reaches an equilibrium in which dead cells are replaced by dividing cells and the overall structure of the tissue remains constant. The team then introduced a small group of mutant cells to the tissue to see if they could mimic tumour growth. These mutants have shells with an elastic modulus that is half that of the healthy cells – thereby making them softer.

When the group consisted of fewer than eight mutants, there was no change in the behaviour of the mutants compared with the healthy cells. However, when eight or more mutants were used, the team saw a significant increase in the multiplication rate of the mutant cells – and the more initial mutants, the greater the rate. This increase in multiplication of the mutants is interpreted as the onset of tumour growth. Indeed, the team says that the simulated growths displayed clinically observed characteristics of malignant tumours.

The team believes that mutants in small groups are surrounded by stiff neighbours and cannot spread out to increase their surface area, which means that they will not divide rapidly. However, above the threshold of about eight cells, some mutant cells are able to push against their soft mutant neighbours and start dividing in earnest.

The team also looked at how the stickiness of the mutant cells affects tumour growth, finding that changing either the strength with which mutant cells stick to each other or the strength with which they stick to healthy cells alters the growth. Interestingly, however, the tumours are most spread out when the stickiness is left unchanged.

A new approach

“To date, cancer research has focused on biochemical factors,” says Katira. “Instead of trying to solve numerous interdependent biochemical carcinogenic factors, we can now focus on a small number of mechanical factors. It’s a new approach.”

Bonnecaze told physicsworld.com that the team is now looking to create an experimental realization of the model in order to study the behaviour of cells within a material. Beyond boosting our understanding of cancer, he believes that the simulations could also help scientists understand how wounds heal and could even point the way to organ regeneration and growth.

The simulations are described in Physical Review Letters.

Nanotube bundles could boost solar cells

Thin-film solar cells could be made far more efficient with the addition of bundles of carbon nanotubes. So say researchers at the Los Alamos National Laboratory in the US, who have shown that the bundles can perform both of the key steps in generating an electric current – the first time this has been demonstrated in a single thin-film photovoltaic material.

Thin-film photovoltaic materials are superior to conventional solar-cell materials such as silicon in that they are cheaper to make, lighter and more flexible. They work by absorbing photons from sunlight and converting these into electron–hole pairs, known as “excitons”. Then, in order to generate an electric current, the electron and hole must be separated quickly, before the two particles recombine and their energy is lost back to the material. In existing solar cells, these excitons usually recombine too quickly, leading to low efficiencies.

Jared Crochet and colleagues believe that this process of separation within thin-film solar cells can be facilitated by adding bundles of semiconducting carbon nanotubes. The researchers have discovered that while individual nanotubes are of little use, the efficiency can be improved if the nanotubes are bundled together into groups possessing the same chirality – the way in which the graphene sheet is rolled up to form a tube, twisting either to the left or to the right.

A promising effect

Such nanotube bundles respond to absorbed light in the same way as the parent material graphene, and so charge separation can be very efficient. “This effect is promising for incorporating carbon nanotubes into photovoltaic devices as active layers where both light absorption and charge separation can occur,” says Crochet.

The materials used in these experiments were produced by centrifuging individual carbon nanotubes so that tubes of the same twist direction and diameter aggregated together. The researchers chose bundles with a diameter and twist that strongly absorb light at a wavelength of about 570 nm – ideal for exposure to sunlight.

By exposing the samples to a brief flash of laser light and recording spectra every few tens of femtoseconds, Crochet’s team was able to observe signals characteristic of excitons being formed, plus additional peaks indicating the production of free electrons and holes. In samples made of non-bundled, individual carbon nanotubes, only the peak corresponding to exciton creation was seen.

Developing for the real world

The researchers now plan to incorporate single-chirality, semiconducting carbon-nanotube networks into real-world photovoltaic devices as active layers. “We would ideally like to see an all-carbon solar cell made of graphene, graphene oxide and carbon nanotubes,” says Crochet.

The team is also busy trying to better understand exciton dissociation and charge transport in the nanotube bundles using the high-speed spectroscopy technique. “The advantage of having the material in a device is that we can investigate every step, from photon absorption to charge collection,” concludes Crochet.

The work was reported in Physical Review Letters.

And the winner is…


By Margaret Harris

Congratulations to Andrew Palfreyman of San Jose, California, for winning the Physics World Quiz of the year 2011. This annual feature tests your knowledge of great and small events that occurred in the physics community over the past 12 months, from the shutdown of Fermilab’s Tevatron to the discovery that building a nuclear reactor in your kitchen is a great way to get arrested (who knew?).

We received quite a few entries this year, and about a dozen of them came from alert readers like Palfreyman who got every question right. If you didn’t win this year, better luck next time; in the meantime, though, here are the answers.

A. Fermilab’s Tevatron accelerator
B. Wrinklon
C. The Allen Telescope Array
D. Jane Fonda
1. Studying how the Sun and aerosols affect the Earth’s climate
2. Mercury
3. Lake Baikal, Russia
4. Subaru 8 m telescope
5. “Heavenly Palace”
6. B (Mobile phones)
7. C (A degree and a PhD)
8. A (Writing research papers)
9. B (String theory)
10. A (They are part of a microgravity experiment)
11. B (David Cameron)
12. A (Jocelyn Bell Burnell)
13. E (Michael Gove)
14. D (John Ellis)
15. C (Athene Donald)
16. B (25)
17. C (The bars contained elevated levels of lead)
18. A (Building a nuclear reactor in his kitchen)
19. C (Jupiter)
20. D (Galileo Galilei)

Kepler space telescope could find exomoons

NASA’s Kepler space telescope could be used to find exomoons, which are the moons of planets orbiting stars other than the Sun. That is the claim of an international team of astronomers, which says that careful analysis of data collected by Kepler could reveal if such exoplanets are circled by moons. The results could have major implications for astronomers’ understanding of how moons form. It could even provide important information about the probability of there being life elsewhere in the universe.

The Kepler telescope was launched in 2009 and keeps its gaze permanently fixed on a single, carefully chosen patch of the Milky Way roughly 10 degrees across. Its main aim is to detect exoplanets by observing the slight drop in light received from a star as one of its planets passes in front of it. So far, hundreds of exoplanets have been found this way.
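For context, the depth of that dip follows a standard textbook relation (not taken from the paper): it is set by the ratio of the areas of the planet's and star's discs.

```latex
% Standard transit-depth relation, shown here for illustration: a Jupiter-sized
% planet crossing a Sun-like star blocks roughly 1% of the light.
\[
  \frac{\Delta F}{F} \approx \left(\frac{R_{\mathrm{p}}}{R_{*}}\right)^{2}
  \approx \left(\frac{R_{\mathrm{Jup}}}{R_{\odot}}\right)^{2} \approx 1\%
\]
```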

Moon, or another planet?

To ensure that it really has detected an exoplanet, rather than just a random, temporary drop in the brightness of a star, Kepler looks for periodic drops in the star’s output. Now, David Kipping of the Harvard-Smithsonian Center for Astrophysics and colleagues from other US universities and the Niels Bohr Institute in Copenhagen want to look for slight variations in this periodicity. The team claims that such variations could indicate that another object besides the star is influencing the planet’s motion – and that object could be a large exomoon.

To claim discovery of an exomoon, however, astronomers would have to rule out other explanations for the variations – such as the presence of other planets orbiting the same star. Kipping and colleagues argue that this can be done by looking more closely at the data from Kepler. Variation in the magnitude of a light dip would provide a further indication that the planet had a moon, because the planet and moon together would block more light when side by side than when one was in front of the other. Furthermore, the change in period would be related to the gravitational pull of the moon, and hence its mass, while the change in brightness would be related to the moon’s diameter. The two measurements together could therefore allow scientists to estimate the moon’s density, giving some clue about its composition.
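A toy illustration of that last step is shown below; the numbers are made up rather than taken from the paper, but they show how a mass inferred from timing variations and a radius inferred from the extra dip combine into a density.

```python
import math

M_EARTH_KG = 5.97e24
R_EARTH_M = 6.371e6

moon_mass = 0.1 * M_EARTH_KG    # hypothetical mass inferred from the timing variations
moon_radius = 0.5 * R_EARTH_M   # hypothetical radius inferred from the extra transit depth

volume = (4.0 / 3.0) * math.pi * moon_radius**3
density = moon_mass / volume    # kg per cubic metre

print(f"density = {density:.0f} kg/m^3")   # ~4400 kg/m^3: rocky rather than icy
```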

Massive moons

Kipping and his colleagues have worked out that Kepler should be able to find moons as small as 0.1 Earth masses. While this is still about four times the mass of the largest moon in our solar system, it is possible that such a moon could form. A smaller planet could become a large moon when captured by a larger planet, for example, or a large moon could be created when two planets collide.

Darin Ragozzine, a scientist at the Harvard-Smithsonian who was not involved in the research, says that failure to detect such large moons will be a valuable result: “My favourite part of this paper,” he explains, “is that, even if nothing is found – which is a distinct possibility – they will still have scientifically valuable results because they will be able to say specifically what can be ruled out.”

Lunar requisite for life?

One of the most intriguing possibilities is that Kepler may find a potential cradle for life outside our own solar system. The search for moons is particularly important in this respect for two reasons. First, very large moons, as might be found around a gas giant, for example, could, in principle, host life. Second, many scientists believe that life could not have evolved on Earth had the moon not been around to stabilize its axial tilt, preventing extreme variations in climate. Planets with a relatively large moon would therefore be more promising as habitable planets. “Without a massive moon, it is not clear how, or if, intelligent life could develop,” says Ben Moore, a computational astrophysicist at the Institute for Theoretical Physics in Zurich.

Nevertheless, Kipping stresses that the project is not a quest to find alien life. “If we found a habitable moon, that would be a dream come true, but it’s not a primary science objective,” he says. “Principally we’re just trying to find moons, whether habitable or not.”

A preprint of the paper is available at arXiv:1201.0752.

New frontiers in medical physics

First is an interview with Simon Cherry from the University of California, Davis, who is the incoming editor-in-chief of the journal Physics in Medicine and Biology. Founded in 1956, the journal counts among its previous editors the Nobel laureate Joseph Rotblat, who famously quit the Manhattan atomic-bomb project on humanitarian grounds and went on to have a distinguished career in medical physics.

In the interview, Cherry focuses on the benefits of “molecular imaging”, which can pinpoint the biochemical and molecular changes that accompany the very early stages of chronic diseases such as cancer or neurodegenerative diseases such as Alzheimer’s. As Cherry explains, such information is impossible to obtain with traditional clinical-imaging techniques such as X-ray or MRI, which largely reveal structural changes in the human body.

Cherry also explains how the Cerenkov effect – a well-established physical phenomenon – is now being exploited within the medical arena. The effect occurs when certain radionuclides, in addition to emitting gamma rays, also give off charged particles that, temporarily at least, travel through tissue faster than light in that medium. The particles emit characteristic “Cerenkov radiation” that can be used for imaging purposes. “Cerenkov luminescence imaging” is particularly useful for radionuclides such as yttrium-90 that do not emit any gamma rays and so are not easy to image by other means.
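The threshold for this effect follows from the textbook Cerenkov condition; the figures below use the refractive index of water (a common stand-in for soft tissue) and are included only for illustration.

```latex
% Cerenkov condition and the resulting electron threshold energy (standard result).
\[
  v > \frac{c}{n} \;\Longleftrightarrow\; \beta > \frac{1}{n}, \qquad
  \gamma_{\mathrm{thr}} = \frac{1}{\sqrt{1 - 1/n^{2}}}
\]
\[
  E_{\mathrm{kin,thr}} = (\gamma_{\mathrm{thr}} - 1)\, m_{e} c^{2}
  \approx (1.52 - 1)\times 0.511~\mathrm{MeV} \approx 0.26~\mathrm{MeV}
  \quad (n = 1.33)
\]
```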

In the second interview, Uwe Oelfke from the German Cancer Research Center in Heidelberg explains how image-guided radiation therapy (IGRT) can address one of the key challenges in modern radiotherapy – namely how to deliver a lethal dose of radiation to a tumour while sparing surrounding healthy tissue. The problem is that radiotherapy generally involves directing an invisible beam at an invisible tumour, based on patient images acquired before the treatment. Oelfke explains how IGRT involves acquiring additional images of the patient in the treatment position, immediately before or during radiation treatment, ensuring that the beam is precisely targeted at the tumour.

Finally, Freek Beekman, head of radiation, detection and medical imaging at the Delft University of Technology in the Netherlands, discusses the use of a technique known as “single-photon emission computed tomography” (SPECT), which can create 3D images of the distribution of gamma rays emitted by a radionuclide. SPECT is particularly useful for imaging small animals, which is invaluable for developing new therapeutic strategies for human medicine, offering, for example, a way of tracking the response of pharmaceuticals in animal models of cancer, diabetes and other diseases. Beekman also discusses the work of MILabs, a company that he founded to commercialize pre-clinical molecular imaging and that offers a range of SPECT systems.

Copper collisions create much strangeness

Colliding pairs of copper ions produce significantly more strange quarks per nucleon than pairs of much larger gold ions. That is the surprising discovery of physicists working on the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory in the US. The finding gives further backing to the core–corona model of such high-energy collisions and could shed further light on the quark–gluon plasma – a state of matter thought to have been present in the very early universe.

Quarks are normally bound up by gluons inside particles such as protons, and it takes a high-energy collision to offer a glimpse of free quarks. If large nuclei such as gold or lead are smashed together at high enough energies, the result is expected to be a soup of free quarks and gluons called a quark–gluon plasma. In addition to boosting our understanding of the strong force that binds quarks together, a quark–gluon plasma is thought to provide a microscopic picture of the very early universe.

When heavy nuclei are collided at RHIC, they generate a fireball that dissipates much of its energy by creating new particles. Some of these particles contain strange quarks – the lightest of the exotic quarks – and a relatively large number of strange quarks produced in a collision can imply the presence of a quark–gluon plasma. This is because an unconfined quark in a plasma behaves as if it is lighter than a quark confined in a nucleon, and this effective reduction in mass means that generating strange quarks does not take as much energy. For this reason, those hunting quark–gluon plasmas pay close attention to the number of strange quarks that crop up in particle collisions – the number should be larger than expected if the plasma is produced.

Something strange about copper

When trying to create a quark–gluon plasma, it is normally thought that the larger the nucleus, the better. As such, RHIC normally collides gold ions and the Large Hadron Collider (LHC) smashes lead. But now, physicists in the STAR collaboration at RHIC have found that copper–copper collisions produce 20–30% more strange quarks per nucleon than their gold–gold counterparts. The latest study involves about 40 million copper–copper collisions and 20 million gold–gold collisions, all of which were carried out at an energy of 100 GeV per nucleon.

Each copper ion contains a total of 63 nucleons – 29 protons and 34 neutrons. If copper–copper collisions produce more strange quarks than 63 proton–proton collisions at the same energy, then this is called a “strangeness enhancement”, which could be evidence that the collision created a quark–gluon plasma.
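The comparison is usually expressed as a yield per participating nucleon. The sketch below shows the arithmetic; the yields are invented numbers chosen only to illustrate a roughly 25% excess of the copper sample over a matched gold sample, as reported by STAR.

```python
# Illustrative only: "strangeness enhancement" is the strange-particle yield per
# participating nucleon in the heavy-ion collision, divided by the same quantity
# in proton-proton collisions at the same energy.
def enhancement(yield_heavy, n_participants, yield_pp, n_participants_pp=2):
    per_part_heavy = yield_heavy / n_participants
    per_part_pp = yield_pp / n_participants_pp
    return per_part_heavy / per_part_pp

# Central Cu+Cu involves 126 participants; the matched Au+Au sample is compared
# at the same participant number. Yields below are made up for illustration.
print(enhancement(yield_heavy=252.0, n_participants=126, yield_pp=1.6))   # Cu+Cu
print(enhancement(yield_heavy=201.6, n_participants=126, yield_pp=1.6))   # matched Au+Au
```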

There is, however, an alternative explanation for why colliding copper nuclei produce more strange quarks than protons do. It could be that the production of strange hadrons (particles containing strange quarks) is suppressed in proton–proton collisions because of the requirement that strangeness must be conserved. Conservation rules require that for every strange quark, its antimatter version (the antistrange quark) must be produced. In collisions involving smaller nuclei, where fewer particles are generated, the burden of making extra antistrange quarks means that particles containing two or more strange quarks are harder to create. This limitation brings down the number of strange quarks generated on average in proton–proton collisions.

Almond-shaped collision

The team compared gold and copper collisions with the same number of “participating” nucleons. Because gold has 197 nucleons, many more than copper, the gold nuclei had to sideswipe one another rather than crash head on in order to produce a collision involving 126 nucleons or fewer – the number involved when two copper nuclei collide head on. This results in an almond-shaped overlap region of protons and neutrons, rather than the more circular region produced in head-on copper collisions.

“The canonical picture says that the strangeness enhancement should just depend on the number of participants,” says Anthony Timmins, a STAR collaborator at the University of Houston. But if that were true, the copper collisions would not have come out significantly stranger than the gold collisions.

As an alternative, the collaboration suggests that the core–corona model describes the data best. In this picture, the colliding nucleons form a core of quark–gluon plasma surrounded by ordinary nucleon–nucleon collisions. A relatively compact copper collision collects its energy in a smaller space, meaning that more nucleons join the quark–gluon plasma and produce strange quarks. Meanwhile, more nucleons in the almond-shaped gold sideswipe are lost to collisions in the corona, thus generating fewer strange quarks.

Other particles get a boost

Aneta Iordanova, a former STAR collaborator who is now at the University of California, Riverside, is particularly interested in the fact that other particles – without strange quarks – also get a boost in the head-on copper collisions compared with the gold collisions. “If the particles produced in the core region dominate the particle production as a whole, then an increase in the yield of all particles, strange or not, is expected,” she says.

Federico Antinori, physics co-ordinator for the ALICE experiment on CERN’s Large Hadron Collider in Geneva, Switzerland, calls this a “bonus point” for the core–corona model. “It’s not the final proof, but it’s interesting to note that this model does rather well at explaining the data,” he says. ALICE collaborators presented their first look at particles containing multiple strange quarks coming out of lead–lead collisions last year, and although quantitative comparisons with the core–corona model have yet to be made, he notes that the behaviour is qualitatively similar.

This research will be published in Physical Review Letters, and a preprint is available at arXiv.

Mystery surrounds death of Oxford astrophysicist

A postmortem on an Oxford University physicist who was found dead at the home of a colleague has proved inconclusive. Astrophysicist Steve Rawlings, 50, a fellow at St Peter’s College in Oxford, was found dead on the evening of Wednesday 11 January. Thames Valley Police have now launched a murder investigation following the arrest of a 49-year-old man – reported to be Oxford mathematician Devinder Sivia – in connection with the incident.

Thames Valley Police were called to a house in Laurel Drive, Southmoor, at 11.22 p.m. on Wednesday night by a neighbour who reported that there had been an incident within the property – alleged to be Sivia’s home – and that a man had been injured. The neighbour attempted to resuscitate him but when officers and paramedics arrived on the scene, Rawlings was pronounced dead. Thames Valley Police say that a postmortem examination carried out on Rawlings yesterday has proved inconclusive.

It is alleged that Rawlings and Sivia had an argument after returning to Sivia’s house late on Wednesday evening. Sivia remained in police custody until late this morning but has now been released on bail until 18 April. Rawlings and Sivia co-wrote a book called Foundations of Science Mathematics, which was published in 1999 by Oxford University Press. “We are all stunned here,” Roger Davies, head of astrophysics at Oxford University, told physicsworld.com.

Saddened and shocked

Rawlings was an observational cosmologist who worked on the physics of extragalactic radio sources and active galactic nuclei. He was involved in the Square Kilometre Array – a next-generation radio telescope that will be built in Australia or Southern Africa – and played a prominent role in the redevelopment of the Goonhilly Satellite Earth Station in Cornwall as a radio-astronomy facility.

Rawlings completed a PhD at the University of Cambridge in astrophysics and remained there to do postdoctoral work before moving to Oxford in 1992. He became a lecturer in astrophysics and later a professor, serving as head of astrophysics at Oxford from 2006 to 2010. Until his death he taught vector calculus to first-year undergraduates and was a Tutorial Fellow at St Peter’s College.

A statement on Oxford University’s website says “We are immensely sad to report the death of our much-loved colleague Professor Steve Rawlings. Steve has for many years been a creative and inspirational colleague, and we shall miss him greatly. The heartfelt condolences and sympathies of all of us go out to his wife and family.”

At the closing of the 219th American Astronomical Society meeting in Austin, Texas, yesterday, a minute’s silence was held in memory of Rawlings. “The entire university community has been profoundly saddened and shocked by the tragic and untimely death of Professor Steve Rawlings,” says Andrew Hamilton, vice-chancellor of Oxford University. “Our thoughts are with his family and friends.”

“He was a much liked and admired tutor and colleague within the College and will be greatly missed,” says Mark Damazer, master of St Peter’s College. “We extend our deepest sympathy to his wife Linda.”

Which scenario is the most likely to end civilization as we know it?

By James Dacey


On Tuesday, the famous Doomsday Clock swung a minute closer to midnight, suggesting that humanity has recently edged slightly nearer to self-destruction. The time on the Doomsday Clock now reads five minutes to midnight, having been wound back to six minutes before midnight in January 2010.

The Bulletin of the Atomic Scientists (BAS), which created and controls the clock, attributed the change to inadequate progress on nuclear-weapons reduction and proliferation, and to continuing inaction on climate change.

We address this rather gloomy topic of doomsday scenarios in this week’s Facebook poll, in which we ask the following question:

Which scenario is the most likely to lead to the end of civilization as we know it?

A nuclear world war
Runaway climate change
An asteroid impact
An act of bioterrorism
An alien invasion

To cast your vote, please visit our Facebook page. And, of course, if you believe that some other ghastly scenario is more likely to bring us to an unsavoury end, please feel free to post a comment.

In last week’s poll we asked you to select the person you believe to be the greatest living physicist from a shortlist of five. It quickly became a two-horse race between the two Steves: Steven Weinberg and Stephen Hawking, that is. But in the end Weinberg narrowly won out, gathering 36% of the vote, compared with Hawking’s 34%. In third place was Ed Witten, accruing 15% of the vote. In fourth place was Philip Anderson with 14%, and in last place was Frank Wilczek with just 2% of the vote.

The poll also attracted a lot of comments, and various other scientists were proposed for the mantle of greatest living physicist. The suggestions included Murray Gell-Mann, Leonard Susskind, Gerard ‘t Hooft, Sean Carroll and Peter Higgs.

Thank you for all of your votes and comments and we look forward to hearing from you again in this week’s poll. And we promise to ask a slightly more cheerful question next week!
