
Magnetic erosion could explain heavyweight Mercury’s density

Mercury’s high density and large iron content could have been caused by magnetically excited collisions within part of the Sun’s protoplanetary disc, suggests new research from scientists at the American Museum of Natural History. These collisions could have knocked off the rocky, non-magnetic parts of the dust grains, leaving behind iron-enriched material from which Mercury could have formed.

Compared with the other rocky planets in the solar system, Mercury is unusually dense. The planet is estimated to contain about 70% iron by mass, compared with the near-30% values attributed to Earth and Venus. First recognized almost half a century ago, this peculiarity is an enduring puzzle. Explanations for the anomalously high iron content are varied, with popular theories including the removal of silicates from the surface of the young Mercury by a giant impact, or evaporation within a hot solar nebula.

Puzzling potassium

Recent measurements taken by NASA’s MESSENGER spacecraft, however, appear to have ruled out many of these models, including the evaporation hypothesis. Any such evaporation of silicates would require high enough temperatures to also remove potassium – a result that contrasts with the terrestrial potassium/thorium ratios MESSENGER has measured. While the giant-impact hypothesis remains viable – provided that the displaced material did not amass again onto the planet’s surface – some newer theories propose instead that Mercury’s iron enrichment might have occurred very early in its history, when tiny dust grains in the protoplanetary disc initially combined to form larger boulders and planetesimals.

In the new work, astrophysicist Alexander Hubbard proposes one such model. “In the solar nebula, at the position that Mercury now occupies, the ambient magnetic field was surprisingly strong,” he says. A narrow window exists in which the temperature is hot enough to support an amplified magnetic field – brought about by the differential rotation of the inner and outer parts of the protoplanetary disc – and yet cool enough to lie below iron’s Curie temperature. Here, the field would have been sufficient to magnetically saturate the iron-rich grains, causing them to violently smash together. “These collisions could have knocked off the rocky bits of the dust grains, in a process we name ‘magnetic erosion’,” says Hubbard. “The surviving iron-rich dust would have gone on to form Mercury.”

Lost silicates

The silicate-rich particles, in contrast – as well as being stripped from the growing iron grains – would be prevented from combining, Hubbard proposes, by the negative charge they accumulate, which would cause them to repel one another. Iron-rich particles, which can rearrange their charges, do not suffer this limitation. Unable to grow at the same rate as the iron, the silicates would eventually be lost to the host star. According to Hubbard, the very limited region in which magnetic erosion could occur explains why similar levels of iron enrichment are not seen in our solar system’s other rocky planets.

Magnetic erosion is “certainly an interesting idea”, says Hannah Jang-Condell, an astronomer from the University of Wyoming who was not involved in this study. Jang-Condell expresses concern, however, about the temperatures required for the erosion to take place – at around 1000 K, this would be “fairly close to the sublimation temperature for silicate grains (∼1500 K)…[giving] a very narrow range of radii in which this effect could take place”. In addition, she notes, typical models of protoplanetary discs around Sun-like stars only produce temperatures as high as 1000 K at Mercury’s orbital radius in young, rapidly accreting systems – leaving questions as to whether the discs’ cooling would leave enough time for magnetic erosion to occur.

Steven Desch, a theoretical astrophysicist at Arizona State University who was also not involved in the study, also has reservations. “[The paper] assumes dust grains have high electric charges that prevented them from approaching each other, but in reality dust in the solar nebula was probably electrically neutral,” he says, explaining that, if that were the case, grains would stick together regardless of their magnetism. “Collisional stripping of Mercury’s silicate mantle remains a viable, and the most probable, mechanism for explaining its high core mass,” he says.

The research is described in the journal Icarus.

Why we’re five years overdue for a damaging solar super-storm

The cover feature of the August issue of Physics World, which is now out in print and digital formats, looks at the Sun – and in particular, at the consequences here on Earth of a “solar super-storm”. As I point out in the video above, these violent events can disturb the Earth’s magnetic field – potentially inducing damaging electrical currents in power lines, knocking out satellites and disrupting telecommunications.

One particularly strong solar super-storm occurred back in 1859 in what is known as the “Carrington event”, named after the English astronomer Richard Carrington, who spotted a solar flare that accompanied it. The world in the mid-19th century was technologically a relatively unsophisticated place and the consequences were pretty benign. But should a storm of similar strength occur today, the impact could be devastating to our way of life.


New holographic waveguide augments reality

A new optical gadget that uses holographic technology looks set to transform wearable, augmented-reality displays. That is the claim of engineers in the UK, who have developed a device that could be incorporated into a variety of existing technologies and allows users to overlay full-colour, 3D, high-definition images onto their normal line of sight, so that the imagery interacts with their surroundings. This, the researchers say, sets it apart from similar augmented-reality (AR) technologies such as Google Glass and virtual-reality devices such as Oculus Rift.

Immersive information

In recent years, immersive augmented-reality or virtual-reality mobile or computer displays have become more commonplace. From greeting cards that play videos when scanned by a mobile phone to futuristic windshield displays in luxury cars, the line between digital and analogue visual information is being blurred by new technologies. But developing AR displays that seamlessly integrate digital information into the everyday environment can be a challenge. Particularly difficult is overlaying high-quality colour images that are still transparent enough not to obscure the field of vision.

Now, developers at a UK-based company, TruLife Optics, together with researchers from the adaptive-optics group at the National Physical Laboratory (NPL) near London, have overcome this overlay problem. They have created an optical component consisting of a waveguide – a rectangle of high-quality glass or plastic that acts as the lens – with two postage-stamp-sized holograms overlaid onto it. TruLife Optics is a spin-off from Colour Holographic – a company with expertise in producing holograms.

Earlier this month, TruLife Optics launched its glass waveguide/hologram systems, which can be bought by AR device developers for about £360. The devices are not standalone products and must be incorporated into an eyewear frame, along with a microdisplay, which provides the input image.

Reflecting waveguides

The waveguide itself is about 10 cm long, 3 cm wide and 2.8 mm thick, including the two holograms. The holograms provide a convenient way of routing light in a controllable manner. In the team’s device, incoming images from a microdisplay are routed into the first hologram, where the light is turned 90° through the length of the waveguide, via total internal reflection. Then the light hits the second hologram, where it is turned a further 90° so it is projected into the human eye. This means that the overlaid transparent images are projected from the centre of the device into the eye and are perfectly focused.
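For readers wondering why the light stays inside the glass between the two holograms, the standard condition for total internal reflection gives a feel for it. (This is a back-of-the-envelope sketch assuming a typical glass refractive index of about 1.5; the exact materials in the TruLife Optics component are not specified here.)

```latex
% Critical angle for total internal reflection at a glass-air surface
\sin\theta_c = \frac{n_{\mathrm{air}}}{n_{\mathrm{glass}}} \approx \frac{1.0}{1.5}
\qquad\Rightarrow\qquad \theta_c \approx 42^\circ
```

Light that the first hologram redirects along the length of the waveguide meets the top and bottom glass–air surfaces at angles (measured from the surface normal) well beyond this critical angle, so it is trapped and guided until the second hologram turns it back out towards the eye.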

Using holograms also means a component can be created that is, at most, 2.8 mm thick, making it easy to incorporate into any eyewear. The researchers claim that the image projected “does not lose any fidelity or resolution and is focused to accommodate the eye, thus avoiding the need to squint or move your eye to see the information”. TruLife Optics also plans on making bespoke components that match the requirements of individual developers. The team has a “developer zone” to help buyers and a hackspace, where it encourages customers using its product to discuss possible improvements to the device and contribute towards its evolution.

Simon Hall, a senior researcher at NPL who was key in the development of the component, told physicsworld.com that in the years ahead, the team hopes to produce workable demonstrators for curved waveguides (curved lenses for glasses) and be able to carry out aberration correction to allow the use of prescription lenses. “Adaptive systems allow the image focal plane to follow the accommodation state of eyes. We hope that in five years’ time a number of AR systems available in the market would incorporate our technology,” says Hall. “There are multiple specialist applications of this technology…medical and industrial applications could also be produced.”

Real-time applications

Indeed, there could be a range of applications for the device, including entertainment and educational technologies plus displays built into car windscreens and shop windows. Hall also describes more specific applications such as an infrared version that could be developed for firefighters or eyewear worn by doctors during surgery. With the latter, Hall describes a scenario where a colleague in a distant city or country could view and advise on a surgical procedure in real time. The team also envisions that its technology could be useful for experimental scientists and engineers who could overlay a schematic plan on equipment they are working on.

A SKA for astronomy

The Square Kilometre Array (SKA) promises to usher in a new era in radio astronomy. Astronomers will use the telescope to probe the early universe by looking as far back in time as the first 100 million years after the Big Bang. It will also be employed to search for life and planets, as well as to study the nature of dark energy. This video takes you on a tour of the sites in Australia and southern Africa that will host the SKA, featuring artists’ impressions of the planned telescope hardware. The film will also transport you to the headquarters of the SKA Organisation in the UK, where scientists and engineers describe the challenges and opportunities that lie ahead.

When completed, the SKA will be the world’s largest radio telescope, with a total collecting area of one million square metres. Construction of the first phase is scheduled to begin in 2018. This will see an array of 254 dishes being built in South Africa’s Karoo region covering the bulk of the high and mid-frequencies of the radio spectrum. Meanwhile, the Murchison region in Western Australia will host the low-frequency section of the array with 96 dishes accompanied by approximately 250,000 individual dipole antennas.

Engineers involved in the SKA project are full of impressive facts about the scale of the technology infrastructure. For instance, they say that the amount of data collected by the array will be equivalent to 10 times the global Internet traffic. And given its processing capabilities, the array will be able to survey the sky 10,000 times faster than any existing radio telescope and at a sensitivity that is 50 times greater. To put the latter figure in perspective, it means that the SKA would be able to detect an airport radar signal on a planet tens of light-years away.

“I think it’s fair to say that the SKA really represents the next step in the evolution of low-frequency radio astronomy,” says Jeff Wagg, a SKA project scientist featured in the film. “Observing the universe at low radio frequency not only tells us about the evolution of gas in our own galaxy and other galaxies, but also tells us about the evolution of star formation in the universe.”

In addition, the film takes a look at some of the precursor telescope arrays that are being developed in both host nations as a means of testing some of the SKA technologies. South Africa has the MeerKAT array, which is currently under construction in the Karoo and had the first of its 64 antennas inaugurated in March. Meanwhile, Australia has the Murchison Widefield Array (MWA), which is already up and running. It also has the Australian Square Kilometre Array Pathfinder (ASKAP), which astronomers are currently commissioning and testing.

“There are some really exciting images coming out of those instruments right now,” says SKA engineer Roshene McCool, referring to developments at ASKAP and MeerKAT. McCool says that as well as being important scientific instruments in their own right, the SKA precursor projects will also return a lot of practical information about building telescope arrays in these environments. “The design and the construction of those telescopes has built both infrastructure and also human capital in those areas so that we have skilled people who understand what is actually quite a specialized area,” she says.

The July issue of Physics World features an update on the SKA project, including the surprise news that Germany has announced its intention to withdraw from the project. Members of the Institute of Physics (IOP) can read the article in the magazine. Being an IOP member gives you a full year’s access to Physics World both online and through the app.

Physics on babies’ bottoms

About 10 years after I left university, I went to a reunion of my former classmates. When we talked about our jobs, they were stunned when I told them I was still doing physics pretty much every day. “But I thought you worked on baby nappies!” I confirmed this was accurate, but added that I had just hired a theoretical physicist to help me develop the differential equations for urine transport through nappies.

My classmates thought I was being funny, but I was just telling the truth. I work in the research and development (R&D) division at Procter & Gamble (P&G) and we joke that what we do is not rocket science – it’s harder. The fact is that many commercially available software codes for simulating fluid flow and mechanics do not readily work for nappies. As a result, we continuously face situations where either the theory that describes relevant phenomena does not exist or the simulations are numerically unstable owing to challenges that are specific to our systems.

For example, unlike in geological materials, where the pore structure is typically relatively stable during fluid flow, the materials used for consumer products such as nappies are soft and can deform because of wet collapse or external conditions. In addition, the swelling of super-absorbent materials produces large changes in the dimensions of the pore structure. This poses a range of challenges when developing methods of physically characterizing the materials and theories of how they behave. In short, the stereotypical view that “consumer products are simple to use – therefore they are simple to understand” is almost completely wrong. After 23 years “in baby diapers” (as the Americans would phrase it), I still find myself using physics every day and having a lot of fun too.
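To give a flavour of the kind of physics involved, liquid transport through an unsaturated porous absorbent is often described, as a starting point, by a Richards-type equation. (A minimal textbook sketch, not P&G’s proprietary model; the soft, swelling materials described above need substantial extensions to this form.)

```latex
% Richards equation for unsaturated flow in a porous medium:
% theta = volumetric liquid content, psi(theta) = capillary pressure head,
% K(theta) = hydraulic conductivity, z = vertical coordinate
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \bigl[\, K(\theta)\, \nabla \psi(\theta) \,\bigr]
  + \frac{\partial K(\theta)}{\partial z}
```

The catch, as described above, is that in a nappy both K and the pore structure behind it change as the super-absorbent swells and the soft material deforms, which is exactly where off-the-shelf simulation codes struggle.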

An accidental entry

Joining P&G was one of the best decisions of my life, but it happened more or less by chance. I studied physics at the University of Leipzig in what was then East Germany and my diploma thesis focused on developing simulations to compute the molecular orientation of nematic liquid crystals in electrical fields. For my PhD, I extended these simulations to include birefringent optics, making it possible to predict the behaviour of liquid-crystal displays based on the display’s design and material properties.

When I graduated in 1991, I was only 26 years old, but I had met my wife at university and (as was common in East Germany at the time) we decided to have children early. Our daughter was born before I earned my diploma and our son a few years later. My original plan was to stay at university as an assistant after my PhD, working my way towards a tenured professorship. However, while I was working on my thesis the Berlin Wall came down, and after Germany reunited the university system changed almost overnight. Permanent assistant positions like the one I had hoped for were no longer available; instead, it became common to accept temporary assistant or postdoc positions, with the hope of eventually becoming a professor.

I decided that this new path would be irresponsible for our family and that working in industry would provide a safer and more stable environment. Then I spotted an advertisement in a newspaper; a company called Procter & Gamble was soliciting applications from graduate students to attend a seminar for “technical management”. I thought P&G must be a consulting company (it was unknown in East Germany and this was before the Internet became popular) and I had no idea what technical management was, but I was curious, so I sent in my application.

A couple of weeks later I received an invitation to an interview. When I arrived, the recruiter informed me that P&G was a consumer-goods company and that I would be interviewed for an actual job, since I was already too advanced for the seminar. I didn’t know any more about consumer goods than I did about technical management, but again, I was willing to find out and later that day I was offered a starting position in material development for Pampers, P&G’s brand of nappies.

A physicist among chemists

My first assignment at P&G’s centre in Schwalbach, Germany, was to develop an upgrade for the absorbent gelling material (AGM) in nappies that absorbs and “locks in” urine to keep the baby’s skin dry. AGMs are hydrogels that are made of partially neutralized polyacrylate polymer networks and they were originally invented for agriculture as a means of improving the water-holding capacity of soil. P&G had introduced AGMs into Pampers in the mid-1980s and now my boss was convinced that they could be improved.

AGM development was seen as the domain of chemists, and as the only physicist working on it within P&G and our material suppliers, I was really pushed out of my comfort zone. I had to learn a lot more about polymer chemistry and materials science. But I was also able to ask physics questions such as “How does liquid transport happen in nappies?” and “How can we understand how the swelling of the AGM changes this?” A few of these things were known qualitatively, but to my surprise there was no detailed understanding and no predictive model to guide me. I had to develop models and characterization-test methods on my own and the breadth of the task – which also included working with material suppliers and even supervising some consumer testing – was very new to me. However, I found it exciting and within a few years I led the first AGM upgrade in P&G’s European Pampers plants.

This success encouraged me to push my role as a physicist further. In addition to AGM development, I started programmes aimed at improving absorbent-core technology, and our modelling and simulation programmes. In a way, I think that being a physicist among non-physicists was one of the reasons for my success because my different point of view helped spur us along.

Becoming an ‘expert generalist’

P&G has a dual career system, with management and technologist tracks. I went for the latter, progressing from principal scientist in 1995 to research fellow in 1999. In 2006 I was inducted into the company’s Victor Mills Society, which is the top rung of the technical career track. In fact, it is a bit like being a professor because I get to lead major R&D programmes and help develop new ways of educating and nurturing young innovators. I also collaborate with a range of companies, universities and institutions, and frequently present at conferences.

The term “expert” is usually associated with deep knowledge in one particular field. However, I find it more useful to view myself as an “expert generalist” or “master integrator” – someone with deeper-than-average knowledge and experience of multiple fields. My physics education taught me that “if it is the same equation, then it is the same problem”, and I think this has helped me to think and act like a master integrator because I can find connections between areas that appear unrelated on the surface. This is especially useful at the fuzzy front end of innovation, when the uncertainties surrounding what is needed and what is possible are both very high.

One thing that my physics education did not teach me, however, was the role of emotions and perceptions in the decision-making process. When I started working at P&G, one of the first things I learned was that “perception is reality” for consumers; a product may work fine, but if it does not look that way, consumers will not accept it. It took me much longer to learn that “perception is reality” also applies much more generally in decision making. For example, I sometimes give a talk called “Can I trust your model?” that highlights the challenges of status-quo bias and human behaviour as it applies to innovation. Learning more about how to influence people has become a hobby for me, so I read a lot of books about behavioural science in addition to keeping up with new topics in physics, chemistry, materials science and engineering.

Overall, I have found that working as a technical expert at P&G has given me plenty of new insights, as well as tremendous opportunities for learning. I enjoy tasks that take me outside my comfort zone and being “in nappies” means there is also the sense of excitement that comes with getting new technologies into consumers’ hands – and onto the bottoms of their babies.

High-gain optical transistors flipped by just one photon

Two independent teams of physicists in Germany have created the first high-gain optical transistors that can be switched using a single photon. Based on ultracold atomic gases, the devices make use of the “Rydberg blockade”, whereby the creation of an atom in a highly excited state has a huge effect on the ability of the surrounding gas to transmit light. The research might lead to the development of all-optical logical circuits that could operate much faster than conventional electronics. The transistors could also find use in photon-based quantum-information systems of the future.

Communications and computing systems that use only light to transmit and process information have the potential to be faster and much more energy-efficient than those that use electronic signals. While optical-fibre communications is already widespread, the switching and processing of optically encoded data is usually done by converting light pulses to an electronic signal, which can then be easily processed. The electronic signal is then converted back to a light pulse.

Making photons interact

This time-consuming and energy-hungry process is necessary because photons do not readily interact with each other, which makes the design of all-optical components a major challenge that is currently being addressed by physicists and engineers. During the past few years, several research groups have made important breakthroughs in this area by showing that photons can be made to interact with each other in specially prepared samples of ultracold atomic gas.

Now, two independent teams led by Sebastian Hofferberth of the University of Stuttgart and Stephan Dürr of the Max Planck Institute of Quantum Optics near Munich have created devices in which a single “gate” photon can switch off a stream of as many as 20 photons. This gain of 20 is a huge improvement on previous attempts at optical switches, which either needed pulses of several gate photons to achieve gains greater than one or offered gains of much less than one when switched by a single gate photon.

Both teams based their gates on gases of rubidium atoms that were cooled to temperatures below 1 mK. Normally, the gas is transparent to a beam of “source” photons, which can travel through the device and emerge via the “drain” – gate, source and drain being terms used to describe the control, input and output channels, respectively, of a conventional field-effect transistor.

Blocking the drain

When a gate photon is fired into the gas, it is absorbed by one atom, which puts that atom into a highly excited Rydberg state with one electron in an extremely large orbital. The large distance between this electron and the nucleus gives the atom a very large electric dipole moment, which shifts the energy levels of nearby atoms. This shift causes the gas to become opaque to light from the source, effectively switching the transistor off. The Rydberg state endures for about 1 μs, which is a surprisingly long time for an atomic system. This allowed Dürr and colleagues to use their transistor to switch off a stream of 20 source photons, while Hofferberth’s team prevented 10 photons from reaching the drain of its device.

“This effect should make it possible – at least in principle – to cascade such transistors to solve complex computational tasks,” says Dürr. He also points out that the experiments offer physicists a new and non-destructive way of studying the physics of Rydberg states. The ability to operate at the single-photon level also means that the transistors could find use in quantum-information applications such as secure quantum-communication systems or powerful quantum computers.

Another interesting aspect of the devices is that the gate photon is re-emitted by the gas when the Rydberg states decay – an effect that has been observed in other experiments. In principle, this means that the transistors could also be used as storage devices for quantum information.

Both experiments are described in separate papers in Physical Review Letters.

Self-assembly and plasmonics could join forces to boost solar energy

Researchers in the US have used a self-assembly method based on viruses and DNA to position almost 200 fluorescent molecules to within a few nanometres of a tiny gold nanoparticle. This accurate positioning of the molecules boosts their fluorescence output and the method could have applications in information processing, sensing and energy technologies.

Electrons in a metallic nanoparticle undergo collective oscillations known as a surface plasmon resonance when exposed to certain frequencies of light. The nanoparticle then behaves as a tiny antenna, concentrating light within a few nanometres of the nanoparticle surface. If a fluorescent molecule – or fluorophore – is placed within this region, the amount of light captured by the molecule can be boosted significantly.

“This provides a means of creating an intense electromagnetic field near a light-absorbing centre, thus allowing us to greatly increase the capture of light,” explains James De Yoreo of the Pacific Northwest National Laboratory, who was part of the research team. Boosting the number of fluorophores surrounding the nanoparticle helps increase the amount of light captured even further. However, the process is very sensitive to the distance between the nanoparticles and fluorophores, making it difficult to take advantage of the effect in complicated arrangements of nanoparticles and fluorophores.

Nanoscale assembly

The researchers, from the Lawrence Berkeley National Laboratory, the Pacific Northwest National Laboratory, University of California, Berkeley and Arizona State University, combined two self-assembly approaches to collect hundreds of fluorophores and position them next to a gold nanoparticle.

First, they used a virus “capsid” to form a container for the fluorophores. The capsid, which would normally encapsulate a virus, is made of a protein that self-assembles with many copies of itself to form a shell. The researchers modified the inner surface of the shell with fluorophore attachment sites, capturing almost 180 fluorophores and giving a density of one fluorophore per 14 nm². The capsid was further modified so that the outside was coated with DNA strands.
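As a quick consistency check on those numbers (simple arithmetic from the figures quoted above, assuming an approximately spherical inner surface):

```latex
180 \times 14\ \mathrm{nm}^{2} \approx 2.5\times10^{3}\ \mathrm{nm}^{2}
\qquad\Rightarrow\qquad
r \approx \sqrt{\frac{2.5\times10^{3}\ \mathrm{nm}^{2}}{4\pi}} \approx 14\ \mathrm{nm}
```

That is, a shell roughly 30 nm across – the scale of a small virus capsid.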

The second step involved a process called “DNA origami”, whereby a collection of several hundred synthetic DNA strands self-assemble into shapes that are around 100 nm in size. The researchers formed a tile to act as a “molecular breadboard”, with two binding locations to allow objects to be attached. Finally, the team modified gold nanoparticles with a set of DNA strands that recognized a location on the origami. The DNA sequences on the outside of the capsid were designed to bind to a different location.

Adjustable origami

Mixing the three components together – the fluorophore-loaded capsid, the origami tile and the gold nanoparticle – resulted in the capsid and nanoparticle being held nanometres apart on the origami. By changing the origami design, the capsid–nanoparticle separation could be tuned.

The researchers confirmed that the system had formed as designed by using atomic force microscopy and electron microscopy. First they investigated how the capsid–nanoparticle separation affected the fluorescence characteristics by studying a sample using confocal microscopy. The separation distance was then determined using atomic force microscopy. The combined measurements demonstrated increased fluorescence intensity for a number of separation distances.

To better understand the experimental results, the researchers modelled the interactions of the fluorophores with the nanoparticle, demonstrating that the system behaved as they expected. For larger sizes of gold nanoparticle, the model showed that the fluorophores would undergo significant increases in fluorescence.

Mix and match

“The model enabled us to explore changes to the nanoparticle size, choice of fluorophore, arrangement of fluorophores and even the capsid shape to optimize the performance,” says De Yoreo.

The primary interest of the team is to create technologies that mimic some of the highly efficient processes that living organisms use to harvest energy from the Sun. “Our use of the effect is directed towards energy harvesting for solar-energy applications,” explains De Yoreo. “When one looks at light-harvesting complexes in biological systems, they often utilize a similar architecture.”

The research is described in ACS Nano.

New correction to speed of light could explain SN1987a neutrino burst

The effect of gravity on virtual electron–positron pairs as they propagate through space could lead to a violation of Einstein’s equivalence principle, according to calculations by James Franson at the University of Maryland, Baltimore County. While the effect would be too tiny to be measured directly using current experimental techniques, it could explain a puzzling anomaly observed during the famous supernova SN1987a in 1987.

In modern theoretical physics, three of the four fundamental forces – electromagnetism, the weak nuclear force and the strong nuclear force – are described by quantum mechanics. The fourth force, gravity, does not currently have a quantum formulation and is best described by Einstein’s general theory of relativity. Reconciling relativity with quantum mechanics is therefore an important and active area of physics.

An open question for theoretical physicists is how gravity acts on a quantum object such as a photon. Astronomical observations have shown repeatedly that light is attracted by a gravitational field. Traditionally, this is described using general relativity: the gravitational field bends space–time, and the light is slowed down (and slightly deflected) as it passes through the curved region. In quantum electrodynamics, a photon propagating through space can occasionally fluctuate into a virtual electron–positron pair. Soon after, the electron and positron recombine to recreate the photon. If they are in a gravitational potential then, for the short time they exist as massive particles, they feel the effect of gravity. When they recombine, they will create a photon with an energy that is shifted slightly and that travels slightly slower than if there were no gravitational potential.
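For orientation, the general-relativistic slowing can be written as an effective coordinate speed of light in a weak gravitational potential Φ (which is negative near a mass). This is the standard weak-field expression behind the Shapiro time delay, not Franson’s quantum-corrected result:

```latex
% Effective (coordinate) speed of light in a weak gravitational potential \Phi < 0
c(\Phi) \approx c\left(1 + \frac{2\Phi}{c^{2}}\right), \qquad |\Phi| \ll c^{2}
```

Franson’s calculation asks whether the quantum-electrodynamic picture of the same journey reproduces this slowing, and finds that it does not.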

Irreconcilable differences

Franson scrutinized these two explanations for why light slows down as it passes through a gravitational potential. He decided to calculate how much the light should slow down according to each theory, anticipating that he would get the same answer. However, he was in for a surprise: the predicted changes in the speed of light do not match, and the discrepancy has some very strange consequences.

Franson calculated that, treating light as a quantum object, the change in a photon’s velocity depends not on the strength of the gravitational field, but on the gravitational potential itself. However, this leads to a violation of Einstein’s equivalence principle – that gravity and acceleration are indistinguishable – because a gravitational potential is created along with mass, whereas in a uniformly accelerating frame of reference there is no mass and no such potential. Therefore, one could distinguish gravity from acceleration by whether or not a photon slows down when it undergoes particle–antiparticle creation.

An important example is a photon and a neutrino propagating in parallel through space. A neutrino cannot annihilate to create an electron–positron pair, so the photon will slow down more than the neutrino as they pass through a gravitational field, potentially letting the neutrino travel faster than light through that region of space. However, if the problem is viewed in a frame of reference falling freely into the gravitational field, neither the photon nor the neutrino slows down at all, so the photon continues to travel faster than the neutrino.

Two neutrino pulses?

While the idea that the laws of physics can be dependent on one’s frame of reference seems nonsensical, it could explain an anomaly in the 1987 observation of supernova SN1987a. An initial pulse of neutrinos was detected 7.7 hours before the first light from SN1987a reached Earth. This was followed by a second pulse of neutrinos, which arrived about three hours before the supernova light. Supernovae are expected to emit large numbers of neutrinos and the three-hour gap between the second burst of neutrinos and the arrival of the light agrees with the current theory of how a star collapses to create a supernova.

The first pulse of neutrinos is generally thought to be unrelated to the supernova. However, such a coincidence is statistically unlikely. If Franson’s results are correct, then the 7.7-hour gap between the first pulse of neutrinos and the arrival of the light could be explained by the gravitational potential of the Milky Way slowing down the light. This does not explain why two neutrino pulses preceded the light, but Franson suggests the second pulse could be related to a two-step collapse of the star.
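A rough sense of the size of the effect being invoked: SN1987a lies in the Large Magellanic Cloud, roughly 168,000 light-years away, so a 7.7-hour head start for the neutrinos corresponds to only a minute average fractional slow-down of the light over the whole journey (order-of-magnitude arithmetic, not a figure from Franson’s paper):

```latex
\frac{\Delta t}{T} \approx
\frac{7.7\ \mathrm{h}}{1.68\times10^{5}\ \mathrm{yr}\times 8766\ \mathrm{h\,yr^{-1}}}
\approx 5\times10^{-9}
```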

Scepticism needed

Nevertheless Franson is cautious, insisting that “there are very serious reasons to be sceptical about this and the paper doesn’t claim that it’s a real effect, only that it’s a possibility.” He is also pessimistic about the prospects for the idea being proven or refuted in the near future, saying that the chances of another supernova so close are very low, and other possible tests do not presently have sufficient accuracy to detect the effect.

Raymond Chiao of the University of California, Merced, agrees with Franson that, observationally and experimentally, “there are a lot of caveats that need to be clarified,” most notably, that if Franson’s hypothetical interpretation of SN1987a is correct, there are two clear neutrino pulses separated by five hours, but little evidence of two corresponding pulses of light. Nevertheless, he says “There is a deep-seated conceptual tension between general relativity and quantum mechanics…If, in fact, Franson is right, that is a huge, huge step in my opinion: it’s the tip of the iceberg element that quantum mechanics is correct and that general relativity must be wrong.”

The research is published in the New Journal of Physics.

Seeing the invisible: using gravitational lensing to map dark matter

It’s one of the most memorable moments of my career – and not in a good way. I was giving a talk to a room packed full of eminent astrophysicists, but there had been a bit of a childcare crisis, so child number two was sitting grumpily on the front row. I was in full flow, proudly leading up to my new result, when an all-too-familiar voice cut through the air. “She doesn’t know what she’s talking about!”

As time has passed, I have slowly recovered from this mortifying experience, comforted in the realization that for a four-year-old, my son was being quite astute. You see, I specialize in observing the dark side of our universe – a kind of shadow realm that we can’t see or touch, but which extends throughout the whole universe and even permeates our everyday world.

What we do know about this invisible stuff is that it appears to make up over 95% of our universe and comes in two forms. Dark matter is a special type of matter that, unlike normal matter, cannot interact via the electromagnetic force – the one that light uses to travel by. Dark energy, meanwhile, is a mysterious source of energy that is causing the rate at which our ever-expanding universe grows to get faster and faster each and every day. Numerous independent observations point to the existence of both entities via the effects they have on the matter that we can see.

As physicists, we have begun to quantify this realm, and we have thought long and hard about what it is made of. But in the grand scheme of things, we are still pretty clueless. And so, you see, to some extent my son was right. To prove him wrong – to truly know the nature of the dark side – would involve solving some of the biggest challenges facing science today.

Well, I do like a good challenge! And so, using a powerful astronomical technique called gravitational lensing, I am working on several projects that are beginning to expose the mysterious dark side of the universe.

Cosmic raindrops

If you are unfamiliar with gravitational lensing – and even if you’re not you may like this analogy – take a look out of your nearest window and ask yourself: how do you know the glass is there? Perhaps there are some little imperfections, or some raindrops that distort your view? If you see raindrops, the reason they are apparent is that these transparent globules bend the light travelling from an object beyond the window – from a tree, say – to your eye. But because you know how the scene outside should look in the absence of distortions, you infer the existence of the raindrops.

To apply the same thinking to gravitational lensing, simply swap the trees for galaxies billions of light-years away. As for the raindrops, replace these tiny objects with huge transparent clumps of dark matter, sitting between the distant galaxies and you. The physical reason why the light bends is, however, different: while raindrops simply refract light, clumps of dark matter bend the very fabric of space–time, and the path that the light is travelling along gets bent with it (see figure 1).

1 Bending the light of distant galaxies

Diagram of gravitational lensing

The gravitational field of a massive object extends far into space, warping space–time. The path of any light rays passing close to that object (and thus through its gravitational field) will become bent. The light is then refocused somewhere else. The more massive the object, the stronger its gravitational field and hence the greater the deflection of the light.

Anything with mass warps space–time to some extent, according to Einstein’s general theory of relativity, but we only perceive the phenomenon when the mass, and hence the distortion, is very big – and dark matter has a very big mass, making up 83% of all matter in the universe. In fact, we can now image these distant, distorted galaxies using extremely powerful telescopes. And by combining the measured distortion in these images with the equations of general relativity, we can directly weigh all of the matter lying in-between us and the galaxies, irrespective of whether it is luminous or dark.
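The key relation underlying all of this is Einstein’s deflection formula for a light ray passing a mass M with impact parameter b (the standard point-mass result; real dark-matter maps use its generalization to extended mass distributions):

```latex
\hat{\alpha} = \frac{4GM}{c^{2}\,b}
```

Measure the distortion and the relation can be run in reverse, turning observed image shapes into the mass doing the bending – whether or not that mass shines.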

The dark question

How can we be so sure, though, that the majority of the matter we detect through this lensing technique is an unknown and mysterious dark substance? One reason is that from studies of the Sun and other stars, we know roughly how much mass in each galaxy is locked up in the stars, and we find that there simply is not enough luminous stellar mass in the galaxies to account for the lensing effects that we measure. Indeed, the same conclusion is drawn from the observation that stars orbit their galaxies faster than the mass that we can see should allow them to.

But could dark matter be non-luminous objects that we cannot see with telescopes, such as faint brown dwarfs – failed stars that never reached the temperatures required to switch on nuclear fusion in their cores – or a multitude of primordial black holes that have been lurking since the birth of the universe? Again, lensing can help us answer this question. If these entities, known as massive compact halo objects (MACHOs), existed in the quantities required to account for the missing mass, they would regularly pass between us and distant stars. In doing so, their mass would warp space–time, focusing more of a given star’s light towards us such that, for a passing moment, we would see that star brighten and then dim again. Although these “micro-lensing” observations are not easy – requiring the dedicated monitoring of stars over many years – the relatively few events that several teams have found have essentially ruled out MACHOs as a major source of dark matter. Excitingly, this dark-matter search instead led to the discovery of many exoplanets orbiting distant stars.
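The brightening and dimming in such a micro-lensing event follows a characteristic light curve. For a point-like lens, the magnification depends only on u, the angular separation between lens and background star in units of the lens’s Einstein radius (the standard point-lens result, quoted here for illustration):

```latex
A(u) = \frac{u^{2}+2}{u\sqrt{u^{2}+4}}
```

As the compact object drifts across the line of sight, u falls and then rises again, producing the smooth, symmetric brightening that the monitoring surveys hunt for.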

The arguments so far cannot quite rule out the possibility that the extra mass is down to something mundane that we already know about – namely gas. In massive galaxy clusters – collections of hundreds of galaxies – there is indeed a significant amount of gas between the galaxies, which we can see because it emits X-ray light. However, in rare events in which two galaxy clusters collide, such as the Bullet Cluster, we can cleanly distinguish the gas from the galaxies and the dark matter, proving that they are separate entities (see box).

A shot in the dark

Galaxy cluster 1E 0657-56 – the Bullet Cluster

The power of weakness

In the most powerful examples of gravitational lensing, it is plain to see that very strong lensing is occurring. Galaxy clusters, such as Abell 2218 (see lead image), are the strongest lenses we know and provide us with the most striking images of this gravitational physics in action. The giant arcs that encircle these clusters show the highly distorted light emitted by distant galaxies situated almost directly behind the cluster.

But in our goal to map dark matter, we would be left with a very patchy map if galaxy clusters – which are fairly scarce – were our only indicators of mass. Thankfully, every galaxy tells us something, even if it is only mildly distorted, showing that a relatively small amount of mass lies between us and that galaxy. In fact, most distant galaxies are only “weakly” lensed and it is these galaxies that we mostly rely on to make dark-matter maps. Still, even with these weakly lensed galaxies, how do you know if an elliptical galaxy, say, looks that shape because it actually is that shape, or because its light has been gravitationally lensed?

Help to answer this question comes from the other galaxies in the neighbourhood. Imagine light travelling towards us from two nearby galaxies. As it journeys across the universe, the light will pass by the same structures of dark matter, and hence experience the same gravitational distortion. So when we look at those two galaxies in the sky they will appear to be weakly aligned, with the level of alignment increasing with the amount of dark matter they have passed. If there were no dark matter in a particular patch of the sky, the average galaxy shape would just be a circle, assuming that galaxies are randomly oriented in the universe. With dark matter, however, we find the average galaxy shape is an ellipse. The stronger the average galaxy ellipticity is in the patch, the more dark matter there is in that region of the universe. This induced ellipticity is a faint signature that dark matter writes across the cosmos to tell us exactly where it is and how much of it there is.
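A toy numerical sketch of this averaging trick (illustrative only – not a survey pipeline): if intrinsic galaxy shapes are randomly oriented they average away, while a small coherent shear added to every galaxy in a patch survives. The shear value and galaxy numbers below are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

n_galaxies = 100_000
true_shear = 0.02 + 0.01j  # hypothetical coherent lensing distortion in one patch of sky

# Intrinsic ellipticities: random orientations with typical magnitude ~0.2 ("shape noise")
intrinsic = 0.2 * (rng.standard_normal(n_galaxies)
                   + 1j * rng.standard_normal(n_galaxies)) / np.sqrt(2)

# In the weak-lensing limit the observed ellipticity is roughly intrinsic + shear
observed = intrinsic + true_shear

estimate = observed.mean()
print(f"true shear      : {true_shear.real:+.4f} {true_shear.imag:+.4f}i")
print(f"estimated shear : {estimate.real:+.4f} {estimate.imag:+.4f}i")
```

Because the per-galaxy shape noise (~0.2) dwarfs the coherent shear (of order 0.01–0.02), the average only becomes useful once millions of galaxies are stacked – which is why the maps described below needed over 10 million of them.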

Dark matter when our universe was 4.7 Gyr old

The biggest map of dark matter to date, made using this weak lensing technique, was the very result that caused my son’s abrupt outburst. Over five years from 2003 to 2008, the absolute best weather at the Canada–France–Hawaii Telescope in Hawaii was reserved to map dark matter. By analysing the weak alignment of the images of over 10 million galaxies, whose light was emitted when the universe was only six billion years old, we had our first direct glimpse at dark matter on the largest of scales. The survey revealed a cosmic web of dark matter in each of the four directions we looked. Our map exposed massive clumps of matter, wispy filamentary structures joining them together, and expansive voids between them spanning millions of light-years (2013 MNRAS 433 3373).

Even before these observations were complete, theorists had simulated dark-matter maps by taking our best theories about the nature of dark matter and using supercomputers to build a dark universe, allowing it to grow and evolve. These computer studies, such as the Millennium Simulation in 2005, had provided us with a glimpse of the invisible dark side, predicting the giant cosmic web we saw in our observations. In fact, theorists had also predicted another feature we saw in the maps: the visible universe largely overlaps with the dark universe, because the dark-matter web has dictated when and where the visible universe should form.

The Canada–France–Hawaii Telescope Lensing Survey

As for dark energy, it too has an important role to play in the cosmic web of dark matter. In a universe without dark energy, the clumps of dark matter would be even more dense than they are now due to the attractive forces of gravity causing the densest regions to accrete neighbouring structures of matter. But dark energy – the mysterious source of energy that is causing the post-Big Bang expansion of our universe to accelerate – slows this process down. Together, the two dark entities play out a cosmic battle of epic proportions. While the gravity of dark matter slowly pulls structures together, dark energy causes the dark-matter structures to get further and further apart, making it harder for them to grow.

Looking further away in our universe is the same as looking back in time – with gravitational lensing surveys so far having mapped objects as far away as when the universe was only six billion years old. This technique has therefore let us map dark matter in different epochs in the history of the universe. So by studying the evolution of the dark-matter web we have been able to measure how dark energy has affected the growth of those structures, and we are slowly learning about what this mysterious dark energy could be.

Technical challenges

Gravitational lensing has been heralded as the most powerful technique for studying the dark universe, but it is also the most technologically challenging. The typical distortion induced by dark matter, as a galaxy’s light travels through the universe, is only enough to alter the ellipticity of that galaxy by less than 1%. But in the last few moments before that light is captured on Earth, the atmosphere, telescope and detector can together change the ellipticity of the galaxy by 10% or more. So to isolate the alignment signature that dark matter imprints, we need to model all the distortions introduced by technology and the atmosphere to very high precision and then invert these terrestrial effects to accurately recover the cosmological signal. Just to up the ante, the terrestrial effects change every second as the wind and ground temperature alter the density of the air in different layers of the atmosphere, and the telescope slowly moves to track the rotation of the Earth.


Furthermore, this lensing effect is so weak that to detect it we need to analyse the images of hundreds of millions of galaxies, which involves rapidly processing petabytes of data. For the past decade, however, astronomers have been setting “big data” challenges to crowd-source the best minds to solve this monumental computational task. In 2011, for example, the Kaggle “Mapping Dark Matter” challenge saw 700 non-astronomers competing for a prized tour around NASA’s Jet Propulsion Laboratory. Their submissions fuelled a new range of machine-learning ideas for the astronomers to put into practice.

One final astrophysical challenge persists, which the astute reader will have already recognized. How valid is our assumption that galaxies are randomly oriented throughout the universe before their light is lensed? We know that the way galaxies form and evolve depends on their local environment, and hence two galaxies in the same district of the universe may well have a natural-born alignment with each other. However, we have measured this effect by looking at the alignment of galaxies in tight-knit communities, in contrast to the alignment of galaxies widely dispersed throughout the universe. What we found was that the average natural alignment between galaxies is roughly 100 times smaller than the observed alignment that dark matter induces. This is small but not negligible, and we do take it into account (2013 MNRAS 432 2433).

Current and future missions

Three lensing teams are currently competing to be the first to reveal the next major leap in our understanding of the dark universe. Researchers from Europe (with their Kilo-Degree Survey) and from Japan (with their Hyper-Suprime Cam survey) are imaging 1500 square degrees of the cosmos – nearly 5% of our sky and 10 times as much sky as our current best lensing survey. Astronomers in the US, with the Dark Energy Survey, will eventually cover three times that area. All three surveys will conclude their observations over the next few years.


There is great interest in whether these surveys will uncover the same “tension” that we see between current lensing observations of the invisible dark universe and the Planck satellite’s observations of the cosmic microwave background. Gravitational-lensing surveys are very sensitive to how dark matter clumps, but the Planck data imply a much clumpier universe than the lensing surveys are currently seeing. This lack of agreement could mean a flaw in one or both of the methods. If it persists, as the methods and data quality improve, it has been speculated that this could be evidence for the existence of a new type of neutrino called the sterile neutrino. (See “What’s the matter?” Physics World July 2014, pp30–31.)

Over the next decade, three major new international projects will work in tandem in the final stages of our quest to understand the dark side. The Euclid satellite will be launched above the atmosphere, providing Hubble-Space-Telescope-quality imaging across the whole sky. Getting above the atmosphere gives us a much clearer view of the universe, and the keen vision of Euclid will be extremely sensitive to the weak dark-matter distortions that we are trying to detect. Euclid will also measure the spectra – and hence redshift and distance information – of millions of galaxies with which to chart the expansion of the universe.

Meanwhile, the Large Synoptic Survey Telescope will image the whole southern sky every three nights and provide deep multicolour imaging with which to measure distances to the galaxies without spectra. Not only will this allow us to chart the evolution of dark-matter structures, but this telescope will also be able to detect killer rocks in our solar system that may one day obliterate planet Earth!

Finally, the Square Kilometre Array will provide high-resolution imaging in the radio part of the electromagnetic spectrum, with precision redshift and polarization observations that will allow us to untangle the lensing alignment signature of dark matter from naturally arising alignments. In combination, these surveys will be able to use gravitational lensing to map dark matter and dark energy over the last 10 billion years of the history of the universe, testing gravity on the largest of scales in space and time.

With 35 years still left before I retire, my hope is that I will be able to see these projects through to their conclusion and truly know the nature of the dark side. So one day I will finally be able to tell my son that I really do know what I’m talking about!

Life after a nuclear bomb, farewell to MetroCosm and the nerdiest thing ever

 

Observed change in Tatooine surface temperature

What’s it like to have a nuclear bomb dropped on you? Okay, I know the question is a bit heavy for this light-hearted column but I was really inspired by this piece about Shinji Mikamo who was less than a mile from the epicentre of the Hiroshima bomb. He was 19 at the time and not surprisingly the bomb changed the course of his life in many ways. What I found most amazing is that Mikamo managed to survive an explosion so intense that it blasted off the glass and hands of his father’s pocket watch, but not before imprinting the time of the blast on the watch’s melted face. The article is called “When time stood still” and it appears on the BBC website.

