
Flash Physics: Photons could interact in tiny silicon voids, plasma drives high-gain laser amplifier

Photons could interact in tiny silicon voids

Photons in relatively weak beams of light could be made to interact with each other by shining them through a piece of silicon with a specific set of voids cut through it. That is the conclusion of Hyongrak Choi, Mikkel Heuck, and Dirk Englund of the Massachusetts Institute of Technology in the US. They have done calculations that suggest a weak beam of light can create strong electric fields within a piece of silicon that contains a precise arrangement of nanometre-sized voids. The field can be as much as 10,000 times the strength of the electric field normally associated with such light. The presence of such a field would allow a photon to modify the index of refraction in the region that surrounds it. A second photon travelling through this region would be affected by this change – the result being an interaction between the photons. Normally, extremely intense laser light is required to create this effect. The ability to make photons interact within much weaker beams of light could lead to the development of new types of switches and other devices to create fast and energy-efficient optical communications networks that do not require electrical components. The effect is described in Physical Review Letters and could even be used to create devices for quantum computers in which information is encoded into photons.
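The quoted field enhancement can be put in perspective with a back-of-envelope Kerr-nonlinearity estimate. This is an illustrative sketch, not a calculation from the paper: the values of the Kerr coefficient and the incident intensity below are assumptions, and the point is simply that intensity scales as the square of the field, so a 10,000-fold field enhancement boosts an intensity-dependent index change by a factor of 100 million.

```python
# Illustrative (not from the paper): a Kerr-type index change scales with
# intensity, delta_n = n2 * I, and intensity scales as the square of the
# electric field. A 10^4 field enhancement therefore boosts the effective
# index change by a factor of 10^8.
field_enhancement = 1e4          # quoted enhancement of the electric field
intensity_enhancement = field_enhancement ** 2

n2_silicon = 4e-18               # m^2/W, typical Kerr coefficient of silicon (assumed)
intensity_in = 1e6               # W/m^2, a weak incident beam (assumed)

delta_n_plain = n2_silicon * intensity_in
delta_n_enhanced = n2_silicon * intensity_in * intensity_enhancement

print(intensity_enhancement)               # 1e8
print(delta_n_enhanced / delta_n_plain)    # same factor of 1e8
```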

Plasma drives high-gain laser amplifier

Photograph of the Vulcan laser target area at the Central Laser Facility showing the set-up for the plasma laser amplifier

A plasma-based amplifier of laser light is described by its creators as having the highest ever gain. Built by an international team led by Dino Jaroszynski at the University of Strathclyde, the system takes picosecond-duration laser pulses carrying just a few picojoules of energy and boosts them up to about 100 mJ – which is a gain of about 100 million. The amplifier uses high-energy 100 J laser pulses at the Vulcan laser at the UK’s Central Laser Facility in Oxfordshire to create a plasma by firing the laser at a jet of hydrogen gas. The picojoule laser pulse to be amplified is fired at the plasma, where it collides with a high-energy laser pulse. The collision produces a beat wave of light that drives plasma electrons into a regular pattern that mimics the beat wave. This wave sweeps up the energy of the high-energy pulse and transfers it into the low-energy pulse, resulting in a huge amplification of the low-energy pulse. An important feature of the process is that the duration of the low-energy pulse is not significantly increased during amplification. “Our results are very significant in that they demonstrate the flexibility of the plasma medium as a very high gain amplifier medium,” says Jaroszynski. “We also show that the efficiency of the amplifier can be quite large, at least 10%, which is unprecedented and can be increased further.” However, he points out that random fluctuations in the plasma are also amplified, which contributes to noise in the amplified pulse. The team believes that plasma-based amplifiers could play important roles in the development of the next generation of high-power lasers. The research is described in Scientific Reports.

Flash Physics: Old galaxies make lots of stars, xmons solve linear equations, work begins on giant telescope

New type of old galaxy is a prolific star-maker

Astronomers have discovered galaxies in the early universe that are creating stars more than a hundred times faster than the Milky Way. These rapidly growing galaxies formed less than a billion years after the Big Bang but are so distant that their light is only just reaching Earth, where it has been observed by researchers using the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. Roberto Decarli of the Max Planck Institute for Astronomy in Germany and colleagues were originally investigating star formation in very distant galaxies with quasars – the supermassive black holes at the centre of massive galaxies. “But what we found, in four separate cases, were neighbouring galaxies that were forming stars at a furious pace, producing a hundred solar masses’ worth of new stars per year,” Decarli explains. The Milky Way only forms one solar mass per year and other early universe galaxies had star formation rates between one and 10 solar masses per year. “Very likely it is not a coincidence to find these productive galaxies close to bright quasars,” says team member Fabian Walter. “Quasars are thought to form in regions of the universe where the large-scale density of matter is much higher than average. Those same conditions should also be conducive to galaxies forming new stars at a greatly increased rate.” The team suggests this chance discovery may explain a cosmic mystery – a population of massive elliptical galaxies from when the universe was 1.5 billion years old. Astronomers had been puzzled about how these had formed so many stars so quickly, but the newly found hyper-productive galaxies may be the answer. To determine if this is the case, follow-up observations will investigate how common this new type of old galaxy is. The ALMA observations, also presented in the Nature paper, revealed the earliest known example of two merging galaxies.

Xmon quantum processor solves linear equations

Composite image showing the quantum processor

Physicists in China are the first to use a superconductor-based quantum processor to implement a quantum algorithm for solving linear systems of equations. Yarui Zheng, Chao Song and Chao-Yang Lu of the Chinese Academy of Sciences and colleagues ran the HHL algorithm on a processor they built. It comprises four superconducting “xmon” qubits – which store quantum information in terms of the number of superconducting Cooper pairs held within each qubit. The HHL algorithm was devised in 2009 by Aram Harrow, Avinatan Hassidim and Seth Lloyd, and is able to solve a system of linear equations with N variables in a running time that scales with the logarithm of N. This is much faster than the best classical algorithm, which has a running time that scales with N. Solving large-scale systems of linear equations is crucial in many fields of science and engineering, and therefore there is great interest in developing a practical quantum computer that could perform this task. The HHL algorithm has already been demonstrated in quantum processors based on photons and nuclear magnetic resonance. However, unlike xmon-based systems, neither of these technologies is easily scaled up to solve practical problems. While the team’s four-qubit system offers no advantage over a classical computer, they write in Physical Review Letters that “the superconducting quantum circuits could be used to implement more intricate quantum algorithms on a larger scale and ultimately reach quantum-computational speed-up”.
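The problem HHL addresses can be shown classically for a toy system of the same size as the four-qubit demonstration. This sketch simply solves Ax = b with a dense classical solver; the specific matrix and vector are illustrative, not those used by the team. Classical dense solvers scale polynomially in N, whereas HHL promises log(N) scaling for sparse, well-conditioned matrices – with the caveat that it returns the solution encoded in a quantum state rather than as explicit numbers.

```python
import numpy as np

# A toy 2x2 linear system, the scale of problem addressed in the
# four-qubit demonstration (values chosen for illustration only).
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# Classical solution: O(N^3) for a dense N x N system.
# HHL targets O(log N) for sparse, well-conditioned A.
x = np.linalg.solve(A, b)
print(x)                       # solution vector [0.75, -0.25]
print(np.allclose(A @ x, b))   # True: the solution satisfies the system
```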

Construction begins on European super telescope

Work has begun on a huge telescope that will capture 15 times more light than any other optical telescope currently in existence. The $1.5bn European Extremely Large Telescope (E-ELT) is being built by the European Southern Observatory (ESO) on top of a 3 km-high mountain at Cerro Armazones in the northern Chilean Andes. The E-ELT will feature a 39 m primary mirror while the telescope’s secondary mirror will be up to 4 m in diameter. When complete in 2024, the E-ELT’s seven science instruments will study galaxy and planet formation as well as planets orbiting other stars, including probing their atmospheres using spectroscopic measurements. The “First Stone” ceremony to mark the start of construction was attended by Chile’s president, Michelle Bachelet, who noted that the E-ELT is “more than a telescope”. “It marks one of the greatest expressions of scientific and technological capabilities and of the extraordinary potential of international co-operation,” she adds.

Solid becomes liquid-like when irradiated

The atomic structure of an irradiated material is closer to that of a liquid than a glass, according to a team of researchers in the US. Glasses have been used by researchers to study and predict possible effects of radiation damage, but the engineers behind the new research say that studying liquid states may be more appropriate. They add that the findings from their molecular-dynamics simulations could help to identify novel radiation-resistant materials.

Exposure to neutron radiation can cause significant structural damage to materials. Understanding the effects of this damage is important for applications such as the construction of nuclear facilities, and the storage of nuclear waste.

“When exposed to radiation, materials undergo some disordering of their atomic structure,” explains Mathieu Bauchy, a civil engineer at the University of California, Los Angeles (UCLA). “In turn, this disordering can affect properties such as density, stiffness, strength and toughness. Therefore, it is essential to understand the effect of irradiation on the atomic structure of materials in order to ensure their integrity.”

The disordered atomic network resulting from irradiation resembles the disordered non-crystalline state of glassy materials. Glasses are formed when a liquid material is rapidly cooled, or quenched, through a process known as vitrification. Instead of forming an ordered crystalline solid, the rapid cooling causes the atoms to become stuck in a non-crystalline state.

Irradiation versus vitrification

Because of their similarities, glasses have been used to predict the properties of irradiated materials. But some differences have been noticed between the materials, leading to questions about whether irradiation and vitrification have equivalent effects. To address this, Bauchy and colleagues at UCLA and Oak Ridge National Laboratory used reactive molecular-dynamics simulations to compare the atomic structures of irradiated quartz and glassy silica, which are both forms of silicon dioxide (see video).

It is essential to understand the effect of irradiation on the atomic structure of materials in order to ensure their integrity
Mathieu Bauchy, UCLA

The effect of radiation on quartz – one of the most abundant minerals on Earth and a major component of many sands used for building – is important as it has many civil-engineering applications, including in the building of nuclear facilities and waste repositories.

After running simulations of both irradiation damage and heating followed by rapid cooling – vitrification – on quartz, the researchers compared the atomic structures of the resulting materials. They found significant differences in the disorder created by irradiation and vitrification. The irradiated material was more disordered than the glass and had an atomic structure closer to that of a liquid.

Counter-intuitive result

“Since irradiation results in the disordering of the atomic structure, it is intuitive to assume that, upon exposure to radiations, crystals should evolve towards a glassy state,” explains Bauchy. “However, by comparing the structure of irradiated quartz with that of glassy silica, we found that this assumption does not hold true.”

Team member N M Anoop Krishnan adds: “We observed that irradiated quartz exhibits more disorder than glassy silica, both in the short- and the medium-range environment of the atoms. Interestingly, we found the structure and thermodynamic properties of irradiated quartz to be equivalent to those of a silica-liquid melt.”

Indeed, the atomic structure of irradiated quartz features co-ordination defects, edge-sharing units, and large silicate rings. These are all absent from glassy silica that is produced through vitrification.

Damage slowdown

The researchers say that their finding that irradiated materials have a liquid-like structure has important implications. Bauchy says that from a fundamental perspective, it explains why structural damage slows after prolonged radiation exposure, rather than continuously increasing. “Once the material reaches a liquid-like structure it becomes easier for the atoms to move and reorganize, which prevents the accumulation of any further damage.”

The result also “suggests that the structure and properties of irradiated materials can be predicted from those of their corresponding liquid,” explains Krishnan. According to the researchers, this understanding could help to identify novel radiation-resistant materials.

The research is described in The Journal of Chemical Physics.

Identifying fingerprints, attractive scientists, what physics students should know

Do you have the pattern-matching skills needed for identifying fingerprints? If so, researchers at the National Institute of Standards and Technology in the US want to hear from you. They have put together a visual quiz that tests your ability to “focus on minute visual details that would leave most people cross-eyed”. You can try the test here.

If fingerprints aren’t your thing, perhaps you could judge the intellectual prowess of a scientist by their looks? Surely not, but a study by psychologist Will Skylark of the University of Cambridge and colleagues suggests that people do judge scientists by their looks. The researchers found that people rated good-looking scientists as being less competent than researchers of ordinary appearance. You can read more in this article in the Telegraph, which features a photograph of physics heartthrob Brian Cox.

Should undergraduate physics students know that the Standard Model is an SU(3)xSU(2)xU(1) gauge theory and what that means? Yes, according to cosmologist and science writer Sean Carroll – who said so in a recent tweet. The inevitable backlash seems to have started with Chad Orzel, who begged to differ in his column in Forbes. “I’ve had a pretty good career in physics to this point despite never learning those things as an undergrad,” writes Orzel, who works in atomic and molecular physics. He is backed up by the blogger ZapperZ, who writes “Considering that about half of BSc degree recipients in physics do not go on to graduate school, I can think of many other, more important skills and knowledge that we should equip physics majors with”.

Flash Physics: Jupiter’s swirling poles, ordered droplets, no fifth force, topological magnetoelectric effect

Juno’s stunning portrait of Jupiter shows swirling storms

Jupiter's south pole as seen by Juno

NASA’s Juno mission has sent back stunning images of Jupiter’s poles. The above image shows the gas-giant’s south pole. The spacecraft’s JunoCam instrument took multiple pictures from an altitude of 52,000 km on three separate orbits, allowing researchers to create a full enhanced-colour projection. The images reveal that both poles are covered in Earth-sized swirling storms up to 1000 km across, but the two poles do not resemble each other. “We’re questioning whether this is a dynamic system, and are we seeing just one stage, and over the next year, we’re going to watch it disappear, or is this a stable configuration and these storms are circulating around one another?” explains Juno’s principal investigator Scott Bolton. As well as the images, Juno sent back a huge array of results from its first data-collection pass. They are presented in two Science papers and 44 papers in Geophysical Research Letters. “There is so much going on here that we didn’t expect, that we have had to take a step back and begin to rethink this as a whole new Jupiter,” says Bolton.

Small water droplets show unexpected order

Tiny water drops have surprisingly ordered surfaces, according to Sylvie Roke from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and colleagues. The team looked at droplets with a diameter of around 200 nm. Such nanoscale beads of water are everywhere – in the air, rocks, oil fields and even our bodies – and therefore understanding their behaviour may provide insights into atmospheric, geological and biological processes. To study the tiny droplets, the scientists looked at how their curved surfaces interact with the surrounding water-repellent environment. “The method involves overlapping ultrashort laser pulses in a mixture of water droplets in liquid oil and detecting photons that are scattered only from the interface,” explains Roke. “These photons have the sum frequency of the incoming photons and are thus of a different colour. With this newly generated colour we can know the structure of only the interface.” The team discovered that the surfaces of these tiny pockets of water at room temperature are much more ordered than that of normal water. The enhanced tetrahedral structure is instead comparable to super-cooled water – liquid water below the freezing point – which has very strong hydrogen bonds between the water molecules. The results, presented in Nature Communications, suggest the nano-droplets may have reduced reactivity, and further studies will investigate how this affects real-world systems.
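The "newly generated colour" Roke describes follows from simple photon arithmetic: when two photons combine at the interface, the output frequency is the sum of the input frequencies, so the inverse wavelengths add. The wavelengths below are assumptions for illustration, not values taken from the paper.

```python
# Sum-frequency generation: two incoming photons combine at the interface
# into one photon whose frequency is the sum of the inputs, so
# 1/lambda_out = 1/lambda_1 + 1/lambda_2.
lam1 = 1030e-9   # m, near-infrared pulse (assumed wavelength)
lam2 = 1030e-9   # m, second pulse at the same wavelength (assumed)

lam_out = 1.0 / (1.0 / lam1 + 1.0 / lam2)
print(lam_out)   # 515 nm: green light, a visibly different colour
```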

Study places limit on a “fifth force”

A new way of working out whether a “fifth force” exists has been developed by an international team led by Andrea Ghez and Aurélien Hees at the University of California, Los Angeles. The group looked at the motions of two stars (S0-2 and S0-38), which orbit the supermassive black hole (SMBH) at the centre of the Milky Way. The stars were monitored for 19 years, which is roughly the time it takes the stars to complete an orbit of the SMBH. The team looked for deviations from the trajectories predicted by Einstein’s general theory of relativity, and no discrepancies were seen. This suggests that the strength of a fifth force is less than 1.6% the strength of gravity. Modern physics includes four forces: gravity, and the electromagnetic, strong and weak forces. A hypothetical fifth force appears in some theories that try to unify gravity with quantum mechanics or to explain dark matter and dark energy. While much stronger exclusions of a fifth force have already been obtained by studying forces on masses on Earth and also on objects within the solar system, this is the first study to look at large objects in the huge gravitational field of a SMBH. Writing in Physical Review Letters, Ghez, Hees and colleagues point out that their measurement should be improved next year when one of the stars makes its closest approach to the SMBH, where a deviation from general relativity could be strongest.
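A common way to parametrize such a fifth force is a Yukawa-type correction to the Newtonian potential, V(r) = -(GMm/r)(1 + α·exp(-r/λ)), where α is the strength relative to gravity and λ the interaction range. The sketch below is illustrative: the range λ is an assumed value, and only the 1.6% bound on α comes from the article.

```python
import numpy as np

AU = 1.496e11        # m, one astronomical unit

alpha = 0.016        # upper limit on fifth-force strength (fraction of gravity)
lam = 150 * AU       # m, an assumed interaction range (illustrative)

def yukawa_fraction(r, alpha, lam):
    """Fractional deviation of the potential from pure Newtonian gravity."""
    return alpha * np.exp(-r / lam)

# S0-2's closest approach to the black hole is roughly 120 au.
r = 120 * AU
print(yukawa_fraction(r, alpha, lam))  # ~0.007: at most ~0.7% deviation here
```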

Topological magnetoelectric effect rotates light

A new interaction between light and a material has been observed by physicists in Austria and Germany. The team shone a polarized beam of terahertz electromagnetic radiation through a thin film that included a topological insulator in an applied magnetic field. The researchers found that the polarization of the beam is rotated by a specific angle as it travels through the material. At first glance, this rotation is similar to the well-known magneto-optical effect that occurs when light passes through a magnetic material. However, Andrei Pimenov and colleagues at the Technical University of Vienna and the University of Würzburg found that the angle is independent of the thickness of the topological insulator – which is not the case for the magneto-optical effect. Furthermore, they found that the angle is fixed at a specific value that is related to the fine-structure constant. This is a dimensionless quantity that defines the strength of the electromagnetic interaction. According to the team, the polarization is rotated by a fixed value every time it passes through a surface of the topological insulator. The researchers say this is related to the peculiar properties of a topological insulator, which is an electrical conductor at its surfaces but an insulator in the bulk. Writing in Nature Communications, the team says that this “topological magnetoelectric effect” could provide a way of defining three basic physical constants that are related to the fine structure constant: the charge of the electron, the speed of light and the Planck constant.
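How a macroscopic optical angle can encode the fine-structure constant can be sketched with one line of arithmetic. This is an illustrative calculation of the simplest quantized case (rotation by arctan(α) per surface), not the precise geometry of the experiment.

```python
import math

# Theory predicts a polarization rotation quantized in units of the
# fine-structure constant: in the simplest case, theta = arctan(alpha)
# per surface of the topological insulator.
alpha = 7.2973525693e-3          # fine-structure constant (CODATA value)

theta_rad = math.atan(alpha)
theta_deg = math.degrees(theta_rad)
print(theta_deg)                 # roughly 0.42 degrees per surface
```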

Laser-engraved graphene pixels work in extreme environments

With our persistent march towards nuclear fusion, the need for technologies that can operate in high-energy environments is becoming ever more urgent. Now, researchers in the UK and Spain have developed a material that could allow us to take pictures inside a nuclear reactor.

In the past, graphene has been used in the design of flexible photodetectors that operate over a large range of frequencies. However, compared to current inorganic photodetectors, the resolution and power response just do not measure up.

Chemical doping of graphene can mitigate some of these problems by increasing the density of charge carriers (electrons or holes). Adding iron-chloride (FeCl3) molecules between a few layers of graphene has been shown to lead to extremely high concentrations of carriers in the material. This FeCl3-intercalated few-layer graphene (FLG) is also stable in ambient conditions. It is 1000 times more conductive than pure single-layer graphene, while retaining an equally low absorption in the visible frequency range.

Starting with this material as a base, Saverio Russo and his group at the University of Exeter, UK, used a laser to engrave regions of lower carrier concentration. The laser removes some of the FeCl3 molecules, creating photoactive junctions between highly doped and laser-treated regions of the material. When light shines on this junction, a current is detected across the material, like in a pixel in a camera sensor.

The recipe for extreme photodetectors

The cleverness of this particular material lies in the ultra-high carrier concentration and accelerated cooling of the carriers. Unlike in previously studied devices, the response of the carriers is purely photovoltaic. In FeCl3-intercalated FLG, when the carriers are generated by the incident photons, the electric field created by the difference in charge density on either side of the junction leads to a separation of electrons and holes. This in turn leads to a current.

In other graphene-based photodetectors the carrier behaviour is dominated by the photothermoelectric (PTE) effect; the difference in Seebeck coefficients either side of the junction is responsible for the diffusion of hot carriers when illuminated, as in a common thermocouple. This means the response is spread across an area of a few μm, which limits the size and packing density of the pixels. Using FeCl3-intercalated FLG instead has a huge advantage here, as the miniaturization of pixels is not hindered by a need for thermal isolation.

“This is truly a wonder material; our results show for the first time that graphene-based photodetectors are not always dominated by the PTE,” says Adolfo De Sanctis, lead author of the paper. A reduction of the PTE effect, in fact, results in another marvel. Pixels made from FeCl3-intercalated FLG have a linear (and therefore predictable and reliable) response to photons over a range of incident powers that reaches around 4500 times higher than for other graphene-based devices. This huge linear dynamic range holds true for frequencies from mid-IR to UV-A. This makes these pixels ideal for extreme environments, such as within nuclear reactors, or for working with the high-energy lasers needed for nuclear fusion.
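To put the factor of 4500 into the units photodetector specifications usually quote, it can be converted to decibels; for a ratio of incident powers the convention is 10·log10. This is back-of-envelope arithmetic on the quoted figure, not a number from the paper.

```python
import math

# A linear response over a range of incident powers 4500 times wider than
# other graphene devices, expressed in decibels (power ratio: 10*log10).
improvement = 4500
improvement_db = 10 * math.log10(improvement)
print(round(improvement_db, 1))   # about 36.5 dB of extra linear dynamic range
```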

Breaking the diffraction limit

In the absence of the PTE effect, the size of the junctions is instead constrained by the diffraction limit of the laser used to create them. In collaboration with a team at the Institut de Ciències Fotòniques (ICFO), Spain, led by Frank Koppens, the researchers pushed beyond the limits of diffraction when creating the junctions. Using near-field optical nanoscopy, they were able to carve away the FeCl3 molecules and create a junction with a width of only 250 nm, more than halving the diffraction limit of the laser.
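The far-field limit being beaten here is the Abbe diffraction limit, roughly d = λ/(2·NA) for a focused beam. The wavelength and numerical aperture below are assumptions for illustration; the point is that a conventionally focused visible laser cannot write features much below ~500 nm, so a 250 nm junction requires a near-field technique.

```python
# Abbe diffraction limit for a focused laser: d = lambda / (2 * NA).
# Values below are assumed for illustration, not taken from the paper.
wavelength = 532e-9   # m, an assumed visible writing laser
na = 0.5              # numerical aperture of the focusing optics (assumed)

d_limit = wavelength / (2 * na)
print(d_limit * 1e9)  # ~532 nm far-field limit, versus the 250 nm junction
```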

The next challenge that faces the researchers lies in producing larger sheets of this exciting material. They can then pattern extensive arrays of photoactive junctions, creating an imaging device suitable for the extreme environments that modern research is leading us towards.

You can read more about this work in the original paper, published in Science Advances.

Diamond sensors boost NMR resolution

A new way of boosting the resolution of quantum magnetic sensors has been developed independently by three teams of physicists. The technique has already been used to achieve a huge improvement in nuclear magnetic resonance (NMR) spectroscopy.

Quantum sensing is used to measure frequencies in multiple areas of physics, but for a quantum sensor to measure anything, it must interact with its environment. This degrades its quantum properties very quickly – and this limits the measurement accuracy. Now, however, three research groups have independently synchronized multiple quantum measurements using a classical clock, allowing frequency measurements up to 100 million times more accurate than previously possible with a quantum sensor. One group then went on to demonstrate unprecedented accuracy in micron-scale NMR spectroscopy.

All three groups – at ETH Zürich in Switzerland, Ulm University in Germany and Harvard University in the US – made use of negatively charged nitrogen-vacancy (NV) centres in diamonds. These occur when two adjacent carbon atoms in a carbon lattice are replaced by a nitrogen atom and a vacant site. The spin states of NV centres can be controlled and measured using light, and are also exquisitely sensitive to magnetic fields. Whereas the traditional coil detectors used in NMR spectroscopy and magnetic resonance imaging (MRI) require bulk samples, atomic-scale NV centres can be placed right next to molecules in “nano-NMR” experiments, which are becoming widespread. In 2016, the Harvard and Ulm researchers detected individual protein molecules on the surface of an NV-implanted diamond and even inferred some structural features by studying changes in the frequencies of the fields detected by the NV centres.

Spatial versus spectral

To determine the structure of large molecules using nano-NMR requires even better spectral resolution to allow more precise measurement of the precession frequencies of nuclei, and thus their chemical environments. “The length of time over which you can sample a signal limits the resolution with which you can determine its spectrum,” explains Kristian Cujia, a member of the ETH Zürich team. Unfortunately, the coherent quantum state of an NV centre collapses after a few microseconds because of environmental interactions. Such a short measurement carries significant uncertainty. Worse still, to improve the spatial resolution of diamonds, researchers often implant NV centres more densely or place them closer to the surface. This brings the NV centres closer to the sample, making them more sensitive to its magnetic field, but it also makes them less isolated, causing decoherence to occur more quickly, further reducing the spectral resolution.
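Cujia's point is the Fourier limit: the smallest frequency difference you can resolve scales as one over the observation time, Δf ~ 1/T. The numbers below are order-of-magnitude assumptions, not figures from the papers, but they show why microsecond coherence times cap the resolution at hundreds of kilohertz while clock-stitched observations lasting many minutes reach the millihertz scale.

```python
# Fourier limit: spectral resolution ~ 1/(observation time).
# Order-of-magnitude assumptions, for illustration only.
t_coherence = 2e-6    # s, typical NV coherence time of a few microseconds
t_clock = 1000.0      # s, observation stitched together with a classical clock

res_single = 1.0 / t_coherence   # ~500 kHz from one coherent measurement
res_stitched = 1.0 / t_clock     # ~1 mHz once a clock tracks the phase

print(res_single, res_stitched, res_single / res_stitched)
```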

Researchers can improve the magnetic sensitivity of NV centres by simply making multiple measurements. As the errors on successive measurements are uncorrelated, the precision improves as more measurements are made. However, the spectral resolution does not improve with such repeated, uncorrelated measurements. The three teams have surmounted this problem by synchronizing repeated NV magnetic measurements to an external clock. This allows them to keep track of time even after decoherence occurs.

“Normally, you would have to take your next measurement as an independent measurement,” explains Ulm’s Liam McGuinness. “When we did our next measurement, we already had a clock that was keeping track of time. That let us stitch together a sequence of measurements.” Indeed, the researchers could monitor an NV centre indefinitely, effectively removing the limitation imposed by NV decoherence. All the groups were able to measure megahertz-scale frequencies with sub-millihertz precision – nearly a million times better than the spectral resolution of other NV measurement protocols.
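The stitching idea can be illustrated with a toy numerical experiment: if every sample of a signal is time-stamped by a common clock, the frequency can be determined with a precision set by the total record length, not by any single short snapshot. This is a sketch of the principle only, not the teams' actual protocol, and all the numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 1.0001e6                      # Hz, signal frequency to recover (assumed)

# 2 ms of clock-synchronized samples at 10 MHz, with some added noise.
t = np.arange(0, 2e-3, 1e-7)
signal = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(t.size)

# Zero-padded FFT: the spectral line's position is pinned down by the total
# 2 ms record, far better than a microsecond-long snapshot would allow.
spectrum = np.abs(np.fft.rfft(signal, n=8 * t.size))
freqs = np.fft.rfftfreq(8 * t.size, d=1e-7)
f_est = freqs[np.argmax(spectrum)]
print(f_est)   # close to 1.0001 MHz
```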

Diffusion difficulties

McGuinness and colleagues used their measurement protocol to perform NMR spectroscopy on a nanometre-sized sample of polybutene. However, the researchers encountered a problem: “Our molecules diffuse past our NV centre,” explains McGuinness. This restricted the length of time the researchers could observe a single molecule, preventing them from obtaining a resolution better than about 1 kHz.

The Harvard group, however, came up with a solution to this problem by getting the measurement protocol to work for ensembles of NV centres in the same diamond. This means that their sample volume is slightly larger (micron sized) and their measurements suffer much less from the effects of molecular diffusion. “With current technology, you can’t use the synchronized readout technique usefully for high-spectral-resolution NMR at the nanoscale, because of the random fluctuation of the sample’s spin polarization [which impedes coherent detection of the small NMR signal],” says Harvard’s Ronald Walsworth. “At the micron scale you can.”

The Harvard researchers obtained resolutions as good as 3 Hz – nearly 100 times smaller than ever seen before in NMR using NV centres. They also observed many of the crucial features used to interpret NMR signals for the first time – including J-couplings. “That opens up a whole new world of micron-scale NMR – potentially for intracellular NMR, for example,” says Walsworth. The next step, says Walsworth, will be to try to perform genuinely new science using NV-centre NMR.

McGuinness says the new sensing protocol is a “general technique” and could find application well beyond NV centres and NMR. “We draw parallels to heterodyne, or beat-note, detection. If you have a weak laser and you want to measure its frequency, you take another very strong laser, join them together and measure the beat note. Here, instead of taking a classical laser, we take a quantum sensor.”
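McGuinness's heterodyne analogy is easy to reproduce in code: mixing a weak tone with a strong reference produces components at the sum and difference frequencies, and the slow beat at the difference frequency is what gets measured. The frequencies below are illustrative values, not from the experiments.

```python
import numpy as np

f_weak, f_strong = 1.000e6, 1.010e6      # Hz, illustrative tone frequencies
t = np.arange(0, 5e-3, 1e-8)             # 5 ms sampled at 100 MHz

# Mixing (multiplying) the tones: the product-to-sum identity gives
# components at |f1 - f2| (the beat note) and f1 + f2.
mixed = np.sin(2 * np.pi * f_weak * t) * np.sin(2 * np.pi * f_strong * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, d=1e-8)

# Low-pass, as in real heterodyne detection: keep only the slow beat.
low = freqs < 1e6
f_beat = freqs[low][np.argmax(spectrum[low])]
print(f_beat)   # ~10 kHz, the difference frequency
```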

“Important technique”

Theoretical physicist Andrew Jordan, who was not involved in the research, says that the ETH Zürich and Ulm University papers represent “a nice advance in this field…Maybe the most important parameter we have is frequency, because that sets the precision of our timekeeping devices. I think this is going to be an important technique going forward, if nothing else to calibrate people’s systems before they go on to do other applications”. He declined to comment on the Harvard research because it has not yet been through the peer-review process.

The ETH Zürich and Ulm University papers are published in Science. The Harvard research is described in a preprint on arXiv.

Simulating the universe

When Einstein proposed his general theory of relativity a century ago it meant that scientists could in theory describe the universe’s behaviour with mathematics. However, Einstein’s equations are so fiendishly hard to use that researchers were only able to apply the equations to approximations of the real universe. Now, however, two independent groups have finally used Einstein’s equations to describe reality. Find out more by reading this feature article from the May issue of Physics World.

Flash Physics: Flytrap robot catches prey, doughnut-shaped ‘planets’, EU dishes out £55m for UK physics

Flytrap soft robot catches prey

A Venus flytrap’s autonomous insect-catching ability has been replicated by a tiny soft robot. To create the device, Arri Priimägi and team from Tampere University of Technology in Finland attached a strip of light-responsive liquid-crystal elastomer to the tip of an optical fibre. Mimicking the Venus flytrap’s head, the strip of elastomer is about 10 mm long, 1 mm wide and 20 μm thick. It contains layers of ordered molecules that have a different orientation in each layer – those in the “insect-facing” layer are horizontal while those on the opposite side are vertical. The molecules in between are at an intermediate angle. When light is shone on the elastomer, the molecular alignment becomes random. This causes the insect-facing layer to contract and the other side to expand – in other words, the strip of elastomer bends like a flytrap closing. Usually a light-responsive elastomer requires external illumination, but by attaching the strip to an optical fibre, Priimägi and colleagues integrated a light source. Light shone through the optical fibre and elastomer creates a cone of illumination. When an object such as an insect enters this field of view, light is reflected back towards the elastomer, triggering it to bend and close around the object. To release the object, the light is simply turned off. The autonomous device, presented in Nature Communications, could be used for intelligent micro-robotics as well as handling delicate small objects.

Could huge doughnut-shaped “planets” exist?

Schematic of a synestia

Huge doughnut-shaped objects made from vaporized rock could be orbiting stars other than the Sun. That is the conclusion of Simon Lock of Harvard University and Sarah Stewart at the University of California, Davis, who have done calculations that suggest a new type of planetary object called a synestia could form when rocky planets collide with each other. Such an object would be about four times the diameter of Saturn’s rings and would comprise a ring of rapidly rotating vaporized rock. It would resemble a doughnut, but instead of having a hole in the middle, a synestia would have a dense planet-like object at its centre. Lock and Stewart say a synestia would form when the debris from a planetary collision is both very hot and carries a large amount of angular momentum. They also suggest that most planets could have been synestias early in their lifetimes. Small planets such as Earth would spend only a few hundred years in this phase before condensing into solid objects. However, larger or hotter objects such as gas-giant planets or even small stars could spend much longer as synestias. Although synestias have not been observed, the calculations could encourage astronomers to look for huge doughnut-shaped objects alongside rocky and gaseous exoplanets. The research is described in the Journal of Geophysical Research: Planets.

European Union dishes out £55m for UK physics

UK physics received £55m in 2014/2015 from the European Union (EU), according to a report by Technopolis Group – an independent policy research organization. Commissioned by the UK’s four national academies – the Academy of Medical Sciences, the British Academy, the Royal Academy of Engineering and the Royal Society – the report looked at how reliant UK research is on EU funding. The EU’s Seventh Framework Programme, which ran from 2007 to 2013, provided UK organizations with around €7bn, and its successor – Horizon 2020 – is providing around €1.1bn per year. This figure amounts to more than 10% of total UK government support for research and is around 5% of the UK’s gross domestic expenditure on R&D. The report finds that UK universities received around £725m in research grants from EU government bodies in 2014/2015, of which physics and chemistry each received £55m while the biosciences got £90m. As the top 10 UK universities receive almost half of the £725m funding, the report warns that this money will be “difficult to replace” after the UK leaves the EU in 2019.

Search for the ‘perfect’ theory

The term “theory of everything” was a common turn of phrase among high-energy particle theorists during the 1980s, used with varying degrees of irony. Physicists from other fields were often not amused, seeing this terminology as yet more evidence of the hubris of particle physicists. In his new book Theories of Everything: Ideas in Profile, author Frank Close uses the term unapologetically, outlining the current state of our best attempt at a unified theory that should apply to “everything”.

Currently, the closest such theory that we have is commonly known as the Standard Model of particle physics, although Close also uses an alternative name that some favour – the Core Theory. He describes some of the features of the theory, leading up to the vindication of one of its central ideas – that of a Higgs field – with the first-ever observation of the Higgs boson, made by researchers using the Large Hadron Collider (LHC) at the CERN particle-physics laboratory in Geneva in 2012. For a more detailed account of this story, Close’s 2013 book, The Infinity Puzzle, is an excellent source.

The great success of the Standard Model has left particle physicists in a difficult position: not just the Higgs, but all other results from the LHC and other particle-physics experiments so far agree perfectly with the theory. This has crushed hopes that something unexpected might be found, which would ultimately indicate a way forward to a better, more complete theory. A major goal of Close’s latest book is to put this situation in historical context, describing earlier “theories of everything” and the theoretical advances that gave new, fundamental insight into the nature of physical reality.

A crucial question about our current situation is whether we really are at, or near, the end of our search for what theoretical physicist and Nobel laureate Steven Weinberg refers to as a Final Theory, or whether there is another revolution in our understanding still to come. One often reads quotes attributed to Albert Michelson (“the grand underlying principles have been firmly established”, 1894) and Lord Kelvin (“there is nothing new to be discovered in physics now”, 1900), indicating that they, like many now, thought they were near the end of the road. That of course would have been a huge mistake, with the great revolutionary discoveries of modern physics – relativity and quantum mechanics – just a few years off.

Close points out that Kelvin’s actual 1900 speech was much more prescient, as he described “two clouds” on the horizon, pointing out that experimental results were in direct conflict with the accepted theory of that time (the Michelson–Morley experiment and the black-body radiation spectrum). For anyone trying to look for a lesson from history applicable to today’s “theory of everything”, a key question is whether any analogue of these “two clouds” can be found.

Close takes up this question and argues that there are good candidates for our “two clouds”. The first is the energy density of the vacuum, also known as the cosmological constant. Cosmological observations appear to indicate that this is a non-zero number, with an order of magnitude so small that it doesn’t fit at all with what one might expect from the Standard Model and general relativity.

The second cloud, according to Close, is the so-called “hierarchy problem”. This is a theoretical problem with our now experimentally confirmed theory of the Higgs field, which is strongly sensitive to very-short-distance phenomena. We seem to lack a convincing idea that would consistently describe what is happening at unobservably short distances, without requiring an unmotivated and very special choice of parameters in order to get the Higgs physics seen at the LHC.

An increasingly popular tactic for theorists frustrated by not having an answer to these problems is that of postulating a “multiverse”, in which our universe is but one disconnected component, born out of some process that left it with some essentially random choice of fundamental parameters. In this scenario there’s no point in worrying about why these parameters have the scales they do, since somehow the “multiverse did it”, in a manner constrained only by the “anthropic principle”, which says that the parameters must have values consistent with our existence. Close quite rightly raises the issue of whether this is really a valid explanation, since it’s one currently lacking any means to subject it to experimental test.

He then goes on to explain that the current “two clouds” seem to have a root in the same fundamental issue – the lack of a viable quantum theory of gravity that would unify general relativity with the quantum field theory of the Standard Model. For quite a few decades now, theorists have put great hopes in certain speculative ideas proposed back in the 1970s that were supposed to lead towards such a unified theory. Recent years have not been kind, though, to these proposals, with results from the LHC killing hopes for experimental evidence of one of them – supersymmetry – and the great complexity needed to get anything not obviously inconsistent with experiment making the other – string theory – less and less appealing.

I think Close is on the right track in his final argument, where he concludes: “My conjecture is that in some future theory of everything, space and time will turn out not to be fundamental and will emerge from some deeper concept. Whoever first establishes what this is will enter the pantheon of science, along with Newton, Maxwell and Einstein.”

The lack of a compelling, unified theory that can explain how the degrees of freedom fundamental to the Standard Model and its forces fit together with those describing space, time and the gravitational force is a major hole in today’s “theory of everything”. Perhaps the future will bring a new idea that tidily fills that hole, thereby dispersing Close’s clouds. It’s also possible that the clouds are indications of a storm to come, with new ideas tearing apart the Standard Model, replacing it with a quite different new “theory of everything”. I hope we’ll soon find out which route the future of physics will take.

  • 2017 Profile Books 176pp £8.99pb
Copyright © 2026 by IOP Publishing Ltd and individual contributors