
Astronomers reveal most detailed atlas of the Milky Way

The European Space Agency (ESA) has released the latest star map of the Milky Way produced by its €450m Gaia mission. Gaia’s third data set, released today, contains high-precision measurements of nearly 2 billion stars, allowing astronomers to trace the various populations of older and younger stars out towards the edge of our galaxy in the direction opposite the galactic centre – the so-called “galactic anticentre”. The new data will also let scientists study how the solar system formed, as well as measure its acceleration with respect to the rest frame of the universe.

Gaia was launched in December 2013 and started observations the following year from its position around 1.5 million kilometres from Earth in the opposite direction from the Sun. Gaia has two telescopes and the spacecraft rotates once every six hours so that they scan the sky, focusing light onto a huge CCD sensor with nearly a billion pixels – one of the largest ever flown in space. Gaia’s mission is to plot the precise positions, motions, temperatures, luminosities and compositions of over a billion stars across the Milky Way.

The first data release, based on just over one year of observations, was published in 2016 and contained the distances and motions of two million stars. It was followed by a second release in 2018, which covered the period between July 2014 and May 2016 and included high-precision measurements of nearly 1.7 billion stars as well as measurements of asteroids within our solar system.

The third data set comes in two parts, with today’s announcement being an early release of the full set, which is planned for 2022. “The new Gaia data promise to be a treasure trove for astronomers,” says Jos de Bruijne, ESA’s Gaia deputy project scientist.

Durable perovskite solar cells


Dr Anita Ho-Baillie will begin the talk with an introduction to perovskite solar cells, followed by an outline of their instability issues. She will then move on to strategies for addressing these problems, such as moisture instability. Why and how perovskite solar cells become unstable under thermal stress will be discussed, followed by strategies for overcoming this.


Dr Anita Ho-Baillie’s research interest is in engineering materials and devices at the nanoscale so that solar cells can be integrated onto all kinds of surfaces to generate clean energy. A Clarivate Highly Cited Researcher (2019), she is considered an international leader in advancing perovskite solar cells. Her achievements include setting solar-cell energy-efficiency world records in various categories.

Hot time for hard disks: why magnetic-recording technology is still going strong

When it comes to storing data, you might think that magnetic devices are rapidly becoming a thing of the past. Surely solid-state drives are the future? But when I visited Queen’s University Belfast earlier this year, I was surprised to see lots of exciting R&D going on in magnetic storage. My host was the physicist Robert Bowman, who works closely with Seagate Technology – one of the world’s largest manufacturers of magnetic storage devices and products.

Seagate is among the firms developing new approaches to magnetic storage to allow data to be stored more quickly and with higher density than ever before. Indeed, it sees no end in sight for the technology. The market-intelligence firm IDC, for example, expects the amount of data stored in such devices to rise from 33 zettabytes (33 × 10²¹ bytes) in 2018 to 175 zettabytes by 2025. If you stored that much information on Blu-ray discs, stacked together they would stretch 23 times the distance to the Moon.
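
As a rough sanity check of that comparison, the short calculation below assumes 25 GB single-layer Blu-ray discs, a thickness of 1.2 mm per disc and a mean Earth–Moon distance of 384,400 km – none of these figures appear in the article, so the result is only indicative.

```python
# Rough check of the "Blu-ray stack to the Moon" comparison.
# Assumed figures (not from the article): 25 GB single-layer discs,
# 1.2 mm per disc, mean Earth-Moon distance of 384,400 km.
data_2025 = 175e21          # projected data volume in 2025, bytes (175 zettabytes)
disc_capacity = 25e9        # bytes per single-layer Blu-ray disc (assumed)
disc_thickness = 1.2e-3     # metres per disc (assumed)
moon_distance = 384_400e3   # mean Earth-Moon distance, metres

n_discs = data_2025 / disc_capacity
stack_height = n_discs * disc_thickness
print(f"{n_discs:.1e} discs, stacked {stack_height / 1e6:.1f} million km high")
print(f"= {stack_height / moon_distance:.0f} times the Earth-Moon distance")
```

With these assumptions the stack comes out at roughly 22 times the Earth–Moon distance, in line with the quoted factor of 23.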

Over 350 million hard disks are shipped every year and by 2029 the global market is expected to surpass $80bn

In terms of cost and speed of access, magnetic storage wins hands down. And even if solid-state storage were cheaper (which it isn’t and probably never will be), there just isn’t enough fabrication capacity to physically build the storage we are projected to need in time. Over 350 million hard disks are shipped every year and by 2029 the global market is expected to surpass $80bn. Seagate’s technology facility in Northern Ireland alone makes almost 30% of the global supply of read/write heads.

Spinning a yarn

It’s incredible how far we’ve come in magnetic storage since 1888 when the US engineer Oberlin Smith first proposed storing audio in tiny metallic particles on a thread of cotton or silk. Practical difficulties stopped Smith’s ideas in their tracks, but in 1928 the German-Austrian engineer Fritz Pfleumer developed the first magnetic tape recorder – an analogue device for storing sound. Magnetic tape was first used for data storage in 1951 when the UNIVAC I computer was developed.

These days digital “linear-tape” cartridges are the cheapest form of storage, costing well below $0.01 per gigabyte. They have an immense storage capacity, with the latest cartridges able to hold 30 TB (30 × 10¹² bytes) of data – a figure that’s expected to grow to 40 TB by the end of the decade. It’s a $5.8bn market that will rise to $6.5bn by 2026 according to market intelligence company PMR, partly driven by the need for offline back-ups and copies for disaster recovery and to counter the growing “ransomware” threat.

Linear thinking

The trouble with tapes is that they’re linear strips, which means it takes time to go from one part to another. That’s one reason why we have hard disks, which allow data to be accessed more quickly by jumping from one circular ring, or track, to another. The first commercial computer to use a moving-head disk drive was IBM’s RAMAC 305, produced in 1956. Containing 50 disks, each 24 inches in diameter, it offered 5 MB of storage, which was huge back then. These days a hard disk has up to 20 TB of capacity, costs less than $0.02 per gigabyte and fits in the palm of your hand.

Hard disks are marvels of physics and technological innovation. Tiny read/write heads fly above their mirror-like surface (itself a complex multi-layer ferromagnetic coating) with a clearance of as little as 3 nm. The heads operate in a humidity-controlled, air- or helium-filled cavity, and their flying height is controlled by an air bearing etched onto each head’s disk-facing surface, attached to a photo-etched, precision-machined slider (itself controlled by an ultra-precise stepper motor).

Each hard-disk track – of which there are about a million per inch – contains bits of information recorded into tiny areas just 42 nm wide (roughly six to eight magnetic grains) and 10 nm long (barely two to three grains). The disks rotate at speeds of up to 15,000 revs per minute (as fast as the engine on a Formula 1 car) while the read/write heads themselves are fabricated on 200 mm wafers using state-of-the-art photolithography and micro- and nano-fabrication processes.
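
For a sense of the areal density those numbers imply, here is a back-of-the-envelope estimate that simply assumes each bit fills a 42 nm × 10 nm cell – an idealization, since real bit cells include guard spacing between tracks.

```python
# Back-of-the-envelope areal density from the quoted bit-cell dimensions.
# Assumption: each bit occupies the full 42 nm x 10 nm rectangle.
bit_width = 42e-9     # metres, across the track
bit_length = 10e-9    # metres, along the track
inch = 25.4e-3        # metres

bits_per_sq_inch = inch**2 / (bit_width * bit_length)
print(f"~{bits_per_sq_inch / 1e12:.1f} terabits per square inch")
print(f"~{bits_per_sq_inch / 8 / 1e12:.2f} terabytes per square inch")
```

The result – roughly 1.5 terabits, or about 0.2 terabytes, per square inch – is the right ballpark for today’s conventional drives.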

A hard disk’s unique mix of semiconductor technologies, precision machining and ultra-precise stepper motors is a staggeringly impressive piece of engineering

Modern heads are based on tunnelling magnetoresistance (a quantum effect closely related to the giant magnetoresistance recognized by the 2007 Nobel Prize for Physics), but more advanced designs are being explored to increase the density further still. Each hard disk also has multiple heads and multiple double-sided disks (depending on the storage required), plus lots of precision-control electronics. What a pity that most hard disks are encased in dull-looking metal boxes that live in thousands of dull-looking “cloud” data centres around the world.

For me, a hard disk’s unique mix of semiconductor technologies, precision machining and ultra-precise stepper motors is a staggeringly impressive piece of engineering. If modern hard disks didn’t exist and you asked a team of engineers to design one from scratch – with a sub-$50 price tag – they’d think you were crazy and give you a thousand reasons why it couldn’t be done, not at any price. Yet thanks to half a century of hard work and the power of incremental technological development, such devices do clearly exist.

A date with density

And as Bowman explained, thanks to “heat-assisted magnetic recording” (HAMR), Seagate has been able to increase the storage density of its hard disks to more than 2 terabits per square inch, with the bits written via a laser and plasmonic near-field transducer integrated into the read/write head. The firm will soon ship its first HAMR-based drives and is targeting 50 TB drives by 2026. So next time you check social media, browse the Web or look at a photo stored in the cloud, remember your data are on a hard disk on low-power standby, at a data centre somewhere in the world, waiting for your command.

The humble hard disk is one of the least humble things imaginable.

Stretchy cardiac implant measures and treats heart disease

Researchers from the University of Houston, the Texas Heart Institute and the University of Chicago have developed an implantable rubber patch that can collect information about the heart and use it to diagnose cardiac conditions.

Many cardiac devices, such as pacemakers, are limited in one of two ways. They are either made using rigid materials and unable to conform to the complex surface of the heart, or they are flexible but limited in function.

This new device is made from fully rubbery materials and can be placed directly on the heart. It can record data whilst bending with a beating heart. This marks the first time that bioelectronics have been developed based on rubbery materials with similar mechanical properties to the tissues in the heart.

Multifunctional implant

The device has several functions and can both collect information to diagnose conditions and carry out therapeutic functions. It does not need an external power source as it uses energy collected from the beating of the heart.

The implant can monitor tissue strain and temperature, as well as measure electrophysiological activity, all of which are important factors that can help diagnose a variety of heart conditions. Importantly, this device can simultaneously collect this information from multiple regions across the heart to fully map the extent of any problem. In addition to monitoring the heart, the device can also electrically pace it and perform thermal ablation to destroy damaged tissues.

The device itself is a thin elastic electronic sheet that is only 0.4 mm thick. It is based entirely on rubbery materials, from the substrate to the semiconductors. The main base of the device is PDMS (polydimethylsiloxane), a common, low-cost flexible material.

The device can be bent, stretched and crumpled without suffering any damage. In fact, it can be stretched by up to 30% and still function. As the mechanical properties of the implant are similar to those of cardiac tissue, the device can have a close interface with the heart, and the risk that the implant could damage the heart is reduced.

Flexible futures

This device is not meant to be a long-term implant, but rather a temporary diagnostic measure. With an extensive knowledge of how the heart is functioning, doctors can better diagnose and treat cardiac disorders. It could even help speed up the development of new treatments.

“Unlike bioelectronics primarily based on rigid materials with mechanical structures that are stretchable on the macroscopic level, constructing bioelectronics out of materials with moduli matching those of the biological tissues suggests a promising route towards next-generational bioelectronics and biosensors that do not have a hard-soft interface for the heart and other organs,” the researchers write.

Details of the implant are reported in Nature Electronics.

Lightning ‘superbolts’ have a unique formation mechanism, satellite studies suggest

Superbolts – the rarest and most extreme form of lightning – can be more than a thousand times brighter than regular lightning flashes and have a distinct formation mechanism, according to a new study done in the US. As well as shining new light on the origins of this impressive phenomenon, the latest research could also help inform lightning-safety efforts.

Superbolts were first reported in 1977, following observations made by the satellite constellation of Project Vela, which was a US programme to monitor the globe for nuclear explosions. Vela spotted flashes that were at least 100 times more intense than typical lightning and lasted about 1 ms. Superbolts were spotted across the globe, although more frequently over the North Pacific in regions of strong convective activity and in association with severe thunderstorms.

Since then, the superbolt mechanism and whether superbolts are a distinct form of lightning have remained subjects of debate. One argument, for example, suggests that the detection of apparent superbolts by Vela and other satellites may simply be an artefact of these observatories having a good vantage point.

Good views

Normally when lightning is viewed from space, cloud cover makes it appear dimmer than it would appear from the ground. However, Michael Peterson, an atmospheric scientist at the Los Alamos National Laboratory in New Mexico, points out that “sometimes the satellite is in just the right place to see the source with little-to-no cloud in the way, and this causes the flash to appear brighter than normal. This usually happens when the satellite is closer to the horizon and can see below the upper anvil cloud surrounding the storm core.”

In two new studies done at Los Alamos, Peterson, Erin Lay and Matt Kirkland analysed data collected by optical sensors aboard the Fast On-Orbit Detection of Transient Events (FORTE) and GOES-16 satellites to determine the brightness of superbolt events. FORTE provided twelve years of lightning observations, while two years of data gathered by the Geostationary Lightning Mapper (GLM) aboard GOES-16 were analysed.

The GLM recorded 2,021,554 superbolt events with a magnitude of at least 100 times that of the average for regular lightning, while FORTE detected 20,283. The team found that the global distribution of the least radiant superbolts mirrored that of regular lightning. The most extreme events, however, had a unique distribution, with those seen by GLM (which views the Americas) concentrated over the central US and South America’s La Plata basin.

The team determined that the brightest superbolt events typically arise from rarer positively charged cloud-to-ground flashes, rather than the negatively charged phenomena that characterize most lightning. “There is a link between superbolts and high-current positive polarity strokes in the long horizontal lightning flashes that stretch for hundreds of kilometres known as ‘megaflashes’,” says Peterson. This is unlike regular lightning, which originates in the main, negatively charged region in the mid-levels of thunderstorms – and thus carries a net negative charge to the ground.

Bolt from the blue

“Positive-polarity lightning comes from a positive charge region. The most prominent of these is usually located above the main negative charge region, so positive-polarity lightning tends to travel over longer distances to reach the ground,” Peterson adds. “This is more difficult to do, and you often see this occur outside of the thunderstorm core where it can strike as a bolt from the blue.”

The team often saw superbolts in clouds as part of larger flashes involving extensive networks of ionized lightning channels. These networks move larger amounts of charge than normal and have more compact lightning bolts. They therefore produce considerably more intense optical emissions than normal lightning.

“One lightning stroke even exceeded 3 TW of power – thousands of times stronger than ordinary lightning detected from space,” notes Peterson. “Understanding these extreme events is important because it tells us what lightning is capable of.”

Protecting wind farms

“Many of the [superbolts] occur over the oceans which deserves to be considered in lightning protection guidance, as the number of wind farms rapidly increases,” observes Martin Fullekrug, a physicist at the University of Bath.

University of Washington physicist Robert Holzworth, meanwhile, is sceptical that “true” superbolts are only associated with positive charge events – which he notes disagrees with the findings of his previous work. The Los Alamos work, he said, “brings up the question about the details of strong lightning detected by RF systems, as compared to optical satellite systems.  I think this may be a red herring.”

Much remains to be learnt about superbolts, megaflashes and how lightning illuminates storm clouds in general. “We have only recently – in the past few years – started taking the continuous space-based measurements from geostationary orbit that permit these exceptionally rare cases to be routinely identified,” Peterson explains. “The launch of the Meteosat Third Generation satellite next year will be an important milestone for space-based lightning research because it will cover a key hotspot region for lightning activity – the Congo Basin in Africa – and also a critical hotspot region for superbolts – the Mediterranean Sea.”

The research is described in two papers in the Journal of Geophysical Research: Atmospheres.

Thirty years of ‘against measurement’

“Surely, after 62 years, we should have an exact formulation of some serious part of quantum mechanics?” wrote the eminent Northern Irish physicist John Bell in the opening salvo of his Physics World article, “Against ‘measurement’ ”. Published in August 1990 just two months before his untimely death at the age of 62, Bell’s article outlined his concerns. As he further explained, “By ‘exact’ I do not of course mean ‘exactly true’. I mean only that the theory should be fully formulated in mathematical terms, with nothing left to the discretion of the theoretical physicist…until workable approximations are needed in applications.”

Although Bell spent the majority of his career as a theoretical particle physicist and worked on accelerator design at the CERN lab in Geneva, today he is best known for his contributions to deep, foundational questions that probe the meaning of quantum mechanics. Nearly a century after it was first formulated, there is still no consensus among physicists on how the theory should be interpreted. “I think I can safely say that nobody understands quantum mechanics,” Richard Feynman famously declared – a rather extraordinary admission for a foundational theory that underpins much of our understanding of modern physics.

Indeed, the debate about the interpretation of quantum mechanics, which began in 1927, continues to this day. It became polarized around the views of its two leading protagonists – Niels Bohr and Albert Einstein. In essence, this was a debate about the meaning of the theory’s central concept, the quantum wavefunction – a mathematical description of the quantum state of a system, which contains all its measurable information.

According to Bohr the wavefunction shouldn’t be taken as a literal representation of the real physical state of a real physical system. While he acknowledged its importance and significance in solving quantum problems, he insisted that “it must be recognized, however, that we are here dealing with a purely symbolic procedure, the unambiguous physical interpretation of which in the last resort requires a reference to a complete experimental arrangement” (Essays 1958–1962 on Atomic Physics and Human Knowledge, Wiley Interscience). Indeed, he is famously quoted as saying “There is no quantum world. There is only an abstract quantum physical description.”

For Bohr, the quantum formalism is a “purely symbolic procedure” that lets us use our experiences of past measurements to predict the results of future ones. On this understanding all measurements are classical, as this is the only kind of physics we can experience directly. But the quantum nature of the objects under study means that the apparatus and the way it is set up determines what we can expect to observe. With one kind of apparatus we can choose to observe the wave-like nature of a “beam” of electrons. With another kind of apparatus, we can choose to observe the particle-like nature of individual electrons. These are mutually exclusive, not because we lack the ingenuity to conceive of an apparatus to expose both types of behaviour simultaneously, but because such an apparatus is simply inconceivable.

According to Bohr what we cannot do is go beyond these complementary descriptions and say what an electron actually is when it is not being observed. This became known as the “Copenhagen interpretation”, named after the location of Bohr’s Institute for Theoretical Physics. This and other variants on the general theme of the Copenhagen interpretation are essentially “anti-realist”. This doesn’t mean that such interpretations deny the existence of an objective reality, or the reality of “invisible” entities such as electrons. It means that the theoretical representation of these entities shouldn’t be taken too literally.

Unreal interpretations

Einstein was profoundly disturbed by this. As American physicist David Bohm explained in a 1987 interview, “The main point was whether you could get a unique description of reality. And Einstein took the ordinary view of a scientist that you could, and Bohr said you couldn’t. Bohr said you were inherently limited to this use of the classical language for the experimental conditions and the results, and the symbolic mathematical description of the quantum…[Einstein] didn’t accept that Bohr’s approach could be taken as final, and Bohr insisted that it was.”

The exchanges between Bohr and Einstein are among the most dramatic in the history of science. The relatively modern folk-historical telling of the story of this debate paints Bohr as a rather dogmatic bully, browbeating the senile Einstein as the physics community looks on, cheering from the sidelines. The story goes that Bohr’s victory in the debate helped to establish the Copenhagen interpretation as a new gospel, to be accepted without question or challenge. Einstein called it a “tranquilizing philosophy”.

Bohr and Einstein

Indeed, Einstein was deeply uncomfortable with the Copenhagen interpretation, as it does not allow us even to define a definite value of a property until a measurement is made. In 1935 while at Princeton, Einstein – working together with colleagues Boris Podolsky and Nathan Rosen (collectively referred to as EPR) – threw his last thought grenade in Bohr’s direction. The trio developed a gedankenexperiment, based on the idea of creating a pair of quantum particles (such as a pair of electrons) whose properties – let’s say “up” and “down” – are “entangled”.

Quantum mechanics forbids us from knowing in advance which particle will be measured to have which property, but we do know that their properties will be correlated: if one particle is measured to be “up” then the other must be “down”, and vice versa. Suppose we let the particles move some significant distance apart, and then measure the property of one of the particles, to discover that it is “down”. This implies that we have simultaneously (and instantaneously) discovered the property of the other particle – it must be “up” – without disturbing it in any way.

If we interpret the wavefunction as a representation of the real properties of the particles, then there is nothing in quantum mechanics that tells us what these are before we make a measurement on one of them. It seems that the property of the second particle is then determined by a measurement we choose to make on a completely different particle, which can be an arbitrarily long distance away. The measurement somehow causes the wavefunction to “collapse”, and the entangled particles appear to exert influences on one another over vast distances, at speeds faster than light. This is strictly forbidden by Einstein’s theory of special relativity. Indeed, Einstein was very concerned by this apparent “spooky action at a distance”, and EPR argued that “no reasonable definition of reality could be expected to permit this”.

To Bohr, the mistake is to take the wavefunction too literally

As far as Bohr was concerned, though, there was nothing in the EPR gedankenexperiment that needed any further explanation. No matter how mysterious the correlation between distant entangled particles might seem, we can only know the properties of the particles as they are measured. To Bohr, the mistake is to take the wavefunction too literally. There is no “collapse” and no spooky action at a distance, because the wavefunction isn’t real. Quantum mechanics is complete and there’s nothing more to be said.

But why not just suppose that the properties of the particles are determined at the moment they interact or are created, and these properties are then maintained all along? Sure, there’s nothing in quantum mechanics that can account for this. But what happens if we imagine that, lying beneath the quantum formalism, there exists some kind of “hidden” physical mechanism, much like thermodynamic properties are determined by the “hidden” motions of atoms and molecules? Indeed, Einstein and colleagues used their paradox as a way of pointing out that quantum mechanics must therefore be incomplete – though they left open the question of just how it might be completed.

From drama to dogma

Despite Einstein’s grievances, Bohr was judged to have won the debate and, as time wore on, attentions inevitably shifted from endless arguments about the meaning of quantum mechanics to more practical concerns related to its application. These demanded the invention of new mathematical techniques, not endless philosophical nit-picking over the meaning of the theory. By 1949 the Copenhagen interpretation had become synonymous with general indifference or lack of interest.

Physicists who didn’t care to trouble themselves with questions of interpretation would wave in the general direction of the standard student textbooks and just shrug their shoulders. But for the endlessly curious, there were few answers to be found in these books. For example, Leonard Schiff’s Quantum Mechanics, first published in 1949, informed the teaching of the theory throughout North America, Europe and Asia through three editions spanning 20 years. It carried a rather garbled version of the Copenhagen interpretation, and did nothing to satisfy the curiosity of the young Bell, then in his final year of undergraduate study at Queen’s University in Belfast, Northern Ireland. Schiff’s brief account of measurement in quantum mechanics led Bell to conclude that Bohr was “annoyingly vague” and that, “for Bohr, lack of precision seemed to be a virtue”.

By the time Bell was beginning his career as a professional physicist, the indifference of the community had developed into outright hostility. Questions about the meaning of quantum mechanics no longer had a place in mainstream physics. In 1981 Bell summarized this general mood in a paper, writing that “making a virtue of necessity…many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one – if not actually immoral then certainly unprofessional” (Journal de Physique Colloques 42 C2 41).

Bell’s inequality

Pushing back against these “shut up and calculate” fashions of the time, Bell decided to pull on the loose thread that Einstein had first snagged. In 1964 he demonstrated that any attempt to supplement the standard quantum formalism with a system based on local “hidden variables” yields a theory that inevitably makes predictions incompatible with those of standard quantum mechanics (Physics 1 195). He summarized this incompatibility in a “no-go” theorem and a numerical inequality, which became known as Bell’s inequality.

John Bell

These constraints arise if we assume the particles to be physically separable (so-called “Einstein-separable” or “locally real”) and then seek to establish a causal relation between their properties. The standard formalism is more generous and less restrictive: it doesn’t care how we think reality ought to be organized and predicts results that violate Bell’s inequality. Bell’s theorem states that “If the [hidden variable] extension is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local.”
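
To get a numerical feel for the incompatibility, the sketch below uses the CHSH form of Bell’s inequality (a later refinement of the 1964 result) together with the textbook singlet-state correlation E(a, b) = −cos(a − b). It is an illustrative calculation, not a reconstruction of Bell’s original derivation or of any particular experiment.

```python
import numpy as np

# Quantum-mechanical correlation between spin measurements on a singlet
# (maximally entangled) pair, for detector angles a and b.
def E(a, b):
    return -np.cos(a - b)

# CHSH combination of four correlation measurements. Any local
# hidden-variable model obeys |S| <= 2.
def chsh(a1, a2, b1, b2):
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Detector settings (in radians) that maximize the quantum violation.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(chsh(a1, a2, b1, b2))
print(f"Quantum prediction: |S| = {S:.3f}")   # 2*sqrt(2), about 2.828
print("Local hidden-variable bound: |S| <= 2")
```

The quantum prediction of 2√2 ≈ 2.83 comfortably exceeds the local bound of 2 – and that gap is exactly what the experiments described below set out to measure.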

The conclusion is inescapable. Supposing that the entangled particles possess fixed properties all along simply doesn’t work. If we assume this, then our predictions conflict with those of standard quantum mechanics. Bell had provided a direct experimental test. Experiments reported in 1982 by Alain Aspect and his colleagues at the University of Paris confirmed beyond reasonable doubt the violation of Bell’s inequality, as had been indicated in earlier tests (Phys. Rev. Lett. 49 91). The standard formalism, with all its casual indifference and inscrutability, is unambiguously correct. Some “loopholes” remained, but the results were a heavy blow to those who had hoped that local hidden variables would offer an easy way out.

In the rather fraught 100-year history of the search for meaning in quantum mechanics, the early 1980s through to the early 1990s was a period of sober reflection. In an interview for a BBC Radio 3 documentary broadcast in May 1984, Bell hoped that the Aspect experiments were not the end. “I think that the probing of what quantum mechanics means must continue,” he said, “and in fact it will continue, whether we agree or not that it is worth while, because many people are sufficiently fascinated and perturbed by this that it will go on.”

Against “measurement”

In his Physics World article, Bell doubles down, restating the problems associated with the orthodox, default interpretation of the process of quantum measurement. He rails against physicists such as Paul Dirac, and others in the community, who had expressed indifference, if not outright hostility, to questions about the meaning of a theory that demonstrably works, for all practical purposes (referring to them as the “why bother?”ers). And he exposes the confused and often contradictory descriptions of quantum measurement that can be found in a selection of “good books”.

Nowhere in this article does Bell explicitly mention the Copenhagen interpretation. However, his criticism of Bohr (or, at least, various interpretations of Bohr and, through guilt-by-association, Copenhagen), is apparent. He rejects sentiments such as that expressed in Nico van Kampen’s “Ten theorems about quantum mechanical measurements” (Physica A 153 97), whose fourth theorem states “Whoever endows [the quantum wavefunction] with more meaning than is needed for computing observable phenomena is responsible for the consequences.” And this, I believe, is principally what Bell was against. Accepting this procedure at face value obliges us to resist the temptation to ask: but how does nature actually do this? Like emergency services personnel at the scene of a tragic accident, Bohr advises us to move along, as there’s nothing to see here. And there lies the rub; for what is the purpose of a scientific theory if not to aid our understanding of the physical world? We want to rubberneck at reality.

What is the purpose of a scientific theory if not to aid our understanding of the physical world?

“The idea that quantum mechanics, our most fundamental physical theory, is exclusively even about the results of experiments would remain disappointing”, wrote Bell in his article. He adds that “The aim remains: to understand the world. To restrict quantum mechanics to be exclusively about piddling laboratory operations is to betray the great enterprise.” The only way to achieve this aim within the framework of quantum mechanics is to take the wavefunction more literally and realistically. Bell wasn’t so much “against measurement”; he was against anti-realist interpretations in which applying the quantum formalism to the measurement process is a “purely symbolic procedure”, and which therefore “betrays the great enterprise”.

If extensions of quantum mechanics based on local hidden variables are excluded by experiment, then Bell argues that we must embrace non-local alternatives, such as the “pilot wave” theory associated with Louis de Broglie and David Bohm, based on the notions of real particles guided by a real wave field (and thus, accept spooky action at a distance from the outset). Or perhaps we must accept the need to introduce explicit physical mechanisms to collapse an entangled wavefunction, as proposed in the late 1980s by Giancarlo Ghirardi, Alberto Rimini and Tullio Weber, known collectively as GRW (Phys. Rev. D 34 470). The GRW theory is a “spontaneous collapse” theory, which does away with the measurement and observer issues by postulating that the wavefunction collapses randomly in both time and space.

Thirty years on

Bell was right to rail against the indifference and hostility of the physics community; against Bohr’s apparent vagueness; and against the inconsistencies of presentation in his choice selection of popular texts. But the Copenhagen interpretation had become a dogma not through acceptance of reasoned philosophical arguments or design, but by default. Bohm explained it in a 1987 interview saying, “Everybody plays lip service to Bohr, but nobody knows what he says. People then get brainwashed into saying Bohr is right, but when the time comes to do their physics, they are doing something different. That introduces confusion into physics. In fact, even [Werner] Heisenberg and [Wolfgang] Pauli did not do exactly what Bohr did.”

In the three decades since the publication of Bell’s article in this magazine, we have witnessed a succession of extraordinary experiments designed to address issues in foundational quantum physics. The results of these experiments unequivocally demonstrate the superiority of the standard quantum-mechanical formalism over extensions based on both local and so-called “crypto” non-local hidden variables. Both Bell’s inequality and a further inequality devised by physicist Anthony Leggett in 2003 (which relies only on realism, and partly relaxes the reliance on locality) have been violated in practice. Experiments on entangled triplets of photons have ruled out local hidden variables without recourse to Bell’s inequality (Nature 403 515).

There can be no doubt that the slow-to-reawaken post-war interest in foundational questions raised by Bohm, Bell and others has uncovered some remarkable phenomena that might have otherwise gone unnoticed. Growing experimental understanding of entanglement has helped to establish the wholly new disciplines of quantum information and quantum computing. But we should not overlook the simple fact that all the experimental studies of the last 50 years have failed to establish the superiority of any realist interpretation or extension of quantum mechanics. The matter of interpretation remains undecided. We still don’t know what the quantum wavefunction stands for.

For many years, on balance I preferred Einstein’s realism. I championed Bell’s rejection of the Copenhagen orthodoxy (I still do, though I think I now understand better the origin and default nature of this orthodoxy). My first book on quantum mechanics, published in 1992, was praised for its “realist honesty”. Years ago, I trained as an experimentalist, and I can tell you that it’s really hard to do experiments of any kind without a strong belief in the reality of the things you’re experimenting on. This is why I think it’s fundamentally important to unpack what it means to be a “realist”.

It’s hard to do experiments of any kind without a strong belief in the reality of the things you’re experimenting on

If you’re willing to sit back and reflect, and to set your philosophical prejudices temporarily to one side, you might be ready to at least acknowledge the possibility that the history of physics since Galileo is a history of our attempts to encode or encrypt our experiences of the world using the language of mathematics. If we are able to identify the concepts (such as mass, charge, momentum and energy) that appear in this language with the real properties of the real things that appear in this world, then we become satisfied that we can explain the physics using these concepts and this language.

But there was never any guarantee that we would continue to be able to do this ad infinitum. Philosophers have been warning us for centuries that we may eventually run into a fundamental threshold between things as they “really are” and things as they appear to be. Although my primary realist convictions remain firm, the experimental results gathered over the last 30 years, and some quiet reflection on how we actually use the quantum formalism, have forced me to question the presumption that the wavefunction represents the real physical states of real physical things. I’ve developed some real doubts.

Like the great philosopher Han Solo, I’ve got a very bad feeling about this.

Biodegradable sensor opens door to real-time monitoring of key disease biomarker

An international team of researchers has developed a novel biodegradable electronic sensor capable of measuring trace amounts of nitric oxide (NO) species. In a paper published in NPG Asia Materials, the team – from Korea University, the University of Cambridge, The Pennsylvania State University and Korea Institute of Science and Technology – outline how the implantable technology could be used to record NO concentration from the surface of organs before safely degrading into materials cleared by the body.

Nitric oxide and nitrogen dioxide, referred to collectively as NOx, play vital roles in vascular, nervous and respiratory health. Currently, commercial sensors monitor NOx concentration externally via exhaled breath. Such devices, however, may not be sensitive enough to sufficiently quantify the presence of these gases within the patient.

“It might be much more beneficial to monitor the gas levels from internal organs,” explains study author Huanyu “Larry” Cheng. Measuring NOx levels from inside the patient offers increased accuracy and sensitivity over traditional sensors.

Nevertheless, making such a device is not an easy task. The sensor must be pliable, operating under mechanical loads without compromising electrical performance. It must selectively measure NOx, even at very low concentrations. Finally, it must be removable after use. In this work, the twist – and the focus of the team headed by Chong-Yun Kang and Suk-Won Hwang – is to exploit non-toxic, biodegradable materials that are completely absorbed by the body. This means that detection is a one-and-done operation, and following implantation, no further surgery is required.

The highly flexible device uses silicon-based technology to detect NOx concentrations as low as 0.0005%. What’s more, the team determined the sensor to be 100 times more sensitive to NOx than to other well-known physiological substances – providing an edge over other promising sensors made of graphene and metal oxides.

Nitric oxide: putting the ‘NOx’ in noxious

NOx gas is a notorious air pollutant. Exposure to environmental NOx can trigger respiratory conditions such as asthma and chronic obstructive pulmonary disease (COPD). Despite this, physiological quantities of NO are critical to cardiovascular health – a discovery so important that it won the 1998 Nobel Prize for Physiology or Medicine.

Nitric oxide

Our bodies use NO as a chemical messenger, regulating blood flow to control oxygen and nutrient transport. An NO deficiency can lead to high blood pressure and the onset of atherosclerosis, or the narrowing of arteries due to fatty plaque build-up.

But NO is also toxic to nerve cells in high quantities. Excess NO production is linked to neurodegenerative conditions like Alzheimer’s disease and Parkinson’s disease.

How does the sensor work?

To create their sensor, the researchers patterned the electronic components onto a silicon nanomembrane (approximately 100 nm thick) and transfer printed them onto the biodegradable polymer poly(lactic-co-glycolic) acid (PLGA).

Silicon, as Cheng puts it, “is the building block for modern electronics”. The element is also highly sensitive to NOx. When gaseous NOx molecules adsorb onto the surface of the sensor, they deplete the top layer of electrons in the silicon. This increases the electrical resistance of the sensor with respect to the baseline response (in dry N₂ or air).
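
Responses of this kind are usually reported as the fractional change in resistance relative to that baseline. The sketch below shows that bookkeeping with purely illustrative numbers – neither the function name nor the resistance values come from the paper.

```python
def relative_response(r_gas: float, r_baseline: float) -> float:
    """Fractional change in sensor resistance relative to the baseline (dry N2/air)."""
    return (r_gas - r_baseline) / r_baseline

# Illustrative values only: for an n-type silicon channel, adsorption of an
# oxidizing gas such as NOx removes surface electrons and raises the resistance.
r_baseline = 1.0e5   # ohms in dry N2/air (assumed)
r_gas = 1.3e5        # ohms after NOx exposure (assumed)
print(f"Relative response: {relative_response(r_gas, r_baseline):.0%}")   # 30%
```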

The study recognized that gas adsorption is not the only parameter that affects electrical resistance. Mechanical forces can also impact how current flows through the system. If the sensor is to conform to the surfaces of different organs, it must remain functional under deformation.

A large-scale sensor array showed little change in the relative response rate when subjected to 1000 cycles of bending or stretching. Furthermore, the researchers used computer simulations to probe the stress and strain response of each layer within the device. They concluded that the array remains functional when stretched by up to 40% of its length.

The team assessed the selectivity of the sensor to NOx by comparing the intensity of the resistance signal to its responses to other common gaseous biomarkers, including carbon oxides and ammonia. Not only does the technology show exceptional selectivity, but it does so incredibly quickly. When introduced to the sensor, the gas was detected in just 30 s.

Crucially, the device biodegrades over a timescale suitable for tracking a patient’s NOx levels. When submerged in a body-fluid mimic (phosphate buffered saline, pH 7.4, 37°C), all electronic components degraded to non-toxic end products in just 8 hr.

The results demonstrate that the sensor operates effectively under the harsh conditions of the body. Next, the researchers plan to look at tweaking the design to monitor other bodily functions for various disease detection applications.

Cosmic Bell test probes the reality of quantum mechanics

In this interview, Johannes Handsteiner describes an experiment to test the foundations of quantum mechanics. Handsteiner was part of a team at the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, Austria, that designed a Bell test experiment using light emitted billions of years ago by distant quasars. Handsteiner explains why his team went to such lengths to close an important loophole in this famous test, which is designed to rule out the possibility of a classical explanation.

The interview was recorded in 2019 before the COVID-19 pandemic. Handsteiner now works at Quantum Technology Laboratories, a company involved in contract research in the field of Quantum Key Distribution (QKD).

Single molecules keep to the straight and narrow

A change in the position of a single molecule can determine the outcome of a chemical reaction, but studying such movements is difficult because molecular motion is random at the atomic scale. Researchers at the University of Graz in Austria, together with colleagues in Germany and the US, have now succeeded in controlling the motion of single organic molecules by using the sharp tip of a scanning tunnelling microscope (STM) to shift them 150 nm along a flat surface.

In the work, researchers led by Leonhard Grill placed dibromoterfluorene (DBTF) molecules on an extremely flat silver surface under ultrahigh vacuum at cryogenic temperatures (7 K). Since the orientation of a molecule on a surface can affect how it diffuses along that surface, Grill and colleagues decided to try rotating individual molecules of DBTF using the sharp tip of the STM. To their surprise, they found that the molecules became extremely mobile when their orientation matched that of a specific atomically close-packed configuration of the silver surface. This increased mobility appears as a bright and smooth “stripe” in the STM images that runs across a single row of atoms over the entire region scanned.

“Sender-receiver” experiment

The team then used the STM tip to apply a local electric field to single molecules. They found that the electrostatic forces induced by this field caused the molecules to move along a perfectly straight track, in a direction that depended on whether the electrostatic forces were attractive or repulsive. In this fashion, molecules could be sent or received over distances of up to 150 nm, with a final position that could be controlled within 0.01 nm. The researchers also measured the time required for the molecules to travel this distance, and calculated the speed of a single molecule to be roughly 0.1 mm/s.
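
Those two figures imply a travel time of the order of a millisecond for the longest transfers – a quick check using only the distance and speed quoted above.

```python
# Implied travel time for the longest reported transfer, from the quoted figures.
distance = 150e-9   # metres: the 150 nm maximum transfer distance
speed = 0.1e-3      # metres per second: the ~0.1 mm/s molecular speed
print(f"Implied travel time: {distance / speed * 1e3:.1f} ms")   # ~1.5 ms
```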

In a further experiment, Grill and colleagues created a “sender-receiver” set-up in which they successfully transferred a single molecule between two independent probes – like two people throwing and catching a ball. To do this, the “sender” tip applies a repulsive force to the molecule, which then moves to the exact position of the “receiver” tip. The information encoded within the molecule (such as its elemental composition and atomic arrangement) is thus transmitted, too.

Members of the team, who include researchers from Oak Ridge National Laboratory in the US, the Humboldt-Universität zu Berlin, the Leibniz Institute for Interactive Materials and Aachen University, say they now plan to study how molecular speed correlates with chemical and structural properties. “Such experiments should allow us to determine the kinetic energy and momentum of molecules and help us measure energy dissipation as the molecules diffuse or after they collide with other molecules,” Grill tells Physics World.

Full details of the present research are reported in Science.

Arecibo Observatory destroyed as metal platform collapses onto the iconic telescope

The iconic Arecibo Observatory in Puerto Rico has been destroyed after a 900-tonne metal platform suspended above the telescope collapsed around 8 a.m. local time today. The National Science Foundation (NSF) says that no injuries have been reported and that it is now “working with stakeholders to assess the situation”.

Since it opened in 1963, the Arecibo Observatory had been crucial for radio astronomy and, at 305 m wide, was the world’s second-largest single-dish telescope. Although the telescope was mainly used for radio astronomy, space weather and atmospheric science, it was also renowned for its planetary radar facility, which NASA used for near-Earth asteroid tracking and the characterisation of planetary surfaces.

On 10 August, however, one of the six 8 cm-wide auxiliary steel cables that support the telescope’s platform failed, tearing a 30 m gash through the main reflector dish as it flailed. The auxiliary cables were added in the 1990s to help balance the increased weight when the reflecting system was upgraded. Part of that upgrade involved installing the Gregorian Dome, which was also damaged as the cable crashed down.

Then on 6 November one of four main cables supporting the platform snapped with investigations showing that another two had wire breaks, increasing the likelihood of the tower platform falling and destroying the telescope. On 19 November, the NSF – one of the organizations that manages the observatory together with the University of Central Florida, Universidad Ana G Méndez and Yang Enterprises – decided to decommission the telescope on safety grounds.

Following the news, a “save the Arecibo Observatory” campaign began on social media as astronomers and members of the public also shared their thoughts on the closure using the Twitter hashtag #WhatAreciboMeansToMe. There was also a petition to the US government calling for emergency action to stabilize the telescope, which had been signed by almost 60,000 people.

“[Arecibo’s] demolition or unplanned collapse presents the potential of an environmental emergency as it lies on top of an aquifer and would affect the nearby population,” the petition states. “We urge emergency action to have the Army Corps of Engineers or another agency evaluate the telescope structure and search for a safe way to stabilize it, to provide time for other actions to be considered and carried out.”

However, that now appears impossible after reports that the structure had collapsed. “The Arecibo radio telescope platform has collapsed onto the dish, presumably the scenes are very messy on the ground,” noted James O’Donoghue, a planetary astronomer at the Japanese Aerospace Exploration Agency. “It’s a sad day for astronomy.”

The NSF says it is “saddened” by the development. “As we move forward, we will be looking for ways to assist the scientific community and maintain our strong relationship with the people of Puerto Rico,” it adds.
