
Does time dilation destroy quantum superposition?

Why do we not see everyday objects in quantum superpositions? The answer to that long-standing question may partly lie with gravity. So says a group of physicists in Austria, which has shown theoretically that a feature of Einstein’s general relativity, known as time dilation, can render quantum states classical. The researchers say that even the Earth’s puny gravitational field may be strong enough for the effect to be measurable in a laboratory within a few years.

Our daily experience suggests that there exists a fundamental boundary between the quantum and classical worlds. One way that physicists explain the transition between the two is to say that quantum superposition states simply break down when a system exceeds a certain size or level of complexity – its wavefunction is said to “collapse” and the system becomes “decoherent”.

Complex wavefunction

An alternative explanation, in which quantum mechanics holds sway at all scales, posits that interactions with the environment bring different elements of an object’s wavefunction out of phase, such that they no longer interfere with one another. Larger objects are subject to this decoherence more quickly than smaller ones because they have more constituent particles and, therefore, more complex wavefunctions.

There are already several explanations for decoherence, including a particle emitting or absorbing electromagnetic radiation or being buffeted by surrounding air molecules. In the latest work, Časlav Brukner at the University of Vienna and colleagues have put forward a new model that involves time dilation – where the flow of time is affected by mass (gravity). This relativistic effect causes a clock in outer space to tick at a faster rate than one near the surface of the Earth.

Dilating states

In their work, Brukner and colleagues consider a macroscopic body – whose constituent particles can vibrate at different frequencies – to be in a superposition of two states at very slightly different distances from the surface of a massive object. Time dilation would then dictate that the state closer to the object will vibrate at a lower frequency than the other. They then calculate how much time dilation is needed to push the frequencies so far out of step with one another that the two states can no longer interfere.

With this premise, the team worked out that even the Earth’s gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond.
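
As a rough illustration of where that millisecond figure comes from, the snippet below evaluates a decoherence-time scaling of the form τ_dec ≈ √2 ħc²/(√N k_BT g Δx) – our reading of the published result, not a formula quoted in this article – for a gram-scale object of roughly 10²³ particles held at an assumed internal temperature of 300 K.

```python
# Order-of-magnitude sketch of the time-dilation decoherence time for a
# gram-scale object in a superposition separated vertically by ~1 micrometre.
# The scaling below is our reading of the Nature Physics result; the particle
# number and internal temperature are illustrative assumptions.
import math

hbar = 1.055e-34   # reduced Planck constant, J s
kB = 1.381e-23     # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s
g = 9.81           # gravitational acceleration, m/s^2

N = 1e23           # particles in roughly a gram of matter (assumption)
T = 300.0          # assumed internal temperature, K
dx = 1e-6          # vertical separation: a thousandth of a millimetre, m

tau_dec = math.sqrt(2) * hbar * c**2 / (math.sqrt(N) * kB * T * g * dx)
print(f"estimated decoherence time: {tau_dec:.1e} s")  # ~1e-3 s, i.e. about a millisecond
```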

Beyond any potential quantum-computing applications that would benefit from the removal of unwanted decoherence, the work challenges physicists’ assumption that only gravitational fields generated by neutron stars and other massive astrophysical objects can exert a noticeable influence on quantum phenomena. “The interesting thing about this phenomenon is that both quantum mechanics and general relativity would be needed to explain it,” says Brukner.

Quantum clocks

One way to experimentally test the effect would involve sending a “clock” (such as a beam of caesium atoms) through the two arms of an interferometer. The interferometer would initially be positioned horizontally and the interference pattern recorded. It would then be rotated to the vertical, such that one arm experiences a higher gravitational potential than the other, and its output again observed. In the latter case, the two states vibrate at different frequencies due to time dilation. This difference in ticking rate would reveal which arm each part of the superposition travelled along, and once that which-path information exists, the interference pattern disappears.
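
To get a feel for how small the effect being chased here is, the sketch below takes an assumed vertical arm separation of 1 m and a caesium hyperfine clock frequency – neither number comes from the article – and estimates the fractional frequency shift between the arms, along with how long the internal “clock” would take to drift half a cycle out of phase.

```python
# Toy estimate of the time-dilation signal in a vertical clock interferometer.
# The 1 m arm separation and the caesium hyperfine frequency are assumptions
# made for illustration only.
g = 9.81        # gravitational acceleration, m/s^2
c = 2.998e8     # speed of light, m/s
dh = 1.0        # assumed vertical separation between the arms, m
nu = 9.19e9     # caesium hyperfine clock frequency, Hz (assumption)

frac_shift = g * dh / c**2            # fractional gravitational redshift between the arms
t_dephase = c**2 / (2 * nu * g * dh)  # time for the clock phases to slip by half a cycle

print(f"fractional frequency shift: {frac_shift:.2e}")  # ~1.1e-16
print(f"time to dephase by pi: {t_dephase:.2e} s")       # ~5e5 s, i.e. several days
```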

“People have already measured time dilation due to Earth’s gravity,” says Brukner, “but they usually use two clocks in two different positions. We are saying, why not use one clock in a superposition?” Carrying out such a test, however, will not be easy. The fact that the effect is far smaller than other potential sources of decoherence would mean cooling the interferometer down to just a few kelvin while enclosing it in a vacuum, says Brukner.

The measurements would still be extremely tricky, according to Markus Arndt of the University of Vienna, who was not involved in the current work. He says they could require superpositions around a million times bigger and 1000 times longer-lasting than is possible with the best equipment today. Nevertheless, Arndt praises the proposal for “directing attention” towards the interface between quantum mechanics and gravity. He also points out that any improvements to interferometers needed for this work could also have practical benefits, such as allowing improved tests of relativity or enhancing tools for geodesy.

The research is published in Nature Physics.

Graphene light bulb shines bright


The first on-chip, visible-light source that uses “wonder material” graphene as a filament has been created by an international team of researchers. The team found that small strips of freely suspended graphene, attached to metal electrodes, can reach temperatures of up to 2800 K, allowing them to emit visible light. The research, while preliminary, opens up intriguing scientific questions and possible applications such as atomically thin, transparent, flexible displays and optical interconnects for electronic circuits.

Generating light on the surface of a chip is key to developing fully integrated photonic circuits that would, in theory, use light to carry information. In many ways, the new light source functions much like an incandescent light bulb, which glows as its wire filament is heated to a high temperature via an electrical current. Among its other remarkable properties, graphene is known for its very high thermal conductivity, but this makes it hard to heat to extreme temperatures. Indeed, if graphene retained that very high thermal conductivity, it would be impossible to heat it until it glowed in the visible spectrum, because the heat would continually flow away.

Free-falling

Luckily, graphene’s thermal conductivity drops dramatically at high temperatures because of a process called “Umklapp scattering”, in which phonons scatter off one another. Graphene is normally mounted on a substrate, which acts as a heat sink and stops it from reaching such high temperatures, but research done in 2006 by Andrea Ferrari of the University of Cambridge and colleagues produced the first freely suspended graphene. In the new work, the researchers attached reliable electrical contacts to suspended graphene, something Ferrari, who was not involved in the study, describes as “much more challenging”.

The researchers had originally been investigating suspended graphene’s potential for producing nano-mechanical oscillators. Led by Young Duck Kim, then of Seoul National University and now at Columbia University in New York City, the team was studying how the oscillation frequency of graphene changes with temperature, applying a variable voltage bias to heat it, when Kim noticed something unusual. “During the measurement, I could see something blinking,” he says. “I realized that the blinking was light emission from the graphene.”

The heat is on

The team members fabricated multiple devices from both monolayer and multilayer graphene, and investigated their potential scalability. The researchers also tested devices made from graphene produced both by chemical vapour deposition and by the Scotch-tape method: all of them could effectively produce visible light. Based on theoretical models and analysis of the emission spectra, the researchers estimate that the temperature of the graphene at the centre of the suspended sheets was up to 2800 K.
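
A quick Wien’s-law check shows why that temperature is enough to give visible light: an ideal blackbody at 2800 K peaks just beyond the red end of the visible band, with a substantial visible tail – the same regime as an ordinary incandescent filament. Graphene is not a perfect blackbody, so this is only an order-of-magnitude guide.

```python
# Wien's displacement law: peak emission wavelength of an ideal blackbody
# at the estimated graphene temperature of 2800 K.
b = 2.898e-3   # Wien's displacement constant, m*K
T = 2800.0     # estimated peak temperature of the suspended graphene, K

lam_peak = b / T
print(f"blackbody peak at {T:.0f} K: {lam_peak * 1e9:.0f} nm")  # ~1035 nm, just beyond visible red
```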

By placing a layer of silicon behind the graphene, the researchers produced interference between the light emitted directly from the graphene’s front, unblocked side and the light emitted towards the silicon and reflected back off it. This led to constructive interference at some frequencies and destructive interference at others, thereby tuning the colour of the light produced. The devices are, at present, between 0.3% and 0.45% efficient – low for light sources in general but 1000 times higher than previous graphene-based radiation emitters, which used graphene mounted on a substrate. Those earlier devices could not emit at frequencies above the infrared because the temperature of the graphene was limited. The researchers are currently working to increase the amount of light emitted at desired wavelengths, for example by using photonic crystals and optical cavities.
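
A toy interference model illustrates the colour tuning. Assuming the downward-emitted light picks up a π phase shift when it reflects off the higher-index silicon, and taking a purely illustrative graphene–silicon gap of 1 µm (the real device geometry is not given here), constructive interference falls at wavelengths λ = 4d/(2m − 1):

```python
# Toy model of colour tuning: light emitted downwards reflects off the silicon
# a distance d below the graphene and interferes with light emitted directly
# upwards. A pi phase shift on reflection is assumed and the silicon's
# wavelength-dependent refractive index is ignored; d is illustrative only.
d = 1.0e-6  # assumed graphene-silicon gap, m

for m in range(1, 8):
    lam = 4 * d / (2 * m - 1)       # constructive-interference wavelength, m
    if 380e-9 <= lam <= 750e-9:     # keep only the visible band
        print(f"m = {m}: constructive at {lam * 1e9:.0f} nm")
```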

Futuristic displays

The team suggests its device could have multiple applications, including in transparent and flexible lighting displays. “Graphene is well known as a transparent and flexible electrode material,” says Kim. “Now, we can also use graphene as a light source.” In addition, the researchers are investigating its potential to interface electronic circuits with optical ones – crucial to optical logic applications. “The heating and cooling are very fast,” Kim explains. “We expect to be able to turn on and off within 10 picoseconds, which means we can maybe achieve data transfer at 100 gigabits per second.”

“It’s a nice result. It’s of course a new result, but it’s not totally surprising,” says Ferrari, adding that the next step would be “to conduct similar experiments with other 2D materials such as molybdenum disulphide that may also become light emitters in this particular condition”. He is more sceptical about the potential for short-term technological application, however. “The technique [to produce suspended graphene] is very laborious,” he says. “So the question is: can this be easily mass-produced? At the moment, the answer is probably no, but in the future, one can find a way of making useful devices.” Ferrari notes, however, that, once backed with silicon, graphene ceases to be transparent, which might make transparent displays difficult.

The research is published in Nature Nanotechnology.

Busting dust deep underground in SNOLAB

By Hamish Johnston at the CAP Congress in Edmonton, Alberta

I’m a bit of a DIY enthusiast and one thing that I know about drilling into a masonry wall is that you should hold a vacuum-cleaner hose to the hole or you will end up with dust all over the wall and the floor below. Believe it or not, that is exactly what workers at SNOLAB in Canada do in order to keep background levels of radiation from affecting their dark-matter and neutrino detectors.


The cradle of modern science

Why did Newton’s laws take the form they did? We are so familiar with the three laws of motion, the inverse square law of gravity and even “toys” such as Newton’s cradle that we can easily overlook how such achievements were actually reached and how they fit in with our present understanding of the world.

In To Explain the World: the Discovery of Modern Science, the particle physicist and Nobel laureate Steven Weinberg addresses this gap by giving us a historical tour of the development of the scientific method as we know it today. On the way, he gives us plenty of his own comments on what he, as a physicist, thinks about the way we got there.

This story is fascinating, with plenty of ups and downs throughout history, yet a big question remains: why does Weinberg avoid placing this narrative squarely in the context of the philosophy of science? After all, he is dealing with some big questions, such as the degree to which laws really “explain” the world and the sense in which modern science was there to be “discovered”. Many philosophers have already pored over these topics in much greater depth, so why not make use of their expertise?

I will return to this question later, for Weinberg’s tour is worth taking in itself. He guides us through four main “landscapes” of scientific thought, starting with the early years of ancient Greece and some of the first major efforts to speculate about nature. Once these foundations are laid, he moves on to the great developments in Greek astronomy that culminated in the competing Aristotelian and Ptolemaic theories of planetary motions.

The next landscape is the Middle Ages, a more multicultural mix of Greek thought and contributions from Arab science and the European universities – all of which prepared the ground for the scientific revolution that took place in the 17th century.

Here, Weinberg plunges us into the writings and thinking of the familiar characters of Copernicus, Galileo, and Kepler, ending with Newton and the beginning of what we would call “modern science” (the final landscape in this journey).

On the way, he introduces us to less familiar but important contributors, such as Hipparchus, who is generally regarded as the greatest astronomical observer of the ancient world, and al-Khwarizmi, an Arab scientist of the 8th century who brought us our familiar “Arabic” numbers (which are actually of Indian origin).

One aspect of the book that I particularly enjoyed was the inclusion of many quotes from works that I knew of, but had never had the time to look up myself, such as Aristotle’s On the Heavens (containing his empirical evidence for the spherical shape of the Earth) or the annotated quotes of the three laws of motion from Newton’s Principia. Some of Weinberg’s comments on the way we do science also serve as a good reminder for us today. For example, Newton achieved what he did by realizing the power of being able to predict simple phenomena (the law of gravity) without necessarily being able to explain the more complicated aspects (what gravity is). Very good advice; start with the simple.

The book is not overlong (the main text runs to just 267 pages) and is fairly easy to follow. It is technically thorough from the physics point of view, and the generous endnotes, bibliographic sources and index make it a useful resource for further exploration. On the other hand, though, I rather suspect that philosophers of science will have a field day with it.

While Weinberg offers some useful historical commentary, and makes some attempts to acknowledge certain aspects of the philosophy of science (such as the uncertainty of theories or the possibility that the Standard Model could change significantly), the book’s lack of connections to important previous works on the subject considerably weakens Weinberg’s attempts to shore up the position of realism – the idea that an objective reality exists that is independent of human beliefs, philosophical systems and so on.

Indeed, there is a stark contrast between Weinberg’s evident suspicion of philosophers (he does not appear to appreciate the philosophy of Bacon, Descartes or Kuhn much at all) and his obvious desire to contribute to the philosophy of science by putting forward his own arguments for realism and reductionism.

An example of this reluctance to engage may be found in the way in which Weinberg describes the separation between science and religion during the scientific revolution. He calls this a “divorce” brought about by the need to “outgrow a holistic approach to nature”. The classical view, though, is that the shift had more to do with the desire of Bacon, Galileo, Newton and others to remove personal bias from scientific endeavour (despite their having religious convictions). This fundamental change in thinking – one that is crystallized in the writings of Descartes as being a separation between matter and spirit, res extensa and res cogitans – is what gave modern science its objectivity. Why does Weinberg celebrate the pleasure scientists experience when a law is “discovered” and not want philosophers to experience the same pleasure in discovering patterns within the history of the scientific method?

In some ways, Weinberg’s desire to keep philosophy at a respectable distance is understandable. There are two major reasons for this. The first is the difficulty scientists have in getting a handle on the philosophy of science as a subject. To remedy this difficulty, I would recommend having Mel Thompson’s very accessible Introduction to the Philosophy of Science (2001 Hodder Headline) at hand when reading To Explain the World in order to help judge what Weinberg is saying. The second, more fundamental, reason for keeping philosophy at arm’s length is due to the attacks on science made by adherents of social constructivism. The Sokal–Bricmont affair of the mid-1990s, in which Weinberg was involved, illustrates well the disturbing gulf that still exists today between science and some strands of philosophy.

This incident was part of what was dubbed the “Science Wars”, although this was in fact less a meeting of armies (or minds) than the lobbing of boulders from entrenched positions (see for example Knowledge and the World, M Carrier et al., Springer 2004). If Weinberg wants to reinforce the position of realism – a worthy cause – I think he would do better to reconcile himself with classical philosophy of science. By admitting its usefulness and achievements, he would be better equipped to fight against the real enemy of science – the relativism at the base of post-modernism – without falling into the extremism of scientism. He could also take courage from a more recent philosophical movement, known as speculative realism, that appears in, for example, the work of Quentin Meillassoux (After Finitude) and the physicist-philosopher Bernard d’Espagnat (Veiled Reality).

In this way, instead of skirting round the battlefield, Weinberg could take us into it, equipped with the sort of weapons that philosophers understand and respect, to defend science against relativism in hand-to-hand combat. And, on the way, by raising again questions about the basis of science, radical progress might be made in such fields as reconciling quantum mechanics with general relativity – just as Kuhn observed often happens before a paradigm change.

• 2015 Allen Lane/Harper Collins £20.00/$28.99 hb 432pp

The physics of Alzheimer’s disease

By Hamish Johnston at the CAP Congress in Edmonton, Alberta

One promising route to understanding the causes of Alzheimer’s disease (AD) – and hopefully finding a cure – is the study of how and why proteins in the brain sometimes form neurotoxic plaques. These plaques are disc-like structures that are about 50 µm in diameter and made from polypeptides. Their presence in the grey matter of the brain is strongly associated with AD and some other neurological conditions, but neither why they form nor how they cause dementia is understood.


Getting the best glimpse of first-generation stars in the universe

The first possible detection of the earliest stars formed in our universe has been made by an international team of researchers. Using ESO’s Very Large Telescope, the team discovered the brightest distant galaxy yet observed in the early universe, and in the process found evidence for the as-yet-undetected first generation of massive stars that lie within it. While there has so far been no conclusive physical proof of their existence, astronomers have been keen to study these early behemoths, as they had a significant effect on the environment of the nascent universe.

Immediately after the Big Bang, the only chemical elements that existed were hydrogen, helium and trace amounts of lithium. All heavier elements, such as oxygen, nitrogen, carbon and iron – referred to as “metals” by astronomers – require stars to form. Some are forged in the high-pressure centres of stars, and the heaviest were created when the first, enormous stars exploded as supernovae.

Elusive population

The stars we currently observe in our universe fall into two categories: metal-rich “population I” stars such as our Sun, which contain large amounts of heavier elements, and older, metal-poor “population II” stars, such as those in the Milky Way’s halo, which contain less recycled material.

One population has remained conspicuously absent, however: the very first stars, which should have formed before any of this recycled material existed. These hypothetical, extremely metal-poor stars, termed “population III” stars, should have started forming between 10⁶ and 10⁷ years after the Big Bang. Not only would these early stars have been extremely hot and enormous – several hundred or even a thousand times more massive than the Sun – they would have exploded as supernovae after only about two million years, seeding the universe with the metals needed to form population II stars. But while they have been theoretically predicted, they have yet to be directly detected, as spotting these stars is very difficult. They were extremely short-lived and would have shone at a time when the universe was largely opaque to their light, towards the end of the “dark ages”.

A team led by David Sobral of the University of Lisbon and Leiden Observatory in the Netherlands may have changed this picture with the recent detection of an extremely bright galaxy in the early universe. Because the team’s survey looks at exceedingly distant galaxies, it lets us look back in time, revealing the universe as it was a mere 800 million years after the Big Bang. The survey uncovered several unusually bright galaxies, including the brightest galaxy ever seen at this distance – an important discovery in itself.

But further scrutiny of this galaxy, named CR7, produced an even more exciting find – a bright pocket of the galaxy contained no sign of any metals, and further observations with other telescopes confirmed this initial detection. “By unveiling the nature of CR7 piece by piece, we understood that not only had we found by far the most luminous distant galaxy, but also started to realize that it had every single characteristic expected of population III stars,” says Sobral.

Formation waves

Sobral and his team postulate that we are observing this galaxy at just the right time to have caught a cluster of population III stars – the bright, metal-free region of the galaxy – at the end of a wave of early star formation. The observations of CR7 also suggest the presence of regular stars in clumps around the metal-free pocket. These older, surrounding clusters may have formed stars first, helping to ionise a local bubble in the galaxy and allowing us to now observe the light from CR7.

It was previously thought that population III stars might only be found in small, dim galaxies, making them impossible for us to detect. But CR7 provides an interesting alternative: this galaxy is bright, and the candidate population III stars are surrounded by clusters of normal stars. This suggests that these first-generation stars might in fact be easier to detect than was originally thought.

Additional observations with other telescopes will help to confirm the identity of these stars. In particular, NASA’s upcoming James Webb Space Telescope, set to launch in 2018, is expected to further advance the pursuit of the earliest galaxies and stars in the universe.

The work is to be published in the Astrophysical Journal. A preprint is available on arXiv.

Muzzling scientists leads to mistrust of government, says physicist MP

By Hamish Johnston at the CAP Congress in Edmonton, Canada

Yesterday I caught up with Ted Hsu, who is the member of the Canadian parliament for Kingston and the Islands – and a former physicist. Hsu is a member of the Liberal party, which means he sits on the opposition benches. There he has been an outspoken critic of the current Conservative government over its apparent “muzzling” of scientists on the federal payroll. He believes that when governments seek to silence their experts, it leads to more public mistrust of government.


Physics World 2015 Focus on Optics & Photonics is out now

By Matin Durrani

With the 2015 International Year of Light now in full swing, it’s time to tuck into the latest focus issue of Physics World, which explores some of the latest research into optics and photonics.

The focus issue, which can be read here free of charge, kicks off by looking at the giant laser interferometers underpinning the latest searches for gravitational waves. We also report on recent efforts to use optical instead of radio waves for satellite communication and have an interview with Ian Walmsley from the University of Oxford about the vital role that optics and photonics play in the UK’s new £270m Quantum Technologies Programme.


US cosmic-ray observatory set for expansion

The Telescope Array observatory in Utah, US, is set for a $6.4m upgrade that will see some 400 detectors added to the facility. This will quadruple its collecting area from 730 km² to 2500 km². Increasing the size of the northern hemisphere’s largest ultra-high-energy cosmic-ray detector will allow astronomers to learn more about the origins of the most energetic particles in the universe.

When a cosmic ray hits the Earth’s atmosphere, it produces a cascade of secondary particles. The Telescope Array currently has 507 scintillation detectors, which generate light in response to incident radiation. By detecting the cascade of particles, astronomers can then obtain information on the direction and energy of the original ray.
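
As a minimal illustration of how relative timing yields a direction – this is a generic plane-front fit, not the Telescope Array’s actual reconstruction chain – the sketch below fits a shower front travelling at the speed of light to simulated trigger times on a hypothetical grid of ground detectors.

```python
# Generic plane-wavefront fit: recover a shower's arrival direction from the
# relative trigger times of ground detectors (illustrative only, not the
# Telescope Array's real reconstruction). For detectors at z = 0 the model is
#   c * t_i = c * t0 + ux * x_i + uy * y_i,
# where (ux, uy) are the horizontal components of the shower-axis unit vector.
import numpy as np

c = 2.998e8  # speed of light, m/s

# Hypothetical detectors on a 1.2 km grid (positions in metres)
xy = np.array([[i * 1200.0, j * 1200.0] for i in range(4) for j in range(4)])

# Simulate a shower arriving from zenith 30 deg, azimuth 40 deg
zen, azi = np.radians(30.0), np.radians(40.0)
u_true = np.array([np.sin(zen) * np.cos(azi), np.sin(zen) * np.sin(azi)])
rng = np.random.default_rng(1)
t = (xy @ u_true) / c + rng.normal(0.0, 20e-9, len(xy))  # add 20 ns timing jitter

# Least-squares fit for (c*t0, ux, uy)
A = np.column_stack([np.ones(len(xy)), xy])
ct0, ux, uy = np.linalg.lstsq(A, c * t, rcond=None)[0]

zen_fit = np.degrees(np.arcsin(np.hypot(ux, uy)))
azi_fit = np.degrees(np.arctan2(uy, ux)) % 360
print(f"reconstructed zenith  = {zen_fit:.1f} deg (true 30.0)")
print(f"reconstructed azimuth = {azi_fit:.1f} deg (true 40.0)")
```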

Scintillating science

Many astrophysicists believe that ultra-high-energy cosmic rays are just protons, although some argue that they may include helium and nitrogen nuclei. Possible sources for the highest-energy rays include active galactic nuclei, supernova remnants and colliding galaxies. The Telescope Array, which started collecting data in 2008, is able to observe cosmic rays with energies greater than 1 × 10¹⁸ eV.

The observatory involves institutions from Belgium, Japan, Russia, South Korea and the US. Japan has announced it will provide ¥450m ($4.6m) to fund the majority of the expansion, with researchers seeking to find the remaining $1.8m to complete the upgrade. One possible source could be the National Science Foundation in the US. The new detectors will be built over the next three years.

“These experiments are very large because the flux of cosmic rays at the highest energies is very low, about two per square kilometre per century,” says Gordon Thomson, co-principal investigator for the Telescope Array and an astrophysicist at the University of Utah.
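
Thomson’s flux figure squares with the detection rate quoted in the next section: two events per square kilometre per century over the present 730 km² array corresponds to roughly 15 events per year, rising to about 50 once the array covers 2500 km².

```python
# Back-of-the-envelope check using the figures quoted in this article.
flux = 2 / 100.0                          # events per km^2 per year (2 per km^2 per century)
area_now, area_expanded = 730.0, 2500.0   # collecting areas, km^2

print(f"current array:  ~{flux * area_now:.0f} events per year")       # ~15
print(f"expanded array: ~{flux * area_expanded:.0f} events per year")  # ~50
```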

Cosmic hotspot

The Telescope Array has previously identified a possible “hotspot” of ultra-high-energy cosmic rays centred on the constellation Ursa Major. The observatory detects around 15 events per year, with a quarter in the hotspot – although this could be a statistical fluctuation. “If cosmic rays were [uniform], we would expect 0.9 events per year in the hotspot area of the sky, whereas we see about 3.5 events per year on average,” says Thomson. To confirm and study the hotspot, researchers need to collect more data. “With the fourfold increase in data rate from a four-times-larger detector, we expect to answer interesting questions about the origin of cosmic rays,” he adds.

The expansion, when complete, will make the Telescope Array similar in size to the Pierre Auger Observatory in Argentina, which is currently the world’s largest cosmic-ray detector with a 3000 km² collecting area. “Having observatories of a similar size in both the northern and southern hemispheres will enable full-sky surveys of ultra-high-energy cosmic rays,” says Karl-Heinz Kampert, a particle physicist at the University of Wuppertal in Germany who is co-spokesperson for the Pierre Auger Observatory. “The Telescope Array suffers a lot from the relatively low rate of events set by the present area, so expanding the detector array is a natural step – this should strengthen indications of a hotspot seen in the northern sky.”

Debating warp bubbles and quark novae over beer and samosas

[Photograph: Miguel Alcubierre lecturing in Edmonton]

By Hamish Johnston at the CAP Congress in Edmonton, Canada

The first day of the Canadian Association of Physicists (CAP) Congress at the University of Alberta in Edmonton closed yesterday on the theme of time travel. Surely that is science fiction, you are thinking? But Miguel Alcubierre of the National Autonomous University of Mexico (UNAM) wasn’t joking when he delivered the Herzberg Memorial Lecture yesterday evening (although he did giggle a lot during his talk, which was very endearing). The session was called “Faster than the speed of light” and it was a fascinating romp through some of the more bizarre implications of Einstein’s general theory of relativity (GR) – which is 100 years old this year.

