
Coherent Schrödinger’s cat still confounds

The famous paradox of Schrödinger’s cat starts from principles of quantum physics and ends with the bizarre conclusion that a cat can be simultaneously in two physical states – one in which the cat is alive and the other in which it is dead. In real life, however, large objects such as cats clearly don’t exist in a superposition of two or more states and this paradox is usually resolved in terms of quantum decoherence. But now physicists in Canada and Switzerland argue that even if decoherence could be prevented, the difficulty of making perfect measurements would stop us from confirming the cat’s superposition.

Erwin Schrödinger, one of the fathers of quantum theory, formulated his paradox in 1935 to highlight the apparent absurdity of the quantum principle of superposition – that an unobserved quantum object is simultaneously in multiple states. He envisaged a black box containing a radioactive nucleus, a Geiger counter, a vial of poison gas and a cat. The Geiger counter is primed to release the poison gas, killing the cat, if it detects any radiation from a nuclear decay. The grisly game is played out according to the rules of quantum mechanics because nuclear decay is a quantum process.

If the apparatus is left for a period of time and then observed, you may find either that the nucleus has decayed or that it has not decayed, and therefore that the poison has or has not been released, and that the cat has or has not been killed. However, quantum mechanics tells us that, before the observation has been made, the system is in a superposition of both states – the nucleus has both decayed and not decayed, the poison has both been released and not been released, and the cat is both alive and dead.

Mixing micro and macro

Schrödinger’s cat is an example of “micro-macro entanglement”, whereby quantum mechanics allows (in principle) a microscopic object such as an atomic nucleus and a macroscopic object such as a cat to have a much closer relationship than permitted by classical physics. However, it is clear to any observer that microscopic objects obey quantum physics, while macroscopic things obey the classical physics rules that we experience in our everyday lives. But if the two are entangled it is impossible that each can be governed by different physical rules.

The most common way to avoid this problem is to appeal to quantum decoherence, whereby multiple interactions between an object and its surroundings destroy the coherence of superposition and entanglement. The result is that the object appears to obey classical physics, even though it is actually following the rules of quantum mechanics. It is impossible for a large system such as a cat to remain completely isolated from its surroundings, and therefore we do not perceive it as a quantum object.

While not disputing this explanation, Christoph Simon and a colleague at the University of Calgary, together with a collaborator at the University of Geneva, have asked what would happen if decoherence did not affect the cat. In a thought experiment backed up by computer simulations, the physicists consider pairs of photons (A and B) generated from the same source with equal and opposite polarizations, travelling in opposite directions. For each pair, photon A is sent directly to a detector, but photon B is duplicated many times by an amplifier to make a macroscopic light beam that stands in for the cat. The polarizations of the photons in this light beam are then measured.

Two types of amplifier

They consider two different types of amplifier. The first measures the state of photon B, which has the effect of destroying the entanglement with A, before producing more photons with whatever polarization it measures photon B to have. This is rather like the purely classical process of observing the Geiger counter to see whether it has detected any radiation, and then using the information to decide whether or not to kill the cat. The second amplifier copies photon B without measuring its state, thus preserving the entanglement with A.

The researchers ask how the measured polarizations of the photons in the light beam will differ depending on which amplifier is used. They find that, if perfect resolution can be achieved, the results look quite different. However, with currently available experimental techniques, the differences cannot be seen. “If you have a big system and you want to see quantum features like entanglement in it, you have to make sure that your precision is extremely good,” explains Simon. “You have to be able to distinguish a million photons from a million plus one photons, and there is no current technology that would allow you to do that.”
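A rough feel for the numbers comes from shot noise alone. Below is a back-of-envelope sketch (a simple Poisson model, not the authors’ simulation) comparing repeated photon counts on two beams whose mean photon numbers differ by exactly one; the per-shot fluctuations of around √N ≈ 1000 photons swamp the one-photon signal.

    import numpy as np

    # Two beams whose mean photon numbers differ by exactly one photon.
    rng = np.random.default_rng(42)
    mean_photons = 1_000_000   # a "macroscopic" beam
    shots = 10_000             # repeated measurements of each beam

    counts_a = rng.poisson(mean_photons, shots)
    counts_b = rng.poisson(mean_photons + 1, shots)

    # Poisson shot noise per measurement is ~sqrt(N), i.e. ~1000 photons...
    print(f"per-shot noise: ~{counts_a.std():.0f} photons")

    # ...so even after 10,000 repeats the error on the mean (~14 photons)
    # still dwarfs the one-photon difference we are trying to resolve.
    diff = counts_b.mean() - counts_a.mean()
    err = np.sqrt((counts_a.var() + counts_b.var()) / shots)
    print(f"difference of means: {diff:+.1f} +/- {err:.1f} photons")

Even this ideal counter would need of order a million repetitions to resolve the one-photon offset; real detectors fall far short of that, which is the crux of the argument.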

Quantum-information theorist Renato Renner of ETH Zurich is impressed: “Even if there was no decoherence, this paper would explain why we do not see quantum effects and why the world appears classical to us, which is a very fundamental question of course.” But, he cautions, “The paper raises a very fundamental question and gives us an answer in an interesting special case, but whether it is general remains to be seen.”

The research will be published in Physical Review Letters.

Physics of writing is derived at last

While humans have been writing for at least 5000 years, we have surprisingly little understanding of the physics underlying how ink moves from pen to paper. Now, physicists in South Korea and the US have worked out a theory – backed by experiment – that suggests the ink’s flow rate depends on a tug-of-war that is played out between the capillary properties of pen and paper.

The team, led by Ho-Young Kim of Seoul National University, considered two scenarios: the blot and the line. With the pen stationary, the researchers identify four main factors that affect the flow of ink: the capillary pull of the pen; the capillary pull of the pores in the paper; the surface tension of the ink; and the viscosity of the ink. When it is moving, the speed of the pen is a fifth factor.

The team’s theory defines a “minimal pen” as a simple capillary tube, not unlike the hollow reeds used by the Egyptians to make inscriptions on papyrus. The researchers approximate paper as an array of cylinders. Rough paper is modelled as narrow, closely packed pillars, while shorter, wider, more generously spaced pillars are used to approximate smoother paper. While paper is actually a much more complex matrix of cellulose fibres, the team argues that such arrays draw fluid through the same mechanisms that allow paper to draw ink from a pen.

Pulling power

The smaller cavities in paper have a greater capillary pull than the wider tube of the pen, but very small pores also restrict the flow of the ink. As long as the pores in the material are not wider than the opening in the minimal pen, rougher materials pull ink more quickly. This also explains why it is so difficult to write in pen on a piece of glass – without pores, the surface cannot draw ink. In contrast, wider pens have less capillary force, so they give up the ink more easily. As for the ink itself, higher surface tension allows it to wet the paper or pillar array more effectively, while higher viscosity slows it down. The team condensed these relationships, plus the time that the pen spends in contact with the paper, into scaling laws.
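The article does not reproduce the team’s scaling laws, but the flavour of the tug-of-war follows from two textbook relations: the Laplace capillary pressure Δp = 2γcosθ/r and the Lucas–Washburn imbibition length l(t) = √(γr t cosθ/2η). The sketch below plugs in purely illustrative values (an ink-like fluid, a 0.5 mm pen bore, 10 µm paper pores); none of the numbers come from the paper.

    import numpy as np

    gamma = 0.06      # surface tension, N/m (assumed, ink-like)
    eta = 0.01        # viscosity, Pa s (assumed, a thin glycerine solution)
    cos_theta = 1.0   # perfect wetting assumed

    r_pen = 0.25e-3   # pen bore radius: a 0.5 mm capillary tube
    r_pore = 5e-6     # paper pore radius: ~10 um pores

    def laplace_pressure(r):
        """Capillary suction of a tube of radius r, in Pa."""
        return 2 * gamma * cos_theta / r

    # The narrow pores pull far harder than the wide pen bore, so the net
    # pressure difference drains the pen into the paper:
    print(f"pen suction:  {laplace_pressure(r_pen):7.0f} Pa")
    print(f"pore suction: {laplace_pressure(r_pore):7.0f} Pa")

    # Lucas-Washburn: viscous drag makes the wetted front advance as sqrt(t),
    # which is how very small pores restrict the flow despite pulling harder.
    t = 2.0   # contact time, s
    length = np.sqrt(gamma * r_pore * t * cos_theta / (2 * eta))
    print(f"imbibed length after {t:.0f} s: {length * 1e3:.1f} mm")

The millimetre scale that emerges is comparable to the blot radii reported below, although the real model must also fold in the pen’s own capillarity and speed.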

Kim’s group confirmed the models by making the idealized pen and paper in the lab. The researchers etched silicon wafers into patterns of pillars 10–20 µm in diameter, height and spacing. Glass capillary tubes with diameters of 0.5–1 mm served as pens, filled with various concentrations of glycerine solution for mock ink. The team filmed the spread of ink blots and the drawing of lines, finding that the data matched their models.

After millennia of writing, you might think there would be nothing unexpected in these results, but Kim says that the shape of the ink in front of the moving pen was a surprise. “We showed that the actual shape is exactly parabolic – beautifully simple but nobody has predicted it,” he explains. The reason for the parabola is that the pores in front of the moving pen draw the bulk of the ink, while the pores behind are already at full capacity.

Putting pen to paper

Testing their scaling laws with a real pen and paper, the researchers held a 0.1 mm-diameter nib against ordinary paper for 2 s and predicted a blot radius of 3 mm; with the pen moving at 5 mm/s, they expected a line width of 0.82 mm. The actual line width was quite close at 0.7 mm, while the blot, at 1.3 mm, was less than half the predicted radius. The team explains the difference in terms of paper swelling: unlike rigid silicon, cellulose fibres deform as they imbibe fluid, thereby increasing the size of the pores.

Laurent Courbin of the University of Rennes, France, describes the study as fun, and one that strikes at the heart of physics. “The aim of a physicist is to identify interesting problems not yet understood and to make such problems understandable using the simplest possible theoretical models,” he says. “When such investigations involve physical phenomena we experience in our everyday life, our work is even more rewarding!”

Although the team did not directly simulate the need for pocket protectors, Kim says that “ink stains on shirts from a pen in the pocket are very good examples of the ink blot that we treated”. At last, the pragmatic have a rigorous explanation of why the nerdy plastic lining is a good idea.

This research will be published in Physical Review Letters.

LHC could shed light on superluminal neutrinos

The recent result that neutrinos appear to travel faster than light could be tested at the Large Hadron Collider (LHC), according to a pair of physicists in the US. Although the European particle accelerator would not be able to fully confirm or refute the result, it would be able to test a mechanism that is thought to occur when neutrinos move faster than light.

The result that neutrinos may travel faster than light came in September, when physicists at the OPERA experiment in Italy reported that neutrinos travelling 730 km underground appeared to arrive 60 ns too early. If the result is correct, it will contradict Einstein’s theory of special relativity, which says the speed of light is the maximum speed possible.

Indeed, many physicists have pointed out that the OPERA result seems incompatible with other reported neutrino behaviour. In 1987, for example, a burst of neutrinos from a distant supernova arrived at Earth three hours before astronomers saw the light from the event. However, if neutrinos were as superluminal as the OPERA result suggests, they would have arrived early not by three hours, but by more than three years.
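That mismatch is quick to verify, assuming the commonly quoted distance to supernova SN 1987A of roughly 168,000 light-years (a figure not given in the article):

    # Back-of-envelope check of the SN 1987A argument.
    c = 299_792_458.0               # speed of light, m/s

    # OPERA: neutrinos arriving 60 ns early over a 730 km baseline...
    flight_time = 730e3 / c         # light travel time, ~2.4 ms
    excess = 60e-9 / flight_time    # fractional speed excess, (v - c)/c
    print(f"(v - c)/c ~ {excess:.1e}")   # ~2.5e-5

    # ...scaled up to a ~168,000-light-year journey:
    lead_years = 168_000 * excess
    print(f"neutrino lead over light: ~{lead_years:.1f} years")   # ~4 years

A three-hour lead, by contrast, corresponds to a fractional excess of only about 2 × 10⁻⁹.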

Depleted at high energies

At the end of September, theorists Sheldon Glashow and Andrew Cohen at Boston University in the US highlighted another potential problem. They developed a theoretical framework that would allow neutrinos to travel slightly faster than light, in accordance with the OPERA result. However, they found that the framework opened up other processes that particle physics would normally forbid. In particular, say Glashow and Cohen, a superluminal neutrino should be able to decay into an electron–positron pair plus a less energetic neutrino. As a result, the spectrum of neutrinos at OPERA should be depleted at high energies – but this is not what the OPERA collaboration saw.

Now, Hooman Davoudiasl of the Brookhaven National Laboratory in New York and Thomas Rizzo of the SLAC National Accelerator Laboratory in California have re-examined Glashow and Cohen’s theory. True, the framework would open up neutrino decay in a vacuum, Davoudiasl and Rizzo say, but the OPERA neutrinos were travelling mostly through rock. Perhaps the rock stalls the decay for some reason – for example by making the neutrinos transform or “oscillate” into different types – which would mean Glashow and Cohen’s theoretical framework would still be compatible with the OPERA result.

If so, then Glashow and Cohen’s mechanism should turn up in other places – notably at the LHC, say Davoudiasl and Rizzo. Neutrinos are produced in the particle accelerator, for example when energetic top quarks decay, but they are not normally observed because they pass straight through the detectors. But if Glashow and Cohen’s mechanism is at work, then some of the neutrinos should themselves decay, at roughly a metre from where they are produced. To someone studying the particle trails, this decay should manifest as an energetic electron–positron pair appearing suddenly, as if from nowhere. “This is a relatively easy signal to spot at the LHC,” says Rizzo.

“Using a pile-driver to break an egg”

Glashow and Cohen agree with Davoudiasl and Rizzo’s analysis. However, they think it would be too much effort: although a positive result would favour the existence of superluminal neutrinos, a null result would only suggest that the theoretical framework is faulty. On the other hand, other “long baseline” experiments, such as the MINOS experiment at Fermilab in the US, do have the ability to refute the OPERA result. Davoudiasl and Rizzo’s experiment would be “like using a pile-driver to break an egg”, says Glashow.

Rizzo agrees that a long-baseline experiment – that is, another OPERA-like experiment that detects neutrinos sent over many kilometres – is the best way forward. But he points out that it might take more than a year for such an experiment to be conducted with sufficient statistical certainty. “It is interesting to perform as many other, albeit model-dependent, tests using as many techniques as possible while we wait,” he says. Rizzo adds that existing datasets from the LHC’s ATLAS and CMS experiments should reveal the neutrino decays, if they exist. “It may be possible to obtain the results within a matter of a few months,” he says.

The research will be published in Physical Review D and a preprint is available on arXiv.

Physicist picked as UK environment chief

By Matin Durrani


One of the big advantages of a physics degree is that it opens the door to a wide range of different careers.

In fact, relatively few physicists stay within the confines of academic research, with plenty heading off into IT, finance, industry and teaching.

And, of course, there are lots of other unusual jobs that physicists end up doing, from opera singing to beach-animal sculpture-making, some of which feature in our regular “Once a physicist” column.

Plenty of physicists also end up working in the environmental sector and it was pleasing to see that the new head of the UK’s Natural Environment Research Council (NERC) is a physicist too.

Appointed today by the Department for Business, Innovation and Skills, the new NERC boss is Duncan Wingham, who graduated with a BSc in physics from the University of Leeds in 1979 and obtained a PhD from the University of Bath, also in physics, in 1984.

Most of Wingham’s career since then has been at University College London (UCL), where he was chair of space and climate physics and later head of Earth sciences from 2005 to 2010.

Wingham was also founding director of the NERC’s Centre for Polar Observation and Modelling from 2000 to 2005, which, among other things, discovered the widespread mass loss from the West Antarctic Ice Sheet and its origin in accelerated ocean melting.

He is also currently chairman of the NERC’s science and innovation board and has been lead investigator of the European Space Agency’s CryoSat and CryoSat-2 satellite missions.

Wingham replaces Alan Thorpe, who was also a physicist.

We’ll be publishing a special issue of Physics World magazine next March on Earth sciences and we’re filming some videos at the American Geophysical Union’s meeting next month – so stay tuned for more Earth-sciences coverage.

What’s that ‘fluctuation’ at 120 GeV?

By Hamish Johnston

Last week’s particle-physics conference in Paris began with the news that CERN’s Large Hadron Collider (LHC) may have produced the first glimpse of direct CP violation in charmed-meson decays.

If that wasn’t enough to get particle physicists mildly excited, Friday’s joint announcement by the ATLAS and CMS experiments should do the trick.

Physicists working on the LHC’s two biggest experiments have pooled their data from 2011 (or at least the bits they have managed to analyse so far) to obtain the best mass exclusion yet for the Higgs boson.

The data reveal that the mass of the Higgs is unlikely to fall in the range 140–480 GeV/c². This is news because most of this energy range had not been excluded by previous colliders, including the Tevatron at Fermilab. When combined with work at other colliders, the ATLAS and CMS data suggest that the Higgs mass either falls into a window between about 110 and 140 GeV/c², or is greater than about 480 GeV/c².

Perhaps the most intriguing feature of the data is a sharp change at about 120 GeV/c². In his blog, Tommaso Dorigo explains the significance of this fluctuation, and argues that it could be the first indication of the Higgs mass, which he believes is 119 GeV/c².

In an accompanying video, physicists from the CMS experiment talk about the mass-exclusion results.

In other LHC news, researchers from the CMS experiment have published a paper in Physical Review Letters detailing the most extensive search for supersymmetry to date. Supersymmetry (or SUSY) is an attractive concept because it offers a solution to the “hierarchy problem” of particle physics, provides a way of unifying the strong and electroweak forces, and even contains a dark-matter particle. An important result of the theory is that every known particle has at least one superpartner particle, or “sparticle”.

Sadly, those waiting for a revolution in particle physics will have to wait a little longer, because no evidence for such sparticles has been found by CMS. The paper is available to read free of charge.

23 November Higgs update from CERN: Physicists on the ATLAS experiment have published a paper in Physical Review Letters that excludes a Standard Model Higgs with a mass of 340–450 GeV/c² at 95% confidence.

Was a giant planet ejected from our solar system?

A fifth giant planet was kicked out of the early solar system, according to computer simulations by a US-based planetary scientist. The sacrifice of this gas giant paved the way for the stable configuration of planets seen today, says David Nesvorný, who believes that the expulsion prevented Jupiter from migrating inwards and scattering the Earth and its fellow inner planets.

The Nice model (named after the city in France where it was devised) is currently the best explanation of why the solar system looks the way it does today. It describes how early gravitational interactions between the large outer planets and planetesimals – smaller bodies that are the building blocks of planets – would have scattered the latter throughout the solar system. This accounts for the period known as the “late heavy bombardment”, the scars of which bodies such as the Moon still bear today. It also helps explain the structure of the asteroid and Kuiper belts. However, for the model to work, the planets must have started out much closer together than their present-day locations.

Working at the Southwest Research Institute in Colorado, Nesvorný set out to explain how the planets got to their current positions. “I wanted to put everything together into a single model that explains why our solar system looks the way it does,” he told physicsworld.com. To do this, he performed computer simulations that varied the starting positions of the four outer planets: the gas giants Jupiter, Saturn, Uranus and Neptune. The planets were also given varying eccentricities – the amount their orbits deviate from being perfect circles – and different orbital inclinations. A thousand equally sized planetesimal bodies were also added. He then let the gravitational dance play out.
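The article gives only the broad strokes of the numerical set-up, but the shape of the experiment – put giant planets around a solar-mass star, integrate their mutual gravity, and see what survives – can be caricatured in a few dozen lines. The toy below uses a simple leapfrog integrator, made-up starting radii and no planetesimal disc; it is an illustration of the method, not Nesvorný’s production code, which relied on specialised symplectic integrators and thousands of long runs.

    import numpy as np

    G = 4 * np.pi**2   # gravitational constant in AU^3 / (Msun yr^2)

    def accelerations(pos, masses):
        """Sun-at-origin gravity plus planet-planet pulls (the star's
        reflex motion is ignored -- one of many simplifications here)."""
        r = np.linalg.norm(pos, axis=1, keepdims=True)
        acc = -G * pos / r**3                      # pull of a 1-Msun star
        for i in range(len(masses)):
            for j in range(len(masses)):
                if i != j:
                    d = pos[j] - pos[i]
                    acc[i] += G * masses[j] * d / np.linalg.norm(d)**3
        return acc

    # Five giants on coplanar, initially circular orbits; masses (in Msun)
    # are roughly Jupiter, Saturn, a hypothetical extra ice giant, Uranus
    # and Neptune, and the starting radii (AU) are illustrative guesses.
    masses = np.array([9.5e-4, 2.9e-4, 4.4e-5, 4.4e-5, 5.2e-5])
    radii = np.array([5.5, 8.0, 10.5, 13.0, 16.0])
    pos = np.zeros((5, 2)); pos[:, 0] = radii
    vel = np.zeros((5, 2)); vel[:, 1] = np.sqrt(G / radii)  # circular speed
    vel[2, 1] *= 1.03          # nudge the extra giant to stir things up

    dt, steps = 0.01, 200_000  # ~2000 yr in 0.01-yr steps
    acc = accelerations(pos, masses)
    for _ in range(steps):     # leapfrog (kick-drift-kick) integration
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc

    for i, r in enumerate(np.linalg.norm(pos, axis=1)):
        print(f"planet {i}: r = {r:6.1f} AU" + ("  <- escaping?" if r > 100 else ""))

A real study varies these initial conditions over thousands of runs and integrates for hundreds of millions of years; the point here is only the shape of the experiment.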

Dysfunctional giants

What Nesvorný found was surprising. “If you use four planets, in the majority of cases the solar system loses one,” he explains. With four outer planets present today, this obviously did not happen. He altered the model. “I added in more mass to the planetesimals, which could act as a gravitational glue to keep the planet [from being ejected],” he says. It did not work: of 120 four-planet simulations, only three yielded a familiar-looking solar system. “I kept running into problems. I tried all sorts of initial conditions but still things didn’t work,” he adds.

Instead, Nesvorný was forced to consider the presence of a fifth planet. “If you had four planets and the system was losing one, I thought of trying five planets to see if one was lost, leaving four,” he explains. He then ran thousands of simulations with five planets to see what happened. By varying the total mass of the planetesimals between 10 and 100 Earth masses, the models produced a familiar-looking solar system at least 23% of the time – a 10-fold improvement on the simulations that began with only four outer planets. “In all likelihood the outer solar system had more than four planets,” he says. “One got kicked out.”

This five-planet scenario also helps to explain why the Earth is still here. If Jupiter had migrated inwards, as some Jupiter-like planets have done in other solar systems, it could have hit a “danger zone” where its gravitational influence disturbed the stability of the inner planets. “Not only does Jupiter eject the fifth planet, but the interaction causes it to jump away from the danger zone,” Nesvorný explains.

Solar-system specialist Craig Agnor of Queen Mary, University of London says the simulations are “excellent work” and a “possible missing piece” linking the Nice model to today’s planetary configuration. “Had someone suggested this 25 years ago, it would have been somewhat outside the mainstream,” he adds. “However, today there are a handful of free-floating planets that appear to have been ejected from their parent stars.”

However, Agnor also warns that it “isn’t a solved problem”. “This is the beginning of the exploration of the idea that our planetary system was once unstable. How the planets formed and how that instability developed is something that really needs further study,” he explains.

The work is described in Astrophysical Journal Letters.

Prize-winning science book makes waves




Image of Gavin Pretor-Pinney with his book and award on the left and the front cover of the UK edition of The Wavewatcher’s Companion on the right. (Courtesy: The Royal Society)

By Tushna Commissariat

The Royal Society announced the winner of its 2011 Royal Society Winton Prize for Science Books today: Gavin Pretor-Pinney’s The Wavewatcher’s Companion. Sir Paul Nurse, president of the Royal Society, presented the prize to Pretor-Pinney at an award ceremony held at the society’s headquarters in London.

The society’s annual book prize, originally established in 1988, aims to encourage the writing, publishing and reading of science books – especially those that deal with complex subjects in a style that can be absorbed by a non-specialist audience. The society also awards the Royal Society Young People’s Book Prize for books that communicate science to a younger audience.

The winner of the Winton Prize is selected by a panel that first chooses a longlist of about 12 books followed by a shortlist of six books, before the winner is announced. The authors of the short-listed books each receive £1000 and the winner receives £10,000.

In The Wavewatcher’s Companion, Pretor-Pinney, who also happens to be the founder of the Cloud Appreciation Society, turns to another formation in nature that caught his eye – waves – and explores why they appear everywhere around us. On winning the prize, Pretor-Pinney said “I’m really grateful to the jury, the Royal Society and Winton Capital Management. What interests me in science is that it follows from being curious about the world around us. I hope my book motivates others to be curious too!”

The judging panel included Richard Holmes, biographer and a previous winner of the prize; Jenny Clack FRS, professor and curator of vertebrate palaeontology at the University of Cambridge; Robert Llewellyn, writer, actor and TV presenter; and Cait MacPhee, professor of biological physics at the University of Edinburgh.

The six books shortlisted were:

*Alex’s Adventures in Numberland by Alex Bellos

*Through the Language Glass: How Words Colour Your World by Guy Deutscher

*The Disappearing Spoon by Sam Kean

*The Wavewatcher’s Companion by Gavin Pretor-Pinney

*Massive: The Missing Particle That Sparked the Greatest Hunt in Science by Ian Sample

*The Rough Guide to the Future by Jon Turney

“At the heart of the scientific enterprise is a desire to explore our world, and to understand it better. The Wavewatcher’s Companion used relatively straightforward science to transform our perspective on the world around us, both visible and invisible, in a completely radical way. From Mexican waves to electromagnetic waves, it gave us a new delight and fascination in our immediate surroundings,” said Holmes, chair of the judges. He went on to say that the panel was “inspired to see waves everywhere” after reading the book and that it was a “delightful winner”.

The first chapter of each shortlisted book is available to download free of charge.

Microscope probes living cells at the nanoscale

Researchers in the US and UK say they have invented a new microscopy technique for imaging live tissue with unprecedented speed and resolution. The technique involves using the tiny tip of an atomic force microscope to tap on a living cell and analysing the resulting vibrations to reveal the mechanical properties of cell tissue. The team says that the technique could have widespread applications in medicine. However, another expert in the field suggests that the group has not demonstrated the superiority of the technique to those already available.

Atomic force microscopy (AFM) is a standard technique for obtaining images of a surface at resolutions of a few nanometres. A needle sharpened to a tip just one atom across is mounted on the end of a flexible metal bar called a cantilever. The cantilever is vibrated, causing the needle to move up and down rapidly. The needle’s tip is then brought so close to the surface that intermolecular forces between the two affect the movement of the needle. By scanning the cantilever across the sample, an image of the forces – and therefore the surface – is obtained.

One useful application of AFM is imaging living tissue. Whereas scanning tunnelling microscopy and electron microscopy usually require radical preparation of samples in ways that would kill a live organism, AFM can potentially work well with living cells because extreme preparation is not needed.

Squishy samples

There are problems, however, including the fact that the interaction between soft tissue and the needle is more complex than when imaging a hard surface. One of the researchers, Arvind Raman of Purdue University, likens it to the difference between tapping a pen on a table top and on a cotton puffball. “Instead of tapping on it, you’re going to be buried inside it and just moving up and down,” he says.

Raman and colleagues at Purdue and the University of Oxford vibrate their AFM cantilever at a frequency of 7 kHz. They then bring it into gentle contact with the sample. Because of the squishiness of the soft tissue, it sinks under the force of the needle. The presence of the tissue interferes with the vibration of the needle so that, instead of containing just one pure frequency at 7 kHz, the cantilever vibrations now contain several harmonics of this frequency.
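The emergence of harmonics can be seen in a toy signal model (a caricature of a nonlinear tip–sample contact, not the group’s actual dynamics): pass a pure 7 kHz tone through a weak nonlinearity and pick the harmonics out of the spectrum.

    import numpy as np

    f0 = 7_000.0                     # cantilever drive frequency, Hz
    fs = 1_000_000.0                 # sampling rate, Hz
    t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal (exactly 70 cycles)

    drive = np.sin(2 * np.pi * f0 * t)
    # Toy tip-sample nonlinearity: a stiffening contact distorts the pure
    # sinusoid, pumping energy into integer multiples of the drive frequency.
    deflection = drive - 0.15 * drive**2 + 0.05 * drive**3

    spectrum = np.abs(np.fft.rfft(deflection))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    for n in range(1, 5):            # amplitudes at f0, 2*f0, 3*f0, 4*f0
        idx = np.argmin(np.abs(freqs - n * f0))
        print(f"{n}*f0 = {n * f0 / 1e3:4.1f} kHz: amplitude {spectrum[idx]:8.1f}")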

By analysing these harmonics, the researchers build up maps of the mechanical properties of different sorts of soft tissue, including bacteria and two types of cell. They claim to have enhanced the speed with which such properties can be mapped by a factor of 10–1000, a breakthrough that they say could have widespread applications in medicine, including watching cancers spreading and finding out how new drugs work.

Already available?

One leading scientist in the field, however, believes that the research has a number of flaws, notably that other schemes already exist that are superior to the one described. The researcher, who has asked not to be named, argues that two commercially available atomic force microscopes can achieve greater speed and resolution in imaging soft tissue than a microscope using the method developed by the Purdue/Oxford collaboration.

Raman, however, feels that this criticism is unfair, because his group’s technique is designed to be implemented on a standard atomic force microscope. “What we’re talking about is using traditional AFM – where you don’t have to buy a new microscope. You can take conventional AFM and up the speed of that,” he says.

The anonymous critic also suggests that the needle’s permanent contact with the tissue could cause problems with friction between the needle and the tissue sample, which could introduce artefacts into the images produced. In particular, the critic says that the researchers have not ruled this out by imaging a standard sample, such as a silicone gel, and checking their results against an image made using an accepted technique to check that they get the same results.

Raman accepts that this would have been a desirable check but argues that finding a standard sample soft enough to mimic the “puffball” properties of living tissue is not straightforward. “It’s hard to find standard samples that you can use for validation,” he says. “Having said that, we are currently looking at adapting the theory to allow for intermittent contact. That is an ongoing work.”

The research is published in Nature Nanotechnology.

New tests support superluminal-neutrino claim

Physicists working on the OPERA experiment in Italy have released preliminary results of a new experiment that appears to confirm their previous finding that neutrinos can travel faster than the speed of light. In September the OPERA collaboration announced that neutrinos travelling 730 km underground from the CERN particle-physics lab in Switzerland to the Gran Sasso lab in Italy appeared to be travelling faster than the speed of light – something that goes against Einstein’s special theory of relativity.

Although some theories allow superluminal speeds for neutrinos, many in the physics community were sceptical of the finding. Shortly after the result was announced, a paper was published in the journal Physical Review Letters that argued that any superluminal neutrinos detected at Gran Sasso should have a distinct energy spectrum as a result of the Cerenkov-like emission of charged particles on the journey – something that was not evident. There also seemed to be some disagreement within the OPERA community about whether the results should be subject to further internal scrutiny before being submitted for publication in a peer-reviewed journal.

Many physicists suspect a systematic error to be the cause of the excess neutrino speed. One issue that was highlighted early on is the effect of the length of the neutrino pulses on the result. In the original experiment the pulses lasted 10.5 µs and were separated by 50 ms, and some critics had suggested that such wide pulses could introduce a systematic error in the time-of-flight measurement.

One neutrino at a time

In this latest experiment the neutrino pulses were shortened to 3 ns long and separated by up to 524 ns. As a result, the experiment is essentially looking at single neutrinos rather than bunches. “With the new type of beam produced by CERN’s accelerators, we’ve been able to measure with accuracy the time of flight of neutrinos one by one,” says Dario Autiero of the Institute of Nuclear Physics of Lyon in France.

This latest experiment involved 20 neutrinos, rather than the 16,000 studied in the previous analysis. However, Autiero claims that the new measurement delivers accuracy that is comparable to the previous work. “In addition, [the] analysis is simpler and less dependent on the measurement of the time structure of the proton pulses and its relation to the neutrinos’ production mechanism,” he says. However, Autiero adds that both results require further scrutiny.
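A crude counting-statistics estimate makes the claim plausible. If, purely for illustration, the per-event timing uncertainty were set by the pulse width alone (a uniform spread of width w has standard deviation w/√12), the two runs compare as follows:

    import math

    def stat_error(pulse_width, n_events):
        """Statistical error on the mean arrival time, in seconds."""
        sigma_event = pulse_width / math.sqrt(12)   # uniform-spread std
        return sigma_event / math.sqrt(n_events)

    old = stat_error(10.5e-6, 16_000)   # 2011 run: 10.5 us pulses, 16,000 events
    new = stat_error(3e-9, 20)          # re-run: 3 ns pulses, 20 events

    print(f"long-pulse run:  ~{old * 1e9:5.1f} ns")   # ~24 ns
    print(f"short-pulse run: ~{new * 1e9:5.1f} ns")   # ~0.2 ns

On this naive accounting the 20-neutrino run is already statistically competitive; it says nothing, of course, about the systematic errors that critics suspect.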

Jenny Thomas of MINOS – a similar neutrino experiment at Fermilab in the US – agrees. “OPERA’s observation of a similar time delay with a different beam structure only indicates no problem with the batch structure of the beam,” she says. “It doesn’t help to understand whether there is a systematic delay that has been overlooked.”

The work has been submitted to the arXiv preprint server.

When East meets West

Centennial Hall in Wrocław, where the meeting took place

By Susan Curtis

I was recently in the Polish city of Wrocław to attend the second Asian-European Physics Summit (ASEPS), where one message emerged loud and clear – scientists from West and East need to collaborate with each other more.

The summit brought together representatives of the European Physical Society with those from the Association of Asian Physical Societies. The latter is an umbrella organization that represents the physical societies of countries such as Japan, China, Korea, Australia and India.

Physicists at the meeting argued that working together is the best way to push the boundaries of scientific discovery, while policy-makers recognized that the drive for ever-more-sophisticated research facilities can only be realized by combining global resources.

There was good news in Wrocław from speakers from Japan, Korea and China, who reported that science funding is increasing across Asia. While Japan already has a reputation for research excellence, China and Korea are making big investments in basic research in a bid to move up the value chain from product supply to a knowledge-based economy. They are keen to work with European research centres to speed up that transition, and to learn from Europe’s approach to developing a structured and collaborative research infrastructure.

A good example of how that’s happening in practice is Korea’s activity in fusion research. Having established its capability with the KSTAR research tokamak, Korea has become a key member of the ITER consortium, which is building a proof-of-concept fusion reactor in the south of France. Korea plans to exploit the experience gained at ITER to build a commercial nuclear-fusion facility sometime between 2022 and 2036.

But collaborations like that are few and far between. Asian scientists have traditionally viewed the US as the best place to develop a physics career, so much so that Asia is suffering a brain drain as talented scientists relocate for better pay and research opportunities. And while some Asian scientists come to Europe to work on particular projects, very few European researchers spend significant time in Asia.

In January 2009 the EU set up a project called KORANET to investigate the reasons why. One obvious problem is the eight- or nine-hour time difference, combined with the cultural and linguistic differences that make it hard for Asian scientists to live and work in Europe, and for Europeans to move to Asia. More practical problems also discourage mobility, such as finding suitable accommodation and ensuring continuity of pension provision and healthcare insurance.

One idea suggested by KORANET is for European research organizations to set up “branches” in Asia. A particularly successful initiative has been the Sino-German Center for Research Promotion, a joint venture formed 10 years ago to co-ordinate and encourage collaborative activities between China and Germany. The Max Planck Society has also established 24 partner groups in China, which allows Chinese students and postdocs to gain research experience in Europe before returning to work in well funded, well equipped Chinese facilities.

For their part, delegates at the ASEPS event said that more exchange opportunities should be developed for small research programmes as well as for large projects, and that a network of local contact points should be set up to help scientists who are working in an unfamiliar part of the world.

To encourage student mobility – which will be crucial for future collaboration – delegates were keen to ensure mutual recognition of degrees, and suggested a joint summer school to address some of the key challenges facing young physicists, such as the need for sustainable energy technologies.

A small working group will take these ideas forward so that real progress can be reported at the next ASEPS meeting, which is due to take place in Asia at some point over the next two years.

In the meantime, don’t forget to check out our Physics World special report on China.
