Neutrinos spotted from Sun’s main nuclear reaction

Physicists working on the Borexino experiment in Italy have successfully detected neutrinos from the main nuclear reaction that powers the Sun. The number of neutrinos observed by the international team agrees with theoretical predictions, suggesting that scientists do understand what is going on inside our star.

“It’s terrific,” says Wick Haxton of the University of California, Berkeley, a solar-neutrino expert who was not involved in the experiment. “It’s been a long, long, long time coming.”

Each second, the Sun converts 600 million tonnes of hydrogen into helium, and 99% of the energy generated arises from the so-called proton–proton chain. And 99.76% of the time, this chain starts when two protons form deuterium (hydrogen-2) by coming close enough together that one becomes a neutron, emitting a positron and a low-energy neutrino. It is this low-energy neutrino that physicists have now detected. Once this reaction occurs, two more quickly follow: a proton converts the newly minted deuterium into helium-3, which in most cases joins another helium-3 nucleus to yield helium-4 and two protons.
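
Written out explicitly, the sequence described above is the standard pp-I branch of the chain (a textbook summary rather than anything taken from the Borexino paper); the neutrino that Borexino has now detected comes from the first step:

```latex
\begin{align}
  p + p &\rightarrow {}^{2}\mathrm{H} + e^{+} + \nu_{e} \\
  {}^{2}\mathrm{H} + p &\rightarrow {}^{3}\mathrm{He} + \gamma \\
  {}^{3}\mathrm{He} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He} + 2p
\end{align}
```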

Unexpected measurement

Neutrinos normally pass through matter unimpeded and are therefore very difficult to detect. However, the neutrinos from this reaction in the Sun are especially elusive because of their low energy. “It’s a measurement that we weren’t really expected to do,” says Andrea Pocar, a physicist at the University of Massachusetts at Amherst who is part of the Borexino experiment.

The Borexino detector is a large sphere containing a benzene-like liquid that is located deep beneath a mountain at the Gran Sasso National Laboratory to shield the experiment from cosmic rays. Occasionally, a neutrino will collide with an electron in the liquid and the recoiling electron will create a flash of ultraviolet light that can then be detected.

Pocar told physicsworld.com that the main experimental challenge is in the liquid itself. It contains carbon, some of which is radioactive carbon-14 that also emits electrons. To minimize this problem, the scientists derive the liquid from petroleum so ancient that most of the troublesome carbon isotope has already decayed.

The standard solar model predicts that 60 billion neutrinos from the Sun’s main nuclear reaction pass through a square centimetre on Earth each second; the scientists measure an actual flux of 66±7 billion neutrinos. “It’s a very direct confirmation that what we have been saying about the Sun is correct,” Pocar says.

Neutrino experiments have not always been so kind to solar theory. In fact, the first detected solar neutrinos, which arise from a rare nuclear reaction involving boron-8, showed a deficit when compared with theory. This was resolved when physicists showed that electron neutrinos – the type the Sun produces – can change into other types, which previous experiments did not detect.

Wild proposals

Stan Woosley, an astronomer at the University of California, Santa Cruz, recalls those times. “There were all sorts of desperate things going on to try to understand why the neutrinos weren’t what was expected,” he says, mentioning one wild proposal that suggested that the Sun had a black hole at its heart. “People used to taunt those of us who did stellar evolution: ‘How can you believe anything you do when you can’t even understand our own star?’.” Woosley, who is not involved in the Borexino experiments, therefore finds the new result “really gratifying”.

While the Borexino result will bring some solace to solar physicists, a new controversy has erupted. A decade ago, some astronomers claimed that the Sun has far less carbon, nitrogen and oxygen than had been thought. Fortunately, 1% of the Sun’s energy arises not from the proton–proton chain but instead from the CNO cycle, in which carbon, nitrogen and oxygen nuclei catalyse the hydrogen-to-helium reaction. These reactions spawn neutrinos, too, and the more carbon, nitrogen and oxygen the Sun has, the more of these neutrinos there should be.

But CNO neutrinos are so rare that no-one has yet detected them. Will Borexino succeed? “I’ll call it 50/50,” Pocar says.

The scientists have published their work in Nature.

What can you learn at a quantum 'boot camp'?

By Tushna Commissariat in Stockholm, Sweden

Google the word “quantum” and take a look at what comes up.

In addition to the obvious news articles about the latest developments in the field and the Wikipedia entries on quantum mechanics, you’ll undoubtedly come across a heap of other, seemingly random, stories.

I found, for example, a David Bowie song being compared to a quantum wavefunction (by none other than British science popularizer Brian Cox), as well as a new cruise ship being named Quantum of the Seas. Then there’s the usual jumble of pseudo-scientific “wellness” therapies that misguidedly adopt the word in a strange attempt to give their treatments some sort of credibility.

So while it seems that everyone is talking about quantum something or other, how much do we really understand this notoriously difficult subject? More to the point, how much do science journalists, like me, really know about the subject? I write stories about quantum mechanics from time to time for Physics World and the subject can, I assure you, be fiendish and quite mind-bending.

CLOUD experiment at CERN aids climate predictions

In its fifth assessment report published last year, the Intergovernmental Panel on Climate Change identified atmospheric aerosols and their influence on clouds as the largest source of uncertainty in current climate models. Hundreds of these tiny particles – ranging from a few nanometres to a few micrometres across – are present in every cubic centimetre of atmosphere, with higher concentrations in urban environments. In addition to cooling the climate by reflecting or absorbing solar radiation, atmospheric aerosols also provide the seeds for all cloud droplets. Some cloud seeds enter the atmosphere directly from dust or sea spray, but around half are produced indirectly by the nucleation of trace vapours to form tiny molecular clusters that grow by condensation.

This process is poorly understood, but an experiment called CLOUD at CERN, the particle-physics laboratory in Geneva, is now providing scientists with some unique perspectives on it. At its heart is an electropolished stainless-steel cylinder – with a volume of roughly 26 m³ – in which researchers recreate a portion of the atmosphere under precisely controlled conditions. Earlier this year, the 50-strong CLOUD collaboration reported its latest results, revealing that biogenic vapours emitted by trees have a significant impact on the formation of the aerosols and clouds that help cool the planet.

“The reason why it has taken so long to understand the vapours responsible for new particle formation is that they are present in minute amounts – near one molecule per trillion air molecules,” explains CLOUD spokesperson Jasper Kirkby of CERN. “Reaching this level of cleanliness and control is at the limit of current technology, and CERN’s expertise in materials, gas systems and ultra-high-vacuum technologies has been crucial,” he says.

Cosmic origins

CLOUD, which stands for Cosmics Leaving Outdoor Droplets, is designed to explore the link between galactic cosmic rays and cloud formation. First proposed in the 1970s, the idea is that charged particles that strike the Earth’s atmosphere produce ions that help new aerosol particles nucleate. To mimic such processes the CLOUD chamber is bombarded by beams of charged pions from CERN’s Proton Synchrotron.

A bespoke gas system and strict use of clean materials in the chamber let the CLOUD researchers achieve much lower concentrations of contaminants than previous experiments could. Indeed, the team even produces its own synthetic air from cryogenic liquid oxygen and nitrogen, since natural air cannot be purified sufficiently. The temperature of the chamber can be varied between +30 °C and –70 °C and kept steady to within 0.01 °C, allowing researchers to reproduce any region of the troposphere. In order to simulate sunlight, which is necessary for photolytic reactions, the chamber is fitted with ultraviolet lamps linked to a quartz fibre-optic system and 250 optical-fibre vacuum feedthroughs.

In 2011 CLOUD reported its first results, revealing that trace sulphuric-acid and ammonia vapours at high altitudes can rapidly cause clusters to nucleate and become the seeds for clouds. The results also showed that ionization from cosmic rays can make nucleation occur up to 10 times faster. On the other hand, the data showed that these vapours could account for only one-thousandth of the rate of aerosol formation actually observed at low altitudes (Nature 476 429).

Two years later, the collaboration showed that amines – derivatives of ammonia – in concentrations of a few parts per trillion can stabilize sulphuric-acid particles and better reproduce the observed particle formation rates (Nature 502 359). Yet unlike sulphuric acid, which enters the atmosphere mainly through coal-fired power plants, amines are only found close to primary sources such as farms. This suggested that additional vapours must be at work.

Biogenic source

CLOUD’s latest results show that sulphuric-acid aerosols do indeed have a significant influence on the formation of clouds, but only if they are stabilized by volatile biogenic vapours emitted by trees (Science 344 717). These vapours, which give pine forests their characteristic smell, are rapidly oxidized in the atmosphere to produce low-volatility vapours that participate in the formation of new aerosol particles. When the researchers modelled the global impact of the new process, they were able to account for the observed seasonal variations in atmospheric aerosol concentrations for the first time.

According to Ian Ford, a condensed-matter theorist at University College London who is not involved in the experiment, the result represents a significant step forward in climate science. “CLOUD is one of the best experimental rigs in the world and the wide range of atmospheric conditions that it covers means that we can now better parameterize the impact of aerosols and clouds in global climate models,” he says.

The CLOUD team is now studying how nucleation rates vary under different conditions, vapour concentrations and particle-beam intensities, and the experiment will run for 10 more years. “We are confident that it will settle the link between cosmic rays and cloud formation, while also reducing uncertainties in anthropogenic aerosol-cloud changes,” says Kirkby.

Refraction redefines the pascal

Many of us are unaware of the extent to which the international system of units (SI) touches our everyday lives, and this is especially true for pressure. Highly accurate, traceable measurements of pressure and vacuum conditions are vital in many manufacturing processes, including that of semiconductor chips, where precise pressure measurements can improve control over features at the nanometre level. The routing of air traffic is another critical application: the minimum separation between aircraft as they approach busy airspace is currently about 600 m, but better pressure measurements could allow this distance to be cut in half – thereby reducing congestion and saving fuel.

Although pressure is one of the most widely measured quantities in everyday processes – and is key to vacuum science and technology – the standards that underpin it are very old. The SI unit for pressure, the pascal, is realized by mercury manometers of a design that dates back more than 300 years, and we have now reached a point where the accuracy of such standards cannot be improved further. The same is true of temperature, the most widely measured quantity of all, for which the fundamental SI unit – the kelvin – is defined by the triple point of water.

A five-year project at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, US, aims to fundamentally change the way that the pascal is realized and disseminated. Based on ultra-precise measurements of the refractive index of gases using optical interferometry, it will raise pressure standards to the level of other SI units based on fundamental constants and also offer a brand new way to redefine the kelvin.

Mercury falling

The story of barometric pressure and early vacuum measurements is the story of mercury manometers, which were invented in 1643 by Italian physicist and mathematician Evangelista Torricelli. Some of the earliest barometric measurements were made by carrying a mercury-in-glass manometer up a mountain and periodically observing the height of the mercury column along the way: the higher the altitude, the lower the column, indicating a lower barometric pressure. Named in Torricelli’s honour, the torr (equal to 133.322 Pa) is nominally equal to 1 mm of mercury-column height and the unit is still in common use today on many vacuum gauges.
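
As a minimal illustration of the physics involved – textbook hydrostatics, not NIST’s actual data reduction – the pressure supported by a mercury column is simply p = ρgh:

```python
# Minimal sketch: pressure realized by a mercury column, p = rho * g * h.
# The constants are illustrative textbook values, not NIST's calibrated ones.
RHO_HG = 13_595.1  # density of mercury at 0 degC, kg/m^3
G_N = 9.80665      # standard gravity, m/s^2

def column_pressure(height_m: float) -> float:
    """Pressure (Pa) supported by a mercury column of the given height."""
    return RHO_HG * G_N * height_m

print(f"{column_pressure(0.001):.3f} Pa")  # 1 mm of mercury ~ 1 torr (133.322 Pa)
print(f"{column_pressure(0.760):.0f} Pa")  # 760 mm ~ standard atmosphere (101325 Pa)
```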

Since Torricelli’s day, improvements in the mercury manometer have been incremental, based on resolving column heights with greater precision and accuracy. The lowest pressure uncertainties today come from a device called the ultrasonic interferometer manometer developed at NIST in 1975, which uses pulsed ultrasound to determine column heights in a 3 m-tall mercury liquid-column manometer to within 10 nm. But we have now reached the limit of such improvements. Furthermore, mercury’s status as a neurotoxin and environmental hazard has led to a ban on the purchase of mercury products. Government laboratories, national laboratories and corporations have therefore been forced to get rid of their mercury manometers, and as a result have seen their pressure-measurement capabilities downgraded. However, NIST still maintains its three mercury manometers as the national standard for the US – and will continue to do so until a high-accuracy, mercury-free alternative has been developed.

We have reached the limits of improvements based on mercury manometers

Since reference manometers are not portable (the NIST device is 3 m high and contains 250 kg of mercury), a parallel goal is to build a pressure standard that can deliver pressure measurements directly to the community. Several years ago, NIST researchers took an intermediate step towards this goal by developing a mercury-free “transfer standard package” (TSP) that enables high-quality measurements of NIST’s mercury manometers to be transferred to industry, academia and national standards laboratories around the world. This device allows reliable pressure measurements ranging from high vacuum (10⁻² Pa) to atmospheric conditions (10⁵ Pa), where improper use or incorrect calibration of gauges can cost time and money. But even though it provides a bloodline that links vacuum gauges directly to the SI system, the TSP still requires NIST to maintain and operate a 3 m-high mercury manometer. Our new project aims to replace not only the mercury manometer, but also the TSP, and do it at higher accuracy with an even more compact device.

Pressure by refractive index

In search of a better, mercury-free way to realize pressure – and one based on fundamental physics rather than a physical object – NIST has embarked on an entirely new optical pressure standard that links the pascal to quantum calculations of helium’s refractive index. Pressure and vacuum standards based on refractive index will significantly cut measurement uncertainties and improve accuracy in the aerospace, energy and advanced manufacturing sectors by between three- and 10-fold.

The refractive index of a gas depends on its density, which is a function of temperature and pressure. Quantum mechanics tells us the exact relationship between these variables, and calculations for atomic helium will allow us to relate pressure, temperature and refractive index with an accuracy significantly better than one part in 106. By measuring temperature and refractive index to high accuracy, we can determine the pressure and turn the device into a primary pressure standard based on the atomic properties of helium. Conversely, if pressure can be measured accurately using alternative means, then thermodynamic temperature can be determined by measuring the refractive index.
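
A rough sketch of that inversion, assuming the dilute-gas limit of the Lorentz–Lorenz relation and ideal-gas behaviour – the NIST standard itself rests on far more complete quantum calculations, and the molar refractivity below is only an approximate literature value:

```python
# Hedged sketch: infer helium pressure from refractive index and temperature
# using the dilute-gas limit of the Lorentz-Lorenz relation,
#   n - 1 ~ (3/2) * A_R * p / (R * T).
# A_R is an approximate literature value; the real standard uses ab initio
# quantum calculations of helium's refractivity.
R = 8.314462618          # molar gas constant, J/(mol K)
A_R_HELIUM = 0.5216e-6   # molar refractivity of helium, m^3/mol (approx.)

def pressure_from_refractive_index(n: float, temperature_k: float) -> float:
    """Pressure (Pa) of helium with refractive index n at temperature T (K)."""
    return (n - 1.0) * 2.0 * R * temperature_k / (3.0 * A_R_HELIUM)

# Helium near atmospheric pressure has n - 1 of roughly 3.2e-5 at 20 degC:
print(f"{pressure_from_refractive_index(1.000032, 293.15):.0f} Pa")  # ~1e5 Pa
```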

The key to making this approach worthwhile is to measure refractive index with much higher accuracy than has previously been achievable, which is possible using laser interferometry. When measuring a length using laser interferometry, the presence of a gas such as helium causes the distance to look a little bit longer than would be observed in vacuum. If we were able to compare two identical lengths in helium and in vacuum, the apparent difference would therefore allow us to determine the refractive index. The challenge facing NIST is that this difference must be measured with an accuracy of a few picometres – about 1% of a typical atomic diameter. Although picometre precision is relatively straightforward using a Fabry–Pérot interferometer, in which light bounces back and forth many times in an optical cavity, we face the challenge of generating identical displacements in two optical cavities (one in vacuum and one in helium) both with picometre accuracy.
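
As an order-of-magnitude check (assuming a cavity of roughly the 15 cm FLOC length quoted below, and a path-difference accuracy of 5 pm), picometre-level interferometry pins down the refractive index to parts in 10¹¹:

```latex
\delta n \;\approx\; \frac{\delta(\Delta L)}{L}
         \;\approx\; \frac{5\times10^{-12}\,\mathrm{m}}{0.15\,\mathrm{m}}
         \;\approx\; 3\times10^{-11}
```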

Conceptual picture of the VLOC apparatus

To this end, NIST is developing a variable-length optical cavity (VLOC) comprising four individual cavities: a central cavity in helium gas surrounded by three cavities in vacuum (see figure). The mirrors at each end of the apparatus are built on highly stable bases such that the displacement of one mirrored end-piece generates equal displacements in both the outer and inner interferometers. The reason for having three interferometers in vacuum is that it allows measurement and control of angular tilts as the end-piece is translated, which is necessary to avoid Abbe errors.

Building such a primary pressure standard at NIST is all well and good, but we also have to find a way to transfer its superior accuracy to locations outside the standards lab. We are therefore also developing a less complex fixed-length optical cavity (FLOC), which will offer enhanced sensitivity over the current TSP in a greatly simplified and more portable package. The transportable FLOC standard, a prototype of which is currently being built, will use nitrogen as a working medium because it has a higher refractivity than helium and is 100 times less sensitive to contaminants. Fixed-length cavities are attractive as pressure standards because they can be made from materials with small temporal instabilities, and will deliver an immediate improvement by a factor of three in pressure-measurement uncertainties over existing commercial technologies for vacuum pressures down to 1 Pa. NIST plans to employ the FLOC and VLOC as optical pressure and vacuum standards for pressures up to 360 kPa, which will allow us to replace all mercury-based pressure standards within the next 5–10 years.

Branching out

The five-year-long project is currently in its second year. The design phase has been completed, custom parts are currently being fabricated and NIST plans to have a fully working prototype by the summer of 2015. Over the next three years, we will complete the VLOC and refine the quantum-mechanical calculations of helium’s refractive index. This will also allow us to measure the refractive index of nitrogen to a level sufficient to make the FLOC a standalone pressure and vacuum standard. Finally, the optical standard will be compared with the existing NIST mercury manometers to close the final chapter on Torricelli’s mercury pressure standard.

Since the new optical pressure standard does not require the long central column that mercury manometers do, there is essentially no limit to how small the device can get. While the first generation of FLOCs will be 15 cm in length (which is already 20 times smaller than NIST’s mercury manometer), future versions may become still smaller and more compact to deliver better pressure measurements directly to the pressure and vacuum community. Even more exciting is that the unit for pressure will be quantum-based. Manometers have a direct link back to the SI through high-accuracy measurements of elemental mercury’s density that are traceable to the kilogram. However, the new standard will no longer be linked to the kilogram, which is an artefact-based standard, but based on the refractive index of helium and on fundamental constants that are the same everywhere in the universe.

The same technology can be used to determine thermodynamic temperature with uncertainties below what has been previously achieved, providing a definition of the kelvin that is independent of water’s triple point. The NIST project also applies to dimensional metrology using laser interferometry, for which accuracy can be limited by uncertainty in the refractive index of air. With the FLOC cavity being open to the environment, it provides in situ measurements of the local refractive index and therefore allows real-time wavelength corrections to be made with a projected uncertainty at the level of three parts in 109 when measuring large distances, say on a factory floor.

NIST’s new mercury-free optical pressure standard will benefit science, industry and defence applications, as well as secondary-measurement services. Indeed, the FLOC and VLOC devices will ultimately eliminate mercury from all NIST primary pressure standards and at other labs and industrial facilities. We expect the optical pressure standard to become a commercial device used across academia and industry at a fraction of the cost of a commercial pressure standard, while being cheaper to maintain and having a larger pressure range and lower uncertainty.

Vacuum challenge for the Sirius synchrotron

In the past few decades, synchrotron light sources have significantly advanced our knowledge of the structure and properties of materials. Based on electrons travelling around large storage rings, synchrotrons produce high-brilliance radiation ranging from the infrared to hard X-ray region of the spectrum. By tapping off this radiation and sending it along numerous beamlines that fan out tangentially from the storage ring, many different and unique experiments can be carried out simultaneously. In addition to advanced X-ray crystallography and imaging methods, synchrotrons allow scientists to link the atomic-scale structure of materials with their macroscopic properties and increasingly are able to characterize working devices such as solar cells in their native states.

Driving synchrotron science forward is the quest for ever-smaller beam “emittance” – a measure of the lateral spread of the electron beam. This results in a brighter and more collimated X-ray output, allowing the study of smaller features and faster processes. Around 50 synchrotrons of varying sizes and capabilities are in operation worldwide, and some of the top facilities are planning upgrades based on storage rings with very low emittance. The Advanced Photon Source (APS) in the US, the European Synchrotron Radiation Facility (ESRF) in France, and SPring-8 in Japan are all proposing ambitious upgrades, while other labs are building entirely new machines – namely MAX IV in Sweden and Sirius in Brazil.

Sirius will replace Brazil’s existing, second-generation synchrotron with one of the brightest X-ray sources on the planet. Boasting a very low emittance of 0.28 nm rad, it will also be the first third-generation light source in Latin America. Ground works for the new facility are complete and construction is scheduled to start at the end of this year, with engineering teams already deeply engaged in the design, prototyping and R&D of the machine’s subsystems. Commissioning is planned to start in mid-2016, with first beam for users in mid-2017 – a tight schedule.

Distributed environment

In all synchrotron light sources, an ultrahigh-vacuum environment is essential in order to accelerate electrons and extract the radiation that they produce. The most demanding component of such a facility is the storage ring, which must operate at pressures below 10⁻⁹ mbar to minimize beam-gas scattering and other effects that reduce the lifetime of the electron beam. Since the cross section of the vacuum chamber is very small compared with its length (the Sirius storage ring will have a circumference of 518 m and a chamber diameter of 24 mm, for instance), these machines tend to be pumped in either a discrete or a distributed manner.
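
To put that figure in context, simple ideal-gas arithmetic (an illustration, not a Sirius design number) shows that even at 10⁻⁹ mbar each cubic centimetre of the chamber still contains tens of millions of gas molecules:

```latex
n \;=\; \frac{p}{k_{B}T}
  \;\approx\; \frac{10^{-7}\,\mathrm{Pa}}{(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})(300\,\mathrm{K})}
  \;\approx\; 2.4\times10^{13}\,\mathrm{m^{-3}}
  \;=\; 2.4\times10^{7}\,\mathrm{cm^{-3}}
```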

Most modern storage rings use discrete pumping, whereby hundreds of pumps are installed at intervals of around 1.5 m or less, which requires vacuum chambers with good conductance and sufficient longitudinal space in which to place the pumps. But this approach goes completely against the design of next-generation synchrotrons, which in order to reduce beam emittance will guide electrons more gradually around the ring. Technically, Sirius is based on a “multi-bend achromat” design that involves many small-aperture magnets and leaves little space for other components. In total, the storage ring will comprise 20 “five-bend cells” and 20 straight sections.

Distributed pumping based on non-evaporable getter (NEG) technology offers a more effective option for such a compact “lattice”. NEG coatings, which were developed by the CERN particle-physics lab, are based on thin films that coat the inner surfaces of a vacuum chamber and have a chemical affinity for gas molecules. CERN has used the technology extensively in the beam pipes of the Large Hadron Collider, which requires extreme vacuum conditions. Today, NEG coatings are a proven industrialized technology used in many other settings. The first synchrotron to extensively use NEG coatings was France’s national light source Soleil, with 56% of the machine exploiting the technology. Sirius and MAX IV will be the first synchrotron light sources to base vacuum pumping of the storage ring mainly on NEG coatings, with more than 95% of the vacuum chambers being coated.

The Sirius vacuum chamber

The Sirius challenge

The highly compact lattice of the Sirius storage ring leaves very little space for components and therefore requires narrow vacuum chambers, which are harder to pump. Another challenge is that the vacuum chambers and components must provide a continuous electrical path to minimize the impedance of the machine, which can affect beam stability by introducing electromagnetic wake fields. The most common materials used in third-generation light sources are stainless steel and aluminium, but we have chosen to build the Sirius vacuum chamber from oxygen-free, silver-bearing (OFS) copper. The higher electrical conductivity of this material minimizes the machine’s impedance, while its improved thermal conductivity makes it better at absorbing unused synchrotron radiation. OFS copper also has a higher annealing temperature, which is convenient for NEG coatings because vacuum chambers need to be heated to a temperature of at least 200 °C to activate the coating. Most of the Sirius storage-ring vacuum chambers will have a circular cross section with an inner diameter of 24 mm and a wall thickness of 1 mm.

Since the chambers also have to absorb unused synchrotron radiation, narrow copper cooling pipes will be attached to their outer side. But the machine’s compact lattice means that other sections of the storage ring – especially where the X-rays are siphoned off and sent down various beamlines towards experimental targets – demand more complex chambers. This makes the NEG coating extremely challenging. Sirius will have around 450 chambers in total, and even the circular cross-section chambers inside the multipole and bending magnets (where there is no need to extract radiation for the beamlines) do not have a simple manufacturing process.

Three joining processes will be needed to manufacture each chamber while ensuring that the copper does not become annealed or distorted. Vacuum brazing will be used to join short copper adapters to the stainless-steel vacuum flanges, while tungsten inert gas (TIG) welding will be used to weld these components to the copper vacuum chambers. A robotized TIG welding station gives us precise control and is suitable for complex geometries, but neither TIG welding nor vacuum brazing is suitable for attaching the copper cooling pipes to the vacuum chambers, because of the potential for distortion or annealing, respectively. Here, vacuum soldering at a temperature of about 330 °C is being developed to produce joints for thermal contact.

NEG coatings

Since all the Sirius vacuum chambers will have small vacuum conductance, on account of their narrow cross section, NEG coatings are paramount. The host lab of Sirius, the Brazilian Light Source Laboratory (LNLS), has signed a licence agreement with CERN to use the coating technology and also has its own NEG coating facility to produce the Sirius vacuum chambers. Built last year by the LNLS engineering teams at a cost of around R$300,000 (about £80,000), this is one of the few places in the world where NEG coatings can be produced according to standards determined by CERN – offering the ability to coat vacuum chambers up to 3.2 m long and 450 mm in diameter (see image at top of article).

To ensure that the NEG coatings have good adhesion and pumping properties, the surfaces of the vacuum chambers must be completely free of contaminants. We therefore developed a special cleaning process based on a recirculation system where only the inner surfaces of the vacuum chambers are exposed to the etching solutions, which also reduces workers’ exposure to harmful substances. Although NEG coatings for the simple circular vacuum chambers are currently in their final design stages, the coating procedure for the complex vacuum chambers is still under development.

Following the NEG coating process, the vacuum chambers will be filled with nitrogen and stored in batches according to their assembly in the storage ring. Since the coatings must be activated by heating them in situ (in a procedure called a bake-out), Sirius requires lots of bellows to accommodate the chamber’s expansion and this can lead to a higher machine impedance. Also, the heating tapes wrapped around the chambers to heat them up must be very thin for those chambers inside the multipole and dipole magnets. For this reason, we have developed a customizable thin polyimide heating tape in conjunction with Brazilian company EXA-M Instrumentação do Nordeste, which is one of the first successful examples of LNLS’s partnership with Brazilian companies to build Sirius.

Global ambition

Further Brazilian firms are being encouraged to take part in Sirius, and collaborations between LNLS and other laboratories are in progress. One involves CERN, where there is a mutual interest between our vacuum teams in studying the behaviour of surfaces exposed to synchrotron radiation. These studies will allow a better understanding of the Sirius vacuum chambers as well as an opportunity for CERN to better understand the proposed new surface technologies for the LHC chambers.

Sirius will soon be one of the world’s brightest synchrotrons, opening new frontiers for research across materials science and also serving as a stepping stone to a diffraction-limited storage ring – a so-called ultimate synchrotron. The design is pushing synchrotron technology to the limit, especially concerning the different and unique concepts proposed for the vacuum system. All of the vacuum chambers and components must be designed and manufactured according to tight requirements, not just in terms of vacuum-system specifications but also to maximize the scientific performance of Sirius.

There are still many challenges to overcome with the Sirius vacuum system, such as the manufacture and NEG-coating of the complex-shape chambers. But once operational, Sirius will give researchers from Brazil and the rest of Latin America the opportunity to develop cutting-edge science in many fields and put nations from this region on the scientific map.

The wonderful world of ultrasound

Ultrasound refers to a type of vibrational wave that has frequencies above those detectable by the human ear. The most familiar application of ultrasound is perhaps medical imaging, where these high-frequency sound waves are used to scan unborn babies. But Bruce Drinkwater and the researchers in his lab are more interested in developing new applications of ultrasound beyond its traditional uses.

One example is using ultrasound to levitate small objects. In the podcast, Drinkwater explains the physics of this eye-catching phenomenon, in which objects up to a centimetre in size can be trapped in the nodes of an ultrasonic standing wave. You can see the procedure in action in the video below, as University of Bristol PhD student Philip Bassindale delicately creates a “pearl necklace” of levitating polystyrene balls.
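
The spacing of the traps follows directly from the standing-wave geometry: adjacent nodes sit half a wavelength apart. A quick sketch, assuming a typical 40 kHz levitator in room-temperature air (the podcast does not state the operating frequency):

```python
# Sketch: node spacing of an ultrasonic standing wave in air.
# Nodes sit half a wavelength apart; 40 kHz is a typical levitator
# frequency, assumed here rather than taken from the podcast.
SPEED_OF_SOUND_AIR = 343.0  # m/s at about 20 degC

def node_spacing(frequency_hz: float) -> float:
    """Distance (m) between adjacent nodes of a standing wave in air."""
    wavelength = SPEED_OF_SOUND_AIR / frequency_hz
    return wavelength / 2.0

print(f"{node_spacing(40_000) * 1000:.1f} mm")  # ~4.3 mm between traps
```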

Drinkwater explains that his research group has scaled this levitation experiment from one dimension to three dimensions to create a cubic lattice. This system can then be tuned so that it acts as a sound filter. Another exciting possibility is to use the principles of this ultrasonic array to create a “hyper-lens” – a lens that has a resolution beyond the standard diffraction limit. Such a device, explains Drinkwater, could be particularly useful for near-surface medical applications such as detecting and mapping skin cancer.

Later in the podcast, Drinkwater recounts a very different application of his lab’s ultrasound equipment. He and his colleagues were asked if they could use ultrasound to probe the internal structure of the Clifton Suspension Bridge – the iconic Bristol landmark designed by Isambard Kingdom Brunel. The bridge-maintenance team was concerned that cracks might be developing in one of the bridge’s principal supporting structures. Listen to the podcast to find out what they discovered.

When light listens to your every step

Hidden underground beneath international borders and military zones, a new form of stealth technology is taking root. In a technique known as distributed acoustic sensing, the same type of fibre-optic cables that we use for telephony and the Internet are being used as highly sensitive arrays of buried acoustic sensors. Being developed by a number of different firms, these systems can be used to remotely listen out for illegal immigrants crossing international borders, trespassers walking towards a military base, people tampering with oil or gas pipelines, or criminals damaging railway lines.

In the technique, sounds are “heard” by sending pulses of coherent laser light down an optical fibre and analysing the very small signal that is both scattered and reflected back along the fibre. Scattering and reflection occur at every point along the length of the fibre, so the original laser pulse comes back as thousands of very weak overlapping laser pulses. If there are no sounds to detect, the returning signal shows nothing special. But in the presence of a sound it has characteristic features that can be identified as footsteps, for example, or even a drone flying overhead.

The light in the fibre is scattered in all directions – including back along the direction from which it came – because it hits tiny imperfections such as density fluctuations. Known as Rayleigh scattering, this phenomenon is most familiar to us when sunlight passes through the atmosphere and is responsible for the sky appearing blue. “The atmosphere has random fluctuations of density and refractive index from place to place on a small scale due to small particles and molecules in the atmosphere,” says David R Selviah of University College London, who has been involved in commercializing distributed acoustic sensing (DAS). “These random fluctuations cause light to be scattered in all directions.”

But light is also sent back where one piece of cylindrical fibre ends and another begins. Where sound waves come into the picture is that when they pass through the fibre, they cause it to flex ever so slightly, which changes the fibre’s local optical properties. The signal that is scattered and reflected back along the cable is then also altered and it is this pulse modulation that companies are exploiting to develop highly sensitive systems based on DAS. The exterior sound modulates the reflected laser pulses, and once the laser pulses are received, the sound can be extracted from them. As Selviah puts it, such systems essentially work as a distributed underground array of microphones.

The source of a sound can be pinpointed spatially to within tens of metres  – which is a good accuracy considering that the optical fibres can be as long as 50 km

Using DAS, the source of a sound can be pinpointed spatially to within tens of metres – which is a good accuracy considering that the optical fibres can be as long as 50 km. “We can tell which point in the fibre reflected which pulse by noting the time that each pulse returns,” explains Selviah, who has in the past worked on the signal processing and pattern-recognition analysis of data from DAS systems for Silixa – a UK-based fibre-optics firm founded in 2007 that operates in the energy, security and industrial sectors. Uses of DAS in these areas include detecting earthquakes and Earth tremors, and hearing the creaking sounds of buildings, bridges or dams to see if they are deteriorating.
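
A minimal sketch of the time-of-flight bookkeeping Selviah describes – the factor of two accounts for the round trip, and the group index is a typical value for silica telecom fibre rather than a figure from any DAS vendor:

```python
# Sketch: locate a scattering point along a fibre from the round-trip time
# of its backscattered echo. The group index is a typical value for silica
# telecom fibre, assumed here rather than taken from a DAS system.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_FIBRE = 1.468           # typical group index of silica fibre (assumed)

def scatter_position(round_trip_time_s: float) -> float:
    """Distance (m) along the fibre to the point that produced an echo."""
    return (C_VACUUM / N_FIBRE) * round_trip_time_s / 2.0

# An echo arriving 0.5 ms after the launch pulse came from about 51 km
# away -- the far end of the longest fibres mentioned in the article:
print(f"{scatter_position(0.5e-3) / 1000:.1f} km")
```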

Sound interpretation

1 Suspicious sounds

Diagram of using sound to detect criminal activity

Using distributed acoustic sensing, sensitive areas such as borders and railway lines can be monitored for warning signs of criminal activity. Shown here, left to right, are the tell-tale signs of gunfire, digging, a vehicle, a person walking and a microlight (a small aeroplane).

Another firm leading the development of DAS systems is OptaSense – a subsidiary of the UK technology group QinetiQ. OptaSense this year won Queen’s Awards for both innovation and international trade, and if you visit its headquarters in Farnborough, south-west of London, you’ll see computer screens showing complicated zigzagging lines, similar to those a seismograph would show when picking up Earth tremors. These lines represent ground vibrations, and a car passing above the buried fibre-optic cables generates more lines than footsteps do.

These lines are essentially the raw exterior acoustic data, and the control centre’s job is to use high-speed signal-processing algorithms to extract the information relevant to the user’s problem in real time, says David Hill, OptaSense’s chief technology officer. The data are then relayed for integration with other sensors, or sent to users via their mobile devices. Besides detecting footsteps to within roughly 10 m either side of a fibre-optic cable, DAS can even spot a low-flying drone overhead. The sounds can be displayed as shown in figure 1, but they can also be played out loud and made to sound like the original noise, using a bit of signal processing and programming.

Listening for leaks

Oil or gas pipelines

One sound that many companies are listening out for using DAS is oil or gas flowing along a pipe or in a well – in particular to check for leaks. In this case, rather than the optical fibres being buried underground, they are inserted into a well bore. Both Hill and Selviah think this is an especially exciting application area for DAS.

That view is echoed by Kari Anne Kjolaas-Holland, business development manager for microseismic services at Schlumberger, which supplies technology services to the oil and gas industry. The company uses DAS to sense an entire well at one time for vibration, temperature, strain and so on. “Without moving the down-hole cable, a snapshot of the properties in the well may be collected instantly, like a photograph,” Kjolaas-Holland explains. “We see both the big picture and fine details in each snapshot, and by combining successive pictures, it becomes very much like watching a movie.”

Before DAS came on the scene in the late 2000s, oil and gas firms had to place arrays of microphones into wells and pipes. Time on the oil or gas rig had to be set aside for this specific activity, and multiple sparse measurements had to be taken by moving the microphones along the length of the well bore. For this purpose, DAS is a game changer. DAS systems can be permanently installed so that conditions in the well are monitored all the time, with the sound being sensed almost continuously along the entire length of the well.

Ships, sharks and submarines

Although DAS is barely able to pick up a conversation, Selviah thinks that might one day change. It is already being buried in roads to listen to cars passing to determine exactly where they are, and – as it develops further – it could also work in buildings. With sufficient future developments, these optical fibres could even, Selviah adds, be inserted into blood vessels to listen to the flow of blood and the heart during operations.

Marine physicist Philippe Blondel of the University of Bath is one researcher who thinks the technology could also be used to make cheap hydrophones – underwater microphones developed for recording or listening to underwater sounds. Because water is a (slightly) compressible medium, a pressure change created at one point – by a sound, for example – propagates outwards and can be picked up by DAS systems.

Current hydrophones pick up a full range of frequencies, and the signal they produce then needs to be processed to extract the frequencies of interest. A DAS sensor, by contrast, might be designed so that only very specific frequency bands are detected. It could then be used to listen for ships, submarines or sonar instruments on the seabed that “ping” a particular signal – for military, navigation or commercial monitoring purposes.

As an advanced application, these underwater DAS microphones could also be used to identify and monitor fish, sharks or other animals from the sounds they generate under water or on the seabed. If such sensors had been deployed under water in the right area, then – who knows – perhaps they could even have been used to look for the Malaysian aeroplane that mysteriously went missing in March this year.

China pursues 52 km collider project

Particle physicists in China have unveiled plans to build a huge 52 km particle collider that would smash electrons and positrons together to study the Higgs boson in unprecedented detail. The so-called “Higgs factory”, if given government approval, would be built by 2028 and put the country at the forefront of international particle physics.

Researchers are currently preparing a proposal to the government to carry out a full R&D study into the machine, which they envisage having an energy of 250 GeV. However, Yifang Wang, director of the Institute of High Energy Physics (IHEP) in Beijing, warns that the project is still in its infancy. “We are still at a very early stage of the discussion and we have a long way to go to get government support,” says Wang.

Vigorous pursuit

Even though the collider is a long way off, Brian Foster from Hamburg and Oxford universities, who is European regional director for the planned International Linear Collider (ILC), which is also designed to study the properties of the Higgs boson, says that Chinese researchers are nevertheless “pursuing it rather vigorously”. Yet he thinks that China would find it hard to build a machine on its own given that the country’s biggest collider – the Beijing Electron Positron Collider at IHEP – is just 240 m in circumference. “They maintain that this will be a Chinese project, although they also admit they don’t have the people to build it themselves, so assistance from the international community would be required,” says Foster.

That view is shared by theorist John Ellis at King’s College London and CERN, who says that China is now beginning to “reach out” to international partners. “They don’t have the expertise at the moment to build something of that size with the technology required,” says Ellis, who adds that China would be much further ahead now if they had been more involved in building CERN’s Large Hadron Collider (LHC).

Encouraging statements

Meanwhile, members of the International Committee on Future Accelerators (ICFA), which met in Valencia in July, reaffirmed in a closing statement their support for the ILC as well as a circular collider like that which China is proposing. “The ICFA continues to encourage international studies of circular colliders, with an ultimate goal of proton–proton collisions at energies much higher than those of the LHC,” the statement said.

Foster points out, however, that the proposed collider will “not interfere” with the ILC proposal, which Japan is currently considering hosting and has even begun discussing at ministerial level with governments in the US and Europe. “The physics output [of a 250 GeV circular electron–positron collider] compared with the ILC is very limited and of course it will need several years before it is in a state that it can be believably costed and proposed to proceed to a proper technical design,” he says.

Cracking the code for single-molecule junctions

Measuring the conductance of single-molecule junctions, which feature in a range of photovoltaic and energy-harvesting devices, is essential for gaining better insight into the energy- and charge-transfer processes of the molecule and, in turn, for optimizing the devices themselves. But accurately calculating the conductance is no mean feat. Now, by applying a correction to a standard modelling method used for such systems, an international group of researchers has obtained qualitatively and quantitatively correct results for a compound known as porphyrin for the first time, thereby enabling predictive modelling of complex molecular junctions.

Porphyrins are cyclic organic compounds that are commonly used to form a “single-molecule junction”, in which a single organic molecule is connected to macroscopic metallic electrodes. Standard methods predict erroneous electron orbitals for transition metals in porphyrin molecules, as well as overestimating the conductance by an order of magnitude. In the new work, the team tackled both inaccuracies using a modified “hybrid” formulation of density functional theory (DFT) – a computational quantum-mechanical modelling method that describes the electronic structure of many-body systems.

“The hybrid functional is the only pragmatically feasible method we can employ to correct both errors at the same time,” says Zhenfei Liu, a postdoctoral researcher at the Molecular Foundry and Materials Sciences Division at the Lawrence Berkeley National Laboratory. “Other approaches need two stages to correct the qualitative and quantitative inaccuracies.”

Metal centres

Porphyrins usually have a transition metal at the centre, which as Liu stresses has an important impact on the molecule’s properties. “For catalysis, for example, the metal centre is key to make the porphyrin work as it’s supposed to,” he says. Calculating the properties of transition metals has some known nuances. To deal with these, chemists developed a mathematical formulation known as the “exact exchange”, which was incorporated into DFT over a decade ago, and is responsible for more accurate calculations of the electronic bandgap, charge transfer and other properties of these systems.

“The functional was already known but had not been applied to molecular junctions,” says Liu. “But we knew you need the exact exchange to make DFT work for transition metals, so we used a combination of standard functional and an exact exchange. This additional component – the exact exchange – gives the qualitatively correct calculations.”

Liu, working with Jeff Neaton at the Molecular Foundry at Lawrence Berkeley Lab and the University of California, Berkeley, along with Latha Venkataraman, Luis M Campos and colleagues at Columbia University in New York and Yonsei University in Korea, incorporated a previously developed modification known as “DFT+Σ”, which corrects inaccuracies that arise due to underestimating the alignment of energy levels in the junction.
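
How level alignment feeds into conductance can be seen in a toy Landauer picture: a single molecular level at energy ε (relative to the Fermi level), broadened by its coupling Γ to the leads, gives a Lorentzian transmission, and a correction that pushes ε further from the Fermi level lowers the predicted conductance. This is a generic single-level sketch with illustrative numbers, not the group’s actual DFT+Σ implementation:

```python
# Toy single-level Landauer model of a molecular junction. A level at
# energy eps (relative to the Fermi level, E_F = 0) broadened by Gamma
# gives a Lorentzian transmission; conductance is G = G0 * T(E_F).
# The level shift mimicking a DFT+Sigma-style correction is illustrative;
# none of these numbers come from the porphyrin study itself.
G0 = 7.748e-5  # conductance quantum 2e^2/h, siemens

def transmission(energy_ev: float, eps_ev: float, gamma_ev: float) -> float:
    """Lorentzian transmission of a single level coupled to two leads."""
    return gamma_ev**2 / ((energy_ev - eps_ev)**2 + gamma_ev**2)

gamma = 0.05  # lead coupling, eV (illustrative)
for eps, label in [(0.5, "uncorrected level"), (1.5, "level shifted by 1 eV")]:
    g = G0 * transmission(0.0, eps, gamma)
    print(f"{label}: G = {g:.2e} S ({g / G0:.1e} G0)")
```

In this toy model, shifting the level by 1 eV cuts the zero-bias conductance by roughly an order of magnitude – the same size as the overestimate quoted above for standard DFT.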

Experiment versus theory

The researchers synthesized different types of porphyrin with cobalt, copper, nickel or no metal at the centre. Solutions of the porphyrins were deposited on a gold-on-mica substrate and a gold scanning tunnelling microscopy tip was dipped into the solution and pulled out to create a break junction. Comparisons of measured conductances with calculated values favoured the accuracy of the researchers’ newly modified formulation over standard DFT.

Liu explains that the team’s novel hybrid approach can be applied to other systems as well. “For example, in organic metallic interfaces the junction has two interfaces on each side – probably a more common scenario in nanoscience.”

He also describes previously published work on a study of the conductance of similar junctions but with graphite as one of the electrodes instead of gold. Breaking the symmetry of the system in this way gives rise to rectification in the junction – when the bias is reversed, the conductance value changes, which can be useful for energy-harvesting devices.

The work is published in Nano Letters.

What can you learn from Descartes?

By James Dacey in Córdoba, Argentina

What’s the best way to teach tricky physics concepts to students? Naturally, this was one of the questions underpinning many of the talks here at the International Conference on Physics Education (ICPE) in Córdoba. According to a couple of educationalists in Latin America at least, it seems that one approach is to enlist the help of some of the great scientists and philosophers of the past.

Patricia del V. Repossi, a lecturer at the Pontificia Universidad Católica Argentina in Buenos Aires, spoke about how she uses the history of science as a framework for teaching optics. Repossi explained how she had come to realize that some of the students taking her conventional optics course believed that photons are made of the same stuff as “tennis balls”. So, she and her colleagues set about transforming the way they teach the topic – by combining a physics class with a history lesson.
