
Black holes act as galactic thermostats

The supermassive black hole at the centre of a massive galaxy or galaxy cluster acts as a furnace, pumping heat into its surroundings. But astronomers have struggled to understand how a steady temperature is maintained throughout the whole galaxy when the black hole only appears to interact with nearby gas. Now, researchers in Canada and Australia believe the answer could be a feedback loop in which gravity causes gas to accumulate around the black hole until its density reaches a tipping point. Then, the gas rushes into the black hole, temporarily turning up the heat.

Galaxies emit X-rays and this ongoing loss of energy should cool their gas so that it coalesces into stars. However, astronomers only see a fraction of the expected star formation in massive elliptical galaxies and galaxy clusters, which means that something must be heating the gas. The only major heat source is the supermassive black hole at the centre of the galaxy or cluster – also known as the active galactic nucleus (AGN). But an AGN does not interact directly with most of the gas in a galaxy, which can lie as far as 330,000 light-years away. So how does the AGN maintain the temperature of the whole galaxy?

Pressure drop

Edward Pope and Trevor Mendel, both of the University of Victoria in British Columbia, together with Stanislav Shabala of the University of Tasmania in Australia think they know how this feedback occurs. They argue that as the gas in the centre of the massive galaxy or galaxy cluster cools by emitting X-rays, it loses pressure, thereby allowing more gas from further out in the cluster to flow inwards. Eventually, the gas becomes so dense that it cannot support its own weight and it collapses suddenly, rushing in towards the black hole. The black hole swallows some of the gas and uses this energy to hurl the remaining gas outwards. The researchers believe that this outburst could be so energetic that some gas could even be ejected from an elliptical galaxy – but it is not energetic enough to evict gas from a cluster of galaxies.

The outburst would contain particles travelling at near the speed of light and would extend beyond the furthest reaches of even a massive galaxy. “Even though it is fuelled only by the central gas, the black hole can actually heat all of the gas in the galaxy,” says Pope. Such outbursts from an AGN can continue for 10 to 100 million years according to the researchers’ calculations, which they say match observations of giant bubbles of gas blown by AGN jets over similar timescales. Once the AGN settles down, the gas begins to cool once more, flowing toward the centre of the galaxy or cluster again.

The average rate at which the gas builds up is the key connection between the AGN outbursts and the temperature of the galaxy at large, Pope explains. It depends on the difference between the cooling rate of the whole galaxy and the average heating rate from the AGN. Gas accumulates more quickly when cooling dominates, and more slowly when heating is stronger. “Consequently, you can see that this is a self-regulating loop – just like a thermostat,” says Pope.
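The self-regulating loop can be sketched as a toy simulation. The sketch below uses arbitrary units and made-up thresholds (it is not the researchers' calculation): gas accumulates at the net cooling rate, and once it passes a critical mass it triggers a fixed-length outburst during which heating dominates.

```python
# Toy model of the proposed AGN "thermostat" (illustrative units only,
# not values from the paper).
def run_thermostat(steps=1000, cooling_rate=1.0, outburst_heating=5.0,
                   critical_mass=100.0, outburst_duration=50):
    gas = 0.0          # gas accumulated around the black hole
    outburst_left = 0  # time steps remaining in the current outburst
    history = []
    for _ in range(steps):
        heating = outburst_heating if outburst_left > 0 else 0.0
        # net inflow: cooling drives accumulation, AGN heating suppresses it
        gas += max(cooling_rate - heating, 0.0)
        if gas >= critical_mass and outburst_left == 0:
            # gas collapses onto the black hole: the AGN switches "on"
            outburst_left = outburst_duration
            gas = 0.0
        if outburst_left > 0:
            outburst_left -= 1
        history.append(outburst_left > 0)
    return history

history = run_thermostat()
on_fraction = sum(history) / len(history)
print(f"AGN duty cycle: {on_fraction:.2f}")
```

In this toy model, raising the cooling rate raises the fraction of time the AGN spends “on” – the same qualitative behaviour as the thermostat the researchers describe.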

Promising explanation

Andrew Benson of the California Institute of Technology in Pasadena says that the inclusion of periodic AGN outbursts in this explanation of how galaxies and clusters regulate their temperatures is promising “since we observe that AGN are ‘on’ for only a short time, followed by long periods of being ‘off’ ”. The amount of “on” time for an AGN depends on the amount of cooling it has to counteract, and the researchers say that observations bear this idea out: clusters that are brighter in X-rays are more likely to contain a jet-producing AGN than dimmer clusters.

David Rafferty of Leiden Observatory in the Netherlands says the idea is “quite appealing and could well be correct”. However, he cautions that its “importance can only be judged after its predictions have been carefully tested”.

Benson is not entirely convinced that the inflow of gas to the black hole is truly periodic – for example, he says it is possible that gas could flow inwards along one direction while flowing outwards in another. However, he agrees that the researchers’ predictions, such as how the “on” time of the AGN scales with the mass of the black hole, make the theory testable “which is always the most important thing”.

The research will be described in an upcoming issue of the Monthly Notices of the Royal Astronomical Society and a preprint is available at arXiv:1108.4413.

Physicists in tune with neurons

Have you ever wondered why certain sets of musical notes sound perfectly melodious while others make you want to cover your ears? Now, physicists in Europe have developed a model that suggests that certain notes sound harmonious because of the consistent rhythmic firing of neurons in the auditory system. The researchers say that they have now quantified this effect by showing that the neural signals are regularly spaced for frequencies that are pleasant sounding, but are erratic for those that are not. They say that their model may also provide insights into other senses, such as vision, that employ similar neural processing systems.

Hitting the right notes

How humans and animals perceive sound has long fascinated scientists because of the brain’s amazing ability to process auditory signals. “The auditory sensory system is the most investigated apparatus. But even simple sound signals – pairs of pure tones such as musical chords – are able to cause phenomena that cannot be simply explained,” says Bernardo Spagnolo of the University of Palermo, Italy, one of the authors of the study published in Physical Review Letters this month.

As examples, Spagnolo cites two things that musicians take for granted: “pitch perception” and the “perception of consonance and dissonance”. Simply put, pitch perception is the ability to distinguish clear frequencies of sound, while consonance–dissonance perception is the ability to tell harmonious chords from inharmonious ones.

In their current research, the researchers focused on consonance and dissonance perception with the aim of identifying the location and quantity of the signals associated with harmony and disharmony in the brain.

On board the “spike trains”

Hearing involves the conversion of sounds into neural “spike trains”. The researchers’ model comprises three neuron-like elements. Two of these represent sensory neurons and are driven by noise at two different frequencies. In the presence of this “noisy” environment, the outputs of the two sensory neurons feed, through synaptic connections, into the third element, which represents an “interneuron” – an internal neuron that connects sensory neurons to other neurons in the brain. In reality there are many more than two input neurons, as the human ear operates at frequencies between 20 Hz and 20 kHz and can detect sounds over a range of 120 decibels.

The output spike train of the interneuron is the main focus of the study, which found that if an acoustic signal is transformed by the auditory system into spike trains with a regular distribution of inter-spike intervals, then the signal is perceived as harmonious. But when the inter-spike intervals are irregular, the signal is perceived as inharmonious. At the output of the interneuron, inharmonious input signals give rise to blurry spike trains, while the harmonious signals produce more regular, less “noisy” spike trains.

Senses and sensibility

The research team quantified the regularity of the interneuron output in terms of the entropy of the signal. “This regularity is linearly connected with informational entropy: harmonious chords give rise to high spike-train regularity and so low entropy; inharmonious chords give rise to low spike-train regularity and so high entropy.”
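The link between spike-train regularity and entropy is easy to demonstrate. The following is not the researchers' neuron model – just a minimal illustration: it histograms the inter-spike intervals of two synthetic trains and computes the Shannon entropy, which comes out low for a regular (“harmonious”) train and high for an erratic (“inharmonious”) one.

```python
import math
import random

def interval_entropy(spike_times, bin_width=1.0):
    """Shannon entropy (in bits) of the inter-spike-interval histogram."""
    intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
    counts = {}
    for iv in intervals:
        b = int(iv // bin_width)
        counts[b] = counts.get(b, 0) + 1
    total = len(intervals)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

random.seed(1)
# "harmonious" case: near-perfectly regular spikes, one every 10 ms
regular = [10.0 * i + random.gauss(0, 0.1) for i in range(200)]
# "inharmonious" case: erratic, Poisson-like spike times over the same span
irregular = sorted(random.uniform(0, 2000) for _ in range(200))

print(interval_entropy(regular), interval_entropy(irregular))
```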

Spagnolo points out that the model can investigate the role of external and internal “environmental noise” in the nervous system, with respect to the sensory phenomena of “recognition” and “permanence of information” contained in complex input signals in the brain. “Investigating this process can help to understand which types of input signals are able to survive in the noisy environment of the brain, reveal the mechanism of this process, and explain what it means from a perceptional and cognitive point of view,” claims Spagnolo. He also says that studying and understanding the auditory system provides a basis for other less-studied sensory systems “that exhibit the analogous principles of conversion of environment stimuli into the neural spike trains”.

Using the Sun as a cosmic detector, part II


By Hamish Johnston

Edwin Cartlidge has just written a nice article for us about how the Sun could be used to test alternative theories of gravity. The idea is that gravitational quirks would manifest themselves as deviations in the expected properties of the Sun such as its acoustic modes and neutrino output.

The Sun offers an ideal laboratory for studying gravity because it is extremely massive and so close that we can detect tiny fluctuations in its behaviour. Now, physicists in the US think that the Sun could also be used to detect primordial black holes, which are smallish black holes that may have been formed in the early universe. Such black holes could vary greatly in mass – from that of a small asteroid to several Earth masses – and are expected to endure for at least as long as the age of the universe.
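For a sense of just how small such objects are (a back-of-envelope aside, not a figure from the post), the event-horizon size follows from the Schwarzschild radius r_s = 2GM/c²:

```python
# Schwarzschild radii for the quoted primordial-black-hole mass range.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius_m(mass_kg):
    """Event-horizon radius r_s = 2GM/c^2, in metres."""
    return 2 * G * mass_kg / C**2

print(schwarzschild_radius_m(1e18))      # asteroid-mass black hole
print(schwarzschild_radius_m(5.97e24))   # Earth-mass black hole
```

An asteroid-mass primordial black hole would have a horizon of around a nanometre; even an Earth-mass one would fit in your hand.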

Physicists have yet to detect primordial black holes but Michael Kesden of New York University and Shravan Hanasoge of Princeton University believe that they could be spotted when they travel through the Sun.

The pair calculate that a primordial black hole with a mass of about 10¹⁸ kg passing through the Sun would induce transient seismic oscillations that could be detected by solar observatories. A simulation of such oscillations is shown above (image courtesy of the American Physical Society).

However, other astronomical measurements have put limits on how likely it is for a primordial black hole to collide with the Sun – and the suggestion is that it’s extremely unlikely.

But there is good news: in the race to find more planets orbiting stars other than the Sun, astronomers have built telescopes that are very good at asteroseismology. Kesden and Hanasoge believe that these could be used to survey the heavens for signs of primordial black holes.

The physicists describe their work in Phys. Rev. Lett. 107 111101 and you can read the paper here.

Sun puts relativity to the test

Alternatives to Einstein’s general theory of relativity can be investigated by studying the Sun. That is the claim of a group of physicists in Portugal who have found that a variation of a theory put forward nearly a century ago by Arthur Eddington is constrained but not ruled out by observations of solar neutrinos and solar acoustic waves.

General relativity, which describes gravity as the curvature of space–time by massive objects, has so far passed every experimental and observational test dreamed up by physicists. But the theory does present a number of problems. In addition to the difficulty of unifying it with quantum mechanics and the challenge to explain the nature of dark matter and dark energy, there remains the conceptual problem of singularities, where the laws of physics break down.

Since Einstein introduced general relativity in 1916, many alternatives have been proposed. Last year Máximo Bañados of the Pontifical Catholic University in Chile and Pedro Ferreira of Oxford University reported a variant of a theory originally put forward by the British astrophysicist Arthur Eddington that adds a repulsive gravitational term to general relativity. This has the virtue of not requiring singularities, and as a result does not predict that the universe originated from a Big Bang, nor does it imply the formation of black holes.

Looking inside a star

When considering a gravitational field within a vacuum, this Eddington-inspired theory is equivalent to general relativity but predicts different effects for gravity acting within matter. The ideal place to look for such differences would be inside neutron stars – but the interiors of neutron stars are not understood sufficiently to compare the theories.

The answer, say Jordi Casanellas and colleagues at the Technical University of Lisbon, is to use the Sun. While a much less extreme source of gravity than a neutron star, the inner workings of the Sun are described accurately by solar models. Casanellas’s group has calculated that even in its non-relativistic Newtonian form, the Eddington-inspired theory should predict measurable differences in solar output compared with standard gravitational theory.

The Lisbon researchers have shown that the presence of the repulsive gravity term in the theory of Bañados and Ferreira is similar to setting a different value for the gravitational constant inside matter. And with the strength of gravity higher or lower than it would otherwise be inside the Sun, the inner solar temperature is also modified because the Sun is assumed to be in hydrostatic equilibrium. This means that the inward pull of gravity is balanced by the outward thermal pressure generated by the fusion reactions within the Sun. A higher temperature implies a greater rate of fusion burning, which in turn implies higher emission rates of solar neutrinos.
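The amplification at work here can be seen with a crude scaling argument (a back-of-envelope sketch, not the group's calculation): hydrostatic equilibrium makes the core temperature scale roughly linearly with the effective gravitational constant, and in standard solar models the boron-8 neutrino flux is famously hyper-sensitive to core temperature, scaling roughly as its 24th power.

```python
# Back-of-envelope amplification of a change in the effective gravitational
# constant into a change in boron-8 neutrino flux. The ~24 temperature
# exponent is an often-quoted sensitivity from standard solar models; the
# linear T_c ∝ G_eff scaling is a crude dimensional estimate.
def b8_flux_ratio(g_eff_over_g, temperature_exponent=24):
    t_ratio = g_eff_over_g            # T_c ∝ G_eff (hydrostatic equilibrium)
    return t_ratio ** temperature_exponent

for g in (0.92, 1.00, 1.26):
    print(f"G_eff = {g:.2f} G  ->  B-8 flux ratio ≈ {b8_flux_ratio(g):.2f}")
```

Even a few-per-cent change in the effective gravitational constant translates into an order-of-magnitude change in neutrino flux, which is why solar neutrinos constrain the theory at all.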

Altering acoustic waves

Similarly, a different strength of gravity inside the Sun implies a variation in its density distribution, which should modify the propagation of acoustic waves measured using the techniques of helioseismology.

Casanellas and co-workers have shown that neutrino-telescope observations of the solar neutrino flux from the branch of the proton–proton chain that produces boron-8 significantly constrain the correction to general relativity, and they calculate an upper limit of 1.26 G for the effective gravitational constant. Combined with a lower limit of 0.92 G obtained from helioseismic data, the researchers are able to put a significant constraint on the Eddington-inspired theory. However, they point out that their calculations do not rule out such a theory.

The researchers say that improving on these upper and lower limits will be difficult because of uncertainties in a few of the parameters within solar models, such as the abundance of helium on the solar surface. As such, more sensitive measurements of neutrino fluxes are unlikely to have much of an impact. But they believe their approach could be used to constrain other alternative theories of gravity.

Further testing on Earth

Ultimately, adds team member Paolo Pani, such theories could be tested experimentally by measuring, for example, the gravitational attraction between a metal ball inserted into a hole in the ground and the mass of the Earth surrounding it. The idea would be to make the hole just big enough for the ball to fit and no more, so that what is measured is the strength of gravity through matter and not the surrounding void (in this case air). However, Pani points out that doing so would be a considerable experimental challenge.

Clifford Will of Washington University in St Louis, US, described the latest work as a “nice example of using the Sun as a laboratory for probing fundamental physics” but added that “it’s not yet clear whether the bounds proposed by this paper present serious threats to alternative gravity theories”.

The research is reported at arXiv:1109.0249.

Particle physicist teams up with violin virtuosos

Jack and Brian (centre and right) attempt Edward’s composition

By James Dacey

Over the centuries, the deep connections between physics and music have been noted by many, particularly in the way both endeavours are underpinned by a mathematical language.

In a new collaboration, the two activities are about to meet head on with the internationally renowned composer Edward Cowie teaming up with particle physicist Brian Foster and the violinist Jack Liebeck. Cowie has been commissioned to produce a major new series of works for solo violin that will trace the history of particle physics from the late 19th century through to the present day. The plan is for Foster and Liebeck to perform the pieces at several major science facilities during the 2012/2013 season. The trio may also be releasing a commercial CD.

On Friday I went to meet the three collaborators at Oxford University where they were discussing the progress of this unique composition, entitled Particle Partitas. I was there with the Physics World multimedia team to record a short feature film about the project, which will be appearing on physicsworld.com within the next few weeks.

Cowie, who says he writes his music as an expression of the experiences he has living and moving in the natural landscape, in fact trained as a physicist. He studied the subject at Imperial College, London, while he was still learning the piano and violin in his spare time, as well as doing the odd performance. Unfortunately, Cowie had to stop playing the violin seriously after a sporting injury to his left hand in 1966. It was interesting to hear that Cowie had composed the whole of this latest work without an instrument.

It was also fascinating to hear about the depths of thought that have gone into the work and to see how Cowie’s understanding of particle physics has informed the music. For instance, he talked about how one section of the piece gradually divides into shorter and shorter musical expressions. This, he said, mirrors the way Democritus conceived the concept of an atom as the division of matter into a final indivisible particle.

After all of the discussions, Cowie then treated us to a rendition of one of his earlier works: Rutherford’s Lights – a set of 24 “studies in light and colour for piano”. (He still regularly performs on the piano.) You can hear part of this work and Cowie talking about its influences in this interview with my colleague Michael Banks from December last year. You can also see a few more pictures from Friday’s meeting in this photo set on Flickr.

Flowing gas helps nanobubbles stick around

Physicists in the Netherlands say they have explained the mystery of why tiny nanobubbles on wet surfaces can endure for weeks, despite having extremely high internal pressures. Measuring about 1 µm across and 20 nm high, these highly stable entities are tiny versions of the ordinary bubbles that cling to the inside of a full beer or champagne glass. According to James Seddon and colleagues at the University of Twente, nanobubbles last for so long because gas molecules inside them do not escape into the main liquid, but instead hitch a ride on a circular path that puts them back inside.

Unlike the air in larger bubbles, which is just slightly above atmospheric pressure, the gas inside nanobubbles is known to be at pressures of tens or even hundreds of atmospheres. At such pressures, the conventional model of diffusion suggests that all the gas inside these tiny bubbles should be absorbed by the liquid within a few microseconds. So when nanobubbles were first spotted a decade ago, physicists were puzzled by the fact that they hang around for weeks.
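Those huge pressures come straight from the Young–Laplace relation, ΔP = 2γ/R: the smaller the curvature radius of the gas–liquid interface, the larger the overpressure. A quick check (using the textbook surface tension of water; the radii are chosen purely for illustration):

```python
GAMMA = 0.072   # surface tension of a clean air-water interface, N/m
ATM = 101_325   # pascals per atmosphere

def laplace_overpressure_atm(radius_of_curvature_m):
    """Excess pressure inside a bubble, Young-Laplace: dP = 2*gamma/R."""
    return 2 * GAMMA / radius_of_curvature_m / ATM

for r_nm in (10, 50, 100):
    dp = laplace_overpressure_atm(r_nm * 1e-9)
    print(f"R = {r_nm:3d} nm  ->  overpressure ≈ {dp:5.1f} atm")
```

Curvature radii of tens of nanometres give tens to hundreds of atmospheres, matching the figures quoted above.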

But the Seddon group’s new calculations and experiments could explain why. Its theory relies on two important physical properties of the system. One is that the nanobubbles are so small that a gas molecule will usually travel from one side of a bubble to the other without colliding with any other gas molecule. The other is that a gas molecule sticking to the surface inside the bubble is most likely to leave the surface in the perpendicular direction.

Flowing fountain

According to Seddon’s team, what happens first is that such gas molecules move away from the solid surface and towards the liquid interface of the bubble. But because they do not collide along the way, all the molecules are moving in approximately the same direction when they strike the edge of the bubble. This imparts a momentum to the liquid at the interface, causing liquid to flow along the bubble away from the solid surface.

The gas molecules get caught up in this flow and are swept towards the apex of the bubble. At this point the flow leaves the bubble and loops back down to the solid surface, bringing the gas molecules with it – and creating a fountain-like effect. But instead of going with the flow back along the bubble, the gas molecules tend to stick to the surface and move into the bubble. Here they are released from the surface and the process repeats itself.

To test the theory, the researchers used an atomic-force microscope to look for the outward flow at the apex of nanobubbles. Seddon told physicsworld.com that they measured an upward force of about 1.3 nN above their nanobubbles. This is in line with their theory, which predicts a force of about 1 nN.

Bursting their bubble?

However, not all physicists agree with the team’s conclusions. Phil Attard of the University of Sydney – who pioneered the study of nanobubbles – told physicsworld.com that he finds fault with the thermodynamics of the theory. “In my opinion the proposed model by Seddon and colleagues is not viable,” he says. The team now plans to confirm its findings by adding nanoparticles to the liquid, which should get caught up in the flow, and taking snapshots of the process.

Gaining a better understanding of the physics of nanobubbles is important, according to Seddon, because their existence is a fundamental problem of fluid dynamics. Nanobubbles also have a number of technological applications. They could, for example, play an important role in microfluidics, where their presence on the walls of tiny channels could make it much easier for fluid to flow.

The research is described in Phys. Rev. Lett. 107 16101.

Graphene tunes in to terahertz radiation

Graphene responds strongly to light at terahertz frequencies and this response can be fine-tuned to make practical devices. That is the conclusion of researchers in the US, who believe that their findings could help graphene find use in a wide range of applications, including medical imaging and security screening.

Terahertz radiation lies between the microwave and mid-infrared regions of the electromagnetic spectrum. It passes through clothing and packaging but is strongly absorbed by metals and other inorganic substances, making it of great interest to those developing airport security scanners. However, the radiation has proven extremely difficult to create, manipulate and detect.

Now, Feng Wang and colleagues at the Lawrence Berkeley National Laboratory and the University of California at Berkeley say that they have made the “beginnings of a toolset” for working with terahertz radiation. The technology is based on graphene, which is a sheet of carbon just one atom thick. The graphene is arranged in arrays of extremely narrow ribbons, called nanoribbons. The terahertz response of the array can be tuned by varying the width of the ribbons and the number of charge carriers (electrons and holes) in the structures. In fact, varying these two parameters allows the researchers to control the collective oscillations of electrons (plasmons) in the graphene ribbons, and it is these plasmons that couple strongly to the terahertz light.

Lower-frequency plasmons

Plasmons are more familiar in the high-frequency, visible part of the electromagnetic spectrum, notably in 3D metallic nanostructures. One well-known example is stained-glass windows, where the colours arise from oscillating collections of electrons on the surfaces of nanoparticles of gold, copper and other metals contained in the glass. However, graphene is only one atom thick and its electrons move in just two dimensions, explains Wang, so plasmons in this material occur at much lower frequencies.

What is more, terahertz radiation spans wavelengths from about 0.03 to 1 mm, but the width of the graphene ribbons is just 1–4 µm. “A material that consists of structures with dimensions much smaller than the relevant wavelength, and which exhibits optical properties distinctly different from the bulk material, is called a metamaterial,” said Wang in an LBNL press release. “So we have not only made the first studies of light and plasmon coupling in graphene, we’ve also created a prototype for future graphene-based metamaterials in the terahertz range.”

Resonant excitations

So, how can varying the width of the graphene nanoribbons make them absorb different frequencies of light? As mentioned, a plasmon describes the collective oscillations of many electrons but its frequency depends on how rapidly these oscillations travel between the edges of a ribbon. When light of the same frequency as the oscillations is applied, a “resonant excitation” results, something that produces an increase in the strength of the oscillations and the amount of light absorbed at that frequency. Because the frequency of the oscillations depends on the width of the ribbon, varying its width thus allows it to absorb different frequencies of light.
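The expected scaling can be sketched from general graphene-plasmon theory (this is an illustration, not the paper's fit): a ribbon of width W selects a plasmon wavevector q ≈ π/W, and graphene's square-root plasmon dispersion then gives a resonance frequency f ∝ n^(1/4)/W^(1/2), where n is the carrier density. The normalisation constant below is arbitrary, chosen only to land in the terahertz range:

```python
def plasmon_freq_thz(width_um, n_rel=1.0, k=3.0):
    """Illustrative graphene-nanoribbon plasmon resonance in THz.
    width_um: ribbon width in microns; n_rel: carrier density relative
    to a reference; k: arbitrary normalisation (THz at W=1 um, n_rel=1)."""
    return k * n_rel ** 0.25 / width_um ** 0.5

# narrower ribbons resonate at higher frequency...
for w in (1.0, 2.0, 4.0):
    print(f"W = {w:.0f} um  ->  f ≈ {plasmon_freq_thz(w):.2f} THz")

# ...and electrostatic doping (raising n) shifts the resonance upwards
print(f"{plasmon_freq_thz(2.0, n_rel=2.0):.2f} THz")
```

This captures the two tuning knobs described in the article: ribbon width and carrier concentration.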

The number of charge carriers in the ribbon can also affect the strength of the coupling between light and plasmons. One advantage of graphene is that the concentration of charge carriers can easily be increased or decreased in the material by applying a strong electric field. Such a technique is called electrostatic doping. The researchers have also found that light shone perpendicularly onto the ribbons is much better absorbed at the plasmon resonant frequency than light shone at other angles.

Room-temperature measurements

Wang and co-workers obtained their results by shining terahertz light (from the beamline at Berkeley Lab’s Advanced Light Source) onto the graphene ribbon arrays. They then measured the transmitted light using the beamline’s infrared spectrometer. The results show that the coupling between light and plasmons in graphene is an order of magnitude stronger than in other 2D systems, such as semiconductors. The light absorption in graphene can also be measured at room temperature, unlike in these other materials, which must be cooled to near absolute zero.

“Wang’s team reports an interesting study of plasmons in graphene ribbons and shows how to tune their properties, and the work represents a further step in understanding light–electron interactions in this material,” commented Andrea Ferrari of Cambridge University, who was not involved in the work. “The terahertz range covered by these experiments could eventually enable products such as portable terahertz sensors for remote detection of dangerous agents, environmental monitoring or high-speed wireless communications. However, to achieve these goals will require much more effort on the part of the graphene and plasmonics communities.”

Wang told physicsworld.com that his team, for its part, is now looking into designing different metamaterial structures from graphene and examining their properties.

The current work is published in Nature Nanotechnology.

Journal editor resigns over climate-change paper

The editor of the journal Remote Sensing has resigned over a climate-change paper that he admits should “not have been published”. Wolfgang Wagner of the Vienna University of Technology in Austria announced his resignation on 2 September, taking responsibility for his reviewers’ decision to publish the “controversial paper”.

The article was published in Remote Sensing – an open-access journal produced by the Multidisciplinary Digital Publishing Institute – on 25 July. Entitled “On the misdiagnosis of surface temperature feedback from variations in Earth’s radiant energy balance”, the paper was written by Roy Spencer, principal research scientist at the University of Alabama in Huntsville (UAH), and his UAH colleague William Braswell.

The pair’s paper questions the reliability of climate models following research on data obtained by NASA’s Clouds and the Earth’s Radiant Energy System satellite. The scientists claim that the relationship between clouds and climate temperatures leads to results that are inconsistent with accepted models of climate change.

“We were trying to make a point about the role of clouds in the climate system. We demonstrated a big difference between satellite observations of how the climate behaves and climate models,” says Spencer. “We were demonstrating something we had demonstrated before but that was ignored.”

Opponents of the idea that humans contribute to climate change hailed the paper, with the US business magazine Forbes claiming “new NASA data blow gaping hole in global warming alarmism”. However, climate scientists immediately focused on the paper’s methodology. “It has no discussion of uncertainties or error bars,” says Kevin Trenberth, head of climate analysis at the US’s National Center for Atmospheric Research. “The observation record from space is only 10 years long. And Spencer and Braswell used only 6 of 14 climate-change models.”

Wagner, however, decided to take responsibility for publishing the paper and step down as editor-in-chief. “I saw several basic problems [in the Spencer–Braswell paper], including that correlation does not imply causality, the fact that 10 years of satellite data are not enough to come to such strong conclusions about the subtle and long-term changes in climate, and that, indeed, too little quantitative evidence was presented to support these strong claims,” he told physicsworld.com.

Spencer sees the resignation as the result of political pressure from the Intergovernmental Panel on Climate Change (IPCC). “The IPCC was formed to build the scientific case that humans were the cause of climate change,” he says. “It’s becoming very difficult for a sceptic to get a paper published. Virtually all climate science that is funded goes to support the IPCC process.” Wagner, however, rejects that accusation, adding that “nobody exerted any pressure on me or the journal”.

Climate debate

“Every year, one or two sceptical papers get published, and these are then trumpeted by sympathetic media outlets as if they had discovered the wheel” – Andrew Dessler, Texas A&M University

Climate scientist Kerry Emanuel from the Massachusetts Institute of Technology says the affair reflects the broad debate over climate change rather than the specific detail of the Spencer–Braswell paper. “There’s a huge discrepancy between what the paper says and what people, including Roy Spencer, say that it says,” he explains. “People seem to be replying to the climate-change sceptics rather than the paper itself.”

Andrew Dessler of Texas A&M University agrees. “Every month, dozens if not hundreds of papers are published that are in agreement with the mainstream theory of climate science,” he says. “But every year, one or two sceptical papers get published, and these are then trumpeted by sympathetic media outlets as if they had discovered the wheel. It therefore appears to the general public that there’s a debate.”

Portable lasers probe oil-rig waste

Researchers in Europe have developed a new device for monitoring water quality based on a quantum cascade laser – a tool derived from fundamental physics. The portable device has been built by the Austrian company QuantaRed together with another firm called Eralytics, which produces such instruments to monitor petrochemical products. QuantaRed was spun off from the Vienna University of Technology in 1999 and now produces sensor technology for the analysis of liquid or gaseous media, based on the specific properties of quantum cascade lasers.

Electronic waterfall

While regular semiconductor lasers are now commonplace, the quantum cascade laser is based on a different fundamental principle. First demonstrated in 1994, it uses only one type of charge carrier – electrons – rather than the recombination of electron–hole pairs, and operates a bit like an electronic waterfall. Electrons cascade down a stack of identical energy steps built into the laser material during crystal growth, emitting a photon at each step. In contrast, a diode laser emits just one photon per electron–hole recombination.
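The basic numbers are easy to check (illustrative values, not QuantaRed's design): the emission wavelength follows from the engineered energy drop per step, λ = hc/ΔE, and an electron cascading through N stages can emit up to N photons.

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def emission_wavelength_um(step_energy_ev):
    """Wavelength (in microns) of the photon emitted at each cascade step."""
    return H * C / (step_energy_ev * EV) * 1e6

# a step of ~0.12 eV lands in the mid-infrared, where hydrocarbons absorb
print(f"{emission_wavelength_um(0.12):.1f} um")

# with, say, 30 cascade stages, one electron can yield up to ~30 photons,
# versus one photon per electron-hole pair in a diode laser
```

Because the step energy is set during crystal growth rather than by the bandgap, the same material system can be engineered to emit across a very wide wavelength range, as the paragraph above describes.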

Another useful feature of the quantum cascade laser is that it can be designed to emit at any wavelength of radiation over an extremely wide range using the same combination of materials in the active region. If tuned to the mid-infrared range, a quantum cascade laser can be embedded in miniaturized chemical analysers and be ideal for monitoring hydrocarbons.
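The design logic described above can be sketched numerically. In the toy calculation below (the step energy and stage count are illustrative values, not figures from the article), the emission wavelength follows from the engineered energy step via λ = hc/E, and the photon yield per electron scales with the number of cascade stages:

```python
# Hedged sketch of quantum-cascade-laser design arithmetic.
# The 250 meV step and 30-stage count are illustrative assumptions.
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
EV_TO_J = 1.602176634e-19 # J per eV

def emission_wavelength(step_mev):
    """Wavelength (m) of the photon emitted at each cascade step."""
    return H * C / (step_mev * 1e-3 * EV_TO_J)

def photons_per_electron(n_stages):
    """One photon per stage, versus at most one in a diode laser."""
    return n_stages

# A step of ~250 meV puts the emission near 5 um - the mid-infrared
# band suited to hydrocarbon sensing
print(f"{emission_wavelength(250) * 1e6:.2f} um")   # ~4.96 um
print(photons_per_electron(30))                     # 30
```

Retuning the laser to a different wavelength is then, in principle, a matter of changing the engineered step energy during crystal growth rather than changing the material system.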

Monitoring the field

Detecting the presence and quantities of hydrocarbons is vital in the oil industry, where the quality of waste water must be closely monitored. Conventional methods used for on-site monitoring generally rely on chlorofluorocarbon (CFC) organic solvents – the drawback being that these damage the environment. Moreover, CFC-free measurements – such as gas chromatography – need to be performed on land, with the analyses carried out by experts. In some cases this involves transporting water samples over long distances, which can cause the samples to become contaminated.

QuantaRed says it has spent five years developing the new instrument – the ERACHECK – which it says provides fast, accurate measurements of hydrocarbons in waste water from oil platforms, rigs and refineries out at sea, without using ozone-depleting CFCs.

Weighing 8 kg, the ERACHECK is built to be portable and rugged so that it can be transported to off-shore sites and operated by staff on-site via a simple touchscreen. QuantaRed guarantees a fast response, with a measuring time of 2 min and an accuracy better than 1 ppm. It also has a rechargeable battery for use in the field.

Out at sea

QuantaRed recently sold its first ERACHECK device to the Norwegian oil company Statoil. “After six months of detailed testing, Statoil has declared that the ERACHECK fully meets its high expectations. All tests were passed convincingly. We were pleased to receive the order to equip Statoil’s first offshore platform with our instrument,” says Wolfgang Ritter, chief executive officer of QuantaRed.

Brian McCarry, an environmental toxicologist at McMaster University in Canada, thinks that the ERACHECK may prove to be very useful. While he says that he cannot gauge the accuracy of the measurements it provides, he points out that “the people who need this sort of measurement do not need great accuracy of measurement, rather they need a good, ‘ball park’ estimate of hydrocarbon levels”. He says that while gas chromatography will always be the gold standard, the ERACHECK could be used to screen large numbers of samples, and in that sense it fills an interesting niche in the market.

Big science at very low energies

Institut Laue-Langevin

Particle physics usually conjures up images of electrons or protons smashing together at extremely high energies. But when I visited the Institut Laue-Langevin (ILL) in Grenoble, France, I met two physicists who do particle physics and cosmology using neutrons that are so lethargic they move slower than your average sprinter. Forget about the TeV (10¹² eV) particle energies at the Large Hadron Collider, the energy of these neutrons is measured in neV (10⁻⁹ eV).
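The “slower than a sprinter” claim is easy to check with a back-of-the-envelope calculation. Assuming an illustrative ultracold-neutron kinetic energy of 100 neV (a round number, not a figure from the article), the classical speed v = √(2E/m) comes out at a few metres per second – well below a sprinter’s roughly 10 m/s:

```python
# Rough estimate of an ultracold neutron's speed from its kinetic
# energy; the 100 neV input is an illustrative assumption.
import math

EV_TO_J = 1.602176634e-19         # J per eV
NEUTRON_MASS = 1.67492749804e-27  # kg

def neutron_speed(energy_nev):
    """Classical speed v = sqrt(2E/m) for kinetic energy in neV."""
    energy_j = energy_nev * 1e-9 * EV_TO_J
    return math.sqrt(2 * energy_j / NEUTRON_MASS)

print(f"{neutron_speed(100):.1f} m/s")   # ~4.4 m/s
```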

This means that these ultracold neutrons can be stored for long periods of time – and carefully poked and prodded to reveal their secrets. You can read more in this interview with ILL’s Oliver Zimmer and Peter Geltenbort.

I also recorded a broad-ranging interview with Andrew Harrison, who heads up the science division at ILL. I asked Andrew about the role that ILL’s reactor-based neutron source will play once the accelerator-based European Spallation Source (ESS) starts up in Sweden in about eight years’ time. That interview will appear online later this month.

Looking forward to October, Michael Banks is putting the finishing touches on a bumper Physics World supplement on “big science”. As well as looking at the challenges involved in making neutrons at the ESS, the supplement will also look at the ITER fusion facility, the Extremely Large Telescope, the Large Hadron Collider and much more…so stay tuned.

Copyright © 2026 by IOP Publishing Ltd and individual contributors