
New cochlear implant takes the middle road

Researchers at the Massachusetts Institute of Technology (MIT), working with physicians from Harvard Medical School and the Massachusetts Eye and Ear Infirmary (MEEI), have developed a low-power signal-processing chip that could lead to a novel cochlear implant requiring no external hardware. The implant – which would use the natural microphone of the middle ear rather than a skull-mounted sensor – could be recharged wirelessly and would run for about eight hours on each charge.

A cochlear implant is a surgically implanted medical device that provides a sense of sound to a person who is profoundly deaf or severely hearing-impaired as a result of damage to the sensory hair cells in their cochleae. Hundreds of thousands of people worldwide have benefited from such devices since they were first implanted in adults in 1984.

Easy charging

Unfortunately, current implants are bulky and unattractive. “Today’s cochlear implants rely on an external behind-the-ear component to house an external microphone and power source, which can be cumbersome, has limited usage in water and can be aesthetically unappealing,” says Anantha Chandrakasan of the Department of Electrical Engineering and Computer Science at MIT, who led the new work. “The idea with this design is that you could use a phone, with an adaptor, to charge the cochlear implant, so you don’t have to be plugged in,” he adds.

The new implant is an adaptation of existing middle-ear implants – devices that vibrate bone structures in the middle ear, which are normally intact in patients using cochlear implants. The middle ear contains three tiny bones, or “ossicles”, which conduct sound vibrations from the eardrum to the inner ear and its cochlea – the small, spiral chamber that converts acoustic signals into the electrical signals detected by the brain. Patients who need middle-ear implants suffer hearing loss because one of the ossicles – the stapes – does not vibrate strongly enough to transmit sound to the cochlea. The middle-ear implant therefore has a tiny sensor that detects the ossicles’ vibrations and an actuator that helps drive the stapes.

Chandrakasan and his team envisage that the new device would use a similar type of sensor, but the signal it generates would travel to a microchip implanted in the ear, which in turn would convert it to an electrical signal and pass it on to an electrode in the cochlea. The team has now made a prototype of this chip, along with a prototype charger that plugs into an ordinary mobile phone and can recharge the signal-processing chip in about two minutes. The findings and prototype were presented by Marcus Yip, the lead author of the paper, at the International Solid-State Circuits Conference held in San Francisco, US, last week.

Specialized waveform

One key advance that let the team dispense with the bulky external apparatus was cutting the power requirements of the signal-processing chip. Chandrakasan’s lab specializes in low-power chips, and so the researchers could apply techniques they have perfected over the years, such as tailoring the arrangement of low-power filters and amplifiers to the precise acoustic properties of the incoming signal.

But one novel feature of the chip is the new signal-generating circuit that slashes its power consumption by an additional 20–30%. It does this by using a new waveform – the basic electrical signal emitted by the chip that is modulated to encode acoustic information. Based on previous research into simulated nerve fibres, the waveform was adapted to make the device more power-efficient while still offering enough stimulation of the auditory nerve. Two of Chandrakasan’s collaborators at MEEI – Konstantina Stankovic and Don Eddington – tested the waveform on four patients who already had cochlear implants, and found that it did not affect their ability to hear and that the results were on a par with the team’s computational modelling.

While the device has not yet been implanted in any patients, it was tested in the ears of human cadavers, demonstrating that the sensor and microchip could pick up speech signals played into the ear. Conventional cochlear implants are approved by the US Food and Drug Administration for children as young as one year old, so the team envisions that its implants could benefit toddlers too. Stankovic told physicsworld.com that the added advantage of a fully implantable device for children is that they could “play without worrying about the external component falling, breaking or being uncomfortable”.

The research will be published in the proceedings of the IEEE International Solid-State Circuits Conference.

Alices in a nuclear Wonderland

Denise Kiernan’s The Girls of Atomic City tells the story of a dozen women who left rural America in the early 1940s and tumbled suddenly, like Alice down the rabbit hole, into the nascent US military-industrial complex, with all its regulations, factory discipline, dangers and surveillance. These women, along with thousands of others like them, made up the major part of the labour force of the factory in Oak Ridge, Tennessee, that enriched uranium for the Manhattan Project’s first atomic bomb. Kiernan narrates the story from the perspective of these ill-informed young women, who worked in blackout security conditions with no information about what they were doing, or why.

In this accessible account, each woman’s story, derived from Kiernan’s oral interviews, is spliced together with that of the others. To provide some of the background that “the girls” themselves lacked at the time, Kiernan intersperses their personal stories with chapters on “tubealloy” – the code name for uranium. These chapters give the standard account of the Manhattan Project, featuring the big men whom Kiernan refers to as “The General” (Leslie Groves), “The Scientist” (J Robert Oppenheimer) and “The Engineer” (Kenneth Nichols). For this master narrative, Kiernan draws on a couple of dozen archival documents, and relies heavily on the book City Behind a Fence by Charles Johnson and Charles Jackson, along with Peter Bacon Hales’ masterful Atomic Spaces.

The tales of “the girls” frequently stress their naivety, ignorance and fear of security officials. However, in writing about them, Kiernan focuses on conventional “women’s issues” to the point where, at times, the prose descends to the mundane. For example, when Celia Szapka and a few of her fellow northerners stopped at a café near the end of their journey to Oak Ridge, Kiernan writes, “One menu item puzzled them…None had heard of any such thing as ‘grits’.” Kiernan explains that Szapka’s Polish mother had cooked mostly potatoes, rather than the cornmeal porridge that was (and is) a staple of southern American food.

Celia liked the grits, and in the book they form part of her “arrival” story – one of many stories in the book to contain first impressions of Oak Ridge. This muddy town of hutments, prefab housing and corrugated steel structures took shape quickly, accommodating 75,000 new residents in just a few months, but in Kiernan’s account, the reader learns less about the effects of placing a dangerous plant in the middle of a rural community or about the women’s work, and much more about their food, dating, wardrobe, weddings and families. The Girls of Atomic City, as a consequence, reads like Physics World welded to Good Housekeeping.

This is a shame because Kiernan provides some fascinating material that adds to the history of the Manhattan Project. Rosemary Maiers, who had worked as a nurse at Oak Ridge’s first medical clinic, related to Kiernan the story of a young naval ensign who suffered a nervous breakdown at the plant and started to do exactly what was forbidden: he talked, babbling on and on about the secret plant and the weapons it was going to produce. Afraid to release the voluble young man to medical care away from the nuclear reservation, the site psychiatrist Eric Clarke commandeered an apartment and locked the sailor in, treating him with electric shock therapy.

Kiernan also recounts how some women, just out of high school, operated the plant’s array of “calutrons” – the large electromagnetic separators built to extract fissile U-235 from the far more abundant U-238. The women challenged the male scientists at Ernest Lawrence’s lab in Berkeley, California, where the calutron was invented, to a race to see who could generate more “product” (enriched uranium) per run. The “girls” handily beat the California PhDs. Lawrence explained his team’s loss with condescension, rationalizing that his crew was always fiddling with the dials trying to make the calutrons run faster and more smoothly, while the girls stayed on task, precisely following directions. Kiernan doesn’t question why, then, the PhDs, with all their extra knowledge and expertise, didn’t in fact succeed in producing more enriched uranium, more quickly.

Later in the book, Kiernan provides intriguing new material on the story of Ebb Cade, a black employee who, after landing in the Oak Ridge hospital following a car crash, was deployed as a human lab rat. Expecting that Cade would soon die of his injuries, Manhattan Project doctors injected a sizable dose of plutonium into his veins to see how much lodged in his body. For this part of her story, Kiernan quotes from an oral history by Karl Morgan, a health physicist. Morgan described how Robert Stone, one of the leading lights of the Manhattan Project medical programme, told him about the opportunity the badly injured Cade presented when he was checked into the hospital: “Karl, do you remember that [racial epithet] truck driver that had this accident some time ago?” Stone then laid out how they planned to take samples of bone, liver and other organs from Cade’s body after his death.

When Cade refused to expire, Stone and the other doctors nevertheless held off setting his broken bones for 90 days in order to let the plutonium travel through his body before taking bone samples during surgery. As a consolation prize, they extracted 15 teeth, which they determined Cade no longer needed, as sample material. Kiernan recounts how, in Morgan’s version, Cade, clearly suspicious of his “treatment” at the hospital, ran away as soon as he was ambulatory, taking with him his valuable plutonium-laced urine, faeces and organs. Unfortunately, much of the incriminating material of this story is buried in the endnotes. More distressingly, Kiernan describes the event and lets it lie, undigested, leaving her readers to figure out just what this episode means to her larger history.

So what is the message of The Girls of Atomic City? Kiernan doesn’t directly state it, but she implies that she is adding to the trope of the “greatest generation” – the journalist Tom Brokaw’s description of the Americans who grew up with the deprivations of the Great Depression and made a decisive contribution to the war effort. Kiernan is right to suggest that women, too, numbered among the Americans who worked, sacrificed and soldiered on selflessly for a just cause, although she is by no means the first to do so. But in seeking to balance between “commemoration and celebration”, Kiernan draws back from assessing the impact of this pioneering nuclear-weapons community on democracy, civil rights and public health. While she sporadically acknowledges that there were problems – including the loss of civil rights, incessant surveillance, segregation, gender and racial discrimination, medical experimentation on unwitting human subjects and (though Kiernan does not address it) residents being unknowingly subjected to toxic and radioactive contaminants – her narrative choices unfortunately lead the reader to feel a lot like Alice, lost in a historical Wonderland with little direction or analysis.

  • 2013 Touchstone Books £16.67/$27.00hb 373pp

Are ‘sterile neutrinos’ dark-matter particles after all?

It is always interesting to us at Physics World when a particular topic suddenly attracts the attention of the physics community, especially when it’s a rather hotly debated subject. The past couple of days, for example, have seen a lot of talk about “sterile neutrinos”, prompted by two papers – published in quick succession on the arXiv preprint server – that suggest a tentative detection of these hypothetical particles.

Both papers are based on an unidentified emission line seen in the X-ray spectrum of some galaxy clusters obtained by the European Space Agency’s XMM-Newton observatory. Intriguingly, sterile neutrinos are also considered to be possible dark-matter candidates, meaning that – if discovered – they would be the first fundamental particles to lie beyond the bounds of the Standard Model of particle physics.

For those of you who haven’t heard about the sterile neutrino before, it’s a proposed and much-debated fourth type of neutrino that would have a mass but would not interact with the weak force at all, in contrast to the three types of neutrinos we do know exist – the electron, muon and tau neutrinos. Indeed, sterile neutrinos (if they exist) are likely to be even trickier to detect than the others as they would interact only with these “active” neutrinos.

In the new research, Bulbul et al. and Boyarsky et al. both identified an extremely weak, monochromatic, 3.5 keV line in the X-ray spectrum that could be interpreted as a signal from a decaying 7 keV sterile neutrino. The graph above, taken from Bulbul’s paper, shows the recent constraints placed on sterile-neutrino production models – the measurement obtained by Bulbul and colleagues is marked with a red star; it is consistent with previous upper limits and lies in a region of parameter space that has not yet been ruled out. The researchers behind both papers clearly state that the observation itself is currently very uncertain: the signal is weak and lies close to several well-known faint lines, and there are “significant modelling uncertainties”.
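The connection between a 3.5 keV line and a 7 keV particle is simple two-body decay kinematics: a sterile neutrino at rest that decays into a (nearly massless) active neutrino and a photon must split its rest energy equally between the two, so

```latex
E_\gamma \approx \frac{m_s c^2}{2} = \frac{7\,\mathrm{keV}}{2} = 3.5\,\mathrm{keV}
```

which is why an emission line at 3.5 keV points to a sterile neutrino with a mass of 7 keV.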

All of which means that the detection might be just one more false alarm. In fact, experiments have long been trying to detect sterile neutrinos and, so far, placing limits on their mass or lifetime is the best that has been achieved.

Adam Falkowski, who writes the Resonaances blog, has discussed the papers in some depth and what he has to say is worth reading. As he explains, if the signal itself is found to be accurate and indicative of “new physics”, then one possible explanation may point towards the sterile neutrino. While Falkowski is cautiously optimistic about this finding, others such as Matt Strassler, who writes the Of Particular Significance blog, are more sceptical. In fact, Strassler suggests that we don’t lose any sleep over it until the signal is confirmed.

On the other hand, I’ve been surprised that no neutrino researchers have been commenting on the papers, so I dropped Maury Goodman, the leader of the Argonne high-energy-physics neutrino group, an e-mail asking for his view. While Goodman has not had a chance to read through the new research, he’s clear where he stands on the subject of sterile neutrinos: he doesn’t believe they exist.

Goodman explains that measurements made on neutrino mass and mixing angles over the last 15 years have been mostly consistent, with the exception of a few measurements that some researchers interpret as evidence of more neutrinos. “It is my understanding that it is impossible to fit any of those anomalies together in a consistent way, and I hear a lot of statistical arguments that are, in my opinion, misguided,” he insists. Goodman goes on to say that while “a sizeable minority of neutrino physicists are considering sterile-neutrino searches, I think a majority of us discount the idea”.

The bottom line is that the new research has got many physicists talking – both Sean Carroll and Jim Al-Khalili tweeted about it yesterday and there is lots more chatter about the particle on Twitter. (Only yesterday, we published another news story that revolved around the sterile neutrino, but I will leave you to take a look at that for yourselves, as it is based on an unrelated observation.)

Whether anything truly groundbreaking will emerge from this anomalous signal or if it too will be banished to the vaults of near-miss signals remains to be seen.

Microscope exploits spooky action at a distance

A phenomenon dubbed by Einstein as “spooky” might lead to very real benefits for biologists, thanks to new research by physicists in Japan. They have improved the signal-to-noise ratio of a phase-based microscope by exploiting entanglement – the connectedness between particles that allows a measurement on one of them to instantaneously fix the quantum state of another, no matter how far apart they are. This improved performance, say the researchers, could be particularly useful when studying delicate and transparent samples, such as biological tissue.

The phase of light plays an important role in modern microscopy. Standard optical microscopes record variations in the intensity of light passing through or reflecting off an object, but this approach produces very low-contrast images when the sample under scrutiny is highly transparent. Microscopes that instead exploit phase produce images by recording the interference of light rays that pass through regions of differing refractive index across a sample. Such devices are well suited to imaging living cells, for example, which are both highly transparent and extremely sensitive to intense light.

More signal, less noise

In the latest work, Shigeki Takeuchi and colleagues at Hokkaido University use entangled photons to boost the performance of a “differential interference contrast microscope” (DIM). This type of device splits a laser beam into two new beams that are focused on adjacent points on the plane of the sample. The pair of beams is scanned across the sample – being focused onto one set of neighbouring points after another – and made to recombine and interfere in a suitable detector. In this way, the device reveals the variation of refractive index, and hence composition, of the sample.

Since each photon in a non-entangled laser beam senses the sample’s phase just once, the signal from a conventional DIM scales with the number of photons in the beam (N). As the statistical error associated with counting discrete photons introduces a noise equal to the square root of N, the signal-to-noise ratio in an ordinary DIM is therefore √N. But if the photons are entangled, the N-photon state accumulates the phase N times, thereby multiplying the signal and improving the signal-to-noise ratio by a further factor of √N.

In their experiment, Takeuchi and colleagues used what are known as NOON states – so-called because they are a quantum superposition of N horizontally polarized photons (with “0” photons in the vertical mode) and N vertically polarized photons (with “0” in the horizontal mode). Using these NOON states, the team generated pairs of entangled photons (i.e. where N is 2) to image a letter “Q” etched 17 nm deep in a glass plate. The researchers created pixels with an average of 460 photon pairs and were able to obtain much better contrast than was possible using single (classical) photons (see image above). In fact, the researchers found that the entanglement improved the signal-to-noise ratio of the DIM by a factor of 1.35 – slightly short of the expected value of √2 (i.e. 1.41) because of imperfect two-photon quantum interference.
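The scaling above can be checked with a few lines of arithmetic (a sketch of the standard shot-noise argument, not the team’s actual analysis):

```python
import math

def classical_snr(n_photons):
    # Shot-noise limit: signal ~ N, noise ~ sqrt(N), so SNR ~ sqrt(N)
    return math.sqrt(n_photons)

def noon_snr(n_photons):
    # An N-photon NOON state accumulates phase N times faster, which
    # ideally improves the SNR by a further factor of sqrt(N)
    return math.sqrt(n_photons) * math.sqrt(n_photons)

# Ideal quantum enhancement for entangled photon pairs (N = 2)
ideal_gain = noon_snr(2) / classical_snr(2)
print(round(ideal_gain, 2))  # 1.41 – the experiment achieved 1.35
```

The gap between the ideal √2 and the measured 1.35 is the cost of imperfect two-photon interference in the real apparatus.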

NOON a no-no?

Warwick Bowen, a quantum physicist at the University of Queensland in Australia, praises the latest work as “a milestone in quantum measurement and a step towards biological applications”. But he says that the precision achieved by the Japanese group is “far inferior to the state of the art in conventional phase-contrast microscopes”. Rather than sticking with NOON states, he thinks Takeuchi and co-workers might be better off using another form of non-classical light with very low noise levels that is known as “squeezed light”, which can be produced in bright beams and is already used in gravitational-wave detectors.

Takeuchi and colleagues are now building what they dub a “more user-friendly prototype” tailored for biological imaging. But Takeuchi admits that it will be difficult to further increase the number of entangled photons – and so raise the signal-to-noise ratio – using the group’s existing source of entanglement (parametric down conversion from a conventional nonlinear crystal). Using a NOON state with 10 or more photons, Takeuchi says, might instead be done via a special kind of crystal that uses less power and produces smaller optical losses or by using the kind of photonic circuits used in quantum computation. “This is a truly exciting challenge, both in fundamental science and applications,” he adds.

Details of the research are available on arXiv.

Could sterile neutrinos solve the cosmological mass conundrum?

When scientists measure the total amount of matter in the universe using two competing methods, they get conflicting answers: one method gives a higher value for the total matter density than the other. To resolve this discrepancy, two separate research groups have now proposed that the missing mass might take the form of neutrinos. The total matter density is a crucial cosmological parameter, underpinning the interpretation of a vast number of astrophysical phenomena.

Neutrinos are difficult to study because they interact only via the weak nuclear force, which acts over very short distances, and gravity, which is extremely weak. In the Standard Model of particle physics, neutrinos come in three flavours – electron, muon and tau. They were once thought to be massless, but the discovery of neutrino oscillations – whereby neutrinos change flavour, which requires their masses to differ – implies that at least two of them have mass, with the oscillation rate depending on the differences between the squares of the masses. These differences have been measured in particle-physics experiments such as Super-Kamiokande in Japan, allowing physicists to place a lower bound of 0.06 eV on the sum of the neutrino masses, but the absolute values remain unknown.
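The 0.06 eV lower bound follows directly from the measured mass-squared splittings, assuming the lightest neutrino is (nearly) massless. A quick sketch, using splitting values roughly those quoted in global oscillation fits (illustrative, not taken from the papers discussed here):

```python
import math

# Approximate mass-squared splittings from oscillation data (eV^2);
# rough values of the kind quoted in global fits
dm2_21 = 7.5e-5   # "solar" splitting
dm2_31 = 2.5e-3   # "atmospheric" splitting

# Minimal case: normal ordering with a massless lightest state
m1 = 0.0
m2 = math.sqrt(dm2_21)
m3 = math.sqrt(dm2_31)

total = m1 + m2 + m3
print(round(total, 2))  # 0.06 eV, the lower bound quoted in the article
```

Because oscillations fix only the splittings, not the absolute scale, the true sum could be much larger than this minimum – which is exactly the freedom the cosmological analyses exploit.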

Cosmic discrepancies

Measurements of the total amount of matter in the universe based on the cosmic microwave background (CMB) – the thermal remnant of the Big Bang – generally give a higher value for the total matter density than measurements that count galaxy clusters or use gravitational lensing. In the first of the new papers, Richard Battye of the University of Manchester and Adam Moss of the University of Nottingham, both in the UK, suggest that massive neutrinos could account for the fact that the total mass inferred from the size of ripples in the CMB – measured by satellites such as Planck and the Wilkinson Microwave Anisotropy Probe (WMAP) – is higher than the total mass measured by the other methods. In the second paper, Mark Wyman and colleagues at the University of Chicago in the US make the same suggestion based on different data sets.

The CMB came into being about 380,000 years after the Big Bang, when neutral atoms first formed and the decoupling of matter and radiation allowed photons to travel freely across the universe. At the temperature of that era (about 3000 K), neutrinos would have been highly relativistic; since then they have cooled to the point where they are non-relativistic and their energy is dominated by their rest mass. Because massive neutrinos stream freely out of collapsing regions, they suppress the growth of cosmic structure – which could explain why surveys of today’s universe infer less clustered matter than the CMB implies.

The number of neutrinos present when neutral atoms first formed can be calculated relatively easily, and if the sum of the neutrino rest masses is 0.06 eV, neutrinos make a negligible contribution to the mass in the universe. The authors of the two papers therefore calculated what combined mass would actually reconcile the CMB observations with observations of today’s universe. Both groups considered two possibilities. The first, and simplest, involved increasing the masses of the three known neutrinos: in this case, Battye and Moss calculated a combined mass of approximately 0.32 eV, while the Chicago scientists arrived at a figure of about 0.39 eV (the two values lying well within each other’s errors).
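The standard cosmological relation Ων h² = Σmν / 93.14 eV (where h is the dimensionless Hubble parameter) translates such mass sums into a fraction of the critical density. A sketch, using an assumed round value of h ≈ 0.68:

```python
# Standard relation: Omega_nu * h^2 = sum(m_nu) / 93.14 eV
# h ~ 0.68 is an assumed round value for the Hubble parameter
def omega_nu(sum_mass_eV, h=0.68):
    return sum_mass_eV / 93.14 / h**2

for label, m in [("minimal (0.06 eV)", 0.06),
                 ("Battye & Moss (0.32 eV)", 0.32),
                 ("Wyman et al. (0.39 eV)", 0.39)]:
    # The heavier scenarios contribute roughly 1% of the critical density
    print(f"{label}: Omega_nu ~ {omega_nu(m):.4f}")
```

This makes the contrast concrete: the minimal-mass case is a negligible slice of the cosmic matter budget, while the masses proposed in the two papers contribute several times more.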

Sterile particles

Both groups also looked at models containing “sterile neutrinos” – a proposed and much-debated fourth type of neutrino that contributes mass but does not interact with the weak force. Such sterile neutrinos would therefore only interact with the other three “active neutrinos”, making them that much more difficult to detect.

In the new work, the researchers kept the active-neutrino mass pegged at 0.06 eV and attributed the extra mass to sterile neutrinos. Depending on the exact model they used, Battye and Moss found a sterile-neutrino mass between 0.3 eV and 0.5 eV, while Wyman’s group found a mass of around 0.4 eV. This model fitted measurements of the matter in the universe made using other techniques slightly better than the heavy-active-neutrino scenario did. “In our paper we were fairly even handed between the two situations,” says Wyman, now at New York University, “but we chose to present our figures for the sterile-neutrino case because, in our collaboration, we all view that as the more likely case from a particle-physics standpoint.”

Observational cosmologist Ofer Lahav of University College London, who in 2010 placed an approximate upper bound on the neutrino mass of 0.28 eV, is intrigued but sceptical. “It’s extremely interesting to get the neutrino mass from cosmological data sets,” he says. “It’s also very challenging – I can easily see different analyses getting slightly different answers, and equally I think the tension in the data might have other explanations such as local variation of the Hubble constant or other systematic errors.”

Both papers are published in Physical Review Letters (Phys. Rev. Lett. 112 051303; Phys. Rev. Lett. 112 051302).

Graphene oxide makes perfect sieve

Membranes made from graphene oxide could act as perfect molecular sieves when immersed in water, blocking all molecules or ions with a hydrated size larger than 9 Å. This new result, from researchers at the University of Manchester in the UK, means that the laminated nanostructures might be ideal for water filtration and desalination applications.

Graphene is a sheet of carbon just one atom thick in which the atoms are arranged in a honeycomb lattice. Graphene oxide is like ordinary graphene but is decorated with oxygen-containing chemical groups, such as hydroxyls. Graphene-oxide sheets can easily be stacked on top of each other to form extremely thin but mechanically strong membranes. These membranes consist of millions of small flakes of graphene oxide with nanosized empty channels (or capillaries) between the flakes.

Two years ago, a team of researchers led by Andre Geim – who was first to isolate graphene in 2004 – found that graphene-oxide membranes are impermeable to all gases and vapours except water, which passes through a film of graphene oxide extremely fast. Even helium, which is notoriously difficult to block, cannot get through the membranes – yet water vapour passes so quickly that it is as if the membranes were not even there. This happens because the graphene-oxide sheets are arranged in such a way that there is room for only one layer of water molecules between them. In the absence of water, however, the capillaries shrink and let nothing through, making the material impermeable to everything but water.

Now, Geim’s team has found that when the membranes are immersed in water, as opposed to just being exposed to water vapour or ambient humidity, they appear to swell slightly and are able to block all molecules or ions with a hydrated size larger than 9 Å. (A hydrated sugar molecule, for example, has a diameter of 10 Å.) What is more, the membranes are able to distinguish between atomic species that differ in size by only a few per cent. In addition, ions that are smaller than 9 Å across can pass through the membranes 1000 times faster than is expected by simple diffusion processes alone.
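The sieving behaviour amounts to a simple size cutoff on hydrated species. A toy sketch of that logic – the hydrated diameters below are rough, illustrative values, not figures from the paper (the article itself gives only the ~10 Å sugar example):

```python
# Toy size-exclusion sieve: a species passes only if its hydrated diameter
# is below the ~9 angstrom cutoff reported for immersed graphene-oxide membranes
CUTOFF_ANGSTROM = 9.0

# Approximate hydrated diameters in angstroms (illustrative values)
hydrated_diameters = {
    "K+": 6.6,
    "Na+": 7.2,
    "Mg2+": 8.6,
    "sucrose": 10.0,  # the article notes a hydrated sugar molecule is ~10 angstroms
}

passed = [s for s, d in hydrated_diameters.items() if d < CUTOFF_ANGSTROM]
blocked = [s for s, d in hydrated_diameters.items() if d >= CUTOFF_ANGSTROM]
print("passes:", passed)    # small hydrated ions slip through the capillaries
print("blocked:", blocked)  # larger species such as sugars are excluded
```

What the toy model cannot capture is the “ion sponging” effect described below the cutoff: small ions pass far faster than simple diffusion through such channels would predict.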

“Ion sponging”

“We believe that this last phenomenon is thanks to another exceptional property of graphene-oxide membranes that we have called ‘ion sponging’,” says co-team-leader Rahul Nair. “The capillaries between the individual graphene-oxide flakes appear to act rather like powerful little vacuum cleaners that ‘suck up’ small ions.”

The Manchester researchers produced their membranes by stacking thousands of individual layers of graphene oxide on top of each other using simple techniques such as vacuum filtration and spray-coating. The graphene-oxide sheets are separated from each other by about 6–7 Å when dry; but when the sheets are immersed in water, this separation increases to about 11–12 Å, because not one but two layers of water are lodged between the sheets. “Water layers formed inside this ultra-narrow graphene capillary can move very freely, something that helps ions and molecules that are smaller than the size of the capillary itself to pass through,” explains Nair.

Decontamination and desalination applications

According to the team, the membranes could be ideal for recovering valuable salts and small molecules from solutions contaminated with larger molecules – for example during oil spills. “More importantly, our work shows that if we were able to further control the capillary size below 9 Å, we should be able to use these membranes to filter and desalinate water,” says Nair.

Indeed, the team says that it is now busy looking at ways to control the mesh size of the graphene oxide and reduce it to about 6 Å so that the membranes can filter out even the smallest salts in sea water. “We might achieve this by preventing the graphene-oxide laminates from swelling when they are placed in water,” says Nair.

“Our ultimate goal would be to make a filter device from the carbon-based material that allows you to obtain a glass of drinkable water from sea water using a hand-held mechanical pump,” adds team member Irina Grigorieva.

Take a look at the video below, filmed at the University of Manchester lab, where the researchers talk more about the idea of using graphene to produce drinking water.

The current work is detailed in Science.

A question of responsibility

By Margaret Harris in Chicago

The first 45 minutes of Amy Smithson’s talk here at the 2014 AAAS meeting were interesting but not especially controversial. Smithson, a senior fellow at the Center for Nonproliferation Studies in Washington, DC, began by speaking about her role in combating the spread of nuclear, chemical and biological weapons over the past two decades, and how this made her persona non grata for both conservative Republicans and the Clinton White House during the 1990s. After drawing parallels between Iraq in the late 1980s and Syria today, she outlined some of the tactics that “bad guys” like Saddam Hussein and Bashar al-Assad have used to circumvent international weapons treaties and delay their enforcement.

At that point, Smithson changed tack. Warning that she was about to become “the skunk at the party”, Smithson turned her fire on the scientific community. Policy-makers, she observed, can’t make weapons of mass destruction on their own. For that, they need scientists, and over the past 60 years, “hundreds of thousands of scientists” have obliged by working on nuclear, chemical or biological weapons.


Bad weather? Blame Santa

By Margaret Harris in Chicago

If you’re fed up with floods in England, sick of snow in the US or mystified by mild temperatures in Scandinavia, blame it on Santa Claus. That’s the message coming from atmospheric scientist Jennifer Francis, whose “Santa’s revenge” hypothesis suggests that the weather weirdness that we’re currently seeing at middle latitudes could be linked to recent warming in the Arctic.

Francis’ theory begins with the polar jet stream, the high-altitude “river of air” that flows over parts of the northern hemisphere. This jet stream owes its existence to the temperature differential between the Arctic region and middle latitudes: because warm air expands, that temperature differential produces a “hill” of air with (for example) England at the top and Greenland at the bottom. The Earth’s rotation means that air doesn’t flow straight down this hill; instead, it curves around, producing the west–east flow seen in animations like the one in this video from the NASA Goddard Science Visualization Studio.
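The mechanism described above – air deflected by the Earth’s rotation as it flows down the height “hill” – can be put on a rough quantitative footing using the standard geostrophic-balance relation. The sketch below is not from the article: the latitude, the size of the height drop and the north–south distance are all assumed, purely illustrative numbers chosen to give an order-of-magnitude jet speed.

```python
import math

# Back-of-envelope estimate of the west-east geostrophic wind produced
# when air flowing "downhill" toward the pole is deflected by rotation.
# All input numbers are illustrative assumptions, not values from the talk.

OMEGA = 7.2921e-5   # Earth's rotation rate (rad/s)
g = 9.81            # gravitational acceleration (m/s^2)

latitude_deg = 55.0  # assumed: a typical polar-jet latitude
f = 2 * OMEGA * math.sin(math.radians(latitude_deg))  # Coriolis parameter

# Assume the pressure-surface height drops ~300 m over ~2000 km poleward
delta_height = 300.0  # m (assumed height drop across the "hill")
distance = 2.0e6      # m (assumed north-south distance)

# Geostrophic balance: u = -(g/f) * dZ/dy.  A poleward height drop
# (dZ/dy < 0) yields a positive, i.e. eastward, wind - the jet stream.
u_geostrophic = -(g / f) * (-delta_height / distance)

print(f"Coriolis parameter f = {f:.2e} 1/s")
print(f"Eastward wind speed ~ {u_geostrophic:.0f} m/s")
```

With these assumed numbers the balance gives an eastward flow of order 10 m/s; a steeper “hill” (a larger Arctic–mid-latitude contrast) gives a proportionally faster jet, which is why a warming Arctic, by flattening the hill, can weaken and meander the flow.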


Burning the midnight gas

The environmental risks of shale-gas production are real, but the things people worry about most aren’t necessarily the ones that cause the most damage. That was the message of this morning’s AAAS symposium on “Hydraulic Fracturing: Science, Technology, Myths and Challenges”, which featured talks on the social implications of hydraulic fracturing as well as the risks of water and air contamination.

Hydraulic fracturing, or “fracking”, involves drilling a well and injecting a high-pressure mixture of water, sand and other chemicals. The high pressure causes nearby rock formations to fracture, releasing trapped oil and gas. According to the first speaker, energy consultant David Alleman, fracking and horizontal drilling have “revolutionized the energy picture in the US”: a few years ago the country imported 60% of the oil it consumed, but today the figure is just 30%.

The fracking revolution has, however, generated costs as well as benefits. As one of the later speakers, Michael Webber, put it, “Shale production has environmental risks, and most of them are water-related.” Fracking consumes, on average, 5–10 litres of water for every litre of oil produced – about twice as much as conventional oil production – and the water that returns to the surface during the “flowback” stage of well production is often contaminated with natural radiation from deep rock formations, as well as added chemicals. In Webber’s view, the risks associated with this returned water are greater than the risks of groundwater contamination during the fracking stage, even though most public attention has focused on the latter.
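Webber’s water figures scale linearly with oil output, so they are easy to turn into concrete volumes. The sketch below is only arithmetic on the quoted 5–10 litres-per-litre range and the “about twice conventional” comparison; the assumed production volume is a hypothetical number for illustration.

```python
# Illustrative water-demand arithmetic.  The 5-10 L/L range and the
# roughly 2x-conventional factor are from the talk quoted above; the
# oil volume is an assumption chosen for the example.

water_per_litre_oil = (5.0, 10.0)  # litres of water per litre of fracked oil
conventional_factor = 0.5          # conventional wells use about half as much

oil_produced_litres = 1.0e6        # assumed: one million litres of oil

low = water_per_litre_oil[0] * oil_produced_litres
high = water_per_litre_oil[1] * oil_produced_litres

print(f"Fracked well:      {low:,.0f}-{high:,.0f} litres of water")
print(f"Conventional well: {low * conventional_factor:,.0f}-"
      f"{high * conventional_factor:,.0f} litres of water")
```

For a million litres of oil, that is 5–10 million litres of water per fracked well – the scale that makes treating and reusing flowback water attractive in drought-hit regions.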

Treating and reusing the fracking fluid would, of course, reduce both the  amount of waste generated and the overall amount of water required – a big deal in the US, where many intensively fracked regions are currently experiencing severe drought. The treatment process would be energy-intensive, Webber acknowledged, but some well-heads have more energy than they can handle: fully one-third of North Dakota’s fracked gas is flared off at the well-heads because it can’t be transported cheaply, and light from the flames can be seen from space (see photo above). If instead that gas were used to power regional water treatment centres, air and water pollution would both drop. The bottom line, Webber concluded, is that constraints on water and energy are coupled – you need water to get energy, but it also takes energy to clean up the water afterwards.

Cruise-ship physics, the many ways to tie a tie, shaken-up carbon dating and more

By Tushna Commissariat

If you like piña coladas and quantum mechanics, then we hope you are currently on the two-week “Bright Horizons 19” Southeast Asia cruise, because physicist and writer Sean Carroll is on board. Over the next 15 days he will be giving multiple lectures on everything from the Higgs boson to dark matter and other fundamentals of physics. Also floating along with Carroll are other lecturers who will cover topics from natural history to genetics to military strategy. If, like us, you are stuck at home, you can take a look at Carroll’s slides on his blog – maybe have a cocktail while you are at it.


Copyright © 2026 by IOP Publishing Ltd and individual contributors