
Smaller fusion reactors could deliver big gains


Researchers from the UK firm Tokamak Energy say that future fusion reactors could be made much smaller than previously envisaged – yet still deliver the same energy output. That claim is based on calculations showing that the fusion power gain – the ratio of the power produced by a fusion reactor to the power required to maintain its plasma in steady state – does not depend strongly on the size of the reactor. The company’s finding goes against conventional thinking, which holds that a large power output is only possible by building bigger fusion reactors.

The largest fusion reactor currently under construction is the €16bn ITER facility in Cadarache, France. This will weigh about 23,000 tonnes when completed in the coming decade and consist of a deuterium–tritium plasma held in a 60 m-tall, doughnut-shaped “tokamak”. ITER aims to produce a fusion power gain (Q) of 10, meaning that, in theory, the reactor will emit 10 times the power it expends by producing 500 MW from 50 MW of input power. While ITER has a “major” plasma radius of 6.21 m, it is thought that an actual future fusion power plant delivering power to the grid would need a 9 m radius to generate 1 GW.
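As a sanity check on those numbers, the gain Q is simply the ratio of fusion power out to heating power in. A minimal sketch (the function name is ours):

```python
def fusion_gain(p_fusion_mw, p_input_mw):
    """Fusion power gain Q: ratio of the fusion power produced
    to the external heating power supplied to the plasma."""
    return p_fusion_mw / p_input_mw

# ITER's design target: 500 MW of fusion power from 50 MW of input
print(fusion_gain(500, 50))  # → 10.0
```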

Low power brings high performance

The new study, led by Alan Costley from Tokamak Energy, which builds compact tokamaks, shows that smaller, lower-power, and therefore lower-cost reactors could still deliver a value of Q similar to ITER. The work focused on a key parameter in determining plasma performance called the plasma “beta”, which is the ratio of the plasma pressure to the magnetic pressure. By using scaling expressions consistent with existing experiments, the researchers show that the power needed for high fusion performance can be three or four times lower than previously thought.
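The plasma beta can be written down directly: it compares the plasma pressure with the magnetic pressure B²/(2μ0). A minimal sketch with purely illustrative numbers (the 3 T field and 10⁵ Pa plasma pressure below are our assumptions, not values from the study):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability (T·m/A)

def plasma_beta(plasma_pressure_pa, b_field_t):
    """Plasma beta: ratio of the plasma pressure to the
    magnetic pressure B^2 / (2 * mu_0)."""
    magnetic_pressure = b_field_t**2 / (2 * MU_0)
    return plasma_pressure_pa / magnetic_pressure

# Illustrative only: 1e5 Pa plasma pressure in a 3 T field
print(round(plasma_beta(1e5, 3.0), 4))  # → 0.0279
```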

Combined with the finding on the size-dependence of Q, these results imply the possibility of building lower-power, smaller and cheaper pilot plants and reactors. “The consequence of beta-independent scaling is that tokamaks could be much smaller, but still have a high power gain,” David Kingham, Tokamak Energy chief executive, told Physics World.

The researchers propose that a reactor with a radius of just 1.35 m would be able to generate 180 MW, with a Q of 5. This would result in a reactor just 1/20th of the size of ITER. “Although there are still engineering challenges to overcome, this result is underpinned by good science,” says Kingham. “We hope that this work will attract further investment in fusion energy.”

Many challenges remain

Howard Wilson, director of the York Plasma Institute at the University of York in the UK, points out, however, that the result relies on being able to achieve a very high magnetic field. “We have long been aware that a high magnetic field enables compact fusion devices – the breakthrough would be in discovering how to create such high magnetic fields in the tokamak,” he says. “A compact fusion device may indeed be possible, provided one can achieve high confinement of the fuel, demonstrate efficient current drive in the plasma, exhaust the heat and particles effectively without damaging material surfaces, and create the necessary high magnetic fields.”

The work by Tokamak Energy follows an announcement late last year that the US firm Lockheed Martin plans to build a “truck-sized” compact fusion reactor by 2019 that would be capable of delivering 100 MW. However, the latest results from Tokamak Energy might not be such bad news for ITER. Kingham adds that his firm’s work means that, in principle, ITER is actually being built much larger than necessary – and so should outperform its Q target of 10.

The research is published in Nuclear Fusion.

Coming soon(ish) to a galaxy near you

Image of Smith's Cloud taken by the Green Bank Telescope.

By Margaret Harris at the AAAS meeting in San Jose

A giant cloud of hydrogen gas is barrelling towards the Milky Way faster than the speed of sound, and dark matter may hold it together long enough to produce a spectacular outburst of new stars in the night sky – but not for another 30 million years.

The cloud – which is known as Smith’s Cloud after Gail Bieger-Smith, who discovered it as an astronomy student in 1963 – is one of several starless blobs of hydrogen known to exist in the space between galaxies. According to Felix “Jay” Lockman, principal scientist at the US National Radio Astronomy Observatory’s Green Bank Telescope, such gas clouds are, in effect, “construction debris” left over from an earlier age of galaxy formation. “These are parts for remodelling your house that didn’t arrive by the time the contractor left,” Lockman told an audience at the 2015 AAAS meeting in San Jose, California.


Making the invisible visible: the potential of X-ray phase-contrast imaging

“I was completely blown away,” says Alessandro Olivo of University College London (UCL) in the UK, recalling the fine detail of a wasp revealed in his first X-ray phase-contrast image. Then working at the University of Trieste in Italy, Olivo and colleagues took the image in 1997 at Elettra, a synchrotron on the outskirts of the city. The research was part of a global resurgence of interest in X-ray phase-contrast imaging (XPCi) in the 1990s led by Japanese pioneer Atsushi Momose.

The first XPCi image was acquired by Ulrich Bonse and Michael Hart at Cornell University in New York in 1965. The renewal in interest was motivated by the technique’s potential to improve on conventional X-ray images, in medicine as well as in other applications.

X-rays were introduced to hospitals shortly after their discovery by Wilhelm Röntgen in 1895, and their ability to peer non-invasively inside a patient rapidly made them indispensable. Today, 3D computed tomography (CT) and real-time fluoroscopy (which uses X-rays to obtain moving images of a patient’s internal anatomy) have also become invaluable. The underlying physical principle – creating images based on the fact that different materials in the body attenuate X-rays to different extents – has remained unchanged for 120 years. However, this approach hides subtle composition fluctuations in organs and other soft tissues. Refraction at an interface – a manifestation of phase change – is a significantly more sensitive indicator of composition: the resulting variations in the X-rays’ speed, and hence their phase, are up to 1000 times greater than the changes in attenuation.
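Attenuation-based imaging follows the Beer–Lambert law, I = I₀·exp(−μx). A quick sketch with illustrative (not clinical) attenuation coefficients shows why a 1% difference between two soft tissues is so hard to see in a conventional image:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law: fraction of X-ray intensity transmitted
    through a material with linear attenuation coefficient mu."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative coefficients: two tissues differing by only 1% in
# attenuation produce barely distinguishable transmitted signals
t1 = transmitted_fraction(0.20, 5.0)
t2 = transmitted_fraction(0.202, 5.0)
print(round(t1, 4), round(t2, 4))  # → 0.3679 0.3642
```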

Better images and lower doses

The potential implications for medicine of the XPCi technique are profound. It could improve the detection and characterization of abnormalities. Alternatively, increased image sensitivity could be traded for shorter exposures and lower radiation doses, reducing a small but finite risk of radiation-induced cancer.


In imaging the breast – a radiosensitive soft-tissue structure – mammography arguably has the most to gain from XPCi and is attracting a lot of attention from researchers. Lower doses in cancer screening and improvements in image quality could have large health and economic benefits, particularly for the majority of people who do not have cancer. Currently, absorption-based mammography returns around nine false-positive results for every tumour correctly identified, resulting in unnecessary and invasive follow-up investigations.

The simplest XPCi technique is free-space propagation. It involves firing a highly coherent beam through an object, which shifts the beam’s phase and refracts it by angles of the order of 1 μrad – equivalent to a 1 mm deflection 1 km from the object. X-rays passing just beside the object provide an unshifted reference that combines with the refracted X-rays to generate an interference pattern. Unlike absorption-based imaging, where the detector is placed directly behind the object, the detector recording the pattern is placed between tens of centimetres and several metres downstream, where the interference pattern can be resolved.
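The quoted deflection figure is simple small-angle geometry – displacement equals angle times distance – and can be checked directly (units and function name are ours):

```python
def lateral_shift_mm(deflection_urad, distance_m):
    """Lateral displacement of a refracted ray after travelling
    distance_m, in the small-angle approximation
    (shift = angle x distance)."""
    angle_rad = deflection_urad * 1e-6
    return angle_rad * distance_m * 1000.0  # metres -> millimetres

print(round(lateral_shift_mm(1, 1000), 9))  # 1 urad over 1 km -> 1.0 mm
print(round(lateral_shift_mm(1, 1), 9))     # 1 urad over 1 m  -> 0.001 mm
```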

Highly coherent, intense and energy-tunable synchrotrons provide perfect beams for free-space propagation and other XPCi techniques. Starting in 2006, researchers co-led by Renata Longo of the University of Trieste undertook the first clinical XPCi synchrotron study. Using free-space propagation with monochromatic 17–22 keV X-rays at Elettra, they imaged 71 women for whom conventional mammography and ultrasound returned inconclusive findings.

Despite the tough challenge the cohort posed, the images were better than conventional mammograms and doses were comparable or lower. Excitingly, XPCi also cleared 17 individuals subsequently confirmed as tumour-free where absorption imaging had failed. “We have a technique that has the potential to decrease the number of healthy people undergoing further examinations,” says Longo.

From synchrotron to lab

However, before all patients can benefit from XPCi, compact, robust and affordable techniques are needed. The only commercial system, which was launched by Japanese company Konica Minolta in 2005, has so far had limited impact, says Olivo. A partially coherent X-ray source combined with a free-space propagation approach restricted improvements in image contrast over conventional X-ray images.

An alternative approach that sidesteps the requirement for beam coherence is edge illumination, which measures how much X-rays get deflected when refracted by an object. Developed by Olivo, the method uses two “masks”: one in front of the X-ray source and another in front of the detector. Each comprises a set of apertures with a period – or slit spacing – of around 70–80 μm, the first creating a set of beamlets.

During a reference scan with no object present, the masks are aligned so that only half of each beamlet passes through an aperture onto a detector pixel. When an object is introduced, more or less of the beamlet hits the pixel, depending on its deflection. The measured intensity change can be used to derive the phase change.
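In a much-simplified one-dimensional picture of edge illumination – a uniform beamlet, half of which passes the detector mask, with the intensity change linear in the beamlet shift – the deflection angle can be recovered from the measured intensities. All numbers below are hypothetical:

```python
def retrieve_deflection_urad(i_sample, i_reference, aperture_um, distance_m):
    """Toy 1D edge-illumination model: with the masks aligned so that
    half of each beamlet reaches the pixel, a refraction-induced shift
    changes the transmitted fraction linearly. Invert that relation to
    recover the deflection angle in microradians."""
    fraction = i_sample / i_reference        # measured transmitted fraction
    dx_um = (fraction - 0.5) * aperture_um   # lateral beamlet shift at the mask
    return dx_um * 1e-6 / distance_m * 1e6   # shift/distance, in urad

# Hypothetical readings: a 2% intensity change, a 20 um aperture,
# and a sample-to-mask distance of 1 m
print(round(retrieve_deflection_urad(0.52, 1.0, 20.0, 1.0), 3))  # → 0.4
```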

The latest mammography prototype at UCL uses a powerful polychromatic X-ray source that enables quick exposures and results in breast doses comparable to those in clinical practice. The masks are easy to make in larger sizes, enabling extended fields of view. Imaging excised breast tissue, the system has detected features invisible in absorption images and increased the contrast of “microcalcifications” – tiny deposits that can indicate cancer – by up to a factor of nine.

For manufacturers, a potential limitation is that the devices have to be quite tall: the deflection angle is fixed at the sample, so the further the X-rays travel beyond the patient, the larger the displacement they accumulate at the detector and the more sensitive the system is. In fact, standing 1.5 m high, the UCL system may be too large to be incorporated into existing commercial systems.

Another technique – grating-based interferometry – involves placing a diffraction grating in front of a regular polychromatic X-ray tube to increase coherence. Originally developed at synchrotrons, the approach was first demonstrated with a tube source by Franz Pfeiffer and colleagues at the Paul Scherrer Institute (PSI) in Villigen, Switzerland. Pfeiffer now leads a group at the Technical University of Munich (TUM) in Germany.

X-rays transmitted through an object pass directly through a second grating that creates a diffraction pattern downstream at a specific distance. A third “analyser” grating in front of a detector interrogates the pattern. The phase shifts can be deduced by comparing the intensities with a reference scan with the object absent.
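One standard way to make that comparison is “phase stepping”: scanning the analyser grating across the pattern so that each pixel traces out a sinusoid whose phase encodes the shift. A sketch of the retrieval on synthetic data (a generic illustration, not any particular group’s actual pipeline):

```python
import math

def stepping_phase(intensities):
    """Recover the phase of a sinusoidal phase-stepping curve
    I_k = a + b*cos(2*pi*k/N + phi), sampled at N evenly spaced
    grating positions, from its first Fourier component."""
    n = len(intensities)
    re = sum(i * math.cos(2 * math.pi * k / n) for k, i in enumerate(intensities))
    im = sum(i * math.sin(2 * math.pi * k / n) for k, i in enumerate(intensities))
    return math.atan2(-im, re)

# Synthetic pixel: 8 steps, mean 10, amplitude 3, true phase 0.7 rad
true_phi = 0.7
curve = [10 + 3 * math.cos(2 * math.pi * k / 8 + true_phi) for k in range(8)]
print(round(stepping_phase(curve), 3))  # → 0.7
```

In a real measurement the phase retrieved with the object in place is compared pixel-by-pixel with the reference scan; the difference map is the phase image.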

Imaging challenges

Researchers are tackling several issues that dictate the performance of grating-based XPCi. In general, the X-rays needed to image structures deep in the body are several times more energetic than those that have been used to date in XPCi studies of the breast, tissue samples and laboratory animals. Higher energies require gratings thick enough to absorb X-rays incident between slits that would otherwise degrade the diffraction pattern. However, high-sensitivity imaging requires a small period and fabricating a stable grating with both properties is difficult.

Several groups are beginning to overcome this challenge. Using gratings made by researchers at the Karlsruhe Institute of Technology in Germany, a collaboration led by Pfeiffer has acquired 82 keV and 133 keV images – the highest to date – at the European Synchrotron Radiation Facility in Grenoble, France. “Grating fabrication has improved such that gratings with high aspect ratios – structures up to 150 μm high with very small periods down to 2.4 μm – are now possible,” says Julia Herzen, who is leading research at TUM that aims to translate advances in the lab into clinical applications.

By absorbing X-rays, gratings also reduce the number of photons reaching the detector and are a complicating factor in the trade-off between image sensitivity and dose. The challenge is to identify the sweet spot that improves contrast beyond that of absorption images, yet keeps patient doses within prescribed limits.


With simultaneous absorption, phase-contrast and dark-field images of excised breast tissue, a Swiss Federal Institute of Technology Zurich and PSI group led by Marco Stampanoni is using grating-based imaging to provide further evidence of the clinical potential of phase measurements. In a first for the technique, the researchers demonstrated significant improvements in image quality in the lab.

Dark-field imaging is a variation of the phase-contrast principle and is a hot topic in the XPCi community. Instead of detecting phase shifts that highlight boundaries between macroscopic objects, it detects ultra-small-angle scattering by microscopic structures. Pioneered in the early 2000s, dark-field imaging can be achieved with the same pixelated detectors used for XPCi. “This is a very important new signal because you can get information from structures that are much smaller than pixels in a coarse detector,” says Stampanoni.

In a separate study, the group has taken the first steps towards a quantitative measure that could help assess the risk of cancerous or precancerous cells in clusters of microcalcifications in the breast. Analysing microcalcifications in mastectomy samples, the researchers found that some scattered a lot yet absorbed few X-rays, while others exhibited the opposite characteristics. Stampanoni’s team hypothesizes that the ratio of the two signals could discriminate between two types of microcalcification: one rarely associated with cancer and another commonly found in so-called proliferative lesions that include the most aggressive breast cancers.
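The proposed discriminator is just the ratio of the dark-field (scattering) signal to the absorption signal. The measurement values below, and the idea of comparing them directly, are purely hypothetical illustrations:

```python
def scatter_absorption_ratio(dark_field, absorption):
    """Ratio of the dark-field (scattering) signal to the
    absorption signal for a microcalcification."""
    return dark_field / absorption

# Hypothetical measurements: type A scatters strongly but absorbs
# little, type B the reverse
type_a = scatter_absorption_ratio(dark_field=0.8, absorption=0.1)  # 8.0
type_b = scatter_absorption_ratio(dark_field=0.1, absorption=0.8)  # 0.125
print(type_a > type_b)  # → True: the ratio separates the two types
```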

However, more work is needed. As all of the tissue came from patients with breast cancer sufficiently advanced to require mastectomies, the sample was biased. “We have an indication where this threshold could be, but we need to have much stronger statistical evidence,” says Stampanoni.

From the lab to the hospital

The only company with an imager in a hospital is Konica Minolta, which is working with researchers at Japanese universities, including Momose. In a simple application, its second, grating-based prototype system has imaged the finger joints of volunteers in 2D, using a small field of view and a mean beam energy of 28 keV, with each image taking tens of seconds to acquire. There are plans to adapt the system for mammography.

One long-term goal for the XPCi community is 3D imaging in patients. A large Italian consortium co-led by Longo is planning the first patient scans in a second breast-imaging study at Elettra, due to start by the end of 2016. With the ideal conditions that the synchrotron provides, the study will give a performance benchmark for phase-contrast breast CT. Like conventional CT, the technique requires multiple projections taken over 360° around the patient. Achieving this without a synchrotron – within non-negotiable dose limits and, in particular, in short exposures – is an even tougher challenge than for 2D imaging.

While several groups have links with companies, aside from the newer Konica Minolta prototype there is no 2D or 3D commercial imager on the horizon, making it hard to predict when patients might start benefiting from XPCi. Funding is a big issue: XPCi has reached a stage where it is no longer eligible for blue-sky funding from research councils, yet it is not advanced enough to attract serious financial support from business. As a result, says Olivo, groups might struggle to find the money needed to push XPCi into the clinic. “You can make anything work if you throw enough money at it, once you have the proper principle,” he says. “I cannot build a perpetual motion machine, but I think I can build a phase-contrast imager!”

Beyond mammography


Researchers around the world are using phase-contrast X-ray imaging to study many different diseases and areas of the body, including cartilage, bone, lungs, blood vessels and kidneys, typically using animals and tissue samples because human scanners are still under development.

In the lungs, the interface between air and soft tissue provides the strongest phase gradient in the body. Potential applications include diagnosing and monitoring conditions that impair lung function, like emphysema and cystic fibrosis. With multiple overlapping airways, the lungs appear in phase-contrast images as speckled masses that are hard to read by eye, and research has focused on quantitative analyses.

Examining absorption and dark-field signals in the lungs of live mice using a miniature CT scanner developed in-house, researchers at the Technical University of Munich measured dramatically lower scattering signals in emphysematous lungs, while conventional imaging showed little change.

Marcus Kitchen’s group at Monash University in Australia, meanwhile, has developed techniques that quantify lung function by analysing the speckle texture. “Our technique uses a statistical approach to say what is the average size and number of the airways that are contributing to the speckle pattern,” he explains.

Using animal models, the researchers have employed XPCi with great success to identify optimal approaches for resuscitating newborn babies. Their findings have resulted in changes to clinical practice around the world.

Also at Monash, Kaye Morgan and colleagues were the first to measure the depth of fluid lining the airways in live mice, using a high-speed XPCi technique they developed. Thinner fluid layers in people with cystic fibrosis make them more prone to infection, and the researchers are using the technique to investigate new therapies. On the physics side, the group is working on approaches that could enable monitoring of individual patients. “It’s definitely a long-term goal,” says Morgan.

A clinical take on molecular imaging

Molecular imaging – the in vivo visualization of cellular processes – is used extensively within preclinical research. Positron emission tomography (PET), single photon emission computed tomography (SPECT), fluorescence imaging and other techniques with a high sensitivity can all non-invasively image molecular targets in living animals, shedding light on the origins and mechanisms of disease, and enabling potential new therapies to be tested over time.

But can these advantages be translated from preclinical studies to a clinical setting? “We’ve been using imaging in preclinical modes for two decades – it’s now time to take what we’ve learned and translate it to the clinic,” says Christopher Contag, principal investigator of the Molecular Biophotonics and Imaging Laboratory at Stanford University in the US.


Making an early diagnosis of diseases like cancer is vital for improving treatment outcomes. Almost all diseases begin with alterations at the cellular level, and detecting these earliest changes calls for imaging techniques with both high sensitivity and high resolution.

Unfortunately, most of today’s diagnostic tools can find only relatively large lesions. For example, magnetic resonance imaging (MRI) and X-ray computed tomography (CT) can detect tumours of approximately 1 cm³ in volume, equivalent to about one billion cells, while methods based on blood tests can detect about one million cells. Optical-imaging techniques such as fluorescence microscopy, on the other hand, can detect lesions as small as 0.001 mm³ – equivalent to about 1000 cells. “To improve early diagnosis, we need tools that can detect microscopic lesions,” Contag says.
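Those detection limits all follow from a single rule of thumb – roughly 10⁹ cells per cm³ of tissue – as a quick sketch confirms:

```python
CELLS_PER_CM3 = 1e9  # order-of-magnitude figure quoted above

def detectable_cells(volume_mm3):
    """Approximate cell count in a lesion of the given volume,
    assuming ~1e9 cells per cm^3 (1 cm^3 = 1000 mm^3)."""
    return volume_mm3 / 1000.0 * CELLS_PER_CM3

print(detectable_cells(1000.0))        # MRI/CT limit: ~1 cm^3 of tissue
print(round(detectable_cells(0.001)))  # optical limit: ~1000 cells
```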

Point-of-care diagnosis

So could optical techniques provide this desired early detection? Optical imaging certainly offers the necessary high resolution, but light cannot penetrate far into the body, which is a problem. While MRI and CT scan through the entire body, techniques such as white-light endoscopy can only image up to 1 cm beneath the surface of the body. Confocal microscopy, meanwhile, offers 1 μm resolution at depths of just 100 μm to 1 mm.

It should be possible, however, to exploit existing methods such as endoscopy to place the optical-imaging tools directly at the tissue surface and visualize early changes with microscopic resolution. This approach could shift diagnostic methods from “biopsy followed by histopathology” to non-invasive “point-of-care” diagnosis. “We’d like to provide the pathologist with an in vivo image that looks a lot like a stained-tissue image,” Contag explains.

To do this, the Stanford team has developed a miniaturized dual-axis confocal fluorescence microscope with a MEMS (microelectromechanical systems)-based scanning core. The researchers showed that the microscope, which is small enough to fit inside the instrument channel of a standard endoscope, could image the junction between Barrett’s oesophagus (a condition that increases the risk of oesophageal cancer) and normal cells. This junction is currently detected by taking biopsies in random suspect areas; in vivo imaging could instead accurately reveal where best to biopsy.

The team is now testing the miniaturized microscopes in the clinic to image oesophageal and colon cancer, using topical indocyanine green (ICG) as a contrast agent. ICG acts as both a colorimetric and a fluorescent contrast, enabling simultaneous macroscopic viewing and microscopic imaging with 3–5 μm resolution.

Contag and colleagues have made several modifications to the original microscope design, including stitching together overlapping image frames in real time to increase the field of view. They have also redesigned the device to perform 3D imaging by using two MEMS mirrors to scan in the xy and z directions – enabling scanning of a 300 μm³ volume in a second or two.

Fluorescence imaging is ideal for detecting multiple probes, each emitting light at a different wavelength, at the same time. This “multiplexed” approach can provide information on both the molecular content and environment of the tissue, and enable endoscopists to rapidly distinguish between normal and precancerous tissues. The Stanford team demonstrated multispectral functionality of the dual-axis microscope by imaging multiple molecular probes that emit in the 500–800 nm range.

Fluorescence imaging enables multiplexing with four or five different fluorophores, but as this number increases it becomes more likely that the signal from one fluorescent dye will overlap with adjacent channels. Ideally, says Contag, a diagnostic system should use nearer to a dozen different channels. To achieve this, the researchers turned their attention to surface-enhanced Raman spectroscopy (SERS) using functionalized SERS nanoparticles as molecular-imaging contrast agents.


These nanoparticles have a layer of Raman-active molecules adsorbed onto a gold core (which acts to significantly enhance the Raman signal) and are coated with a silica layer functionalized with a variety of tumour-targeting agents. By using different Raman-active molecules, each nanoparticle type generates a unique Raman spectral signature, enabling high levels of multiplexing.

To perform in vivo SERS, Contag’s team needed to miniaturize a Raman microscope to create an endoscopic probe. Designing a device for colon-cancer screening, the researchers developed a non-contact, 5.3 mm diameter Raman scanning probe that can scan the entire colon in less than 10 minutes. The probe’s rotating fibre-bundle tip contains an illumination fibre surrounded by 36 collection fibres that direct the Raman signals onto a CCD camera. The spectra are then resolved into individual Raman fingerprints.

For clinical application, the scanning procedure will involve using the endoscope to spray tumour-targeted SERS nanoparticles onto the area of interest during endoscopy. Any unbound particles are washed off prior to scanning and spectral analysis can be performed in real time to determine the presence of any pathological conditions.

In tests on a hollow 5 cm cylindrical phantom (a tissue substitute), the scanning Raman probe could detect particles of different types placed on its inside surface. In further tests on human colon tissue, the device could accurately identify 10 different types of SERS nanoparticle applied to the tissue samples. Furthermore, using known reference spectra, the system was able to separate individual Raman spectra from a mixture of particle types.

The team is also investigating the application of real-time “ratiometric” imaging, in which the relative concentrations of two or more nanoparticle types are determined, with one type serving as a non-targeted control and the other(s) used to target relevant cells. By accounting for any non-specific particle accumulation after washing, as well as for variable working distances, this approach improves the specific detection of bound, targeted nanoparticles. Tests on colon-tissue samples showed detection of picomolar concentrations of SERS nanoparticles, with analytical calculations predicting a detection limit of just 56 nanoparticles per cell.
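The idea behind ratiometric imaging can be shown with toy numbers: dividing the targeted channel by the non-targeted control cancels effects, such as non-specific pooling, that scale both channels equally. All values below are hypothetical:

```python
def ratiometric_signal(targeted, control):
    """Ratiometric imaging: divide the targeted-particle signal by a
    co-administered non-targeted control to cancel non-specific
    accumulation and working-distance effects."""
    return targeted / control

# Hypothetical spot where non-specific pooling doubles both channels:
# the raw targeted signal is misleading, but the ratio is unchanged
print(ratiometric_signal(4.0, 2.0))  # → 2.0
print(ratiometric_signal(2.0, 1.0))  # → 2.0 – same ratio, same conclusion
```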

Contag notes that the nanoparticles have not been tested in humans yet, but says that discussions to do so are ongoing with the US Food and Drug Administration. In the meantime, the team is refining its kit further. For example, his team has already introduced a built-in microsurgical tool – a pulsed electron avalanche knife – that can excise a small cylinder of tissue if disease is detected.

The researchers are also continuing to progress their fluorescence-based dual-axis confocal microscope for applications including visualization of circulating tumour cells in the blood. Other developments include extending optical-imaging approaches to other sites, such as additional hollow organs, accessible organs like the skin and oropharynx, and surgically accessed organs such as the prostate, ovaries, breast or brain.

A sight for blind eyes


By Margaret Harris at the AAAS meeting in San Jose

“Restoration of sight to the blind” is a brave claim, one with an almost Biblical ring to it. For Daniel Palanker, though, it is beginning to look like an achievable goal. A medical physicist at Stanford University, Palanker has developed a prosthetic vision system that replaces damaged photoreceptors in the retina with an array of tiny photodiodes. When infrared images are projected onto this array, the photodiodes convert the light pulses into electrical signals, which are then picked up by the neurons behind the retina and transmitted to the brain. The result is an artificially induced visual response that, while not as good as normal vision, could nevertheless provide “highly functional restoration of sight” to people with conditions such as retinitis pigmentosa or age-related macular degeneration (AMD).


Decoding the dark arts of Interstellar’s black hole

A moderately realistic, gravitationally lensed accretion disc

In recent years, science and science fiction have come together in cinema to produce a host of rather spectacular visual treats, the best of the lot being Christopher Nolan’s epic Oscar-nominated film Interstellar. That actual science played a major role in the film is pretty well known, thanks to the involvement of theoretical physicist Kip Thorne, who was an executive producer for the project. But in a near-cinematic plot twist, it has emerged that Thorne’s work on developing the most accurate and realistic view of the supermassive black hole “Gargantua” has provided unprecedented insights into the immense gravitational-lensing effects we would see if we could view such a stellar behemoth.

To produce the awe-inspiring images of the wormhole and Gargantua that audiences across the globe marvelled at late last year, Thorne and a team from the acclaimed London-based visual-effects company Double Negative developed a new computer code dubbed the “Double Negative Gravitational Renderer” (DNGR). They have now published a paper detailing their work in Classical and Quantum Gravity – a journal produced by IOP Publishing, which also publishes physicsworld.com.

Instead of focusing on how individual rays of light would be distorted by the black hole, the code solves equations for how bundles of light rays (light beams) navigate the extremely warped space–time surrounding Gargantua, a spinning Kerr black hole. The team made this change to eliminate strange visual anomalies it saw early on in the work: with the standard approach of tracing one light ray per pixel – which for an IMAX image amounts to some 23 million pixels – distant stars and nebulae flickered and moved rapidly across the screen.

“To get rid of the flickering and produce realistically smooth pictures for the movie, we changed our code in a manner that has never been done before,” says Oliver James, chief scientist at Double Negative, explaining that once the code “was mature and creating the images you see in the movie Interstellar, we realized we had a tool that could easily be adapted for scientific research”.

They also found that the dragged space–time and the lensing mean that an observer or camera would see the accretion disc that surrounds Gargantua wrapped over and under the black hole’s shadow, and that distant stars would move in a complex swirling dance around the hole as the camera orbits it. Indeed, thanks to a curious optical effect known as a “caustic” or “caustic curve”, the images of stars or nebulae can be amplified, split into double or even multiple images, or even cancelled out in a flash of light.

To learn a bit more about these curious caustics (hint: they are actually pretty common – you have seen one in action if you have ever seen a rainbow), to find out how Gargantua was made to bend the rules of physics for a good cause, and to look into the perfect Einstein rings that emerge in the team’s simulations, delve into the paper’s depths. In the meantime, do be sure to read my review of Interstellar (warning: it contains spoilers!) as well as Thorne’s book The Science of Interstellar.

William Blake's graphene sensor, boiling an egg inside out, quantum woo and more

http://youtu.be/ZiOrIUEtp2k

 

By Hamish Johnston

Are you tired of the same old boiled egg staring up at you every morning? Then why not try this simple trick from the Japanese chef Yama Chaahan, who in the video above creates a boiled egg with the yolk on the outside and the white in the middle. There is angular momentum and fluid dynamics involved, and if you don’t understand Japanese, the Huffington Post has a step-by-step guide in English.


Flying high in Baltimore

 

By Susan Curtis in Baltimore, US

After two days of getting to grips with biophysics – see here and here for my experiences – I was ready for a change of scene. And a visit to the Space Telescope Science Institute (STScI), co-located with the Johns Hopkins University in Baltimore but operated on behalf of NASA, was just what I needed.

The STScI is home to many of the scientists and engineers who made the Hubble Space Telescope possible, and who have been working for many years to design the optics and instrumentation for its successor – the James Webb Space Telescope (JWST), which is due to be launched in 2018. The institute also runs the science operations for Hubble and soon will for the JWST, providing software tools for astronomers to make their observations and processing the raw data acquired by the onboard instruments to make it ready for scientific analysis.


Physicists reveal new way of cooling large objects with light

A new technique for cooling a macroscopic object with laser light has been demonstrated by a team of physicists in Germany and Russia. Making clever use of the noise in an optical cavity, which normally heats an object up, the technique could lead to the development of “stable optical springs” that would boost the sensitivity of gravitational-wave detectors. It could also be used to create large quantum-mechanical oscillators for studying the quantum properties of macroscopic objects or to create components for quantum computers.

Physicists already have ways of cooling tiny mirrors by placing them in an optical cavity containing laser light. When the mirror is warm it vibrates, creating a series of “sidebands” that resonate with light at certain frequencies. The first lower sideband has a frequency equal to the resonant frequency of the cavity minus the vibrational frequency of the mirror. When a photon at that frequency enters the cavity, it can be absorbed and re-emitted at the cavity’s resonant frequency, carrying away one quantum of vibrational energy. As a result of this “dispersive coupling” process, energy is removed from the mirror and it cools.

Dispersive coupling works best when the bandwidth of the cavity is much smaller than the vibrational frequency of the mirror. This is possible for relatively small mirrors with vibrational frequencies in the hundreds of megahertz. However, for more massive mirrors with vibrational frequencies in the hundreds of kilohertz, optical cavities with sufficiently narrow bandwidths are simply not available.
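That resolved-sideband condition – cavity linewidth much smaller than the mirror's vibrational frequency – can be sketched with a back-of-the-envelope estimate. The cavity length and finesse below are invented, illustrative numbers, not values from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def cavity_linewidth_hz(length_m, finesse):
    """Cavity linewidth = free spectral range / finesse."""
    free_spectral_range = C / (2.0 * length_m)
    return free_spectral_range / finesse

# A short, high-finesse cavity (hypothetical numbers):
kappa = cavity_linewidth_hz(length_m=0.01, finesse=100_000)  # ~150 kHz

small_mirror = 300e6  # a small mirror vibrating at hundreds of MHz
large_mirror = 136e3  # a large mirror at the 136 kHz used in the experiment

resolved_small = kappa < small_mirror / 10   # sidebands well resolved
resolved_large = kappa < large_mirror / 10   # not resolved: dispersive cooling alone fails
```

Even this optimistic (hypothetical) cavity resolves the sidebands of the fast, small mirror but not those of the slow, massive one – which is why a second cooling mechanism is needed.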

Cooling with noise

In this latest work, a large object was cooled using a new technique that involves “dissipative coupling” as well as dispersive coupling. Dissipative coupling was first proposed in 2009 by Florian Elste and Aashish Clerk of McGill University and Steven Girvin at Yale University. It makes clever use of quantum “shot noise” in laser light, which would normally be absorbed by the mirror and cause it to heat up.

However, if the mirror is in an optical cavity and its motion modulates the cavity’s bandwidth in just the right way, then there are two routes by which the noise can reach the mirror: it can travel directly from the laser, or it can bounce around the cavity before driving the mirror. Just as in an interferometer, noise taking these two paths can interfere destructively or constructively.

Clerk and colleagues realized that the system can be set up so that destructive interference stops this quantum noise from heating the mirror but does not prevent the mirror from losing energy to the noise. The net effect is a strong cooling of the mirror’s motion, which could in principle take the mirror to its quantum ground state. “Unlike standard cavity cooling schemes, this interference doesn’t rely on having a very large mechanical frequency,” explains Clerk – meaning that it can be used to cool large mirrors that have low vibrational frequencies.

Couplings working together

In the latest work, Roman Schnabel and colleagues at the Max Planck Institute for Gravitational Physics in Hannover, Moscow State University and the Leibniz University of Hannover have now shown that dissipative and dispersive coupling can work together to cool relatively large mirrors. Based on an idea first proposed by the researchers in 2013, the technique uses a cavity created by a Michelson–Sagnac interferometer (see figure).

What they have done is to fire laser light at a beamsplitter to create two beams that go off at right angles to each other. These beams then bounce off two mirrors, making their paths form a right-angled triangle. Light from the output port of the interferometer is sent to a “signal-recycling mirror”, or SRM, where some of the light is reflected back into the interferometer and some is sent to a detector. The optical cavity is fine-tuned by adjusting the position of the SRM, while the cavity properties are monitored using a frequency analyser connected to the detector.

The object to be cooled is a silicon-nitride mirror just 40 nm thick, which is placed at the centre of the cavity. The mirror is about 1.2 mm across, weighs 80 ng and has a fundamental vibrational frequency of 136 kHz. The vibrational motion of the mirror changes not only the resonant frequency of the cavity – leading to the emergence of sidebands and dispersive cooling – but also the bandwidth of the cavity. When the rate of change of the bandwidth is large, energy can be exchanged between the cavity and the mirror. By carefully adjusting the phase between the vibrating mirror and the light, energy flows only from the mirror to the cavity, thereby cooling the mirror.

Sub-kelvin cooling

The researchers monitored the temperature of the mirror by using the laser light to measure its motion. They found that by using a combination of dispersive and dissipative cooling, they could cool the mirror from room temperature to 126 mK. Commenting on the experiment, Clerk told physicsworld.com that “Schnabel’s is the first experimental system where you have the special kind of dissipative optomechanical coupling that can let you do something truly new”.

One possible application of the technique is to use it to cool relatively large objects to their quantum ground states of vibration. Such quantum oscillators would comprise billions of atoms and could be used as “Schrödinger’s cats” to study quantum mechanics on a macroscopic scale. Other applications include using such quantum oscillators as components in quantum computers and other quantum-information systems.

Stabilizing an optical spring

However, it is not the cooling power of the technique that most interests Schnabel and colleagues. Schnabel told physicsworld.com that the demonstration is a proof-of-principle of their model of how light interacts with an oscillating mirror within a gravitational-wave detector. Their goal is to create a “stable optical spring” whereby a mirror in a huge interferometer undergoes a stable oscillation when laser light is shone on it. A gravitational wave travelling through the mirror would cause a tiny disruption in the oscillation, which would be detected by the interferometer. The problem is that noise in the system heats the mirror and causes it to vibrate erratically. This makes the measurement extremely difficult in existing set-ups.

“Our goal is to avoid uncontrolled heating of the mirror,” explains Schnabel, who says that the team will now use the model to try to create a stable optical spring using a 100 g pendulum as a mirror in a small interferometer. The ultimate goal of the research is to use a mirror of about 40 kg in the gravitational-wave detectors of the future.

The research is reported in Physical Review Letters.

Quantum random walk puts a limit on superposition

Is there a limit on how large a quantum superposition can be, or can macroscopic objects – cats, say, or even humans – also exist in a superposition of quantum states? Our daily experience suggests that large objects do not obey the rules of quantum mechanics and instead behave classically, hinting that there could be a fundamental boundary between the quantum and classical worlds.

To try to nail down exactly where this boundary lies, researchers in Germany have tracked the motion of a large atom in an optical lattice. They found that the atom moves in a non-classical way, behaving as a quantum superposition that occupies more than one location at any given time.

Boundary conditions

Probing the classical–quantum boundary is currently of great interest to physicists, with a variety of different experiments trying to work out where such a cut-off may lie. Indeed, in the past few years, physicists have been placing ever-larger objects into states of quantum superposition. These are often interference experiments, whereby large molecules are sent through a double slit and made to interfere with themselves.

But in 1985 Anthony Leggett and Anupam Garg took a decidedly different approach to the quest by developing a theory known as “macrorealism”. Instead of showing that quantum theory holds, they aimed to show that anything apart from a quantum description would disagree with experimental observations. In explicit contrast with quantum theory, the theorists posited that in the worldview of macroscopic realism, large objects must be in one determinate macroscopic state at any given time, allowing for no superposition or blurriness in the system. Macrorealism has two main criteria: that macroscopic superpositions are not allowed and that it is possible to make a measurement of the system without influencing the system in any way, meaning that you can always measure, say, the location of a large object without disturbing it.

If macrorealism were true, repeated measurements, at different times, of a single macroscopic system could only be statistically correlated up to a certain degree – a bound they called the Leggett–Garg (LG) inequality. The aim, then, is to violate the inequality with experimental evidence. This is similar in spirit to the Bell inequalities, whose violation demonstrates another basic quantum effect, entanglement. The difference is that the Bell measurements are made at different points in space, while the LG measurements take place at different times. Over the years, a number of experiments on photons, nuclear spins and superconducting circuits have violated the LG inequality.
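For the textbook case of a spin-1/2 precessing at angular frequency ω and probed with the dichotomic observable Q = σz at three equally spaced times, each two-time correlator is C_ij = cos(ωΔt), and the combination K = C21 + C32 − C31 must stay at or below 1 under macrorealism. This standard illustration (not the caesium-atom protocol of the experiment) shows quantum mechanics pushing K to 1.5:

```python
import numpy as np

def lg_parameter(omega_tau):
    """K = C21 + C32 - C31 for measurements at t1, t2 = t1 + tau, t3 = t1 + 2*tau.
    For a precessing spin-1/2 probed with Q = sigma_z, C_ij = cos(omega * dt)."""
    c21 = np.cos(omega_tau)        # correlation between times t1 and t2
    c32 = np.cos(omega_tau)        # correlation between times t2 and t3
    c31 = np.cos(2 * omega_tau)    # correlation between times t1 and t3
    return c21 + c32 - c31

K_max = lg_parameter(np.pi / 3)   # quantum mechanics reaches 1.5 here
K_trivial = lg_parameter(0.0)     # zero delay merely saturates the classical bound of 1
```

At ωτ = π/3 the quantum prediction exceeds the macrorealist bound of 1 by the maximum possible margin for this scheme.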

Walk the line

“According to [macrorealism], the [object] always moves on a specific trajectory, independent of our observation,” says Andrea Alberti at the University of Bonn, Germany, explaining “The challenge was to develop a measurement scheme of the atoms’ positions which allows one to falsify macro-realistic theories.” Alberti, along with Carsten Robens also at Bonn and colleagues in the UK, has violated the LG inequality with the largest quantum objects to date. The team observed the random quantum walk of a caesium atom in an optical lattice and used a certain type of “non-invasive” measurement that gives the most stringent violation of the inequality.

In almost perfect contrast to its classical cousin, a particle in a quantum random walk simultaneously travels in both directions – a “coherent superposition” of right and left. Over many steps, the particle becomes “delocalized”, or blurred, over many positions. Researchers had previously observed the quantum random walk of a caesium atom. In this new work, the atom moves along one of two optical standing waves that have opposite electric-field polarizations. As it travels, the atom’s position is measured at various times, with the aim of measuring a correlation between the temporally separated positions.

To do this, the team begins by putting the atom into a superposition of two internal hyperfine spin states – this corresponds to being in both waves simultaneously. Next, the two optical waves are made to slide past each other and this makes the atom smear out or blur over a distance of up to about 2 µm – this is the quantum walk. The atom is then optically excited, and its subsequent fluorescence reveals its final location.
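The delocalization at the heart of the experiment can be seen in a minimal discrete-time quantum-walk simulation – a generic Hadamard “coin-and-shift” walk, not the team's two-lattice scheme. After n steps the quantum walker's spread grows linearly with n, far faster than the √n spread of a classical random walk:

```python
import numpy as np

def quantum_walk(steps):
    """Discrete-time quantum walk: complex amplitudes psi[position, coin]."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin state
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ hadamard.T           # flip the quantum coin at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]     # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]     # coin state 1 steps left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)  # position probabilities

probs = quantum_walk(50)
x = np.arange(-50, 51)
quantum_std = np.sqrt((probs * x**2).sum() - (probs * x).sum() ** 2)
classical_std = np.sqrt(50)   # unbiased classical walk after 50 steps
```

After 50 steps the quantum walker's standard deviation is several times the classical value – the ballistic spreading that blurs the atom across the lattice.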

Cat or no cat?

Through the experiment, the researchers make three consecutive measurements. The first and third are known: they know where the atom began its walk and, thanks to the fluorescence, they know its final position. The middle measurement, which determines the internal spin state of the atom, is done non-invasively. If the atom is in one state – say, spin-up – this is noted but nothing is done to the experiment. If instead it is in the spin-down state, the atom is transported far away, so that its subsequent evolution cannot possibly influence that of an undisturbed particle before the final position measurement is made. If the atom then fails to light up in the final fluorescence measurement, the researchers know it was in the spin-down state and it is discarded.

Images showing the cat-in-a-container null measurement protocol

A simple way to picture this set-up is as a guessing game in which there are two containers and a cat hides under one of them. If lifting one container reveals it to be empty, we can assume the cat sits undisturbed under the second container – this is analogous to the spin-up state. Any run in which the cat is found under the lifted container (i.e. the spin-down state) is “discarded”.

By carrying out this “null result” measurement technique in the middle step, the researchers could determine the atom’s location without directly interacting with it. By repeating the experiment many times and noting when fluorescence is detected, the researchers can tell which wave the atom was in (and therefore its position), and also that the atom was not disturbed in any way. If macrorealism were true, the null measurement would not affect the outcome of the final fluorescence measurement, and the total correlation of the atom’s position in time could be explained classically – but this is not the case. Indeed, the blurring that happens in the quantum walk leads to a stronger total correlation than is possible under macrorealism. This is demonstrated mathematically by the violation of the LG inequality, clearly showing that macrorealism cannot apply to the caesium atom.

Rainer Kaltenbaek of the Quantum Foundations and Quantum Information group at the University of Vienna, who was not involved in the research, finds the work interesting. He points out that while the superpositions created by the team were not very massive (compared with some other experiments), they are still relatively large in extent, at about 2 µm – a scale comparable to everyday microscopic objects: most bacteria are around 3 µm across, and a human hair about 75 µm. Kaltenbaek also points out that “quantum physics does not perceive the [null measurement] as really ‘non-invasive’… it’s only non-invasive from a ‘realistic’ point of view.” He continues, “The authors state in their conclusions that one can still stick to a ‘realistic’ picture if one does not mind nonlocality – Bohmian mechanics is an example of that. Of course, that comes at a price – one runs into difficulties as soon as one starts thinking about relativity where ‘nonlocality’ and ‘simultaneous’ effects do not work out as nicely as in non-relativistic physics.”

The results of Alberti’s experiment confirm that a caesium atom obeys the laws of quantum mechanics and that macrorealism does not apply to it. In the future, similar experiments with larger masses and longer superposition times will help either to narrow down the boundary between the quantum and classical worlds, or to banish it once and for all and lay the foundations for a more advanced quantum theory.

The work is published in Physical Review X.

Copyright © 2026 by IOP Publishing Ltd and individual contributors