Temperatures set to plummet thanks to new cooling scheme

Physicists in the US have developed a new technique for cooling atoms in an optical lattice of criss-crossing laser beams. Unlike existing methods that rely on random collisions to remove hot atoms, this new scheme involves applying a precise sequence of modulations to the laser light. As well as having the potential to cool optical lattices to temperatures as low as 1 pK – well below what is possible today – the cooling algorithm could lead to quantum computers that are based on optical lattices.

Optical lattices containing just one atom per lattice site can be used to simulate a wide range of quantum phenomena that occur in solid materials, including magnetism, superconductivity and superfluidity. They are particularly useful because, unlike in solid systems, interactions between atoms in an optical lattice can be adjusted by changing the lasers or by applying a magnetic field.

Such lattices are created inside a vacuum chamber, where criss-crossing laser beams define a 2D array of lattice sites. The chamber also contains a very dilute gas of atoms such as rubidium. Each lattice site is an energy well, in which one or more atoms can become trapped. To ensure that each lattice site contains just one atom, physicists normally wait until random collisions eject the excess, higher-kinetic-energy atoms from the system. However, this involves a lot of waiting around and is not specific to individual lattice sites.

Sequence of modulations

Now Markus Greiner and colleagues at Harvard University have developed a new technique that is systematic and acts on individual lattice sites according to how many atoms they hold. The scheme relies on the fact that the excitation frequency required to eject an atom from a well depends on how many atoms are in that well. For example, when there are two atoms in a well, one atom can be ejected by modulating the depth of the well at a certain frequency. That modulation will not affect the other atom in that well – and atoms in wells containing one, three or more atoms will also be unaffected. If there are three atoms in a well, a modulation at a different frequency will eject one atom while preserving the other two. A modulation at yet another frequency will eject one atom from a well containing four atoms, and so on.

To take advantage of this feature, the team applied a carefully chosen sequence of modulations that removes the fourth, third and finally the second atom from the wells – leaving just one atom per well. The energy required for an atom to move into an (already occupied) adjacent well is extremely high, so this configuration – called a Mott insulator – will endure indefinitely. The existence of a Mott insulator is confirmed by using a special optical microscope to measure the occupancy of each lattice site.
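
The logic of this sweep is simple enough to capture in a toy simulation. The sketch below (in Python, with purely schematic occupancies – an illustration of the idea, not the team’s actual protocol) applies the occupancy-selective ejections in descending order:

```python
import random

# Toy model of occupancy-selective ejection: a modulation tuned to
# wells holding exactly n atoms removes one atom from each such well
# and leaves every other well untouched.
def apply_modulation(lattice, n):
    return [count - 1 if count == n else count for count in lattice]

# A schematic lattice with 0-4 atoms per site
lattice = [random.randint(0, 4) for _ in range(20)]
print("before:", lattice)

# Sweep from the highest occupancy downwards: 4, then 3, then 2.
# A quadruply occupied well sheds one atom at each step, ending at 1.
for n in (4, 3, 2):
    lattice = apply_modulation(lattice, n)

print("after: ", lattice)  # every occupied site is left with exactly one atom
```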

Removing entropy

A lattice in which some wells contain two or more atoms can be thought of as a lattice with defects. Such a lattice has higher entropy than a perfect Mott insulator. This new scheme therefore reduces the entropy of the lattice – and a reduction in entropy corresponds to a drop in temperature.
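
A back-of-envelope way to see this is to compare the configurational entropy per site before and after the sweep. The occupancy probabilities below are invented for illustration and are not taken from the experiment:

```python
from math import log

def entropy_per_site(probs):
    # Gibbs entropy per lattice site, in units of Boltzmann's constant
    return -sum(p * log(p) for p in probs if p > 0)

# Hypothetical fractions of sites holding n atoms, before and after
before = {0: 0.10, 1: 0.40, 2: 0.30, 3: 0.15, 4: 0.05}
after = {0: 0.10, 1: 0.90}  # multiply occupied sites reduced to one atom

print(entropy_per_site(before.values()))  # ~1.39 k_B per site
print(entropy_per_site(after.values()))   # ~0.33 k_B per site
```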

Although it is difficult to quote the exact temperature to which the researchers have cooled their lattice, Greiner says that they have reached the threshold between nanokelvin and picokelvin temperatures. While this is not as low as can be achieved using other techniques, which have reached tens of picokelvin, this algorithmic cooling could point the way to achieving picokelvin temperatures in the future. Greiner also admits that the quality of the team’s Mott insulators is not as high as those made using other techniques; however, he says there is room for improvement.

Quantum computers

In principle, optical lattices could also be used to store and process information in a quantum computer. Greiner points out that the lab’s microscopy technique can be used to measure and manipulate the quantum state of atoms in individual lattice sites. This means that controlled-NOT gates – a fundamental component of a quantum computer – could be implemented in an optical lattice.

The next step for Greiner and colleagues is to repeat their experiment using fermionic atoms, rather than rubidium, which is a boson. This could lead to more realistic simulations of the behaviour of electrons in solids because electrons are fermions.

Henning Moritz of the University of Hamburg in Germany describes the development of the algorithmic cooling as “a major achievement”. In particular, he believes that the technique could be crucial to gaining a better understanding of high-temperature superconductivity using quantum simulation. “The outstanding challenge is that temperatures significantly lower than those reached today have to be achieved, and this work might represent the crucial step to success,” he told physicsworld.com. Extending the technique to cool fermionic atoms would also represent an important step in this direction, he believes.

The work is described in Nature.

Ohm’s law holds down to atomic scale

A new technique for embedding atomic-scale wires within crystals of silicon has revealed that Ohm’s law can hold true for wires just four atoms wide and one atom tall. The result comes as a surprise because conventional wisdom suggests that quantum effects should cause large deviations from Ohm’s law for such tiny wires. Paradoxically, the researchers hope the finding will aid the development of quantum computers.

As chipmakers pack increasing numbers of circuits onto silicon wafers, the sizes of transistors and other devices are nearing the atomic scale. Beyond the sheer technological challenges of making ever-smaller components, many physicists are concerned that the inherent fuzziness of quantum mechanics will soon render the familiar classical laws of electronics obsolete.

To investigate conduction on the atomic scale, Michelle Simmons, Bent Weber and colleagues at the University of New South Wales in Australia have developed a method of using phosphorus atoms to embed atomically thin conducting regions within a crystal of bulk silicon. Phosphorus has one more electron in its outer shell than silicon and if a silicon atom is replaced by a phosphorus atom (a process called n-doping), it donates a free electron to the crystal, thereby raising the conductivity of the doped region.

“Remarkable achievement”

In what condensed-matter physicist David Ferry of Arizona State University in the US describes as “a remarkable achievement”, Simmons’ team use the tip of a scanning probe microscope to create a channel in the silicon by removing layers of silicon atoms. The surface is then exposed to phosphorus gas, followed by the deposition of silicon atoms. The result is a chain of phosphorus atoms embedded inside a silicon crystal – effectively an atomic wire. The team found that the resistivity of these wires was constant right down to the atomic scale. This means that the resistance of such a wire is proportional to its length and inversely proportional to its area, just as you would expect from Ohm’s law.
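
In other words, the wires obey the classical relation R = ρL/A down to the atomic scale. A minimal sketch of that scaling (all numbers are illustrative assumptions, not the paper’s measured values):

```python
def wire_resistance(resistivity_ohm_m, length_m, area_m2):
    """Classical Ohm's-law scaling: R = rho * L / A."""
    return resistivity_ohm_m * length_m / area_m2

# Illustrative numbers only: a wire four atoms wide and one atom tall,
# taking ~0.25 nm per atom, 100 nm long, with an assumed resistivity.
width, height = 4 * 0.25e-9, 1 * 0.25e-9   # metres
rho = 3e-6                                 # ohm-metres (assumed)
print(wire_resistance(rho, 100e-9, width * height))  # ~1.2 MOhm here
```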

Although Simmons says the techniques used to create the wires cannot currently be deployed in industrial processes, Ferry believes it is a valuable demonstration that, in principle, the miniaturization of classical electronics can continue for several years. “Firms such as Intel have been worried about making their devices so small that they become quantum mechanical in their behaviour,” he says. Transistor gate lengths are now about 22 nm, which is about 100 times the spacing of the individual silicon atoms. “There’s a concern about how small these devices can become before quantum effects take over, and this suggests they still have a few more generations,” Ferry adds.
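
That “about 100 times” figure roughly checks out if the silicon–silicon bond length (about 0.235 nm, an assumed textbook value) is taken as the atomic spacing:

```python
# Rough check of the "about 100 times" claim, assuming the
# silicon-silicon bond length (~0.235 nm) as the atomic spacing
gate_length_nm = 22.0
si_bond_length_nm = 0.235
print(gate_length_nm / si_bond_length_nm)  # ~94, i.e. roughly 100
```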

Addressing single atoms

Simmons’ group, however, is not interested in conventional electronics and instead is working towards the development of quantum computers. The team hopes to use individual phosphorus atoms as quantum bits, or qubits. “We’re developing single-atom devices,” explains Simmons, “and in that development we’ve realized that to be able to address a single atom, we need to be able to make the electrodes the same size – and that’s really what we’re using these wires for.”

Ferry, however, is more sceptical – not just about that approach but about quantum computing in general. “I’m considered to be one of the ‘antis’ in that world,” he says. Indeed, he even suggests that the persistence of classical phenomena on the atomic scale could make it difficult to use phosphorus atoms as qubits.

Nevertheless, Simmons remains optimistic. “Five years ago, there were lots of potential barriers to developing a phosphorus-based quantum computer and we’ve overcome those gradually, bit by bit. At the moment I guess the big challenge for quantum computing is to make a scalable system. Certainly these wires are very helpful towards that goal,” she says.

The research is published in Science.

Who is the greatest living physicist?

By James Dacey

This week, the scientific community and the media are celebrating the phenomenal achievements of Stephen Hawking, who turns 70 on Sunday.

In addition to his work in cosmology, Hawking has been prolific in popularising the complex ideas of theoretical physics through his books, his lectures and his appearances on television. His bestselling book A Brief History of Time has sold more than 10 million copies worldwide and it regularly appears in polls charting “the best popular-science books of all time”. Indeed, Hawking is now so famous that he regularly crops up in popular culture – including several appearances on The Simpsons – and his name has even become a byword for intelligence.

Although Hawking’s achievements clearly transcend science itself, make no mistake: he has made colossal contributions to physics. Hawking’s work on black holes is considered to be some of the most important physics of the past century, not least because it started to unify quantum theory, general relativity and thermodynamics. And, here at the Physics World headquarters, all this talk of achievement has led us to wonder whether Hawking should be considered the greatest among his peers. We want to know your opinion on this issue. In this week’s Facebook poll we are asking the following question:

Who is the greatest living physicist?

Philip Anderson
Stephen Hawking
Steven Weinberg
Frank Wilczek
Ed Witten

To cast your vote, please visit our Facebook page. And, of course, if you believe that this accolade should be bestowed on another physicist not on our list, then please feel free to post a comment on the poll.

In our final poll of 2011, we looked ahead to this year and the exciting discoveries it might bring. We asked which of the following is most likely to become a confirmed discovery in 2012: the Higgs boson; neutrinos travel faster than light in a vacuum; both; or neither.

Some 53% of respondents believe that a confirmed Higgs discovery alone is the most likely outcome. The second most popular option, with 35% of the votes, was that neither will be discovered. Superluminal neutrinos attracted 32% of voters, and just 21% are optimistic enough to predict that both will become confirmed discoveries.

Brian Kelly, a senior research physicist at Fermi National Accelerator Laboratory, was among the people to comment. He wrote: “Superluminal neutrinos will surely go away. I hope the small Higgs signal is confirmed. If it isn’t, the raison d’être for construction of the LHC is negated and the future of experimental high energy physics looks dismal.”

Thank you for all your responses and we look forward to hearing from you again in this week’s poll.

Was a metamaterial lurking in the primordial universe?

A scientist in the US is arguing that the vacuum should behave as a metamaterial at high magnetic fields. Such magnetic fields were probably present in the early universe, and therefore he suggests that it may be possible to test the prediction by observing the cosmic microwave background (CMB) radiation – a relic of the early universe that can be observed today.

One of 2011’s strangest predictions in physics was the suggestion by Maxim Chernodub of the French National Centre for Scientific Research that, at incredibly high magnetic fields, superconducting states can emerge from the vacuum. This was particularly interesting because one of the main difficulties facing scientists working on traditional superconductivity is preventing superconducting states disappearing in the presence of even moderate magnetic fields.

Soup of quarks and antiquarks

In April Chernodub argued that an extremely high magnetic field should make the vacuum superconducting along the axis of the field – with the vacuum remaining insulating in directions perpendicular to the field. This prediction was based on quantum chromodynamics (QCD), which describes the interactions between quarks and gluons. QCD treats the vacuum not as empty space but as a boiling soup of virtual quarks and antiquarks constantly popping into and out of existence. An up quark can combine with a down antiquark to produce a rho meson.

These rho mesons are normally so unstable that they vanish almost instantaneously, but Chernodub calculated that, at magnetic field strengths greater than 10¹⁶ T, the mesons would become massless and therefore stable. This, he predicted, would lead to a superconducting state. However, it would be impossible to test this in the lab today because scientists on Earth have real problems producing fields of more than 100 T.

Now Igor Smolyaninov of the University of Maryland has built on Chernodub’s work to show that parallel magnetic field lines in a vacuum would arrange themselves to form a triangular lattice in the plane perpendicular to the field – much like an Abrikosov lattice in a superconductor. The vacuum near to each field line would be a superconductor, while the regions between the field lines would be an insulator.

Bizarre properties

This configuration is very similar to that of certain man-made metamaterials that are made from lattices containing regions of conducting and insulating materials. Smolyaninov has shown that this magnetic-field-induced lattice would function as a hyperbolic metamaterial. Such metamaterials possess the bizarre, counterintuitive optical property of having a negative refractive index and have been used to create superlenses capable of resolving features smaller than the diffraction limit imposed on normal lenses.

While physicists do not have access to magnetic fields strong enough to test Smolyaninov’s theory, the magnetic field in the universe in the first fraction of a second after the Big Bang may have been strong enough to give rise to Chernodub’s superconducting state. The universe as a whole may, therefore, have behaved as a giant metamaterial superlens, argues Smolyaninov. Although he has yet to make a definite, testable prediction, Smolyaninov suggests that it should be possible to test the metamaterial idea – and, by inference, the whole idea of vacuum superconductivity – by looking for imprints of this lensing effect on the present-day structure of the universe, and in particular on the CMB.

“Right in front of my nose”

Chernodub is impressed and explains that while continuing to work on vacuum superconductivity, he had also been thinking about metamaterials in high-energy physics without ever connecting the two concepts. “Then Smolyaninov comes along and says ‘Hey, you know your superconductor is a perfect metamaterial’,” he laughs. “It was as if I was looking for a coin and he pointed it out right in front of my nose.” He questions, however, whether the extremely hot, dense conditions in the very early universe were so far from the zero-temperature, zero-pressure model used in Smolyaninov’s paper – and his own – that superconductivity, and therefore metamaterial lensing, would have been impossible.

Cosmologist Andrew Jaffe from Imperial College London is also sceptical about the potential to test the idea using evidence from the early universe. “I think there is one major sticking point, which is that many of the ideas for generating very strong magnetic fields would happen when the universe was small compared with the radius of the rho meson. Hence, I’m not sure that Smolyaninov’s calculations would apply. By the time they did apply, I suspect the magnitude of the field would have been too small to have the vacuum-polarization effect.”

The research is described in Physical Review Letters.

New photo portraits to mark Hawking's birthday

By James Dacey

This portrait is part of a series of photographs commissioned by London’s Science Museum to celebrate the birthday of Stephen Hawking, who turns 70 on Sunday.

In the picture, the celebrated cosmologist was snapped in his office at the University of Cambridge by the photographer Sarah Lee. It is the classic scientist’s office: modestly decorated, cluttered and with a blackboard full of calculations.

In the foreground you can make out a toy model of one of NASA’s space shuttles, which I believe is Discovery, alongside what appears to be a plastic model of Hawking as he has been depicted in several episodes of The Simpsons. Note as well the crystal ball – perhaps to aid the great scientist as he searches for his next insight into black holes.

The new collection of pictures will be on show at the Science Museum from 20 January as part of a display that will celebrate Hawking’s life and achievements. The display will feature objects and papers sourced from Hawking’s own personal archives.

Meanwhile, at the University of Cambridge, a special conference is being held called The State of the Universe. The event starts today and will conclude with a public symposium to celebrate Hawking’s birthday on Sunday. A string of high-profile physicists will be speaking, including Kip Thorne, Frank Wilczek and one of last year’s Nobel prize winners, Saul Perlmutter.

For full details see the event website, and you can watch the talks via this live stream.

Third experiment homes in on neutrino mixing angle

By Hamish Johnston

Taking seventh place in our top 10 breakthroughs for 2011 were physicists working on the Tokai-to-Kamioka (T2K) experiment in Japan, who were the first to measure the rate at which muon neutrinos change into electron neutrinos (and then back into muon neutrinos) as they travel hundreds of kilometres through the Earth. This neutrino oscillation was then observed by scientists on the similar MINOS experiment in the US, with some degree of agreement between the two values. Both experiments provided important new information about the physics of neutrinos, which is by no means settled.

Neutrinos come in three “flavour” states – electron, muon and tau. However, physicists also believe that neutrinos can be described in terms of combinations of three mass states – m1, m2 and m3. Interference between these mass states gives rise to the observed oscillations of neutrino flavour.

Although physicists have measured many of the parameters that describe this flavour/mass system, one crucial value remains unclear. This is the “mixing angle” θ13, which is a measure of how the m1 and m3 mass states are combined within the flavour states.

T2K and MINOS have both given preliminary values for θ13 that are in rough agreement, and now a third experiment – Double Chooz in France – has also determined θ13 (what is actually measured is sin²2θ₁₃).

Double Chooz took a different approach by looking at electron antineutrinos that are produced in two nuclear reactors and detected about 1 km away after travelling through solid rock. The electron antineutrinos are expected to oscillate to either muon or tau antineutrinos, and the rate at which electron antineutrinos vanish from the beam due to oscillation is determined by θ13 and one other parameter that is well known.

The experiment ran for 101 days, during which the number of electron antineutrinos that should have been detected was calculated to be 4344. Instead, the physicists saw only about 4100 events in the detector.
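
Those two numbers correspond to a disappearance fraction of about 6%. In a simple two-flavour picture – with representative values assumed here for the baseline, antineutrino energy and mass splitting, whereas the real analysis fits the full energy spectrum and backgrounds – that deficit translates into a rough value of sin²2θ₁₃:

```python
from math import sin

# Observed deficit of reactor antineutrinos
expected, observed = 4344, 4100
survival = observed / expected  # ~0.944

# Two-flavour survival probability:
#   P = 1 - sin^2(2*theta13) * sin^2(1.27 * dm2 * L / E)
# Representative (assumed) values: L in m, E in MeV, dm2 in eV^2
L, E, dm2 = 1050.0, 4.0, 2.4e-3
osc = sin(1.27 * dm2 * L / E) ** 2

print((1 - survival) / osc)  # ~0.11 – same ballpark as the fitted 0.086
```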

Double Chooz found that sin²2θ₁₃ is about 0.086, whereas T2K suggests a value of about 0.11 and MINOS gives 0.04. All of these figures have large uncertainties associated with them and cannot be seen as definitive measurements of the mixing angle. What’s becoming clear, however, is that θ13 is not zero.

Once θ13 has been determined, the next challenge for physicists will be to work out if there are differences between the oscillations experienced by neutrinos and antineutrinos. The discovery of such an asymmetry could shed light on why there is much more matter than antimatter in the universe.

You can read a preprint describing the Double Chooz results here.

Ice avalanche enters the lab

A new experiment in the UK has been created to simulate ice avalanches, a hazard that can follow the collapse of a glacier or the eruption of an ice-capped volcano. Despite the obvious risk these events pose to nearby people and communities, they are still a relatively unexplored geological phenomenon. Early results from the experiment suggest that melting at the surface of individual ice particles helps to explain how these flows can travel so fluidly down a slope. Ultimately, the research could lead to more accurate systems for predicting the onset and characteristics of ice avalanches in regions of the world prone to the hazard.

Current engineering models of ice flows tend to be based on data collected in specific geographic regions, where the approach is focused on creating a model that corresponds with field measurements. In Europe, for example, a lot of research has been based in alpine areas, particularly in Switzerland, France and Norway. These models provide town planners and other local authorities in these regions with information such as the likely extent and speed of ice flow in areas susceptible to avalanches.

But the limitation with this approach is that the models do not necessarily apply to other parts of the world, where defences against avalanches are often less developed. “The current need in Europe is to help predict these types of avalanche in the Carpathian mountain range across Eastern Europe and there are limited data available in these regions,” says Barbara Turnbull, a researcher at the University of Nottingham in the UK. “Furthermore, aspects of the underlying physics of these very complex flows are not understood, leading in some cases to flow behaviour that cannot be explained with traditional theories.”

Avalanche in a drum

Turnbull has sought to develop a more generalized approach to studying ice flow in avalanches. She has designed an experiment to examine the behaviour of flowing ice particles within controlled conditions in a laboratory at the University of Cambridge in the UK. The set-up consists of a rotating drum filled with ice spheres, which she monitors using high-speed video in order to examine the interactions between individual spheres of ice.

In a series of tests, ice spheres with diameters of roughly 5 mm were created by dripping water droplets slowly into a bath of liquid nitrogen. These spheres were then transferred to a Perspex drum, 350 mm in diameter and 20 mm wide, which was rotated once every 3.75 s. Over the course of 45 min, the drum was filmed every 2 min at a rate of 250–500 frames per second. Turnbull repeated the experiment at –4 °C, –2 °C, –1 °C and 0 °C.
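
For a sense of scale, a quick sanity check using only the figures quoted above shows that the drum’s rim moves at well under a metre per second and that each revolution is captured in hundreds of frames:

```python
from math import pi

diameter = 0.35  # m
period = 3.75    # s per revolution

print(f"rim speed: {pi * diameter / period:.2f} m/s")  # ~0.29 m/s
for fps in (250, 500):
    print(f"{fps} fps -> {fps * period:.0f} frames per revolution")
```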

At all four temperatures, Turnbull discovered that melting at the interface of particles significantly increased the overall speed of the ice flow, and this process subsequently led to more melting and faster flow. Reporting her findings in Physical Review Letters, Turnbull believes that the feedback system created by these surface interactions can in part explain the large distances and speeds that ice avalanches can reach.

“This work will allow the development of a new type of avalanche model based on the key physical processes,” Turnbull told physicsworld.com. “Most people are focused on developing a model that works – a model that can be validated against field measurements,” she says. Turnbull believes that her approach is more fundamental, and for that reason it could lead to a more generalized understanding of ice avalanches that is not tied to particular areas of the world.

Importance of experiment

Demian Schneider, a glacial-hazards researcher at the University of Zurich in Switzerland, is impressed by the experimental approach taken by Turnbull. “The basis of preventing future avalanche disasters is on the one hand practical knowledge including empirical data. And on the other hand, a high level of understanding of the physical process, which can be achieved by experiment.”

Schneider does note, however, that the controlled experimental approach taken by Turnbull has both pluses and minuses when compared with field studies. One advantage he cites is that because Turnbull’s laboratory avalanche is effectively endless, it provides the researcher with time to study changes in flow behaviour, including the melting of ice and changes in grain size as particles fragment. On the downside, however, Schneider says that the rotating drum is very different from a real-world avalanche, not least because the ice flows over a perfectly curved bed, which would not exist in nature.

In the short term, Turnbull intends to carry out further testing at the Cambridge laboratory. But within the next five years she hopes to begin calibrating her findings against measurements in the field by developing a network of observation stations in areas prone to ice avalanches, including the Caucasus, Switzerland, Norway and Canada.

Turnbull acknowledges that from a global perspective the current risk posed by avalanches is low compared with that of other geophysical flows such as volcanic eruptions. But she believes that the risk is changing quickly as climatic patterns appear to be altering and certain areas may be particularly affected. “Ice-capped volcanoes close to habitation, such as Mount Rainier [US] and Cotopaxi [Ecuador] and lower-altitude permafrost areas are the key locations of vulnerability,” she says.

A developing hazard

An increasing risk posed by avalanches is also predicted by Stephan Herminghaus, a researcher of complex flow at the Max Planck Institute for Dynamics and Self-Organization in Germany. “I think it is sufficiently clear from the IPCC [The Intergovernmental Panel on Climate Change] reports that there are many places on Earth that will be subject to increased avalanche hazards,” he says. Herminghaus predicts that conditions could soon change in all mountainous regions where soil and rock have been stabilized in part by permafrost.

Christian Huggel, a researcher at the University of Zurich who specializes in high-mountain and glacial hazards, goes further and suggests that certain regions are already experiencing an increased risk. “We have relatively robust evidence of an increase in high-mountain rock fall and avalanches in the European Alps over the past decades,” he says. Huggel stresses, however, that it is not yet clear whether we can say we are experiencing a global increase.

In addition to its environmental application, this new research could also lead to an improved understanding of ice flow in other areas of science. Turnbull suggests that astronomers could use the experiment to build more detailed histories of fragments retrieved from comets, which are essentially a mix of ice and rock. She says it could also inform industry, for example helping to improve the efficiency of production lines where frozen food is transported in close contact around a factory.

Knighthoods for graphene and biophysics pioneers

By Hamish Johnston

Andre Geim and Konstantin Novoselov of the University of Manchester have received knighthoods in the 2012 New Year Honours. The pair bagged the 2010 Nobel Prize for Physics for their pioneering work on graphene – sheets of carbon just one atom thick. Both physicists were knighted for their “services to science”.

James Dacey visited the Manchester lab where graphene was first isolated and shot a video about how to make the material. The short film was one of our favourites of 2011 and you can watch it here.

Also receiving a knighthood is Venkatraman Ramakrishnan, a biophysicist at the University of Cambridge who shared the 2009 Nobel Prize for Chemistry for his work on the structure and function of the ribosome.

Other physicists honoured include Jonathan Flint, chief executive of UK-based Oxford Instruments, and Philip Sutton, who has held a number of senior positions at the UK’s Ministry of Defence, including director of science and technology strategy. Both Flint and Sutton become Commander of the Order of the British Empire (CBE). In 2007 Flint talked to me about how Oxford Instruments is making physics profitable, and you can read that interview here.

The honour of Officer of the Order of the British Empire (OBE) has been bestowed on James McLaughlin of the University of Ulster and on Mohamed El-Gomati of the University of York. Member of the Order of the British Empire (MBE) honours go to John Huddleston of AEA Technology, Derek Raine of the University of Leicester and Ian Miller of the University of Lancaster.

How to grab attention with your science videos

By Liz Kalaugher, editor of environmentalresearchweb

And no, that doesn’t mean including footage of people attending exercise classes. The S Factor under scrutiny in this blog is the S Factor Workshop on how to make successful science videos, held at the American Geophysical Union Fall Meeting in December 2011. The event saw a panel of Hollywood professionals critique 10 entries, picked from a total of 42 submissions by hopeful researchers.

On the panel were marine-biologist-turned-filmmaker Randy Olson, author of Don’t Be Such a Scientist: Talking Substance in an Age of Style, and his former film-school classmates Sean Hood, now a screenwriter with credits such as horror movie Halloween: Resurrection and Conan the Barbarian to his name, and Jason Ensler, co-producer and director of Franklin & Bash, and director of episodes of TV hits Gossip Girl, Chuck and Psych.

The trio were cheerfully disparaging of scientists’ storytelling skills, saying that many of the videos took the approach “here’s our lab, here’s our kit, come see us some day”. But story is key – “think of it as making a trailer for science”.

One exception was San Jose State University’s Green Ninja. The panel felt this video showed good storytelling, with a character who clearly has a problem – his oversized and ever-growing feet – that he needs to solve.

A useful technique, as detailed by Nicholas Kristof, is to follow the story of one individual and, ideally, to reach an uplifting conclusion. According to Olson, Kristof argues that an article on death is depressing, but an article on people fighting a disease engages. In the same way, a story about coral deterioration could be depressing or dull, but a story about a man interested in coral can catch people’s attention.

Since film is good for conveying emotion and humour but not for transmitting information, it can be useful to break your complex content down to a simple story. According to Ensler, it takes time to develop stories but they can be overdeveloped and lose some of their original spark. Hood stressed the need “to keep hold of that nugget of awe”, and that scientists should “inspire the 11 year old in all of us”.

It’s also worth considering changing the order of events from a “that happened, then that happened, then this happened” type of narrative. Replacing “ands” in the storyline with “buts” and “therefores” can change the direction of the story and add tension, the film experts explained. For example, in Volcano from Space, the storyline could have been “We monitor volcanoes but they’re hard to see so we need new techniques.” Arguing two sides of an issue can also create a good story.

Ensler recommended that researchers set up cameras whenever they are in the field so that they have plenty of interesting footage to use in their videos.

But “interesting” is not enough; if somebody says “interesting” after Hood’s latest film pitch, he knows “I’ve failed, because I haven’t grabbed them emotionally”. People are most engaged by people talking, not things, he said, so it’s useful to show a person alongside a piece of scientific kit. Watching a person speak in real life is different from seeing them onscreen, so if you’re filming a talking head you need multiple cameras and different angles, as per the TED talks, to stop it from being boring.

That said, many of the films submitted began with somebody speaking to camera – the panel felt there was no need for this. According to Olson, it’s good to arouse and fulfil – grab the audience’s attention, make them want, then fulfil their need. For example, the Mata Eruption video from JISAO (the Joint Institute for the Study of the Atmosphere and Ocean) could have put its amazing video footage of an undersea volcanic eruption right at the start of the film before answering the questions the footage raises. Alternatively, Ensler said the team could have made the audience want by promising them they were going to see some great footage but first explaining why it’s hard to obtain.

As film is a visual medium, it can be helpful to see if you can get the gist of a short film without listening to the soundtrack, the professionals explained. Indeed, one of the most well-received videos – Perspective, which used animated graphics to indicate the relative energy release of large earthquakes throughout history – contained no sound at all, and was praised for its Hitchcockian withholding of information from the audience.

In summary? Every picture (should) tell a story…

Bulging fibre excels at storing light

A tiny resonator that can store light without significant losses has been created by researchers in the US. The device is made from an ordinary optical fibre and is much more efficient than conventional resonators made from silicon. Such a low-loss, fibre-based resonator could boost the performance of telecommunications networks and could even lead to all-optical computers – claim the researchers.

Optical communication systems are extremely efficient when it comes to transmitting data in the form of light pulses. However, switching and processing data usually requires the pulses to be converted to electrical signals and then back into light again – operations that are costly both in terms of energy and chip size. Researchers are therefore keen on developing all-optical switches and processors, and one key challenge is how to store a light pulse for long enough for it to be manipulated.

One solution is an optical resonator, whereby a pulse is stored for a short period of time by having it bounce back and forth between mirrors. Silicon-based resonators can be made with dimensions of just a few hundred nanometres. However, according to Mikhail Sumetsky at OFS Laboratories in New Jersey, who was involved in this latest research, “The problem with [silicon] lithography is that it usually uses etching, and that causes roughness.” Because rough surfaces scatter more light than smooth ones, such resonators drop the signal at a rate of about 0.1 dB/cm as the light travels within them. In contrast, the resonator developed by Sumetsky and colleagues is shaped by heating and stretching an optical fibre, which gives it a much smoother surface and therefore a lower loss rate – less than 10⁻⁴ dB/cm.
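
To get a feel for what that difference in loss means, each attenuation rate can be converted into a rough photon storage time – a minimal sketch, where the group index is an assumed, typical value for silica and real storage times also depend on the resonator geometry:

```python
from math import log

C = 3e8          # speed of light, m/s
N_GROUP = 1.45   # assumed group index for silica fibre

def photon_lifetime(loss_db_per_cm):
    """Time for stored intensity to fall by 1/e, given propagation loss."""
    alpha = loss_db_per_cm * 100 * log(10) / 10  # dB/cm -> 1/m
    return N_GROUP / (C * alpha)

print(photon_lifetime(0.1))   # lithographic resonator: ~2 ns
print(photon_lifetime(1e-4))  # fibre-based resonator: ~2 microseconds
```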

Tugging with precision

The resonator itself is just a very slight bulge in the optical fibre, about 20–40 µm in diameter, which is created by heating points on either side of the fibre with an infrared laser and tugging the ends of the material so that the heated glass stretches and narrows. The narrow sections need only be about 10 nm slimmer than the bulges, and this method gave the team control over the diameter to a precision better than 0.1 nm.

John Fini, one of Sumetsky’s collaborators, says that the techniques used are similar to those employed for making microfibres years ago. “But an improved precision of two orders of magnitude enables us to fabricate these microresonators,” he adds.

In their experiment, light enters the resonator via a second optical fibre that runs from a light source, past the resonator, and finally to a detector. The fibre narrows as it passes the resonator so that light can move easily between the two components. If the frequency of the light matches that of the resonator, the light is diverted into the resonator. Otherwise, the light travels past the resonator and on to the detector.

Spiralling round the bulge

Once inside the resonator, the light spirals along the surface of the bulge until it reaches a constriction, where it is reflected back. The result is a trapped light wave with a pattern of intensity peaks and nodes within the bulge.

With a single resonator, the researchers say that they trapped light for 100 times longer than is possible with lithographically made resonators. Furthermore, they claim that they could hold light even longer by adding more resonators. This is as simple as heating more points on the fibre and stretching, and the team has demonstrated a series of 10 resonators on a single optical fibre. The neighbouring resonators can pick up the light before it heads toward the detector, thus storing it for longer periods.

Uday Khankhoje of NASA’s Jet Propulsion Laboratory in Pasadena, California, suggests that these microresonators could be a good option for routing optical data on the Internet. A router holds data until the path is clear for them to travel on, a function known as buffering. In traditional routers, optical data would be converted to electronic signals for storage and direction, then reconverted to optical signals later. “As these new devices are made of optical fibres themselves, and the reported losses are quite low, it would be practical to use their light-storing functionality to buffer light,” says Khankhoje.

Shuttling light a problem

Khankhoje is sceptical, however, of the application of the new resonators in optical computing. “A very serious practical issue is that a lot of optical power would be lost in shuttling light between these devices and lithographically fabricated devices; the latter being required to perform core computational tasks,” he says. Sumetsky argues that a very low-loss connection between lithographically made waveguides sitting on top of the optical fibres has already been demonstrated, and that the resonators should work in a similar system.

Takasumi Tanabe of Keio University, Japan, notes that although other microresonators are capable of storing light for longer times, this technology stands out for practical applications because the resonators are made with common optical fibre, and the team has shown that the devices can be connected. “Combining more than one cavity brings the possibility of having slow light for memory buffering, enhanced nonlinearity for all-optical logic operation, and fibre sensing,” he says.

This research is described in papers published in Optics Letters and Optics Express.
