
A quantum accelerometer is being built for navy submarines

Illustration showing how a single laser beam is converted into four beams for trapping atoms

Physicists in the UK are developing an accelerometer for the Royal Navy, based on the quantum interference of ultracold atoms. The device will allow submarines to pinpoint their position underwater to within 1 m after travelling one day, without having to surface to use GPS. This is much better than is possible with current accelerometers, which are accurate to within 1 km after a day’s travel. With further development, the device could be used for oil exploration or even to do “gravity scans” of concealed objects.

Since the 1990s, physicists have been able to do interferometry experiments with ultracold atoms. Pioneered by Stanford University’s Mark Kasevich, the classic version of the experiment involves allowing an atom to fall under the influence of the Earth’s gravity. A laser pulse is fired at the atom that puts it in a superposition of two quantum states, which follow different trajectories much like photons travelling through an optical interferometer. A second pulse recombines the states and the resulting interference gives a precise measure of gravity – and can even reveal subtle effects of the general theory of relativity.

Extremely sensitive

Such a device can also be used as an extremely sensitive vertical accelerometer. Now, Ed Hinds and colleagues at Imperial College London have taken this experiment and rotated it by 90° to make an accelerometer that works in the horizontal direction. The device uses about one million rubidium atoms, which are trapped on an integrated chip using a magnetic field and laser light.

An important feature of the chip is that a single beam of laser light is used to trap the atoms. This beam is fired at a surface grating to create several beams of diffracted light, which together with a magnetic field are then used to trap the atoms.

The atoms have two quantum ground states, which the researchers denote as |1〉 and |2〉. The system is prepared so that all of the atoms are in |1〉 and then a light pulse puts the atoms into a superposition of |1〉 and |2〉. This action plays the role of the first beamsplitter in an optical Sagnac interferometer. State |1〉 has no recoil, while state |2〉 recoils along the direction defined by the light beams. Then, a second light pulse is fired at the atoms and this swaps the states so that |1〉 (with no recoil) becomes |2〉 (with recoil) and vice versa. This is analogous to the two mirrors of an optical interferometer, which direct the two diverging light beams towards a second beamsplitter, where they are recombined. Finally, a third light pulse plays the same role as the second beamsplitter of an optical interferometer.

Taking different paths

A measurement is then made to determine how many of the atoms are in state |1〉 – or, alternatively, how many are in |2〉. Either measurement can be used to compute the interferometer phase, which is related to the effective path difference taken by the two states. This path difference is proportional to the acceleration of the atoms along the direction of the light beams.
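For a three-pulse sequence of this kind, the standard result is that the interferometer phase grows as the square of the time between pulses, Δφ = k_eff·a·T². As a rough back-of-the-envelope sketch (the 780 nm rubidium wavelength is real, but the counter-propagating two-photon geometry and 10 ms pulse spacing are illustrative assumptions, not the Imperial team's parameters):

```python
import math

# Phase of a three-pulse atom interferometer: delta_phi = k_eff * a * T^2.
# Assumed, illustrative parameters: counter-propagating beams near the
# rubidium D2 line (780 nm) and a 10 ms interval between pulses.
WAVELENGTH = 780e-9                      # m, Rb D2 line
K_EFF = 2 * (2 * math.pi / WAVELENGTH)   # 1/m, two-photon effective wavevector
T = 10e-3                                # s, pulse separation

def interferometer_phase(a):
    """Phase shift (radians) for acceleration a (m/s^2) along the beams."""
    return K_EFF * a * T**2

# Even a micro-g acceleration gives a measurable phase of ~0.016 rad:
print(interferometer_phase(9.81e-6))
```

The T² scaling is why even a compact device can be so sensitive: doubling the free-evolution time quadruples the phase accumulated per unit acceleration.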

The simple design and operation of the accelerometer means that, in principle, it could be miniaturized for use on submarines. Indeed, the chamber in which the atoms are held has already been miniaturized on a chip. However, the associated electronic and optical components are still mounted in racks and on an optical table. Another challenge is to make the chip impervious to helium gas, which can leak through the walls and eventually contaminate the vacuum in which the atoms must be held.

Gravity scanners

The team is now working on shrinking the optical and electronic components of the accelerometer so that it can fit into about 1 m³. While this would make it suitable for naval use, the device would have to be further miniaturized – to the size of a beer can, for instance – before it could be sent down an exploratory bore hole to search for oil or other mineral deposits. Other possible applications that could emerge in a 5–10-year timeframe include “gravity scanners” that can peer into sealed containers and create density maps of their contents.

Hinds told physicsworld.com that it is likely that similar devices are being created for the navies of other nations. Indeed, Kasevich and colleagues have unveiled an atomic-interferometry-based “quantum gyroscope”, which is essentially an accelerometer that can be used for navigation. At the time, Kasevich said that the technology was going to be commercialized by AOsense, a company that he co-founded (see “Falling atoms measure the Earth’s rotation”).

The chip used by Hinds to trap the rubidium atoms is described in Nature Nanotechnology.

General relativity put to the test

In another recent development in the field, physicists in Germany and the US have used atomic interferometry to measure the effects of gravity on two different atoms: rubidium and potassium. The experiment found that the acceleration due to gravity experienced by both types of atoms is the same to one part in 10 million. This is the latest verification of the universality of free fall, which is a cornerstone of Einstein’s general theory of relativity. The rubidium atoms are more than twice as massive as their potassium counterparts. If they were seen to respond differently to gravity, it could point physicists towards a quantum-mechanical theory of gravity.
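Such a comparison is usually quoted as an Eötvös ratio, η = 2(g₁ − g₂)/(g₁ + g₂), with the experiment bounding |η| at about 10⁻⁷. A minimal sketch (the two g values below are made-up numbers chosen to sit just inside that bound, not the measured ones):

```python
# Universality-of-free-fall test phrased as an Eotvos ratio:
#   eta = 2 * (g_a - g_b) / (g_a + g_b)
# The experiment bounds |eta| at roughly 1e-7 (one part in 10 million).

def eotvos_ratio(g_a, g_b):
    """Dimensionless differential acceleration of two test species."""
    return 2 * (g_a - g_b) / (g_a + g_b)

g_rb = 9.81000000   # m/s^2, hypothetical value for rubidium
g_k  = 9.81000049   # m/s^2, hypothetical value for potassium

eta = eotvos_ratio(g_rb, g_k)
print(abs(eta) < 1e-7)  # True: consistent with universal free fall
```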

The research on the universality of free fall was done by Ernst Rasel of the Leibniz University Hannover, and colleagues, and is reported in Physical Review Letters.

US particle-physics panel presents plan for the future

High-energy particle physicists in the US must build further international collaborations and co-operation, according to recommendations made by the Particle Physics Project Prioritization Panel’s (P5) 2014 report, which was released yesterday. Top priorities over the next two decades include the US playing a vital role in upgrades to the Large Hadron Collider (LHC) at CERN in Switzerland and building a long-baseline neutrino facility based at Fermilab near Chicago. The report also calls for US participation in the planned International Linear Collider (ILC), should the project commence. In light of declining funding and tightened budgets for particle physics, the report presents a strategy that would allow the US to “invest purposefully in areas that have the biggest impacts and that make most efficient use of limited resources”.

P5 is part of the US Department of Energy’s High Energy Physics Advisory Panel (HEPAP). Since P5 presented its last report in 2008, the face of high-energy particle physics has changed and evolved, especially in light of the discovery of the Higgs boson particle at CERN.

Main drivers

The 25-member panel began its deliberations in September 2013 following a year-long, community-wide study known as “Snowmass”. Snowmass identified 11 groups of particle-physics questions that could be addressed, and P5 then whittled them down to create the five “drivers” that the panel feel show great promise for discovery in the next two decades.

The drivers are to use the Higgs boson as a new tool for discovery; to pursue the physics associated with neutrino mass; to identify dark matter; to lay the foundations for understanding dark energy and inflation; and finally, to “explore the unknown”, including the study of new particles. While the five “intertwined” drivers themselves were not prioritized in the report, the specific projects are categorized by construction costs as large (>$200M), medium ($50M–$200M) and small (<$50M), as well as a sequential list of large projects.

The list begins with the proposed “muon-to-electron-conversion experiment” or the Mu2e experiment at Fermilab and upgrades to the LHC. Following that is the wish for the US to host an international programme of neutrino research “that will attract the worldwide neutrino community, operating the world’s most powerful neutrino beam and, with international partners, building a major long-baseline neutrino facility complemented by multiple small, short-baseline neutrino experiments”. The panel has recommended that the current “Long-Baseline Neutrino Experiment” (LBNE) be redesigned as an internationally co-ordinated and funded programme called the Long-Baseline Neutrino Facility (LBNF). The report refers to LBNF as “the highest-priority large project in its timeframe”.

The P5 also suggests US participation in the development of an ILC in Japan, specifying that the US “should engage in modest and appropriate levels of ILC accelerator and detector design in areas where the US can contribute critical expertise”.

Varying scenarios

The report’s strategy also considers three different budgetary scenarios – if funding is constant for three years and then increases by 3% per year; if it is constant for three years and then increases by 2% per year; and finally, if funding is unconstrained. The panel also recommends investing a larger portion of the DOE’s high-energy-physics budget in the construction of new experimental facilities, raising it from 16% to 25%.
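The constrained scenarios are simple compound-growth projections. As a rough illustration (the $750M starting budget is an assumed round figure, not a number from the report):

```python
# Project a budget that is flat for some years, then grows by a fixed
# percentage per year - the shape of the report's constrained scenarios.
# The $750M baseline is an illustrative assumption.

def project(start, flat_years, growth, total_years):
    """Return the year-by-year budget (in $M) over total_years."""
    budget, out = start, []
    for year in range(total_years):
        if year >= flat_years:
            budget *= 1 + growth
        out.append(round(budget, 1))
    return out

ten_year_3pct = project(750.0, 3, 0.03, 10)
ten_year_2pct = project(750.0, 3, 0.02, 10)
print(ten_year_3pct[-1])  # ~922.4
print(ten_year_2pct[-1])  # ~861.5
```

Over a decade the gap between the 2% and 3% scenarios amounts to roughly $60M per year – enough, on the panel's reading, to decide whether the US can host a large project at all.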

The panel is quick to point out that the lowest budget scenario they outline is “precarious” and would be “close to the point beyond which the US would not be capable of hosting a large project while maintaining the other core program components”, and that this would, in turn, make the US “lose its position as a global leader in this field, and highly productive international relationships would be fundamentally altered”.

The report can be found on the P5 website.

A new Longitude Prize, global cooling in the 1970s, inspirational creatures and more

A red kite and a drone swoop down on their prey

A bird of prey swoops out of the sky, grabs its victim from the ground and flies off into the distance. It’s what a bird does instinctively, but how could we get a drone aircraft to do the same thing? That’s the subject of one of the papers in a special issue of the journal Bioinspiration & Biomimetics that focuses on “Bioinspired flight control”.

The above sequence of images is from a paper entitled “Toward autonomous avian-inspired grasping for micro aerial vehicles” by Vijay Kumar and colleagues at the University of Pennsylvania. The special issue also includes work on aircraft inspired by flying snakes, flocking birds and incredibly stable moths.


Keeping a telescopic eye on the Soviets

The podcast provides a history of Jodrell Bank, explaining how the observatory was created in 1945 using the radar technologies and expertise developed during the Second World War. The observatory was founded by Bernard Lovell, whose primary goal was to detect cosmic rays, though this initial aim never came to pass. Jodrell Bank’s associate director Tim O’Brien looks back on these early years and how the enthusiasm of the first Jodrell astronomers led them to commission what would become the world’s largest radio telescope – a giant steerable dish, 250 feet in diameter, now named the Lovell Telescope.

Having moved away from the military applications of radar technology to refocus on fundamental science, Lovell and his colleagues were soon drawn back in as the Cold War escalated. In 1957 the British government asked whether Jodrell Bank’s new giant radio dish could be used to track the rocket used to launch Sputnik I, the world’s first artificial satellite. The fear at the time was that the Soviets would use the same rocket technology to fire an intercontinental ballistic missile at the West. For a brief period in the early 1960s, Jodrell Bank was placed on stand-by to look out for such a dire eventuality, given that it was the only location in the West that could accurately track the missile.

Photograph of Bernard Lovell in the control room at Jodrell Bank Observatory

Dacey also meets a former director of Jodrell Bank, Francis Graham-Smith, who describes how the observatory teamed up with the Daily Express newspaper in 1966 as the two organizations collaborated to interpret a mysterious signal being beamed back to Earth by the Soviet mission Luna 9. To find out how this unlikely pairing came about and what it discovered, give the podcast a listen.

Tiny gold bars focus light into graphene

A simple way of creating and controlling surface plasmon polaritons (SPPs) in graphene has been demonstrated by researchers in Spain and Argentina. SPPs are quasiparticles that are a hybrid of light and electrons, and the new technique involves using simple gold antennas to channel light energy into the material. The research could lead to the development of new electronic devices that use light.

SPPs are quasiparticles that are combined oscillations of photons and mobile charge carriers, such as electrons. Although they can be excited in metals, they propagate much further in graphene, so several groups are studying the potential of graphene plasmonics as an interface between optical and electronic circuits and devices. One important benefit of SPPs is that their wavelengths are much shorter than those of visible light, which means that devices based on SPPs can be made much smaller than those based on light. However, there is also a downside: the wavelength of a plasmon in doped graphene is much shorter than that of an incident photon of the same frequency, making the SPP’s momentum much larger. For an incident photon to excite a plasmon in bare graphene would therefore violate the conservation of momentum.
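The size of the mismatch follows from p = h/λ for both the photon and the plasmon. As a rough sketch, assuming mid-infrared illumination and a ~40× wavelength squeezing (a typical figure quoted for graphene plasmons, not a number from this paper):

```python
# Momentum mismatch between a free-space photon and a graphene plasmon
# of the same frequency, using p = h / lambda for each. The 40x wavelength
# confinement is an illustrative assumption.
H = 6.626e-34  # J*s, Planck constant

photon_wavelength = 11e-6                    # m, mid-infrared light
plasmon_wavelength = photon_wavelength / 40  # m, strongly confined SPP

p_photon = H / photon_wavelength
p_plasmon = H / plasmon_wavelength

# The plasmon carries ~40x the photon's momentum, so direct excitation
# by a free-space photon cannot conserve momentum:
print(p_plasmon / p_photon)
```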

Channelling momentum

In 2012 researchers in Spain led by Rainer Hillenbrand at CIC nanoGUNE in Donostia-San Sebastian and Frank Koppens at the Institute for Photonic Sciences in Barcelona, in parallel with an independent group in the US, created and imaged SPPs in graphene using a near-field optical microscope. They achieved this using near-field (or evanescent) light, which extends only a very short distance from a surface but can carry very high momentum. By bringing the tip of the microscope – which is a source of evanescent light – very close to the graphene, the researchers could channel much more momentum to the graphene than would otherwise have been possible. Using the same microscope tip, they also imaged reflected SPPs and recorded interference patterns.

Now the Spanish group has found a simpler, more practicable way to excite and control graphene plasmons, which could be useful for future engineering applications. The researchers covered the graphene with tiny gold antennas (bars about 3 μm long) that absorb photons at a particular frequency. This creates an optical dipole in the antenna that, in turn, creates evanescent light. As the antenna is in direct contact with the graphene, energy from the near field creates SPPs in the graphene. By changing the size of the antenna, the frequency of light absorbed can be altered. This changes the frequency of the SPPs that are produced.

Focus and diffraction

The researchers manipulated the SPPs in various ways. For example, a straight antenna launched planar SPP waves, but the team also managed to focus SPPs to a point by using an antenna with a concave tip. They also demonstrated refraction of the SPPs using a 2D “prism” of bilayer graphene. The bilayer graphene has higher electrical conductivity than the monolayer graphene, so the wavelength lengthens inside the prism and the SPPs bend away from the normal according to Snell’s law. In the future, the researchers believe the bilayer should not be necessary. “One could simply apply a gate voltage on a small area of the graphene,” explains Hillenbrand, “and then one could actually tune the wavelength inside the prism to control the refraction angle. That is nearly impossible to do with other materials.” If this could be achieved, it would open the way for a plasmonic transistor, in which a gate voltage could switch a plasmon current on and off.
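The refraction itself is ordinary Snell's law, n₁ sin θ₁ = n₂ sin θ₂, with the longer plasmon wavelength in the bilayer corresponding to a lower effective index there. A minimal sketch (the effective indices below are assumed for illustration, not measured values):

```python
import math

# Snell's law for an SPP crossing from monolayer into the bilayer "prism":
#   n1 * sin(theta1) = n2 * sin(theta2)
# A longer plasmon wavelength in the more conductive bilayer means a lower
# effective index there, so the wave bends away from the normal.

def refraction_angle(n1, n2, theta1_deg):
    """Refracted angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

n_mono, n_bi = 1.0, 0.7  # relative effective indices (assumed)
theta_out = refraction_angle(n_mono, n_bi, 30.0)
print(theta_out)  # ~45.6 degrees: bent away from the normal
```

Gating a patch of graphene instead of using a bilayer would amount to tuning n₂ electrically, which is the "plasmonic transistor" idea Hillenbrand describes.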

Before such manipulations can be achieved, however, the team needs to improve the distance SPPs can propagate through the graphene, which at present is limited to 1–2 μm. The researchers are now working to boost the SPP’s range by using higher-quality graphene and looking for ways to dope it more highly, so it has more free electrons. “If one has achieved that,” says Hillenbrand, “I could imagine that these waves should propagate at least one order of magnitude further.”

Graphene expert

Alexander Grigorenko at the University of Manchester says that the work is not only a significant scientific achievement in the degree of control that the researchers have achieved over the graphene SPPs, but also a major technological one in the precision with which they have observed them using a new type of near-field microscope. “I would say there are just two labs in the world that could do something like what they’ve done right now,” he says. “If you talk about long-term importance: who knows?”

The research is published in Science.

Why don’t they listen?

In mid-September 2001 letters containing deadly anthrax spores were mailed to several news agencies and two US senators in a terrorist attack that killed five people and injured more than a dozen others. A top-level adviser to president George Bush asked his then science adviser John H Marburger III how to neutralize the spores on existing anthrax-ridden mail. Marburger convened a team of scientists, who came up with a carefully researched recommendation based on irradiating the mail with electron beams. It seemed to be a triumph of science’s application to the American national interest.

But when US Postal Service officials implemented the method – to kill anthrax but preserve the mail – the electron beam burnt some batches to a crisp. Surprised, Marburger investigated. He found that government officials had second-guessed the scientists. The officials had reasoned: if scientists said the right radiation dose to blitz the death spores was x, then 2x was surely safer! When Marburger ordered the dose scaled back, the method worked fine.

Marburger, who died in 2011, liked to cite this episode as a “relatively benign example of a potentially disastrous behaviour”, namely, the tendency of government officials to alter or ignore scientific advice. His store of more damaging examples included the Bush administration’s claim, in 2002, that aluminium tubes sought by Iraq were for a nuclear-weapons programme, contrary to the conclusion of scientists. In these and other cases, it is simply a fact that, as Marburger put it, “the methods of science [are] weaker than other forces in determining the course of action”.

Marburger – the longest-serving US presidential science adviser and an experienced scientist – grew curious about why science has such weak authority among political leaders. After he stepped down in 2009, he began to investigate.

Three grounds for authority

Marburger turned to the works of German historian and sociologist Max Weber, who in his influential book Economy and Society (1922) examined different types of authority, or the grounds on which people voluntarily comply with commands issued by others. There are three, Weber said: traditional, rational-legal and charismatic.

Traditional authority, Weber wrote, is rooted in a “belief in the sanctity of age-old rules and powers”. This is the authority possessed by village elders. Legal authority – the authority of a bureaucracy – is grounded in a “belief in the legality of enacted rules and the right of those elevated to authority under such rules to issue commands”. Charismatic authority is possessed by those who are “considered extraordinary and treated as endowed with supernatural, superhuman or at least specifically exceptional powers or qualities”.

Difficult to sustain over time, charismatic authority requires periodic reinvention and occasional proofs of exceptional powers such as an ability to perform miracles or to disclose secrets of nature. Weber called charismatic authority “irrational”, but noted it is one of the few means leaders have to take people on new, progressive paths; think of the authority of Martin Luther King, Mahatma Gandhi or Winston Churchill.

But which type applies to science? Clearly not the first two: no country is traditionally scientific or requires that its laws be grounded in sound science or scientific methods. Marburger concluded that the authority of science in governmental circles is, in Weber’s terms, charismatic. Science’s authority among politicians, that is, depends on them regarding it as possessing a special power or magic.

Scientists, Marburger continued, find this absurd. For them, science is not an “authority” but the only means at our disposal to understand nature. Because scientists have first-hand experience of how science works – of its roots in empirical testing and open discussion – they see acting against science as “a mild form of insanity”, as Marburger put it. “It is precisely because the power of science does not [my italics] require charismatic authorities that we should trust it to guide our actions,” Marburger wrote (Issues in Science and Technology, Summer 2010).

Yet from the standpoint of politicians lacking such first-hand experience, the voice of a scientist is but one among many voices clamouring to be heard. For them, “science is a social phenomenon with no intrinsic authoritative force”, which is why “the authority of science is inferior to statutory authority in a society that operates under the rule of law”.

This observation explains the waxing and waning of science’s authority in politics. When scientists make dramatic, socially communicated breakthroughs, their authority shoots up; in fallow years when they don’t, it tends to decline. That is why, for instance, physicists had such political power after the Second World War. Characterizing science’s political authority as charismatic also suggests that the only way to garner more authority for scientists in government is to improve the charisma of their calling. For this reason, Marburger concluded, “science must continually justify itself, explain itself, and proselytize through its charismatic practitioners to gain influence on social events”. For starters, we need more people like Brian Cox and Neil deGrasse Tyson.

The critical point

But I think there’s a fourth possible source of authority that I’ll call trust, which overlaps with Weber’s third type without involving irrationalism. We trust someone – that is, defer to them about something beyond our knowledge or power – when we “know their story”; when we’ve seen them operate in different contexts, and know their customs, long enough to acquire a sense of how they behave. (Think of the trust we place in postal workers, for example.)

If we can somehow give the apparatus of science – the empirical tests, the supervised institutions, the open discussions – more public visibility, it would be clearer to non-scientists that scientific results are more than opinions or beliefs. Science then might acquire more authority, with the public and with politicians, without having to rely on miracles or staging magic shows.

Hybrid technology developed for 2D electronics

A new technology that is CMOS compatible and integrates different 2D materials into a single electronic device has been developed by researchers in the US. The team constructed large-scale electronic circuits based on graphene and molybdenum-sulphide heterostructures. The fabrication process might be extended to develop such heterostructures from any type of 2D layered material, with potential applications in flexible and transparent electronics, sensors, tunnelling FETs and high-electron mobility transistors.

2D materials are creating a flurry of interest in labs around the world because they have dramatically different electronic and mechanical properties from their 3D counterparts. This means that they could find use in a host of novel device applications, such as low-power electronic circuits, low-cost or flexible displays, sensors and even flexible electronics, which can be coated onto a variety of surfaces.

The most well-known 2D materials are graphene (which is a sheet of carbon just one atom thick) and the transition metal “dichalcogenides”. Such materials have the chemical formula MX2, where M is a transition metal (such as molybdenum or tungsten) and X is a “chalcogen” (such as sulphur, selenium or tellurium) – they go from being indirect band-gap semiconductors in the bulk to direct band-gap semiconductors when scaled down to monolayers. Such monolayers also efficiently absorb and emit light, and so could be ideal for making a variety of opto-electronic devices such as light-emitting diodes and solar cells.

Separate and selective etching

Researchers have now combined, for the first time, both graphene and molybdenum sulphide (MoS2) heterostructures into single electronic devices and circuits, thanks to a new technology that allows them to selectively and separately etch both 2D materials. The team, led by Tomás Palacios of the Massachusetts Institute of Technology, grew the heterostructures using chemical vapour deposition. MoS2 was used as a transistor channel, while graphene was used to make contact electrodes and circuit interconnects.

The team, which includes scientists from MIT, Harvard University and the United States Army Research Laboratory, patterned and etched MoS2 into an isolated channel on its heterostructures. Next, patterned aluminium oxide was formed on this channel by low-temperature atomic layer deposition and lift-off techniques. “The MoS2 is partially covered by the Al2O3 and this layer serves as both a dielectric layer as well as the etch-stop layer,” say team members Lili Yu and Elton Santos.

Flexible and transparent electronics

The fabrication process could ultimately be used to construct heterostructures from any type of 2D layered material, and circuits made from such structures could be used in heterojunction devices, such as lasers, tunnelling FETs and high-electron mobility transistors. “And, since every component in the circuits is extremely thin, the finished devices are flexible and transparent, and so could find use in applications like wearable electronics or sensors that could be wallpapered and attached to any type of surface,” says Santos.

Yu adds that the team is now busy trying to integrate a thin insulating layer of another 2D material – hexagonal boron nitride – into its structure. “We are also trying to create seamless graphene/MoS2 junctions,” she explains. “Some other applications for this type of hybrid junction, such as, for instance, photodetectors and memory devices, are under investigation too.”

The current work is detailed in Nano Letters 10.1021/nl404795z.

Mildred Dresselhaus: the queen of carbon

Her pioneering work in carbon science has earned her the nickname “the queen of carbon” but Mildred Dresselhaus has never been motivated by personal accolades and prizes. “I work for love. Awards they come, but they are not that important. The doing of the work is what’s important, not so much the recognition,” says Dresselhaus in this recent video interview with Physics World.

In addition to her research on materials, Dresselhaus is revered as an educator. In 2012 she received the Enrico Fermi Award for her roles in science leadership and mentoring – an award she finds fitting because, she recalls, Fermi had a significant influence on her scientific development during her PhD, when he acted as her mentor for a while. In the second part of her interview with Physics World, Dresselhaus talks about her passion for education and how her students have been just as inspirational to her as she has been to them. “You’re not teaching for yourself, you’re teaching for them, conveying not only information but motivation,” she explains.


US sanctions on Russia hit ITER council

The ITER fusion experiment has had to bow to the impact of US sanctions against Russia and move the venue of its council meeting, scheduled for 18–19 June, from St Petersburg to the project headquarters in Cadarache, France. In a letter to council members on 30 April, ITER director-general Osamu Motojima had already warned of the impact of “international tension” on the $15bn project. He had said he was “very much concerned about the current international tension and its possible political impact on the ITER project”, adding that the project “should remain neutral, staying outside of the world political loops”.

Although council chair Robert Iotti says that council members have been working “quite harmoniously” to resolve a number of problems, including the venue for the council meeting, on 15 May a change of venue was announced. ITER spokesperson Michel Claessens says that because of the difficulty of US delegates travelling to Russia, all seven member delegations – China, the EU, India, Japan, Russia, Korea and the US – have agreed to the move. He adds that the Russian organizers were disappointed but agreed that they could not have the council meeting without the US taking part.

Money problems

Regardless of the venue, delegates face a daunting task on many fronts, not least that the US Congress is threatening to cut funding to ITER amid concerns over the project’s escalating cost. ITER has already had to make do with getting far less money from the US during the last few years than the $350m per year originally planned by the Department of Energy (DOE). The project has received only $200m this year and the administration has proposed just $150m for 2015. However, this cap pushes spending further into the future, which increases ITER’s total cost.

I’m really beginning to believe that our involvement in ITER is not practical, that we will not gain what we hope to gain from it, and instead this money could much better be spent elsewhere

US senator Dianne Feinstein

The DOE has declined for several years to give a figure for ITER’s estimated total cost, but in April Ned Sauthoff, head of the US ITER contingent, gave his latest projections to the DOE’s Fusion Energy Sciences Advisory Committee, which took the predicted price tag from $1.1bn to $3.9bn. Reaction in Congress was harsh. “I’m really beginning to believe that our involvement in ITER is not practical, that we will not gain what we hope to gain from it, and instead this money could much better be spent elsewhere,” senator Dianne Feinstein, energy and water subcommittee chair, said at a hearing on the same day. Press reports suggest that in its mark-up of the proposed budget, the Senate might suggest more savage cuts to ITER or even withdrawal, although any changes must be agreed by the House and the administration.

One of Congress’s demands of ITER is that the council must enact the 11 recommendations made in a recent scathing management assessment, which include moving ahead with replacing Motojima as director-general. Critics, including those in Congress, will be watching closely to see how the council acts.

Laser mimics biological neurons using light

A tiny laser built up from thin layers of semiconductor can behave just like a biological neuron, according to a group of physicists in France. The team has shown that its “micropillar” laser can be made to fire only when its input shifts by some minimum amount, just like a neuron. Successive firings of the device must also be well separated in time, which is also a crucial feature of biological neurons.

The human brain consists of around 100 billion neurons, each of which receives electrical signals from other neurons via thousands of tiny junctions known as synapses. When the sum of the signals across its synapses exceeds some threshold value, a neuron “fires” by sending a series of voltage spikes to large numbers of other neurons. As such, neurons are “excitable”: below a certain input threshold the system’s output is very small and linear, but above the threshold it becomes large and nonlinear.
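This thresholded, all-or-nothing response – together with the refractory period described below – is captured by the classic leaky integrate-and-fire model. A minimal sketch (all constants are illustrative, not parameters of the micropillar laser):

```python
# Minimal leaky integrate-and-fire neuron illustrating "excitability":
# sub-threshold input produces only a small, decaying response, while
# supra-threshold input triggers a spike followed by a refractory window.
# All constants are illustrative.
THRESHOLD = 1.0
LEAK = 0.9        # fraction of the potential retained each time step
REFRACTORY = 3    # time steps during which the neuron cannot fire again

def simulate(inputs):
    """Return the time steps at which the neuron fires."""
    v, refractory, spikes = 0.0, 0, []
    for t, i in enumerate(inputs):
        if refractory > 0:
            refractory -= 1
            continue
        v = LEAK * v + i
        if v >= THRESHOLD:
            spikes.append(t)
            v = 0.0
            refractory = REFRACTORY
    return spikes

# A weak input never fires; a strong pulse fires once, and a second strong
# pulse arriving inside the refractory window is ignored:
print(simulate([0.1] * 10))        # []
print(simulate([1.2, 1.2, 0, 0]))  # [0]
```

The laser analogue swaps the membrane potential for the optical field in the cavity, but the two defining features – a firing threshold and a dead time between firings – are the same.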

Recreating the brain

Scientists have long tried to build artificial neurons that can recreate the enormous processing power of the brain, which has a capacity for understanding that has no parallel in existing digital computers. Much of this effort has concentrated on silicon circuitry, while a few groups have explored more novel approaches, such as the one that exploits superconducting devices known as Josephson junctions (see “Superconductors could simulate the brain”).

The latest work does away with electronics altogether and instead relies on optics. Sylvain Barbay and colleagues at the CNRS Laboratory for Photonics and Nanostructures outside Paris use what is known as a micropillar laser. Measuring just 10 µm high and a few microns across, the cylinder-shaped device consists of alternating layers of semiconductor materials grown on a substrate. These layers create a lasing medium bounded by two parallel mirrors, and a region that absorbs low-intensity light while transmitting light of higher intensities.

Quick firing

To demonstrate its neuron-like qualities, the researchers optically pumped the device with a 794 nm diode-array laser and then excited it further with an 800 nm titanium-sapphire laser. Using a single pulse from the latter, they were able to demonstrate excitability, with the device firing only when the incident pulse energy reached some tens of nanojoules. This occurs on timescales of just 200 picoseconds, making the artificial neuron much quicker than either its biological or electronic counterparts, which have response times on the order of milliseconds.

Using pairs of pulses from the titanium-sapphire laser, Barbay and colleagues were then able to demonstrate a second fundamental attribute of their imitation neuron: a minimum gap in time between firings. Without this gap, explains Barbay, a neuron’s activity could become disordered, with noise triggering other pulses. They found that the device fired only once when subject to two input pulses spaced less than 150 ps apart – that time interval being known as the “absolute refractory period”. The researchers also found that the device has a “relative refractory period”, which occurs between 150 and 350 ps after the first pulse is delivered. During this time period, the resulting firing is weaker and requires a stronger trigger than is needed after 350 ps, when the device fires just as it does in response to an initial pulse.

System has ‘memory’

“This relative refractory period has never been seen before in optical systems,” says Barbay. “Its observation is interesting since it enforces the analogy with biological neurons and because it shows that the system has a ‘memory’ of its previous state.”

Barbay points out that scientists are still “very far” from building a computer that is able to mimic the brain, because, he says, it is not possible to reduce all neurons to a single model and because the number of neurons and connections in the brain is so far beyond current technological capabilities. But he adds that his group’s device has the advantage of being both small and easily coupled, potentially allowing the construction of small networks of neurons.

The research is described in Physical Review Letters.

Copyright © 2026 by IOP Publishing Ltd and individual contributors