
Graphene could make ‘perfect’ solar cells

A new device that combines graphene with special metallic nanostructures could lead to better solar cells and optical communications systems. That is the claim of researchers in the UK who have measured a 20-fold enhancement in the amount of light captured by graphene when it is covered by such nanostructures. The work provides further evidence that the material might be ideal for making photonic and optoelectronic devices, despite the fact that it does not have an electronic bandgap.

Graphene is a sheet of carbon atoms arranged in a honeycomb-like lattice just one atom thick. Since its discovery in 2004, this “wonder material” has continued to amaze scientists with its growing list of unique electronic and mechanical properties. Some believe that graphene could find uses in a number of technological applications – even replacing silicon as the electronics industry’s material of choice. This is because electrons whiz through graphene at extremely high speeds, behaving like “Dirac” particles with no rest mass.

Graphene also shows great promise as a candidate for photonics applications – especially optical communications, where speed is an issue. The material has an ideal “internal quantum efficiency” because almost every photon absorbed by graphene generates an electron-hole pair that could, in principle, be converted into electric current. Thanks to its Dirac electrons, it can also absorb light of any colour and has an extremely fast response to light. The latter suggests that it could be used to create devices that are much faster than any employed in optical telecommunications today.

Drawbacks addressed

Researchers have also already shown that they can make basic solar cells, light-emitting devices, touch screens, photodetectors and mode-locked ultrafast lasers from the material. However, there are, of course, drawbacks: graphene’s “external quantum efficiency” is low – it absorbs less than 3% of the light falling on it. Furthermore, useful electrical current can only be extracted from graphene-based devices that have electrical contacts with an optimized “asymmetry” – something that has proven difficult to achieve.
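(For context, the sub-3% figure quoted here is usually traced to graphene’s universal optical absorption, a standard result set by the fine-structure constant α alone rather than a number reported in this particular study:

A ≈ πα = πe²/(ħc) ≈ π/137 ≈ 0.023,

i.e. a single graphene layer absorbs roughly 2.3% of the light incident on it.)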

Now, researchers at the University of Cambridge and the University of Manchester may have solved both these problems by pairing up graphene with plasmonic nanostructures. These are metal devices that enhance local electromagnetic fields in a material by coupling incoming light with electrons on the surface of the metal. The nanostructures are fabricated on top of graphene samples to concentrate the electromagnetic field in the region of the material where light is converted to electrical current, so as to dramatically increase the generated photovoltage.

The team, which includes Manchester’s Andre Geim and Kostya Novoselov, winners of the 2010 Nobel Prize for Physics for their discovery of graphene, started out by preparing samples of graphene using the now-famous “sticky tape” method. This involves mechanically shaving off single layers of graphene from a block of graphite. The researchers then made two-terminal electronic devices from the material by forming contacts made of titanium and gold on the graphene using electron-beam lithography. Next, various plasmonic nanostructures were assembled close to the contacts.

Highest efficiency so far

The new devices have an external quantum efficiency of almost 50%, the highest value to date for graphene, says team member Alexander Grigorenko of Manchester. This boosts graphene’s light-harvesting capacity by more than an order of magnitude compared with an otherwise identical device that lacks the nanostructures, without sacrificing its speed. “If the plasmonic nanostructures we employed were optimized, it should be possible to realize perfect light-to-current conversion, where every photon falling on graphene is converted into current,” he told physicsworld.com. “This is exactly what the solar cell industry is waiting for.”

Furthermore, the problem of creating contacts with the desired asymmetry is addressed through the use of titanium and gold in the device.

“Our work is the first step towards ‘perfect’ photodetectors and solar cells because we have shown that plasmonics helps graphene convert light into electricity with ideal efficiency,” says Andrea Ferrari, who led the Cambridge effort in the collaboration. “Optimizing light interaction and photovoltage generation in graphene will be key for a range of applications, such as solar cells, imaging and telecommunications.”

Profusion of charge carriers

Graphene could also be a viable alternative to conventional plasmonic and nanophotonic materials, he added, because it has many advantages over these materials. It can absorb light at any wavelength from the ultraviolet through the visible to the far-infrared, which means there is no need for bandgap engineering; and it can confine this light into unprecedentedly small volumes. The profusion of charge carriers in graphene and the fact that researchers can now produce the material in large quantities and over large areas means that it could outperform all existing semiconductor technologies in applications as diverse as photodetectors, tunable ultrafast lasers and imaging, claims Ferrari.

“Graphene seems a natural companion for plasmonics,” adds Grigorenko. “We expected that plasmonic nanostructures could improve the efficiency of graphene-based devices, but did not expect that the improvements could be so dramatic.”

Spurred on by its new results, the team now plans to study how light interacts with graphene in more detail. The researchers also hope to optimize their plasmonic nanostructures, for example by exploiting coupled or “cascaded” plasmon resonances that could further enhance the photovoltage generated. “We might also be able to increase light absorption even more by employing several layers of graphene, something that could lead to a 100-fold enhancement of the photovoltage,” states Ferrari.

The work is published in Nature Communications 10.1038/ncomms1464.

Tracking tsunamis with radar

Tsunamis such as the one that devastated parts of Japan in March could be monitored by an early-warning system based on radar measurements, claims a group of geoscientists.

Large tsunamis can be triggered by a number of geological phenomena, including earthquakes, landslides and the eruptions of marine volcanoes. When triggered in the deep ocean, these waves can travel at speeds in excess of 800 km per hour, but because of their very long wavelengths their amplitudes at the surface of the water are very small, which makes them difficult to detect. As a tsunami approaches land, most of its energy becomes focused into one giant wave, often with devastating results.
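(The 800 km/h figure follows from the standard shallow-water wave relation rather than from anything specific to this study: a tsunami’s speed depends essentially only on the water depth d, via

v = √(g·d),

so for an illustrative open-ocean depth of d ≈ 5000 m this gives v ≈ √(9.8 × 5000) ≈ 220 m/s, or roughly 800 km/h.)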

To limit the impacts of these hazards, national authorities need to identify any tsunamis as early as possible and to gain an idea of what profile the giant waves will take when they strike the coast. In regions with relatively steep continental shelves, such as the west coast of the US, some quantitative real-time observations of tsunamis have been possible using deep-water pressure sensors to observe changes in the elevation of the sea surface. But in regions with wider, shallower continental shelves, such as South East Asia and the east coast of the US, the effectiveness of these systems is limited.

Signature currents

Now, an alternative method based on radar has been developed by a group of researchers in California, working with colleagues in Japan. The technique capitalizes on the fact that networks of coastal radar systems are routinely used by many countries to measure surface currents. Rather than tracking the tsunami directly, the technique uses these signals to identify unusual current flows that are generated by the giant waves as they propagate across the ocean.

In demonstrating the feasibility of their technique, the researchers say that they were able to recreate a profile of the recent tsunami in Japan, which was triggered on 11 March by a magnitude 9.0 earthquake off the coast of Sendai. They analysed data captured by five high-frequency radar sites spanning 8200 km located on the coasts of Japan and California. By combining three different types of analyses the researchers were able to identify the tsunami using three different frequencies of radar signal: 5 MHz, 13.5 MHz and 42 MHz.

The researchers report their findings in a paper published in Remote Sensing under the lead authorship of Belinda Lipa of Codar Ocean Sensors in California. They say that the Japanese tsunami could have been detected up to 45 min prior to its arrival at the nearest tide gauge – a device used to measure sea levels and detect tsunamis. The researchers make it clear, however, that the signals can only be detected once the tsunami reaches a continental shelf. This is why the technique would offer the greatest warning times where tsunamis must cross wide, shallow shelves, such as those off the coasts of Japan and the UK.

“Because of the diversity in local bathymetry [water depth], there is considerable variation in the warning time available,” write the researchers. These times, they say, vary “from minutes on the US Pacific coast to hours for some areas of the Atlantic coast and South East Asia”.

Christophe Vigny, a seismologist at the Ecole Normale Supérieure (ENS) in Paris, believes that the new system is promising because it uses shore-based instruments, making it easier to maintain than oceanic systems. “A tsunami detected by a ground network is certainly something that people might trust more than a probability of something happening,” he says. Vigny cautions, however, that the system still needs to be demonstrated in real time before it can be considered effective.

Lipa and her team now intend to develop their research through further study of data from coastal radar on the shores of the north Atlantic. They say that a more detailed analysis of the weaker radar signals could lead to a unique view of the propagation of a tsunami and its interaction with the ocean floor.

Manipulating the middle ground

An international group of researchers has developed a new way of controlling light using nanotechnology. The technique focuses on the boundary between two media, such as air and water, treating the boundary itself as a third medium. This allows the scientists to manipulate the reflected and refracted beams in ways that are not possible with natural materials, creating “designer light”.

The scientists, based at Harvard University in the US, claim that their discovery has inspired them to derive a more general expression of Snell’s law, which predicts the path taken by a beam of light travelling from one medium to another. This could help in designing new optical components such as planar lenses and polarizers.

At the boundary

Reflection and refraction occur whenever light crosses the boundary between two different media at an angle. It is this incident angle and the optical properties of the two media that decide the angles of refraction and reflection, according to classical optics. But now, Nanfang Yu and colleagues from the Capasso research group have shown that if the boundary contains structures on the nanoscale, these laws need to be updated.

Standard treatments of reflection and refraction regard the boundary between media as a homogeneous interface separating the two. “What motivates us is the question: ‘Why not treat the interface as a third ‘active’ medium?’” says Yu, who is also lead author of a paper on the research published in Science. “We realized that if we artificially structure the interface using nanotechnology, it can introduce an abrupt phase shift and a resultant time delay between the incident light beam and the reflected and refracted beams,” he explains.
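(The resulting generalization of Snell’s law is usually written as follows – the notation here is the standard textbook form rather than a formula reproduced from the article:

n_t sin θ_t − n_i sin θ_i = (λ₀/2π) · dΦ/dx,

where n_i and n_t are the refractive indices on the incident and transmitted sides, λ₀ is the free-space wavelength and Φ(x) is the phase shift imprinted along the nanostructured interface. When the phase gradient dΦ/dx vanishes, the familiar Snell’s law is recovered.)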

Yu says that this is the first time anyone has manipulated the boundary between media in the optical regime. “Interestingly, decades ago people working on microwaves and millimetre-waves demonstrated the so-called “reflectarrays” and “transmitarrays” that can shape the reflected and transmitted beams. The connection between that and our results is that both use abrupt phase changes associated with antenna resonances,” says Yu. But that research was not at the nanoscale, and the structures involved could not be regarded as an interface or a boundary because the spacing between the array elements was larger than the wavelength.

The light fantastic

The Harvard team uses gold V-shaped plasmonic antennas – or pixels – patterned on silicon wafers as optical resonators. The array is structured on a scale much smaller than the wavelength of the incident light, allowing the engineered boundary between the air and the silicon to impart an abrupt phase shift or “phase discontinuity” to the light passing through. Yu points out that, while previous research concentrated on enhancing the near-field properties of optical antennas, his group uses “a somehow overlooked property of such structures – their phase response”. The phase difference between the incident and scattered light varies considerably over one antenna resonance. By operating the antennas at different resonance conditions, a wide range of phase – and therefore time – delays are achieved. Effectively, each antenna captures the incident light, stores it for a given time and then reemits the light into the free space.

The researchers’ interface is designed pixel by pixel as a series of optical resonators, such that the structure of the array determines the phase shift. By doing this, they can tailor the interface to reflect or refract in arbitrary directions, allowing a great degree of freedom in “shaping” the light. “For example, light coming in at an angle can be reflected back towards the light source – we call this phenomenon “negative” reflection because ordinarily the reflected beam is directed away from the light source,” says Yu. There is also “negative” refraction, where the refracted light bends in the “wrong” direction as compared with the prediction of Snell’s law. Yu says that there are two critical angles for total internal reflection, depending on the relative direction of the incident light and that of the gradient of the phase delay along the interface.

In one of the experiments they conducted, the scientists made the light ray hit the interface perpendicularly from below; because of the varying structure of the antennas, the scattered light propagated at an angle rather than perpendicular to the surface, which is how it would naturally propagate (see image “Perpendicularly incident light ray”). They also produced a vortex beam – a helical, corkscrew-shaped stream of light – from a flat surface (see image “Vortex beam and other strange optical effects”).

Integrated optics

The researchers are now working on applications such as planar lenses that could focus an image without the need for a compound lens to correct aberrations. “The advantage of the plasmonic interface is that it moulds the optical wavefront right after the light passes through it, unlike conventional optical components like bulk lenses, which rely on gradual phase accumulation along the optical path to change the wavefront of propagating light. This makes our design favourable for integrated optics,” says Yu. He claims that some of their designs – such as the vortex beam – perform so well that they do not expect major difficulties in producing useful planar optical components for the long-wavelength (mid- and far-infrared) range. For the shorter wavelength range, however, they need to find a better non-metal resonator design.

Computer architecture recreated on quantum device

Physicists in California claim to be the first to implement a quantum version of the “Von Neumann” architecture found in personal computers. Based on superconducting circuits and integrated on a single chip, the new device has been used to perform two important quantum-computing algorithms. Conventional Von Neumann architecture includes a central processing unit (CPU) linked to a memory that holds both data and instructions.

Quantum computers, which exploit purely quantum phenomena such as superposition and entanglement, should in principle be able to outperform classical computers at certain tasks. However, building a practical quantum computer remains a challenge because the quantum states that such systems employ are difficult to control and are easily destroyed.

In implementing the Von Neumann architecture using superconducting quantum circuits, Matteo Mariantoni and colleagues at the University of California, Santa Barbara have taken an important step towards a working computer. Mariantoni told physicsworld.com that, to the best of his knowledge, he and his colleagues are the first to create such a quantum version of the architecture.

Marrying CPU and memory

The research team’s quantum CPU, or “quCPU”, comprises two superconducting “phase quantum bits” (qubits) connected by a superconducting microwave-resonator data bus. A phase qubit is a single Josephson junction, which consists of two pieces of superconducting material separated by a very thin insulating barrier. The logic levels – 0 and 1, for example – are defined by the phase difference between the electrodes of the junction.

Each qubit is connected to its own quantum random access memory (quRAM) element, which is made up of a superconducting resonator that stores quantum information in the form of trapped microwaves and a “zeroing register” – a two-level system that clears a qubit of information. The quRAM effectively acts like ordinary RAM that preserves the quantum nature – such as entanglement – of the information it stores.

The bus and the quRAM operate at fixed frequencies, whereas the working frequency of a qubit changes when special “z-pulses” are applied. When the frequency of a qubit matches that of a quRAM or the bus, then quantum information can be exchanged between the two.
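As a rough illustration of why frequency matching enables this exchange, here is a toy calculation – not the authors’ code, and with an assumed, purely illustrative coupling strength. On resonance, a qubit coupled to a resonator swaps its excitation back and forth at the vacuum Rabi rate, so holding the z-pulse for about π/(2g) writes the qubit’s state into the memory.

```python
# Toy sketch of resonant qubit-resonator exchange (a standard cavity-QED result,
# not code or parameters from the experiment described above).
import numpy as np

g = 2 * np.pi * 20e6              # assumed qubit-resonator coupling, 20 MHz (illustrative)
times = np.linspace(0, 25e-9, 6)  # how long the qubit is held on resonance

# In the single-excitation subspace, the excitation oscillates between the
# qubit and the resonator: P_qubit = cos^2(g t), P_resonator = sin^2(g t).
p_qubit = np.cos(g * times) ** 2
p_resonator = np.sin(g * times) ** 2

for t, pq, pr in zip(times, p_qubit, p_resonator):
    print(f"t = {t*1e9:5.1f} ns   qubit: {pq:.2f}   memory: {pr:.2f}")

# A complete "write" to the memory takes half a vacuum-Rabi period:
print(f"full transfer after ~pi/(2g) = {np.pi / (2 * g) * 1e9:.1f} ns")
```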

Quantum operations

To perform an operation, Mariantoni’s team begins with the qubits “detuned” from the other components. Microwave pulses are then applied to load the system with quantum information, before z-pulses are applied to exchange information. Quantum operations are performed by the careful application of specific sequences of pulses.

In one experiment, the team performed the “quantum Fourier transform” operation with a process fidelity of 66%. In another experiment, Mariantoni and colleagues used the system to implement a three-qubit Toffoli OR phase gate with a 98% phase fidelity. Both of these operations are seen as essential for the operation of practical quantum computers.

“These figures of merit are very encouraging,” says Mariantoni. “However, numbers above 98% or even higher will be needed for a practical quantum computer to function.”

Long coherence times

Another important feature of the system is that the quantum memory can retain quantum information for much longer than the qubits. Such long “coherence times” are another practical requirement of a quantum computer. While the fidelity of the qubit states dropped below 20% after about 400 ns, the fidelity of the memories stayed above 40% for at least 1.5 µs.

The team is now working on increasing the number of quantum devices integrated on a single chip. According to Mariantoni, while boosting integration is fairly easy, operating such chips involves many more quantum operations. This means that the coherence times of the individual components must be boosted – something that is more of a challenge. The team is addressing this by finding ways of improving the quality of the dielectric and metallic materials used to make the devices.

The work is published in Science.

Reining in an asteroid



A single particle from the Itokawa asteroid. Courtesy: Science/AAAS

By Tushna Commissariat

When you think about near-Earth asteroids, what mostly comes to mind are discussions about how to blow them up or move them out of the way if they are headed towards us – perhaps with an afterthought of Bruce Willis. It is rather strange to think of engineering a method to ‘capture’ a neighbouring asteroid into an Earth-bound orbit. But that is exactly what a recent paper published on the arXiv preprint server proposes.

The researchers, from Tsinghua University in Beijing, China, have proposed coaxing a near-Earth object (NEO) into a “temporary capture”, such that it becomes a satellite of the Earth. A likely candidate for this type of acquisition would be an NEO with a low-energy orbit that can be captured by Earth with a slight increase in the asteroid’s velocity. The authors point out that some Jovian comets are routinely claimed by Jupiter, orbiting the gas giant for anywhere between one and several orbits – a period of a few Earth years.

Unfortunately, this is not something that will occur naturally between the Earth and any of its NEOs. But some NEOs pass tantalizingly close to Earth’s orbit and would require just a gentle nudge in the right direction. In the paper, the researchers consider the conditions necessary to engineer this artificially. They look at the mechanics of a three-body problem – the Sun, the Earth and the asteroid – and calculate at which orbital co-ordinates the capture would be successful and how much of a change in orbit and velocity, with respect to the NEO in question, would be required.

Using these parameters, they then listed possible candidates from the known NEOs. A candidate that caught their eye is a 10 m NEO that will pass within a million kilometres or so of Earth in 2049. Its orbital velocity is close enough to that of the Earth that it could be captured into an Earth-bound orbit by a velocity change of only 410 m/s. This would allow it to orbit Earth at nearly twice the distance of the Moon, before wandering off again, just as Jupiter’s captured comets do.

But what is the point of it, you ask? As the researchers themselves point out, “a 2 km-size metallic NEO, for example, may contain rich metals and materials worth more than 25 trillion dollars”. While the concept of mining an asteroid has been around for a while, a practical method has not been found. The Hayabusa mission, which recently returned from the asteroid Itokawa, was delayed by three years and its final sample comprised about 1000 particles of asteroid dust – more than enough for research but not exactly a bountiful harvest in terms of minerals (see image above). Having an asteroid “on a leash” would make it a lot easier to study and mine it.

What was Rutherford’s greatest discovery?

By James Dacey


This year marks the 100th anniversary of Ernest Rutherford publishing his seminal paper describing his discovery of the atomic nucleus. But Rutherford was an industrious researcher who made many remarkable contributions to science, including three discoveries that revolutionised our view of matter.

Rutherford’s first major scientific work led to him being awarded the Nobel Prize for Chemistry for his investigations into the disintegration of the elements and the chemistry of radioactive substances. One major experimental breakthrough during this period was his discovery that thorium gave off an “emanation” that was radioactive. Essentially, Rutherford had discovered a new radioactive gas – what we now know as thoron, an isotope of radon.

Rutherford received the Nobel prize in 1908, about 18 months after he had begun working at the University of Manchester, where he held the chair of physics for 12 years. It was during this time that Rutherford, working with colleagues including Hans Geiger and Ernest Marsden, carried out his famous scattering experiments, designed to probe the structure of the atom. The results led to Rutherford’s second “eureka moment” when he realised that the majority of an atom’s mass is concentrated in a relatively tiny volume at its centre — he had discovered the nucleus.

Rutherford’s third big contribution was to effectively become the world’s first alchemist when he transformed nitrogen into oxygen. This finding was a result of bombarding nitrogen gas with alpha particles so that higher energy protons were ejected.

Of course all three of these discoveries have transformed our view of atomic physics in different ways. But, just for a bit of fun, if you had to single out one of these three discoveries, which do you think is the greatest?

• That atoms are not always stable (his Nobel-Prize-winning work on radioactivity)

• That atoms have the majority of their mass concentrated in a nucleus

• The world’s first alchemy (converting nitrogen into oxygen)

Have your say and take part in our Facebook poll. And feel free to post a comment on the poll to explain your reasoning.

IOP members can also watch this short film about Rutherford’s discovery of the atomic nucleus. It includes interviews with keynote speakers at the Rutherford Centennial Conference, which was held in August at the University of Manchester.

In last week’s poll we posed a question that is highly pertinent to the big questions surrounding the future of astronomy and the financial situation in the US. We asked whether funding should be reinstated for the $6.8 billion James Webb Space Telescope (JWST), which is poised to be the successor to the Hubble Space Telescope. The question arose following a move by a US congressional committee to cancel the project after a series of cost overruns. The findings of our poll were highly conclusive: 90% of respondents voted “yes, keep the JWST”.

Between the lines


The “onion questions” in physics

Most questions in physics and astronomy are like onions. They may seem smooth and uncomplicated on the surface, but inside, many layers of subsidiary questions await anyone with the patience and talent to strip them away. “What are gamma-ray bursts?” is definitely an onion-type question, and Joshua Bloom’s book of the same title does a thorough job of unpeeling it. As Bloom writes in the preface, at one level the answer is simple: gamma-ray bursts (GRBs) are “unannounced flashes of high-energy light detected from seemingly random places on the sky”. But this bald explanation does not even begin to address the origins of these events, nor what Bloom, an astronomer at the University of California, Berkeley, calls the “engine” behind their creation.

What astrophysical processes could possibly compress so much energy – as much, in fact, as the Sun will release in its entire lifetime – into just a few seconds? And then there is the most intriguing question of all, which Bloom addresses in the book’s final chapter: what can GRBs tell us about the universe as a whole? These are still very active topics of research, and it is a pleasant surprise to find them discussed in a book aimed at a semi-popular audience.

What are Gamma-ray Bursts? is, in fact, the second in a promising new series from Princeton University Press on the “frontiers of physics”. Like its 2010 predecessor, Abraham Loeb’s How Did the First Stars and Galaxies Form?, it seems best suited for readers who want a “big picture” of a field before embarking on in-depth study. Although the book contains numerous equations, as well as graphs taken from research papers on GRBs, it is written in an accessible style. Moreover, unlike a journal article, it is possible for a newcomer to read it without constantly referring to earlier work for basic definitions and background. There are a few niggles, including a proliferation of acronyms, but on the whole, Bloom (and Princeton) deserves kudos for filling this gap.

  • 2011 Princeton University Press £19.95/$27.95pb 280pp

Written wonders of the universe

The BBC’s two critically acclaimed Wonders series saw millions tuning in each week to watch physicist Brian Cox deliver mountain-top lectures on the magnificence of the cosmos. But for some, particularly sticklers for traditional media, these programmes placed so much focus on the spectacle of nature that the wonder of scientific facts too often took second place to the special effects (and to Cox’s radiant haircut). If you are among them, then Seven Wonders of the Universe (That You Probably Took for Granted) by C Renée James offers a pleasant alternative to Cox and his crew.

The book’s fly-by tour of the cosmos, with its seven stop-offs that include “gravity”, “stuff” and “time”, does not contain a single photograph. Instead, James, an astronomer at Texas’ Sam Houston State University who regularly contributes essays to popular-science magazines, opts for old-fashioned prose, interspersed with the occasional sketchy cartoon. James defends this concept in her preface, pointing out that with so many stunning pictures of the heavens freely available from websites such as NASA’s, it feels a bit arbitrary to pick a crop for a published book. It is an excellent point, and James’ witty, lucid writing style brings humanness and a sense of perspective to a subject where technicolour blockbusters can leave viewers numbed.

In James’ cosmic journey, everything in nature is assigned a personality, from antiparticles being the evil twins of matter to the hailing of Jupiter as the solar system’s king. Particularly enjoyable is the chapter on light, in which the Sun is painted in varying portraits, including “happy visible” and “creepy ultraviolet”, to convey the idea that astronomers study different types of light to learn about different processes in the universe. At times, James overcooks the jokes, and the frequent references to American culture can sometimes leave foreign readers feeling on the outside of the joke. But on the whole, this geeky comedy is an effective strategy for taking the wonder of astrophysics and grounding it firmly in everyday life.

  • 2010 Johns Hopkins University Press £13.00/$25.00pb 256pp

Rutherford’s big discovery – 100 years later

In 1911 the New Zealand-born physicist Ernest Rutherford published a paper that was to revolutionize science. Rutherford’s famous alpha-particle scattering experiment transformed our understanding of the atom and inspired new areas of physics, including quantum mechanics.

The pioneering work was carried out at the University of Manchester where Rutherford held the Chair of Physics for 12 years. To mark the centenary of these landmark experiments, the university hosted a special week-long conference in August 2011. The event was organized by the UK’s Institute of Physics, which publishes Physics World.

In this short film, Physics World journalist James Dacey reports from the conference where he caught up with two of the keynote speakers. First, Dacey meets the University of York physicist David Jenkins, who describes how Rutherford’s experiments overthrew the prevailing picture that atoms were solid building blocks of nature.

In this discussion, Jenkins talks about how Rutherford’s work has led to some important practical applications, including big advances in the field of medicine. “Understanding the nucleus and radioactivity has led to many diagnostic techniques for medicine like positron emission tomography, or the radiotherapy cancer treatments that people receive.”

For a different take on Rutherford’s discovery, Dacey also met physicist John Schiffer of Argonne National Laboratory, who has been an active nuclear researcher since completing his PhD in 1954. Schiffer explains how, after visiting Rutherford’s laboratory, Niels Bohr was able to develop a coherent theory of quantum mechanics based on the idea of a nuclear atom.

Dacey also encourages Schiffer to take his imagination beyond fundamental physics by asking what might have happened if Rutherford had not made his discovery in 1911. In a fascinating response, Schiffer speculates that other scientists would have been unlikely to make the discovery before the onset of the First World War. Continuing this line of thought, Schiffer believes that the discovery of fission may then have been delayed until after the Second World War. “Would the first use of nuclear weapons have been in a third world war? You can write science fiction books about that,” he says.

And it is not just professional physicists who are celebrating the centenary of Rutherford’s discovery. Manchester’s Museum of Science and Industry is also hosting a special exhibition until the end of October, which provides an overview of Rutherford’s work and his legacy in the city of Manchester. Dacey takes a trip to the museum to meet the exhibit’s curator, Cat Rushmore. Rushmore gives Dacey a guided tour of the exhibit, which includes a number of artefacts from Rutherford’s lab such as his desk and chair and a letter to Rutherford from Bohr describing how much he admired the Manchester laboratory.

Licence to stun

For most of their history, police officers have had three options for using force when confronting violent criminals: truncheons, dogs or guns. The first of these can tip the outcome of an unexpected violent confrontation in the officer’s favour, but it is still an instrument of blunt trauma, reliant upon its handler’s strength and skill. Dogs have tremendous psychological impact, can often prevent violence from occurring in the first place and are particularly useful in tracking people down. However, their bites sometimes require extensive medical treatment and can become infected. And guns, while necessary in some situations, carry a high risk of death or serious injury. Ideally, the police should have access to other technologies that can stop people with minimum risk to both suspects and officers.

In recent years, police forces have sought to bridge this gap by equipping their officers with less-lethal weapons such as CS incapacitant spray and TASERs. The term “less lethal” is used advisedly, because the difference between less-lethal weapons and non-lethal ones is more than semantic. Despite decades of research, the Star Trek phenomenon of “phaser set to stun” – in which a target immediately slumps to the floor unconscious and recovers with no ill effects – is still the stuff of science fiction. People have been seriously injured and even killed in incidents involving some less-lethal weapons, and because no real-life technology is both completely harmless and completely efficient at stopping someone (figure 1), the use of force in dealing with violent situations is always going to be controversial.

Ultimately, though, all of the arguments for or against the use of less-lethal weapons boil down to the same two questions: are the weapons safe and are they effective? These questions can only really be addressed by science, and researchers around the world have been working on supplying some answers. Within the UK, this effort has been led by two organizations: the Home Office’s Centre for Applied Science and Technology (formerly known as the Scientific Development Branch, or HOSDB) and the Defence Science and Technology Laboratory (Dstl).

Physicists at both centres have been crucial members of the teams that sifted through hundreds of available weapons systems to find the safest and most effective. They have designed and carried out experiments that ranged from establishing the force with which various projectiles would hit the body to measuring the power outputs of electrical weapons. One team even created a digital mannequin of the human body that had the same electrical properties as human tissue, so that the paths of currents produced by electrical weapons could be modelled. Working with engineers, materials scientists, chemists and biomedical scientists, they gave medical analysts and experts on the policing environment the evidence they needed to decide which systems – if any – should be adopted by police forces in the UK.

The need for options

Although there are many different types of less-lethal weapon (see box), they only really work in one of two ways. One is “pain compliance”, which essentially means that the weapon causes targets enough pain that they no longer want to keep doing whatever they were doing. Some types of less-lethal weapons that use pain compliance include impact rounds – projectiles fired from a gun that are designed not to penetrate the skin – and PAVA spray, a form of synthetic pepper spray that causes pain and streaming in the eyes and nose.

The other method is “incapacitation”, where a weapon actually prevents the target from continuing their actions. CS spray and electrical weapons are usually classified in this category. In practice, there is often cross-over between the two methods: CS spray is painful, while impact rounds can incapacitate. Some weapons also have a very strong built-in deterrent effect. A good example is the TASER, an electrical weapon that incorporates a laser beam as a targeting device. Suspects who see its bright red spot on their chest will usually concede that the game is up without police needing to fire the weapon.

While all less-lethal weapons carry risks, one could be forgiven for thinking that those that have made it onto the market must be reasonably effective, safe and well made. The fact is that the majority are not, so we must test them carefully to find the best. In the US, where these weapons systems are often designed and are widely used, there are literally thousands of police forces, ranging from rural departments with a sheriff and a few deputies to metropolitan forces that employ tens of thousands of officers. Each force can purchase any weapons system it wants – all of them less lethal than the guns most officers already carry – and there are dozens of manufacturers willing to sell to them. Unfortunately, few forces have the expertise or resources required to evaluate the safety and effectiveness of multiple systems, so a less-lethal weapon’s success is often measured in the reduced cost of lawsuits taken out against the police for excessive use of force.

On the UK mainland, in contrast, most police officers are unarmed, so the public perceives any new weapons as an increase in force. Although CS spray was introduced in 1996, much of the drive to adopt less-lethal weapons stemmed from a report by the Independent Commission on Policing for Northern Ireland that was published in 1999 as a result of the Good Friday peace accord. The Patten Report included two recommendations that discussed finding a range of replacements and/or alternatives to the plastic baton round, a controversial less-lethal weapon that was then used by police in Northern Ireland. Other factors behind the adoption of less-lethal weapons included new human-rights legislation and public pressure following incidents in which people armed with knives or swords were shot by police.

Thanks to these various influences, an operational requirement (OR) for less-lethal alternatives was produced in 2000, and updated in 2001, at the behest of the Association of Chief Police Officers and the Northern Ireland Office. Written by a steering group composed of specialists from the police, Home Office, Ministry of Defence and the Northern Ireland Office, the OR set out 22 separate criteria (see box) for analysing the performance of all such weapons. Some criteria, such as minimized injury and lethality, were deemed more important than others, and it was accepted that no weapons system would perform well against all of them. The question, instead, was how each system performed overall, in comparison with others.

TASERs: a case study

One of the weapons that was approved for UK use was the TASER. This gun-shaped electrical device was invented by a physicist, John Cover, who named it “Thomas A Swift’s Electric Rifle” after a series of adventure books he read as a boy. TASERs have been used in the US since the mid-1970s. They work by firing two barbs that are attached to a base unit by 6.4 m-long wires. The darts are oriented at an 8° angle to each other, which gives the barbs the optimum spread at the recommended firing distance of 4 m. When they have attached to the subject, a series of electrical pulses passes between them. These pulses override those generated by nerves sending messages to the muscles, causing the muscles to contract and the subject to fall over.

The way the TASER was tested against the OR makes it a useful case study. A few criteria – notably cost, legal implications, acceptability and the authority required for use – were deemed beyond the scope of a scientific analysis. Others, including the ease of operation, mobility and flexibility, repeatability, need for specialist officers and training, were examined in a series of handling trials. A total of 97 police officers from 28 forces, plus five prison officers, were trained in the weapon’s use and then subjected to a variety of handling scenarios, such as firing around or over a riot shield or at a target moving towards them. The remaining criteria were evaluated either by reviewing existing data or performing new experiments.

One very important resource for the researchers was the large body of written reports and video footage from the many hundreds of thousands of times TASERs have been deployed in the US. Such footage included numerous examples of police officers volunteering to be tasered by colleagues at conferences, stepping up one after another to prove their bravery for the cameras. The overwhelming conclusion was that most people shot with a TASER topple over immediately. The same body of evidence also showed that targets generally recover as soon as the current is stopped.

However, the culture of unarmed policing in the UK means that the British public expect violent situations to be resolved with the minimum force necessary. Where weapons do have to be used, we expect their relative safety to have been quantified by a competent body – and, what is more, we expect that body to be independent from the weapons’ manufacturers. Much of the testing previously carried out had been financed by these same firms, which also maintain the largest database of incidents of TASER use. Although such data are not necessarily biased, for something so important, it was deemed necessary for evidence to be above accusations of financial interest.

The accuracy of the TASER was tested on a firing range operated by HOSDB. These tests showed that the weapon’s barbs tended to fall below the aim point towards the longer end of its 6.4 m range – a limit that was itself considered a drawback – but it was judged to be accurate enough for practical police use. Testing the speed of the barbs proved trickier. The electrical field generated by the TASER can interfere with sensitive electrical detectors, while the trailing wires rendered standard light-gate equipment – in which the projectile passes through two beams of light that are a known distance apart – useless. The solution was to calculate speed the old-fashioned way: a high-speed camera captured the time it took for the barbs to travel a distance marked out on a wall, and researchers simply divided distance by time.
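A minimal sketch of that calculation is given below; the frame rate, frame count and marked distance are made-up illustrative numbers, not values from the HOSDB tests.

```python
# Hypothetical worked example of the "old-fashioned" speed measurement:
# count frames on the high-speed footage, convert to time, divide distance by time.
frame_rate = 2000        # frames per second of the high-speed camera (assumed)
frames_elapsed = 20      # frames taken to cross the marked distance (assumed)
marked_distance = 0.50   # metres marked out on the wall (assumed)

time_elapsed = frames_elapsed / frame_rate   # seconds
speed = marked_distance / time_elapsed       # metres per second

print(f"barb speed = {speed:.0f} m/s")       # 50 m/s with these made-up numbers
```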

Some of the most important and detailed tests investigated the effects of TASERs on the human body. Physicists at both HOSDB and Dstl carried out extensive analysis of the electrical signal that TASERs generate when applied to a range of resistances present in the human body (47–4700 Ω). When combined with the average placement of barbs found during accuracy testing, these measurements became the basis for analysing the TASER’s effect on the body. To aid the analysis, the Dstl group also developed a highly sophisticated computational model that turns all the internal structures of a 3D human male into a discrete matrix of cubes that are assigned appropriate electromagnetic properties. This model was used to track the path of the TASER’s electrical pulse through the body, allowing researchers to establish how much current would cross the heart.

One reason for performing such extensive medical modelling and testing was to gauge the likely effects of TASERs on vulnerable populations, such as people who wear pacemakers or use illegal drugs. A TASER’s electrical pulse can cause transient changes in heart rhythms; more specifically, it can increase the time lapse between two points (called Q and T) in a heart’s electrical waveform. If this Q–T interval becomes too large, the end of the electrical signal of one beat can interfere with the signal controlling the next beat, producing a potentially lethal heart arrhythmia known as torsades de pointes. Although this is unlikely to happen in healthy subjects, some medical conditions and prescription drugs (including statins and the common antibiotic erythromycin) are also known to increase the Q–T interval, so concerns were raised about possible cumulative effects. The effects of illegal drugs on Q–T intervals, in particular, were not well understood, and while researchers reasoned that people with pacemakers are unlikely to be involved in altercations with police, the opposite is true for drug users.

After extensive preliminary work using mathematical models, a range of illegal drugs were introduced to samples of heart tissue. When two of them, PCP and ecstasy, were found to induce Q–T lengthening, they were put forward for further study in experiments on guinea-pig hearts. Once the animals had been humanely killed, their hearts were put into a Langendorff preparation, which allows the heart to keep beating by supplying it with nutrients, oxygen and appropriate electrical stimulation. These hearts could then be exposed to TASER-like electrical waveforms (which had been calculated as part of the digital modelling) at the same time as they were exposed to the drugs of interest.

These tests showed that for the most powerful model of TASER, there was at least a 60-fold safety margin for inducing an anomalous heart rhythm known as ventricular ectopic beats, which can precede more serious conditions such as ventricular fibrillation. In other words, the TASER would have to be at least 60 times more powerful or a person’s heart would have to be 60 times more vulnerable than the average to produce this adverse effect. This would be very rare and would only be the result of many unlikely cumulative factors. The experiment was also unable to induce ventricular fibrillation directly. A separate review of literature on pacemakers found that although their function was slightly impaired while TASERs were being deployed, they went back to normal operation immediately after the weapon’s electrical current stopped.

Narrowing the field

Not all of the devices on the market were tested as extensively as the TASER. Indeed, some of the more outlandish devices were eliminated either out of hand or by reviewing independent work and the manufacturer’s own claims. One of the most dangerous technologies weeded out at the review stage was a foam gun that sought to immobilize suspects by covering them in a hot, sticky substance. Not only was decontamination difficult, but if the foam entered the suspect’s mouth or nose, death by suffocation seemed inevitable. Other early rejections included the numerous Spider-Man-style nets and entanglement devices on the market. These have a limited range and their impact could potentially injure a suspect. They are also useless in cluttered and indoor environments.

Some devices were eliminated after testing showed that they either failed too many of the OR criteria, or failed some of the more significant ones – in particular safety, effectiveness or reliability. Although a vehicle-mounted water cannon passed scientific, operational and medical tests, and has been used in Northern Ireland, a hand-held version failed after researchers discovered that the weight of its associated backpack, combined with the recoil of the weapon itself, meant that users were likely to lose their balance. The hand-held cannon also had a limited range, as the force from the “slug” of water dissipated with distance.

With impact-type weapons, accuracy is vital, because you can only make a realistic assessment of a projectile’s effects if you know where it will hit the body. Many such devices failed on this criterion alone. For example, there is a gun on the market that can fire tennis balls at roughly 380 km/h, but the balls rarely hit the target. Most “beanbag” rounds – fabric sacks containing lead shot – also have accuracy problems, but a more serious flaw is that they are too likely to break bones or enter the target’s body. These rounds are fired from a shotgun in a rolled-up configuration, but although they are supposed to flatten out in flight and hit their target with a flat surface, there is little in the way of aerodynamics that would actually lead them to do this. As a result, the stitched edges of the beanbag bear the brunt of the impact, meaning that the force is delivered over a much smaller area. In some cases, tests showed that the beanbags began to rotate in flight, which adds a shearing effect to the force of the impact, thus increasing the risk of skin penetration.

A variant of beanbag rounds called a drag-stabilized beanbag or “sock round” also proved disappointing. Although sock rounds lack stitched edges and have tails that are meant to stabilize their flight, some of the types tested had serious quality-control issues. Indeed, one brand was found to have been manufactured with party balloons inside.

Ultimately, the only impact-type weapon now being used by UK forces is the Attenuated Energy Projectile (AEP). This type of round consists of a deformable head above a solid plastic base and it is extraordinarily accurate, reducing the chance that it will accidentally strike a subject’s head and cause potentially life-threatening injuries. Furthermore, in the event that an AEP round does strike a bony area of the body such as the head, it is engineered to deform and thus dump its energy into the target over a longer period of time. This longer period of deceleration reduces the force of the impact in much the same way as the crumple zone in a car, lessening the chances of a bone fracture.
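(The underlying physics here is simply the impulse–momentum relation, stated as background rather than as a figure from the testing programme: for the same momentum change Δp delivered to the target,

F_avg = Δp/Δt,

so stretching the deceleration time Δt – which is what the deformable head does – lowers the average force, and with it the chance of a fracture.)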

The chemical options considered as part of the review included chloroacetophenone (trade name Mace), natural and synthetic pepper sprays, and dibenz[b,f][1,4]oxazepine (known as CR). The first was rejected because of its status as a known carcinogen, coupled with the narrow margin of safety between incapacitating and lethal doses. CR was also rejected – although it is more potent than the CS spray already used in the UK, there are other operational issues, the most significant of which is that it does not dissolve in water, making decontamination difficult. Natural pepper spray has been used in the US since the 1990s, but because it is derived from a natural product, its potency is not consistent from batch to batch and it also contains hundreds of ingredients that would have to be tested individually. The only new chemical option that received a green light was synthetic pepper spray or PAVA, which contains just a couple of active ingredients and has undergone extensive toxicology tests.

Future technologies

The use of force in dealing with violent situations is always going to be controversial, and governments, human rights groups and society at large rightly take a close interest in the tools we give police to carry out their duties. The majority of work has now been done to evaluate what was already on the market and UK police do now have several different options when faced with violent criminals. However, manufacturers will continue to refine their products, and they occasionally come up with new ideas; indeed, some other devices remain under development or are still being tested in the UK. Before any of them hit the streets, politicians, the police and the public will need to discuss their merits and drawbacks. It is important that these discussions are informed by rigorous, physical research and then maybe, one day, police officers will be setting their phasers to stun.

Box: Types of less-lethal weapon

  • Kinetic-energy devices are “impact weapons” that deliver a physical blow via a projectile such as a bean bag or baton round.
  • Electrical devices such as TASERs incapacitate a target by sending an electrical current through the body.
  • Directed-energy devices produce electromagnetic rays with effects that range from dazzling a target to causing pain by heating the skin.
  • Water cannons deliver water in either pulses of 5–15 litres or a continuous stream of 900 litres per minute to knock people over.
  • Chemical-delivery devices include both sprays and projectiles that contain CS powder (commonly known as “tear gas”) or newer chemicals such as PAVA (synthetic “pepper spray”).
  • Long-range hailing devices are directed-sound systems that project instructions or uncomfortably loud noises over a small area.
  • Pyrotechnic devices such as flash-bang stun grenades produce a very loud bang and bright flash designed to confuse and disorientate.

Box: Criteria for evaluating less-lethal weapons

  • Accurate over 1–25 m (ideally up to 50 m)
  • Training issues
  • Repeatability/speed of use
  • Easily operated
  • Specialist versus general officers
  • Immediately effective
  • Works on maximum subject population
  • Cost
  • Authority required for use
  • Legal implications
  • Minimized injury/lethality
  • After-effects
  • Acceptability – police and public
  • Mobile and flexible
  • Effect – neutralizing the threat
  • Durability
  • Visual effect (not like a firearm)
  • Safe and secure
  • Effective in all environments
  • Minimized judgement
  • Does not preclude other weapons
  • Audit trail

Leading by example

John H Marburger III experienced dramatic changes in science policy in his lifetime. For a quarter-century after the Second World War, science was largely protected from public scrutiny and government supervision, with scientists both the actors and judges of their own performances. By the 1980s researchers increasingly worked in an environment where this “fourth wall” – to use a theatre analogy – had disappeared. While researchers continued to receive government funding, they increasingly had to make their actions transparent to regulators and the public, and obey sometimes frustrating rules.

This state of affairs was messy, expensive and inefficient, but Marburger realized that science administrators had no choice but to embrace it. Indeed, he did so himself in his many high-profile appointments – a lesson for future administrators in how to cope.

The bigger story

Marburger was born in Staten Island, New York, in 1941. He graduated with a degree in physics from Princeton University in 1962 and completed a PhD in applied physics at Stanford University in 1967. Marburger joined the University of Southern California (USC), where he became chair of the department of physics in 1972.

Articulate, attentive and respected, Marburger was host of Frontiers in Electronics, a local (pre-recorded) educational TV programme on CBS that aired at 6 a.m. On the morning of 21 February 1973, a magnitude 5.3 earthquake struck California, awakening people throughout the San Fernando Valley. Their first instinct was to turn on the TV. There was Marburger, interviewing an information theorist, not on earthquakes, but about his field. “That episode had a huge audience!” he told me, proudly.

In 1976 Marburger became USC’s dean of arts and sciences. Facing a scandal involving preferential treatment for athletes, USC officials designated Marburger their media spokesperson. The experience taught him valuable lessons about being the public face of an institution. “Be calm, say what you want to say, don’t get complicated, don’t diss anybody,” he once said to me.

As president of Stony Brook University, a position he took up in 1980, Marburger had to co-ordinate advocates of different departments and offices, conjuring policies that inevitably disappointed many but were acceptable to all. His diplomatic skills were sharpened in 1983, when New York governor Mario Cuomo had him chair a fact-finding commission on the controversial Shoreham nuclear-power plant under construction on Long Island. Its diverse collection of members, Marburger knew, would never agree. Still, he managed meetings fairly and patiently, not adjudicating but painting, in his final report, a big picture of the controversy in which all sides could recognize themselves.

The Superconducting Super Collider (SSC), a particle accelerator partly built in Texas but terminated in 1993, was the first big accelerator project on which the government attempted to impose formal procurement and oversight processes. Marburger’s experience as chair of the SSC’s management organization – Universities Research Association – alerted him to a still bigger story: that the government, too, was part of the community scientists had to serve, with its own evolving needs.

Marburger became the go-to person when storm clouds gathered. In 1997 a leak of slightly radioactive water from the spent-fuel pool of a reactor at the Brookhaven National Laboratory led to an uproar. The lab’s manager, Associated Universities Inc., was fired and anti-nuclear activists called for the lab’s closure. Marburger was tapped to be the lab’s new director, and his calm and attentive demeanour did much to resolve the conflict.

White House bound

In 2001 Marburger took the most controversial job of his career when he became science adviser to US President George W Bush. Many in the science community were outraged that he was joining an administration they saw as harmful to science. The psychologist Howard Gardner from Harvard University even labelled him a “prostitute”. “That doesn’t bother me much,” he told me at the time – and I was relieved to hear that final word, revealing him to be not infinitely unflappable, but human after all.

Marburger preferred to be productive rather than get fired, setting out to improve co-ordination between the various government agencies that approve and handle science, and emphasize the brighter side (see Physics World November 2008 pp16–17, print edition only). A genial analogy is that he was fixing an under-utilized office, preparing it for a more appreciative administration to come. A more extreme view is that he was in the morally ambiguous, but defendable, position of a collaborator, trying to do bits of good while working for a superior whose actions he could not alter.

Friends often asked Marburger how, in these roles, he could stand the vociferous criticism from those who failed to appreciate what he was doing. He once pondered that question in his diary. He wrote of building a harpsichord, restoring a vintage car and designing his home using an architectural computer program – all pursuits that juggled complex elements in ways he found soothing. He finally decided his most satisfying pursuit was physics. “Physics has been the main stabilizer of my life,” he wrote.

The critical point

Marburger constantly sought better ways for science administrators to cope with the absence of such a fourth wall. Frustrated at how much science policy is dominated by advocacy, he co-edited a book, The Science of Science Policy, that outlined a framework for this new discipline. He also authored a book about quantum mechanics, Constructing Reality, that is to appear this month, and started a book about his experiences as a science administrator.

In it he would have criticized those who dream of removing decisions about scientific facilities from the public arena. He would have warned that critics would then just turn their fire on that reinstated fourth wall. The only way, in a democracy, is to do what he did at Shoreham, Brookhaven and the White House: tell the story of what is happening in as big a context as possible. If you do so carefully, the wise decision becomes obvious.
