
Single-molecule diode has record-breaking current

A single-molecule diode with the highest on–off current (or rectification) ratio to date has been unveiled by a team of physicists and chemists in the US. While single-molecule diodes have been made in the past, they suffered from low conductance and very low rectification ratios. The new diode could be used to study the fundamental electronic properties of materials on the molecular scale, and might lead to the development of better nanoscale electronic devices.

Electronic devices made from single molecules, including single-electron transistors, memory elements and optical switches, have been around since the 1990s. However, making single-molecule diodes – the most basic of all electronic elements – has proved to be a difficult task.

A single-molecule diode is a two-terminal electronic component that allows current to flow in only one direction; the idea of such a device was first proposed more than 40 years ago in a theory paper. The concept involved an asymmetric “donor-bridge-acceptor” molecule, and was expected to work like the semiconductor p–n junction in a conventional diode. Since then, researchers have made several single-molecule diodes featuring asymmetric molecules. However, despite improvements in the properties of these devices over the decades, they still suffer from low conductance and low rectification ratios (of less than 11). They also require high operating voltages of around 1 V.

Symmetric molecule works

A molecular diode normally needs an asymmetric structure so that current flows more readily in one direction than the other. This is usually achieved by using an inherently asymmetric molecule or by making the electrodes from different materials. Now, a team of researchers led by Latha Venkataraman of Columbia University in New York has succeeded in building asymmetry into a molecular diode that uses a symmetric molecule and electrodes made from the same metal (gold). The trick was to adjust the electrostatic environment where the molecule attaches to each electrode: one end of the molecule is in contact with a planar electrode with a large surface area, while the other end touches a sharp-tipped electrode coated with wax, which presents a much smaller surface area (see figure). The researchers also operated the device in a polar solvent, exposing different areas of the two electrodes to this ionic medium.

Asymmetric charge distribution

The result of this interfacial asymmetry is that double layers of differing charge densities develop at the two electrode–molecule interfaces. These double layers form when ions in the solvent migrate towards the interfaces to screen out the electric field generated by electrical charges in the gold. “This asymmetric charge distribution is responsible for the enhanced current rectification we observed,” explains Venkataraman.

“Our technique to enhance current rectification in these single-molecule structures is simple and robust. It also alleviates the need for complex synthesis strategies required to design asymmetric molecules,” says team member Brian Capozzi.

The researchers say they achieved rectification ratios of more than 200 at voltages as low as 370 mV using molecules comprising symmetric oligomers of thiophene-1,1-dioxide. The same junctions immersed in non-polar solvents do not show any rectification, which the team says proves that the environment around the electrodes plays a key role in the operation of the devices.
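The rectification ratio quoted here is simply the magnitude of the current under forward bias divided by the current under reverse bias at the same voltage magnitude. A minimal sketch in Python, using invented currents (the numbers below are hypothetical, not the team’s measurements):

```python
def rectification_ratio(i_forward, i_reverse):
    """On-off ratio of a diode: |forward current| / |reverse current|,
    measured at bias voltages of equal magnitude and opposite sign."""
    return abs(i_forward) / abs(i_reverse)

# Hypothetical currents (arbitrary units) at +/-370 mV bias; a ratio
# above 200 would match the behaviour described in the article.
print(rectification_ratio(82.0, 0.4))
```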

Fundamental electronic structure

“Combined with the high rectification and currents that we have measured, our technique might also be used to make real-world devices, and could be applied to other nanoscale device components, not just single-molecule junctions,” says Venkataraman. And that is not all: the method provides a way to experimentally probe how energy levels are aligned in single-molecule junctions – something that could be useful for studying the fundamental electronic structure of a variety of other device components.

The team, which includes groups led by Luis Campos of Columbia University and Jeffrey Neaton of the University of California, Berkeley, says that it is now busy optimizing and developing even better single-molecule diodes.

The devices are described in Nature Nanotechnology.

Black elephants: scientific issues that we don’t talk about

A few months ago the New York Times used the curious term “black elephant”, attributing it to the London-based investor and environmentalist Adam Sweidan. The term crosses two already familiar expressions. One is “black swan” – a name for something whose repercussions force you to throw out key theories that you took for granted, such as the premise that “all swans are white”. The other term is “elephant in the room” – something whose presence everyone knows but nobody seriously addresses out of fear or embarrassment.

Sweidan’s examples of “black elephants” were environmental and included global warming, ocean acidification and pollution of water supplies. To address these issues on the required scale, Sweidan said, would profoundly disrupt current political activity. So we ignore them. A black elephant is therefore something that changes everything, but which no-one wants to deal with.

I believe black elephants are found in pure science as well. In January, for example, I wrote about shutdown ceremonies for synchrotron radiation sources. I noted that such mundane, even cliquey, events are natural and spring from the special character of the scientific community and the feelings of its participants. Yet you won’t find these events mentioned in books on how science works, nor could anybody I spoke to clearly describe their value.

I had “captured it well”, e-mailed one scientist, mentioning his mixed feelings of pride and gratitude, affection and loss at the closing ceremony for a machine he had worked on. His remark convinced me that, in shutdown ceremonies, I had spotted a black elephant. In science, in other words, a black elephant is a familiar feature that is routinely excluded from formal accounts of how science works, and could not be incorporated without ruining them.

Why some elephants are black

The scientific world is a sprawling and untidy place whose inhabitants practise their craft in myriad ways. Attempts are periodically made to bring order to this world by building model homes in it, so to speak, and declaring that what’s inside is what science is really like – all the activities outside being imperfect versions. That way, we can easily teach it and tell outsiders what it’s about.

Two such homes are particularly attention-grabbing. The first is orderly, its atmosphere logical, and its disputes calmly resolved by proposing theories and taking data. Experiments are good when they get the true result, wrong when they don’t. This house does not have normal people inside – the inhabitants are so exacting and rule-abiding that they live and act quite differently from the rest of us. Discoveries made inside this house are universal, reflecting truths about nature outside. This house was built by traditional philosophy of science.

Another house was erected in reaction to the first. Its inhabitants behave exactly as non-scientists do, motivated by the same social and psychological forces. Experiments are good when they get a result everyone accepts. What’s found in the room is not universal but local – arising from what’s happening in that room. Obtaining consensus about a result is a matter of swapping interests, like the work of diplomats. This home, built by “social constructivists”, has real people inside but no real nature.

My characterization is simplified, and each model house has undergone modifications. Still, it’s a good first approximation. The model houses might be different, but they have in common that they seek to give an abstract, formalized representation of the scientific process from the perspective of someone outside the territory. They differ in what they include and omit. The first, to oversimplify, gets rid of human beings, who disrupt the rationality inside the house. The second gets rid of nature, which would resist, define and frustrate the negotiations.

Each house has a different set of black elephants, common and easily recognizable features of the scientific life that it would be impolitic to discuss. In the first house, one black elephant is the example I mentioned – shutdown ceremonies at machines that are integral to the lives and work of communities of scientists. In the second house, however, a shutdown ceremony is not a black elephant; such events might be read, for instance, as deliberate attempts to consolidate and reinforce a community in preparation for a political drive to request a new facility.

The cost of house-building is illustrated by recent silly debates about whether string theory can truly be considered science if it has no testable predictions. String theory doesn’t fit into the first model house. But the scandal here is not about the flawed character of string theory as science, but about the flawed architecture of the first house. The scandal should even cause us to examine our fascination for house-building.

The critical point

“We learn how to do science,” Steven Weinberg writes in his new book To Explain the World, “not by making rules about how to do science, but from the experience of doing science, driven by desire for the pleasure we get when our methods succeed in explaining something.”

Science is a way of making sense of the world and has gone on for centuries. It has continually accreted knowledge during that time and is always being extended further, but – as Weinberg says – not in a form that can be reduced to rules or negotiating interests. If you are trying to describe science – rather than just do it – you have to beware of your assumptions, including why you are looking at it and what you are hoping to find; you also have to be ready to revise them when necessary.

So in describing how science works we shouldn’t aim at building permanent houses. Instead, we should create temporary tents that let us watch science in action – ready to be taken down and moved. That way we can appreciate the elephants, rather than paint them black.

The spin of a proton

In physics, the budget always has to be balanced. The amount of any physical quantity on one side of an equation – energy, momentum, charge and so on – must equal that on the other side. Any imbalance means that our understanding of nature might be out of kilter, and may in fact suggest the existence of new laws, particles or forces.

For nearly three decades, physicists have been faced with a particularly stubborn imbalance. At stake is the intrinsic angular momentum, or “spin”, of a proton. Spin is a quantum-mechanical property, akin to the angular momentum of a classical sphere rotating on its axis, except it comes in discrete units of integer or half-integer multiples of ħ. The proton, like the electron and neutron, has a spin of ħ/2, or “spin-1/2”. So does each of its three quarks. Summing the spins of the quarks to get the total spin of the proton seems, in principle, straightforward: if two of the quark spins point up, while the other points down, the down spin will cancel one of the ups, and both sides of the equation should be left with an angular momentum of ħ/2.

Except it isn’t that simple. In 1988 the European Muon Collaboration (EMC) at CERN shocked the physics community by announcing that the sum of the spins of the three quarks that make up the proton is much less than the spin of the proton itself. This was unexpected because the summing-up approach had worked for several of the proton’s other properties. For example, the proton’s electric charge of +1 can be accounted for by adding the charge of its two “up” flavoured quarks (+2/3) to that of its one “down” quark (–1/3). (Note that here, “up” and “down” are names of quarks and have nothing to do with spin.) However, the EMC researchers discovered that the net spin of the three quarks actually accounted for no more than 24% of the proton’s spin, and might even contribute as little as 4% – practically none of it, in other words.
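The “summing-up” bookkeeping described above is easy to make explicit. This sketch (illustrative only) uses exact fractions to show that the naive sum works perfectly for charge, and merely sets up the expectation for spin that the EMC measurement contradicted:

```python
from fractions import Fraction

# Electric charges of the proton's valence quarks, in units of the
# elementary charge: two "up" quarks and one "down" quark.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}
total_charge = 2 * CHARGE["up"] + CHARGE["down"]
print(total_charge)  # 1 -- the proton's charge, as expected

# The same naive sum for spin, in units of hbar: two quark spins up,
# one down. On paper this yields the proton's hbar/2, but experiment
# shows quark spins actually supply only a small fraction of it.
quark_spins = [Fraction(1, 2), Fraction(1, 2), Fraction(-1, 2)]
print(sum(quark_spins))  # 1/2
```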

“It was an observation that shocked the world,” says Fred Myhrer of the University of South Carolina in the US. “Everyone was baffled by it. Why should the quark model that had worked so nicely fail so badly?” Indeed, Gerhard Mallot of CERN says that the result threatened to undermine quantum chromodynamics (QCD) – the theory put forward in the early 1970s that describes how the strong nuclear force acts between quarks. “People got nervous,” he recalls. “There were conjectures that the experiment was wrong, or even that QCD was wrong.”

The seriousness of the situation was summed up in the name physicists gave to it: “spin crisis”. Reluctant to ditch the quark model because of its substantial successes, researchers instead devoted their energies to finding alternative sources of the proton spin. There were several possibilities. It could have come from the momentum acquired by quarks and gluons – the particles that carry the strong nuclear force and “glue” the quarks together inside protons and neutrons – as they rotate about the proton’s spin axis. However, this orbital angular momentum is hard to measure. Many researchers instead pinned their hopes on another option: the spin of gluons (figure 1).

Getting good data on gluon spin took nearly 20 years, though, and when that new information finally arrived, it was disappointing. In 2008 physicists working on the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory on Long Island in New York showed that gluon spin contributes far less to the proton’s spin than had been proposed. The RHIC data followed on the heels of a similar (but less reliable) result obtained by the COMPASS collaboration at CERN two years earlier, so it didn’t look like a fluke. Far from resolving the crisis, the new results threatened to deepen it.

But the RHIC results came with large error bars. For the past seven years, researchers have sought to reduce those errors using an upgraded accelerator and improved detectors. The ensuing data set, which includes results acquired up until 2009, shows that the gluon might, after all, contribute a significant fraction of the proton’s spin. Last year, two groups of theorists who had analysed those data showed that the contribution might in fact approach half of the proton’s spin. One of those theorists, Werner Vogelsang of the University of Tübingen in Germany, argues that the breakthrough provides hope that the enigma of the missing spin might finally be solved.

The limits of theory

Ideally, physicists would like to be able to calculate the proton’s spin (and that of the neutron, which has a similar spin shortfall) from first principles. Unfortunately, QCD is just too complex to permit analytical calculations. The basic problem lies in the enormous strength of the strong nuclear force. Other forces, such as electromagnetism or the weak nuclear force, are puny enough that they can be represented by a fairly simple mathematical expression, to which higher-order corrections are added. But with the strong force, those corrections are themselves large and feed back into the main term.

Schematic diagram showing several possible sources of proton spin. In the centre are three large spheres representing the proton’s valence quarks; outside are three pairs of smaller spheres representing the sea quarks. The sea-quark pairs and the valence quarks are all connected by loopy yellow lines, representing the gluons that carry the strong nuclear force. White arrows indicate that all of these particles have their own orbital motion.

Such large corrections are needed, in part, to account for the fact that gluons themselves possess “colour”, the QCD equivalent of charge. This means that, unlike photons, which are uncharged, gluons can interact with themselves. Gluons also decay continuously into pairs of quarks and antiquarks, as a consequence of Heisenberg’s uncertainty principle: over very brief periods of time, the uncertainty in mass-energy is very large, which implies that pairs of particles and antiparticles can pop in and out of existence. These very short-lived quark–antiquark pairs can have a significant influence on the behaviour of protons in QCD theory.

The extraordinary complexity of QCD means that physicists must derive many key parameters related to quark matter from experimental measurements, without first being able to predict them theoretically. One of the most important kinds of measurement at their disposal – and the one that prompted the spin crisis – is known as deep-inelastic scattering. This technique involves first firing high-energy electrons (or their more massive cousins, muons; these particles are known collectively as “leptons”) into a target containing certain nuclei, and then measuring the deflection of the leptons as a result of their electromagnetic interaction with two different types of quark: the “valence” quarks within protons and neutrons, and the far more numerous virtual “sea” quarks that are continually appearing and disappearing from the vacuum.

To measure quark spin using deep-inelastic scattering, both the incoming leptons and the target protons must be polarized, so that the spins of the two particle types either line up or oppose one another. Conservation of spin means that leptons can only interact (via the exchange of a spin-1 photon) with quarks of opposing spin. So by firing leptons first polarized in one direction and then the other, and recording the number of deflections in each case, scientists can work out the imbalance in quark spin and discover whether or not it adds up to the ħ/2 needed to account for the spin of the proton.
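The quantity extracted from such an experiment is a counting asymmetry between the two polarization configurations. Here is a hedged sketch of that bookkeeping, with invented counts; real analyses involve beam and target polarization corrections, dilution factors and much else that this ignores:

```python
def spin_asymmetry(n_anti, n_parallel):
    """Normalized difference between scattering counts recorded with beam
    and target spins anti-aligned versus aligned. A non-zero value
    reflects a net imbalance in the quark spins inside the target."""
    return (n_anti - n_parallel) / (n_anti + n_parallel)

# Invented counts, for illustration only:
print(spin_asymmetry(10_300, 9_700))  # 0.03
```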

The first such measurements were carried out at the Stanford Linear Accelerator Center (SLAC) in California in the late 1970s. These measurements showed that quarks contribute about 60% of the proton’s spin, which was not surprising, since relativistic effects had already been predicted to transform some of the quark spin into orbital angular momentum. This transformation happens because quarks are confined in a small space inside the proton, and according to the uncertainty principle, this implies that those quarks have significant momentum perpendicular to, as well as along, their direction of motion. This means the quarks gyrate, and they do so at relativistic speeds because of their small mass.

The SLAC experiment, however, was limited by having a relatively low-energy beam – of no more than 20 GeV. Energy is a crucial parameter in scattering experiments because higher energies correspond to shorter wavelengths and therefore higher resolutions. And the higher the resolution, the denser the sea of virtual quarks and gluons visible inside the proton – given that quarks radiate gluons that split into quark–antiquark pairs, which then emit further gluons, and so on. That ever higher density in turn means an ever greater parcelling up of the proton’s energy, so each particle carries an ever smaller momentum. Since quark spin has to be integrated across quarks of all momenta, higher-energy probes provide a better estimate of the total contribution of all quarks to the spin of the proton – from the most energetic valence quarks to the lowliest sea quarks.
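The energy–resolution link can be made concrete with the usual rule of thumb that a probe of energy E resolves length scales of order ħc/E, where ħc ≈ 0.197 GeV·fm. A rough sketch (the true resolving power depends on the momentum transfer in each collision, which this order-of-magnitude estimate ignores):

```python
HBAR_C = 0.197  # hbar * c, in GeV * femtometres

def resolved_scale_fm(energy_gev):
    """Order-of-magnitude length scale resolved by a probe of given energy."""
    return HBAR_C / energy_gev

# The proton's radius is roughly 0.85 fm, so both beams resolve structure
# far inside it -- the 200 GeV muon beam ten times more finely.
print(resolved_scale_fm(20))   # SLAC-era 20 GeV beam: ~0.01 fm
print(resolved_scale_fm(200))  # EMC's 200 GeV beam: ~0.001 fm
```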

This is where the EMC had the edge. By firing a beam of muons into an ammonia target, it was able to reach energies of 200 GeV and so probe quarks with much lower momenta. As it happened, the spin of these quarks did not contribute as much to the total proton spin as was expected from an extrapolation of the SLAC data – making the overall quark-spin contribution far lower than previously thought. This stunning result has since been confirmed by other scattering experiments at SLAC, CERN, the DESY laboratory in Germany and the Jefferson laboratory in Virginia. The combined data from these experiments indicate that quark spin contributes about (30 ± 5)% of the proton’s spin – a bit more than the EMC’s initial results had suggested, but still much less than the total.

Gluons vs orbital motion

With the quark-spin contribution thus pinned down, attention turned to the remaining 65–75% of unaccounted-for proton spin. Most physicists think that this is shared out between three contributing phenomena: the quarks’ orbital angular momenta, the gluons’ spin and the gluons’ orbital angular momenta. (Photons might also play a part, but their contribution is expected to be very small, if not zero.) However, there remain differences of opinion as to how big each of these components is likely to be.

Left: A photo of the PHENIX detector. Right: An image of particle traces taken from the PHENIX detector

Anthony Thomas of the University of Adelaide in Australia claims that practically all of the remaining fraction can be accounted for via the conversion of quark spin into quark and antiquark orbital angular momenta. His assertion is based on a model that treats the proton as a “bag” of three quarks surrounded by a cloud of pions, which are very short-lived particles with a quark–antiquark core. Thomas says that in this “cloudy-bag” model, the effects of three phenomena add up to generate the required spin-to-orbital angular momentum conversion: one, the relativistic movement of quarks; two, the exchange of gluons when quarks interact; and three, a proton’s brief “fluctuation” into a proton or neutron plus a pion (any flipping of the proton spin results in the pion carrying away orbital angular momentum).

Initially, results derived from this model disagreed with the predictions of another technique, known as lattice QCD, which allows some properties of protons to be computed by breaking down space and time into discrete units. But Thomas realized that lattice QCD considers protons at very different energies (and hence resolutions) from those typically used in the cloudy-bag model. Once this energy difference is accounted for, he says, the two approaches yield similar results.

Other theorists, however, think that gluons could still make a substantial contribution. Robert Jaffe of the Massachusetts Institute of Technology in the US argues that “there is no reason a priori” to think that any of the potential spin components can be neglected, and Vogelsang argues that gluon spin “could easily” contribute more than orbital angular momentum. In Vogelsang’s view, the cloudy-bag model might explain why the quark-spin contribution is so small, but he is not convinced that it can then go on to explain where the missing spin comes from.

According to Vogelsang, a “crisis mood” surrounding the hunt for the missing proton spin prevailed until last year, when the latest round of RHIC results lifted it. RHIC collides two beams of polarized protons, and (as in the earlier, lepton-based deep-inelastic scattering experiments) it makes measurements with the spins of the beams aligned and then anti-aligned. However, whereas leptons cannot scatter off gluons directly because they do not feel the strong force, the colliding RHIC beams produce plenty of interactions involving quarks and/or gluons, thus providing direct information about gluon spin.

Using data from RHIC’s STAR and PHENIX detectors, Vogelsang and colleagues from Tübingen and Buenos Aires pinned down the gluon spin contribution to be about 40%. A separate group, led by Emanuele Nocera of the University of Milan in Italy, concluded it was about 34%. Further data from RHIC (collected between 2011 and 2015) will allow theorists to refine these estimates further.
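Taking the central values quoted in the article at face value, the proton’s spin budget can be tallied in a few lines. This is only bookkeeping with rounded figures, not a calculation with real uncertainties:

```python
# Fractions of the proton's total spin (hbar/2), central values only.
quark_spin = 0.30   # quark-spin contribution, about (30 +/- 5)%
gluon_spin = 0.40   # gluon-spin estimate from the RHIC analyses (~34-40%)

# Whatever remains must come from quark and gluon orbital angular
# momenta (plus any tiny photon contribution).
orbital = round(1.0 - quark_spin - gluon_spin, 2)
print(orbital)  # 0.3
```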

Energy boost

With these confident estimates in hand, one might expect that only a third of the proton’s spin remains up for grabs. However, Renee Fatemi, a physicist at the University of Kentucky in the US and deputy spokesperson of the STAR collaboration, points out that there are still some lingering questions about gluon spin. As was the case with quark spin, she says, gluons’ spin contributions must be summed across all momenta – and RHIC’s collision energies of 500 GeV are not quite energetic enough to probe gluons at the lowest end of the momentum scale. Doing that, she says, will require a new machine: the Electron–Ion Collider (EIC).

The EIC would combine the punch of proton-beam experiments with the precision of electrons. In proton-beam experiments, the actual collisions take place between individual quarks, not whole protons, and each quark carries only a fraction of the proton’s energy. In contrast, electrons are point particles, so they deliver all of the energy they acquire through acceleration. This means that in electron-beam experiments, collision energies are actually higher even though the combined energy of the two beams is lower.

A photo of the CEBAF accelerator at Jefferson Laboratory

There are currently two designs on the table for the roughly $1bn EIC: one involving the addition of an electron beam facility at RHIC and the other requiring the construction of an ion accelerator at the Jefferson Lab. Either incarnation would eventually reach collision energies in the region of 140 GeV, but would not switch on until 2025 at the earliest.

According to Elke-Caroline Aschenauer of the Brookhaven lab, the EIC should provide the last word on proton spin. Not only will it reach all the way down to the lowest gluon momenta of interest, she argues, it will also constrain models that describe quarks’ and gluons’ orbital angular momenta. “I think after the EIC you will have the spin puzzle solved,” she says. “If the EIC doesn’t solve it then the proton would have to get its spin from something other than quarks, gluons and orbital angular momentum. And that would be extremely surprising.”

Thomas, however, believes that the spin puzzle has already been solved – in favour of quark and antiquark orbital angular momenta. He acknowledges that the gluon spin contribution has “got a bit bigger” in the light of the latest RHIC data, but argues that this is “only half the story”. He maintains that gluons’ true contribution to proton spin remains unclear because, he says, at higher energies, where their spin contribution increases, their angular momentum contribution actually goes down. “There are many interesting experiments that an electron–ion collider can do, such as sorting out the spin carried by various flavours of quarks and antiquarks,” he explains. “But solving the spin crisis is not one of them.”

Myhrer, who worked on the cloudy-bag model with Thomas, also believes that gluon spin is the wrong place to look. He argues that it is not possible to separate out the spin and orbital angular momentum contributions provided by the strong-force carrier. But he is not quite as bullish as his Australian colleague. “My opinion on this as a theorist is quite firm – yes, angular momentum due to the gyration of the quarks accounts for a large fraction of the missing spin,” says Myhrer. “However, only future experiments can settle these ongoing arguments.”

Vogelsang agrees that more data are needed to settle the issue. Indeed, he points out that, while unlikely, it is still possible that slow-moving gluons have their spins aligned against that of the proton, thereby reducing gluons’ contribution or even cancelling it altogether. “Models tend to predict that gluon spin is aligned at low momenta but we don’t have a clear proof of that,” he says. “The only thing to do is to push experiments down to these scales and see what happens there. There might still be surprises lurking.”

Lasers reveal previously unseen fossil details

A new laser-based scanning technique, which could potentially help researchers to extract new information from fossil specimens, has been developed by scientists in the US. The inexpensive and non-destructive approach uses commercial-grade lasers to stimulate fluorescence in the fossil, revealing detail that would not otherwise be observable with traditional visual enhancers such as UV light, which have a far lower irradiance.

In palaeontology, a variety of visual enhancers have long been used to highlight fossils for photography and analysis. One established technique uses UV light, which can stimulate visible fluorescence in certain minerals such as hydroxyapatite (the inorganic component of bone) and, in some cases, may even highlight fossilized soft tissues. Most minerals in fossil specimens do not fluoresce readily, however, which means that, under UV light, they appear to remain dark.

Fluorescing fossils

The intensity of fluorescence can be increased, however, by using a higher-powered light source such as a laser, which allows for detectable fluorescence from a far wider variety of specimens. While laser stimulation has been traditionally constrained to highly detailed studies on microscopic scales – either through confocal laser scanning microscopy or Raman spectroscopy – recent developments and cost reductions in commercial laser technology have allowed palaeontologist Tom Kaye of the Burke Museum in Seattle, and colleagues, to apply laser-stimulated fluorescence to the macroscopic level.

The method is quite simple – in a darkened room, fossil specimens are excited by laser light and viewed through an appropriate long-pass filter. The filter blocks the bright laser light but allows the fluorescent signal from the fossil specimens to pass through. This can then be photographed as a long exposure with a digital camera. Different wavelengths excite different rocks and fossils in different ways. Indeed, even if a particular fossil will not fluoresce, it may still be possible to illuminate the surrounding rock and backlight the specimen.

Rocky specimens

“We are excited about this technique because it offers instantaneous geochemical fingerprinting of the specimens,” says Kaye, explaining that the technique will enable the researchers to identify a variety of soft tissues preserved in fossils, adding that they will be able to “look at things like skin, the size of muscles and the construction of feet”.

Alongside revealing additional detail in existing specimens, the laser light is also powerful enough to penetrate a limited distance into certain rocks, allowing the team to visualize fossil specimens that may be entirely or partly hidden beneath the rock’s surface. Using their laser technique, the researchers were able to easily identify a 120 million-year-old, largely encased “mystery fossil” – a fish – when the specimen’s teeth and bones fluoresced at a higher intensity than the surrounding rock matrix.

Fluoresced fish fossil reveals teeth and growth rings

Fossil fakers?

Laser scanning can also help identify composite fakes – fossils that have been cobbled together from different specimens – by revealing differences in fossil mineralogy. “People try to make the specimens look better or more intact because this makes them easier to sell,” explains team member David Burnham of the University of Kansas. “Some artists are so good that you can’t tell where the real thing stops and the fake thing begins. With lasers, now we’ll know.”

The researchers have even applied the fluorescence-based technique to create the world’s first automated fossil sorter. Rock grains are sent, in a narrow stream, past a laser and video camera; based on their fluorescence, potential small fossils (for example, teeth) are separated out for closer examination. The machine can process 1–2 kg of material per hour – an amount that would take weeks or months to sort by hand – producing a specimen concentrate of typically 20–50%.

“Laser-stimulated fluorescence is an important, welcome addition to the now rapidly expanding arsenal of techniques available for modern studies of ancient life,” says William Schopf, a palaeobiologist at the University of California, Los Angeles, who was not involved in this study. He told physicsworld.com that although the method is incapable of producing “the high-resolution 3D cellular detail provided by confocal laser scanning microscopy, and does not provide the sub-micron macromolecular information afforded by Raman spectroscopy, it is faster, cheaper, more easily portable and offers an order of magnitude improvement in the signal-to-noise ratio over traditionally used standard UV light”.

“It is splendid that both structural and anatomical detail can be rapidly resolved using this de novo approach,” agrees Phillip Manning, a palaeontologist from the University of Manchester in the UK. “I have no doubt that the imaging of fossils using multiple facets of the electromagnetic spectrum will shed unique light on the evolution of life on Earth.”

Having demonstrated the potential of laser-stimulated fluorescence, Kaye and his colleagues have moved on to applying the tool to reveal new details about previously examined fossil specimens. The team is also exploring how the method could be used to scan an area for fossils, at a range of around 100 metres, by coupling a laser with a telephoto camera.

The research is published in PLOS ONE.

Atomic time was born 60 years ago today


By Hamish Johnston

“The death of the astronomical second and the birth of atomic time” is how the British physicist Louis Essen described 3 June 1955, when the world’s first practical atomic clock ticked for the first time.

The place was Teddington on the outskirts of London, which is home to the National Physical Laboratory (NPL). Essen’s clock was based on a beam of caesium atoms and was monitored by microwave technology inspired by his wartime work on radar. The clock was more than a metre long and nicknamed the “Flying Bedstead” by engineers at the BBC, who used it as an input for their radio time signal. The clock made atomic time available worldwide for the first time 60 years ago. Then, in 1967, the second was redefined as an SI unit based on an atomic transition in caesium, thereby ending the ancient practice of defining time through astronomical observations.


Hope for 'new physics' as Large Hadron Collider begins 13 TeV run

CERN: physicists in the LHC control room

By Hamish Johnston

Earlier today the first data of the 13 TeV run of the Large Hadron Collider (LHC) at CERN were collected by all four of the Geneva-based collider’s main experiments. I was up early this morning (8.00 a.m. Geneva time) and followed all the action live via a webcast from CERN. The beams were lost at about 8.40 a.m. because of a faulty beam monitor, but by 10.40 a.m. collisions in the CMS, ALICE, ATLAS and LHCb experiments were being reported.


Physics at 13 TeV should begin today at the Large Hadron Collider

The LHC control room at CERN

By Hamish Johnston

Earlier this morning physicists at CERN’s Large Hadron Collider began their scientific programme at 13 TeV. Unfortunately, they lost the beam after about 30 minutes and it will probably be another hour or so before things are up and running again.

You can follow all the excitement via a live webcast.

Good luck to all at the LHC and fingers crossed for finding evidence for physics beyond the Standard Model.

Breathing new life into the Rashba effect

By Tushna Commissariat

Spintronics is often touted as a field of research that one day soon will revolutionize computing as we know it, helping build the next generation of superfast and energy-efficient computers that we long for. Future spintronic devices will tap into the inherent spin magnetic moment of the electron, rather than just its charge, to store and process information. As an electron’s spin can be flipped much more quickly than its charge can be moved, these devices should, in theory, operate faster and consume less power than their purely electronic counterparts.

The entire basis of this field is built on research done by Soviet-American theoretical physicist Emmanuel Rashba in 1959. Indeed, he was the first to discover the splitting of the spin-up and spin-down states in energy and momentum with an applied electric field instead of a magnetic field. But Rashba’s original article detailing the effect, written together with Valentin Sheka, was published in a supplement of the Soviet-era, Russian-language journal, Fizika Tverdogo Tela, and is nearly impossible to get a hold of today.

New Journal of Physics (which is published by IOP Publishing, which also publishes Physics World) has now produced a focus collection of articles on the Rashba effect. As part of this collection, guest editors Oliver Rader of the Helmholtz Centre Berlin, Gustav Bihlmayer of the Jülich Research Centre in Germany and Roland Winkler of Northern Illinois University in the US worked with Rashba to create an English-language version of his original paper.

You can access the entire “Focus on the Rashba Effect” collection here, and the translated original paper here. In some ways, this highlights the importance of other key research articles that may have been published in journals that are no longer available and so may be in danger of being lost forever. Leave us a comment if you can think of any such papers.

Tiny probe reveals electrical conductance of individual atoms

The electrical conductance at different locations on individual atoms has been measured by scientists in Japan. The experiment involved adding and removing electrons from atoms on a surface by using a scanning tunnelling microscope (STM). The technique could also be used to measure the distribution of charge in superconductors, as well as to study magnetic interactions on very small length scales.

An STM builds up an image of a surface by using an atomically sharp tip to inject or withdraw electronic charge. The tip is scanned across the surface at a height of less than 1 nm, and the position of the tip can be controlled in 3D with picometre precision. As a result, STMs can produce images of individual atoms on surfaces, and have also been used to measure atomic-scale phenomena such as the quantization of conductance.

In this latest work, Howon Kim and Yukio Hasegawa of the University of Tokyo used a state-of-the-art STM to study the surface of lead in ultrahigh vacuum at just a few degrees above absolute zero. They found that when the distance between the needle and the surface was relatively large (about 100 pm), the conductance decayed exponentially with this distance. This is exactly what is expected from an electric current created by quantum tunnelling. Also, the same decay was measured when the needle was directly above an atom and when it was above a gap between two or more atoms.
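The exponential decay is the hallmark of quantum tunnelling: the conductance falls off as G(d) ≈ G₀ exp(−2κd), where the decay constant κ = √(2mφ)/ħ is set by the height φ of the tunnelling barrier. A minimal numerical sketch in Python, assuming a typical metal work function of about 4 eV as the barrier (a representative value, not one quoted in the article):

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def decay_constant(barrier_eV):
    """Tunnelling decay constant kappa = sqrt(2 m phi) / hbar, in 1/m."""
    return math.sqrt(2 * M_E * barrier_eV * EV) / HBAR

def conductance_ratio(retraction_m, barrier_eV=4.0):
    """Factor by which the tunnelling conductance drops when the tip
    retracts by `retraction_m` metres: G(d) ~ G0 * exp(-2 kappa d)."""
    return math.exp(-2 * decay_constant(barrier_eV) * retraction_m)

kappa = decay_constant(4.0)        # ~1e10 per metre for a 4 eV barrier
drop = conductance_ratio(100e-12)  # retract the tip by 100 pm
```

With these numbers κ ≈ 1.0 × 10¹⁰ m⁻¹, so retracting the tip by 100 pm cuts the conductance to roughly 13% of its value – the familiar rule of thumb that STM tunnelling current changes by about an order of magnitude per ångström, which is what gives the instrument its extreme height sensitivity.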

Atomic skimming

When the needle was brought closer to the surface, the conductance started to increase faster than would be expected from tunnelling alone. However, this increase occurred only when the needle was directly above an atom. This, say the researchers, is because the needle skims the surface of the lead, putting it in direct contact with the top of each atom. This allows electrons to move more freely between tip and atom, thereby boosting the conductance. However, when the tip is over a gap between atoms, electrons can only be exchanged via quantum tunnelling.

When the needle is brought closer still – actually pressing into the surface – the reverse effect on conductance is seen. The conductance is highest when the needle is pressed into a gap between atoms, rather than into a single atom. The researchers believe this may be because when in a gap, the tip has access to electrons from all of the neighbouring atoms. However, when the tip is touching a single atom, it can only interact directly with electrons in that one atom. Nevertheless, says Hasegawa, studies by theorists will be required to clarify exactly what is going on. “Our basic conclusion is that we measure the conductance at the sites,” he says.

Probing the superconducting gap

This detailed understanding of the variable concentration of electronic charge around and between atoms may prove to be important in nanotechnology because it should aid the design and optimization of atomic-scale electronic devices. The researchers themselves, however, are interested in superconductivity. Lead becomes a superconductor at temperatures below 7.2 K, and their experiment was carried out at 2.1 K. Although the material would have been a superconductor during the experiment, the observations were made in a region of the energy spectrum where this would have made no difference. Now, however, the researchers are preparing a paper describing how they have used the technique to look at electrons in the so-called superconducting gap, in the hope that this will provide insights into the nanoscale behaviour of electrons in superconductors.

Laurent Limot of the Institute of Physics and Chemistry of Materials of Strasbourg, France, commends the researchers, but points out that the technique is “not completely new”, because two papers in 2011 looked at the charge distribution around atoms, one in gold and one in C60 molecules. Nevertheless, he says, the researchers have shown the differences in conductivity at different sites much more solidly than previous work. As well as studying superconductivity, says Limot, one can envisage using a ferromagnetic tip to look at nanoscale magnetic interactions. This would be useful for developing spintronic devices that use the magnetic spin of the electron to store and process information.

The research is published in Physical Review Letters.

Physicists celebrate Singapore’s golden-jubilee year

Photo of the second phase of Singapore's giant Fusionopolis R&D centre taken in May 2015

By Robert P Crease in Singapore

I’ve landed in Singapore shortly before the 50th anniversary of the nation’s independence – Sunday 9 August is the official date. The event that brought me was a conference entitled “60 Years of Yang–Mills Gauge Field Theories”, which opened on Monday 25 May with speeches by C N Yang, who shared the 1957 Nobel Prize for Physics, and David Gross, the 2004 Nobel-prize winner. I spoke on Wednesday morning.

But the conference isn’t the only physics-related event scheduled in Singapore’s jubilee year. Another is the opening of Fusionopolis II, the second phase of an innovative research and development (R&D) hub funded by the government’s Agency for Science, Technology and Research (A*STAR). Phase one opened seven years ago – you can relive Physics World news editor Michael Banks’s experiences here; phase two is slated to open on 19 October. The initiative aims to supercharge Singapore’s research ecology by putting materials-science research institutes, industrial research centres and an international collection of eminent universities in close proximity.


Copyright © 2026 by IOP Publishing Ltd and individual contributors