
Rashba gets hotter and more pronounced

An international group of physicists has demonstrated an electron spin-splitting effect in a semiconductor that is far larger than has ever been seen before. The large Rashba effect – the phenomenon of spin splitting with an applied electric field instead of a magnetic field – could herald the room-temperature operation of spintronic devices.

Spintronics is expected to be one of the next revolutions in computing. The idea is to fabricate devices that operate using not just an electron’s charge, but also its spin. Because the spin of an electron can be switched more quickly than charge can be moved around, these spintronic devices should operate faster and consume less power than their electronic counterparts.

Electron spins are tiny magnetic moments, so to manipulate them a magnetic field is needed. As magnetic fields are difficult to control on the small scales typical in computing, physicists tend to exploit the so-called spin–orbit interaction. In this phenomenon, an electron moving in an electric field “sees” a magnetic field, which interacts with the electron’s spin.

Towards a room-temperature Rashba effect

In an external electric field, this leads to the so-called Rashba effect – a splitting of the spin-up and spin-down states in energy and momentum that is crucial for proposed spintronic devices. In the design for spin transistors, for example, electrons of a single spin are injected and then – under an applied electric field – have their spins rotated. But the Rashba effect in well-established semiconductors, such as silicon and gallium arsenide, is so small that electrons have to travel large distances – perhaps several microns – before any spin rotation is noticeable. Such distances require ultrapure materials and low temperatures to ensure that the electrons are not knocked off course.
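As a back-of-the-envelope sketch (textbook physics, not taken from the paper itself), the Rashba interaction adds a spin–orbit term to the Hamiltonian of a 2D electron gas, splitting the parabolic band into two spin branches:

```latex
H_R = \alpha_R\,(\boldsymbol{\sigma}\times\mathbf{k})\cdot\hat{\mathbf{z}}
    = \alpha_R\,(\sigma_x k_y - \sigma_y k_x),
\qquad
E_\pm(k) = \frac{\hbar^2 k^2}{2m^*} \pm \alpha_R\,|k| .
```

The two branches are offset in momentum by k_R = m*α_R/ħ², so the larger the Rashba parameter α_R, the shorter the distance over which an injected spin precesses – which is exactly why a giant splitting matters for compact spin transistors.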

Now, Phil King and colleagues at the University of St Andrews in the UK, together with other researchers in Europe and China, have come up with a material that could make the Rashba effect, and spintronics in general, feasible at room temperature. Bismuth selenide – the researchers’ material of choice – is unusual in that its inner bulk structure behaves as a semiconductor while its surface behaves as a metal. Such materials, known as topological insulators, have been around for decades but it is only in recent years that their unique behaviours have been discovered.

King’s group dopes the surface of bismuth selenide, which causes its electrons to become confined in 2D “quantum wells”. The researchers then use a technique called angle-resolved photoemission spectroscopy, in which monochromatic light ejects electrons from the surface of the sample via the photoelectric effect. By measuring the energy of these electrons and the angle at which they leave the surface, the researchers can record a snapshot of the sample’s electronic structure – one that reveals the Rashba effect, or the energy splitting of the spin-up and spin-down electrons.

At least 10 times better

The bismuth-selenide sample exhibited a spin splitting at least 10 times larger than that seen in other semiconductors, and at temperatures above 100 °C. The results are due to be published in Physical Review Letters.

“The very large spin splitting that we see should allow the scaling of spintronic devices such as the spin transistor down to nanometre dimensions, thereby making it much easier to have the electrons travel from one side of the device to the other without scattering and flipping their spin,” says King. “This is also promising for room-temperature operation of these devices.”

Ulrich Zuelicke, a physicist specializing in spintronics at the Victoria University of Wellington, New Zealand, is impressed by the size of the Rashba effect, and says that it also has the advantage of being tunable in terms of the amount of spin splitting. However, he says that there may still be hurdles to overcome before an effective spin transistor is realized, such as the possibility of “spin relaxation”, which affects the rotation of electron spins.

Marco Grioni, a spintronics expert at the Ecole Polytechnique Fédérale de Lausanne in Switzerland, agrees that a reliable spin transistor will require more experiments. But he thinks that a working device may come sooner than we think. “Recent experience, namely with colossal-magnetoresistance devices [used in computer memory] has shown that industry can sometimes move extremely fast towards the practical application of a bright idea,” he says.

China–US neutrino facility opens

The first major science project in China to be built through a genuine international collaboration has begun operation. Once fully complete next year, the Daya Bay Reactor Neutrino Experiment – a partnership led by 19 Chinese and 16 US universities – will begin searching for the final undetermined neutrino “mixing angle”, known as θ₁₃.

Neutrinos are difficult to detect because they interact only weakly with matter. They come in three “flavours” – electron, muon and tau – that change, or “oscillate”, from one to another as they travel through space. The oscillation strength between different types of neutrino is characterized by three “mixing angles” – known as θ₁₂, θ₂₃ and θ₁₃ – with Daya Bay designed to determine θ₁₃ by measuring the disappearance of electron antineutrinos.

The US Department of Energy is providing about half of the cost of the $68m facility, with China paying for the other half and all of the civil-engineering costs. The Daya Bay experiment detects electron antineutrinos produced via nuclear beta decay at two neighbouring nuclear reactors – the Daya Bay and Ling Ao power plants, which are around 55 km north-east of Hong Kong.

The new neutrino facility will consist of three experimental halls containing identical neutrino detectors, each filled with 20 tonnes of gadolinium-doped liquid scintillator. When an antineutrino interacts in the liquid, it produces a flash of light that is picked up by a bank of photomultiplier tubes surrounding the liquid.

The first experimental hall, which is around 300 m from the Daya Bay reactor, is now complete, while the second experimental hall – 500 m from the Ling Ao reactor – is expected to be finished in the next few months. Both of these stations, known as “near detectors”, are 100 m underground to help shield them against unwanted cosmic rays and each contains two detectors to characterize the beam of electron antineutrinos from the reactors.

A third hall, around 2 km away from both reactors and 300 m below ground, will be ready by June next year. Containing four neutrino detectors, it will measure the electron-antineutrino flux at this greater distance, so that any drop in the strength of the signal relative to the near detectors will be an indication of neutrino oscillation.

“Among the current generation of reactor neutrino-oscillation experiments for measuring θ₁₃, Daya Bay has the best sensitivity,” says Daya Bay co-spokesperson Kam-Biu Luk, of the Lawrence Berkeley National Laboratory in California.

Measuring disappearance

The start-up of the Daya Bay experiment comes hard on the heels of two other neutrino successes. First, in early June, the Tokai-to-Kamioka (T2K) neutrino experiment in Japan measured, for the first time, muon neutrinos changing into electron neutrinos – a first step towards determining θ₁₃. A few weeks later, researchers at the MINOS experiment in the US detected a total of 62 electron-neutrino events – 13 more than the expected background.

At T2K, as well as at similar planned experiments such as the NOvA facility being built at Fermilab, the probability of electron-neutrino “appearance” depends on two unknown parameters: θ₁₃ and the neutrino phase factor, δ, which is non-zero if neutrino oscillation violates charge–parity (CP) symmetry. Daya Bay, however, is blind to the phase factor because the probability of electron-antineutrino disappearance depends only on θ₁₃, which means that researchers can focus on just its numerical value.
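In the standard two-flavour approximation (a textbook sketch, not spelled out in the article), the survival probability that Daya Bay measures is

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) \;\approx\; 1 - \sin^2(2\theta_{13})\,
\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
```

where L is the reactor–detector distance and E the antineutrino energy. The CP-violating phase δ does not appear anywhere in this expression, which is why a disappearance measurement can pin down θ₁₃ on its own.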

Yifang Wang, co-spokesperson for Daya Bay and a physicist at the Institute of High Energy Physics at the Chinese Academy of Sciences in Beijing, says that the three experiments will be complementary in searching for the phase factor. “If Daya Bay, NOvA and T2K find that θ₁₃ is non-zero, then the CP phase can be jointly measured or strongly constrained,” he says.

New collaborations

As Daya Bay is the first major US–China scientific collaboration, Luk expects the facility will provide a good testing-ground for more partnerships between the two countries. “Daya Bay provides a unique opportunity to join forces to tackle a burning question in neutrino physics and, more importantly, to learn how to work together,” says Luk.

That view is shared by Wang, who says that Daya Bay will be important for both countries. “We believe we will have a better understanding of each other, and the experience will help us for future collaborations,” he says.

Dave Wark of Imperial College London, a former international co-spokesperson for T2K, says it is good news that Daya Bay has begun running, but warns that it could be some time before the experiment starts to get reliable measurements, given how difficult neutrino-disappearance experiments can be. “If θ₁₃ is large, they have an easier target, but we are still talking about at most a few per cent effects in a disappearance experiment, so the measurements are tricky,” says Wark.

arXiv celebrates its 20th birthday


By Tushna Commissariat

Yesterday, on 14 August, the arXiv preprint server celebrated its 20th birthday. In 1991 physicist Paul Ginsparg, who had then just moved to the Los Alamos National Laboratory in New Mexico, set up the online physics archive, initially known as the Los Alamos Preprint Server (xxx.lanl.gov), as a place where high-energy physicists could share preprints of their upcoming work. The initial idea, according to Ginsparg’s recent comment piece in Nature, was for 100 or so full-text articles to be submitted every year, each of which would be stored for three months. “By popular demand, nothing was ever deleted,” writes Ginsparg.

The server received close to 400 subscriptions in the first six months alone. By 1999, when xxx.lanl.gov had changed its name to arXiv, the repository was collecting almost 2000 new articles every month. In 2001, when the server turned 10, Ginsparg moved to Cornell University in Ithaca, New York, and took the server with him. By 2008 the world’s favourite e-print server officially held half a million papers.

In 2008, when Physics World celebrated its 20th anniversary, Ginsparg recounted the early days of the Web and looked at how it has changed scientific communication. You can read his thoughts on the subject here.

Over the years, the arXiv server has had a huge impact on physics and paved the way for open-access publishing in scholarly journals. Many scientific journals now publish their content with unrestricted online access, allowing scientific information to become freely accessible to researchers and the public.

Now, the server contains “about 700,000 full texts, receives 75,000 new texts each year, and serves roughly 1 million full-text downloads to about 400,000 distinct users every week. It has broadened, first to cover most active research fields of physics, then to mathematics, nonlinear sciences, computer science, statistics and, more recently, to host parts of biology and finance infiltrated by physicists,” according to Ginsparg.

Early last year, librarians at Cornell University asked for extra external funding to support the server, as the running costs were “beyond a single institution’s resources”. Its budget – which covers personnel as well as operating expenses – was predicted to increase from $400,000 in 2010 to $500,000 in 2012. Ginsparg says that an international meeting of sponsor institutions will be hosted by the Cornell Library next month and will look into transforming the arXiv server into a more community-endorsed resource. “My hope is that the barrier to implementation of new ideas in this realm will remain low enough that, if all else fails, some young researcher elsewhere can launch another tiny ship on a fateful trip.”

Information paradox simplified

A black hole’s event horizon is the ultimate last-chance saloon: beyond this boundary nothing, not even light, can escape. But does this “nothing” include information itself? Physicists have spent the best part of four decades grappling with the “information paradox”, but now a group of researchers from the UK thinks it can offer a solution.

The researchers have created a theoretical model for the event horizon of a black hole that eschews space–time altogether. Their work also supports a controversial theory proposed last year that suggests that gravity is an emergent force rather than a universal fundamental interaction.

Paradoxical history

The information paradox first surfaced in the early 1970s when Stephen Hawking of Cambridge University, building on earlier work by Jacob Bekenstein at the Hebrew University of Jerusalem, suggested that black holes are not totally black. Hawking showed that particle–antiparticle pairs generated at the event horizon – the outer periphery of a black hole – would be separated. One particle would fall into the black hole while the other would escape, making the black hole a radiating body.

Hawking’s theory implied that, over time, a black hole would eventually evaporate away, leaving nothing. This presented a problem for quantum mechanics, which dictates that information can never be destroyed. If black holes withheld information forever in their singularities, there would be a fundamental flaw in quantum mechanics.
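For scale (these are standard results, not quoted in the article), the Hawking radiation has a temperature inversely proportional to the black hole’s mass, and the evaporation time grows as the cube of the mass:

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad t_{\mathrm{evap}} \propto M^3 .
```

A solar-mass black hole would radiate at roughly 6 × 10⁻⁸ K and take of order 10⁶⁷ years to evaporate – absurdly slow, but finite, which is what forces the question of where the information goes.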

The significance of the information paradox came to a head in 1997 when Hawking, together with Kip Thorne of the California Institute of Technology (Caltech) in the US, placed a bet with John Preskill, also of Caltech. At the time, Hawking and Thorne both believed that information was lost in black holes, while Preskill thought that it was impossible. Later, however, Hawking conceded the bet, saying he believed that information is returned – albeit in a disguised state.

At the turn of this century, Maulik Parikh of the University of Utrecht in the Netherlands, together with Frank Wilczek of the Institute for Advanced Study in Princeton, US, showed how information could leak away from a black hole. In their theory, information-carrying particles just within the event horizon could tunnel through the barrier, following the principles of quantum mechanics. But this solution, too, remained debatable.

Tunnelling through the event horizon

Now, Samuel Braunstein and Manas Patra of the University of York in the UK think they have formulated a tunnelling theory that looks rather more attractive than Parikh and Wilczek’s theory. “We cannot claim to have proven that escape from a black hole is truly possible,” they explain, “but that is the most straightforward interpretation of our results.”

Normally, theorists dealing with black holes have to wrestle with the complex geometries of space–time arising from Einstein’s theory of gravitation – the theory of general relativity. In their model, Braunstein and Patra say that the event horizon is purely quantum mechanical in nature, with bits of quantum “Hilbert” space tunnelling through the barrier.

The theorists find that even such a heavily simplified tunnelling model can reconstruct the spectrum of radiation thought to emanate from black holes. This is unlike Hawking’s pair-creation model, which leads to information loss and has always required many more theoretical details to work. Put simply, Braunstein and Patra argue that tunnelling seems far more likely to be an intrinsic feature of black holes – so, probably, information is not lost after all. Their findings are published in the latest issue of Physical Review Letters.

Gravity’s depth

There is yet another twist to the researchers’ work. Last year, string theorist Erik Verlinde of the University of Amsterdam, building on work by Ted Jacobson of the University of Maryland in the US, put forward a speculative idea for the origin of gravity. Under Verlinde’s proposal, gravity is not a fundamental interaction but emerges from the universe’s tendency to maximize disorder. Gravity is therefore an “entropic force” – a natural consequence of thermodynamics – much as a stretched rubber band pulls back because its molecules tend towards more disordered, coiled states.

Braunstein and Patra believe that their black-hole model counts in favour of Verlinde’s proposal. If gravity – not to mention inertia and space–time – is emergent, then it should not be needed to unravel the basic information-loss mechanism of black holes, which is just what the York researchers have shown. “This doesn’t prove that Verlinde is correct, but that his proposal ‘has legs’,” Braunstein tells physicsworld.com.

Steve Giddings, a physicist specializing in quantum gravity at the University of California, Santa Barbara, does not think that Braunstein and Patra have addressed “the most central questions” of Verlinde’s proposal. However, he says they have put forward another hint of an important link between quantum information and gravity. “An important challenge is to figure out whether the ideas enunciated by Verlinde and others can be given a more concrete foundation,” he adds. “This may be one more piece of that puzzle, but we’re not there yet.”

Electron bunches keep their shape

Researchers in Australia have developed a new source of cold electrons that could be useful for imaging tiny structures at atomic-length scales. The source, which makes use of ultracold atoms, can deliver intense and coherent electron pulses with specific shapes – including the Batman motif shown above. According to the team, such pulses could be used in the diffraction imaging of biological molecules, viruses and nanostructures.

Robert Scholten and colleagues at the University of Melbourne begin with a cloud of about one billion rubidium atoms that are laser-cooled to a few millionths of a degree above absolute zero. The team then fires two laser pulses at the atoms. The first pulse puts the atoms in an excited electronic state. The second pulse provides just enough energy to liberate those electrons and create a pulse of cold electrons with a temperature of about 10 K. Electron pulses with complex shapes can be created by passing the first pulse through a spatial light modulator before it strikes the atoms.

The pulses are accelerated to 1 keV by an electric field and then allowed to drift about 21 cm before being detected. Unlike pulses from a conventional hot-electron source, which blur rapidly owing to the random thermal motion of the electrons, these pulses retain their shapes when detected.
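As a rough back-of-the-envelope check (my own numbers, not from the paper), one can estimate how long a 1 keV pulse takes to cross the 21 cm drift region and how little a 10 K thermal spread blurs it in that time:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
K_B = 1.380649e-23           # Boltzmann constant, J/K

# Pulse parameters quoted in the article
energy_J = 1e3 * E_CHARGE    # 1 keV kinetic energy, in joules
drift_m = 0.21               # drift distance, m
temp_K = 10.0                # electron temperature of the cold source

# Non-relativistic speed after acceleration (v/c ~ 0.06, so this is fine)
v = math.sqrt(2 * energy_J / M_E)          # ~1.9e7 m/s
t_drift = drift_m / v                      # ~11 ns

# One-dimensional thermal velocity spread and the transverse blur it causes
v_thermal = math.sqrt(K_B * temp_K / M_E)  # ~1.2e4 m/s
blur = v_thermal * t_drift                 # ~0.14 mm

print(f"drift time = {t_drift * 1e9:.1f} ns, thermal blur = {blur * 1e3:.2f} mm")
```

At the ~1500 K typical of a hot cathode, the blur would be √150 ≈ 12 times larger, which is why shaped pulses from conventional sources smear out so quickly.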

High spatial coherence

Because the electron pulses retain their shape, they have a high degree of spatial coherence perpendicular to their direction of travel. This makes them ideal for diffractive imaging – which the researchers hope to carry out in the coming months. According to Scholten, the transverse coherence length is about 10 nm at the source, which is already good enough to do diffraction imaging of large biomolecules as well as small viruses.

“High spatial coherence means that [the electrons] propagate in a very parallel beam, so when they hit a target, we know where they came from,” explains Scholten. “If we then detect them after diffracting from the target, we know where they came from and where they were detected,” he adds. This information is used to infer the diffractive effect of the target, which is related to its structure. Such imaging systems would complement existing atomic-force microscopy (AFM) and electron-microscopy techniques.

Being able to shape the pulses should also help researchers get round the phenomenon of “Coulomb explosion”, which is a fundamental barrier to creating bright electron pulses. Because electrons are electrically charged, they repel each other, causing the pulse to expand as it travels – reducing its intensity. However, if the pulse is created with a specific shape – a uniform-density ellipsoid – Scholten says that it can be refocused using standard electron optics to cancel out the effects of the Coulomb explosion.

“Leapfrog arrangement”

Scholten is quick to point out that the original idea for how to create shaped pulses of cold electrons came from Edgar Vredenbregt, Jom Luiten and colleagues at the Eindhoven University of Technology in the Netherlands. As well as setting out the theory, the Dutch researchers have also worked on electron sources. “We work closely with them and, indeed, they are now adopting the techniques we [have developed], and we are sending them engineering drawings of our system,” says Scholten. “It’s a leapfrog arrangement – we built on what they did, using their experiences and suggestions to progress, and now we are returning the favour.”

Thomas Killian of Rice University in Texas tells physicsworld.com that “this new work should be viewed as a potential source of electrons that would be used in something like a scanning electron microscope”. He describes the work as “a great leap forward” in the development of low source temperatures and long transverse-coherence lengths for the electrons. “I am hopeful that this will accelerate the development of practical tools based on this technology,” he adds.

The research is published in Nature Physics 10.1038/nphys2052.

What makes a physicist a physicist?

By Margaret Harris

In last week’s Facebook poll we asked

Do you consider yourself a physicist?

This proved to be one of our most popular polls yet, with 214 responses. Of these, a narrow majority (55%) said that yes, they considered themselves physicists, while 15% chose “no” and 30% agreed that for them, “it’s complicated”.

A number of people were kind enough to explain their responses in the poll’s comments section. We really appreciate this, because it tells us a lot more than the raw numbers can. For example, judging from the comments, there seems to be some difference of opinion over the question of what makes a physicist a physicist.

For some, it’s primarily down to training or education. “I feel that I really can’t call myself a physicist because I don’t have anything hanging on the wall saying ‘Tom Sullivan is hereby granted and honoured as a physicist’,” wrote, er, Tom Sullivan, who answered “it’s complicated”. Another who mentioned training was Kate Oliver, a science writer who regularly contributes to Physics World’s “Lateral Thoughts” humour column. “I like to consider myself a physicist as I have the relevant training, read about it and think like it,” she wrote, explaining her “yes” vote. However, she added, “since I haven’t been in the lab for three years, my ‘physicistique’ may have expired”.

The idea that physicist-hood might carry an expiry date suggests an alternative definition – one that focuses not on who you are or what you know, but on what you do. (The philosopher Jean-Paul Sartre, who believed that “to do is to be”, would love this definition.) Like Oliver, Bruce Etherington is a science communicator with a physics degree, but he answered “it’s complicated” because “Most practising physicists would probably not consider me to be one.” Another in the “it’s complicated” camp, Steve Douglas, wrote “I think to be a physicist you’ve got to specialize in it, rather than just be pretty good at it.”

At physicsworld.com, we tend towards a pretty broad definition of physicist – one that encompasses, at minimum, those who have studied physics at degree level (or higher) and who remain interested in learning about it.

But since these Facebook polls are about your views, not ours, we’ll leave the last word to Michael Eliachevitch, a soon-to-be physics student who wrote that “being a physicist means [being] part of a large adventure to discover the world we are living in”. Good luck on your adventure, Michael!

Testing equipment for space

Where does space begin? The answer to this question is a little arbitrary because the Earth’s atmosphere does not abruptly end. In practice, the recognized start of space lies at an altitude of 100 km, known as the Kármán line, which is where the atmosphere becomes too thin to obtain sufficient lift for aeronautical purposes. The atmosphere at this altitude is still enough to create a significant drag on a satellite, which is why satellites must usually fly at some 250–300 km above the Earth’s surface.

At this altitude, the level of vacuum – about 10⁻⁶ mbar – means that the thermal performance of a satellite is dominated by radiative transfer rather than the convective transfer that dominates on the ground. This fact, combined with the complexity and cost of servicing an orbiting satellite, means that anything sent into space has to be rigorously tested in a simulated space environment using a “thermal vacuum test chamber”.

Fixed ultrahigh-vacuum systems used in particle accelerators and similar machines can be baked at high temperatures to drive off adsorbed water vapour and contaminants from chamber walls and test equipment. This is not possible with space-borne equipment because of the materials used and the highly sensitive nature of some of the components. A typical bake-out of a complete spacecraft is instead usually limited to temperatures of 50–60 °C, which means that the outgassing load remains very high during testing.

What this means in practice is that high-capacity pumping systems have to be used to reach and maintain the required vacuum levels. Typically, a vacuum level of between 10⁻⁷ and 10⁻⁵ mbar is needed in a test chamber that may vary in size between that required for a relatively small “cubesat” (measuring 10 × 10 × 10 cm) and one needed to test a large satellite the size of a bus.

For smaller systems, a turbo pump is sufficient to overcome the outgassing load from the chamber and test item, with oil-free systems being used to minimize the contamination risk. For larger systems, a combination of turbo pumps, 20 K cryo-pumps and 4 K helium cryo-panels is used to achieve the typical pumping requirement of 500,000 l s⁻¹. Cryo-systems are preferred because they are relatively cheap and are good at pumping water and nitrogen (which dominate in the chamber at that operating pressure) at high speeds.
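As an illustrative sizing calculation (the outgassing figures here are assumed, not taken from the article), the required pumping speed follows from the steady-state balance between outgassing throughput and target pressure:

```python
# Steady-state vacuum balance: pressure P = gas load Q / pumping speed S,
# so the required speed is S = Q / P.

# Assumed (illustrative) numbers for a large thermal vacuum chamber:
outgassing_rate = 1e-8   # mbar·l/(s·cm²), typical of clean, unbaked surfaces
surface_area_cm2 = 5e6   # ~500 m² of chamber wall plus test-item surface
target_pressure = 1e-7   # mbar, lower end of the range quoted in the text

gas_load = outgassing_rate * surface_area_cm2   # total throughput, mbar·l/s
required_speed = gas_load / target_pressure     # pumping speed, l/s

print(f"gas load = {gas_load:.2g} mbar·l/s, "
      f"required pumping speed = {required_speed:.2g} l/s")
```

With these (assumed) figures the answer lands at the 500,000 l s⁻¹ scale quoted above, which is why banks of cryo-pumps and cryo-panels, rather than turbo pumps alone, are needed for full-size spacecraft.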

Care has to be taken in preparing both the test item and the test chamber itself to ensure that both are clean from a molecular and particulate point of view. For optical instruments, molecular contaminants degrade the overall instrument sensitivity, which is exacerbated at individual wavelengths by specific absorption of particular materials, such as silicones. Particulate contamination contributes to a scattered background, which reduces image contrast and can reduce the life and stability of mechanisms in the optical chain.

Particulate contamination is relatively easy to control, as clean-rooms can be used to assemble the instruments, with regular inspections helping to maintain cleanliness to the required standard. Preventing and identifying molecular contamination, however, is not so simple. Using known low-outgassing materials as well as proven cleaning and assembly techniques provides a good starting point, but with most items a bake-out at the subsystem’s maximum permitted temperature is required – as is proof of the outgassing level obtained under common predefined conditions – before the subsystem can be integrated into the spacecraft. Invaluable tools for this work are a residual gas analyser, to identify the nature of contaminants, and a thermoelectric quartz-crystal microbalance, to measure the absolute rate of outgassing.

Another key part of the on-ground testing is to simulate the expected thermal conditions the satellite may encounter. This is usually done using a combination of local radiator panels, with a temperature-controlled shroud providing a representative global view. This allows the test item to be driven in a representative way between its expected operating temperatures. During this thermal cycling, functional and optical testing is performed to test and calibrate the onboard systems and ensure all the scientific requirements are met. For a small instrument, this may take just a few days, while a larger calibration campaign may take months of testing. The work may be painstaking, but it is essential to a mission’s success.

Hi-tech tattoos monitor brain waves

If you are having trouble with your heart or struggling to communicate because of a debilitating disease, a new hi-tech tattoo could soon offer help. That’s the claim of an international team of researchers that has created tattoo-like devices that can monitor heart beats, brain waves and muscle contractions.

The devices stick to skin without adhesives and move naturally with the body. The team has also integrated electrical and temperature sensors, transmitting antennas, receivers, power sources, lights and the gamut of basic circuit elements into the tattoos.

The conventional way of monitoring electrical signals given off by the heart and other organs involves electrodes smeared with conductive gel and held on with tape. However, such devices restrict the patient’s movement, and normal readings are difficult to obtain because the wearer is likely to be affected by the presence of the electrodes.

“If you want something that can be worn without irritation, with robust adhesion that doesn’t create any discomfort, you want it to match the mechanical properties and deformability of the skin,” explains John Rogers of the University of Illinois, Urbana-Champaign, who leads the research. With this goal in mind, the team came up with a circuit design that allows a semiconductor device to stretch and contract with human skin.

Semiconductor squiggles

While commercial semiconductors such as silicon and gallium arsenide make effective circuits, they are also stiff and brittle. To make them flex and stretch, the team shaped them into extremely thin squiggles. “We take a silicon wafer that is half a millimetre thick and slice very, very thin membranes,” Rogers tells physicsworld.com. This reduces the silicon to a thickness of 50 or 100 nm, which is enough to allow it to flex. To allow the silicon to stretch, the researchers then etched the material into serpentine shapes.

“The metal interconnect wiring, the contact pads, the resistors – pretty much everything, I think, can be fashioned into these shapes,” says Rogers. The squiggly components are assembled onto a sheet of polyimide and then the circuit is transferred to a breathable elastic sheet of modified polyester that is just 30 µm thick. The completed device attaches to skin much like a temporary tattoo, clinging by van der Waals interactions between the polyester sheet and the skin, without the need for adhesive.

Karen Cheung, a specialist in bio-micro-electromechanical systems at the University of British Columbia who was not involved in the work, says the system “represents a huge improvement over current non-invasive electrodes”.

Controlling a computer game

University of Illinois researchers Dae-Hyeong Kim, Nanshu Lu and Rui Ma placed tattoos on their foreheads, chests, legs and throats to test the abilities of these sensors in reading brain waves, heart beats, muscle contractions from walking and activity in the throat when speaking. In the speech trial, the sensor could differentiate between the spoken words “up”, “down”, “left” and “right”, allowing Ma to control a puzzle computer game called Sokoban.

“One can imagine using this technology to make huge improvements in assistive technology for patients of spinal-cord injury or neurodegenerative disease, such as amyotrophic lateral sclerosis,” says Cheung. She believes that these electronic tattoos could be worn comfortably for long periods of time, helping patients to “regain independence and quality of life”.

While wires were used to power and receive signals from the tattoos, the team also integrated power sources and transmitter antennas on other systems. However, what Rogers calls the “ultimate” device, combining sensors with a power source and a wireless data system, has yet to be made.

Tiny photovoltaic cells and induction coils, which convert an external alternating magnetic field into a current, were both tested as power sources. Rogers says that the induction coils are best for temporary uses, while photovoltaics need a storage device if they are to provide a reliable, long-term power supply. But batteries would increase the weight of the device, so the researchers have also suggested scavenging energy from the motion of the wearer.
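For the induction-coil option, the available voltage follows from Faraday’s law: a coil of N turns and area A in a field oscillating at frequency f with amplitude B0 sees a peak EMF of 2πf·N·A·B0. The numbers below are purely illustrative assumptions of ours, not parameters from the device:

```python
import math

def peak_emf(turns: int, area_m2: float, b0_tesla: float, freq_hz: float) -> float:
    """Peak EMF (volts) induced in a coil by a sinusoidal field B(t) = B0*sin(2*pi*f*t)."""
    return 2 * math.pi * freq_hz * turns * area_m2 * b0_tesla

# Illustrative values: a 1 cm^2, 50-turn planar coil in a 100 kHz, 10 uT field
emf = peak_emf(turns=50, area_m2=1e-4, b0_tesla=10e-6, freq_hz=100e3)
print(f"peak EMF ~ {emf * 1000:.1f} mV")  # ~31.4 mV
```

Millivolt-scale outputs like this explain why Rogers sees induction as suited to short, temporary uses close to a transmitter rather than continuous standalone power.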

The tattoos are described in Science 333 838.

Watch an interview with John Rogers here. It was first broadcast in January 2011.

What do you do for a living?

By Margaret Harris


We had so many responses to last week’s Facebook poll – which asked “Do you consider yourself a physicist?” – that we’re giving everyone a few more hours to respond before we blog about the results. So if you haven’t yet answered yes, no or it’s complicated, there’s still time to do so via our Facebook page.

In the meantime, I’d like to conclude this round of career-related polls with a somewhat less metaphysical question:

If you have a degree in physics, which option best describes what you do for a living?

We’re interested in sectors here, not specific job titles, so to get you started, we’ve listed five options – engineering, finance, IT, research and teaching – that more rigorous surveys suggest are popular among physics graduates. However, if you don’t fit in any of these boxes, you’re more than welcome to add your own category (legal? medicine/health? communications?).

Speaking of being rigorous, we at physicsworld.com are well aware that Facebook polls aren’t. However, that does not mean they’re useless, or even “just a bit of fun”. We’re interested in hearing from you and we take your opinions seriously – they help us keep in touch with what individual members of the physics community think and care about. So treat these polls like the office water cooler, departmental common room or anywhere else that people gather to share their views – and if you want proper statistics on physics education and research, try the Institute of Physics’ policy department instead.

Earth sciences: unlocking the secrets of a dynamic planet

The latest video report from our globe-trotting multimedia team offers an “up close and personal” take from the bleeding edge of the Earth sciences, as told to us by faculty and graduate students in the geosciences department at the University of Texas at Dallas (UT Dallas).

Filmed in the spring as an add-on to our coverage of the American Physical Society March Meeting in Dallas, the interviews cover a lot of ground – to be expected for a discipline that aims to unlock the secrets of the solar system’s most active planet.

Carlos Aiken and colleagues, for example, are using an approach called cybermapping (which integrates laser scanning, digital photography and satellite positioning, among other sensors) to build 3D photorealistic models of surface geology around the world. Their work is being applied in oil exploration and education (for virtual field trips).

Meanwhile, fellow researcher John Ferguson is applying a technique called 4D microgravity – essentially ultraprecise gravitational measurements, a few parts per billion of the Earth’s gravitational field – to monitor the success (or otherwise) of CO2 sequestration in underground reservoirs.
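To put “a few parts per billion” in context: one part per billion of the Earth’s surface field g ≈ 9.81 m/s² is about 10⁻⁸ m/s², which is roughly one microGal in the units gravimetrists use (1 Gal = 1 cm/s²). A quick sanity check of that conversion (our own arithmetic, not figures from the interview):

```python
G_SURFACE = 9.81        # m/s^2, nominal surface gravity
GAL = 1e-2              # 1 Gal = 1 cm/s^2, expressed in m/s^2
MICROGAL = 1e-6 * GAL   # 1 uGal = 1e-8 m/s^2

one_ppb = 1e-9 * G_SURFACE
print(f"1 ppb of g = {one_ppb:.3e} m/s^2 = {one_ppb / MICROGAL:.2f} uGal")
# 1 ppb of g = 9.810e-09 m/s^2 = 0.98 uGal
```

Changes at this microGal level are what repeated (“4D”) surveys must resolve to track fluid movement, such as injected CO2, in a reservoir over time.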

Another important strand of the UT Dallas geosciences programme is the use of remote sensing (specifically, space geodetic satellite observation) to understand changes in Earth systems over time. “There’s much more to it [remote sensing] than pretty pictures,” explains Alexander Braun.

“You can actually measure real physical parameters – such as the [Earth’s] gravity field or magnetic field – and, more importantly, you can detect surface deformation. The Earth is a very active planet and it is crucial for us to understand when and where it is moving.”

In the second video (below), senior scientists in the UT Dallas geosciences programme explain what attracted them to a career in the Earth sciences. It seems that if you like to travel and have a hankering for the outdoors, then Earth sciences could be just the ticket.

Or, as Bob Stern puts it, “It’s really a remarkable opportunity to get out and see things that no-one else gets to see – that you would never see as a tourist.”

 

Copyright © 2026 by IOP Publishing Ltd and individual contributors