
Big Bang turns youth on to physics

Geek chic: From left to right, Raj, Howard, Leonard and Sheldon of The Big Bang Theory build a fighting robot. (Courtesy: Warner Bros Television Entertainment)

By Hamish Johnston
Sheldon Cooper is an unlikely poster boy for any cause – he’s gangly, self-absorbed and sociopathic. Indeed, the fictional theoretical physicist is more like an alien observer of the human race than someone to aspire to.

But according to an article in the Observer, the popular character in TV’s The Big Bang Theory – played by Jim Parsons – and his entourage of geeky physics pals are responsible for “a remarkable resurgence of physics among A-level [high school] and university students.” Tom Whitmore, 15, told the paper: “The Big Bang Theory is a great show and it’s definitely made physics more popular”.

According to the paper, the number of British students studying physics at university has jumped by 10% since 2008 when the show was first broadcast in the UK. However, the article does point out – as Sheldon surely would – that the number of pupils doing physics A-levels (senior high-school courses) in the UK had been rising since 2006. Also, the recent popularity of TV/radio presenter and physicist Brian Cox, and all the publicity surrounding the Large Hadron Collider, are put forth in the article as contributing factors in the physics boom.

If UK students are being turned on to physics by The Big Bang Theory, the irony will not be lost on physics societies worldwide, which have spent much time and effort trying to rebrand physics as a cool subject studied by normal people who go on to success in the wider world. It could be that a better way to get teens interested in physics is to appeal to their “inner geek”.

You can read the Observer article here.

Three new elements get official backing

By Hamish Johnston

Darmstadtium (Ds), roentgenium (Rg) and copernicium (Cn) are here to stay now that the International Union of Pure and Applied Physics (IUPAP) has approved the names of these three new elements.

The good news came yesterday at the General Assembly of IUPAP, which is running this week at the Institute of Physics (IOP) in London.

Robert Kirby-Harris, chief executive of the IOP and secretary-general of IUPAP, said, “The naming of these elements has been agreed in consultation with physicists around the world and we’re delighted to see them now being introduced to the periodic table.”

The approval ends the long process of naming a new element, which typically begins with its discovery at a nuclear physics lab. Indeed, these latest three were all discovered at the GSI lab near the German city of Darmstadt – which lent its name to Ds.

Both Rg and Ds were first spotted in 1994 and have 111 and 110 protons, respectively. With an atomic number of 112, Cn first burst on the scene in 1996.

Why has it taken so long for official approval? After GSI announced a discovery, it had to be reproduced at another facility – and then both the International Union of Pure and Applied Chemistry (IUPAC) and IUPAP had to be convinced of the discovery. Then, the scientists who made the discovery suggest a name to the IUPAC/IUPAP Joint Working Party on the Discovery of Elements, which then recommends that the name be adopted. Finally, the name must be adopted by the General Assembly of IUPAP.

If you’d like to know more about how these and other elements were found, we’ve just published an article by Paddy Regan, a nuclear physicist at the University of Surrey who works on the RISING collaboration at GSI. You can read it here.

Beef over magnetic cows keeps on sizzling



By Hamish Johnston

In 2008 zoologist Sabine Begall from the University of Duisburg-Essen in Germany and colleagues shocked the bovine world with the claim that cattle prefer to align their bodies along the Earth’s magnetic field – that is, along the north–south direction. The team used images from Google Earth to study the orientation of 8500 cattle from 208 pastures around the world to come to their conclusion, which was described in the Proceedings of the National Academy of Sciences.

But then in January of this year, Jiri Hert from Charles University in the Czech Republic and colleagues reported that there is no evidence for such alignment in their Europe-wide study of some 3412 individual cows in 322 herds (arXiv:1101.5263). This work was later published in the Journal of Comparative Physiology A. You can read our take on this development here.

Now, Begall and colleagues have hit back with a paper in that very same journal, where they re-analyse data used by Hert and coworkers. In their paper, Begall and team argue that about half of Hert’s data are noise – that the resolution of the corresponding images is too poor, or that the cattle are on slopes or in other locales that could affect their orientation.

In their paper, Begall et al. take a fresh look at Hert’s data and claim to see that “cattle significantly align their body axes in north–south direction”. Furthermore, the researchers say that they have uncovered evidence that resting cattle are even more likely to align themselves than their standing neighbours.
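For readers who fancy the armchair version, the standard tool for deciding whether a set of body axes is significantly aligned is a Rayleigh-type test on doubled angles (a cow facing north and a cow facing south share the same axis, so each angle is doubled before averaging). Here is a minimal sketch in Python using synthetic data – the herd size and the 25° scatter are invented for illustration, not taken from either paper:

```python
import math
import random

def axial_rayleigh(angles_deg):
    """Rayleigh test for axial (bidirectional) data: each angle is
    doubled so that 10 deg and 190 deg count as the same body axis."""
    doubled = [math.radians(2 * a) for a in angles_deg]
    n = len(doubled)
    c = sum(math.cos(t) for t in doubled) / n
    s = sum(math.sin(t) for t in doubled) / n
    r = math.hypot(c, s)   # mean resultant length: 0 (uniform) to 1 (aligned)
    z = n * r * r
    p = math.exp(-z)       # large-sample approximation to the p-value
    mean_axis = (math.degrees(math.atan2(s, c)) / 2) % 180
    return r, p, mean_axis

# Synthetic herd: 300 body axes scattered around north-south (0/180 deg)
random.seed(1)
herd = [random.gauss(0, 25) % 180 for _ in range(300)]
r, p, axis = axial_rayleigh(herd)
print(f"r = {r:.2f}, p = {p:.2g}, mean axis = {axis:.0f} deg")
```

Running the same test on genuinely scattered angles gives r near zero and a large p-value, which is essentially the disagreement between the two camps in statistical form.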

Hynek Burda, who works with Begall, described the exchange of views as “a holy war against magnetoreception”. We look forward to the next salvo from Hert’s army.

In the meantime, you could do a bit of research of your own. To get you started I’ve embedded a Google image of what appears to be cattle grazing in a field somewhere in England. But beware, apparently it can be difficult to tell the difference between sheep and cattle!

Silicon carbide shows promise for quantum computing

Silicon carbide, a material that is already widely used in high-power electronics, could also be used for quantum information processing. So say researchers in the US who have studied point defects in the material. These defects, similar to ones found in diamond, contain electron spin states that can be controlled coherently and manipulated as quantum bits (qubits) using light.

A quantum computer exploits the ability of a quantum particle to be in a “superposition” of two or more states at the same time. By encoding information into qubits based on such particles, a quantum computer could, in principle, outperform a classical computer on certain tasks. For example, a quantum computer should be good at code decryption, because its processing speed should increase exponentially with the number of qubits involved. In practice, however, physicists have struggled to create even the simplest quantum computer because the fragile nature of these quantum states means that they are easily destroyed and difficult to control.
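The exponential scaling is easy to make concrete by counting amplitudes: describing n qubits exactly requires 2^n complex numbers, which is the root of both the promised speed-up and the difficulty of simulating quantum computers classically. A quick back-of-the-envelope:

```python
def state_vector_size(n_qubits):
    """Number of complex amplitudes needed to describe n qubits exactly."""
    return 2 ** n_qubits

for n in (1, 2, 10, 50):
    amps = state_vector_size(n)
    # ~16 bytes per amplitude if stored as double-precision complex numbers
    print(f"{n:2d} qubits: {amps:.4g} amplitudes (~{amps * 16:.4g} bytes)")
```

At 50 qubits the state vector would already need petabytes of classical memory.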

Recently, diamond-based qubits created a flurry of interest because their “decoherence” time (the time that they can retain their logic state) is much longer than the time it takes to perform a logical operation – even at room temperature. What is more, they can be read out using light, which means that they could potentially be integrated into photonic quantum-information-processing systems. However, making these qubits remains a challenge – both in terms of cost and of scaling up the technology to make multi-qubit integrated circuits.

Now, in a new paper in Nature, David Awschalom and colleagues at the University of California, Santa Barbara show that similar qubits can be made in silicon carbide. This material is already widely used in the electronics industry and therefore should be easier to scale-up than diamond.

Divacancies

The researchers looked at a specific crystal structure of silicon carbide called 4H-SiC that contains naturally occurring defects called “divacancies”. These correspond to a missing silicon atom next to a missing carbon atom in the crystal. They are very much like the defects in diamond known as “nitrogen-vacancy centres”, which have already been used as qubits. Both types of defect form a multi-electron system that has a net angular momentum (or spin) that can be aligned either parallel (“1”) or antiparallel (“0”) to an applied magnetic field, and can thus be exploited as a qubit. Some SiC defects interact with light and have long decoherence times at room temperature – just like those in diamond.

As in previous experiments on diamond nitrogen-vacancy centres, Awschalom and co-workers measured the spin of the divacancies in 4H-SiC using photoluminescence. This involved shining laser light onto the sample and collecting the fluorescence subsequently emitted by it. And, as for diamond nitrogen vacancies, the fluorescence of the silicon-carbide divacancies depends on their spin state, making it possible to read out the state of the qubits in this way.

Quantum writing

By then applying an oscillating magnetic field at microwave frequencies to the sample, the researchers were able to perform electron spin resonance. Here, the spin of a divacancy oscillated between its two qubit states, something that can be used to “write” quantum information to the sample. Again, the technique has already been tried and tested in diamond.
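On resonance, this “Rabi oscillation” between the two spin states follows a simple sin² law, which is what allows a microwave pulse of the right length to act as a NOT gate. A sketch of the idea (the 10 MHz Rabi frequency is an illustrative figure, not a value measured in the paper):

```python
import math

def rabi_p1(t_ns, f_rabi_mhz):
    """Probability that a resonantly driven spin, starting in |0>,
    is found in |1> after driving for t_ns nanoseconds."""
    omega = 2 * math.pi * f_rabi_mhz * 1e6   # angular Rabi frequency, rad/s
    return math.sin(omega * t_ns * 1e-9 / 2) ** 2

# With a 10 MHz Rabi frequency, a 50 ns pulse is a "pi pulse" (a NOT gate)
for t in (0, 25, 50, 75, 100):
    print(f"t = {t:3d} ns  P(|1>) = {rabi_p1(t, 10):.2f}")
```

Stopping the drive halfway through a pi pulse leaves the spin in an equal superposition of “0” and “1” – the resource a qubit offers over a classical bit.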

While these measurements suggest that SiC could find a role in quantum computing, Andrew Dzurak of the University of New South Wales sounds a note of caution in a related article in Nature. “Although silicon-carbide qubits offer enticing prospects for quantum computing, a number of challenges for this new technology remain. First, the qubit operations reported here were performed on a large ensemble of qubits, so the next step will be to demonstrate control and measurement of a single qubit. More significantly, technologies must be developed to ‘engineer’ thousands of individually addressable divacancy qubits, rather than merely identifying accidentally located defects. Engineering will also be needed to configure pairs of adjacent qubits reliably, to enable controlled two-qubit operations – another vital requirement for quantum computation,” he writes.

However, if these challenges can be met, silicon carbide could become a “serious candidate” for large-scale quantum computing, Dzurak says.

The research is described in Nature 479 84.

What is your favourite physical constant?

By James Dacey


A fascinating paper published in Physical Review Letters this week reports that one of the fundamental constants of our universe – the fine-structure constant (α) – may in fact vary depending on where you look in the heavens. The paper was actually available on the arXiv preprint server more than a year ago, and the implications of this bold claim were discussed at the time in a news article by physicsworld.com editor, Hamish Johnston. Along with other fundamental constants, the fine-structure constant determines the masses and binding energies of elementary particles, including dark matter – so it’s a big claim!

But in addition to the huge physical questions raised by this finding, it is also quite strange to think that such a familiar constant, 1/137, may not be so constant after all. Maybe I’m being a tad melodramatic about this, but I find it quite sad to think that this trusty constant, which was etched into my brain as an undergraduate, could somehow be subject to the whims of the universe just like the rest of us. But when it comes to holding an emotional attachment to the fundamental constants of physics, I somehow doubt that I am alone.
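For anyone who wants to check the famous number, α follows directly from the CODATA values of the elementary charge, the reduced Planck constant, the speed of light and the vacuum permittivity:

```python
import math

# 2018 CODATA values (SI units)
e = 1.602176634e-19        # elementary charge, C (exact)
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s (exact)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

alpha = e ** 2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.10f}, 1/alpha = {1 / alpha:.3f}")
```

The reciprocal comes out as 137.036, a touch off the beloved round 1/137.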

In this week’s poll we want to know: What is your favourite physical constant?
Planck’s constant
Gravitational constant
Boltzmann’s constant
Charge of the electron
Avogadro’s number
Speed of light in a vacuum

To cast your vote, please visit our Facebook page. And of course, there are plenty of other physical constants out there, so if your favourite does not appear on this list of big-hitters then please feel free to let us know in a comment on the Facebook poll.

In last week’s poll, we considered another feature of the human side of science – the art of writing scientific papers. We asked whether you think that papers would be more informative if they were written in a first-person narrative where researchers told the “story” of their research as well as the scientific results. We had a lot of responses and opinion was fairly evenly divided with 56% of respondents replying “yes” and 44% replying “no”.

The question came about because last week the Royal Society opened up its entire historical journal archive to the public, which included Newton’s first published scientific paper. In the old text, Newton presents his New Theory of Light and Colors in a relaxed first-person narrative, which gives the readers an insight into the great physicist’s thought processes. Today’s scientific papers stand in stark contrast to this, being written largely in the third person about experiments that took place with no apparent human input.

One respondent who voted “yes” is Abhinav Deshpande, a physics student at the Indian Institute of Technology in Kanpur. He commented: “Even though [the papers] wouldn’t necessarily be more informative, the story of how the discovery came about or how the key idea hit the author is an inspiring one for young, inexperienced readers like myself.”

But another student, Matthew O’Neil who is doing a degree in biochemistry at Keele University in the UK, has different ideas. He thinks that a first person narrative would make a paper less, not more, informative. “The idea of a scientific paper is to clearly and concisely inform the reader of methods, results, analysis and conclusions,” he commented.

Perhaps a third way has been found, however, by a respondent called Chi Ming Hung. He believes that, for clarity, the main body of a scientific paper should still be in the third person, but that it would be useful for authors to add a section about the “story” of the research, perhaps to the appendix. “This is useful in case somebody has similar ideas and need some inspiration and thus can benefit from the subjective train-of-thought behind the research.”

Thank you for all of your responses and we look forward to hearing from you again on the Physics World Facebook page.

Memristor memory could be used in wearable electronics

Researchers in South Korea are the first to make a bendable digital memory that can store data without constant power. Such memories could find applications in electronic paper for more comfortable reading and in wearable computers, which could be used in medical monitoring and treatment.

A memristor “remembers” the amount of charge that has flowed through it, with the information being stored in terms of the device’s resistance. While the concept of the memristor was first proposed in 1971, it was not until 2008 that the first practical device was made.

Since then, several research groups have explored the development of flexible memories by placing memristors in cross-point configurations. Two arrays of parallel metal lines are placed one on top of the other in a grid; where the lines cross, they are connected with a memristor. By running current along the two wires that cross a particular memristor, the researchers can – in theory – read, write or erase information encoded in its resistive state.

Enriched with oxygen

Keon Jae Lee of the Korea Advanced Institute of Science and Technology (KAIST) and colleagues made their memristors from amorphous titanium dioxide with aluminium electrodes at the top and bottom. The team deposited titanium dioxide in atom-thick layers between the electrodes, leaving the interface at the top electrode enriched with extra oxygen ions because of oxygen’s affinity for aluminium.

When a negative voltage is applied to the top electrode, negative ions are pushed into the titanium dioxide, thus reducing the material’s electrical resistance. This low-resistivity state is the equivalent of a binary “1”, and it endures for at least 2.7 hours even when the voltage is switched off.

Switching the polarity of the electrodes causes the positive top electrode to draw the oxygen back out. This returns the memristor to the high-resistance “0” state. The state of the memristor can be read out by applying a small read voltage of –0.5 V and then measuring the current. The current that runs through the memristor in its low-resistance state is 50 times higher than the current through a high-resistance memristor.

Simple, but flawed

Unfortunately, this simple set-up has a major flaw. A read current that is meant to probe a high-resistance memristor would instead prefer to travel through its lower-resistance neighbours – and it can. This makes the resistance of the read memristor look smaller than it really is. As long as these “sneak paths” exist, the memory cannot be accurately read, written or erased, says KAIST’s Seungjun Kim. To cut off the sneak paths, the team paired each memristor with a flexible silicon transistor, which prevents current from flowing through the memristor unless it is selected for the operation.
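The arithmetic behind the sneak-path problem is easy to reproduce. In a selector-less 2 × 2 array, reading one cell also measures the series chain of the other three cells sitting in parallel with it; if those three are in the low-resistance state, a stored “0” is badly misread. A sketch (the 1 kΩ and 50 kΩ values are illustrative, chosen only to mirror the 50× ratio quoted above, not taken from the paper):

```python
def apparent_resistance(r_target, r_others):
    """Resistance seen when reading one cell of a selector-less 2 x 2
    crossbar: the sneak path runs through the other three cells in
    series, and that chain sits in parallel with the target cell."""
    r_sneak = sum(r_others)
    return r_target * r_sneak / (r_target + r_sneak)

R_LRS, R_HRS = 1e3, 50e3        # illustrative values with a 50x ratio
r_read = apparent_resistance(R_HRS, [R_LRS] * 3)
print(f"true HRS: {R_HRS / 1e3:.0f} kOhm, measured: {r_read / 1e3:.1f} kOhm")
# the stored "0" reads back close to a "1" -- hence the per-cell transistor
```

With these numbers the 50 kΩ cell reads back at under 3 kΩ, which is why each memristor needs its own selecting transistor.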

The team made arrays of 64 memristor–transistor bits, eight to a side on a flexible plastic base. To demonstrate the memory’s pliability, the team gradually bent it from a curvature radius of 28.6 mm to 8.4 mm. The researchers measured the high- and low-resistive states along the way, reporting no significant change to either state. To test the material for fatigue, the team flexed an array on a 2.8 cm-long piece of plastic so that its edges were 1.8 cm apart. The team bent and relaxed the memory 1000 times, observing little alteration in performance.

“Crucial step”

Wei Lu at the University of Michigan in Ann Arbor, who was not involved in the research, calls the work a “crucial step” towards flexible memory devices because it makes the jump from single cells to small arrays. However, he points out that the team only tested 2 × 2 subsets of their 8 × 8 arrays, and that the researchers will need to prove the technology in larger arrays to show that it can be scaled up. He also notes that the 64-bit memory covers a “whopping” square centimetre. “For comparison, modern solid-state memories are more than 100 million times denser,” he says.

The team is already investigating options for more compact arrays, combining a diode and unipolar resistor for each bit of its next flexible memory. In addition to taking up about a third less space than memristor–transistor bits, Lee suspects these will be easier to mass produce.

The work is described in Nano Letters.

Images from Turkey




Institute of Accelerator Technologies, Ankara University
(Credit: Michael Banks)

By Michael Banks

If my recent travel (see here and here) to Ankara, Turkey, left you wanting to see more images from the trip, then fear not.

On the Physics World Flickr page, you can now peruse selected images from the visit to the Institute of Accelerator Technologies at Ankara University, as well as the Proton Accelerator facility operated by the Turkish Atomic Energy Authority.

Look out for further coverage from my trip to Turkey in future issues of Physics World.

A taste of the exotic

A rare event took place in June, when not one but two new elements were added to the periodic table. The heaviest elements yet discovered, these new entries have 114 and 116 protons in their nuclei, respectively. Although they have not yet received names – the heaviest named element is currently copernicium, which has 112 protons – their presence on the table was recognized by the Joint Working Party of the International Union of Pure and Applied Physics and its sister body in pure and applied chemistry. Officially speaking, these elements exist.

The same meeting of the Joint Working Party also reviewed the evidence for three other would-be elements in the table containing 113, 115 and 117 protons, respectively. Signs of these elements had been seen in experiments at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, and the results were published in the scientific literature. However, on this occasion, the committee decided not to formally acknowledge the existence of the elements until more definitive and cross-checking measurements could be performed.

The drive to produce and identify new elements – and in the process redefine the limits of the periodic table – is a frontier field of nuclear physics, but proving that a new element has been created is far from easy. Similar scientific challenges exist when the object of the experiment is to create nuclei where the ratio of neutrons (N) to protons (Z) is unusually high or low. Although these “exotic” nuclei are chemically identical to their more stable cousins and so occupy the same slots in the periodic table, their differing total masses can radically alter how their nuclei behave. Indeed, such nuclear species often live only fleetingly before they radioactively decay to more stable forms. But what these decay processes can do is provide valuable insights into the underlying structure of atoms. This helps us understand how protons and neutrons link together to form bulk nuclear matter, and thus how stable elements were originally created. A century after Ernest Rutherford’s paper on the existence of the atomic nucleus, we are still gaining new insights into the mysteries of the nuclear world.

Things fall apart

The periodic table contains 92 elements that occur on Earth naturally, ranging from hydrogen, which has just a single proton (Z = 1), up to uranium with 92. None of the elements beyond bismuth (Z = 83) have radioactively stable isotopes; but even for the lighter elements, stable isotopes represent a rather small subset of all possible nuclear systems. In fact, of the 7000 or so possible nuclear species that are thought to exist, only 286 proton–neutron combinations (or about 4% of the total) have decay half-lives of more than 500 million years (making them effectively stable). The number of stable isotopes also differs for each element. For example, tin (Z = 50) has 10 stable isotopes, while technetium (Z = 43), promethium (Z = 61) and polonium (Z = 84) have none.

Within this family of stable nuclei, certain patterns can be discerned. One of these concerns the total mass of the nucleus, A, which is the sum of the protons and neutrons in a nucleus (Z + N). Chains of nuclei with the same value of A are called isobars. If A is odd, there is usually only a single stable combination of neutrons and protons for that particular “isobaric chain”, while for even values of A below 200 there are usually two stable isobars. For example, there are two stable A = 86 isobars, namely krypton-86 and strontium-86, but only one stable A = 85 isobar in the form of rubidium-85.

Another good indicator of nuclear stability is the ratio of neutrons to protons in a nucleus (N/Z). Light nuclei with A < 40 are most stable when the nucleus contains nearly equal numbers of neutrons and protons (N/Z ≈ 1). When A = 16, for example, the most stable system is oxygen-16, which has eight protons and eight neutrons. Heavier nuclei, in contrast, are most stable when N > Z; the most common isotope of lead, for example, has 126 neutrons and 82 protons, making N/Z ≈ 1.54.

Such patterns are important for anyone studying heavy and exotic nuclei for two distinct reasons. First, they are linked to some very interesting topics in the underlying theory of nuclear structure, particularly the existence of so-called “magic configurations” of protons and neutrons that are more tightly bound – and hence more stable – than their nuclear neighbours. Although the total energy needed to break a nucleus into its constituent protons and neutrons – i.e. its binding energy – increases almost linearly with the total number of nucleons in the nucleus (N + Z), it turns out that additional binding is associated with nuclei where either N or Z equals 2, 8, 20, 28, 50, 82 or 126. Nuclei that contain these “magic numbers” of protons or neutrons are like the nuclear equivalent of the noble gases: their additional stability is caused by their outer shell of nucleons being full, or “closed”. Empirically, magic nuclei arise from an additional “spin-orbit” term in the nuclear Hamiltonian, which causes certain orbitals with lots of angular momentum to have significantly lower energies.
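The “almost linear” growth of binding energy – and the smooth background against which the magic numbers stand out – is captured by the semi-empirical mass formula of the liquid-drop model. A sketch using one common set of textbook coefficients (the exact values vary slightly between fits):

```python
def semf_binding_mev(Z, N):
    """Liquid-drop (semi-empirical mass formula) binding energy in MeV.
    Smooth by construction: magic-number nuclei show up experimentally
    as being *more* bound than this estimate predicts."""
    A = Z + N
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18   # MeV, one common fit
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / A ** 0.5       # even-even nuclei gain binding
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / A ** 0.5      # odd-odd nuclei lose binding
    else:
        pairing = 0.0
    return (aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (A - 2 * Z) ** 2 / A + pairing)

for name, Z, N in [("O-16", 8, 8), ("Ni-56", 28, 28), ("Pb-208", 82, 126)]:
    A = Z + N
    print(f"{name}: B/A ~ {semf_binding_mev(Z, N) / A:.2f} MeV per nucleon")
```

The formula reproduces the familiar 8 MeV per nucleon plateau; the shell-model corrections for magic N or Z are precisely what it leaves out.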

The second reason for being interested in patterns of stable nuclei is that they help us to predict and understand how an unstable nucleus will decay. For example, nuclei that have an excess of neutrons compared with the most stable isobar for a given A can spontaneously change one of their neutrons into a proton, emitting an electron (β) and an antineutrino in the process. But if a nucleus has an excess of protons compared with the stable isobar, a proton can change into a neutron, emitting a positron (β+) and a neutrino.

Although β emission is the main decay mode for most radioactive isotopes with a mass of less than 209, other forms of radioactive decay are also energetically possible. Nuclei with the fewest neutrons for a given chain of nuclei containing the same number of protons, for example, can decay by emitting a proton directly from the nucleus. This decay mode has been observed for the most neutron-deficient isotopes of odd-Z elements. In this case, the only thing stopping the final, unpaired proton from being ejected instantaneously is that the proton must quantum-mechanically tunnel through an energy barrier formed by a combination of the Coulomb repulsion between the protons in the nucleus and the angular momentum of the final, unpaired proton.

For a handful of nuclei with an even number of protons but very few neutrons, including iron-45, nickel-48 and zinc-54, nuclear physicists have recently observed a new and very rare decay mode of correlated two-proton emission. However, such “low-N, even-Z” nuclei more commonly decay by emitting an alpha particle (two neutrons and two protons bound together). In many cases, the nucleus that remains – the so-called daughter nucleus – is left in an excited state that usually decays to its ground state by emitting gamma-ray photons with a characteristic energy, which can give useful clues to its internal structure. It turns out that certain nuclei require more energy to raise the nucleus into an excited state than others, leading to systematically higher excitation energies (figure 1); these are the nuclei containing magic numbers of protons or neutrons. But there are some even more stable, “doubly magic” nuclei – with magic numbers of both protons and neutrons – containing first excited states of even higher energy still.

Some nuclei, however, do not emit gamma rays at all when they decay from an excited state to their ground state, but instead transfer the released energy to an atomic electron that gets ejected with a characteristic energy. An electron from an outer shell then drops into the vacancy, emitting an X-ray in the process. These X-rays are particularly useful for identifying the element from which they came since their energies are proportional to the square of the number of protons in the atomic nucleus – a relationship known as “Moseley’s law” after the early 20th-century British physicist, Henry Moseley, who discovered it. As for the heaviest elements, with the largest number of protons, they can spontaneously break up into two smaller, more energetically favourable fragments through the process of fission. However, in many cases this mode is not particularly favoured over the competing alpha-decay mode.
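Moseley’s law is simple enough to use as a working tool. For the Kα line the textbook form screens one unit of nuclear charge, giving E ≈ (3/4) × 13.6 eV × (Z – 1)², and inverting it identifies the element from a measured X-ray energy. A sketch:

```python
import math

RYDBERG_EV = 13.6057

def kalpha_energy_kev(Z):
    """K-alpha X-ray energy from Moseley's law: an n=2 -> n=1 transition
    with one unit of nuclear charge screened by the remaining K electron."""
    return 0.75 * RYDBERG_EV * (Z - 1) ** 2 / 1000.0

def z_from_kalpha(energy_kev):
    """Invert Moseley's law to identify the emitting element."""
    return round(1 + math.sqrt(energy_kev * 1000.0 / (0.75 * RYDBERG_EV)))

# Copper's K-alpha line sits near 8.05 keV
print(f"Cu (Z = 29) K-alpha ~ {kalpha_energy_kev(29):.2f} keV")
print(f"an 8.05 keV line points to Z = {z_from_kalpha(8.05)}")
```

Because the energies scale as the square of the nuclear charge, neighbouring elements are comfortably resolved, which is what makes the X-rays such a clean elemental fingerprint.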

For physicists who study exotic nuclei, these different types of decay are like the loops and lines in a human fingerprint. Their presence or absence can prove whether a specific nucleus was present in a particular experiment, as well as providing direct insights into the arrangement of protons and neutrons in the nucleus being studied. But with these exotic nuclei not occurring naturally on Earth, the question is, how can we make them?

Getting to the limits

Heavy exotic nuclei can be synthesized using a variety of experimental techniques. In one method, known as “fusion evaporation”, intense beams of positively charged ions of radioactively stable isotopes, such as calcium-48, nickel-64 and zinc-70, are accelerated and fired at a thin, isotopically purified metallic foil. Since they are both positively charged, the ions and the target nuclei experience a mutual electrostatic repulsion. But if the ions are accelerated to energies just above this repulsion energy, the beam and target nuclei can overcome the repulsion and fuse, thus combining individual protons and neutrons into a single, hot, compound nucleus. The resulting nuclei then cool down by rapidly “boiling off” light particles, such as neutrons, protons and alpha particles, over a period of less than 10^–15 s, leaving cool, residual nuclei.
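The repulsion energy that the beam must overcome can be estimated by evaluating the Coulomb potential at the point where the two nuclear surfaces touch. A rough sketch (the calcium-48 on curium-248 pairing is an illustrative choice, and the touching-spheres formula ignores deformation and barrier dynamics):

```python
def coulomb_barrier_mev(Z1, A1, Z2, A2):
    """Rough Coulomb barrier for two touching spherical nuclei, taking
    R = 1.2 fm * A^(1/3) and e^2 / (4 pi eps0) = 1.44 MeV fm."""
    r_touch_fm = 1.2 * (A1 ** (1 / 3) + A2 ** (1 / 3))
    return 1.44 * Z1 * Z2 / r_touch_fm

# e.g. a calcium-48 beam on a curium-248 target (a Z = 116 compound system)
barrier = coulomb_barrier_mev(20, 48, 96, 248)
print(f"barrier ~ {barrier:.0f} MeV")
```

The estimate comes out at a couple of hundred MeV, which is why such experiments need dedicated heavy-ion accelerators rather than tabletop sources.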

These nuclei can be identified either indirectly, by detecting the particles that boil off from the fused compound system, or directly using a device known as a mass separator. This is essentially a dipole magnet, and its electric and magnetic fields can be set so that only those nuclei of a certain mass and electrical charge travel to the end of the separator. The device thereby splits the “interesting” residual nuclei from “uninteresting” species, such as unreacted beam particles and fission fragments created when the beam and target nuclei interact. Once the residual nuclei have been separated out, their decay properties can be studied in detail, away from the large background of beam nuclei. The process is rather like finding a needle (the rare exotic nuclei) in a haystack of other reaction products and beam particles. Facilities that use fusion evaporation followed by mass separation to form and study the heaviest nuclei exist at the Argonne and Lawrence Berkeley national laboratories in the US, the cyclotron laboratory in Jyvaskyla, Finland, and the JINR.

A second technique for creating and studying exotic nuclei – typically somewhat lighter (A < 238) neutron-rich isotopes – is known as the “in-flight method”, in which targets such as beryllium are bombarded by beams of heavy, stable ions, such as xenon-136, lead-208 or uranium-238. These beams have such high energies – typically hundreds of mega-electron-volts per nucleon or 100 times higher than in fusion-evaporation reactions – that their particles are moving much faster than the individual protons and neutrons within their nuclei. When the beam collides with the target, the nuclei do not fuse, as with the fusion-evaporation method, but instead produce a wide variety of nuclei – via projectile-fragmentation or projectile-fission reactions – that weigh less than the species in the primary beam. Moreover, the high velocity of the initial beam means that the reaction products are focused in the forward direction, along with the unreacted beam particles.

To separate out the exotic nuclei, the particles are passed through a “nuclear fragment separator”, such as the FRS facility at the GSI Helmholtz Centre for Heavy-Ion Research in Darmstadt, Germany (figure 2). These instruments consist of a series of detectors that measure the energy of the beam as it passes through, with the energy loss at each detector being related to the proton number of the nucleus. Researchers can calculate the mass-to-charge ratio for each transmitted nucleus by combining information on the nuclei’s “time of flight” between two points along the path of the separator with the strengths of the device’s magnetic fields. As with the fusion-evaporation method, once these exotic nuclei have been transmitted to some final focus, their decay properties can be studied in detail, event by event.
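The mass-to-charge reconstruction described above is a short piece of relativistic kinematics: the magnetic rigidity Bρ fixes the momentum per unit charge, the time of flight fixes the velocity, and so m/q = Bρ·e/(γv). A sketch with invented event numbers (the 70 m path and the rigidity are illustrative, not actual FRS settings):

```python
import math

C = 2.99792458e8          # speed of light, m/s
U = 1.66053906660e-27     # atomic mass unit, kg
E = 1.602176634e-19       # elementary charge, C

def mass_to_charge_u(brho_tm, path_m, tof_s):
    """m/q in atomic mass units per elementary charge, from the magnetic
    rigidity B*rho (T m) and the time of flight (s) over a known flight
    path (m). Relativistic: B*rho = gamma*m*v/q, so m/q = B*rho*e/(gamma*v)."""
    v = path_m / tof_s
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return brho_tm * E / (gamma * v * U)

# Invented event: 70 m flight path, an ion travelling at half light-speed,
# with a rigidity chosen to correspond to a 100 u, fully stripped 50+ ion
path = 70.0
tof = path / (0.5 * C)
brho = 3.588              # T m
print(f"m/q = {mass_to_charge_u(brho, path, tof):.2f} u per unit charge")
```

Combined with the proton number inferred from the energy-loss detectors, this m/q value pins down each transmitted nucleus event by event.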

Although the fragment separator is a chemically insensitive tool that allows researchers to produce and identify new, exotic nuclear species, there are still limitations on the nature and number of such nuclei that can be formed and studied. For example, a 2008 experiment at the GSI facility led by Thomas Faestermann from the Technical University in Munich required more than two weeks of intense beamtime to produce just a few hundred nuclei of tin-100, which is the heaviest nucleus with equal numbers of protons and neutrons (Z = N = 50) to have been identified so far. Even with such small amounts, though, interesting and new information about the internal structure of nature’s most exotic isotopes can be found by measuring the radiation emitted either when protons and neutrons inside the nucleus rearrange themselves or when the nucleus’s ground state radioactively decays.

Pushing back the boundaries

One important question for nuclear physicists involves determining the maximum and minimum numbers of neutrons or protons that nuclei can contain. These outer edges of nuclear existence are known in the jargon as drip-lines, because any unstable nucleus that lies beyond them will simply emit, or “drip out”, protons or neutrons. Any nucleus lying exactly on the neutron drip-line is stuffed so full of neutrons that it can take in no more of them, while any nucleus on the proton drip-line is so proton-rich that no more protons will bind to it.

One nucleus on the proton drip-line that has been studied recently by Adam Garnsworthy at the University of Surrey and colleagues from the RISING collaboration at GSI is technetium-86. Although it is not stable (it can decay by emitting a positron and a neutrino), technetium-86 survives for long enough to be transported in a “metastable” excited state through the FRS facility at GSI – a journey that takes about 100 ns – with the gamma rays it emits being measured using the RISING spectrometer positioned in the FRS’s final focal plane (figure 3). What makes technetium-86 particularly interesting is that, despite having an odd number of protons and an odd number of neutrons (43 of each), its internal energy levels are almost the same as those of 86Mo42, the nearest nucleus with an even number of protons (42) and an even number of neutrons (44). Such “even–even” nuclei usually have more binding energy than “odd–odd” nuclei of similar mass because the spin of every proton and neutron in the former can pair up nicely, whereas in the latter a “spare” proton and neutron are left over. Neighbouring odd–odd and even–even nuclei therefore usually have rather different energy levels. The reason technetium-86 (odd–odd) is nevertheless so similar in structure to 86Mo42 (even–even) is that the former – despite the spins of its unpaired nucleons pointing in opposite directions – has a strong, additional proton–neutron binding that appears to be significant only in nuclei with equal proton and neutron numbers.

Another noteworthy study using the RISING set-up at GSI, led by Andrea Jungclaus from the University of Madrid and Marek Pfützner from the University of Warsaw, involved the very neutron-rich nucleus cadmium-130. This nucleus has a magic number of neutrons (82) and is just two protons shy of a magic number (48 instead of 50). One thing that makes the structure of cadmium-130 interesting is that its signature gamma rays – observed from the decay of a metastable excited state identified in this nucleus by the RISING collaboration – are similar to those from cadmium-98. On the face of it, these two nuclei should be rather different beasts, as one has 50 neutrons and the other 82. Yet despite containing about 60% more neutrons, cadmium-130 has essentially the same internal structure as cadmium-98. The similarity arises because the 50 neutrons in cadmium-98 form a closed shell: in other words, like cadmium-130, it too has a magic number of neutrons.

Other novel nuclear systems examined recently include nuclei around lead-208 (208Pb82), which have been studied by Zsolt Podolyak from Surrey and colleagues in the RISING collaboration following the fragmentation of a uranium-238 beam. Lead-208 is the heaviest stable nucleus to have both a magic number of protons (82) and a magic number of neutrons (126). Podolyak and collaborators have made the first study of mercury-208 (208Hg80), which is similar to 208Pb82 except that it has two protons fewer than a full shell (80) and two neutrons more than a full shell (128). Although neutron-deficient lead and mercury nuclei with up to 30 fewer neutrons have been studied, mercury-208 is the first nucleus to have been examined in this neutron-rich region of the nuclear chart, and it provides the first information about the subtle interactions between individual proton holes and neutron particles in heavy nuclei.

The future of exotic nuclei

So where next for nuclear physics? The isotopes that make up the proton drip-line have been measured for most of the elements with odd numbers of protons as far as bismuth (Z = 83). But while the proton drip-line is well established, the neutron drip-line has yet to be reached in all but the lightest elements. In other words, we still do not know how many neutrons can be packed into the nuclei of atoms such as tin and lead.

One of the motivations for creating ever-heavier new elements is the elusive “island of stability”. This term refers to a long-standing prediction that the uncharted end of the periodic table may contain a group of unusually stable, superheavy elements in configurations associated with magic numbers in the heaviest nuclei. Nuclei with atomic numbers up to 118 have been inferred as rare surviving products of fusion-evaporation reactions at the JINR between beams of calcium-48 ions and heavy, radioactive targets made from chemically separated isotopes of transuranic elements, including plutonium-244, curium-245 and -248, and californium-249. These rare nuclei were identified by successive alpha-particle decays, which usually ended in a spontaneous fission event recorded in the same pixel of a segmented charged-particle detector. But while the island of stability is thought to begin with nuclei containing about 114 protons – and possibly extend as far as nuclei with 126 protons – the problem is that these nuclei are likely to have about 184 neutrons, more than can currently be achieved using fusion-evaporation reactions with stable beams. It is likely to be many years before we can reach the centre of the island of superheavy stable nuclei.

The limits of the nuclear chart, both in proton number and neutron number, have proved a fertile research area over the past 10 years. Looking ahead to the next decade, it is possible that experiments using very intense beams, cooled radioactive targets and extremely efficient detection systems will push the periodic table up to Z = 120 and perhaps even beyond. The limits of the nuclear chart on the neutron-deficient side are also being studied widely, and the development of new facilities such as the Facility for Antiproton and Ion Research (FAIR) at GSI, the Radioactive Ion Beam Facility at RIKEN in Japan and the Facility for Rare Isotope Beams at Michigan State University in the US (Physics World October pp12–13, print edition only) should allow researchers to push towards the most neutron-rich systems. It has been speculated that such systems may have very different physical properties from normal nuclear matter, including outer “skins” of neutrons. These nuclei are crucial pieces in the jigsaw of the production of the stable elements in nature but, for now at least, the most neutron-rich systems remain tantalizingly out of reach.

At a Glance: Exotic nuclei

  • One major goal of nuclear physics is to produce and identify new elements, thereby redefining the limits of the periodic table
  • Similar challenges exist when producing “exotic” nuclei – variations of existing elements with unusually high or low ratios of protons to neutrons
  • Heavy exotic nuclei can be produced either by fusing smaller nuclei together or by chipping them off heavier nuclei
  • Attempts are under way to confirm predictions of more stable and longer-lived “superheavy” elements with “magic” numbers of neutrons and protons

More about: Exotic nuclei

N Al-Dahan et al. 2009 Nuclear structure southeast of 208Pb: isomeric states in 208Hg and 209Tl Phys. Rev. C 80 061302(R)
A B Garnsworthy et al. 2008 Neutron–proton pairing competition in N = Z nuclei: metastable state decays in the proton dripline nuclei 82Nb and 86Tc Phys. Lett. B 660 326
Yu Ts Oganessian et al. 2011 Eleven new heaviest isotopes of elements Z = 105 to Z = 117 identified among the products of the 249Bk + 48Ca reactions Phys. Rev. C 83 054315
T Sumikama et al. 2011 Structural evolution in the neutron-rich nuclei 106Zr and 108Zr Phys. Rev. Lett. 106 202501

Undulator brings ILC closer to reality

A full-scale undulator module that could produce the intense positron beams needed for next-generation particle colliders has been unveiled by an international team of physicists. Similar modules could be used in future projects such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) – both of which aim to collide high-energy positrons with high-energy electrons.

Future collider projects such as the ILC and CLIC will need stable positron sources that are about 60 times more intense than those available today. In addition, these sources must produce positrons that are spin aligned and circularly polarized. Now a team led by Jim Clarke of the UK’s Daresbury Laboratory has developed an undulator that the researchers say fits the bill. The work involved the evaluation of many different designs – both by doing computer simulations and by building small-scale prototypes.

An undulator contains magnets that force an electron beam to follow a specific trajectory, causing the electrons to emit gamma-ray photons. These gamma rays are then used to produce an intense positron beam. “In our design we create a helical magnetic field by using a double helix of superconducting wires, with current flowing in opposite directions in each helix. An electron moves in a helical trajectory through the helical magnetic field. If viewed end-on, the helical trajectory appears as if the electron is describing a circle, hence the emitted radiation is circularly polarized,” says one of the team members, Duncan Scott.
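The energy of those gamma rays can be estimated from the standard undulator equation for the on-axis first harmonic. The Python sketch below uses representative parameters close to published ILC baseline values – the 150 GeV beam energy, 11.5 mm period and deflection parameter K = 0.92 are assumptions made here for illustration, not figures from this article:

```python
M_E = 0.511e6       # electron rest energy (eV)
HC = 1.2398e-6      # Planck constant times speed of light (eV m)

def first_harmonic_ev(e_beam_ev, period_m, K):
    """On-axis first-harmonic photon energy of a helical undulator:
    wavelength = period * (1 + K^2) / (2 * gamma^2),
    photon energy = h * c / wavelength."""
    gamma = e_beam_ev / M_E
    wavelength = period_m * (1 + K * K) / (2 * gamma * gamma)
    return HC / wavelength

# Representative (assumed) parameters: 150 GeV electrons through an
# 11.5 mm-period helical undulator with K = 0.92.
e_gamma = first_harmonic_ev(150e9, 11.5e-3, 0.92)
print(round(e_gamma / 1e6, 1), "MeV")   # gamma rays of order 10 MeV
```

Photons of this energy are comfortably above the threshold for pair production in a metal target, which is what converts the gamma-ray beam into positrons downstream.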

Minimizing heat leaks

Scott explains that one of the main ways of achieving the high efficiency involved locating and minimizing heat leakage in the system. “We designed the system to operate at 80% of the maximum load, but after extensive tests and redesigning some parts we managed to get that down to 70%,” says Scott.

In the ILC, the electron beam that passes through the undulator will then go on to be used in the main electron–positron collisions – so it is crucial that the electron beam is not disrupted by the undulator. While the researchers have not yet tested their device with an actual electron beam, Scott says that a large number of simulations were carried out to ensure that the final beam is stable. “There was a genuine concern that steering this high-quality beam 200 m through a vessel 6 mm in diameter would disrupt the electron beam quality too much,” says Scott. He compares this disruption – which is caused by the electric field of the beam interacting with the vacuum vessel and then feeding back on the beam – to the wake left behind a boat moving through a canal that is then reflected off the canal walls and interacts with the original wake.

“Our work showed that we had to use an extremely smooth copper pipe, with a surface roughness of about 100 nm. In addition, the alignment of each of the 4 m modules [there are 60 in total] has to be accurate to within 300 µm. This is a challenge, but we have a suitably smooth vessel and we believe we can meet the alignment requirements,” explains Scott.

With data from their simulations and studies, the researchers report that the gamma rays generated by their device will be capable of producing about 10¹⁰ positrons per electron pulse – the ILC specification – even when the current in the superconducting wires is only 70% of the maximum. This means that the ILC will require about 120 of these undulators lined up in series to generate sufficient gamma rays.

Major hurdle overcome

Scott also points out that the researchers have now overcome two major hurdles: producing “a fully working proof-of-principle device that shows we can generate the required number of photons”, and convincing the particle-physics community that the device “won’t disrupt the beam quality, short of testing it with the real beam”.

In the coming months the team will be looking at passing a real electron beam through the device magnet and measuring the radiation output. The researchers are also currently in discussions with the Argonne National Laboratory in the USA about sending the undulator over to be installed on the electron test beam there. Although it will still be many years before the device is used in the ILC or CLIC, Scott feels that the best thing that physicists designing such machines can do in the meantime is to “clearly demonstrate we know how to build key components such as this undulator”.

Scott also points out that while the undulator would be essential for the ILC and CLIC, similar superconducting undulators are also used in synchrotron light sources and free-electron lasers.

The work is described in Physical Review Letters.

Mikhail Lomonosov: the greatest scientist you’ve never heard of

Painting of Mikhail Lomonosov

Mikhail Vasilyevich Lomonosov (1711–1765) was one of the most far-sighted, polymathic and colourful scientists who ever lived. Far-sighted, because he pioneered the use of quantitative research methods. Polymathic, because though he died at just 53 he contributed to physics, chemistry, astronomy, metallurgy, mining, poetry, literature, mosaics, glassblowing, meteorology, electricity, grammar and history – and built a chemical laboratory, glass factory and flying machine. Colourful, because of irreverent antics and a hot temper. So why is this Russian genius barely known in the West?

Lomonosov was the son of a peasant-turned-fisherman from the Archangel province of north-west Russia. His insatiable love of knowledge, and family conflicts, led him in 1730 to borrow a few rubles and depart for Moscow on foot. It was a time when Peter the Great’s reforms were still being bitterly resisted by entrenched clergy and nobility. Indeed, to enter the Slavic Greek Latin Academy he had to pretend to be a son of nobility. When this deception was exposed in 1734, he was nearly expelled.

In 1736 Lomonosov began four years of study in Germany, where he learned the corpuscular theory of light and the need to treat it within a mathematical framework. He began writing poetry, but his revelry and carousing often landed him in trouble. On one trip a recruiter for the Prussian hussars befriended him, got him drunk and convinced him to enlist. The next morning, Lomonosov awoke in uniform in a heavily guarded fortress. It took him days to devise an escape, by climbing two palisades, swimming two moats and eluding cavalry in hot pursuit.

Pioneering polymath

Lomonosov returned to Russia in 1741, joining the Russian Academy of Sciences. Its founding, by Peter the Great in 1724, essentially marked the start of science in Russia – but it was still staffed by often incompetent foreigners (Bernoulli and Euler had left). Rude and mocking to inept colleagues, Lomonosov landed himself under house arrest following one violent episode. He was released after delivering odes to the Empress and a public apology, and became the Academy’s first Russian academician in 1745.

At the Academy, Lomonosov undertook a breathtaking range of experiments. He transformed Russian chemistry from art into science, introducing quantitative methods and laboratory instruction. He built Russia’s first chemical laboratory, where he conducted about 4000 tests and experiments. He used corpuscular theory to explain the elasticity of air, considered heat a form of rotational motion, introduced the idea of absolute zero and a version of conservation of matter and energy. He developed self-recording thermometers and designed a model helicopter to take them to the upper atmosphere. (A full-scale version was never built.) He designed and built a glass factory, and created a huge (6.4 × 4.8 m) mosaic, The Battle of Poltava. He also founded Moscow State University, although Pushkin called Lomonosov himself “our first university”.

But Lomonosov could not escape embroiling himself in fierce attacks and counterattacks with opponents. These fights reflected in academic circles the ongoing conflict that Peter the Great had set in motion between religion and science.

Many episodes in his life were dramatic. He and his co-worker Georg Richmann each built “thunder machines” to measure electricity during thunderstorms. During one storm on 26 July 1753, Lomonosov’s frightened family begged him to leave the lab. He brushed them off, but was interrupted by Richmann’s servant, asking him to come quickly. Lomonosov rushed to Richmann’s house, which had its own machine, to find his colleague dead – killed by ball lightning, of which Lomonosov then offered the first theoretical model.

Lack of recognition

Why Lomonosov is not better known in the West has something to do with the lack of good material about him in languages other than Russian. His biography – Russia’s Lomonosov by the chemist Boris Menshutkin (the English translation appeared in 1952) – is dry and focuses on Lomonosov’s chemical contributions. The most sensitive account in English, written by the physicist Pyotr Kapitsa in 1966 (Soviet Physics Uspekhi 8 720), is just nine pages long.

Furthermore, polymaths tend to be underappreciated both because their ambitions exceed their ability to complete projects, and because we cannot believe that people are able to overflow traditional disciplinary boundaries. My favourite illustration of this is by French historian of chemistry Ferdinand Hoefer, who wrote that “among the Russian chemists who have become known as chemists, we mention Mikhail Lomonosov, who mustn’t be confused with the poet of this name”.

Yet another factor is that some claims made for Lomonosov are hyperbolic, smacking of Cold War assertions of Russian superiority. Lomonosov also made mistakes, notably denying the proportionality of weight and mass. Whether Lomonosov was first to observe the atmosphere of Venus is an interesting controversy bound to be revived next year with the 2012 transit of Venus.

The critical point

Lomonosov worked in almost complete isolation from other scientists or even those who appreciated science. For instance, he had to beg a sponsor to allow him to spend his leisure time conducting chemistry and physics experiments the way others would spend theirs on billiards. Kapitsa also realized that Lomonosov’s genius is partly obscured because he lived in a time and place where the lack of a scientific environment made it easy – and possibly inevitable – for even a genius to go astray. Great science, like great art, requires an educated and demanding audience. This was not present in Russia at the time and one of Lomonosov’s principal contributions was to begin a coupling between science and culture.

We need to remember Lomonosov. It has been centuries since scientists had to struggle for cultural recognition of the importance of science and its beneficial impact on humanity – but that luxury is now being eroded. We may have to take up Lomonosov’s struggle once again.

Copyright © 2026 by IOP Publishing Ltd and individual contributors