
From blue fogs to Brexit – the April 2017 issue of Physics World is now out

By Matin Durrani

“The secret of the blue fog” might sound like a Tintin book, but it’s all about a strange form of liquid crystal that’s the cover story in the April 2017 issue of Physics World magazine, which is now live in the Physics World app for mobile and desktop.

First observed in the late 1800s, these materials – which turn blue when cooled – have only recently had their structure uncovered. As Oliver Henrich and Davide Marenduzzo explain, blue liquid crystals could be used for new kinds of display devices.

Elsewhere in the issue, mathematical physicist Jason Lotay explains his work in seven extra dimensions, while science writer Benjamin Skuse examines the challenges facing respected physicists whose theories lie outside the mainstream.

Don’t miss either our latest look at Donald Trump’s scientific shenanigans, including an interview with Rush Holt – the physicist-turned-politician who’s now head of the American Association for the Advancement of Science.

Remember that if you are a member of the Institute of Physics, you can read Physics World magazine every month via our digital apps for iOS, Android and desktop.


Flash Physics: Graphene ink bags photography prize, small-scale structures in Alfvén waves, two-photon blockade

Graphene ink bags UK photography prize

An image of swirling graphene ink in alcohol has won the Engineering and Physical Sciences Research Council’s 2017 Science Photography Competition. Taken by James Macleod, a technician at the University of Cambridge’s Graphene Centre, the image shows powdered graphite in alcohol that can be used to produce a conductive ink for printing electrical circuits onto paper. “We are working to create conductive inks for printing flexible electronics and are currently focused on optimising our recipe for use in different printing methods and for printing onto different surfaces,” says Macleod. “This was the first time we had used alcohol to create our ink and I was struck by how mesmerising it looked while mixing.” The competition, which is now in its fourth year, received more than 100 entries and was open to researchers who currently receive grants from the UK funding council.

NASA mission reshapes understanding of plasma waves

NASA has observed unexpected, small-scale complexities in kinetic Alfvén waves (KAWs). KAWs were predicted more than 50 years ago as the means of transferring energy through plasmas. As a KAW propagates, electrons travelling at a certain speed get trapped in weak spots of the wave’s magnetic field. On either side of such points the magnetic field is stronger, so the electrons are contained, creating pockets of higher electron density. Meanwhile, faster and slower electrons pass energy back and forth with the wave. Using NASA’s Magnetospheric Multiscale mission (MMS), scientists have been able to observe the waves at the relatively small scales where the energy transfer happens. The mission comprises four spacecraft flying in a compact pyramid formation near Earth. As they are just 6 km apart – the closest arrangement achieved to date – they fit between two KAW peaks, and the 3D arrangement allows scientists to measure details such as wave direction and speed. “We’re seeing a more detailed picture of Alfvén waves than anyone’s been able to get before,” says team member Dan Gershman of NASA’s Goddard Space Flight Center in the US. The new results, published in Nature Communications, are the most comprehensive measurements of KAWs to date and show a higher rate of trapping than expected. The researchers hope the findings may benefit nuclear-fusion technology and help to improve energy efficiency.
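The trapping mechanism described here is essentially the magnetic-mirror effect. As a rough textbook sketch of the physics (not taken from the MMS paper itself): a gyrating electron carries a magnetic moment that acts as an adiabatic invariant, and the field gradient exerts a force pushing the electron away from strong-field regions.

```latex
% Magnetic moment of a gyrating electron (adiabatic invariant):
\mu = \frac{m v_\perp^2}{2B}
% Parallel force in a non-uniform field:
F_\parallel = -\mu \, \nabla_\parallel B
% Electrons with small parallel velocity are therefore repelled by the
% stronger field on either side of a weak spot and accumulate there,
% forming the pockets of higher electron density described above.
```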

Two-photon blockade seen in single-atom system

An atomic system that produces bunches of two or fewer photons has been unveiled by physicists in Germany. Based on a two-photon blockade, the system could be useful for creating quantum-optical devices that use multiple photons. A single-photon blockade occurs when an atomic system absorbs one photon and in doing so becomes unable to absorb further light. Unlike most other light sources, such systems emit photons one-by-one in a steady stream – making them potentially very useful for quantum-optics experiments. Physicists would like to extend this concept to create light sources that emit at most two photons at a time by creating an atomic system that exhibits a two-photon blockade. Now, Gerhard Rempe and colleagues at the Max Planck Institute for Quantum Optics in Garching have created such a system. It comprises a single rubidium-87 atom that is strongly coupled to an optical cavity. The atom is trapped in the cavity using laser light, which is also used to put the atom into a quantum state that makes it a strong single-photon blockade. When the cavity and atom are strongly coupled, light emitted from the system exhibits strong three-photon “antibunching” – meaning that three photons emitted from the system are more equally spaced in time than photons in a conventional laser beam. At the same time, pairs of photons exhibit bunching – meaning that they are less equally spaced than photons in a laser beam. Writing in Physical Review Letters, Rempe and colleagues say that these two observations are evidence that a two-photon blockade has been achieved. The team also points out that it should be possible to create a three-photon blockade in the system, and says that it could be adapted to produce bunches containing a specific number of photons.
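In standard quantum-optics language, bunching and antibunching are quantified by normalized photon-correlation functions at zero time delay. The following are textbook definitions rather than expressions taken from the paper:

```latex
% n-photon correlation function at zero delay:
g^{(n)}(0) = \frac{\langle \hat{a}^{\dagger\,n} \hat{a}^{\,n} \rangle}
                  {\langle \hat{a}^{\dagger} \hat{a} \rangle^{n}}
% A coherent laser beam has g^{(n)}(0) = 1 for all n. The signature of a
% two-photon blockade reported here is pair bunching together with
% three-photon antibunching:
g^{(2)}(0) > 1, \qquad g^{(3)}(0) < 1
```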


  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read our extensive news story on DNA analysis on a chip.

Keeping it safe forever

Photo of Charles McCombie

How did you get interested in nuclear waste management?

When I finished my doctorate in materials science and physics at the University of Bristol, UK, I deliberately wanted to move into a more socially involved field, and – I’m smiling here – I was a convinced environmentalist (and I still think I am). So I chose to go into nuclear energy after graduating. I worked for eight years on the reactors that young engineering students today still think of as “future generations”, such as high-temperature reactors, liquid metal-cooled fast reactors, gas-cooled fast reactors and pebble-bed reactors. This was the 1970s, a time when there was increasing opposition to nuclear energy, and I observed that part of the opposition stemmed from perceived issues in the radioactive waste management area. So I decided to move out of “front-end” reactor physics, because my perception was that the “back end”, the waste management, was being neglected and that this was costing a lot of public support for nuclear energy.

What did you do in your first job on the nuclear-waste side?

My reactor days were spent with national research institutions in the UK and Switzerland, and my first job in the “back end” was at the Swiss National Cooperative for the Disposal of Radioactive Waste (Nagra). I started as a safety assessment expert, because I knew about computer programming and there’s a lot of computer modelling involved in assessing the performance of nuclear repositories. After a while, I became the scientific and technical director, and that was exciting because I think of myself as a generalist, and the radioactive waste management field is incredibly interdisciplinary. By the end of my time at Nagra, I had been working with civil engineers, materials scientists, geophysicists, hydrogeologists and seismologists. I was also involved in public communication.

How much of your work now is scientific research and technical problem-solving, versus communication and policy?

With time I have come to agree that the biggest challenges in radioactive waste management are more social than technical and scientific. We have disposal technologies that we are convinced will be safe forever, but we don’t have enough confidence among the public and politicians to be able to implement these technologies. Radioactive materials produced in a nuclear reactor are very hazardous and, more importantly, they are hazardous for a very, very long time. So you have some materials you want to isolate from any potential human contact for 100,000 years or more. People had never thought in these timescales before, and the scientific community has only recently started to think about the consequences of technologies extending to these timescales. The good news is that we have very little of this material: a major nuclear power plant produces 1000 MW of electricity (generating revenues of more than $1m a day) and in a year it will produce only 20 tonnes of waste material. Furthermore, although 100,000 years is long on human timescales, it’s easy to find geologic formations that look the same today as they looked 100 million years ago. This presents a totally different time perspective.
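To put the 100,000-year figure in context, radioactive decay follows a simple halving law. As an illustrative example (using plutonium-239, whose half-life is about 24,100 years – a choice made here for illustration, not one McCombie cites):

```latex
% Fraction of a radionuclide remaining after time t:
\frac{N(t)}{N_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}
% For plutonium-239, t_{1/2} \approx 24\,100 yr, so after 100,000 years:
\left(\tfrac{1}{2}\right)^{100\,000/24\,100} \approx 0.06
% i.e. about 6% of the original inventory still remains, which is why
% isolation periods of 100,000 years or more are discussed.
```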

How are waste materials stored?

First, you don’t put the waste in any soluble or liquid form down in a geological repository. You make a solid waste matrix that is extremely resistant to any kind of corrosion or dissolution. The spent fuel is in the form of small ceramic pellets, and these have been in the reactor at hundreds of degrees for several years. They are very tough. You can dispose of these directly or else reprocess them to extract the radioactive nuclides, which are then mixed in a melter with glass so that you end up with cylinders of black glass with the radioactive materials embedded in them. According to the best predictions that scientists can make, this glass takes more than 10,000 years to dissolve. You encase this glass in a stainless-steel sheath and put it in a copper container. Underground, where there is very little oxygen, copper acts like a noble metal, so these containers can last for more than 10,000 years. But you don’t stop there. The container goes into the repository at a depth of some hundreds of metres in a suitable host rock such as granite, clay or salt. There it is surrounded by another barrier – often a special water-tight material called bentonite clay – that separates it from the host rock. So there’s a rock layer, a clay layer, a metal canister and, inside that, the waste matrix.

Photo of radioactive material being mixed with molten glass for storage

Where can nuclear waste management technology improve?

The most physics-oriented option is transmutation. If you irradiate long-lived radionuclides you can break them down into shorter-lived radionuclides. Many people would like to see a system where the radionuclide mix that comes out of a nuclear reactor would be treated. So you would separate it out into all its components. The useful ones, such as the much-maligned plutonium, can go straight into a reactor as fuel, but all of the other ones, almost without exception, can be transmuted if you strongly irradiate them. You can do this in a reactor, but it’s not easy in the light-water reactors that are prevalent today. Various organizations, including the Bill & Melinda Gates Foundation, are supporting other reactor types, such as fast reactors and small modular reactors, which can transmute much more of the waste than is currently done. You can also transmute them in a particle accelerator. Transmutation – either reactor driven or accelerator driven – is often pointed to as something that would change the face of radioactive waste management in the far future.
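A standard textbook example of such a reaction (not one McCombie cites specifically) is the transmutation of technetium-99, a long-lived fission product, by neutron capture:

```latex
% Neutron capture turns the long-lived isotope into a short-lived one,
% which quickly beta-decays to a stable nucleus:
^{99}\mathrm{Tc}\,(t_{1/2} \approx 211\,000~\mathrm{yr}) + n
  \;\longrightarrow\; ^{100}\mathrm{Tc}\,(t_{1/2} \approx 16~\mathrm{s})
  \;\xrightarrow{\beta^-}\; ^{100}\mathrm{Ru}\,(\mathrm{stable})
```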

How far away are we from this?

Scientifically, no distance. But to upscale the technology so that it can be done economically and safely is quite far off, if it ever happens. It would cost loads of money, and any further manipulation or treatment of these highly radioactive materials would mean that process workers are exposed to some level of radiation. Then you have an ethical question: how much extra radiation should I be prepared to give a process worker today to save tiny potential radiation doses to people thousands of years in the future?

The other challenge that bothers me is in the remediation and cleanup area. When sites get radioactively contaminated, you can have large areas that will never be safe for people to live on unless you do something clever. There are lots of interesting technologies – hydrological, geochemical and physical – being tried to clean up contaminated sites or sites where there have been accidents. The most recent striking example is at Fukushima in Japan, which, following the 2011 earthquake, tsunami and reactor accident, will be a mess for decades to come. It’s difficult to clean up a reactor if you don’t know what’s inside it, and it’s difficult to approach because there’s molten fuel in there. One recent development that I think is really smart is to use muon technology as a remote-sensing tool: if you can measure the flux of muons, you can use them like an X-ray to see an image of the melted fuel (corium) and try to get a sense of the scale of the problem.

What’s next for you?

I have been involved for many years in promoting the concept that not every country has to implement its own geological repository. We normally use two words when discussing waste disposal challenges: “safe”, meaning radiological and environmental safety, and “secure”, meaning safety against the deliberate misuse of the material. Despite setbacks, nuclear power is expanding globally. If you think of a potential future where, instead of having 10–11 technologically advanced countries with advanced nuclear programmes, you have 30–40 newcomer countries, each with one or two nuclear power plants, then this hazardous material will be sitting around the world in many locations. One of the ways to keep that material safer and more secure is to concentrate it in fewer, very well-run facilities. A large part of my life is devoted to developing this concept of multinational disposal.

Chernobyl’s hidden legacy

In June 1980 a doctor with the Oak Ridge Associated Universities in the US wrote a letter to a colleague at the Knolls Atomic Power Laboratory in upstate New York. The pair were corresponding about a forthcoming study of employee health at the Knolls reactor, and the doctor, C C Lushbaugh, wrote that he expected “little ‘useful’ knowledge” from this study “because radiation doses have been so low”. Even so, he agreed that the study had to be done because “both the workers and their management need to be assured that a career involving exposures to low levels of nuclear radiation is not hazardous to one’s health”. The results of such a study, he surmised, would help to counter anti-nuclear propaganda and resolve workers’ claims. However, they could also be a liability. If a competing union or regulatory agency got hold of the employees’ health data, Lushbaugh fretted, it could be weaponized. “I believe,” he continued, “that a study designed to show the transgressions of management will usually succeed.”

Lushbaugh’s dilemma is characteristic of research on the human health effects of exposure to low doses of radiation. He assumed he knew the results – good or bad – before the study began, because those results depended on how the study was designed. The field was so politicized, in other words, that scientists were using health studies as polemical tools and, consequently, asking few open-ended scientific questions.

A few years after Lushbaugh posted this letter, reactor number four at the Chernobyl nuclear power plant blew up, killing 31 workers and firefighters and spreading radioactive material across a broad area of what was then the Soviet Union (now Ukraine and Belarus) and beyond. The accident also exploded the field of radiation medicine and, for a while, promised to rejuvenate it. In August 1986, months after the accident, the chief of the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Giovanni Silini, advocated an enduring epidemiological investigation similar to research on atomic-bomb survivors in Japan [1]. Many other scientists concurred, hoping that Chernobyl could clear up ongoing controversies and uncertainties surrounding low-dose exposures.

It never happened. No long-term epidemiological study took place. That’s not to say there isn’t any information. A few summers ago I went to the Ukrainian national archives in the dusty, bustling outskirts of Kiev and asked the archivists for files on Chernobyl from Soviet Ukraine’s Ministry of Health. They laughed, telling me Chernobyl was a banned topic in the Soviet Union. “You won’t find anything,” they said.

They were wrong. I found dozens of collections labelled “The medical effects of the Chernobyl disaster”. I started reading and have not yet been able to stop.

The aftermath

In the years between 1986 and 1991, doctors and sanitation officials wrote to the Ministry of Health in Kiev with alarming accounts of widespread, chronic illness among the hundreds of thousands of children and adults living in contaminated territories. They recorded increases in tonsillitis, upper respiratory disease and disorders of the digestive tract and immune system. Between 1985 and 1988, cases of anaemia doubled. Physicians from almost every region in the zone of contamination reported a leap in the number of reproductive problems, including miscarriages, stillbirths and birth malformations. Nervous-system disorders surged. So did diseases of the circulatory system. In 1988, in the heavily contaminated Polesie region of northern Ukraine, 80% of children examined had upper respiratory diseases and 28% had endocrine problems. In Ivankiv, where many cleanup workers lived, 92% of all children examined had a chronic illness.

I also went to Minsk to check the archives in Belarus. There, I read reports that sounded eerily similar to the Ukrainian documents. These reports were classified “for office use only”, meaning that at the time, scientists were not free to exchange this information across districts or republics of the Soviet Union. Even so, independently, they were reporting similar, bad news. The problem grew so dire in Belarus that in 1990 officials declared the entire republic, which received more than 60% of Chernobyl fallout, a “zone of national ecological disaster”.

The Ukrainian and Belarusian reports, hundreds of them, read like a dirge from a post-catastrophic world. Doctors wrote from clinics in Kharkiv, far outside the contaminated zone, and described similar health problems among evacuees who had settled there. Physicians sent telegrams from Donetsk, where they were treating a complex of illnesses among young miners who had burrowed under the smouldering reactor in the days after the accident. Medical workers sent in to examine people in contaminated regions also fell ill.

In response, the Union of Soviet Radiologists penned a petition to alert Soviet leaders to the ongoing public-health disaster. The president of the Belarusian Academy of Science sent a detailed summary of scientists’ findings to Minsk and Moscow. Even a KGB general, Mikhailo Zakharash, sounded the alarm. Zakharash, who was also a medical doctor, conducted a study of 2000 cleanup workers and their family members in a specially equipped KGB clinic in Kiev. In 1990, summing up four years of medical investigation, he wrote, “We have shown that long term, internal exposures of low doses of radiation to a practically healthy individual leads to a decline of his immune system and to a whole series of pathological illnesses.”

Chronic radiation

These findings track with what Soviet doctors had long described as chronic radiation syndrome, a complex of symptoms derived from chronic exposure to low doses of radiation. Researchers working on Chernobyl discerned a pattern of disease that followed the pathways of radioactive isotopes entering the body – paths that began in the mouth and headed towards the gastrointestinal tract, or started in the lungs and followed the blood into the circulatory system. Radioactive iodine sped to thyroids, they hypothesized, causing endocrinal and hormonal damage.

Critics, mostly in Moscow and the ministries of health, acknowledged the growth in health problems but denied a connection to Chernobyl. A E Romanenko, the Ukrainian Minister of Health, is credited with inventing the word “radiophobia” to describe a public fear of radiation that induced stress-related illness. He and his colleagues also pointed to a screening effect from mass medical monitoring. Local doctors, they said, were projecting the diagnosis of chronic radiation syndrome onto their patients, blaming it for any illness found after Chernobyl.

There are some problems with these arguments. From 1986 to 1989, Chernobyl was a censored topic in the Soviet Union. Doctors could not exchange information about health problems, nor did they have access to maps of radioactive contamination. They only learned to be “radiophobic” by judging the bodies they examined. In the same years, doctors were also fleeing contaminated areas en masse, leaving hospitals and clinics in those regions staffed at 60%. As physicians left, so too did the chance for diagnosis, meaning that under-reporting of illnesses was more likely than a screening effect. Moreover, doctors from the northern regions of the Rivne province, which were at first judged clean and only in late 1989 designated contaminated, reported the same growth of illness as areas originally deemed “control zones” – regions with counts of more than 5 curies per square kilometre. The president of the Belarusian Academy of Science, V P Platonov, pointed to a vacuum of knowledge: “Until this time, no population has ever lived with continual internal and external exposures of this size.” Risk assessments assuring safe levels in the contaminated zones were extrapolated from the Japanese Atomic Bomb Survivor Lifespan Study, but that study began only in 1950, five years after exposure. “Much is uncertain,” Platonov continued, “about fundamental aspects of the effects of low doses of radiation on human organs” [2].

What happened to the 1980s Chernobyl health studies, which might have led to a renaissance in the field of radioecology? Essentially, they were overlooked. To figure out why, I went to the headquarters of the World Health Organization (WHO) in Geneva, to the UN’s archives in New York and the archives of UNSCEAR in Vienna. There, I found evidence of a conflict between branches of the WHO and the International Atomic Energy Agency (IAEA) over which organization would control the studies of Chernobyl health effects.

By 1989 angry crowds were questioning the Soviet Union’s handling of Chernobyl, and Soviet leaders asked foreign experts for help in assessing the disaster’s health impacts. The IAEA agreed, and Fred Mettler, a radiologist and American delegate to UNSCEAR, was appointed to head the medical section of an IAEA team. In 1990, as he and his team examined 1726 people in six contaminated zones and six control zones, Soviet doctors gave him 20 slides from children diagnosed with thyroid cancer. Thyroid cancer is very rare in children: before the Chernobyl accident, doctors saw eight or nine cases per year in all of Ukraine. Twenty cases in just three provinces was hard to believe. Dubious, Mettler brought the slides to the US to have them verified. They indeed indicated thyroid cancer.

Cancer cluster

Mettler mentioned this major medical finding in the 1991 International Chernobyl Project (ICP) technical report, but strangely, he also stated that there was “no clear pathologically documented evidence of an increase in thyroid cancer” [3]. The report concluded that there were no detectable Chernobyl health effects and only a probable chance of childhood thyroid cancers in the future. In a 1992 publication on thyroid nodules in the Chernobyl territories, Mettler failed to mention the 20 verified cases at all [4].

How could such a lapse occur? I found a confidential 1990 UN memo that seems relevant, particularly in light of the study-design problem set out in Lushbaugh’s letter a decade earlier. The memo suggests that the IAEA was conducting the ICP study to “allay the fears of the public” in service of “its own institutional interest for the promotion of peaceful uses of nuclear energy” [5]. The experiences of Keith Baverstock, then head of the radiation protection programme in the WHO’s European office, likewise reveal an institutional aversion to bad news. In July 1992 Baverstock planned to go to Minsk to examine childhood thyroid cases in Belarus, where doctors reported an astounding 102 new cases. At the last minute, officials from the WHO and the Commission of European Communities inexplicably pulled out of the mission. In an interview with me, Baverstock, an expert on the effects of ionizing radiation, said that a WHO official told him he could get fired if he went to Minsk.

He went anyway. With Belarusian scientists, he published news of the thyroid cancer epidemic in Nature. A top IAEA official complained angrily to the WHO, and the two agencies put pressure on Baverstock to retract his article. He refused, and a barrage of letters followed in Nature disputing the connection between the cancers and Chernobyl exposures [6]. Leading scientists from the US Department of Energy, the National Cancer Institute, Japan’s Radiation Effects Research Foundation and the IAEA argued that cancers were found because of increased surveillance. They called for a suspension of judgment and for further study. Repetitive and dismissive, their letters read like an orchestrated pile-on.

We now know that these global leaders in radiology were wrong. The number of cases rose into the thousands, too high to dismiss, and in 1996 the WHO and the IAEA finally admitted that skyrocketing rates of childhood thyroid cancer were most likely due to Chernobyl exposures. Today UNSCEAR maintains that the health consequences of the Chernobyl accident are limited to 31 direct fatalities – plus 6000 cases of childhood thyroid cancer [7].

Lingering questions

The question is – so what? Despite the 1991 ICP report’s erroneous claim of no health effects, UN agencies eventually recognized the cancer epidemic. What difference did a few years make? A great deal, it turns out. The ICP report also recommended that resettlements from the most contaminated regions should cease [8]. Consequently, the planned resettlement of 200,000 people living in areas contaminated with high levels of radiation (between 15 and 40 curies per square kilometre) slowed tremendously. The UN General Assembly had also been waiting for the report before raising funds for Chernobyl relief. The $646m budget (equivalent to about $1.1bn today) included medical aid, resettlement funds and a large-scale epidemiological study of Chernobyl health effects. The assertion by important UN agencies that there were no detectable health effects deflated that effort. Before the report, Japan had given $20m to the WHO, but afterwards it gave no more and complained about the funds being wasted. A few other countries gave sums totalling less than $1m, while the US and the European Community begged off entirely, citing the ICP report as a “factor in their reluctance to pledge” [9].

In subsequent years, IAEA and UNSCEAR officials cited the ICP report when discouraging Chernobyl-related health projects. In 1993 UNSCEAR scientific secretary Burton Bennett recommended that UN agencies suspend all programmes aimed at Chernobyl relief because they were unnecessary. He and IAEA administrator Abel Gonzalez, who led the ICP assessment, widely shared their views among UN agencies about “misinformation surrounding the Chernobyl accident” [10]. When the WHO nonetheless started a pilot study on Chernobyl health effects, Gonzalez wrote that he could not imagine what the WHO “expects to be able to detect for the level of doses in question”. Irked that WHO officials would examine any effects but psychological ones, he charged, “The World Health Organization seems to ignore, expressly or tacitly, the conclusions and recommendations of the International Chernobyl Project” [11].

The consequences of this moment of deviant science continue 30 years later. Today we know little about the non-cancerous effects that Soviet scientists working in contaminated zones reported in the late 1980s, and which they attributed to internal and external exposures to ionizing radiation. Are these effects as real as the childhood thyroid cancers proved to be? The Soviet post-Chernobyl medical records suggest that it is time to ask a new set of questions about long-term, low-dose exposures.

References

  1. Giovanni Silini 1986 “Concerning proposed draft for long-term Chernobyl studies” Correspondence Files, UNSCEAR Archive
  2. V P Platonov and E F Konoplia 1989 “Informatsiia ob osnovynkh rezul’tatakh nauchnykh rabot, sviazannykh s likvidatsiei posledstvii avarii na ChAES” RGAE 4372/67/9743: 490
  3. International Chernobyl Project, Proceedings of an International Conference (Vienna: IAEA 1991): 47. Mettler also admitted that the slides checked out at the Vienna conference convened to discuss the report. For a discussion of thyroid cancer, see The International Chernobyl Project, Technical Report, Assessment of Radiological Consequences and Evaluation of Protective Measures (Vienna: IAEA 1991): 388
  4. Fred Mettler et al. 1992 “Thyroid nodules in the population around Chernobyl” Journal of the American Medical Association 268 616
  5. From Enrique ter Horst, Asst Sec Gen, ODG/DIEC to Virendra Daya, Chef de Cabinet, EOSG, 16 April 1990, United Nations Archive, New York S-1046 box 14, file 4, acc. 2001/0001
  6. Baverstock et al. 1992 “Thyroid cancer after Chernobyl” Nature 359 21; Kazakov et al. 1992 “Thyroid cancer after Chernobyl” Nature 359 21; I Shigematsu and J W Thiessen 1992 “Childhood thyroid cancer in Belarus” Nature 359 680; V Beral and G Reeves 1992 “Childhood thyroid cancer in Belarus” Nature 359 680; E Ron, J Lubin, A B Scheider 1992 “Thyroid cancer incidence” Nature 360 113
  7. “The Chernobyl accident: UNSCEAR’s assessments of the radiation effects” UNSCEAR website
  8. The International Chernobyl Project: an Overview (Vienna: IAEA 1991): 44
  9. “International co-operation in the elimination of the consequences of the Chernobyl Nuclear Power Plant accident” 24 May 1990, UNA S-1046/14/4; “Third meeting of the Inter-Agency Task Force on Chernobyl” 19–23 September 1991, WHO E16-445-11, 5; “Briefing note on the activities relating to Chernobyl” 3 June 1993, Department of Humanitarian Affairs DHA, UNA s-1082/35/6/, acc 2002/0207; Anstee to Napalkov, 17 Jan 1992, WHO E16-445-11, 7
  10. Gonzalez to Napalkov, 10 August 1993, WHO E16-445-11, 19; B G Bennett 1993 “Background information for UNEP representative to the meeting of the Ministerial Committee for Coordination on Chernobyl” 17 November 1993, New York, Correspondence Files, UNSCEAR Archive, Vienna
  11. Gonzalez to Napalkov, 10 August 1993, WHO E16-445-11, 19

Einstein world record, Spider-Man physics, quantum films and cakes


By Sarah Tesh and James Dacey

A world-record-breaking horde of Albert Einsteins invaded Toronto in Canada on Tuesday 28 March, when 404 people gathered in the city’s MaRS Discovery District dressed in the genius’s quintessential blazer and tie, and sporting bushy white wigs and fluffy moustaches. As well as breaking the previous Guinness World Record of 99 Einsteins, the gathering kicked off this year’s Next Einstein Competition. The online contest invites the public to submit ideas that can make the world a better place and awards the winner $10,000 to help them realize it.


Quantum memory is made from doped silicon

Quantum information has been stored in a single atom of phosphorus embedded in a silicon crystal – and then retrieved at a later time. The quantum memory was made by physicists in Australia who say that this kind of memory could be an important ingredient in silicon-based quantum computers with the potential to be more scalable, compact and easier to mass-produce than devices based on rival technologies.

Long-term storage in classical computers is straightforward; digital bits are simply copied from the processor to a rotating magnetic disk or other suitable medium. But quantum computers, which encode data in the form of quantum bits, or qubits, face a fundamental obstacle – the no-cloning theorem dictates that it is impossible to copy the state of a qubit or any other quantum object.

A quantum memory instead involves transferring a quantum state from one qubit to another, so erasing the state of the first qubit in the process. Because the second qubit – the “memory qubit” – is chosen to be more resilient to external sources of electrical or magnetic interference that would otherwise destroy quantum coherence, this transfer could enable computations that rely on information being parked temporarily while other data are processed. However, the first qubit – the “processing qubit” – also plays its part. Being less resilient to interference, it is therefore also more responsive to deliberate electromagnetic stimulation, and as such is used to read and write data.
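To see why a transfer, rather than a copy, is compatible with the no-cloning theorem, consider a minimal numpy sketch of an idealized write-read cycle built from a two-qubit SWAP gate. This illustrates the principle only; it is not the group’s actual pulse sequence:

```python
import numpy as np

# Single-qubit basis states
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An arbitrary "processing qubit" state to be stored
theta = 0.7
psi = np.cos(theta) * zero + np.sin(theta) * one

# Two-qubit SWAP gate: exchanges the states of qubits A and B
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Start: processing qubit (A) holds psi, memory qubit (B) starts in |0>
initial = np.kron(psi, zero)

# "Write": after the swap the memory qubit holds psi while the processing
# qubit is left in |0> -- no copy remains, so no-cloning is respected
stored = SWAP @ initial

# "Read": a second swap returns psi to the processing qubit
retrieved = SWAP @ stored

print(np.allclose(retrieved, initial))  # True: the state is recovered
```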

Extra electron

In the latest work, Andrea Morello of the University of New South Wales and colleagues have exploited a natural two-qubit system by doping silicon with atoms of phosphorus. Silicon atoms contain four electrons in their outer shells. This means that each electron forms a covalent bond with a neighbouring atom, and this gives silicon its crystal structure. Phosphorus, which is next to silicon in the periodic table, adds an extra positive charge to the lattice, so attracting an extra electron. This effectively creates a hydrogen-like atom in which the less magnetically sensitive nuclear spin forms the memory qubit, while the electron spin acts as the processing qubit.

Silicon’s crystal properties mean that an electron confined in its lattice has a very narrow spatial wavefunction – just a few nanometres wide. As Morello points out, this means having to manipulate a solid-state circuit at close to the atomic scale – something, he says, that “is not a cup of tea in a university research lab”. But silicon’s long quantum-coherence times give it a major advantage over rival technologies – as does the ease with which silicon devices can be manufactured. This is because its spins are naturally insensitive to electric noise and can be made very insensitive to magnetic noise by enriching silicon so that its only isotope with nonzero nuclear spin – silicon-29 – is almost entirely absent.

In their experiment, Morello and colleagues implanted phosphorus atoms into a 100 × 100 nm² region of a 900 nm-thick layer of enriched silicon. They set the initial state of the phosphorus electron spin using a microwave antenna fabricated on top of the silicon chip. That spin state is then transferred to and from the nucleus using a series of radio-frequency pulses from the same antenna. To read out the value of the electron spin, they created a single-electron transistor from aluminium electrodes fabricated on the chip. The transistor is turned on if the electron escapes the phosphorus nucleus, which only happens when it is in its (higher-energy) spin-up state.

Remarkably long time

The researchers found they could transfer the spin state of the phosphorus electron to the nucleus and keep it there for up to 80 ms – “a remarkably long time in the solid state,” says Morello – before transferring it back to the electron and reading it out. Unfortunately, they found that the electron’s final state only matched its initial state around 80% of the time – a fidelity that fell far short of the 99% possible when operating the electron and nuclear qubits separately. They believe this is caused by a shift in the electron’s resonance frequency after they turn on the radio-frequency pulses, and say that they will now work to eliminate this shift.

Morello’s group is not the first to demonstrate quantum memory in silicon. Back in 2008, John Morton, then at Oxford University, and colleagues did so by collectively manipulating the spin states of billions of phosphorus atoms in a large piece of silicon crystal. But Morton says the latest work is important because in a large-scale quantum computer, information must be encoded into individual qubits. He notes that the hardest part of the experiment was reading out the electron spin state, the Australian group having first developed its detection technique in 2010.

Morton, now at University College London, concedes it “remains an open question” among physicists about how useful this kind of memory would be, given doubts about the length of time that electrons would be “idle” as opposed to processing, and also given the need for very high fidelities. One “very exciting” future avenue of research, he says, is to use nuclear spins that are more weakly coupled to the electron spin, such as the spin-1/2 nuclei of silicon-29 in naturally occurring silicon. Being less affected by the electron’s removal during the read-out process, they could potentially operate with fewer interruptions than the phosphorus nuclei.

The research is reported in Quantum Science and Technology.

Flash Physics: Alexei Abrikosov dies, SpaceX lands a rocket, long nanotubes are excellent heat conductors

Superconductivity pioneer Alexei Abrikosov dies at 88

The Nobel laureate Alexei Abrikosov has died at the age of 88. Born in Moscow in 1928, Abrikosov completed an undergraduate degree in physics at Moscow State University in 1948 before studying for a PhD in physics at the Institute for Physical Problems in Moscow, graduating in 1951. Following a stint as a researcher there, Abrikosov moved in 1965 to the Landau Institute for Theoretical Physics, where he worked until 1988. After two years as director of the Institute for High Pressure Physics, he moved to Argonne National Laboratory in the US in 1991, where he remained for the rest of his career. Abrikosov made pioneering contributions to the theory of superconductivity. The theory of type-I superconductors – in which the material completely expels a magnetic field – was developed in the 1950s by the US scientists John Bardeen, Leon Cooper and Robert Schrieffer, for which the trio were awarded the Nobel Prize for Physics in 1972. Building on the Ginzburg–Landau theory co-developed by Vitaly Ginzburg of the P N Lebedev Physical Institute in Moscow, Abrikosov developed a theory for “type-II” superconducting materials, in which superconductivity and magnetism can co-exist. In 2003 Abrikosov shared the Nobel Prize for Physics with Ginzburg and Anthony Leggett of the University of Illinois at Urbana-Champaign “for their pioneering contributions to the theory of superconductors and superfluids”.
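The type-I/type-II distinction can be stated compactly in the Ginzburg–Landau framework on which Abrikosov built; the following is a textbook summary rather than material from the obituary’s sources:

```latex
% Ginzburg-Landau parameter: ratio of the magnetic penetration depth
% \lambda to the superconducting coherence length \xi,
\kappa = \frac{\lambda}{\xi}
% Type-I (complete expulsion of the field):   \kappa < 1/\sqrt{2}
% Type-II (the field threads the material as Abrikosov's quantized
% vortex lattice):                            \kappa > 1/\sqrt{2}
```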

SpaceX relaunches and lands a Falcon 9 rocket

Photograph of the SpaceX Falcon 9 rocket launch as part of the SES-10 mission

A used Falcon 9 first stage has been successfully relaunched and landed as part of a recent SpaceX mission. Traditionally, rockets have been used only once, but in this case the stage had previously flown 11 months ago on a supply run to the International Space Station. The current mission, launched from Florida’s Kennedy Space Center at 18:27 EDT on 30 March, saw the stage help to deliver SES-10, a commercial communications satellite, towards geostationary orbit. The stage was then successfully landed on a barge in the Atlantic. Containing nine Merlin engines and aluminium–lithium alloy tanks of liquid oxygen and rocket-grade kerosene propellant, the first stage represents about 80% of the cost of a Falcon 9 launch, so its successful relaunch and recovery is an important milestone for SpaceX that may lead to significant cost savings. “This is going to be, hopefully, a huge revolution in spaceflight,” says SpaceX chief executive Elon Musk. Although NASA’s space-shuttle system was partially reusable, the SpaceX mission is the world’s first re-flight of an orbital-class rocket.

Long nanotubes are excellent heat conductors

The thermal conductivity of single-walled carbon nanotubes (SWCNTs) increases with their length on scales of several millimetres, according to physicists in Taiwan. Carbon nanotubes have walls that are just one atom thick, and it was already known that their thermal conductance increases with length on microscopic scales. This contradicts Fourier’s law of heat conduction, which says that thermal conductivity is an intrinsic material property, independent of the shape of a sample. Calculations had suggested that the thermal conductivity of defect-free SWCNTs would keep increasing up to millimetre lengths; however, it has been very difficult to both calculate and measure the conductivities of long SWCNTs. Now, Victor Lee, Chih-Wei Chang and colleagues at the National Taiwan University in Taipei have come up with a way of measuring the thermal conductivity of SWCNTs with lengths between 4 μm and 1.039 mm. They found that the thermal conductivity increases with the length of the SWCNT, and that some 1 mm-long samples had conductivities approximately four times higher than those of diamond or graphene – materials both known for their very high thermal conductivity. The physicists say their findings suggest that the heat-carrying capacity of long-wavelength lattice vibrations (phonons) plays an important role in the thermal properties of low-dimensional systems such as SWCNTs. The result, which is reported in Physical Review Letters, could lead to highly effective heat-management systems for tiny devices such as computer chips.
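For reference, the law that the measurements contradict can be written in one line, and the observed length dependence is often described by a power law (a common parametrization, not necessarily the one used in the paper):

```latex
% Fourier's law: heat flux proportional to the temperature gradient,
% with a conductivity \kappa that should be independent of sample size,
\mathbf{q} = -\kappa \nabla T
% The Taiwan experiments instead indicate a length-dependent conductivity,
\kappa(L) \propto L^{\alpha}, \qquad 0 < \alpha < 1,
% i.e. the conductivity itself grows with the tube's length rather than
% being a fixed material constant.
```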


  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics. Tune in to physicsworld.com later today to read our extensive news story on a new silicon qubit.

Asteroid ‘Bee-Zed’ shares a retrograde orbit with Jupiter

A possible refugee from the Oort Cloud is reshaping our understanding of orbital dynamics: the object revolves backwards around the Sun while sharing an orbit with the giant planet Jupiter.

The object, known as 2015 BZ509 or “Bee-Zed” for short, was first spotted by the Pan-STARRS survey and followed up by a team led by Paul Wiegert of the University of Western Ontario in Canada using the Large Binocular Telescope in Arizona. If we were to look down at the solar system from above the Sun’s north pole, we would see the vast majority of objects, including all the planets, orbiting in a counterclockwise fashion. Bee-Zed bucks this trend: it orbits clockwise, in retrograde fashion.

Although rare, retrograde orbits per se are not mysterious. What really makes Bee-Zed stand out is that it also shares Jupiter’s orbit, in a 1:–1 resonance (the minus sign indicating retrograde motion). But instead of being kicked out of the orbit by the gas giant, the asteroid is in a configuration that has allowed it to remain stable for millions of years while never colliding with Jupiter.

Shared orbit

Jupiter is no stranger to sharing its orbit. Trapped at the L4 and L5 Lagrangian points, 60° ahead and behind Jupiter, respectively, are two swarms of asteroids known as the Trojans that move prograde – in the same direction as Jupiter.

Bee-Zed is a kind of “anti-Trojan”, whose existence was predicted by Helena Morais of São Paulo State University in Brazil and Fathi Namouni of the Côte d’Azur Observatory in France, two years before its discovery. Its orbit is inclined by 163° to the plane of the ecliptic, cutting across Jupiter’s path every six years, but it never gets closer than 176 million kilometres. Jupiter’s gravity perturbs its orbit during each encounter, but “these nudges are quite gentle because the two don’t get very close,” Wiegert told physicsworld.com. Ultimately the perturbations cancel each other out, with the net result being that Bee-Zed remains on its orbital path.
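The six-year crossing interval follows directly from the shared orbit. Here is a quick back-of-the-envelope check in Python, using a round number for Jupiter’s period rather than the paper’s precise orbital elements:

```python
import math

# Jupiter's orbital period in years (round value for illustration)
T_JUPITER = 11.86

# In a 1:-1 resonance Bee-Zed shares Jupiter's semi-major axis, so by
# Kepler's third law it has the same period -- but it travels the orbit
# in the opposite sense, so the two close on each other at the SUM of
# their mean motions.
n = 2 * math.pi / T_JUPITER        # mean motion of each body, rad/yr
relative_rate = 2 * n              # opposite directions: rates add

# Time between successive orbit crossings:
encounter_interval = 2 * math.pi / relative_rate   # = T_JUPITER / 2

print(f"Encounters every {encounter_interval:.2f} years")  # ~5.93 years
```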

Jupiter is very dominant in the solar system and can grab hold of objects and move them to new areas that can be safe or potentially hazardous
Jonathan Horner, University of Southern Queensland

Wiegert suspects that Bee-Zed was once a comet that originated in the Oort Cloud, from which comets can arrive from any direction, often on highly inclined or retrograde orbits. The most famous comet in history, Halley’s comet, has a retrograde orbit and is suspected to be an interloper from the Oort Cloud.

Comet puzzle

How comets like Halley transition from very long-period orbits to the relatively short periods they have today is uncertain, says Jonathan Horner of the University of Southern Queensland, Australia, who was not part of the research. If Wiegert is correct about Bee-Zed being a former Halley-like comet, then “it strikes me that this object could be part of the answer to that puzzle,” says Horner.

Another possibility is that Bee-Zed was once one of the more than 6000 known Trojan asteroids, but was scattered away during the early history of the solar system, when Jupiter and Saturn migrated outwards and fell into a brief orbital resonance.

“It’s possible that a few objects were scattered onto retrograde orbits and were then re-captured by Jupiter as part of that migration,” says Horner. A third possibility is the Kozai mechanism, in which perturbations from outer planets can reduce the eccentricity and increase the inclination of an asteroid’s orbit to the point that its orbit flips over.
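The eccentricity-inclination trade-off at the heart of the Kozai mechanism can be captured in a single conserved quantity; this is a textbook statement of the effect, not a result from the Bee-Zed paper:

```latex
% In the Kozai mechanism the component of the orbit's angular momentum
% perpendicular to the perturber's plane is approximately conserved:
L_z \propto \sqrt{1 - e^2}\,\cos i \approx \mathrm{const}
% Driving the inclination i up through 90^\circ (an orbit flip) sends
% \cos i \to 0, which forces the eccentricity e down so that the
% product stays fixed -- consistent with the scenario described above.
```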

More retrograde Trojans

Wiegert suspects that Jupiter may possess more retrograde co-orbital asteroids and comets waiting to be discovered, “but there’s unlikely to be as many as the prograde Trojans simply because retrograde orbits are much rarer,” he says, adding that he “would not be surprised if we discover more of them around different planets, including Earth”.

The co-orbital resonance with Earth should prevent such asteroids from impacting our planet, but Jupiter also has a role to play in the rate of impacts on Earth. In a series of papers published between 2008 and 2010, Horner and the late Barrie Jones of the Open University found that Jupiter is just as likely to fling objects towards us as it is to sweep them up or eject them out of the solar system. The presence of retrograde co-orbitals gives Jupiter another way of capturing incoming objects, but “I don’t think it adds any credence to the idea that Jupiter is our protector,” says Horner. “Jupiter is very dominant in the solar system and can grab hold of objects and move them to new areas that can be safe or potentially hazardous.”

The discovery of Bee-Zed also raises implications for exoplanet research. Although two Jupiter-sized bodies could not be co-orbital, “it’s not unreasonable for a small planet to exist in a retrograde orbit next to a Jupiter-sized planet,” says Wiegert. Although retrograde exoplanets have previously been discovered, they are not co-orbiting with other planets; however, Wiegert suggests that it’s worth looking out for such planetary duos.

The research is described in Nature.

Flash Physics: US reactor to make molybdenum-99, radiotherapy side effects, entangled photons from two places

US university reactor to make molybdenum-99

The University of Missouri Research Reactor (MURR) has unveiled plans to make molybdenum-99. This is used to make the medical-imaging isotope technetium-99m, which could soon be in short supply in North America because of the upcoming closure of a reactor in Canada. MURR has filed an application with the US Nuclear Regulatory Commission to make the isotope using technology developed by US-based General Atomics. This will involve placing low-enriched uranium targets inside the reactor and then extracting molybdenum-99 from the irradiated targets. The molybdenum-99, which has a half-life of 66 hours, will then be transported 1600 km to Ottawa, where it will be further purified by Canada-based Nordion. The isotope is then distributed to radiopharmaceutical manufacturers, who integrate the molybdenum into technetium-99m generators, which are shipped to hospitals. The plan is a response to possible shortages of molybdenum-99 that could occur once the NRU reactor in Chalk River, Canada, shuts down in March 2018. NRU currently makes most of the molybdenum-99 used in North America, and officials at MURR say their proposal could supply nearly half of the US demand for the isotope.
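The 66-hour half-life is what makes this supply chain workable. As a purely illustrative calculation (the 24-hour transit time below is an assumption, not a figure from MURR):

```latex
% Fraction of molybdenum-99 activity surviving a transport time t:
\frac{A(t)}{A_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad t_{1/2} = 66~\mathrm{h}
% For an assumed 24-hour journey:
\left(\tfrac{1}{2}\right)^{24/66} \approx 0.78
% Roughly three-quarters of the activity arrives, which is why the
% 66-hour parent isotope, rather than 6-hour technetium-99m itself,
% is what gets shipped.
```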

Shining a light on radiotherapy side effects

SFDI images of breast tissue damage by radiation therapy

An LED-based device allows scientists to monitor skin damage caused by radiotherapy during breast-cancer treatment. Patients often undergo radiation therapy after surgery or chemotherapy to kill any remaining cancerous cells. Unfortunately, the treatment can cause significant skin afflictions including irritation, peeling, blistering, permanent discolouration and tissue thickening. There are currently no methods for predicting the severity of reactions, so researchers at the University of California, Irvine (UC Irvine) in the US have begun using spatial frequency domain imaging (SFDI) devices to monitor and characterize the damage. SFDI exploits the fact that light is absorbed and scattered to varying degrees depending on the properties of the target object; detecting the reflected light therefore gives information about those properties. In the current work, Anaïs Leproux and colleagues are using an SFDI device originally developed by David Cuccia, formerly of UC Irvine and founder of Modulated Imaging. To monitor breast tissue, the team uses low-power LED light at eight visible and near-infrared wavelengths. The light is projected in specific patterns using a digital micromirror device, allowing a camera to detect variations in reflectance more accurately. “Since we use several wavelengths of light, we perform spectroscopy and obtain the content of melanin, tissue hemoglobin, in the de-oxygenated and oxygenated state, from which we can calculate the total blood volume and oxygen saturation in the tissue,” says Leproux. The non-invasive technique measures 3–5 mm into the skin. The team addressed concerns about exposing skin to additional radiation by calculating that 10 measurements are equivalent to two seconds in the sun. They hope that by characterizing and monitoring the skin damage, they can gain a better understanding of the processes involved and potentially predict patients’ reactions. The work is being presented at next week’s OSA Biophotonics Congress: Optics in the Life Sciences meeting in San Diego, California.
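The multi-wavelength step Leproux describes amounts to linear spectral unmixing: at each wavelength the measured absorption is a weighted sum of the chromophore spectra, and the weights (concentrations) are recovered by least squares. Below is a minimal, self-contained sketch with made-up extinction coefficients; the real instrument uses calibrated chromophore spectra and an SFDI light-propagation model:

```python
import numpy as np

# Illustrative extinction spectra for three chromophores at eight
# wavelengths (invented numbers, NOT the instrument's calibration data).
# Columns: oxy-haemoglobin, deoxy-haemoglobin, melanin.
E = np.array([
    [0.30, 0.35, 0.50],
    [0.25, 0.32, 0.45],
    [0.20, 0.28, 0.40],
    [0.18, 0.30, 0.36],
    [0.20, 0.20, 0.30],
    [0.23, 0.18, 0.26],
    [0.25, 0.16, 0.22],
    [0.28, 0.15, 0.18],
])

# Synthetic "tissue" with assumed chromophore concentrations
c_true = np.array([0.8, 0.4, 0.6])

# Absorption at each wavelength is a linear mix of the chromophore
# spectra; add a little measurement noise
rng = np.random.default_rng(0)
mu_a = E @ c_true + rng.normal(0.0, 0.005, size=8)

# Recover the concentrations by least squares: eight equations, three unknowns
c_fit, *_ = np.linalg.lstsq(E, mu_a, rcond=None)

# Derived physiology, as in the quote above: total haemoglobin content
# and oxygen saturation
total_hb = c_fit[0] + c_fit[1]
oxygen_saturation = c_fit[0] / total_hb
print(c_fit.round(3), round(total_hb, 3), round(oxygen_saturation, 3))
```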

Entangled photons created in two different places

Photons created in entangled pairs in a nonlinear crystal can emerge at two different points in space, say physicists in the UK who have studied a special case of a process called spontaneous parametric down-conversion (SPDC). Their finding contradicts a general assumption that such photons are created at the same place in the crystal. SPDC involves firing higher-energy photons into a special type of crystal, which results in the production of two lower-energy photons. This is a quantum-mechanical process and the two photons emerge in an entangled state, which means that as they fly off in different directions they maintain a relationship that is stronger than that allowed by classical physics. Such entangled pairs can then be used in a number of applications including quantum computing and quantum cryptography. Kayn Forbes, Jack Ford and David Andrews of the University of East Anglia looked at a specific type of SPDC called degenerate down-conversion, in which the two photons have the same energy. “Until now, it has been assumed that such paired photons come from the same location,” says Andrews. The trio identified a process involving the propagation of virtual photons in the crystal that results in photons being emitted from two different places. Andrews describes this as “a new positional uncertainty of a fundamental quantum origin,” adding: “Everything has a certain quantum ‘fuzziness’ to it, and photons are not the hard little bullets of light that are popularly imagined.” The study will be described in Physical Review Letters.
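For context, SPDC kinematics are fixed by energy and momentum conservation inside the crystal; in the degenerate case each photon of the pair carries half the pump energy. These are standard relations, not expressions from the new analysis:

```latex
% Energy and momentum conservation for pump (p), signal (s) and idler (i):
\hbar\omega_p = \hbar\omega_s + \hbar\omega_i,
\qquad
\hbar\mathbf{k}_p = \hbar\mathbf{k}_s + \hbar\mathbf{k}_i
% Degenerate down-conversion is the special case
\omega_s = \omega_i = \omega_p / 2
```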


  • You can find all our daily Flash Physics posts in the website’s news section, as well as on Twitter and Facebook using #FlashPhysics.

Leadership lessons learnt in the lab

Much is written about the shortage of physics graduates entering teaching, about how many children are never taught physics by a subject specialist and about the need for more pupils to pursue physics beyond school. But there is another area of crisis in schools, and that is leadership. Filling leadership posts is becoming increasingly challenging, especially at the top of the profession. For those with the potential and the skills, this is therefore a time full of opportunities: leadership in schools could be the perfect career path.

There are opportunities for the full range of physics graduates in teaching, but for those with research or postgraduate experience the prospects may be even better. Reflecting on my own career, the PhD itself wasn’t the key factor in helping me move up the different levels of the education profession. Instead it was the range of skills gained during that postgraduate degree that laid the foundations for leadership success.

After graduating in 1990 I spent three years doing a PhD in particle physics at the University of Birmingham, UK, while working on the OPAL experiment at the Large Electron–Positron Collider at the CERN particle-physics laboratory in Geneva, Switzerland. After receiving my doctorate, I trained to be a teacher for a year before returning to research for a postdoctoral position in biophysics. Three years later, I began my teaching career in a challenging comprehensive (secondary) school. Within a year or so I had my first low-level leadership role, later becoming an assistant headteacher in a large school and reaching headship four years ago.

Despite leaving research for a full-time career as an educator, it was my experience while still a part of research teams that equipped me with key skills and ultimately helped forge my leadership. At the heart of my research life was the teamwork necessary for the functioning of any experimental project. The size of a team varied over my research life, from small teams of three to large detector groups at CERN. A common factor in any of these settings that amazed me, and still does, was the level of democracy and inclusiveness shown within the team. Young, inexperienced voices were heard and given the opportunity to speak by those with years of expertise, acknowledging that age on its own was no guarantee of creativity and innovation. While the implicit hierarchy was always there – maintained by respect for those above you – everyone felt valued and a part of something bigger.

The lessons learnt during that time have served me well throughout my school leadership career. Collaborative approaches within leadership engender shared ownership and this, in turn, ensures all those in a team feel valued. Treating those who work for you as intellectual equals produces commitment, while failure to do so produces resentment. The best leaders in education know this, and they share their authority, invest in their colleagues and place faith in the ability of others. The worst exercise power, alienate their colleagues and encourage division rather than unity.

But there is a range of other skills developed in my research years that are worth mentioning. Effective school leaders must be analytical. Though much of school leadership is about “soft skills”, there is still a great deal that is analytical. Judging pupil performance and teacher effectiveness, handling budgets and writing timetables all require a logical and analytical mind. As the job often entails dealing with challenging “human” issues involving children, parents and staff, one benefits from a calm, rational and analytical approach. Schools themselves operate within legal, social and technical frameworks, and dealing with the requirements of inspections, legislation and even school-management systems all benefits from the detached and abstract approaches often found in the world of research.

School leaders must also be extremely comfortable with data. Don’t get me wrong, I would not suggest that the data processing required in a school equates to that required at the Large Hadron Collider, but the ability to confidently and reliably handle and interpret large data sets is a key skill of a leader in education. Many, if not most, schools are data-driven and mastery of data analysis and statistics will give you a distinct advantage over colleagues with less experience in the area. Data are the lifeblood of schools and, though often shunned as the impersonal and regrettable side of the profession, they are at the heart of education and unlikely to take a back seat any time soon. Indeed, I recall a meeting with a school inspector during which we discussed the school’s progress data. I was able to illustrate and argue points using a range of statistical tools that are rarely found in an inspector’s tool kit. This helped my school through an inspection and did my career no harm at all.

Throughout my postgraduate career, I also honed my presentation skills. From my first talk at CERN during the first term of my PhD, through to lectures as a postdoc, I spent a lot of time planning and delivering talks to expert audiences both small and large. The ability to talk with confidence to a range of audiences is a key skill of school leaders. Whether you are addressing your science department, a whole body of staff or 300 parents, you need the confidence and experience to speak eloquently and convincingly. You are judged by this – you may well be the finest school manager, but fail in front of a critical audience and your credibility is out of the window. Allied to this is the ability to communicate complex ideas. Physics is complex and even with fellow physicists you will surely have to explain your own niche areas. Many issues in schools are extremely complex, made more so by the simple fact that they do not obey the laws of physics. So while you may not be explaining quantum field theory to a parent, I would say that explaining the latest guidance from the UK Department of Education on measuring progress to 100 parents comes a close second.

Having gone through some of the reasons that school leadership offers opportunities for physicists I should issue a caveat or two. Your teams will rarely contain others with backgrounds in physics, even if you are running a science department, so don’t be surprised if you are the only physicist in the room. We are trained in a unique discipline and we do tend to think in a particular way; do not expect your colleagues to think like you or necessarily even understand. Physicists have the potential to make great school leaders, but I wouldn’t like to see a school run by half a dozen of them! Your colleagues will be experienced, intelligent professionals, with equally valuable but potentially very different skill-sets to those you possess. It is this diversity – which is a strength and should always be encouraged – that makes school leadership teams work.

Every child, teacher, parent and community is unique, as is every situation; a consequence of which is that every leadership challenge is distinctive. Coupled to this is the fact that every school leader is unique when it comes to their own values, beliefs, experience, knowledge and expertise. Consequently there is no simple checklist to determine whether you could be a successful school leader, but for those pursuing postgraduate physics and looking for a meaningful career beyond universities and laboratories, school leadership may offer a rewarding future.
