
Stalin’s scientists

Although much has been written about the history of Soviet science, in the Western imagination the topic remains opaque, remote and (most importantly) hemmed in by a simplistic dichotomy of good versus evil. Beyond a few touchstones, such as the episode when Stalin’s fondness for the theories of Trofim Lysenko almost destroyed the burgeoning field of genetics, and the Soviet “theft” of the atomic bomb, most Western laypeople have few points of reference to Soviet science. In his emotionally resonant book Buried Glory, Istvan Hargittai, a well-known Hungarian chemist and prolific writer of popular books on science, adds depth to this picture by bringing to light the biographies of more than a dozen Soviet scientists.

Hargittai describes these men (and the scientists he has chosen are all men) as “heroes” – a word that suggests that his narrative will hew to the archetype of the noble scientist struggling against the oppression of the Soviet state. To his credit, however, Hargittai’s complex and often moving biographies of these scientists eschew a hagiographical approach. His stated goal for the book is to humanize the lives of “a select set of Soviet scientists”, and in this he succeeds. A more ambitious objective, to communicate “what it meant to be creating in science under Soviet conditions”, falters slightly, but only because Hargittai focuses on a few elites in lieu of the many thousands who undoubtedly experienced life differently from the famous personalities of this book.

The principal theme of the book – one that is implicit within the individual biographies, but never articulated as such – is the relationship between scientist and state. Through sequential chapters, each typically dedicated to a single person, Hargittai ably weaves together a longer narrative about the ways in which Soviet scientists negotiated their relationship with a repressive state apparatus. Many, including the condensed-matter physicist Peter Kapitza, resisted. Some, like Yulii Khariton, came to a rapprochement of sorts, while others avoided controversy at all costs (Yakov Zel’dovich, for example). As Hargittai shows, the latter approach was very difficult given the enormous importance the Bolsheviks placed on science as a tool of nation-building. To be a scientist, especially one with membership in the hallowed halls of the Academy of Sciences, was to be one of the chosen elite. But the benefits (and there were many material ones) of such an honour were often outweighed by the increased visibility it bestowed. As Hargittai notes, “there was no branch of science where scientists were immune to persecutions”, and this is true not only of the elites but also of several generations of mid-level scientists and engineers.

All of the men profiled here did their primary research in nuclear physics, low-temperature physics or chemistry. Some of their names will be familiar to physicists and chemists in the West: Kapitza, Lev Landau, Nikolai Semenov and Igor Tamm, for example. Most strikingly, almost all of them were involved with the Soviet atomic-bomb programme (and later with the development of thermonuclear weapons) in some way. Echoing the claim made in David Holloway’s magisterial Stalin and the Bomb (1994, Yale University Press), Hargittai shows that working on the bomb frequently insulated top physicists from persecution. Going further, he also demonstrates that this made it possible for them to engender limited forms of democratic activity, especially within the inner workings of the Academy of Sciences, suggesting that scientists were able to cultivate a modest “democratic” culture that was at odds with the larger imperatives of the Communist Party structure. This is not a new notion, however, as other historians of Soviet science (including Alexei Kojevnikov, Ethan Pollock and Nikolai Krementsov) have also explored the spaces within which Soviet scientists operated and found surprising agency in particular cases, for example in promoting their own careers.

An important thread running through the narrative is one of identity. Many of the men who appear in the book were Jewish, exemplifying the proportionally large number of Russian Jews who were members of the post-revolutionary Intelligentsia, and especially the scientific and technical Intelligentsia. Scientists such as Zel’dovich, the main theoretical physicist behind the Soviet atomic-bomb project, and Khariton, a kind of counterpart to Robert Oppenheimer on the Soviet side, faced many hurdles because of their religious identity. Hargittai also highlights several instances where lesser lights faced severe discrimination as Jews, especially during the late Stalin years when Jewishness was identified with the “evils” of international “cosmopolitanism”. The conundrum, of course, is that many Jewish scientists and engineers (such as the space designer Boris Chertok, whom the Russian press has recently been identifying as the “patriarch” of Soviet cosmonautics) remained in high positions throughout the Soviet period, making this part of Hargittai’s story more complicated than simply one of unbridled antisemitism. Soviet industrial managers were often quite willing to suspend their deep prejudices in the service of larger national goals, especially if these goals were related to security.

Perhaps the most well-known scientist in the volume is Andrei Sakharov, a leading light in the Soviet hydrogen-bomb programme who faced enormous adversity in maintaining his commitment to freedom from oppression. Much has been written about him, and Hargittai synthesizes this extant literature, ably reconstructing Sakharov’s evolution from devoted scientist to human-rights activist. Sakharov’s story is interesting not only because it highlights all of the contradictions of Soviet science – brilliant achievements despite (or perhaps because of) a draconian system – but also because his activities were a barometer for how other leading scientists saw their own place within the Soviet system. The infamous 1973 letter signed by 40 Soviet scientists denouncing Sakharov’s actions appears in Hargittai’s narrative as a polarizing milestone that pitted colleague against colleague. Among Westerners, Sakharov has often served as a kind of blank slate upon which to impose binary expectations of the lone hero versus all-encompassing evils of Communism, but historians of Soviet science have shown that such formulations are simplistic at best and misleading at worst. Echoing this earlier work, Hargittai’s book underscores that, despite their reservations about many of the ills of the Soviet regime, most of the scientists profiled here remained genuinely committed to its improvement, not its destruction.

Hargittai’s writing is leavened by a personal touch (he often knew the men in question) that makes Buried Glory eminently readable. The science is rendered in clear language, rarely obfuscated by jargon. The biographies are not simply chronologies of data but rather fully formed representations of the lives of these extraordinary men. There is a hint of tragedy about their lives, with incarcerations, disrupted family lives, “disappeared” relatives, dismissals, exiles and so on, but the tone is one of individuals driven to succeed and animated by the possibilities opened up by modern science. There is no definitive answer on the principal conundrum of whether Soviet science flourished despite the system or because of it, but Hargittai’s work is a worthy popular addition to the literature in English on this rather overlooked topic.

  • 2013 Oxford University Press £22.99/$35.00 hb 368pp

Web life: Particle Clicker

So what is the site about?

Particle Clicker is a game that lets players run their own simulated particle-physics experiment. It was created at CERN earlier this year during a 48-hour “hackathon” in which teams of students competed to develop the best computing projects, and it is both simple and addictive. Visitors to the website are greeted with a stylized image of a particle detector. When you click on the detector, it lights up as simulated collisions send showers of particles across the screen. Creating such collisions increases your stockpile of data – something you’ll need in copious amounts if you want to turn your modest collider experiment into a world-leading collaboration.

Is that all you need to do?

Not at all. Clicking over and over again like a demented lab rat will send your data count spiralling upwards, and in the game’s early stages this is the only way you can make progress. But just as in real life, the big bosses in this game (that’s you) don’t have to do their own grunt work for long. Once you’ve made your first scientific discovery (a couple of dozen clicks should get you there), your reputation grows and the grant money starts trickling in. Before long, you’re rich enough to hire your very own PhD students to do the clicking for you. From there, it’s onwards and upwards as you and your growing army of minions work to amass the data, reputation and funding you need to advance the cause of particle physics to unheralded levels of procrasti–er, glory.

Anything else I should know about?

Having an army of PhD students, postdocs and even – gasp! – summer students beavering away on your behalf is fine as far as it goes, but they will work a lot more efficiently if you spend some of your hard-won funding on technical upgrades. Improvements to efficiency and accelerator luminosity will give you more data per click, and bestowing some tongue-in-cheek perks on your workforce (free beer for the PhD students, extra coffee for the postdocs) will make them more productive, too. You can also choose to spend money on public relations, which boosts your reputation and how fast you win funding.

Why you should visit…

Particle Clicker’s developers have made a decent effort to build some science into the game. As well as the information boxes that pop up when you make a new discovery, the gameplay itself parallels the real scientific process in a number of ways. For example, the quantity of data required to make new discoveries increases over time – a fair reflection of the extremely data-intensive nature of modern particle physics. Also, once you have made a discovery, you can choose to investigate it further and thereby boost your reputation. However, each time you do this, the amount of data you need to amass in order to achieve the same reputation boost goes up. This, again, seems realistic: discovering charge–parity (CP) violation led to James Cronin and Val Fitch winning a Nobel Prize for Physics in 1980, but making an equally groundbreaking discovery today about this (now relatively well understood) phenomenon would require a prodigious amount of research.
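
For readers curious about the mechanics, this is the classic “incremental game” loop: clicks generate a resource, discoveries convert it into reputation and funding, hires automate the clicking, and every milestone costs more than the last. The sketch below is a minimal Python model of that loop; the names, prices and growth factors are illustrative assumptions, not values taken from the game’s actual code.

```python
# Minimal sketch of a clicker-style progression loop in the spirit of
# Particle Clicker. All names, prices and growth factors are illustrative
# assumptions -- they are not taken from the game's actual code.

def price(base, owned, growth=1.15):
    """Each successive hire costs more than the last."""
    return base * growth ** owned

data = 0.0
funding = 0.0
students = 0              # hired helpers who click for you
threshold = 50.0          # data needed for the next discovery
discoveries = 0

for tick in range(2000):              # one iteration = one game tick
    data += 1 + students              # your click plus your students' clicks
    if data >= threshold:             # a discovery...
        data -= threshold
        discoveries += 1
        funding += 100 * discoveries  # ...brings reputation and grant money
        threshold *= 2.5              # and the next one needs far more data
    cost = price(25.0, students)
    if funding >= cost:               # reinvest grant money in more students
        funding -= cost
        students += 1

print(f"After 2000 ticks: {discoveries} discoveries, {students} students hired")
```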

…and why maybe you shouldn’t

In its early stages, the game is seriously addictive, with scientific discoveries and upgrades appearing thick and fast. After an hour or two, though, it slows to a crawl as hiring new people and performing new experiments becomes prohibitively expensive. After this point, there’s not a great deal you can do except wait around for the Higgs boson to show up, which seems a trifle anticlimactic. But given how much time Particle Clicker can eat up, perhaps a built-in taper is not such a bad thing.

Superconductor finally goes with the FFLO

A long-sought-after phenomenon that allows superconductivity to survive even in very strong magnetic fields has been seen for the first time by an international team of physicists. The “FFLO” phase of superconductivity involves the formation of exotic quantum entities known as Andreev bound states. As well as providing further insight into superconductivity, the discovery could also further our understanding of particle physics and neutron stars, and even lead to better magnetic resonance imaging (MRI) systems.

Superconductivity and magnetism are usually sworn enemies. Superconductors will expel weak magnetic fields that would pass straight through a normal conductor, while a strong enough magnetic field will destroy superconductivity.

Conventional superconductivity occurs when vibrations in a crystal lattice allow electrons to bind together to form Cooper pairs that can flow through the lattice without resistance. The electrons in each pair have opposite values of spin angular momentum – one having spin-up while the other has spin-down. However, a strong magnetic field will flip the spins of some electrons, upsetting the balance of up and down spins and so destroying the Cooper pairs and the superconductivity itself.
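
How strong is “strong enough”? For a conventional spin-singlet superconductor, textbook BCS theory gives the Pauli (Clogston–Chandrasekhar) limit as a rough ceiling on the field the Cooper pairs can survive. The expression below is standard background rather than a figure quoted in the article.

```latex
% Pauli (Clogston-Chandrasekhar) limit: spin-singlet pairing with
% zero-temperature gap \Delta_0 is destroyed once the Zeeman energy
% outweighs the pairing energy,
\mu_B H_P \simeq \frac{\Delta_0}{\sqrt{2}} ,
% which for a BCS superconductor works out to roughly
H_P \approx 1.84\,\mathrm{T}\times\left(\frac{T_c}{1\,\mathrm{K}}\right).
```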

Mismatched electron pairs

However, in 1964 two pairs of physicists – Peter Fulde and Richard Ferrell, alongside Anatoly Larkin and Yuri Ovchinnikov – predicted that certain materials ought to superconduct, even in the presence of very strong magnetic fields. This “FFLO” state would occur as a result of mismatched electron pairs – having a finite rather than zero net momentum – gathering together in bands across the material, outside of which superconducting currents could still flow (see figure “Go with the FFLO: mismatched electrons”).
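
In the FFLO picture the Cooper pairs carry that finite net momentum, so the superconducting order parameter is no longer uniform: it oscillates in space and vanishes on a set of planes, which is where the unpaired, spin-polarized quasiparticles gather. The schematic form below is the standard Larkin–Ovchinnikov ansatz from the literature, not an equation given in the article.

```latex
% Larkin-Ovchinnikov form of the FFLO order parameter: pairs
% (k\uparrow, -k+q\downarrow) with net momentum q give a gap that is
% modulated in real space,
\Delta(\mathbf{r}) = \Delta_0 \cos(\mathbf{q}\cdot\mathbf{r}) ,
% so superconductivity survives between the nodal planes where
% \Delta(\mathbf{r}) = 0, and spin-polarized quasiparticles collect on them.
```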

In the last 50 years many groups have tried to test this idea experimentally, and some have found indirect evidence for FFLO – mainly by measuring macroscopic properties of superconductors to create detailed phase diagrams of the materials. Rolf Lortz of the Hong Kong University of Science and Technology and colleagues, for example, identified a new phase between the superconducting and normal conducting phases in the organic compound κ-(BEDT-TTF)2Cu(NCS)2, which they interpreted to be FFLO and which, they found, pushed the magnetic limit for superconductivity up from 21 T to nearly 30 T.

(Figure: diagram showing the FFLO state of superconductivity.)

In the latest work, Vesna Mitrović of Brown University in the US, and colleagues from Japan and the French National High Magnetic Field Laboratory (LNCMI) in Grenoble, have instead found evidence for FFLO at the microscopic scale. Their research explores the energy spectrum of a superconductor’s unpaired electrons, which have a higher energy than the paired variety. This energy gap has a single value throughout a conventional superconductor, but is predicted to vary from one region to another inside a material in the FFLO phase.

Superconducting quasiparticles

Mitrović and co-workers looked for regions within very thin sheets of κ-(BEDT-TTF)2Cu(NCS)2 where the energy gap goes to zero. These are regions where paired and unpaired electrons have the same energy, and where it is therefore energetically possible for unpaired electrons to exist. These unpaired electrons are best thought of as “quasiparticles”, which exist in complicated quantum superpositions with everything around them, and, unlike normal electrons, can superconduct. Specifically, the researchers looked for quasiparticles known as Andreev bound states, which resemble normal electrons whose spins point in the direction of an applied magnetic field.

The experiment was carried out at the LNCMI, where nuclear magnetic resonance (NMR) was used to confirm two expected properties of Andreev bound states – and therefore the presence of the FFLO phase. The first, and most important, involved measuring the time that it took for electrons to flip their spin when exposed to powerful magnetic fields, a characteristic that reflects the energy spectrum of electrons across the sample. The second property required measuring the distribution of spins within the material.

“Other groups have carried out impressive and important work, showing that in a high magnetic field you go into a new state,” says Mitrović. “But they could not tell what this state looked like. The purpose of our experiment was to look, and what we see is actually quite striking.” She adds that the work might prove to be important outside of condensed-matter physics, because it could help particle physicists to identify a form of superconductivity that involves quarks with unbalanced flavour, and in astrophysics might explain how neutron stars can exhibit superconductivity while at the same time generating enormous magnetic fields.

Better MRI systems

Lortz says that the research provides “important information of a different kind” to that obtained by his group. He adds that, in principle, it could lead to the creation of more powerful superconducting magnets for MRI systems because the superconducting state persists to higher fields. While κ-(BEDT-TTF)2Cu(NCS)2 is not suitable for making magnets, Lortz adds that the FFLO phase might be observed in more appropriate materials in the future.

Ted Forgan of the University of Birmingham, who has looked for FFLO in the superconductor CeCoIn5, says that the results look “pretty convincing”. But he points out that NMR, while providing microscopic data, does not show spatial variation directly. “Maybe high-field scanning tunnelling microscopy or spectroscopy could show a spatially modulated state,” he says.

The research is described in Nature Physics.

Graphene boosts thermal conductivity of popular plastic

A graphene coating has been used to boost thermal conductivity of the common plastic polyethylene terephthalate (PET) by up to 600 times. This new result from an international team of physicists and engineers could substantially increase the use of PET and other plastics in technologies such as solid-state lighting and electronic chips, where the ability to conduct heat is essential.

PET is a widely used plastic that will be familiar to anyone who has bought a bottle of water or soft drink. It is low-cost, strong, durable and recyclable, and it can be moulded into just about any shape. Fibres of the plastic are also used to make fabrics such as polar fleece. While PET’s low thermal conductivity makes it ideal for warm clothing, its inability to transfer large amounts of heat precludes its use in electronics and other devices where getting rid of heat is important.

Graphene is a sheet of carbon just one atom thick, and has an exceptionally high thermal conductivity of about 2000–5000 W/mK near room temperature – compared with about 0.2 W/mK for PET. Graphene’s thermal conductivity will drop when it is placed on a substrate, because heat-carrying lattice vibrations are scattered by interactions with the substrate. However, the thermal conduction of the graphene layer will still remain high, relative to most other materials.

Graphene flakes

Now a team led by Alexander Balandin at the University of California, Riverside and Konstantin Novoselov at the University of Manchester has used graphene flakes to create films just a few microns thick on a thin PET substrate. The researchers then showed that the presence of the graphene gives the composite material a much greater thermal conductivity than PET alone.

The researchers used a non-contact optothermal Raman technique for their thermal measurements. In this method, the micro-Raman spectrometer is used as a sort of thermometer to measure temperature changes in the sample, and the laser that performs the Raman measurements is also used to heat the sample. The technique was developed in Balandin’s lab, where it was used to discover the exceptionally high thermal conductivity of graphene in 2008 (see “Graphene continues to amaze”).

Team member Hoda Malekpour, a PhD student in Balandin’s group, was responsible for making Raman measurements. Balandin explains: “Our results reveal that the thermal conductivity of PET increases by up to 600 times when it is coated with the graphene laminate films.” This gives the laminates a similar thermal conductivity to metals such as iron and lead, approaching that of silicon.
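
A quick back-of-the-envelope check puts the “600 times” figure in context. The reference values below are typical handbook room-temperature conductivities assumed here for illustration, not numbers supplied by the team.

```python
# Back-of-the-envelope check of the reported enhancement. The reference
# conductivities for the metals and silicon are handbook room-temperature
# values assumed for illustration (all in W/mK).
k_pet = 0.2                  # mid-range of the 0.15-0.24 W/mK quoted for PET
k_laminate = 600 * k_pet     # the reported factor-of-600 enhancement

print(f"graphene/PET laminate: ~{k_laminate:.0f} W/mK")
# For comparison: lead ~35, iron ~80, silicon ~150 W/mK -- so the laminate
# lands in the range of common metals and approaches silicon, as claimed.
```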

Drastic improvement

Balandin adds: “The thermal conductivity of PET on its own is very low – in the 0.15–0.24 W/mK range at room temperature – and other plastic materials are also poor conductors of heat. This drawback prevents plastics from being employed in many applications that could benefit from their low cost, durability and light weight. Our work proves that a few micron-thick graphene layers deposited on plastic films can drastically improve the way they conduct heat, and so now make such applications possible.”

The team, which includes scientists from Riverside, Manchester, Bluestone Global Tech in New York and Moldova State University, used a fairly simple theoretical model in this work to explain how the thermal conductivity of graphene laminates depends on graphene flake size and impurity concentrations. “We would now like to develop a more detailed model based on multi-scale simulations of heat transport in graphene, to optimize its use as a coating material in thermal management applications,” says Balandin.

The research is described in Nano Letters.

How does medical ultrasound imaging work?

In less than 100 seconds, Mathias Fink introduces this medical technique used to generate images of the inside of the body. To produce ultrasound images, medical physicists target sections of the body with ultrasound and then study the reflected signal to build images of internal structures. In essence, they are doing the same thing that dolphins do when they are searching for fish.

Medical ultrasound is most commonly associated with the field of obstetrics, where it is used to generate images of the foetus developing in the womb. It is also used for studying the heart and abdominal regions. Fink, a researcher at Ecole Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI) in France, explains how some modern versions of ultrasound can produce around 10,000 images per second. This enables medical scientists to track mechanical waves through the body to create maps of the elasticity of tissue. This can be particularly useful in the diagnosis of cancer, where tissue stiffness can reveal details about the nature of the disease.
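
Two textbook relations underpin those images (the numbers here are typical values used only for illustration, not figures from the video): pulse-echo timing gives depth, and the speed of the tracked shear wave gives tissue stiffness.

```latex
% Pulse-echo ranging: an echo arriving a time t after the pulse, with a
% sound speed in soft tissue of c ~ 1540 m/s, comes from a depth
d = \frac{c\,t}{2}
% (the factor of 2 accounts for the round trip). Shear-wave elastography:
% ultrafast imaging (~10,000 frames per second) tracks a shear wave of
% speed c_s; for a tissue density \rho ~ 1000 kg/m^3 the shear modulus,
% a measure of stiffness, is
\mu = \rho\, c_s^{2} .
```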

Watch more from our 100 Second Science video series.

Material gives up its spin-splitting secrets

The first direct observation of giant spin-splitting has been made by an international team of researchers, which made the discovery while trying to develop spintronic technologies. The discovery came as a surprise because the material being studied – tungsten diselenide (WSe2) – has a crystal symmetry that, conventionally, would not allow for such spin-splitting to occur.

Spintronics is an emerging technology that aims at harnessing the electron’s intrinsic spin and magnetic moment to develop new kinds of solid-state devices that are much faster, smaller and more energy efficient than current electronics.

Giant spins

Such research has already paved the way to more advanced computer memories and hard drives. However, to build transistor-like devices based on the electron’s spin alone, researchers must have a clear idea of how electron spins travel in a solid, as dictated by the underlying spin-dependent electron band structure of the material. In particular, it is important to create a material with large “spin-splitting” – the separation of the spin-up and spin-down states in energy and momentum – that can be switched on and off at will. Unfortunately, to date the splitting seen in candidate materials has been too small to have any practical applications.

Now, Philip King and Jon Riley at the University of St Andrews in the UK, along with colleagues in Europe, Japan and Thailand, have combined detailed experiments and theoretical calculations of WSe2 and have, surprisingly, observed giant spin-splitting in the material, even though the material’s structure suggests that any such splitting should be impossible.

The finding came as a shock because WSe2 is what researchers describe as a “spin-degenerate” material. This means that even though the electrons are subject to very strong spin–orbit interactions, the material’s crystal symmetry makes it nearly impossible for the electronic states to be clearly “spin-polarized” – that is, for the spin-up states to be cleanly split, or segregated, from the spin-down states. But this split is exactly what the researchers saw. A WSe2 crystal comprises two alternating 2D atomic layers, and the team found that each layer had almost 100% spin-polarized states. This appears to violate a fundamental symmetry, known as “inversion symmetry”, that WSe2 possesses. So what is going on?
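
The symmetry argument behind that expectation is compact and worth spelling out; the relations below are standard textbook statements rather than equations from the paper.

```latex
% Why inversion plus time-reversal symmetry force spin degeneracy in the
% bulk. Time reversal relates opposite momenta and opposite spins,
E_{\uparrow}(\mathbf{k}) = E_{\downarrow}(-\mathbf{k}) ,
% while inversion symmetry relates opposite momenta with the same spin,
E_{\uparrow}(\mathbf{k}) = E_{\uparrow}(-\mathbf{k}) .
% Taken together, every band must be spin-degenerate:
E_{\uparrow}(\mathbf{k}) = E_{\downarrow}(\mathbf{k}) .
```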

Layered mystery

What King’s team ultimately found is that while WSe2 is spin-degenerate in its bulk, its individual atomic layers – in which inversion symmetry is broken – are highly spin-polarized. WSe2’s overall crystal structure consists of many such layers, with each one rotated by 180° relative to the previous layer. Within each atom-thin layer, inversion symmetry is absent and the electronic states are highly spin-polarized. While this applies only to those electronic states that are confined to a single layer (some states are delocalized across multiple layers of the crystal), these states make up the majority of the material’s states.

Overall inversion symmetry is maintained, and the sign of the spin-polarization for states confined in one layer “is exactly compensated by that of the equivalent states in the rotated layer”, says King. “So locally, the states are spin-polarized, while globally they are not, and the consequences of inversion symmetry are restored,” he says. King further explains that materials in which spin-polarized states are “already hidden away in the bulk” would be perfectly suited to spintronics applications, because their electronic states are naturally spin-polarized – something that normally has to be induced in spintronics materials.

Unlocking potential

Splitting one layer of WSe2 from another, for example by applying a voltage difference between them, would “unlock the potential of this material for hosting large tuneable spin-splitting”, according to King, and could provide a very large spin-splitting of up to nearly 500 meV. “This is because the spin-splitting would be directly tied to the energy difference between the two layers,” says King. Previously, the largest such splitting – also observed by King, along with other colleagues, in 2011 – was around 200 meV. That amount of splitting should already make it possible to develop a small device that functions at room temperature, so 500 meV would be even better.

King also points out another benefit of the WSe2 system in that the spin points out of the 2D layer of the system, while in previous systems, the spin tended to point in the surface plane of the material. This, he says, adds an “additional functionality to the spintronics tool box”.

To make its observations, the team selectively probed just the top layer of WSe2 by carefully tuning the measurement parameters. To do this, the team used a technique known as “angle-resolved photoemission spectroscopy”, studying the samples at MAX-lab at Lund University in Sweden and the Diamond Light Source in the UK. This allowed the group to track how electrons propagate in the solid. A beam of bright, monochromatic synchrotron light illuminates the sample, ejecting electrons from the surface via the photoelectric effect; these electrons are then scattered off a heavy metal surface. Because spin-up electrons scatter preferentially in one direction and spin-down electrons in another, the researchers can calculate where each type of spin will end up and place detectors accordingly. By then measuring the energy and the emission angle of the electrons, the team can reconstruct the energy splitting between the spin-up and spin-down states.
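
The band reconstruction from measured energy and angle rests on two standard photoemission relations (textbook kinematics, not specific to this experiment).

```latex
% Energy conservation gives the binding energy E_B of the emitted electron
% from the photon energy h\nu, the work function \phi and the measured
% kinetic energy E_{kin}:
E_B = h\nu - \phi - E_{\mathrm{kin}} .
% Conservation of in-plane momentum relates the emission angle \theta to
% the electron's crystal momentum parallel to the surface:
\hbar k_{\parallel} = \sqrt{2 m_e E_{\mathrm{kin}}}\,\sin\theta .
```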

King is excited by the possibility of a whole new class of such materials with “hidden” spin-polarization. “Controlling this could bring fantastic new opportunities for spintronics, and a large arsenal of new materials in which we can achieve this,” he says.

“One of the key contributions of this paper is to highlight the importance of the layered nature of this material by demonstrating that the spin properties can vary enormously from layer to layer,” says spintronics expert David Awschalom from the University of Chicago in the US, who was not involved in the new work. “Spin-splitting on this scale in a semiconductor is valuable for the prospect of spin-based logic, which has the potential to significantly impact high-speed, low-power electronics,” he says, further explaining that transition-metal dichalcogenides such as WSe2 “have exotic properties of their own, and physicists are just beginning to sort them out. What’s especially fascinating about this work is its focus on fundamental spin and electronic properties of a new material. And new material properties often catalyse new applications”.

The research is published in Nature Physics.

The huge untapped potential of Palestine’s physicists

This time last year I was just settling into my new office at Birzeit University in the West Bank, Palestine. I spent the winter teaching a master’s course in particle physics alongside my colleague Bobby Acharya from King’s College London. I was supported by my institute – the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste in Italy – where I am a postdoctoral fellow working on the ATLAS experiment at CERN.

In the six months I spent in Palestine, I visited three universities in the West Bank – An-Najah, Al-Quds and Birzeit – and three in Gaza – Al-Aqsa University, the Islamic University of Gaza and Al-Azhar University – that all teach undergraduate courses in physics. Palestinians put a high value on education, with about a quarter of the population heading to university – one of the highest figures in the region. While going to university is not cheap, scholarships help the best students to continue their studies. I met academics who, along with lecturing, carried out research in areas such as biophysics, computational nuclear physics and condensed-matter physics.

While the staff were extremely welcoming and the students clearly had a love for physics, it was disheartening to see the state of the laboratories and how spirits were dampened by the equipment – in some cases much of it was broken, and there were limited resources available for new apparatus. Yet the technicians were extremely resourceful, and much of the working equipment is handmade using a variety of materials.

One of the key issues in Palestine – as in many developing regions – is that lecturers are overloaded with teaching and have little time for research. On top of that, lecturers are poorly paid and physics departments are underfunded. Such working conditions will not attract young, ambitious physics students to work in physics in Palestine. Those who go elsewhere for further study often stay abroad, where they have the facilities to continue their research.

As physics is relatively undersubscribed at universities, it receives less funding than other, more popular departments. The lack of interest stems from physics being seen to lead exclusively to a career in teaching. Parents often deter their children from taking the subject at university, meaning that only the very determined go on to study physics at undergraduate level. Universities in Palestine also have little or no access to academic journals.

Palestinian physics departments have additional problems in trying to develop research. Institutions have been closed for a number of periods in the past, and travel – a vital ingredient of an academic career – is not easy. Within the West Bank just getting to university can often be arduous, let alone trying to travel internationally.

The situation is worse in Gaza. Academics and students there are not allowed to travel to the West Bank, which inhibits collaboration and prevents the sharing of resources and infrastructure. On top of that, the blockade on Gaza often means that academics and students cannot travel at all, and that equipment is difficult or impossible to get to the universities. This leaves academics in Palestine isolated. And of course the recent war in Gaza has left universities devastated and without basic things such as power.

Supporting cause

For all the bad news, there are some encouraging aspects. Women dominate physics in Palestine, a trend that is also seen in other parts of the Middle East. Coming from the UK, where only about 20% of undergraduate physics students are female, I found it remarkable to stand in front of audiences that were 60–80% women. It seems that Palestine does not struggle to get women to study science.

However, at faculty level things are different – at Birzeit, for example, particle theorist Wafaa Khater, who is head of the physics department, is the only female faculty member. Other departments have similarly few women in senior positions, possibly because women in the region are not expected to forge professional careers. Nevertheless, social norms are changing and there is a bright new generation of ambitious female students pursuing their studies abroad.

By supporting the development of physics research in Palestine, we can also help women there. Furthermore, in an ever-changing political, social and economic environment, strong institutions engaging in international research are vital, and so too is the production of high-quality scientists. This is especially urgent in the Middle East where the environmental crisis worsens.

I found it inspirational to meet such ambitious, bright young students and tenacious academics who endeavour to fulfil their potential in challenging circumstances. But for their research to grow, universities must put more value on scientific research, and realize the impact it can have on students, the university and the scientific culture of Palestine. Lecturers must also have their teaching loads eased, and be given the opportunity to take sabbaticals and to travel to international conferences and institutions for collaboration.

As a scientific community, I believe we must support research in Palestine – and in the region generally – by providing internships, master’s and PhD placements for the brightest students, as well as by collaborating with academics who pursue their research in the face of heavy teaching loads, poor funding and international isolation. Scientific opportunity simply must be based on merit, not one’s country of birth. Finally, academic journals must become accessible to all, irrespective of an individual’s or an institution’s means. There is huge untapped potential among scientists in Palestine and throughout the region – and we and our institutes must reach out and collaborate.

How to deal with the data deluge from big science

When the €2bn ($2.6bn) Square Kilometre Array (SKA) sees first light in the 2020s, astronomers will have an unprecedented window into the early universe. Quite what the world’s biggest radio telescope will discover is of course an open question – but with hundreds of thousands of dishes and antennas spread out across Africa and Australasia, you might think the science will be limited only by the telescope’s enormous sensitivity or its field of view.

But you would be wrong. “It’s the electricity bill,” says Tim Cornwell, the SKA’s head of computing. “While we have the capital cost to build the computer system, actually running it at full capacity is looking to be a problem.” The reason SKA bosses are concerned about electricity bills is that the telescope will require the operation of three supercomputers, each with an electricity consumption of up to 10 MW. And the reason that the telescope needs three energy-hungry supercomputers is that it will be churning out more than 250,000 petabytes of data every year – enough to fill 36 million DVDs. (One petabyte is approximately 10^15 bytes.) When you consider that uploads to Facebook amount to 180 petabytes a year, you begin to see why handling data at the SKA could be a bottleneck.

This is the “data deluge” – and it is not just confined to the SKA. The CERN particle-physics lab, for example, stores around 30 petabytes of data every year (and discards about 100 times that amount) while the European Synchrotron Radiation Facility (ESRF) has been annually generating upwards of one petabyte. Experimental physics is drowning in data, and without big changes in the way data are managed, the science could fall far short of its potential.

Data drizzle

Cornwell says that he can remember the start of his astrophysics career at the UK’s Jodrell Bank Observatory in the late 1970s, when staff could print out every data point from an experimental run on a sheet of paper six metres long. As sample sizes have inflated, however, experimental groups have been forced to overcome problems associated with taking the data – whether storing it, processing it or transferring it from one place to another. “Over 30 years, each of those has been a factor at some point,” Cornwell says.

At the SKA today, being able to process data without blowing the energy budget is the key concern. While the actual volume of data falls with each processing step, Cornwell and his colleagues still have to perform careful computer modelling in order to determine exactly which experiments will be possible. Some will not – and electricity costs will be to blame. “This is what people predicted five years ago – that capital costs would be exceeded by the running costs,” Cornwell notes.

The problem at the SKA is not simply down to the volume of data being generated. Unlike many other experimental facilities, the SKA’s data will be coming from disparate sources – dishes and antennas – that are spread over much of the southern hemisphere. As a result, the data must be collated before anything else can be done with them. If the data originated in roughly the same place, on the other hand, other possibilities for streamlining would open up. That is true at CERN, which in 2006 launched a special computing network to farm out data from its Large Hadron Collider (LHC) to labs around the world for processing, thereby avoiding the need for costly, on-site number crunching. Today, the Worldwide LHC Computing Grid consists of more than 170 computing centres in 40 countries and played a vital role in the discovery in 2012 of the Higgs boson.

Despite the success of the Grid, CERN is concerned about what the future holds for data-intensive computing. In May, a public–private partnership between CERN and various computing companies called CERN openlab produced a white paper, “Future IT Challenges in Scientific Research”. The paper outlined six main challenges: how to extract data, and how to initially filter them; the best types of computing platforms and software to handle the data; how to store the data; where to find the computing infrastructure, whether it is on-site, over a CERN-type grid, or in an Internet-shared “cloud”; how to transmit data; and how to analyse them efficiently.

Physics institutions feel the pressure of these challenges differently. At the ESRF, where X-ray data must be recorded within a confined region around a sample, engineers have to extract one gigabyte of data per second – a tiny fraction of what is possible at CERN or the SKA. However, even that relatively small amount is tricky to handle. The synchrotron was originally mandated to give visiting scientists all the raw data that they generate during their experimental runs, but Andy Grotz, the group leader of software at the ESRF, says that aim is no longer realistic and that the raw data must now be reduced. “We suffer from the data deluge, in that we almost cannot keep up,” he adds. “We have to change the way we work radically.”

Often little is lost by reducing raw data. For instance, thousands of X-ray images may be needed to define a crystal according to the most common parameters – orientation, strain and so on – but, once those parameters have been calculated, the raw data are, for many visiting scientists, superfluous. The trouble is how to reduce the raw data when the requirements of visiting scientists can be so varied. Grotz says the ESRF has in the past allowed its computer scientists to help visiting scientists reduce data “on a goodwill basis” so that the files are small enough to be taken home on a USB stick or any other convenient medium. Now, he says, the lab is looking at ways of formalizing the process because the raw data sets are always too unwieldy.
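
The reduction step is conceptually simple even when the data volumes are not: each raw detector frame is collapsed to the handful of fitted parameters a visiting scientist actually needs. The Python sketch below illustrates the idea only – the frame sizes, parameter names and the trivial “fit” are invented for illustration and bear no relation to the ESRF’s actual pipelines.

```python
import numpy as np

# Schematic data reduction: many raw detector frames are collapsed to a few
# summary parameters per frame. Frame sizes, parameter names and the trivial
# "fit" below are invented for illustration only.

def reduce_frame(frame):
    """Replace one raw frame with a handful of summary parameters."""
    return {
        "total_counts": float(frame.sum()),
        "peak_position": tuple(int(i) for i in
                               np.unravel_index(frame.argmax(), frame.shape)),
        "peak_intensity": float(frame.max()),
    }

# Stand-in for a run of raw frames (random counts, purely synthetic)
frames = [np.random.poisson(5.0, size=(1024, 1024)) for _ in range(20)]
reduced = [reduce_frame(f) for f in frames]

raw_mb = sum(f.nbytes for f in frames) / 1e6
print(f"raw data: ~{raw_mb:.0f} MB -> reduced: {len(reduced)} small records")
```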

One option – which Grotz and others have submitted as a research and innovation project to the European Commission (EC) under its Horizon 2020 funding programme – is to set up a cloud-computing facility with other European science institutions for the express purpose of data reduction. The key requirement of such a system would be that it is easy to use, even for those scientists who are not computer-savvy. “More and more users want to be able to use this as a turnkey system, where they can provide a sample and get out data that they understand,” says Grotz.

Different requirements

If it goes ahead, Grotz and colleagues’ Horizon 2020 project would follow on from the Cluster of Research Infrastructures for Synergies in Physics (CRISP). This project, which has run for three years with backing from the EC, brought together 11 European research facilities, including the SKA, the ESRF and CERN, to tackle all aspects of the data deluge. CRISP has had some successes, such as finding new ways to extract data quickly from detectors, but Grotz says other targets – such as automatically storing the contextual data (or “metadata”) from experiments – have proved difficult because of the innate differences between research institutions.

Differences between hosting institutions may not only be practical. Bob Jones, the head of CERN openlab, believes that recent scandals over governments tapping into private data have galvanized people into thinking about who should have access to what data. Science is competitive, he says, and groups that have helped fund an experiment may be concerned if that experiment’s data are farmed out somewhere else for processing, because it might allow non-participating groups to sneak access. Some data could even carry a political or security risk, he says – a satellite’s image of a war zone, for instance.

Legislative answers to such problems could hinder collaborative computing efforts, or they could streamline them, says Jones. But whatever happens, he says, there needs to be a collective decision. “There are a number of interests, but really it boils down to Europe deciding what the rules are for accessing data, rather than having them imposed on it by a third party.”

When it comes to the data deluge, it seems, staying above water will not be easy. Grotz says that a more general problem is financial, in the sense that computing is often bottom of the list for managers who are budgeting experimental infrastructure. “By the time we get to the software and computing infrastructure, the money has usually run out,” he says. A change of mindset is needed, but Grotz thinks that we are still in that antediluvian world where just generating the data is the priority. “It’s like we’re still working with slide rules,” he says.

Drilling down to catch cosmic rays at the South Pole

Ultra-high-energy cosmic rays are an enigma. They bombard the Earth with energies of more than 10^20 eV – over 10 million times greater than the energies generated at CERN’s Large Hadron Collider. And yet, where ultra-high-energy cosmic rays originate and what accelerates them is still a mystery. Astrophysicists are hoping such questions will be answered by the $8m Askaryan Radio Array (ARA) that, if funded, will come fully online at the South Pole in the coming decade. Consisting of 37 nodes or “stations”, the ARA will sit on the 3 km-high Antarctic plateau and eventually span an area of 200 km^2.

The ARA is a large international collaboration consisting of 50 researchers from some 11 institutions, including the University of Wisconsin-Madison in the US and the National Taiwan University (NTU) in Taipei. All the ARA partners are involved in existing Antarctic cosmic-neutrino detectors: IceCube – a 1 km^3 detector embedded 2.5 km deep in the ice that reported its first detection of neutrinos with energies up to 10^15 eV in 2013 – and the balloon-borne ANITA detector. ANITA, which is yet to make a confirmed detection, surveys around 1.5 million square kilometres of ice from 37 km up in the atmosphere and is sensitive to neutrinos with energies above 10^19 eV.

The ARA will explore the so-far uncharted energies between IceCube and ANITA. Its main goal is to detect ultra-high-energy neutrinos that are predicted to result from interactions between high-energy cosmic rays and the cosmic microwave background in the vicinity of the unknown cosmic accelerators. Unlike the particles that make up cosmic rays, such as protons, neutrinos have no charge and next to no mass, meaning that those emitted from the edge of the universe can, in principle, reach Earth undeflected by fields and matter in the cosmos.

Spotting neutrinos

Building a detector at the South Pole might seem a perverse thing to do, but the beauty of Antarctica is that it has the largest expanse of ice in the world. And ice – being a dense, radio-transparent dielectric – is ideal for glimpsing neutrinos, which are notoriously hard to spot. Moreover, the ability to detect neutrinos improves the deeper you go. “We found that if we deployed the sensors at a depth of 200 m, then we have a factor of three times higher sensitivity than if we leave them at the surface,” says Albrecht Karle of the University of Wisconsin-Madison, who is managing the ARA’s deployment.

Each of the ARA’s 37 stations will detect radio waves over 50 km^3 of ice and will be placed around 2 km apart on a hexagonal grid. Stations have been designed as stand-alone interferometers, enabling the three stations already deployed to start observing. Working over two intense summer seasons, each lasting six to eight weeks, the collaboration installed the prototype testbed station at the surface of the ice in late 2010 as well as the three full stations – one at 100 m and two at 200 m below the surface of the ice – in 2011 and 2012. The telescope will become more sensitive as each new station gets added – an approach that leaves the door open for the total number of stations to eventually expand beyond just 37.

Chosen for reliability and cost-efficiency, the 37 stations will be powered by the generators that supply the Amundsen-Scott South Pole Station with electricity. They will be connected by cables, which, along with data links, will run as far as 15 km across the ice to reach the most remote stations. It is estimated that the ARA will draw under 5 kW – less than 1% of the total power consumption at the South Pole Station.

Each ARA station will have 16 receiver antennas, divided into eight pairs spread across four detector “strings” suspended vertically in the ice to a depth of 200 m.  Each pair comprises one horizontally and one vertically polarized antenna. The antennas will detect neutrinos by looking for the radio waves emitted when these tiny neutral particles collide with nuclei in the ice – a phenomenon called the Askaryan effect. By measuring the sequential arrival of the pulse at the antennas, the direction and the curvature of the wavefront can be deduced. From this information, the distance of the neutrino–ice interaction from the station can be calculated, which when combined with the pulse magnitude, can be used to determine the energy of the interaction.
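
The direction reconstruction from the “sequential arrival of the pulse” can be illustrated with a plane-wave fit: for a distant source, the arrival time at each antenna depends linearly on its position, and a least-squares fit recovers the arrival direction. The Python sketch below makes that idea concrete; the antenna layout, timing jitter and the plane-wave simplification are assumptions for illustration (the real analysis also uses the curvature of the wavefront to estimate the distance to the interaction).

```python
import numpy as np

# Illustrative plane-wave direction fit from pulse arrival times at one
# station. Antenna layout, timing jitter and the plane-wave approximation
# are assumptions made for illustration only.

c_ice = 3.0e8 / 1.78   # radio wave speed in ice (refractive index ~1.78), m/s

# 16 antennas: four strings on a 20 m square, four depths per string
xy = [(0, 0), (20, 0), (0, 20), (20, 20)]
pos = np.array([(x, y, -z) for x, y in xy for z in (170, 180, 190, 200)], float)

true_dir = np.array([0.6, 0.3, 0.74])           # propagation direction
true_dir /= np.linalg.norm(true_dir)

# Plane wave: t_i = t0 + (r_i . n) / c, plus ~0.1 ns of timing jitter
rng = np.random.default_rng(1)
t = 1e-6 + pos @ true_dir / c_ice + rng.normal(0.0, 1e-10, len(pos))

# Linear least squares for [t0, s], where s = n / c
A = np.hstack([np.ones((len(pos), 1)), pos])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
fitted_dir = coef[1:] / np.linalg.norm(coef[1:])

print("true direction:  ", np.round(true_dir, 3))
print("fitted direction:", np.round(fitted_dir, 3))
```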

With the design fine-tuned to maximize sensitivity, Karle estimates that the ARA will detect up to 10 events per year. Frequency and polarization characteristics detected by the antennas will also help improve the location of the events in the ice – a step towards locating neutrino sources in the sky. “By knowing their incoming energy and angle, we can point back to certain suspect astrophysical objects,” says Pisin Chen of the NTU, which is funding one quarter of the stations. These could include supermassive black holes found in active galaxies and gamma-ray bursts. However, the ARA’s primary goal, as Karle stresses, is to detect the particles in the first place. “Our priority is to detect events reliably,” he says. “Once we have nailed that, then it will be a different game.”

Drilling down under

Members of the collaboration are facing some significant challenges in building the ARA. As mechanical engineer Terry Benson of the University of Wisconsin-Madison points out, one particular problem is that the harsh conditions, remote location and associated logistics mean that the drills that are used to bore holes in the ice must be robust, easy to repair and fuel-efficient. In fact, Benson and colleagues have pioneered drilling technology with these criteria in mind. Their approach melts the ice with pressurized water at a temperature of 85 °C and then rapidly pumps it to the surface, leaving a dry hole. This means that the detectors do not need to be waterproofed from the melted ice or shielded as it refreezes and expands.

Yet, the technique carries a risk that equipment can get damaged and lost in the ice. “Keeping water liquid at the South Pole is hard to do,” explains Benson, who has worked eight summers in Antarctica since 2004. “If you have delays when the drill equipment is sitting in the hole, it can freeze in and you can’t get it out.” But in addition to good design, meticulous preparation and instruments that can monitor the holes during drilling, Benson singles out experience as the most important factor. Safety is another top priority, with hazards including the hot, pressurized drill water and high-voltage power, not to mention the harsh environment. Indeed, crews are equipped with survival packs in case they get injured or exposed to poor weather conditions that could stop them returning quickly to the South Pole Station.

Chen’s NTU group, which built the second and third stations, is now assembling additional ones in Taipei. However, for now, their installation is on hold. While money from Taiwan and other non-US states is in place, it is unclear if or when the National Science Foundation (NSF) will provide the ARA’s US partners with the funding needed for deployment. “It’s the US that has the capability of transporting all of the equipment to the pole. Most other countries are not able to do it, so that’s becoming a bottleneck,” says Chen, who adds that the US contribution is “very crucial”. If funding is secured in time for construction over the 2016–17 Antarctic summer, Karle estimates the array will be completed by 2022. Yet the ARA is not the only neutrino experiment being built at the South Pole: the ARIANNA facility, which will be located on the Ross Ice Shelf, is in the early stages of construction and will search more than 500 km^3 of ice at the same energies.

Amy Connolly – a physicist at Ohio State University who is working with colleagues on simulations of neutrino detections by the ARA and analysing its first data – says that the prototype testbed has provided a successful proof of principle and made the first observations of background noise, proving that it will not overwhelm signals from true neutrino events. By feeding the data into simulations of a full station, the collaboration is also developing techniques to maximize sensitivity. “Based on our experience with the testbed, we can say more definitely that with a full ARA, we can really dig deep into the expected flux of cosmogenic neutrinos and begin to do real physics and astrophysics,” says Connolly. Funding permitting, it would seem that the ARA has a bright future. “It’s going to be a very powerful experiment,” says Connolly. “If the NSF will only just allow us to build it.”

CERN gears up for LHC switch on

CERN’s 27 km-circumference Large Hadron Collider (LHC) – the world’s most powerful particle accelerator – is expected to be fully operational once again next year, allowing physicists to resume experiments after a two-year shutdown for maintenance and upgrading. CERN scientists hope that the upgrade will enable the LHC to operate with collision energies of 13 TeV, just short of its design energy of 14 TeV. This would open a new window for discoveries, including studying the Higgs boson in greater detail and hunting for “supersymmetric” particles. Indeed, the upgrade is expected to improve the LHC’s ability to detect heavy new particles by a factor of two, while the number of Higgs bosons produced will increase by an order of magnitude in total.

The LHC upgrade – costing SwFr150m ($160m) – was completed in June, and is being followed by a step-by-step restarting and testing process that will last into early 2015. The upgrade will allow the LHC to operate at higher energies and with beams that are squeezed into a smaller area as they pass through the detectors to increase the collision rate. “There is a general sense of anticipation – the excitement is building,” says theoretical physicist John Ellis of King’s College London, who is also based at CERN. “I am personally looking forward to a lot of fun in the next two to three years and, as data come out, raking through the coals and seeing if there are any gems.”

Following a hugely successful run lasting three years, the LHC, which accelerates and collides two counter-rotating beams of protons, was shut down for maintenance in February 2013. The LHC had been operating with collision energies of around 7 TeV – or 3.5 TeV per beam. Despite not reaching its full design energy during that first run, in July 2012 scientists were still able to announce that they had detected the Higgs boson, which was first theorized in 1964. CERN’s successful detection was capped last year when François Englert and Peter Higgs were awarded the 2013 Nobel Prize for Physics.

But the rest of CERN’s accelerator complex has been receiving plenty of attention too during the current shutdown, with maintenance and upgrading work also being carried out on the “injector complex” – the chain of smaller accelerators that feed the LHC with particles. The starting point for protons at CERN is linear accelerator 2, or “Linac 2”, which sends protons into the Proton Synchrotron Booster (PSB) that in turn injects protons into the Proton Synchrotron (PS). The next and final step before delivery of protons to the LHC is the 7 km Super Proton Synchrotron (SPS), which was first switched on in 1976. “I like to compare it to changing gears on a car,” says Ellis.

The long list of maintenance and upgrading tasks that CERN completed during the 16-month shutdown includes making 1695 openings and re-closures of the vacuum enclosures between the LHC magnets, which contain the vacuum and helium pipes. Engineers made 400,000 electrical-resistance measurements as well as improving the 13 kA circuits in the 16 main electrical-feed boxes. These circuits connect the warm current leads from the power supplies with the superconducting leads that feed the current to the magnets. The upgrade also involved completely replacing four quadrupole magnets and 15 dipole magnets that had looked suspect during testing.

But the biggest portion of work, according to Mike Lamont, head of the accelerator operation group at CERN, was the consolidation of more than 10,000  “splices” between superconducting magnets. These splices are essentially “superconducting cable joints” with six splices per interconnect. “This was a huge job,” Lamont says, pointing out that the consolidation included soldering shunts on the splices to ensure electrical continuity and installing improved electrical insulation, which also acts as a mechanical restraint to limit stresses and deformations.

The maintenance work was completed in June, with the restarting process for the LHC beginning shortly afterwards. At the time, Frédérick Bordry, CERN’s director for accelerators and technology, described the LHC as “coming out of a long sleep after undergoing an important surgical operation”. On 2 June the PSB was restarted, followed by the PS on 18 June. In July, powering tests began at the SPS, and the physics programme covering CERN’s antimatter experiments is expected to resume in October.

Lamont says that the LHC must now undergo months of intensive testing to ensure that the upgrade was successful before the full physics programme can resume next year. Testing includes putting a current through each of the LHC’s eight sections, and Lamont adds that all eight should be cooled down to the operating temperature of around 2 K by October.

Lamont, however, hesitates to describe the LHC as new, but agrees that the LHC will have enhanced capabilities. “From a physics viewpoint, we are entering new ground,” he says. “It is a new frontier.”
