Physics societies call for sustainable approach to open access

A group of 16 major physics societies around the world have come together to support open-access publishing and welcome the increased “policy momentum” towards it. Yet in a joint statement, the societies urge policy makers to preserve the “diversity, quality and financial sustainability” of peer-reviewed publishing. They also warn that certain policies, such as the proposed cOAlition S “rights retention strategy”, could “undermine the viability of high-quality hybrid journals”.

Physics has been at the forefront of open access ever since the arXiv pre-print server was founded in 1991. While arXiv provides access to early drafts of scientific articles, the final peer-reviewed versions of papers have traditionally been accessible to readers only through an institutional subscription. Open-access publishing, however, removes the need for subscriptions: authors are instead charged a fee, known as an article-processing charge (APC), and articles are made immediately and freely available for anyone to read.

Over the past decade, the total number of papers in physics has grown by an average of 2% a year, but the number of open-access papers in physics has surged by about 25% a year. Despite this progress, however, more than 85% of physics articles are published in “hybrid journals”, which are publications that remain subscription based but give authors the choice to make their papers open access by paying an APC.

Undermining quality

One of the biggest open-access drives underway is Plan S, which seeks to make research papers open access immediately after publication. The plan was unveiled in September 2018 by 11 national research funding organizations – dubbed cOAlition S – whose goal is that from 2021 “all scholarly publications on the results from research funded by public or private grants provided by national, regional and international research councils and funding bodies, must be published in open access journals, on open access platforms, or made immediately available through open access repositories without embargo”.

Backed by funding bodies such as UK Research and Innovation and the French National Research Agency, Plan S does not support the hybrid model but acknowledges a “transitional pathway” towards fully open access “within a clearly defined timeframe, and only as part of transformative arrangements”. cOAlition S has also developed its rights retention strategy, which would give researchers supported by cOAlition S organizations “the freedom to publish in their journal of choice, including subscription journals, whilst remaining fully compliant with Plan S”.

Significantly, however, cOAlition S wants publishers to modify their existing publishing agreements so that authors can make their “accepted manuscripts” – those that have been accepted for publication and include author-incorporated changes suggested during peer review but may not be in their final form – available at the time of publication under a “CC BY” licence. Such a move would allow the work to be copied and distributed with attribution.

The 16 societies – which include the Institute of Physics, which publishes Physics World, as well as the American Physical Society and the Chinese Physical Society – maintain, however, that the rights retention strategy would “undermine the viability of high-quality hybrid journals”. They also say that the move would mean that many physics researchers “no longer have an adequate range of options or freedom of choice in where they publish their work”.

The societies instead call for a “pragmatic, inclusive and sustainable approach to open access”, which includes broader international financial support for open access to be in place before hybrid journals can fully transition to open access.

Ask me anything: Carol Marsh

What skills do you use every day in your job?

At the start of my career and during my doctorate I used physics and maths. I produced designs with VHDL – a hardware description language that takes code and converts it into digital circuits. It’s different from a software language like C++, which runs a sequential set of instructions; instead, VHDL creates components such as flip-flops that run in parallel.
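To make the sequential-versus-parallel contrast concrete, here is a minimal conceptual sketch in Python (not VHDL, and not from the interview) of how a simulator treats clocked flip-flops: every next-state value is computed from the current state and then all registers are updated together on the clock edge, rather than one statement at a time.

```python
# Conceptual sketch (assumed example, not VHDL): clocked flip-flops
# update "in parallel" -- all next-state values are computed from the
# *current* state, then committed together on the clock edge, unlike
# sequential software statements where each line sees the result of
# the line before it.

def clock_edge(state):
    """Return the register state after one clock tick."""
    nxt = {}
    nxt["q0"] = not state["q0"]   # q0 toggles every cycle
    nxt["q1"] = state["q0"]       # q1 samples the *old* value of q0
    return nxt                    # commit everything at once

state = {"q0": False, "q1": False}
for tick in range(4):
    state = clock_edge(state)
    print(tick, state)
```

Written as ordinary sequential statements, q1 would instead pick up the already-toggled value of q0 – which is exactly the difference described here.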

The skill I use most now is communication. I’m always talking to customers, members of my team and organizations that promote engineering and diversity and inclusion. I do a lot of mentoring, mainly for people who want to become professionally registered, and I regularly give presentations on technical topics or about women in engineering. But my main job is chairing design reviews, which lets me see everything that’s going on across the whole site – all the different designs, how they work, what challenges the teams are experiencing – and every now and again I can still help people with their technical problems.

What do you like best and least about your job?

My job is to support a team of engineers and it’s wonderful to see someone who started off at the company straight out of university leading major projects several years later. It’s great to see them grow; I really enjoy that. I also like doing a wide range of activities: often I don’t know what I’m going to be doing on any given day. It can change instantly – from updating a process or ordering licences to approving designs for release to the customer or helping members of my team. The thing I like least is the number of meetings I have to attend; some days my diary is just full of them.

What do you know today that you wish you knew when you were starting out in your career?

I wasn’t confident when I was younger. When someone asked me to develop a design, I’d look at it and think “How am I going to do this?” But over the years, I’ve learned to take really complicated problems and break them down into smaller parts that I can solve. So if I could go back, I’d tell myself to be more confident and learn from my mistakes. In engineering, if you’re not learning you’re not growing. In school, you’re told that failure is a bad thing, but in engineering it is an opportunity to learn – you have to stay flexible and adapt to new information as soon as you receive it, while remaining confident about your skills. That ability to adapt is becoming more and more important in engineering. I also wish I had a crystal ball, as 30 years ago I had no idea how phenomenal the electronics we have would become. I give talks about the impact I think electronics will have over the next 30 years and it’s going to be absolutely incredible.

Optical sensor offers non-invasive monitoring of intracranial pressure

Traumatic brain injury is a major cause of death and disability. When a patient attends an A&E or neurocritical care unit with a head injury, one of the most important parameters to assess injury severity is intracranial pressure (ICP), as raised ICP is particularly associated with poor outcome. Measuring ICP, however, is currently a highly invasive process.

Speaking at the recent IOP symposium “Optics in Clinical Practice”, Panicos Kyriacou from the Research Centre for Biomedical Engineering at City, University of London described his team’s work in designing a non-invasive optical sensor technology for dynamically measuring ICP.

Measurement of ICP requires a surgeon to drill a hole in the patient’s skull and insert a pressure transducer (the ICP bolt). As well as being one of the most invasive non-therapeutic procedures, it cannot be performed at the site of an accident and carries large associated risks. “Stabbing a transducer into your brain carries certain risks – haemorrhage, leakage of cerebrospinal fluid or infection – and requires an expert neurosurgeon to perform the procedure. There needs to be a measured decision as to whether a patient really needs an ICP bolt,” Kyriacou explained.

As such, much research effort is now underway to develop methods for non-invasive ICP (nICP) monitoring. Currently, however, there is no practical dynamic measurement that can rapidly determine absolute ICP values in mmHg.

“Our vision is to have a standalone ICP monitor that is completely non-invasive. Such a device will be placed on the forehead and will continuously monitor ICP in absolute numbers, as well as providing graphical ICP trends during monitoring,” said Kyriacou. “The vision of this research is to create a disruptive technology that will replace the most invasive monitor in clinical practice.”

Optical sensing

To create the nICP detector, Kyriacou and colleagues, along with clinical partners from the Royal London Hospital, developed an optical sensor based on photoplethysmography (PPG), which uses light to measure variations in blood volume. The device emits near-infrared light at various wavelengths into the skull and the reflected optical signals are detected by proximal and distal photodiodes. The proximal photodiode detects superficial PPG signals arising from layers of the forehead or scalp. These can be subtracted from the PPG signals read by the distal photodiode, which detects photons from deeper into the brain.
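The proximal/distal subtraction described above can be sketched in a few lines of Python. This is only an illustration with synthetic signals – the sampling rate, waveform shapes and variable names are assumptions, not the team’s actual processing pipeline.

```python
import numpy as np

# Illustrative only: the proximal photodiode sees superficial (scalp)
# pulsations, while the distal photodiode sees scalp + deeper cerebral
# pulsations, so subtracting the two isolates the deep component.
fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of synthetic data

scalp = 1.0 * np.sin(2 * np.pi * 1.2 * t)            # superficial PPG component
cerebral = 0.3 * np.sin(2 * np.pi * 1.2 * t + 0.8)   # deeper, phase-shifted component
noise = 0.02 * np.random.randn(t.size)

ppg_proximal = scalp + noise               # shallow-penetration channel
ppg_distal = scalp + cerebral + noise      # deep-penetration channel

cerebral_ppg = ppg_distal - ppg_proximal   # estimate of the deep (cerebral) signal
```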

As a first step in evaluating the developed sensor technology and the capability of the acquired optical signals to reflect changes in ICP, the research team performed an in vitro evaluation using a custom-made brain phantom. Kyriacou explained that this was a necessary step, as “obviously, we couldn’t recruit volunteers and put bolts in their heads to evaluate the sensor prior to clinical trials”.

The researchers created a brain phantom incorporating vessels with pulsatile flow and contained within a chamber (representing the scalp) that could be pressurized. They placed the optical sensor on the top surface of the phantom and raised the pressure inside the chamber from 10 to 40 mmHg (intracranial pressures of interest to neurosurgeons). Altering the pressure resulted in clear changes in the recorded PPGs.

Key to the success of the nICP device is knowing which features in the PPG trace are directly related to changes in ICP. The researchers examined various features and found two that had a large dependence upon the pressure. They then used these two features to create an ICP prediction model.

The team ran the in vitro experiment three times and saw excellent agreement between the nICP and invasive ICP readings, with correlations of 0.95–0.98 and RMS errors of 1.45–3.12 mmHg for the three datasets. These findings gave the team the confidence to perform in vivo tests on healthy volunteers. While there was no gold-standard comparison in this case, they observed promising trends in ICP measured on volunteers during tilting and Valsalva manoeuvres.
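As a rough illustration of the kind of feature-based prediction and the metrics quoted above, the sketch below fits a two-feature linear model on synthetic data and reports a Pearson correlation and RMS error. The features, model form and numbers are assumptions for illustration only, not the group’s actual model.

```python
import numpy as np

# Synthetic example of a two-feature linear ICP model and its evaluation.
rng = np.random.default_rng(0)

icp_true = rng.uniform(10, 40, size=200)           # "ground truth" pressures (mmHg)
feat1 = 0.8 * icp_true + rng.normal(0, 2, 200)     # pressure-dependent PPG feature 1
feat2 = -0.5 * icp_true + rng.normal(0, 2, 200)    # pressure-dependent PPG feature 2

X = np.column_stack([feat1, feat2, np.ones_like(icp_true)])
coeffs, *_ = np.linalg.lstsq(X, icp_true, rcond=None)   # least-squares fit
icp_pred = X @ coeffs

r = np.corrcoef(icp_true, icp_pred)[0, 1]               # Pearson correlation
rmse = np.sqrt(np.mean((icp_true - icp_pred) ** 2))     # RMS error (mmHg)
print(f"correlation = {r:.2f}, RMSE = {rmse:.2f} mmHg")
```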

Clinical study

In the most recent phase of this work, the researchers joined forces with the Royal London Hospital, which has one of the largest neurological trauma centres in the UK, to begin clinical trials of the nICP device. They recruited patients who had an intracranial bolt (the gold standard for ICP) as part of their routine monitoring. This pilot clinical study, which began in February 2019 (paused earlier this year as the pandemic struck, and since resumed), has so far evaluated 21 of a targeted 40 patients.

Kyriacou shared some early results, noting that “the results were somewhat too good to be true”. A preliminary data analysis from a small group of patients revealed a high correlation between nICP and gold-standard measurements, with an RMS error of 1.98 mmHg. The nICP device showed a sensitivity of 95% and a specificity of 97% at 20 mmHg.

“These preliminary results generated a lot of excitement and enthusiasm. But this is a small group of patients, the robustness and confidence will come when we analyse all 40 patients, we are still recruiting and still analysing data,” Kyriacou concluded, noting that the group is hoping to commercialize the sensor. “Hopefully, we will be able to translate our research and create a tool to be used for patients and improve their quality-of-life.”

Face shields cannot protect wearers from virus particles carried by vortex rings

Vortex rings created when a person sneezes can transport virus particles to the noses of people wearing face shields – according to fluid dynamics simulations done by Fujio Akagi and colleagues at Fukuoka University in Japan. Their research reveals how these complex flows allow air-carried particles to enter the gaps between a shield and its wearer’s face. Their discovery exposes a key weakness in existing protective equipment and could lead to face shield designs that are better at diverting airflows away from wearers.

When fluids (liquids and gases) are fired at high velocity through a circular aperture such as the mouth or a nostril, swirling vortex rings can be created. As well as concentrating and focussing particles emitted in a sneeze or cough, these rings can also gather and transport particles that happen to be in the surrounding air.

Due to the COVID-19 pandemic, people are using protective equipment that aims to prevent this type of virus spread. For many healthcare and service workers, transparent face shields are a popular choice, owing to their increased comfort compared with face masks. Previously, fluid dynamics simulations have been used to study how these shields can reduce airflows generated by their wearers during sneezing. So far, however, there has been little work done on how effective face shields are at protecting wearers from the coughs and sneezes of others.

One metre sneeze

Akagi’s team performed computer simulations of the airflow surrounding a person wearing a popular design of face shield, with the wearer exposed to the sneeze of another person standing 1 m in front of them. The team discovered that high-velocity vortex rings can reach the top and bottom edges of the shield within just 1 s. At this point, the wearer’s breathing can draw particle-carrying air into the narrow space between the shield and their face.

If the timing of the sneeze’s arrival is synchronized with inhalation, the simulations reveal that over 4% of the particles carried by the vortex ring can reach the vicinity of the wearer’s nose – significantly enhancing their risk of infection. The team therefore concluded that, without additional protection such as a face mask, face shields are not highly effective at preventing the spread of COVID-19.

With this weakness exposed, Akagi’s team hope that their results will help guide the development of standards for protective measures against the spread of the virus. In the future, they hope to study how vortex ring flows are altered in different ways by face shields with different shapes. This work could soon lead to optimized shield designs that prevent users from inhaling particles, without the need for any additional protection.

The research is described in Physics of Fluids.

Learning the lessons from the Haiyuan quake 100 years on

On 16 December 1920 at about 7 p.m. local time, a magnitude-7.9 earthquake ripped along 240 km of fault in north-central China, razing two cities to the ground and severely damaging five others. It was one of the most devastating earthquakes ever seen and around 230,000 people died. Today, on the centenary of the Haiyuan quake – known colloquially as “the mountains walked” – researchers at the American Geophysical Union (AGU) Fall meeting are discussing how to better prepare for the next “big one”.

One of the reasons that the quake was so deadly is because it took people by surprise: the last major earthquake on this fault occurred over 1000 years earlier. Long periods of inactivity are a common feature of earthquakes on continental plates and in the intervening years cities are unwittingly built right over the hidden fault. Large swathes of the world, including regions from the European Alps to the Himalayas and much of the US, are prone to continental quakes. Luckily, continental faults tend to be easier to study than oceanic ones. “Unlike off-shore faults, which are very difficult to access, it is possible to set up geophysical sensors along continental faults and dig trenches on the surface to understand their long-term behaviours,” says Zhigang Peng from Georgia Institute of Technology in Atlanta, who has organized the AGU Haiyuan quake session.

In recent decades, sophisticated Earth surveillance techniques – including satellite and drone imagery, LiDAR scans and GPS measurements – have enabled researchers to pick up on early signs of continental fidgeting. For example, Wenqian Yao, from the China Earthquake Administration, and colleagues have used unmanned aerial vehicles to survey the landscape along the Haiyuan fault system. Combined with measurements of slip rate on the ground, these surveys have revealed that strain is building on an offshoot of the Haiyuan fault, known as the Zihong Shan fault. Their findings suggest that the Zihong Shan fault could produce a quake of up to magnitude 7.

Similarly, Yacine Nicolas Benjelloun and colleagues from the Institut de Physique du Globe de Paris have used drone images to map two major continental faults in northwestern Mongolia, known to have produced a magnitude eight quake in 1905. Combining the drone data with the geological history of fault movement has enabled them to estimate the seismic hazard for this region – an approach that they believe could be applied anywhere.

James Neely and Seth Stein from Northwestern University, meanwhile, have used the last 40 years’ worth of quake data to categorize continental earthquakes all over the world. Using this information they have been able to divide continental quakes into four categories and show that normal faults – where the crust is being stretched apart – have a maximum magnitude of seven, while other continental fault geometries can reach magnitude eight. For engineers, this kind of information is vital as designing buildings and infrastructure to withstand a magnitude eight is significantly more complex and expensive than for a magnitude seven (which is 10 times smaller than a magnitude eight).
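For context (this is the standard magnitude scaling, not a figure from the AGU talks): each unit of magnitude corresponds to a tenfold increase in measured ground-motion amplitude but roughly a 32-fold increase in radiated energy,

\[
\frac{A(M{=}8)}{A(M{=}7)} = 10^{\,8-7} = 10,
\qquad
\log_{10} E \propto 1.5\,M
\;\Rightarrow\;
\frac{E(M{=}8)}{E(M{=}7)} = 10^{1.5} \approx 32 .
\]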

Fault lines

Despite all the technology available today, researchers remain humble in the face of continental quakes. Some faults accumulate strain so slowly that they are invisible to modern instruments right up until the moment they move. The 1994 Northridge quake in California was an example of this: no-one had any idea that this suburb of Los Angeles sat atop a major earthquake fault. And even when researchers have identified a fault, they don’t know how much strain will make it pop, or how the fault will move. “We don’t know if an earthquake is going to grow large and rupture several faults, or stay small and confined to a single fault,” says Neely.

As a result, the main focus of research today is to estimate the seismic hazard of a region and make preparations. For example, a report submitted to the German Federal Parliament in November outlines the chance of a major earthquake near the city of Cologne and makes recommendations for mitigating such an event. The findings show that this region should expect a quake with a magnitude of 6.5 every 1000 to 3000 years. If such a quake occurred tomorrow, the researchers estimate that around 10,000 buildings would suffer moderate to severe damage and somewhere between 100 and 1000 people would die.

As the 2008 Sichuan quake in China and the 2010 quake in Haiti demonstrate, we still have a long way to go when it comes to protecting ourselves from continental quakes. But Peng and his colleagues think that significant progress has been made. “In the 1920s, the concept of an active fault within continents was very new,” he says. “Today scientists are making long-term seismic hazard maps in many countries.” Peng adds that buildings are also generally much stronger than one century ago and people are “generally more aware and know how to stay safe during these events”.

Destructive quantum interference improves single-molecule switch

A single-molecule switch that operates via destructive quantum interference has the highest on/off ratio for a device of its kind. The switch, developed by researchers at Columbia University in the US and the University of Glasgow, UK, consists of a molecule six nanometres long (comparable in size to the smallest transistor features on commercial computer chips) and a special central unit. It can carry currents of over 0.1 microamps in its “on” state and could allow for faster, smaller and more energy-efficient transistors.

Transistors are the workhorses of modern electronics, and their size has decreased steadily over the last half century or so, enabling more and more to be packed onto computer chips. This relentless downsizing can’t continue forever, though, and methods to make ever-smaller transistors in silicon are rapidly approaching the material’s size and performance limits. Researchers are therefore exploring new types of switching mechanisms that can be used with different materials.

Nonlinear effects

In the nanoscale structures that Latha Venkataraman and her group study at Columbia, quantum mechanical effects dominate, and electrons behave as waves rather than particles. These waves can interfere either constructively or destructively. When two waves interfere constructively, their amplitudes add and the resulting wave is larger than either wave on its own. In destructive interference, two waves can completely cancel each other out.
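In the simplest two-wave picture (a textbook relation, not specific to this device), two waves of equal amplitude A and phase difference φ combine to give a resultant amplitude

\[
A_{\mathrm{tot}} = 2A\,\bigl|\cos(\varphi/2)\bigr|,
\qquad
A_{\mathrm{tot}} = 2A \ \text{for } \varphi = 0 \ (\text{constructive}),
\qquad
A_{\mathrm{tot}} = 0 \ \text{for } \varphi = \pi \ (\text{destructive}).
\]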

Researchers have predicted that such nonlinear effects should allow single-molecule switches to exhibit large ratios of “on” to “off” currents. However, making transistors out of these molecules is no easy task.

One major challenge is current leakage. In an ideal transistor, current flows only in the “on” state, while in the “off” state it is blocked. While real devices are not so clear-cut, Venkataraman explains that the amount of current flowing in the on- and off-states must nevertheless be very different. Otherwise, the device behaves like a leaky hosepipe in which it is hard to tell whether the valve (that is, the on-off switch) is open or closed.

Strongly suppressing current in the off-state

Most previous designs for molecular transistors produced leaky devices because they used short molecules for which the difference between the on and off states was small.

In their new work, Venkataraman’s team instead used six-nanometre-long fluorene oligomer molecules synthesized by Peter Skabara and his group at Glasgow. “We observed transport across a six-nanometre molecular wire, which is remarkable since transport across such long length scales is rarely observed,” she explains. “In fact, this is the longest molecule we have ever measured in our lab.”

The molecules are also easy to trap between metal contacts, making it possible to create stable, single-molecule circuits that sustain applied voltages of more than 1.5 V. An additional central benzothiadiazole unit enhances destructive interference between different electronic energy levels in the molecules and strongly suppresses current in the devices’ off state, so mitigating current leakage. This electronic structure makes the relationship between (tunnelling) current and applied voltage highly nonlinear, Venkataraman says, and produces a ratio of 10⁴ between the on- and off-state currents.

The researchers report their work in Nature Nanotechnology.

Oxygen and carbon monoxide electrocatalysis for renewable-energy conversion

The design and development of active, stable and selective electrocatalysts for energy conversion reactions is key for the transition towards a sustainable future. Tailoring the structure of the electrochemical interface at the atomic and molecular levels allows us to understand the structure–function relations and enhance the electrocatalytic performance for renewable-energy conversion. In this webinar, Dr María Escudero-Escribano will present some recent strategies aiming to understand and engineer the interfacial structure and properties for oxygen and carbon monoxide/carbon dioxide electrocatalysis.

The first part will focus on the oxygen reduction and evolution reactions (ORR and OER, respectively), whose slow kinetics limit the performance of proton exchange membrane fuel cells and electrolysers. She will present our work on oxygen electrocatalysis, from model studies on well-defined surfaces to the development of self-supported, high-surface-area nanostructured catalysts for ORR and OER (see figure, left).
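For reference (textbook electrochemistry rather than material from the webinar), in acidic media the ORR and OER are the forward and reverse of the four-electron oxygen half-reaction, with a standard equilibrium potential of 1.23 V versus the reversible hydrogen electrode:

\[
\mathrm{ORR:}\ \ \mathrm{O_2} + 4\mathrm{H^+} + 4e^- \longrightarrow 2\,\mathrm{H_2O},
\qquad
\mathrm{OER:}\ \ 2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\mathrm{H^+} + 4e^-,
\qquad
E^{0} = 1.23\ \mathrm{V}.
\]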

In the second part, María will present our recent work on Cu single-crystalline electrodes in contact with different electrolytes, aiming to understand the structure sensitivity for CO₂ and CO reduction (see figure, right). We have studied the effect of pH, anion adsorption, and the potential dependence of interfacial processes for CO reduction. We show how model studies are essential to understand the structure–property relationships and design efficient electrocatalysts for sustainable energy conversion.

Dr María Escudero-Escribano is an Assistant Professor of Chemistry at the University of Copenhagen, Denmark, where she leads the NanoElectrocatalysis and Sustainable Chemistry group. Her research group investigates tailored electrochemical interfaces for renewable-energy conversion and electrosynthesis of green fuels and value-added chemicals. She obtained her PhD from the Autonomous University of Madrid, Spain, in 2011, for her work on electrocatalysis and surface nanostructuring using well-defined electrode surfaces. She carried out her postdoctoral research on oxygen electrocatalysts for fuel cells and water electrolysers at the Technical University of Denmark and Stanford University, US, before moving to the University of Copenhagen in 2017. María has been chair of the Danish Electrochemical Society since 2018. She was awarded a Villum Young Investigator grant from the Villum Foundation in 2018 and is a PI of the Center for High Entropy Alloy Catalysis (CHEAC) from the Danish National Research Foundation. She has received numerous national and international awards in recognition of her early-career achievements, including the European Young Chemist Award (Gold Medal) 2016, the Princess of Girona Scientific Research Award 2018, the Energy Technology Division Supramaniam Srinivasan Young Investigator Award 2018 from The Electrochemical Society, the Spanish Royal Society of Chemistry Young Researchers Award 2019, and the Clara Immerwahr Award 2019 from UniSysCat.

Solar-powered device sterilizes medical equipment

An international team of researchers has developed an innovative system that can sterilize medical tools using solar heat. The device could help to maintain safe, sterile equipment at low cost in remote locations, and could prove particularly valuable in developing regions of the world.

The sterilization of invasive medical equipment is a critical step in mitigating infection risks in healthcare settings. However, although common sterilization procedures using saturated steam in a pressurized chamber – known as an autoclave – are effective and standardized worldwide, they can be a challenge to use in remote areas without reliable energy sources.

In an effort to address this issue, a team of researchers at Massachusetts Institute of Technology (MIT) and the Indian Institute of Technology Bombay have developed a novel way of generating high-temperature, high-pressure steam passively by using a portable solar energy collector. This steam is used to drive a typical small-clinic autoclave. The researchers describe their system in the journal Joule.

As MIT graduate student Lin Zhao – who wrote the paper alongside MIT professors Evelyn Wang and Gang Chen, and colleagues at MIT and IIT Bombay – explains, the researchers started to work on this problem after realising that a transparent silica aerogel material they had originally developed for large-scale concentrating solar-thermal power applications could also enable a “unique solution to medical sterilization with solar energy”.

Insulating aerogel

After some research, the team found that the thermally insulating aerogel is particularly suitable because it allows a passive collector to reach an operating temperature of around 125°C – beyond the 100°C limit of conventional passive solar water heaters, but without the complexity and cost of an active tracking system – which is ideal for solar-powered sterilization applications.

“Because of [the system’s] relatively small-scale, it can also benefit greatly from a simple, modular design and it can have a high impact on people’s lives, especially in the developing regions,” says Zhao.

In a field test performed in Mumbai, the team simulated a clinical sterilization process by using a standard autoclave indicator tape – designed to change colour when sterilization conditions meet minimum exposure requirements – in a test chamber.

“The same tape is widely used in hospitals and clinics worldwide to verify the efficacy of their autoclaves,” Zhao explains. “As we showed in our paper, the tape changed colour after we completed the sterilization cycle in our device, validating the device’s sterilization efficacy.”

Next steps

Zhao points out that the device is likely to be of particular benefit for hospitals and clinics in remote locations, where electricity or fuel-powered autoclaves are too expensive or impossible to operate due to power shortages.

“Additionally, as we saw in India, centralized hospitals in big cities extend their healthcare service by setting up temporary clinics in remote villages,” he says. “Due to the lack of power sources, they have to carry sterilized equipment on every trip to the village, which limits the number of operations they can do. Our solar-powered device can help them sterilize the equipment on the spot, therefore increasing their capacity without adding more items to their packing list.”

Compared to existing solar-powered autoclaves, the new system has a number of advantages – including the fact that it is completely passive with no moving parts and its high energy efficiency, which enables it to provide more steam for a given footprint.

“Most of the components in our system are commercially available in the solar water heater industry. Once the aerogel becomes available, we expect our system can be mass-produced at low cost,” says Zhao.

The next critical step is to demonstrate feasible large-scale manufacturing of the transparent aerogel. As part of this process, AeroShield, a start-up company founded by co-author Elise Strobach, is developing a commercially viable manufacturing plan.

“After the aerogel material is available, we plan to partner with solar collector manufacturers such as AEi to produce our solar-autoclave prototypes and distribute them through local channels such as NGOs,” adds Zhao.

Little book, big science

In the latter half of the 20th century physicists undertook a shrewd move: they began to take the entire universe as their laboratory. It was a clever manoeuvre based on real-estate values alone, but it had other advantages as well. Floor space was essentially unlimited, maintenance fees were negligible, it cost nothing to heat and cool, and no insurance policies were required. But getting through the door, or even watching through a window, was costly. Of course, astronomers and physicists have always used observations of the universe to hone their understanding of the world.

As Lyman Page – the eminent Princeton University cosmologist – recounts in The Little Book of Cosmology, it was the discovery of the cosmic microwave background (CMB) that started the age of cosmology we’re in now – a field that hadn’t much interested astronomers until then.

“We cosmologists,” writes Page, who studies temperature variations in the CMB, near the end of this clearly written, delectable book, “feel fortunate to have been alive in the decades when the explosion of knowledge about the universe took place.” He has made good use of these feelings to produce this enthusiastic and approachable survey of the state of cosmology today. This is hardly a surprise, as Page is an expert in observational cosmology, being one of the original co-investigators of the Wilkinson Microwave Anisotropy Probe (WMAP) project, and has won numerous awards including the 2018 Breakthrough Prize in Fundamental Physics, the 2015 Gruber Prize and the 2010 Shaw Prize.

In The Little Book of Cosmology, Page uncovers the discoveries that have led to some of the most interesting and astonishing phenomena in all of the sciences. The book touches upon the cosmological constant of empty space; the accelerating expansion of the universe; the fact that ordinary matter makes up only 1/20th of the energy density of the universe; the mysteries of dark matter and dark energy; the slight (about one part in 100,000) but exceedingly informative temperature variations of the CMB as seen across the entire sky. In truth, at 185 pages this book is more than a survey, but still something you can read without a pen in your hand and a tablet before you. The book includes forays into more advanced topics as well, though not to the same degree. Page introduces inflationary models of the very early universe, how the gravitational landscape of the early universe produces anisotropy of the CMB, gravitational lensing of the CMB, and quantum fluctuations as a basis for cosmic structure.

A heartening feature of the book is Page’s repeated emphasis on precise measurement as the best way to test ideas, theories and models. He gives examples of where this has already been done, perhaps most spectacularly with the determination of the power spectrum of the CMB as measured by the WMAP and Planck satellites. That a relatively simple cosmological model, with just six free parameters (beyond those of the Standard Model of elementary particle physics), explains this non-trivial, shapely curve is a true masterpiece of scientific achievement.

The big bang

The book ends with a chapter on the frontiers of cosmology, what remains to be known, and what might be known soon. Sensitive upcoming galactic surveys will see the effects of neutrinos on cosmic structure, and perhaps give more precious information about their masses, for which only limits exist today. Primordial gravitational waves may reveal secrets about the infinitesimal era of quantum gravity. Galactic and CMB surveys could reveal small deviations in the expansion rate versus time, and hence changes from predictions of an unchanging cosmological constant. Meanwhile, instruments are being designed to look for CMB radiation departures from a blackbody spectrum.

In many ways The Little Book of Cosmology reminds me, in mission and style, of Steven Weinberg’s 1977 book The First Three Minutes, which sits on my desk as I write, its matured pages smelling a bit like soap, its lower right-hand corner having been nibbled on by a mouse. Weinberg includes more numbers, but no plots. Page’s book has an exceptionally clear and direct style of writing. There are no equations in this book except for the famous little one from that Einstein fellow (which is more for show); there are some graphs, and only a few spots require multiplication. It avoids scientific notation until an appendix, to my chagrin as I struggled with the written number “one ten-millionth”. Surely anyone who can understand the bulk of this book can understand scientific notation from the get-go. Also, I was somewhat surprised to find Alice and Bob gallivanting across the universe, explaining inflation and other concepts – indeed, it was a bit disorienting.

I don’t think the revelations and the remaining mysteries presented in this book have yet penetrated much of the public, which is a lowdown shame. After all, how much we have learned about the cosmos over the last 20–30 years, how precise the measurements have become, how tightly constrained the most successful model is. And yet we still don’t know what 95% of the universe is made of. There’s something untoward about saying space is being created everywhere, and that there are portions of the universe constantly fading from our view. Still, those costly windows into the ultimate laboratory are providing ever clearer views. It is no place for the timorous.

I think this book can open the eyes of many motivated readers – smart high school students, university students regardless of their course of study but particularly those assigned it in an introductory astronomy class, members of astronomy clubs, physics students looking for a quick introduction to the field, and certainly anyone who reads this or any other science magazine. It’s got to be the best, most up-to-date, “little” introduction to cosmology they’re going to find.

  • 2020 Princeton University Press 152pp £16.99hb

Fundamental constant measured at highest precision yet

The most precise measurement ever of the fine-structure constant has placed new constraints on theories that predict the existence of “dark sector” particles. The new value, which researchers in France measured using clouds of cold rubidium atoms, provides a stringent test of the Standard Model of particle physics while also further limiting the properties of dark matter – the substance thought to make up about 85% of the matter in our universe.

The fine-structure constant α is a composite of several physical quantities (including e, the charge on an electron, and c, the speed of light) that, together, characterize the strength of the electromagnetic interaction. This makes α ubiquitous throughout the universe. Because it is a dimensionless number, it is in some sense more fundamental than physical constants such as the strength of gravity or Planck’s constant ħ, which change depending on the units in which they are measured.
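In SI units the constant is the familiar dimensionless combination (a standard textbook expression, not spelled out in the article):

\[
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}.
\]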

Electromagnetic interaction is weak

The relatively low value of α – it is approximately equal to 1/137 – implies that the electromagnetic interaction is weak. The main consequence of this is that electrons orbit at some distance from their nuclei, so they are free to form chemical bonds and build up molecules. This property made it possible for matter and energy to form stars and planets. Indeed, some physicists have argued that we owe our very existence to the exact value of α, because if it were slightly bigger or smaller, stars might not have been able to synthesize heavier elements like carbon, and life as we know it wouldn’t exist.

Precise measurements of α make it possible to rigorously test relationships between elementary particles. These relationships are described by the equations that make up the Standard Model of particle physics, and any discrepancy between the model’s predictions and experimental observations may provide evidence of new physics.

Determining the recoil velocity of atoms

Measurements of α generally begin by determining how strongly atoms recoil when they absorb photons. The kinetic energy of this recoil (or, equivalently, the recoil velocity) reveals how massive the atoms are. Next, the electron’s mass is calculated using the precisely known ratio of the atom’s mass to that of an electron. Finally, α is calculated from the electron’s mass and the binding energy of a hydrogen atom, the value of which is likewise well known from spectroscopy measurements.
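Schematically, the chain of relations behind such recoil measurements runs as follows (standard atomic physics, summarized here rather than quoted from the paper): the Rydberg constant links α to the electron mass, and the recoil experiment supplies h/m for the atom,

\[
R_\infty = \frac{\alpha^{2} m_e c}{2h}
\quad\Longrightarrow\quad
\alpha^{2} = \frac{2R_\infty}{c}\,\frac{m_{\mathrm{Rb}}}{m_e}\,\frac{h}{m_{\mathrm{Rb}}},
\]

where h/m_Rb comes from the measured recoil, the mass ratio m_Rb/m_e is known from precision mass and spectroscopy data, and R_∞ is known from hydrogen spectroscopy.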

In the new work, researchers led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris cooled atoms of rubidium to a few degrees above absolute zero in a vacuum chamber. They then created a quantum superposition of two states of the atoms using laser pulses. The first state corresponds to atoms that recoil when they absorb photons and the second state to atoms that do not recoil.

The two possible versions of each atom propagate through the experimental chamber along different paths. The researchers then applied a second set of laser pulses to “re-join” the two halves of the superposition.

The more an atom recoils after absorbing photons, the more out of phase it will be with the version of itself that does not recoil. By measuring this difference, Guellati-Khélifa and colleagues extracted the mass of the atoms, which they then used to determine the fine-structure constant. Their result shows that α has a value of 1/137.035999206(11) – a measurement that, with an accuracy of 81 parts per trillion, is 2.5 times more exact than the previous milestone, which was made in 2018 by Holger Müller and colleagues at the University of California at Berkeley, US.

Improved experimental setup

The Paris researchers say that improvements to their experimental setup were key to the new result. By controlling effects that can create perturbations in the measurement, they were able to reduce sources of inaccuracies. For example, the researchers considered, and compensated for, the gradient in the local strength of gravity across their experimental set-up and the Coriolis acceleration created as the Earth rotates. They also meticulously characterized properties such as laser beam alignment, frequency, wave-front curvature and the second-order Zeeman effect, which account for many errors in such experiments.

The new measurement, which is reported in Nature, differs from the value obtained in the 2018 Berkeley experiment in its seventh digit. This result surprised the Paris researchers, since it implies that one or both of the measurements have an as-yet unaccounted-for error. However, the two groups’ measurements do agree closely with the value of α calculated from precise measurements of the electron’s so-called g-factor, which relates to its magnetic moment. In a related News and Views article, Müller notes that the Paris result “confirms that the electron has no substructure and is truly an elementary particle”.

The Paris researchers say they now plan to back up their results by measuring the recoil velocity of a different rubidium isotope. “We also plan to build an even more precise instrument,” Guellati-Khélifa tells Physics World.
