As the leader of a materials science and engineering laboratory, my office – situated in the middle of a sea of experimental spaces, white boards, and group member desks – was rarely quiet before the pandemic. The murmurs of derivations from the white boards and the swinging of the doors to our optics and synthetic chemistry labs provided a constant background hum. And although I am trained as a physicist, the members of my group come from numerous scientific and engineering backgrounds, creating a rich, dynamic research environment.
During the past year, these noises and interactions have vaporized. As I work from the isolation of home while my group members perform research (at significantly reduced capacity) in the lab, our entire research environment and lab culture have been transformed. New synthetic pathways no longer decorate the wall outside my office door, and students no longer rush into my office to show me their latest laser threshold data. The daily coffee breaks and walks have also stopped. More recently, bigger celebratory milestones, such as graduation and holiday parties, have been postponed. And while this stark, sudden shift had a huge and immediate impact on productivity, it has also led to a slow, continuing erosion of mental health.
Seeking practical solutions
Universities are trying a wide range of strategies to address this erosion, some of which are more successful than others. Loneliness is already a challenge for science and engineering students for many reasons, including struggles with imposter syndrome, issues with advisor(s) and being unable to travel home due to visa restrictions. COVID-19 is amplifying these issues, making it more important than ever to try to find practical solutions.
In my case, I took inspiration from the healthcare sector and decided to bring in a mental health therapist to lead a weekly discussion group for my lab. There were several practical challenges associated with this decision, including finding someone who is skilled in and willing to work with the STEM/education sector (and PhD students specifically), paying that person and getting student buy-in.
To overcome the first hurdle, I reached out to my personal network as well as a range of potential contacts on social media (yes, I said social media). Through them, I found a person who had previously run similar groups. As for payment, I paid “out of pocket”, meaning I paid them directly. As a principal investigator, I value my students’ health and well-being, and I feel it is my responsibility to ensure their success. To get student buy-in, I dedicated workday time to the discussion group to emphasize that I believed it was important. Although attending the group was completely optional, I explained why I felt group sessions and the group format were a good approach and why I valued therapy, as part of an attempt to normalize it.
Specialized support
We are now finishing our second month of these sessions, and when I asked my group if they wanted to continue, the answer was a resounding “yes” for several reasons. Although the university does offer similar groups, they are typically attended by students from many disciplines. As a result, the therapists are not trained in engineering-specific challenges and the discussants are not all engineers, so the students can end up feeling even more isolated than they already were.
While I do not attend our group’s sessions, I know that my group members discuss a wide range of topics, both pandemic-related and otherwise. Given the pressure that many of them are feeling, it is very important to provide them a place to express their concerns related to screening/qualifying exams, research and academic progress, housing and finances, and family. And this environment needs to be with their colleagues who can empathize with and support them, as well as with a therapist who can lead a constructive discussion. I also encouraged my students to use the therapist to alert me to any concerns they have with the university or with me that they do not feel comfortable bringing to my attention directly. In other words, the therapist can act as their advocate. This advocacy role (as well as my lack of experience in psychology) is one reason why I decided not to lead these groups myself.
Creating this resource for my group has presented many challenges, but I am hopeful that the school of engineering will continue the initiative post-pandemic. To ensure our students’ academic and research success, it is critical to care for both their mental and physical health. This requires engaging the students to provide them with the resources they need, not the resources we think they need.
In February 2020, some 20,000 scientists, engineers and business insiders filled the cavernous halls of the Moscone Center in San Francisco for Photonics West, one of the last major scientific conferences to be convened before the pandemic took hold. This year the halls will be empty, but the organizers have had the best part of a year to reimagine the event as a digital meeting place for discussing the latest advances and innovations in lasers, photonics and biomedical optics.
Using several different technology platforms, including the ubiquitous Zoom, the Slack messaging service, and the event’s dedicated app, the Photonics West Digital Forum is designed to provide attendees with the same content and networking opportunities as the live event. The popular plenary and Hot Topic sessions will be live-streamed and available to watch on demand, while attendees will be able to access thousands of technical presentations in four major conference tracks. New for 2021 is Quantum West, a four-day event that will explore the role that photonics plays as quantum technology moves from R&D to engineering products for the commercial marketplace.
Alongside the technical programme, delegates will be able to take part in live networking sessions and social meet-ups. Industry events including presentations and panel discussions will be hosted online, as will the annual Start-Up Challenge and SPIE’s Prism Awards. Meanwhile, a digital marketplace will enable attendees to explore the most innovative technology solutions from suppliers from all over the world, watch product demonstrations, and engage directly with company representatives. Some of the products being featured in the marketplace are highlighted below; to find out more visit the companies’ virtual booths.
Upgrade boosts SuperK performance and reliability
The SuperK FIANIUM series of supercontinuum lasers from NKT Photonics offers the highest efficiency of any on the market, as well as the highest power in the visible range. Newly upgraded electronics and improved fibre technology have boosted performance and reliability still further, while also making the lasers even easier to use.
The industry-leading SuperK FIANIUM series of supercontinuum lasers now feature upgraded electronics and improved fibre technology. (Courtesy: NKT Photonics)
SuperK lasers deliver high-brightness diffraction-limited light over the entire 390–2400 nm range, providing single-mode, broadband collimation at high power. The modular design achieves power levels of up to 2 W at visible wavelengths and a total output power of up to 6.5 W, while the high efficiency improves reliability and reduces any unwanted residual pump power at the output. Adding a filter also converts the SuperK into an ultra-tunable laser.
A pulse picker option makes it possible to change the repetition rate of the laser during operation. A range of 0.15–78 MHz is available as standard, with custom options available on request, providing maximum flexibility for lifetime applications such as fluorescence lifetime imaging.
The lasers are based on a monolithic fibre architecture, which ensures excellent reliability and a lifetime extending to thousands of hours. The design eliminates the need for regular maintenance, and also enables alignment-free operation. No laser expertise is needed to generate a high-quality beam, and functions can be changed on-the-fly from a computer interface with no need to power down the system. In standby mode, the laser remembers the latest power or current setting and returns to the same level when the emission is re-activated.
Sources deliver polarization-entangled photons for quantum research
OZ Optics, a leading supplier of fibre-optic products for telecommunications and industrial and medical applications, has expanded its product line for quantum photonics. The company now offers two sources of polarization-entangled photons for applications in quantum sensing, quantum communication, and quantum computing, and with both source types, OZ can now produce high-quality polarization entanglement throughout the near-infrared and short-wave infrared frequency bands.
Polarization-entangled photon sources from OZ Optics are designed for applications in quantum sensing, quantum communication, and quantum computing. (Courtesy: OZ Optics)
The first source, the EPS-1000, is an all-fibre generator of broadband polarization-entangled photon pairs, produced at telecom wavelengths with more than 80 nm of bandwidth. Based on periodically-poled silica fibre (PPSF) technology, it features turn-key, room-temperature operation, while the all-fibre design makes it environmentally stable for challenging applications such as space-based instruments.
The latest addition is the EPG-1000 series of crystal-based polarization entangled photon sources, which have been designed to meet the diverse phase-matching needs of the quantum R&D community. This source generates polarization-entangled photon pairs through the well-established process of spontaneous parametric down conversion (SPDC), and it features a compact interferometer that supports several phase-matching techniques. With additional supporting equipment, the EPG can either be used to produce photon pairs or to create polarization entangled pairs with a fidelity of more than 95%.
“These two types of sources diversify our quantum photonic capabilities and positions OZ Optics to become a global market leader in the quantum light source space,” commented OZ Optics CEO Ömür Sezerman.
For detailed specifications and additional information about these and other products, visit www.ozoptics.com.
Virtual reality enables photonics learning
The Immersive Photonics Lab from ALPhANOV is an innovative training tool that engages trainees in a virtual reality photonics lab. The immersive learning environment helps participants to master the professional and technical know-how they need for their role, whether they are new to the industry or learning to use a new piece of equipment. By helping to disseminate training programmes, it enables companies to tackle the shortage of skilled labour in the photonics industry.
The Immersive Photonics Lab from ALPhANOV exploits virtual reality to teach photonics skills and procedures. (Courtesy: ALPhANOV)
The virtual reality application, which won an SPIE Prism Award, faithfully reproduces physical phenomena and enables fast and effective development of procedural skills, whether in industrial or educational settings. The tool emulates all the equipment needed to train professionals and students, without risk of injury or damaging valuable optical components.
The Immersive Photonics Lab training application can be used anytime, anywhere, which makes it ideal for remote learning and customer training. The tool gives participants easy access to the latest generation of photonics technologies, while also reducing the need for equipment downtime to support technical training.
ALPhANOV works with each customer to develop a tailor-made training programme as well as a dedicated technical environment within the Immersive Photonics Lab.
TOPTICA rises to new laser challenges
TOPTICA Photonics will be showcasing its wide range of cutting-edge laser systems for demanding scientific and industrial applications in biophotonics, industrial metrology and quantum technology. The company prides itself on providing lasers covering the widest wavelength range on the market – going from 190 nm in the ultraviolet to 3 mm at 0.1 THz – with high power outputs even at exotic wavelengths.
The CTL laser is a widely tunable laser that offers mode-hop free tuning up to 120 nm. (Courtesy: TOPTICA Photonics)
The company will be hosting exclusive educational sessions throughout Photonics West via Zoom Webinar. These will cover leading product innovations, such as the CTL continuously tunable laser with mode-hop-free tuning up to 120 nm, terahertz systems for both time-domain and frequency-domain techniques, and the TOPO laser system for mid-infrared spectroscopy and applications. Also featured in the webinar line-up is the iChrome FLE, designed to be a “flexible laser engine” for diverse applications in biophotonics, and highly sensitive linewidth analysers for controlling both ultra-narrow and broadband lasers.
Representatives from TOPTICA Photonics will also be taking part in the inaugural Quantum West conference. CTO Wilhelm Kaenders will give a presentation on the use of lasers in quantum applications on Wednesday 10 March, while the next day a panel discussion on photonics technologies for the emerging quantum market will feature TOPTICA’s Mark Tolbert.
To learn more about TOPTICA and its product range, visit the company’s virtual booth in the Digital Marketplace.
Phase-only SLM targets small-scale solutions
HOLOEYE has released a new series of compact phase-only spatial light modulators (SLMs) that are designed to be integrated into small-sized or even portable solutions. The LUNA SLM features a liquid-crystal-on-silicon (LCOS) microdisplay with an active area diagonal of 0.39 inches, which offers full high-definition resolution of 1920 x 1080 pixels and a 4.5 µm pixel pitch.
The LUNA phase-only spatial light modulator from HOLOEYE is compact enough to be integrated into portable solutions. (Courtesy: HOLOEYE)
The SLM provides linear 8-bit phase levels and offers fast digital addressing via the DisplayPort video interface, which provides an input frame rate of 60 Hz. The display can even accept video data via the high-speed MIPI digital serial interface, a novel approach that offers new possibilities for industrial implementations of phase-only SLMs.
The driver ASIC is embedded in the LCOS microdisplay itself, saving board space and allowing easier integration. The standard driver box measures just 85 x 47 x 28 mm, and also features a USB connector for power and advanced configurations.
HOLOEYE currently offers two versions of the LUNA phase-only SLM: one operating at visible wavelengths, which offers a phase shift of 2π over the 420–650 nm range, and one for the telecommunication waveband operating at 1400–1700 nm.
During this year’s virtual Photonics West exhibition, PicoQuant will unveil its latest multichannel event timer. The MultiHarp 160 is a plug-and-play time tagger and time-correlated single-photon counting (TCSPC) unit that is optimized for applications requiring up to 64 timing channels. It offers an outstanding time resolution of 5 ps and an ultrashort dead time of less than 650 ps.
The MultiHarp 160 from PicoQuant offers time tagging and correlated single-photon counting for up to 64 synchronized channels. (Courtesy: PicoQuant)
“We developed the MultiHarp 160 to meet the challenges of future TCSPC applications using many channels,” commented Rainer Erdmann, managing director of PicoQuant. “Our device provides internal synchronization of all inputs without need for additional hardware or software tools, and extraordinary throughput can be achieved thanks to a special FPGA link.”
The MultiHarp 160’s number of synchronized input channels can be scaled from 16 up to 64. A common synchronization channel, supporting sync rates of up to 1.2 GHz for periodic signals, is available as timing reference for all channels. Time tags from all input channels are combined into a single data stream that is accessible via a USB 3.0 interface. The data stream is also accessible to external FPGA boards via a dedicated interface, enabling great flexibility in tailoring the way that data is preprocessed for the specific needs of an application.
For an opportunity to preview the capabilities of the MultiHarp 160, a free webinar will be hosted by Dr Torsten Langer, sales and application specialist at PicoQuant, on 1 March 2021. Registration is possible on PicoQuant’s webinar website.
You can also find out more about the MultiHarp 160, as well as PicoQuant’s full range of pulsed diode lasers and instrumentation for time-resolved data acquisition, single-photon counting, and fluorescence imaging at the BiOS and Photonics West digital marketplaces.
Real-time laser simulation offers new capabilities
BeamXpert, which introduced the simulation software BeamXpertDESIGNER at last year’s Photonics West, has now upgraded the application to allow the export of optical setups to mechanical CAD software, improve its usability, and provide an optimized and enlarged component database.
BeamXpertDESIGNER is now being used by TRUMPF to develop laser systems for EUV lithography. (Source: TRUMPF Group)
The software enables the rapid development of optical systems for the propagation of laser radiation. By using two consecutive approaches, it is also possible to rapidly evaluate aberrations and their influence on M². All output results are based on the relevant ISO standards.
Customers of the software use it, among other things, for developing laser systems for ophthalmology and satellite communications, and for the design of high-power lasers and optics for solar-cell research and manufacturing.
One notable example is TRUMPF, which uses BeamXpertDESIGNER for developing high-power CO2 lasers that are used to generate extreme-ultraviolet radiation for lithography equipment. This application requires the simulation of optical systems with hundreds of components and a propagation length of more than a kilometre.
Researchers at the University of Bristol have combined low-cost 3D printing with soft lithography to streamline production of complex microfluidic devices. The technique represents an important step towards universally accessible lab-on-a-chip diagnostic technology, particularly in settings where healthcare resources are scarce.
The success of the technique lies in its user-friendliness. Writing in PLoS ONE, the researchers describe their workflow with the non-expert user in mind. Low-cost hardware and open-source software, for example, ensure that the fabrication process is suitable for both research and teaching environments. What’s more, the modular design of microfluidic channels means that clinicians and educators alike can simply click-and-connect multiple channels to create a myriad of microfluidic systems.
“It is our hope that this [technique] will democratize microfluidics and lab-on-a-chip technology, help to advance the development of point-of-care diagnostics, and inspire the next generation of researchers and clinicians in the field,” says study author Robert Hughes.
Simplicity, without the high price tag
Lab-on-a-chip devices enable real-time detection of infectious diseases like tuberculosis and malaria. Optimizing microfluidic systems, however, requires advanced (and expensive) fabrication processes. This limits their uptake in low- and middle-income countries, where rapid diagnosis of transmissible diseases would have the greatest impact.
The team at Bristol aims to lower the barrier to entry for microfluidic research – which means minimizing the time and money spent prototyping new devices. Their technique uses simple household equipment and a commercially available material extrusion printer to make low-cost microfluidic master moulds. These moulds are then used to produce microfluidic chips made of polydimethylsiloxane (PDMS), a low-cost elastomeric polymer, in a process called soft lithography. When the master is embedded in PDMS, it leaves behind an imprint that forms the microchannels.
A dye-mixing microfluidic chip created using interconnected Y-junction and fluid resistor modules. (Courtesy: CC BY 4.0/PLoS ONE 10.1371/journal.pone.0245206)
The channels are as narrow as 100 µm (about the width of a human hair). Users can choose from a selection of five microchannel designs (modules), each equipped with ball-and-socket connectors. Like puzzle pieces, the modules are clicked together through their respective connectors to create scaffolds. The assembled scaffolds are then thermally bonded to a glass substrate to form the mould.
The entire fabrication process is highly suited to low-resource settings. Users can either print their own scaffolds, using the open-source CAD plug-in provided by the researchers, or request a mix-and-match library of microchannels through a 3D printing facility. Additionally, the thermal-bonding step produces well-defined channels that allow the master to be used repeatedly, which is anticipated to reduce the likelihood of channel fouling.
The future of 3D-printed microfluidics?
To assess the reliability of their technique, the researchers evaluated the standard print quality of all five microchannel designs.
They printed 100- and 350-µm channels using 0.1- and 0.4-mm nozzles, respectively. Approximately two-thirds of channels printed with the 0.1 mm nozzle were deemed usable, while the 0.4 mm nozzle yielded a higher success rate (96%). Nevertheless, the larger volume of material extruded per print meant that the 350 µm channels were more expensive to manufacture – despite the better success rate. It costs just $0.50 to print a 5000-piece library of functional 100 µm scaffolds.
While other 3D printing techniques can achieve higher resolution, the researchers note that the overall balance of cost-effectiveness, reliability and simplicity is what sets material extrusion printing apart from the pack.
“This technique is so simple, quick and cheap that devices can be fabricated using only everyday domestic or educational appliances,” says study author Harry Felton. Users can produce PDMS devices from the master mould using only a heat source. Furthermore, the smooth surface of the resulting devices can be applied directly to any clean glass surface, such as a mobile phone screen, without requiring expensive plasma activation.
The team from the University of Bristol (from left to right): Harry Felton, Robert Hughes and Andrea Diaz-Gaxiola. (Credit: Robert Hughes)
The team is now looking to promote the technology, and its potential for lab-on-a-chip diagnostics, both in the laboratory and in the classroom.
What have been the biggest changes to scientific publishing over the last decade during your time at IOP Publishing (IOPP)?
There have been two, and they are related: the growth of open access and the rise of pre-print servers. They are about making scientific research, in early form and in peer-reviewed form, accessible to as many people as possible who might benefit from it.
Did you expect all those changes to occur – or did any of them take you by surprise?
Looking back to 2010, open access was still far more an idea than a fact: in that year only about 2% of all journal articles were published on an open-access basis. But there was a growing demand for it from some countries and research funders, and it was very clear to me that we would need to satisfy that demand, where it existed, while maintaining other publishing models where it didn’t. There was already a preprint server in physics in arXiv, of course, but I didn’t foresee the rapid growth outside physics.
What challenges have learned-society publishers, like IOPP, faced during that time – and how well have they coped with them?
Learned-society publishers range from very large organizations like the American Chemical Society to very small publishers with a single small journal, and from the global to the very local, so I can’t speak for all of them. For many, however, it’s a question of scale. To cope with the challenge of supporting multiple business models and growing demands for digital services, we need to make the same kinds of investments in our technology, our processes and our staff as our much larger commercial rivals, but from a much smaller financial base. IOPP is fortunate in being larger than most learned-society publishers but even we can’t do everything we’d like to do as quickly as we’d wish. We compensate for that by focusing on the investments that will most benefit our customers and doing them well.
When it comes to open access, we’ve seen Wiley buy Hindawi this year for almost $300m; do you see further mergers or acquisitions of scientific publishers on the horizon?
Large publishers want to have as big a share of open-access publishing as they have had of subscription publishing and if they can do that quickly through acquisitions, they will. We will inevitably see more such acquisitions, though perhaps not all for such a price. We will also see further acquisitions by them of services to support researchers’ workflows.
What other big developments in open access do we need to watch out for?
Open access has been growing steadily for the last 10 years, but still not quickly enough for some funders. This year sees the introduction of the “rights retention strategy” by the group of funders known as Coalition S and it is also likely to be an element of the new UK Research and Innovation (UKRI) open-access policy due to be announced in April; UKRI is itself a member of Coalition S.
“Rights retention” essentially means that funders will require the accepted manuscript version of an article – that is, after peer review and with the journal’s name attached, but before copyediting and typesetting – to be made freely available immediately on publication under a licence that allows anyone to reuse it for any purpose.
This has two downsides. It undermines the subscription publishing model, as it places no value on a publisher’s development of journals and management of peer review, which in IOPP’s case represent around half of our costs. It also undermines the “gold” open-access model as it removes any incentive for an author to pay for gold open-access publication. It’s a disastrous policy that will cause enormous problems for authors as many journals will not accept submissions under these terms.
Last year, 33% of all articles appearing in journals that IOPP manages editorially were published on a gold open-access basis. We have made great strides forward in the last 10 years, especially when one considers that many parts of the world, including some of those with the largest research outputs, don’t support open-access publication. This policy risks damaging that growth, rather than enhancing it.
How can learned-society publishers thrive and stay relevant to the communities they serve?
They need to make their services to their communities more relevant and valuable than those offered by larger multidisciplinary publishers. A great example of that at IOPP is the launch last year of our programme of peer-review training and certification designed specifically for the physical sciences. It will be further expanded in 2021. As for managing the balance between open-access and subscription journals, we do that to some extent through our “transformative agreements”, which combine access to subscription journals with the ability for an institution’s researchers to publish in our journals on an open-access basis at no cost to them. We now have many of these agreements in place across Europe, including with close to 60 UK universities.
What long-term impact do you think the pandemic will have on scientists and scientific publishing?
It’s probably a little early to say, and it’s quite possible that it will have different impacts in different parts of the world, depending on how well countries have responded to the pandemic. It’s certainly shown the value of investment in scientific research, so we might hope that governments will heed that lesson. It’s also shown the continuing importance of peer review. The management of peer review is arguably the single biggest contribution of publishers to scholarly communications and it’s often ignored just how well we do it at scale; and Coalition S ignores that it has substantial costs.
What about Brexit: will it affect IOP Publishing?
Brexit will have little direct impact on IOPP as our business is now largely digital and there are no tariffs on digital publications. It will give us some extra costs in managing VAT but we will absorb those. Its greatest impact is more likely to be indirect, in how it affects funding for UK research and the ability of UK researchers to collaborate with their European peers.
What was your biggest achievement in your role as managing director of IOP Publishing?
Any leader of an organization should aim to leave it in a better state than he or she found it. I’m leaving IOPP in a very healthy state, with a very strong leadership team and an engaged and committed staff, with the right publishing strategies in place. I believe it’s as well positioned as it could be to meet the challenges of the next 10 years.
Any regrets or things you might have done differently?
This is my chance to say “Regrets, I’ve had a few, but then again too few to mention”; Physics World readers will at least be spared hearing me sing it. Of course there are things I would have done differently – hindsight’s a wonderful thing – but generally I think we have made the right decisions over the last 10 years.
What are your plans after leaving IOPP – do you expect to stay involved in the publishing industry to any extent?
After more than 40 years in academic publishing, which I have enjoyed enormously, I can’t just give it all up. I am a non-executive director on the board of Bloomsbury Publishing, publisher of Harry Potter (and I like to think of his invisibility cloak as a tenuous link to physics) and with a significant academic publishing division. Besides this, I will look for other opportunities to share what I have learned, and to help avoid the mistakes I have made, at a non-executive level.
Any words of advice for your successor Antonia Seymour and her staff at IOPP?
I’m stepping down partly because after nearly 11 years I believe it’s time for some fresh thinking. So I will forbear offering advice unless she requests it, in which case I’ll be very pleased to give it.
The cosmological constant has been a thorn in the side of physicists for decades. Even though its purpose in modern cosmology differs from its original role, the constant – commonly represented by Λ – still presents a challenge for models designed to explain the expansion of the universe.
Simply put, Λ describes the energy density of empty space. One of the main issues stems from the fact that Λ’s theoretical value, obtained through quantum field theory (QFT), is nowhere near the value obtained from the study of type Ia supernovae and the cosmic microwave background radiation (CMB) – in fact, the two differ by a factor of as much as 10¹²¹. It is little wonder, therefore, that cosmologists are eager to tackle this disparity.
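As a rough back-of-the-envelope illustration of that mismatch (the exact exponent depends on the energy cutoff and conventions chosen, which is why figures between roughly 10¹²⁰ and 10¹²³ are quoted), one can compare the measured dark-energy density with the naive QFT estimate obtained by summing zero-point energies up to the Planck scale:

\[
\rho_\Lambda^{\mathrm{obs}} \sim 10^{-47}\,\mathrm{GeV}^4,
\qquad
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim M_{\mathrm{Pl}}^4 \sim 10^{76}\,\mathrm{GeV}^4,
\qquad
\frac{\rho_{\mathrm{vac}}^{\mathrm{QFT}}}{\rho_\Lambda^{\mathrm{obs}}} \sim 10^{120}\text{--}10^{123}.
\]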
More than meets the eye The Dark Energy Survey uses the Victor M Blanco telescope in Chile to investigate the large-scale structure of the universe. (Courtesy: DES Collaboration)
“The cosmological constant problem, in one form or another, is a century-old puzzle. It is one of the biggest problems in modern physics,” says theoretical physicist Lucas Lombriser from the University of Geneva (UNIGE), Switzerland. “Moreover, the cosmological constant is the most dominant component in our universe. It makes up 70% of the current energy budget. How could one not want to figure out what it really is?”
Indeed, with a new generation of cosmologists now on the scene, there are some rather radical ideas and revisions of older theories. But can the field accept these revolutionary ideas, or has Λ become a comfortably familiar burden?
Still crazy after all these years
The cosmological constant was first introduced to models of the universe by Albert Einstein in 1917. To the physicist’s own surprise, his general theory of relativity (GR) seemed to suggest that the universe is contracting, thanks to the effects of gravity. The consensus at the time was that the universe is static and, despite having already revolutionized several long-held ideas, Einstein was unwilling to challenge this particular paradigm. This desire to preserve the stability of the universe led Einstein to make an addition to GR’s equations. Later, he would infamously describe this as his “biggest blunder”.
“When Einstein was applying GR to cosmology, he realized he could add a constant to his equations and they would still be valid,” explains Peter Garnavich, a cosmologist at the University of Notre Dame in Indiana, US. “This ‘cosmological constant’ could be viewed in two equivalent ways: as a curvature of space–time that was just a natural aspect of the universe; or as a fixed energy density throughout the universe.”
Thus, the initial role of Λ was to counterbalance the effects of gravity and help ensure a steady-state universe that is neither expanding nor contracting. This role, however, became obsolete following Edwin Hubble’s discovery in 1929 that the universe is expanding. When Einstein was finally convinced of this, Λ was consigned to the cosmic dustbin. Yet, like the proverbial bad penny, it would resurface in a different form decades later.
Whereas once the cosmological constant was used to balance the universe against expansion, in modern cosmology Λ represents vacuum energy – the inherent energy density of empty space – which no longer just balances gravity, but overwhelms it. That doesn’t mean Λ has become any less problematic, though. “In 1998 the High-Z Supernova Search team discovered that the expansion rate was accelerating instead of decelerating,” says Garnavich, who took part in the research using type Ia supernovae to study the expansion of the universe. Such acceleration requires some form of additional energy throughout the universe, or some more exotic explanation. This driving force is referred to as “dark energy”, and the term itself has become a placeholder for the various theoretical entities that could account for the accelerating expansion. Suspects range from vacuum energy, currently the most favoured model, to quantum fields and even fields of time-travelling tachyons – hypothetical particles that travel at faster-than-light speeds.
The cosmological constant serves as the simplest possible explanation for the dark energy that drives this accelerating expansion, and its theoretical value should therefore match observations. Unfortunately, as mentioned, the former is greater than the latter by some 120 orders of magnitude. Clearly, Λ’s reputation as “the worst prediction in the history of physics” isn’t mere hyperbole.
The radical element of the team’s proposal is the idea that cosmological models might not need the cosmological constant at all. Of course, there is still that accelerating expansion to consider, so to account for this, García looks to other sources. “When I first approached this field, I came across the inconsistency with the values predicted from both cosmology and high-energy physics, and tried to formulate an alternative model to Λ by studying possible candidates to explain the accelerated expansion of the universe,” she says.
Λ, as it is currently considered, only accounts for the universe’s expansion once matter began to form structure – an era that lasted from 47,000 years to 9.8 billion years after the Big Bang. García wanted to consider an “early dark energy” (EDE) that began to play a role in the earlier, “radiation-dominated” epoch, from the earliest moments of “cosmic inflation”. Inflation – the sudden and very rapid expansion of the early universe – is thought to have taken place some 10⁻³⁶ seconds after the Big Bang, but this rapid expansion is believed to have been driven by quantum fluctuations, not dark energy. Eventually, the attractive force of gravity slowed this expansion, until about 9.8 billion years into the universe’s history, when dark energy began accelerating its expansion once again (figure 1). García and colleagues, however, describe this dark energy as an entity that could have been present in both the radiation-dominated and matter-dominated epochs as a “non-interacting perfect fluid” that evolved with the universe’s other components.
1 A cosmic conundrum The different eras of cosmic expansion. Dark energy dominates in the final era, driving accelerated expansion – and this is characterized by the cosmological constant. However, “early dark-energy” models suggest that this element could have been present in the earliest moments of the universe, albeit exerting little influence. (Courtesy: Ann Feild, STScI)
“The model’s strengths are the following: first, it provides a compelling description of the universe’s accelerated expansion during its current epoch, beginning around four billion years ago,” García explains. “Second, our formulation allows for evolution with redshift instead of the cosmological constant, in which energy density does not change over time.” This could explain why the theoretical value suggested by QFT is larger than the value given by the redshifts of distant supernovae. The value has evolved over time.
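To see why an evolving dark-energy density is even an option, it helps to recall a standard textbook result (a general statement about perfect fluids in an expanding universe, not a detail of García’s specific model): a non-interacting perfect fluid with equation-of-state parameter w dilutes with the cosmic scale factor a as

\[
\rho(a) \propto a^{-3(1+w)}:
\qquad
w = \tfrac{1}{3}\ (\text{radiation}) \Rightarrow \rho \propto a^{-4},
\quad
w = 0\ (\text{matter}) \Rightarrow \rho \propto a^{-3},
\quad
w = -1\ (\Lambda) \Rightarrow \rho = \text{constant}.
\]

A cosmological constant is the special case w = –1, whose density never changes; any component with w ≠ –1, or with a w that itself evolves, instead has an energy density that varies with redshift – the freedom that an early-dark-energy description exploits.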
García identifies a further strength of her EDE model, which is that it offers several predictions that match up well with practical measurements and high-resolution data concerning various stages of the universe’s evolution. The result is a theoretical picture that matches the ratio we observe in the current dark-energy-dominated epoch of our universe, where its matter/energy content is dominated by the accelerating force. “Of course, we could use both the cosmological constant and our EDE, but it makes the description unnecessarily complicated, and there is not a physical justification for that,” says García. “We only need one component to describe the accelerated expansion of the universe today.”
If the decision to eliminate the cosmological constant, or set it to zero, taken by García and her collaborators seems somewhat arbitrary, she points out that there is almost an “arbitrariness” inherent to the introduction of the constant in the first place. “There is no fundamental reason to take for granted that dark energy has to manifest as the cosmological constant,” she remarks. “We have not detected any form of dark energy nor the cosmological constant; therefore, any form of dark energy is valid until the data confirm or refute its existence.”
The EDE that García suggests isn’t perfect. Indeed, it comes with elements that the wider scientific community may be reluctant to adopt. But she doesn’t shy away from pointing out the potential flaws in her own ideas. “There are two issues that the community could find troubling,” García admits. “On the one hand, more complex models imply a broader set of free parameters. It is not something we desire for our formulations, because those parameters might not have a direct physical interpretation. In that sense, the cosmological constant is an advantageous model, because it has a minimal number of free parameters, all of them constrained with current observations.”
The second thing that García admits may cause some caution is that the model has yet to be subjected to many observational probes. “We have been revising and looking for more sets of observational data to validate our models. Hence, we are creating a bridge between theory and observational cosmology.”
The “well-tempered” cosmological constant
Forcing the cosmological constant to take a value of zero may lead the curious cosmologist to consider what happens if we do the opposite. In other words, what would happen if we allowed it to take an arbitrarily large value, similar to that predicted by QFT?
Stephen Appleby, a cosmologist at the Asia Pacific Center for Theoretical Physics in Pohang, Republic of Korea, takes this approach to tackle the problem. He starts by assuming that the prediction given by QFT is correct, allowing Λ to take on the immensely large value it predicts (Journal of Cosmology and Astroparticle Physics 2018 034). “Using modern cosmological observations from type Ia supernovae and the CMB, we can measure the total energy density of the universe, including the vacuum energy,” Appleby explains. “The value obtained from these measurements is tiny compared to particle-physics scales.”
This is because, according to QFT, every particle in the universe should contribute to vacuum energy, thus exerting a negative pressure that is driving the expansion of the universe. The problem is that given the estimated number of particles in the universe, as well as the virtual particle pairs that pop in and out of existence in empty space, vacuum energy should be accelerating the expansion much faster than astronomers see in the redshifts of supernovae (figure 2).
2 A type Ia supernova The image shows a supernova at redshift z = 0.40 (corresponding to a distance of about 6000 million light-years), observed by the 3.6-m New Technology Telescope in Chile. Observations of such distant supernovae, which appear much dimmer than expected despite their distance, provided observational evidence that the expansion of our universe is accelerating – a finding that inspired the reintroduction of the cosmological constant. (CC BY SA 4.0/ESO)
QFT says the value of this contribution is set by the masses of the particles, which are well known, so there isn’t a problem with this aspect of QFT. As an example of the radical difference between the contribution that QFT says particles should make to the vacuum energy and the value that we actually observe, Appleby cites the electron and the Higgs boson. Based on their masses, the contributions made solely by these two particles to the vacuum energy of the universe should be roughly 40–60 orders of magnitude greater than our astronomical measurements suggest.
Assuming that the value provided by QFT is correct, Appleby and his collaborator Eric Linder from the University of California, Berkeley, have to explain why the observed value is so diminutive. They do this by refining the idea of gravity itself. “We asked the question: can we construct a theory of gravity that possesses low energy vacuum states, via lower particle contributions, despite the large cosmological constant?” explains Appleby. “Our analysis shows that such a theory can be constructed, but only by introducing additional gravitational fields to models of the universe.”
Appleby and Linder have constructed a general class of gravitational models, which suggests that vacuum energy is present, but doesn’t affect the curvature of space–time. This results in a space–time that looks like our low-energy universe, not one with the huge vacuum energy of QFT. “We pick out particular gravity models with the behaviour that we are searching for,” he continues. “Vacuum energy is present in our approach, but it does not affect the curvature of space–time. It does gravitate, but its effect is purely felt by the new gravitational field that we have introduced. In this approach, the cosmological constant problem becomes moot because it can take any value, but its effect is not felt directly.”
The strength of the model – which the duo label “the well-tempered cosmological constant” – is that no energy scales have to be fine-tuned within it. As the vacuum energy in their models doesn’t impact the curvature of space–time, the individual contributions of particles would not influence the redshift of supernovae, thereby doing away with the observational disparity. The vacuum energy in their model can therefore take whatever value QFT and particle physics predict, without conflicting with observed values from astronomy. This energy can even change due to a phase transition.
Despite this utility, Appleby, like García, accepts that the model he and Linder proposed isn’t perfect and needs to be refined. “The main issue with our work is that we have to introduce new gravitational fields, which have not yet been observed, and the kinetic energy and potential of these additional fields must take a very particular form,” he says. “It is an open question whether such a field can be embedded in some more fundamental quantum gravity model.”
Appleby also points out that his model requires a revision of GR, which is a hugely successful theory of gravity. Indeed, GR is supported by a wealth of experimental evidence both here on Earth and beyond the limits of the Milky Way. “When you modify gravity in some way, you have to show that this new theory can also pass the same stringent observational tests that GR has,” Appleby concedes. “This is a difficult hurdle for any gravity model to overcome, and we must perform these checks in the future.”
Tuning in to the cosmological constant problem
Seeking to adjust theories of gravity to account for the cosmological constant problem is also an approach that has been considered by Lombriser over in Geneva. “My research in this area started out with investigating modifications to Einstein’s theory of GR as an alternative driver of the late-time accelerated expansion of our cosmos to the cosmological constant,” explains Lombriser. “In 2015 I realized that for modifications of the theory of gravity to be the direct cause of cosmic acceleration, and not violate cosmological observations, the speed of gravitational waves would have to differ from the speed of light. That did not sound right, and I started to focus on different explanations.”
Lombriser has begun to explore the idea that while modifications to GR or scalar energy fields may not be responsible for directly causing the late-time acceleration, they could instead “tune” the cosmological constant to do so. “I was surprised that I did not even have to modify Einstein’s equations to solve the problem,” says Lombriser. “I simply had to perform an additional variation with respect to a quantity that already appears in the equations – the Planck mass, which represents the strength of the gravitational coupling.”
Give and take The opposing forces of gravity (green) and dark energy (purple) combine to define the expansion of the universe. (Courtesy: NASA/JPL-Caltech)
The variation results in an additional equation, one which constrains Λ to the volume of space–time in the observable universe (Phys. Lett. B 797 134804). It also explains why vacuum energy can’t freely gravitate. Lombriser adds that by evaluating this constraint equation with some minimal assumptions about our place in the cosmic history, he and his colleagues can estimate the value that Λ occupies in our current cosmic energy budget. They find this to be about 70%, in agreement with the dark-energy contribution suggested by observations.
“The model solves both the old and new aspects of the cosmological constant problem,” Lombriser explains. “The old problem of the gravitating vacuum energy and the new problem of the cosmic acceleration with a small cosmological constant, results in this strange coincidence of us happening to live at a time where the energy density is comparable to that of the cosmological constant. A clear strength of the model is its simplicity.”
Lombriser also accepts there are elements to the solution that he puts forward that are flawed or need refinement. In particular, he points to the fact that, due to its similarity to standard theory, the model he suggests may be impossible to falsify. “I think the way forward here is to see whether this new approach can be extended to naturally explain other poorly understood phenomena, such as producing a natural inflationary phase in the early universe,” he says. “Or we can investigate how the self-tuning mechanism appears from fundamental theory interactions. These could give rise to yet unknown phenomena that may be testable in the laboratory.”
The “vanilla” appeal of the cosmological constant
Of course, the three ideas discussed here could prove to all be theoretical dead-ends – a leap too far for researchers who have become accustomed to the mystery of the cosmological constant.
Indeed, Λ could remain a problem for descriptions of the universe and its expansion for decades to come. “This cosmological constant is like vanilla ice cream, it is very good, but kind of boring,” Garnavich concludes. “Removing it will make the house fall down unless there is a better theory to replace it.”
This will likely result in more exciting “flavours” of ideas, theories and models until a satisfactory explanation for the cosmological constant problem is found. When it comes to cosmology and science in general, there is definitely a benefit to the approach of “nothing ventured, nothing gained”. Einstein himself perfectly captured this ethos: “A person who never made a mistake never tried anything new.”
The search for axions, a hypothetical type of dark matter, could become far more efficient thanks to the emerging technique of light squeezing, according to researchers in the US. Kelly Backes and colleagues incorporated the technique into Yale University’s HAYSTAC axion experiment, halving the time needed to analyse their data. Their results highlight the promising potential for the widespread adoption of light squeezing by axion experiments worldwide.
Axions are predicted to be chargeless, much lighter than electrons, and produced in abundance after the Big Bang. This makes them a popular candidate for dark matter, a mysterious substance that appears to permeate the universe and affect the gravitational properties of large objects such as galaxies.
Recently, several experiments have tried to detect axions using strong magnetic fields, produced in the lab at cryogenic temperatures. The idea is that axions will scatter from quantum fluctuations in these fields, producing photons whose frequencies are proportional to axion masses. However, the signals produced by these photons are expected to be very weak, and extremely low levels of noise are needed to detect them. Increasingly, this presents researchers with a fundamental limit to the accuracy of their measurements.
Quantum needle in HAYSTAC
The problem stems from Heisenberg’s uncertainty principle, which describes an inescapable trade-off in accuracy when measuring the positions and momenta of quantum particles such as photons. This “quantum noise” presents a significant barrier to experiments aiming to verify existing dark matter theories, which encompass a wide range of potential axion masses. Ultimately, identifying individual photons, whose frequencies must deviate from quantum fluctuations to highly specific degrees, is like finding a single quantum needle in an enormous haystack.
In their study, the team based at Yale, the University of Colorado, NIST and the University of California, Berkeley pushed the HAYSTAC experiment beyond this limit using the latest advances in light-squeezing technology. The technique works by reducing the uncertainty of one component (position or momentum) below the usual quantum limit, at the expense of increasing the uncertainty in the other component. With the right approach, further knowledge can be gained about the component of interest without losing too much information about the other.
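In rough terms – this is a generic quantum-optics sketch rather than a description of the specific HAYSTAC receiver – squeezing redistributes the Heisenberg uncertainty between the two conjugate components of the field while preserving their product:

\[
\Delta x\,\Delta p \ge \frac{\hbar}{2},
\qquad
\Delta x \rightarrow e^{-r}\,\Delta x_{\mathrm{vac}},
\quad
\Delta p \rightarrow e^{+r}\,\Delta p_{\mathrm{vac}},
\]

where r is the squeezing parameter. The component carrying the sought-after signal can then be read out with less quantum noise, at the price of extra noise in the conjugate component that the measurement does not need.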
The researchers tested this principle by using HAYSTAC to search for axions with masses predicted by two particular theories. Previously, analysis of the resulting data would have taken around 200 days to complete. In this case, however, light squeezing enabled Backes and colleagues to handle the entire dataset in just 100 days – unfortunately, finding no clear evidence for axions.
This improvement was relatively modest, but the researchers note that light-squeezing technology is still in its early stages. With further improvements, experimental tests of axion theories could become far more efficient still. Backes’ team now hope that their manipulation of the uncertainty principle will soon be extended to a wide variety of experiments like HAYSTAC, potentially bringing a long-awaited explanation of dark matter a step closer to reality.
Pots of potential: fired clay objects can shed light on the Earth’s magnetic field.
The idea that a record of the Earth’s magnetic past might be stored in objects made from fired clay dates back to the 16th century, when William Gilbert, physician to Queen Elizabeth I, wondered whether the Earth is a giant bar magnet and whether clay bricks possess a magnetic memory.

He was right, and Gilbert’s far-sighted notion now forms the basis of a well-established method for dating archaeological sites that contain kilns, hearths, ovens or furnaces.
Known as “archaeomagnetism”, this field of research is helping geophysicists gain insights into local changes in the Earth’s magnetic field over the past 3000 years, and – as Rachel Brazil explains in the March 2021 issue of Physics World – how it might change in future.
For the record, here’s a run-down of what else is in the issue.
• China detector hints at new physics – The PandaX-II dark-matter experiment has confirmed previous signs of exotic particles but further evidence will be needed, as Edwin Cartlidge reports
• UAE Hope probe reaches Mars orbit – The United Arab Emirates has become the first Arab country to reach another planet, a feat that it hopes will turbocharge its science base, as James Dacey reports
• Concerns raised as Oxford renames physics chair – The University of Oxford has announced that its Wykeham chair of physics will be renamed after the giant Chinese technology corporation Tencent, which denies claims it has links with the Chinese security services. Michael Allen reports
• Widening career aspirations – As children narrow down their career interests from an early age, Carol Davenport says it is important that they are brought up with a positive attitude towards science
• Supporting science in difficult times – With COVID-19 fostering anti-science conspiracies, Caitlin Duffy says that scientists have a duty to speak up and challenge misinformation
• Grounds for optimism – The solution to climate change could be lying beneath our feet. James McKenzie examines the potential of pumps that warm our homes and offices by extracting heat from the ground below
• Beneath the rotunda – Robert P Crease reflects on the US Capitol’s invasion from a unique perspective
• Digging up magnetic clues – Analysing magnetic information stored in ancient artefacts is revealing the recent history of the Earth’s magnetic field and providing clues to the changes we might expect in the future. Rachel Brazil explains
• A new generation takes on the cosmological constant – The long-standing problem of the cosmological constant, described both as “the worst prediction in the history of physics” and by Einstein as his “biggest blunder”, is being tackled with renewed vigour by today’s cosmologists. Rob Lea investigates
• Make or break: building soft materials with DNA – DNA molecules are not fixed objects, they are constantly getting broken up and glued back together to adopt new shapes. Davide Michieletto explains how this process can be harnessed to create a new generation of “topologically active” material
• Hunt for the superheavies – Hamish Johnston reviews Superheavy: Making and Breaking the Periodic Table by Kit Chapman
• Strolling in the deep – Ian Randall reviews The Brilliant Abyss: True Tales of Exploring the Deep Sea, Discovering Hidden Life and Selling the Seabed by Helen Scales
• Rethinking nuclear for a greener planet – Troels Schönfeldt, co-founder and chief executive of Danish start-up Seaborg Technologies, talks to Julianna Photopoulos about his career in nuclear and particle physics – and how he unintentionally became an “impact entrepreneur”
• Fine structure and black holes – Sidney Perkowitz pays tribute to new research on these astronomical marvels
Black holes remain a fascinating idea in popular physics while inspiring high-level research. Indeed, the 2020 Nobel Prize for Physics honoured theoretical work on black holes carried out by Roger Penrose and observational results obtained by Andrea Ghez and Reinhard Genzel. In the 1990s, both Ghez and Genzel independently analysed the motion of stars near our galactic centre, some 27,000 light-years away from Earth. They concluded that a supermassive black hole (SMBH) resides there and holds 4 million times the mass of our Sun. Apart from finding unambiguous evidence of its existence, the discovery carries a bonus – the black hole’s extreme gravitational effects provide a new way for physicists to explore α, the fine structure constant.
Physical theories rely on essential constants such as c, e, ħ and the gravitational constant G, but some physicists argue that dimensionless constants are more fundamental because they are invariant in any system of measurement. The fine structure constant α – defined as e²/(4πε₀ħc), where ε₀ is the permittivity of free space – is one such pure number, equal to 0.0072973525693. It appeared in 1916 when Arnold Sommerfeld added relativistic corrections to the Bohr model of the atom and obtained better agreement with the observed fine features of the hydrogen spectrum. α showed up again in the 1920s in Paul Dirac’s relativistic quantum ideas, and in astronomer Arthur Eddington’s theory of the universe. Eddington predicted that 1/α would be an integer but failed to make a convincing case (the latest value, 137.036, is only nearly a whole number).
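To make the definition concrete, here is a minimal sketch (not part of this article) that evaluates α from the CODATA values of e, ε₀, ħ and c bundled with SciPy, and checks that 1/α lands close to, but not exactly on, 137:

```python
# A minimal sketch: evaluate alpha = e^2 / (4*pi*epsilon_0*hbar*c) from CODATA
# constants and compare with SciPy's stored value of the fine structure constant.
from math import pi
from scipy import constants

alpha = constants.e**2 / (4 * pi * constants.epsilon_0 * constants.hbar * constants.c)

print(f"alpha (computed) = {alpha:.13f}")                    # ~0.0072973525693
print(f"alpha (CODATA)   = {constants.fine_structure:.13f}")
print(f"1/alpha          = {1/alpha:.3f}")                   # ~137.036, nearly but not quite an integer
```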
Instead, α acquired deeper meaning within quantum electrodynamics (QED), the theory that won Richard Feynman and two others a Nobel prize in 1965. Now α is understood as determining how strongly electrons and photons couple. It is a key to the electromagnetic force, which – along with gravity, and the strong and weak nuclear forces – controls the universe. Within multiverse models proposing that our particular universe is especially tuned to support life, α may be additionally significant because a small change in its value would affect the conditions for life to form.
In 1937 Dirac asked if the “constants” are really constant when he speculated that α and G have changed as the universe has aged. Such changes in the constants of nature could alter the Standard Model of particle physics, and general relativity, as well as modify our understanding of the history of the universe. Following Dirac’s suggestion, various researchers have searched for changes in the constants, especially c and α.
Since 1999 astrophysicist John Webb at the University of New South Wales, Australia, has sought changes in α over cosmic time. He has examined light from astronomically distant sources after it has passed through intervening interstellar clouds, whose atoms imprint spectral absorption lines on the light. Analysing these wavelengths gives the value of α at the remote location and therefore in a younger universe, as determined by the time lag due to the finite speed of light. Webb’s early data showed an extremely small increase over the last 6 billion years, but in 2020 he interpreted new results from 13 billion years ago, when the universe was only 0.8 billion years old, as “consistent with no temporal change”. However, Webb obtained a bigger change, 4 × 10⁻⁵ relative to the value on Earth, in measurements made in the strong gravity around a white dwarf star.
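As a purely illustrative sketch of the logic behind such measurements (not Webb’s actual analysis pipeline), the standard “many-multiplet” parametrisation writes an observed transition frequency as ω = ω₀ + q[(α_obs/α₀)² − 1], where ω₀ is the laboratory frequency and q is a calculated sensitivity coefficient; inverting this for a small measured shift gives Δα/α. All numbers below are placeholders:

```python
# Illustrative only: invert the many-multiplet relation
#   omega_obs = omega_lab + q * [(alpha_obs/alpha_0)**2 - 1]
# for delta(alpha)/alpha, using made-up numbers.
omega_lab = 38_543.0    # laboratory wavenumber of a hypothetical transition (cm^-1)
omega_obs = 38_543.1    # hypothetical de-redshifted wavenumber from the distant absorber (cm^-1)
q_coeff   = 1_200.0     # hypothetical sensitivity coefficient (cm^-1)

x = (omega_obs - omega_lab) / q_coeff   # = (alpha_obs/alpha_0)**2 - 1
delta_alpha_over_alpha = x / 2          # small-change approximation

print(f"delta(alpha)/alpha ~ {delta_alpha_over_alpha:.1e}")
```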
There are theoretical reasons why α should depend on gravity. Also last year, general relativity theorist Aurélien Hees of the Paris Observatory, along with 13 international co-authors including Ghez, used her data to measure the effect of the black hole’s gravity on α (Phys. Rev. Lett. 124 081101).
This is the first measurement of α near an SMBH, and the work shows that this approach can more fully examine the connection between α and gravity. Ghez established the presence of the SMBH by plotting the observed paths of stars that orbit the galactic centre. These paths occurred within the gravitational field from the presumed black hole but were distant enough to form ellipses according to Newtonian mechanics; general relativity was not required. Then a comparatively straightforward analysis yielded the value of 4 million solar masses at an elliptical focus that held the stars in orbit.
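The “comparatively straightforward analysis” is essentially Kepler’s third law, M = 4π²a³/(GT²). A rough sketch (not the authors’ actual orbital fit), using approximate published values for the star S2 – an orbital period of about 16 years and a semi-major axis of roughly 1000 au – recovers a mass of order 4 million Suns:

```python
# Rough Keplerian mass estimate for the galactic-centre black hole.
# S2's orbital parameters below are approximate published values.
from math import pi
from scipy import constants

a = 1000 * constants.au              # semi-major axis of S2, ~1000 au, in metres
T = 16.0 * 365.25 * 24 * 3600        # orbital period of S2, ~16 years, in seconds
M_SUN = 1.989e30                     # solar mass in kg

M = 4 * pi**2 * a**3 / (constants.G * T**2)               # Kepler's third law
print(f"enclosed mass ~ {M / M_SUN:.1e} solar masses")    # ~4e6
```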
To measure α at high gravity, the researchers chose five stars that come near the SMBH and also have stellar atmospheres with strong spectral absorption lines. Wavelength analysis then gave the value of α at those locales, with small measured deviations of 1 × 10⁻⁵ or less from the Earthly value. Still, the data already yield new insight by supporting the prediction that the change in α is proportional to the gravitational potential. However, the measurement uncertainties are too large to yield a definitive value for the proportionality constant, which according to Hees et al. would help distinguish among different theories that incorporate dark matter and dark energy.
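In tests of this kind the proportionality is usually written as Δα/α = k_α Φ/c², where Φ = GM/r is the Newtonian potential at the star and k_α is the coupling constant to be measured. The sketch below is a hypothetical illustration of the scales involved – the stellar distance and the value of k_α are invented, not taken from the paper:

```python
# Hypothetical illustration of delta(alpha)/alpha = k_alpha * Phi/c^2
# for a star near a ~4-million-solar-mass black hole.
from scipy import constants

M_BH = 4e6 * 1.989e30          # black hole mass, kg
r    = 1000 * constants.au     # hypothetical distance of the star from the black hole, metres
k_a  = 1e-2                    # hypothetical coupling constant

phi_over_c2 = constants.G * M_BH / (r * constants.c**2)   # dimensionless potential
print(f"Phi/c^2            ~ {phi_over_c2:.1e}")           # ~4e-5 for these numbers
print(f"delta(alpha)/alpha ~ {k_a * phi_over_c2:.1e}")
```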
Hees now wants to observe stars that are closer to the black hole, as they experience a stronger gravitational potential. The spectral analysis will be harder, but Hees reckons he can reduce the measurement errors 10-fold and has requested new telescope time to do so. We should be optimistic that further improvement will bring new knowledge about α and the universe.
Atomic nuclei have been intensely researched for more than a century, but they remain things of mystery and wonder – especially to the nuclear physicists who study them. We know that nuclei are made of protons and neutrons bound together by the residual strong force. But the extreme difficulty of calculating nuclear properties using the Standard Model of particle physics leaves much to be learned about their internal workings. In a sense, nuclei are like the world’s oceans: despite their ubiquity, we are still on the shoreline trying to understand what lies in their depths.
Nuclei are made of just two components, but their properties can be very different indeed. Most of the nuclei in your body, for example, have been around for billions of years, yet some rare nuclei made in the lab can last just tiny fractions of a second before decaying. It is the heaviest of these rare nuclei, and the people who devoted their careers to discovering and characterizing them before they decay, that are the subject of Superheavy: Making and Breaking the Periodic Table by the pharmacist turned science writer Kit Chapman.
The book takes the reader on a romp that begins in 1930s Paris, when Irène and Frédéric Joliot-Curie discovered that heavier elements could be made by bombarding lighter elements with alpha particles (helium nuclei). This was followed shortly thereafter in Rome by Enrico Fermi and the “Via Panisperna Boys” who found that bombardment with neutrons had a similar effect.
The race was on to find new heavy elements and the result was a transformation of the periodic table – which is conveniently included in the frontmatter of Chapman’s book. Like many physicists, I had not taken a serious look at a periodic table since my last chemistry course – 35 years ago – and I rather sheepishly admit that studying the table once more was a revelation. Indeed, I wondered aloud “Where did all those new superheavy elements come from?” Even though as a physics journalist I have covered the twists and turns in the discovery and naming of new elements over the past two decades, I had always looked at each element in isolation and did not fully appreciate how the periodic table – full of holes when I was young – appears much more complete, at least for now.
What I mean by complete is that in the current incarnation of the familiar version of the table, the seventh and final row is full of elements named after people and places – there are no gaps and no systematic placeholder names such as unnilseptium, which were used while scientists argued over element names. The seventh row begins on the left with francium and radium; is punctuated by the 15 actinoids (from actinium to lawrencium); and then makes the sprint across the transition metals and on towards the noble gases.
The superheavy elements are the final 15 in this row, from rutherfordium with 104 protons, on to dubnium and eventually to oganesson with 118. Those last two names, by the way, reflect the importance of the Soviet/Russian Joint Institute for Nuclear Research (JINR) in the search for superheavy elements. The lab is in Dubna, near Moscow, and since 1989 it has been run by Yuri Oganessian. Strictly speaking, new elements should not be named after living people, but two exceptions have been made – oganesson (118) and seaborgium (106), the latter honouring the nuclear chemist Glenn Seaborg, who created and ran the rare elements programme at the University of California, Berkeley.
GSI in Darmstadt, Germany, and RIKEN’s Radioactive Isotope Physics Laboratory in Japan were also major contributors to the discovery of the superheavy elements. They have been honoured with the names darmstadtium (110), hassium (108) and nihonium (113) – the last two inspired by the Latin name for the German state of Hesse and an alternative name for Japan.
Like many things in modern physics, the drive to create superheavy elements began in earnest during the Second World War with the race to build the atomic bomb – and specifically the development of a way to produce significant amounts of plutonium. That element was discovered in 1941 at the University of California Berkeley by a team that included three future Nobel laureates: Seaborg, Emilio Segrè and Edwin McMillan.
Chapman reveals that Seaborg chose the symbol Pu for plutonium because of the stench of his Berkeley chemistry lab. Although physicists had played an important role in the early discovery of new elements – the work of Seaborg and colleagues was made possible by the cyclotron, which was invented at Berkeley by the physicist and Nobel laureate Ernest Lawrence – it was chemists who isolated the new elements from bombarded targets. This was no mean feat; not only did they have to predict the chemistry of an element that had never been seen before, they also had to work very quickly because the elements have short half-lives. Indeed, Chapman tells us that Berkeley nuclear scientist Albert Ghiorso famously used a souped-up Volkswagen Beetle to transport samples in the shortest time possible across the campus, from where they were made to where they were analysed.
Because much of the early effort to create new elements occurred during the Second World War and the Cold War, there was a certain amount of censorship involved in publication of the work. Chapman points out that even before the US entered the war in 1941, the British were concerned that American scientists were providing the Germans with information that could be used to create nuclear weapons. In 1945 US officials prevented the publication of a Superman comic strip because the superhero was irradiated in a cyclotron – which was described in too much accurate detail for the wartime censors.
While Seaborg is the scientist most associated with the discovery of new elements, it is Ghiorso who holds the record for being involved in the most discoveries. In 1993 he helped discover element 106, putting his tally at 11 and beating the 185-year record held by Humphry Davy. Because the discovery was made at Berkeley, the lab gained the right to name the element. This was at a time when Berkeley, JINR and Darmstadt were in competition to find and name new elements.
The days of isolating new elements and studying their chemistry were waning. By the 1990s researchers often only caught fleeting glimpses of new elements and had to try to determine their decay chains – often only seeing part of the picture. Science is usually done incrementally, with different labs contributing evidence that eventually adds up to a discovery – so establishing priority over who made a discovery, and who therefore had naming rights, was a tricky business.
While this competition between labs resulted in a flurry of new elements, the labs were at loggerheads when it came to naming the new elements. This ruckus was dubbed the “Transfermium Wars”, with transfermium referring to elements beyond fermium (100). The wars ran for about 30 years, starting in the 1960s, and during this period three different elements had been named rutherfordium by different research groups and three different names had been proposed for element 102. What is more, two different names had been proposed to honour the Danish physicist Niels Bohr – bohrium and nielsbohrium – the latter favoured by the Germans who were concerned that bohrium could be confused with boron.
In 1986 the Transfermium Working Group was set up by the governing bodies of chemistry and physics (IUPAC and IUPAP respectively) to sort out the mess and after a decade-long slog it finally came up with a definitive list of names in 1997 – and bohrium (107) won out over nielsbohrium.
As for the future of the superheavy element hunters, Chapman writes that the best guess of physicists is that there could be as many as 172 elements – which means more than 50 could still be up for discovery. But Chapman also points out that discovering more and more heavy elements could be the undoing of the periodic table, bringing about the “end of chemistry”. While that might sound ominous, I’m afraid it doesn’t mean that chemistry students of the future can avoid learning how to balance redox reactions. What Chapman means is that the atoms formed by superheavy elements have properties that are not predicted by their position in the periodic table – a cornerstone of chemistry.
An early hint of this is that research at Dubna using tiny numbers of copernicium (112) and flerovium (114) atoms suggests that their chemical properties are not as expected given their places in the periodic table. Flerovium, for example, should behave like lead, the element above it in the periodic table, and copernicium should behave like mercury – but that is not what the studies found. The likely reason is that these elements have huge charges on their nuclei and large numbers of electrons, so the conventional way of understanding how elements react breaks down.
So rather than heralding the end of chemistry, the superheavy elements look set to open an exciting new chapter.
This video highlights the MeasureReady M91 FastHall, a revolutionary, all-in-one Hall analysis instrument that delivers significantly higher levels of precision, speed, and convenience to researchers involved in the study of electronic materials.
The M91 FastHall measurement controller combines all of the necessary Hall measurement system (HMS) functions into a single instrument, automating and optimizing the measurement process and directly reporting the calculated parameters. With Lake Shore’s patented FastHall measurement technique, the M91 fundamentally changes the way the Hall effect is measured by eliminating the need to switch the polarity of the applied magnetic field during the measurement. This breakthrough results in faster and more accurate measurements, especially when using high-field superconducting magnets or when measuring very low mobility materials.
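For readers unfamiliar with Hall analysis, the quantities such an instrument reports follow from textbook relations: the sheet carrier density is n_s = IB/(e|V_H|) and the Hall mobility is μ = 1/(e n_s R_s). The sketch below uses these generic formulas with hypothetical measured values; it is not Lake Shore’s FastHall algorithm:

```python
# Generic, textbook-style Hall analysis with hypothetical measured values.
from scipy import constants

I   = 1e-3      # excitation current (A), hypothetical
B   = 1.0       # applied magnetic field (T), hypothetical
V_H = 2.5e-3    # measured Hall voltage (V), hypothetical
R_s = 350.0     # measured sheet resistance (ohm per square), hypothetical

n_s = I * B / (constants.e * abs(V_H))    # sheet carrier density, m^-2
mu  = 1.0 / (constants.e * n_s * R_s)     # Hall mobility, m^2 V^-1 s^-1

print(f"sheet carrier density ~ {n_s:.2e} m^-2")
print(f"Hall mobility         ~ {mu * 1e4:.0f} cm^2/Vs")
```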