The Nobel prizes highlight what is wrong with recognition in science

The 2024 Nobel prizes in both physics and chemistry were awarded, for the first time, to scientists who have worked extensively with artificial intelligence (AI). Computer scientist Geoffrey Hinton and physicist John Hopfield shared the 2024 Nobel Prize for Physics. Meanwhile, half of the chemistry prize went to computer scientists Demis Hassabis and John Jumper from Google DeepMind, with the other half going to the biochemist David Baker.

The chemistry prize highlights the transformation that AI has achieved for science. Hassabis and Jumper developed AlphaFold2 – a cutting-edge AI tool that can predict the structure of a protein based on its amino-acid sequence. It revolutionized this area of science and has since been used to predict the structure of almost all 200 million known proteins.

The physics prize was more controversial, given that AI is not traditionally seen as physics. Hinton, who has a background in psychology, works in AI and helped develop “backpropagation” – a key part of machine learning that enables neural networks to learn. For this work he won the 2018 Turing Award from the Association for Computing Machinery, which some consider the computing equivalent of a Nobel prize. The physics part mostly came from Hopfield, who developed the Hopfield network, on which Hinton built the Boltzmann machine. Both are based on ideas from statistical physics and are now fundamental to AI.

While the Nobels sparked debate in the community about whether AI should be considered physics or chemistry, I don’t see an issue: the domains and definitions of subjects have simply moved on. Indeed, it is clear that the science of AI has had a huge impact. Yet the Nobel Prize for Physiology or Medicine, which was awarded to Victor Ambros and Gary Ruvkun for their work on microRNA, sparked a different, albeit well-worn, controversy: that no more than three people can share each science Nobel prize in a world where scientific breakthroughs are increasingly collaborative.

No-one would doubt that Ambros and Ruvkun deserve their honour, but many complained that Rosalind Lee, who is married to Ambros, was overlooked for the award. She was the first author of the 1993 paper (Cell 75 843) that was cited for the prize. While I don’t see strong arguments for including Lee simply because she was first author or because she is married to the last author (she herself has said as much), the case highlights the problem of how to credit teams and whether the lab lead should always be given the praise.

What sounded alarm bells for me was rather the demographics of this year’s science Nobel winners. It was not hard to notice that all seven were white men born or living in the UK, the US or Canada. To put this in context, white men in those three countries make up just 1.8% of the world’s population. A 2024 study by the economist Paul Novosad from Dartmouth College in the US and colleagues examined the income rank of the fathers of previous Nobel laureates. Rather than being uniformly distributed, more than half of the laureates came from fathers in the top 5% by income.

This is concerning because, taken with other demographics, it tells us that less than 1% of people in the world can succeed in science. We should not accept that such a tiny demographic are born “better” at science than anyone else. The Nobel prizes highlight that we have a biased system in science and little is being done to even out the playing field.

Increasing the talent pool

Non-white people in western countries have historically been oppressed and excluded from or discouraged from science, a problem that remains largely unaddressed today. The Global North is home to a quarter of the world’s population but claims 80% of the world’s wealth and dominates the Global South both politically and economically. The Global North continues to acquire wealth from poorer countries through resource extraction, exploitation and the use of transnational corporations. Many scientists in the Global South simply cannot fulfil their potential: they lack the resources to buy equipment, are unable to attend conferences and cannot even subscribe to journals.

Moreover, women and Black scientists worldwide – even within the Global North – are not proportionally represented by Nobel prizes. Data show that men are more likely to receive grants than women and are awarded almost double the funding amount on average. Institutions are also more likely to hire and promote men than women. The fraction of women employed by CERN in science-related areas, for example, is 15%. That’s below the 20–25% of people in the field who are women (at CERN, 22% of users are women), a proportion that is itself only around half of what you would expect given the global population.

AI will continue to play a stronger and more entangled role in the sciences, and it is promising that the Nobel prizes have evolved out of the traditional subject sphere in line with modern and interdisciplinary times. Yet the demographics of the winners highlight a discouraging picture of our political, educational and scientific system. Can we as a community help reshape the current structure, which favours those from affluent backgrounds, and work harder to reach out to young people – especially those from disadvantaged backgrounds?

Imagine the benefit not only to science – with a greater pool of talent – but also to society and our young students when they see that everyone can succeed in science, not just the privileged 1%.

Ask me anything: Dave Smith – ‘I don’t spend time on regrets’

What skills do you use every day in your job?

Being sociable, switching topics in an instant and making judgements.

Being sociable may sound trivial, but collaboration has been vital in all the roles I have had, especially now that I work in such a large and complex organization. No single person has the answer to the challenges we face (although occasionally you meet people who think they do). By working together, humans accomplish amazing things.

A key feature of seniority – managerial seniority anyway – is juggling multiple topics each day: from the bogs and bike sheds to finance, investment decisions, technical review and people, it has few limits. With the fast pace of our work, especially with a new government coming in, we need to quickly adapt and reprioritize. I have several teams reporting to me at any one time, so it’s important to allocate time and focus effectively – this is a core skill I’m constantly working on.

What do you like best and least about your job?

Even though I am officially part of the Department for Science, Innovation and Technology (DSIT), as the national technology adviser, I love that my work spans all government departments. We have a fantastic network of departmental chief scientific advisers (CSAs), led by Dame Angela McLean, the government’s chief scientific adviser. This network lets me see the amazing work my colleagues are doing. Anyone who has worked in government knows how tricky it can sometimes be to work through the barriers between departments. But the CSA network is open, allowing us to have honest and productive conversations, which is crucial for effective collaboration.

I’m also incredibly lucky to have a wonderful, efficient and supportive private office. They help me connect with the right people across government to push our key projects forward.

What do you know today that you wish you knew when you were starting out in your career?

I don’t spend time on regrets, but I do try to learn. Learning is part of the journey and the joy, so I am not sure that I would give my younger self any advice. There have been big highs and deep lows but it has turned out OK so far. I have had three career plans in my life; they made me feel secure, but I didn’t complete any of them because something more interesting cropped up. Since then, I have stopped having plans.

I would say two things to others, however. The first is advice that was given to me, which is to do the right things to make yourself useful in the first half of your career, then the second half will look after itself – don’t chase glory, just get good. The second is that whilst some might dismiss diversity as a buzzword, I see it as crucial to success, so value a wide range of views and skills when forming teams.

Speeding up MR-guided radiotherapy with VMAT

Researchers at the University of Iowa and University Medical Center Utrecht are working to incorporate volumetric-modulated arc therapy (VMAT) delivery capabilities into the MR-linac. They present the first dosimetric evaluation comparing VMAT and intensity-modulated radiation therapy (IMRT) on the MR-linac in the International Journal of Radiation Oncology, Biology, Physics.

“We’ve been doing MR-guided radiotherapy at the University of Iowa for about five and a half years now,” says senior author Daniel Hyer, a medical physicist and professor of radiation oncology at the University of Iowa. “We want to treat as many patients as possible in MR-guided radiotherapy and improve the access to technology, but we also want to do it efficiently so that we don’t have intra-fraction motion.”

MR-guided radiotherapy combines magnetic resonance imaging (MRI) with radiation therapy to treat cancer. By providing real-time images of internal organs during treatment, clinicians can more accurately target radiation to a tumour and spare healthy tissues.

MR-linacs in clinics today support IMRT, which delivers radiation in a “step-and-shoot” manner. For many treatment sites, VMAT is often preferred over IMRT. VMAT delivers radiation continuously through one or more arcs around a patient and has advantages in target coverage, organs-at-risk sparing, and planning and delivery times. To date, however, VMAT isn’t available on commercial MR-linacs.

Incorporating VMAT delivery into the MR-linac could improve plan quality and efficiency and reduce patient discomfort. Current MR-guided radiotherapy workflows require the patient to lie in the scanner bore throughout MR imaging, contour registration, plan optimization, dose checks and a verification scan prior to plan delivery, which can take over 20 min.

“VMAT is one of the top asks from physicians when it comes to desired – but currently missing – MR-linac functionality. The future availability of VMAT on the MR-linac will allow access to highly precise MRI-guided treatments for more patients,” says co-author Martin Fast, an associate professor at University Medical Center Utrecht.

In 2022, Fast’s group had shown that VMAT deliveries were possible on the Elekta Unity, a 1.5 T MR-linac; however, they lacked clinical-quality VMAT plans with fluence modulation. A serendipitous meeting helped both groups overcome hurdles in their research.

“Dan’s group and our group in Utrecht were independently working on MR-linac VMAT. Dan from the planning side, us from the delivery side,” Fast explains. “Dan’s limitation was that he couldn’t prove that his plans were deliverable, and we didn’t have the high-quality clinical-grade VMAT plans available for delivery. The collaboration started in July 2023 through a chance encounter in Houston, where Dan and I happened to present at the same session during an AAPM pre-meeting course.”

In their current study, the researchers demonstrated that VMAT deliveries are possible on Unity, without requiring changes to clinical hardware. The retrospective study showed that the combined optimization and delivery time was shortened by up to 7.5 min compared with standard step-and-shoot IMRT.

“VMAT nearly doubles the delivery speed compared to conventional step-and-shoot IMRT, which means faster treatments (i.e., better patient comfort) and better accuracy (due to less chance for unexpected motion),” says Fast.

In collaboration with Elekta, the researchers developed a modified version of software that allowed them to deliver a VMAT-like plan with Unity. For 10 prostate cancer patients previously treated on a 1.5 T MR-linac, they replanned treatments to deliver 36.25 Gy in five fractions, using three techniques: step-and-shoot IMRT with a clinical optimizer; the same optimizer with a VMAT technique; and a research-based optimizer with VMAT.

The plans were adapted onto MRI datasets using two optimization strategies to assess adapt-to-position planning. The team assessed plan quality by evaluating organs-at-risk sparing and evaluated treatment efficiency by measuring the optimization time, delivery time and total (optimization plus delivery) time. Delivery accuracy was assessed via a gamma analysis (2%/2 mm).
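
For readers unfamiliar with gamma analysis, the sketch below shows roughly how a 2%/2 mm global gamma comparison works in one dimension: each reference point searches the evaluated dose for the best combined agreement in dose and position, and the passing rate is the fraction of points with gamma ≤ 1. It is a generic, simplified Python illustration – the function, toy dose profiles and parameters are ours, not the team’s actual analysis code.

import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol=0.02, dist_tol_mm=2.0):
    """Simplified 1D global gamma analysis.

    dose_ref, dose_eval : dose profiles sampled at `positions` (mm)
    dose_tol            : dose-difference criterion as a fraction of the
                          maximum reference dose (2% -> 0.02)
    dist_tol_mm         : distance-to-agreement criterion in mm (2 mm)
    Returns the fraction of reference points with gamma <= 1.
    """
    d_max = dose_ref.max()
    gammas = []
    for x_ref, d_ref in zip(positions, dose_ref):
        dose_term = (dose_eval - d_ref) / (dose_tol * d_max)   # normalized dose difference
        dist_term = (positions - x_ref) / dist_tol_mm          # normalized distance
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    return (np.array(gammas) <= 1.0).mean()

# Toy example: two nearly identical Gaussian dose profiles
x = np.linspace(-50, 50, 501)                       # positions in mm
ref = np.exp(-x**2 / (2 * 15**2))                   # reference plan dose (arbitrary units)
meas = 1.01 * np.exp(-(x - 0.5)**2 / (2 * 15**2))   # "delivered" dose, slightly shifted and scaled
print(f"2%/2 mm gamma passing rate: {gamma_pass_rate(ref, meas, x):.1%}")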

Results showed savings in total (optimization plus delivery) time of up to 7.5 min for both VMAT approaches – the clinical optimizer with VMAT and the research-based optimizer with VMAT – compared with the clinical optimizer with IMRT. Adapt-to-position planning showed a similar reduction in total time. All VMAT plans had gamma passing rates greater than 96%, and the delivery efficiency of VMAT plans was nearly 90%, compared with 50% for clinical IMRT.

“We’ve shown [that this technology] is feasible. We delivered it, we’ve done the quality assurance, we’ve made the plans. I think that is a huge milestone to pushing this towards clinical implementation. And there’s no major physics or technical hurdles that still need to be cleared – it’s mostly engineering…so that’s great news,” says Hyer.

Next steps for the researchers include providing physics guidance and quality assurance tests so that when VMAT becomes clinically available on MR-linacs, medical physicists have recommendations for implementation. They are also looking at VMAT for gated treatments, motion management strategies for VMAT deliveries, and other treatment sites.

How the UK Metamaterials Network supports scientific and commercial innovation

This episode of the Physics World Weekly podcast explores the science and commercial applications of metamaterials with Claire Dancer of the University of Warwick and Alastair Hibbins of the University of Exeter.

They lead the UK Metamaterials Network, which brings together people in academia, industry and governmental agencies to support and expand metamaterial R&D; nurture talent and skills; promote the adoption of metamaterials in the wider economy; and much more.

According to the network, “A metamaterial is a 3D structure with a response or function due to the collective effect of meta-atom elements that is not possible to achieve conventionally with any individual constituent material”.

In a wide-ranging conversation with Physics World’s Matin Durrani, Hibbins and Dancer talk about exciting commercial applications of metamaterials including soundproof materials and lenses for mobile phones – and how they look forward to welcoming the thousandth member of the network sometime in 2025.

Laser-based headset assesses stroke risk using the brain’s blood flow

A team of scientists based in the US has developed a non-invasive headset device designed to track changes in blood flow and assess a patient’s stroke risk. The device could make it easier to detect early signs of stroke, offering patients and physicians a direct, cost-effective approach to stroke prevention.

The challenge of stroke risk assessment

Stroke remains a leading cause of death and long-term disability, affecting 15 million people worldwide every year. In the United States, someone dies from a stroke roughly every 3 min. Those who survive are often left physically and cognitively impaired.

About 80% of strokes occur when a blood clot blocks an artery that carries blood to the brain (ischaemic stroke). In other cases, a blood vessel ruptures and bleeds into the brain (haemorrhagic stroke). In both types of stroke, brain cells are deprived of oxygen by the loss of blood flow and millions of them die every minute, causing devastating disability and even death.

As debilitating as stroke is, current methods for assessing stroke risk remain limited. Physicians typically use a questionnaire that assesses factors such as demographics, blood test results and pre-existing medical conditions to estimate a patient’s risk. While non-invasive techniques exist to detect changes after the onset of a stroke, by the time a stroke is suspected and patients are rushed to the emergency room, critical damage may have already been done.

Consequently, there remains an acute need for tools that can proactively monitor and quantify stroke risk before an event occurs.

Blood flow dynamics as proxies for stroke risk

Seeking to bridge this gap, in a study published in Biomedical Optics Express, a research team, led by Charles Liu of the Keck School of Medicine at the University of Southern California and Changhuei Yang of California Institute of Technology, developed a headset device to monitor changes in the brain’s blood flow and volume while a patient holds their breath.

“Stroke is essentially a brain attack. The stroke world has been trying to draw a parallel between a heart attack and a brain attack,” explains Liu. “When you have a heart disease, under normal circumstances – like sitting on the couch or walking to the kitchen – your heart may seem fine. But if you start walking uphill, you might experience chest pain. For heart diseases, we have the cardiac stress test. During this test, a doctor puts you on a treadmill and monitors your heart with EKG leads. For stroke, we do not have a scalable and practical equivalent to a cardiac stress test.”

Indeed, breath holding temporarily stresses the brain, similar to the way that walking uphill or running on a treadmill would stress the heart in a cardiac stress test. During breath holding, blood volume and blood flow increase in response to lower oxygen and higher carbon dioxide levels. In turn, blood vessels dilate to mitigate the pressure of this increase in blood flow. In patients with higher stroke risk, less flexible blood vessels would impede dilation, causing distinct changes in blood flow dynamics.

Researchers have long had access to various imaging techniques to measure blood dynamics in the brain. However, these methods are often expensive, invasive and impractical for routine screening. To circumvent these limitations, the team built a device comprising a laser diode and a camera that can be placed on the head with no external optical elements, making it lightweight, portable, and cost-effective.

The device transmits infrared light through the skull and brain. A camera positioned elsewhere on the head captures the transmitted light through the skull. By tracking how much the light intensity decreases as it travels through the skull and into the camera, the device can measure changes in blood volume.
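
In near-infrared measurements of this kind, the change in attenuation is usually related to the change in absorber (i.e. blood) concentration through the modified Beer–Lambert law. The expression below is that standard textbook relation, given here for orientation rather than as the exact model used by the team:

\[
\Delta \mathrm{OD}(t) = -\log_{10}\frac{I(t)}{I_{0}} \approx \varepsilon \, \Delta c(t) \, L \, \mathrm{DPF},
\]

where I is the detected intensity, I0 a baseline intensity, ε the absorption coefficient of haemoglobin, Δc the change in its concentration (a proxy for blood volume), L the source–detector separation and DPF a differential pathlength factor that accounts for scattering in tissue.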

When a coherent light source such as a laser scatters off a moving sample (i.e., flowing blood), it creates a type of granular interference pattern, known as a speckle pattern. These patterns fluctuate as blood moves through the brain – the faster the blood flow, the quicker the fluctuations. This technique, called speckle contrast optical spectroscopy (SCOS), enables the researchers to non-invasively measure the blood flow rate in the brain.
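
As a rough illustration of the SCOS principle, the Python snippet below computes a local speckle-contrast map, K = σ/⟨I⟩, from a single camera frame – lower contrast over a fixed exposure corresponds to faster blurring of the speckles, and hence faster flow. This is a minimal sketch with synthetic data; the window size, exposure handling and processing pipeline of the actual device are not described in the article.

import numpy as np

def speckle_contrast_map(frame, window=7):
    """Local speckle contrast K = std/mean over square windows of a camera frame."""
    frame = frame.astype(float)
    pad = window // 2
    padded = np.pad(frame, pad, mode="reflect")
    h, w = frame.shape
    K = np.empty((h, w))
    # A direct loop keeps the sketch easy to follow (a box filter would be faster).
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            K[i, j] = patch.std() / (patch.mean() + 1e-12)
    return K

# Toy usage: fully developed speckle has exponentially distributed intensities,
# for which the contrast is close to 1; motion blur pushes it towards 0.
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(64, 64))
print("mean speckle contrast:", round(speckle_contrast_map(frame).mean(), 2))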

The researchers tested the device on 50 participants, divided into low- and high-risk groups based on a standard stroke-risk calculator. During a breath-holding exercise, they found significant differences in blood dynamic changes between people with high stroke risk and those at lower risk.

Specifically, the high-risk group exhibited a faster blood flow rate but a lower volume of blood in response to the brain’s oxygen demands, suggesting restricted blood flow through the stiff vessels. Overall, these findings establish physiological links between stroke risk and blood dynamics measurements, highlighting the technology’s potential for stroke diagnosis and prevention.

The future of stroke prevention

The team plans to expand these studies to a broader population to reinforce the validity of the results. “Our goal is to further develop this concept to ensure it remains portable, compact, and easy to operate without requiring specialized technicians. We believe the design is scalable, aligning well with our vision of accessibility, allowing diverse and underrepresented communities to benefit from this technology,” says co-lead author Simon Mahler, a postdoctoral scholar in the Yang lab at Caltech.

The researchers also aim to integrate machine learning into data analysis and conduct clinical trials in a hospital setting, testing their approach’s effectiveness in stroke prevention. They are also excited about the applications of their device in other neurological conditions, including brain injuries, seizures, and headaches.

International Year of Quantum Science and Technology 2025: here’s all you need to know

I’m pleased to welcome you to Physics World’s coverage supporting the International Year of Quantum Science and Technology (IYQ) in 2025. The IYQ is a worldwide celebration, endorsed by the United Nations (UN), to increase the public’s awareness of quantum science and its applications. The year 2025 was chosen as it marks the centenary of the initial development of quantum mechanics by Werner Heisenberg.

With six “founding partners”, including the Institute of Physics (IOP), which publishes Physics World, the IYQ has ambitious aims. It wants to show how quantum science can do everything from growing the economy, supporting industry and improving our health to helping the climate, delivering clean energy and reducing inequalities in education and research. You can join in by creating an event or donating money to the IYQ Global Fund.

Quantum science is burgeoning, with huge advances in basic research and applications such as quantum computing, communication, cryptography and sensors. Countless tech firms are getting in on the act, including giants like Google, IBM and Microsoft as well as start-ups such as Oxford Quantum Circuits, PsiQuantum, Quantinuum, QuEra and Riverlane. Businesses in related areas – from banking to aerospace – are eyeing up the possibilities of quantum tech too.

An official IYQ opening ceremony will be taking place at UNESCO headquarters in Paris on 4–5 February 2025. Perhaps the highlight of the year for physicists is a workshop from 9–14 June in Helgoland – the tiny island off the coast of Germany where Heisenberg made his breakthrough exactly 100 years ago. Many of the leading lights from quantum physics will be there, including five Nobel-prize winners.

Kicking off our coverage of IYQ, historian Robert P Crease from Stony Brook University has talked to some of the delegates at Helgoland to find out what they hope to achieve at the event. Crease also examines whether Heisenberg’s revelations were as clear-cut as he later claimed. Did he really devise the principles of quantum mechanics at precisely 3 a.m. one June morning 100 years ago?

From a different perspective, Oksana Kondratyeva explains how she has worked with US firm Rigetti Computing to create a piece of stained glass art inspired by the company’s quantum computers – a “quantum rose for the 21st century” as she puts it. You can find out more about her quantum-themed artwork in a special video she’s made.

Other quantum coverage in 2025 will include special episodes of the Physics World podcasts and Physics World Live. The next edition of Physics World Careers, due out in the new year, has a quantum theme, and there’ll also be a bumper, quantum-themed issue of the Physics World Briefing in May. The Physics World quantum channel will be regularly updated throughout the year so you don’t miss a thing.

The IOP has numerous quantum-themed public events lined up – including the QuAMP conference in September – building to a week of celebrations in November and December. A series of community events – spearheaded by the IOP’s quantum Business Innovation and Growth (qBIG) and history of physics groups – will include a public celebration at the Royal Institution, featuring physicist and broadcaster Jim Al-Khalili.

IOP Publishing, meanwhile, will be bringing out a series of Perspectives articles – personal viewpoints from leading quantum scientists – in Quantum Science and Technology. The journal will also be publishing roadmaps in quantum computing, sensing, communication and simulation, as well as focus issues on topics such as quantum machine learning and technologies for quantum gravity.

There’ll be many other events and initiatives by other organizations too, including from the other founding partners, the American Physical Society, the German Physical Society, Optica, SPIE and the Chinese Optical Society. What’s more, the IYQ is a truly global initiative, with almost 60 nations, led by Ghana and Mexico, helping to get the year off the ground, spreading the benefits of quantum science across the planet, including to the Global South.

The beauty of quantum science lies not only in its mystery but also in the ground-breaking, practical applications that it is inspiring. The IYQ deserves to be a huge success – in fact, I am sure it will.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

Extended cosmic-ray electron spectrum has a break but no other features

A new observation of electron and positron cosmic rays has confirmed the existence of what could be a “cooling break” in the energy spectrum at around 1 TeV, beyond which the particle flux decreases more rapidly. Aside from this break, however, the spectrum is featureless, showing no evidence of an apparent anomaly previously associated with a dark matter signal.

Cosmic ray is a generic term for an energetic charged particle that enters Earth’s atmosphere from space. Most cosmic rays are protons, some are heavier nuclei, and a small number (orders of magnitude fewer than protons) are electrons and their antiparticles (positrons).

“Because the electron’s mass is small, they radiate much more effectively than protons,” explains high-energy astrophysicist Felix Aharonian of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. “It makes the electrons very fragile, so the electrons we detect cannot be very old. That means the sources that produce them cannot be very far away.” Cosmic-ray electrons and positrons can therefore provide important information about our local cosmic environment.

Today, however, the origins of these electrons and positrons are hotly debated. They could be produced by nearby pulsars or supernova remnants. Some astrophysicists favour a secondary production model in which other cosmic rays interact locally with interstellar gas to create high-energy electrons and positrons.

Unexplained features

Previous measurements of the energy spectra of these cosmic rays revealed several unexplained features. In general, the particle flux decreases with increasing energy. At energies below about 1 TeV, the flux falls off as a steady power law. But at about 1 TeV there is a curious kink, or break point, beyond which the decline steepens to a power law with a larger exponent.

Later observations by the Dark Matter Particle Explorer (DAMPE) collaboration confirmed this kink, but also appeared to show peaks at higher energies. Some theoreticians have suggested these inhomogeneities could arise from local sources such as pulsars, whereas others have advanced more exotic explanations, such as signals from dark matter.

In the new work, members of the High Energy Stereoscopic System (HESS) collaboration looked for evidence of cosmic-ray electrons and positrons in 12 years of data from the HESS observatory in Namibia. HESS’s primary mission is to observe high-energy cosmic gamma rays. These gamma rays interact with the atmosphere, creating showers of energetic charged particles. The showers produce Cherenkov light, which is detected by HESS.

Similar but not identical

The observatory can also detect atmospheric showers created by cosmic rays such as protons and electrons. However, distinguishing between showers created by protons and those created by electrons or positrons is a significant challenge (HESS cannot differentiate between electrons and positrons). “The hadronic showers produced by protons and electronic showers are extremely similar but not identical,” says Aharonian. “Now we want to use this tiny difference to distinguish between electron-produced showers and proton-produced showers. The task is very difficult because we need to reject proton showers by four orders of magnitude and still keep a reasonable fraction of electrons.”

Fortunately, the large data sample from HESS meant that the team could identify weak signals associated with electrons and positrons. The researchers were therefore able to extend the flux measurements out to much higher energies. Whereas previous surveys could not look higher than about 5 TeV, the HESS researchers probed the 0.3–40 TeV range – although Aharonian concedes that the error bars are “huge” at higher energies.

The study confirms that, up until about 1 TeV, the spectrum falls off as a power law with an exponent (spectral index) of about 3.25. At about 1 TeV a sharp downward kink was also observed, with the spectrum steepening to an exponent of about 4.5 at higher energies. However, there is no sign of any bumps or peaks in the data.
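
To make the shape of the measured spectrum concrete, here is a short Python sketch of a smoothly broken power law using the two indices quoted above. The normalization, break sharpness and exact parameterization are illustrative choices, not the fit used by the HESS collaboration.

import numpy as np

def broken_power_law(E, norm=1.0, E_break=1.0, index_low=3.25, index_high=4.5, smooth=0.1):
    """Smoothly broken power-law flux dN/dE, with E and E_break in TeV."""
    step = (1 + (E / E_break)**(1 / smooth))**(-(index_high - index_low) * smooth)
    return norm * E**(-index_low) * step

E = np.logspace(np.log10(0.3), np.log10(40), 50)   # 0.3-40 TeV, the range probed by HESS
flux = broken_power_law(E)

# Local logarithmic slope: about -3.25 well below the 1 TeV break, about -4.5 well above it
slope = np.gradient(np.log(flux), np.log(E))
print(round(slope[0], 2), round(slope[-1], 2))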

This kink can be naturally explained, says Aharonian, as a “cooling break”, in which the low-energy electrons are produced by background processes, whereas the high-energy electrons are produced locally. “Teraelectronvolt electrons can only come from local sources,” he says. In theoretical models, both fluxes would follow power laws and the difference between their exponents would be 1 – close to the difference measured here. Aharonian believes that further information about this phenomenon could come from techniques such as machine learning or muon detection to distinguish between high-energy proton showers and electron showers.
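
The “difference of 1” comes from a standard radiative-cooling argument, sketched here for completeness (a textbook result rather than anything specific to the HESS analysis). Electrons injected with a power-law spectrum lose energy to synchrotron and inverse-Compton radiation at a rate that grows as the square of their energy, so the steady-state spectrum above the break steepens by one power:

\[
Q(E) \propto E^{-p}, \qquad \dot{E} \propto -E^{2}
\;\;\Longrightarrow\;\;
N(E) \propto \frac{1}{|\dot{E}|} \int_{E}^{\infty} Q(E')\,\mathrm{d}E' \propto E^{-2}\,E^{-(p-1)} = E^{-(p+1)},
\]

so the spectral index increases from p to p + 1 above the break, compared with the measured steepening from about 3.25 to about 4.5.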

“This is a unique measurement: it gives you a value of the electron–positron flux up to extremely high energies,” says Andrei Kounine of Massachusetts Institute of Technology, who works on the Alpha Magnetic Spectrometer (AMS-02) detector on the International Space Station. While he expresses some concerns about possible uncharacterized systematic errors at very high energies, he says they do not meaningfully diminish the value of the HESS team’s work. He notes that there are a variety of unexplained anomalies in the energy spectra of various cosmic-ray particles. “What we are missing at the moment,” he says, “is a comprehensive theory that considers all possible effects and tries to predict from fundamental measurements such as the proton spectrum the fluxes of all other elements.”

The research is described in Physical Review Letters.

Wafer mask alignment: Queensgate focuses on the move to 300 mm

Electronic chips are made using photolithography, which involves shining ultraviolet light through a patterned mask and onto a semiconductor wafer. The light activates a photoresist on the surface, which allows the etching of a pattern on the wafer. Through successive iterations of photolithography and the deposition of metals, devices with features as small as a few dozen nanometres are created.

Crucial to this complex manufacturing process is aligning the wafer with successive masks. This must be done in a rapid and repeatable manner, while maintaining nanometre precision throughout the manufacturing process. That’s where Queensgate – part of precision optical and mechanical instrumentation manufacturer Prior Scientific – comes into the picture.

For 45 years, UK-based Queensgate has led the way in the development of nanopositioning technologies. The firm spun out of Imperial College London in 1979 as a supplier of precision instrumentation for astronomy. Its global reputation was sealed when NASA chose Queensgate technology for use on the Space Shuttle and the International Space Station. The company has worked for over two decades with the hard-disk-drive maker Seagate to develop technologies for the rapid inspection of read/write heads during manufacture. Queensgate is also involved in a longstanding collaboration with the UK’s National Physical Laboratory (NPL) to develop nanopositioning technologies that are being used to define international standards of measurement.

Move to larger wafers

The semiconductor industry is in the process of moving from 200 mm to 300 mm wafers – a change that more than doubles the number of chips that can be produced from each wafer, since the usable area increases by a factor of 2.25. Processing the larger and heavier wafers requires a new generation of equipment that can position wafers with nanometre precision.

Queensgate already works with original equipment manufacturers (OEMs) to make optical wafer-inspection systems that are used to identify defects during the processing of 300 mm wafers. Now the company has set its sights on wafer alignment systems. The move to 300 mm wafers offers the company an opportunity to contribute to the development of next-generation alignment systems, says Queensgate product manager Craig Goodman.

“The wafers are getting bigger, which puts a bigger strain on the positioning requirements and we’re here to help solve problems that that’s causing,” explains Goodman. “We are getting lots of inquiries from OEMs about how our technology can be used in the precision positioning of wafers used to produce next-generation high-performance semiconductor devices”.

The move to 300 mm means that fabs need to align wafers that are both larger in area and much heavier. What is more, a much heavier chuck is required to hold a 300 mm wafer during production. This leads to conflicting requirements for a positioning system. It must be accurate over shorter distances as feature sizes shrink, but also be capable of moving a much larger and much heavier wafer and chuck. Today, Queensgate’s wafer stage can handle wafers weighing up to 14 kg while achieving a spatial resolution of 1.5 nm.

Goodman explains that Queensgate’s technology is not used to make large adjustments in the relative alignment of wafer and mask – which is done by longer travel stages using technologies such as air-bearings. Instead, the firm’s nanopositioning systems are used in the final stage of alignment, moving the wafer by less than 1 mm at nanometre precision.

Eliminating noise

Achieving this precision was a huge challenge that Queensgate has overcome by focusing on the sources of noise in its nanopositioning systems. Goodman says that there are two main types of noise that must be minimized. One is external vibration, which can come from a range of environmental sources – even human voices. The other is noise in the electronics that control the nanopositioning system’s piezoelectric actuators.

Goodman explains that noise reduction is achieved through the clever design of the mechanical and electronic systems used for nanopositioning. The positioning stage, for example, must be stiff to reject vibrational noise, while notch filters are used to reduce the effect of electronic noise to the sub-nanometre level.
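
To give a flavour of what a notch filter does in this setting, the short Python sketch below designs one with SciPy and checks its gain on and off the notch frequency. The sample rate, notch frequency and Q factor are assumptions chosen for illustration – the article does not describe Queensgate’s actual filter design or control architecture.

import numpy as np
from scipy import signal

fs = 10_000.0     # control-loop sample rate in Hz (assumed)
f_notch = 50.0    # narrow-band disturbance to reject, e.g. mains pickup (assumed)
Q = 30.0          # quality factor: higher Q gives a narrower notch

b, a = signal.iirnotch(f_notch, Q, fs=fs)

# Gain at the notch frequency should be close to zero; elsewhere close to one
freqs = np.array([5.0, f_notch, 500.0])
_, h = signal.freqz(b, a, worN=freqs, fs=fs)
for f, gain in zip(freqs, np.abs(h)):
    print(f"{f:6.1f} Hz  ->  gain {gain:.4f}")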

Queensgate provides its nanopositioning technology to OEMs, who integrate it within their products – which are then sold to chipmakers. Goodman says that Queensgate works in-house with its OEM customers to ensure that the desired specifications are achieved. “A stage or a positioner for 300 mm wafers is a highly customized application of our technologies,” he explains.

While the resulting nanopositioning systems are state of the art, Goodman points out that they will be used in huge facilities that process tens of thousands of wafers per month. “It is our aim and our customer’s aim that Queensgate nanopositioning technologies will be used in the mass manufacture of chips,” says Goodman. This means that the system must be very fast to achieve high throughput. “That is why we are using piezoelectric actuators for the final micron of positioning – they are very fast and very precise.”

Today most chip manufacturing is done in Asia, but there are ongoing efforts to boost production in the US and Europe to ensure secure supplies in the future. Goodman says this trend to semiconductor independence is an important opportunity for Queensgate. “It’s a highly competitive, growing and interesting market to be a part of,” he says.

Setting the scale: the life and work of Anders Celsius

On Christmas Day in 1741, when Swedish scientist Anders Celsius first noted down the temperature in his Uppsala observatory using his own 100-point – or “Centi-grade” – scale, he would have had no idea that this was to be his greatest legacy.

A newly published, engrossing biography by Ian Hembrow – Celsius: a Life and Death by Degrees – tells the life story of the man whose name is so well known. The book reveals the broader scope of Celsius’ scientific contributions beyond the famous centigrade scale, as well as highlighting the collaborative nature of scientific endeavour and drawing parallels with modern scientific challenges such as climate change.

That winter, Celsius, who was at the time in his early 40s, was making repeated measurements of the period of a pendulum – the time it takes for one complete swing back and forth. He could use that to calculate a precise value for the acceleration caused by gravity, and he was expecting to find that value to be very slightly greater in Sweden than at more southern latitudes. That would provide further evidence for the flattening of the Earth at the poles, something that Celsius had already helped establish. But it required great precision in the experimental work, and Celsius was worried that the length (and therefore the period) of the pendulum would vary slightly with temperature. He had started these measurements that summer and now it was winter, so he had lit a fire in the hope of matching the summer temperatures. But would that suffice?

Throughout his career, Celsius had been a champion of precise measurement, and he knew that temperature readings were often far from precise. He was using a thermometer sent to him by the French astronomer Joseph-Nicolas Delisle, with a design based on the expansion of mercury. That method was promising, but Delisle used a scale that took the boiling point of water and the temperature in the basement of his home in Paris as its two reference points. Celsius was unconvinced by the latter. So he made adaptations (which are still there to be seen in an Uppsala museum), twisting wire around the glass tube at the boiling and freezing points of water, and dividing the length between the two into 100 even steps.

The centigrade scale, later renamed in his honour, was born. In his first recorded readings he found the temperature in the pleasantly heated room to be a little over 80 degrees! Following Delisle’s system – perhaps noting that this would mean he had to do less work with negative numbers – he placed the boiling point at zero on his scale, and the freezing point at 100. It was some years later, after his death, that a scientific consensus flipped the scale on its head to create the version we know so well today.
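
Since the original scale ran the “wrong” way, with boiling at 0 and freezing at 100, converting one of Celsius’ readings to the modern scale is simply a matter of subtracting it from 100 – so that first reading of a little over 80 degrees corresponds to a room at a little under 20 °C:

\[
T_{\text{modern}} = 100 - T_{\text{original}}, \qquad T_{\text{original}} \approx 80 \;\Rightarrow\; T_{\text{modern}} \approx 20\ ^{\circ}\mathrm{C}.
\]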

Hembrow does a great job at placing this moment in the context of the time, and within the context of Celsius’ life. He spends considerable time recounting the scientist’s many other achievements and the milestones of his fascinating life.

The expedition that had established the flattening of the Earth at the poles was the culmination of a four-year grand tour that Celsius had undertaken in his early 30s. Already a professor at Uppsala University, in the town where he had grown up in an academic family, he travelled to Germany, Italy, France and London. There he saw at first hand the great observatories that he had heard of and established links with the people who had built and maintained them.

On his extended travels he became a respected figure in the world of science and so it was no surprise when he was selected to join a French expedition to the Arctic in 1736, led by mathematician Pierre Louis Maupertuis, to measure a degree of latitude. Isaac Newton had died just a few years before and his ideas relating to gravitation were not yet universally accepted. If it could be shown that the distance between two lines of latitude was greater near the poles than on the equator, that would prove Newton right about the shape of the Earth, a key prediction of his theory of gravitation.

After a period in London equipping themselves with precision instruments, the team started the arduous journey to the Far North. Once there they had to survey the land – a task made challenging by the thick forest and hilly territory. They selected nine mountains to climb with their heavy equipment, felling dozens of trees on each and then creating a sturdy wooden marker on each peak. This allowed them to create a network of triangles stretching north, with each point visible from the two next to it. But they also needed one straight line of known length to complete their calculations. With his local knowledge, Celsius knew that this could only be achieved on the frozen surface of the Torne river – and that it would involve several weeks of living on the ice, working largely in the dark and the intense cold, and sleeping in tents.

After months of hardship, the calculations were complete and showed that the length of one degree of latitude in the Arctic was almost 1.5 km longer than the equivalent value in France. The spheroid shape of the Earth had been established.

Of course, not everybody accepted the result. Politics and personalities got in the way. Hembrow uses this as the starting point for a polemic about aspects of modern science and climate change with which he ends his fine book. He argues that the painstaking work carried out by an international team, willing to share ideas and learn from each other, provides us with a template by which modern problems must be addressed.

Considering how often we use his name, most of us know little about Celsius. This book helps to address that deficit. It is a very enjoyable and accessible read and would appeal, I think, to anybody with an interest in the history of science.

  • 2024 History Press 304pp £25hb

Vertical-nanowire transistors defeat the Boltzmann tyranny

A new transistor made from semiconducting vertical nanowires of gallium antimonide (GaSb) and indium arsenide (InAs) could rival today’s best silicon-based devices. The new transistors are switched on and off by electrons tunnelling through an energy barrier, making them highly energy-efficient. According to their developers at the Massachusetts Institute of Technology (MIT) in the US, they could be ideal for low-energy applications such as the Internet of Things (IoT).

Electronic transistors use an applied voltage to regulate the flow of electricity – that is, electrons – within a semiconductor chip. When this voltage is applied to a conventional silicon transistor, electrons climb over an energy barrier from one side of the device to the other, and it switches from an “off” state to an “on” one. This type of switching is the basis of modern information technology, but there is a fundamental physical limit on the threshold voltage required to get the electrons moving. This limit, which is sometimes termed the “Boltzmann tyranny” because it stems from the Boltzmann-like energy distribution of electrons in a semiconductor, puts a cap on the energy efficiency of this type of transistor.
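
That cap is the familiar thermionic limit on the subthreshold swing – the 60 mV/decade figure quoted later in this article. As a reminder (a textbook result rather than anything specific to the MIT work), the current of a barrier-lowering transistor depends exponentially on the gate voltage, so:

\[
I_{\mathrm{D}} \propto \exp\!\left(\frac{qV_{\mathrm{G}}}{k_{\mathrm{B}}T}\right)
\;\;\Longrightarrow\;\;
SS = \left(\frac{\mathrm{d}\,\log_{10} I_{\mathrm{D}}}{\mathrm{d}V_{\mathrm{G}}}\right)^{-1}
= \frac{k_{\mathrm{B}}T}{q}\,\ln 10 \approx 60\ \mathrm{mV/decade}\ \text{at}\ T = 300\ \mathrm{K}.
\]

Devices that switch by tunnelling through the barrier, rather than by thermal activation over it, are not bound by this limit.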

Highly precise process

In the new work, MIT researchers led by electrical engineer Jesús A del Alamo made their transistor using a top-down fabrication technique they developed. This extremely precise process uses high-quality, epitaxially-grown structures and both dry and wet etching to fabricate nanowires just 6 nm in diameter. The researchers then placed a gate stack composed of a very thin gate dielectric and a metal gate on the sidewalls of the nanowires. Finally, they added point contacts to the source, gate and drain of the transistors using multiple planarization and etch-back steps.

The sub-10 nm size of the devices and the extreme thinness of the gate dielectric (just 2.4 nm) means that electrons are confined in a space so small that they can no longer move freely. In this quantum confinement regime, electrons no longer climb over the thin energy barrier at the GaSb/InAs heterojunction. Instead, they tunnel through it. The voltage required for such a device to switch is much lower than it is for traditional silicon-based transistors.

Steep switching slope and high drive current

Researchers have been studying tunnelling-type transistors for more than 20 years, notes Yanjie Shao, a postdoctoral researcher in nanoelectronics and semiconductor physics at MIT and the lead author of a study in Nature Electronics on the new transistor. Such devices are considered attractive because they allow for ultra-low-power electronics. However, they come with a major challenge: it is hard to maintain a sharp transition between “off” and “on” while delivering a high drive current.

When the project began five years ago, Shao says the team “believed in the potential of the GaSb/InAs ‘broken-band’ system to overcome this difficulty”. But it wasn’t all plain sailing. Fabricating such small vertical nanowires was, he says, “one of the biggest problems we faced”. Making a high-quality gate stack with a very low density of electronic trap states (states within dielectric materials that capture and release charge carriers in a semiconductor channel) was another challenge.

After many unsuccessful attempts, the team found a way to make the system work. “We devised a plasma-enhanced deposition method to make the gate dielectric and this was key to obtaining exciting transistor performance,” Shao tells Physics World.

The researchers also needed to understand the behaviour of tunnelling transistors, which Shao calls “not easy”. The task was made possible, he adds, by a combination of experimental work and first-principles modelling by Ju Li’s group at MIT, together with quantum transport simulation by David Esseni’s group at the University of Udine, Italy. These studies revealed that band alignment and near-periphery scaling of the number of conduction modes at the heterojunction interface play key roles in the physics of electrons under extreme confinement.

The reward for all this work is a device with a drive current as high as 300 μA/μm and a switching slope of less than 60 mV/decade (a decade here being a factor of 10 change in the current), meaning that the supply voltage is just 0.3 V. This is below the fundamental limit achievable with silicon-based devices, and around 20 times better than other tunnelling transistors of its type.

Potential for novel devices

Shao says the most likely applications for the new transistor are in ultra-low-voltage electronics. These will be useful for artificial intelligence and Internet of Things (IoT) applications, which require devices with higher energy efficiencies. Shao also hopes the team’s work will bring about a better understanding of the physics at surfaces and interfaces that feature extreme quantum confinement – something that could lead to novel devices that benefit from such nanoscale physics.

The MIT team is now developing transistors with a slightly different configuration that features vertical “nano-fins”. These could make it possible to build more uniform devices with less structural variation across the surface. “Being so small, even a variation of just 1 nm can adversely affect their operation,” Shao says. “We also hope that we can bring this technology closer to real manufacturing by optimizing the process technology.”

Copyright © 2025 by IOP Publishing Ltd and individual contributors