Why academia should be funded by governments, not students

In an e-mail to staff in September 2024, Christopher Day, the vice-chancellor of Newcastle University in the UK, announced a £35m shortfall in its finances for 2024. Unfortunately, Newcastle is not alone in facing financial difficulties. The problem is largely due to UK universities obtaining much of their funding by charging international students exorbitant tuition fees of tens of thousands of pounds per year. In 2022 international students made up 26% of the total student population. But with the number of international students coming to the UK recently falling and tuition fees for domestic students having increased by less than 6% over the last decade, the income from students is no longer enough to keep our universities afloat.

Both Day and Universities UK (UUK) – the advocacy organization for universities in the UK – pushed for the UK government to allow universities to increase fees for both international and domestic students. They suggested raising the cap on tuition fees for UK students to £13,000 per year, much more than the new cap that was set earlier this month at £9535. Increasing tuition fees further, however, would be a disaster for our education system.

The introduction of student fees was sold to universities in the late 1990s as a way to get more money, and sold to the wider public as a way to allow “market fairness” to improve the quality of education given by universities. In truth, it was never about either of these things.

Tuition fees were about making sure that the UK government would not have to worry about universities pressuring them to increase funding. Universities instead would have to rationalize higher fees with the students themselves. But it is far easier to argue that “we need more money from you, the government, to continue the social good we do” than it is to say “we need more money from you, the students, to keep giving you the same piece of paper”.

Degree-level education in the UK is now treated as a private commodity, to be sold by universities and bought by students, with domestic students taking out a loan from the government that they pay back once they earn above a certain threshold. But this implies that it is only students who profit from the education and that the only benefit for them of a degree is a high-paid job.

Education ends up reduced to an initial financial outlay for a potential future financial gain, with employers looking for job applicants with a degree regardless of what it is in. We might as well just sell students pieces of paper boasting about how much money they have “invested” in themselves.

Yet going to university brings so much more to students than just a boost to their future earnings. Just look, for example, at the high student satisfaction for arts and humanities degrees compared to business or engineering degrees. University education also brings huge social, cultural and economic benefits to the wider community at a local, regional and national level.

UUK estimates that for every £1 of public money invested in the higher-education sector across the UK, £14 is put back into the economy – totalling £265bn per year. Few other areas of government spending give such large economic returns for the UK. No wonder, then, that other countries continue to fund their universities centrally through taxes rather than fees. (Countries such as Germany that do levy fees charge only a nominal amount, as the UK once did.)

Some might say that the public should not pay for students to go to university. But that argument doesn’t stack up. We all pay for roads, schools and hospitals from general taxation whether we use those services or not, so the same should apply for university education. Students from Scotland who study in the country have their fees paid by the state, for example.

Up in arms

Thankfully, some subsidy still remains in the system, mainly for technical degrees such as the sciences and medicine. These courses on average cost more to run than humanities and social sciences courses due to the cost of practical work and equipment. However, as budgets tighten, even this is being threatened.

In 2004 Newcastle closed its physics degree programme due to its costs. While the university soon reversed the mistake, it lives long in the memories of those who today still talk about the incalculable damage this and similar cuts did to UK physics. Indeed, I worry whether this renewed focus on profitability, which over the last few years has led to many humanities programmes and departments closing at UK universities, could again lead to closures in the sciences. Without additional funding, it seems inevitable.

University leaders should have been up in arms when student fees were introduced in the early 2000s. Instead, most went along with them, and are now reaping what they sowed. University vice-chancellors shouldn’t be asking the government to allow universities to charge ever higher fees – they should be telling the government that we need more money to keep doing the good we do for this country. They should not view universities as private businesses and instead lobby the government to reinstate a no-fee system and to support universities again as being social institutions.

If this doesn’t happen, then the UK academic system will fall. Even if we do manage to somehow cut costs in the short term by around £35m per university, it will only prolong the inevitable. I hope vice-chancellors and the UK government wake up to this fact before it is too late.

Ultrafast electron entanglement could be studied using helium photoemission

The effect of quantum entanglement on the emission time of photoelectrons has been calculated by physicists in China and Austria. Their result includes several counter-intuitive predictions that could be testable with improved free-electron lasers.

The photoelectric effect involves quantum particles of light (photons) interacting with electrons in atoms, molecules and solids. This can result in the emission of an electron (called a photoelectron), but only if the photon energy is greater than the binding energy of the electron.
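
In symbols (a standard textbook statement of this condition, not an equation taken from the new study), the emitted photoelectron carries kinetic energy

    E_{\mathrm{kin}} = \hbar\omega - E_{\mathrm{b}},

so emission is only possible when the photon energy \hbar\omega exceeds the binding energy E_{\mathrm{b}}.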

“Typically when people calculate the photoelectric effect they assume it’s a very weak perturbation on an otherwise inert atom or solid surface and most of the time does not suffer anything from these other atoms or photons coming in,” explains Wei-Chao Jiang of Shenzhen University in China. In very intense radiation fields, however, the atom may simultaneously absorb multiple photons, and these can give rise to multiple emission pathways.

Jiang and colleagues have done a theoretical study of the ionization of a helium atom from its ground state by intense pulses of extreme ultraviolet (XUV) light. At sufficient photon intensities, there are two possible pathways by which a photoelectron can be produced. In the first, called direct single ionization, an electron in the ground state simply absorbs a photon and escapes the potential well. The second is a two-photon pathway called excitation ionization, in which both of the helium electrons absorb a photon from the same light pulse. One of them subsequently escapes, while the other remains in a higher energy level in the residual ion.

Distinct pathways

The two photoemission pathways are distinct, so making a measurement of the emitted electron reveals information about the state of the bound electron that was left behind. The light pulse therefore creates an entangled state in which the two electrons are described by the same quantum wavefunction. To better understand the system, the researchers modelled the emission time for an electron undergoing excitation ionization relative to an electron undergoing direct single ionization.
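
Schematically (with notation chosen here for illustration rather than taken from the paper), the pulse leaves the pair of electrons in an entangled state of the form

    |\Psi\rangle \approx c_1\,|\varepsilon_1\rangle \otimes |\mathrm{He}^{+}(1s)\rangle + c_2\,|\varepsilon_2\rangle \otimes |\mathrm{He}^{+}(n=2)\rangle,

where |\varepsilon_1\rangle and |\varepsilon_2\rangle are photoelectron wavepackets of different energies, so a measurement of the emitted electron reveals whether the bound electron was left in the ionic ground state or an excited state.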

“The naïve expectation is that, if I have a process that takes two photons, that process will take longer than one where one photon does the whole thing,” says team member Joachim Burgdörfer of the Vienna University of Technology. What the researchers calculated, however, is that photoelectrons emitted by excitation ionization were most likely to be detected about 200 as earlier than those produced by direct single ionization. This can be explained semi-classically by noting that the photoemission that creates the helium ion (He+) must occur early enough in the pulse for the second, excitation step to follow. Excitation ionization therefore requires earlier photoemission.

The researchers believe that, in principle, it should be possible to test their model using attosecond streaking or RABBITT (reconstruction of attosecond beating by interference of two-photon transitions). These are special types of pump-probe spectroscopy that can observe interactions at ultrashort timescales. “Naïve thinking would say that, using a 500 as pulse as a pump and a 10 fs pulse as a probe, there is no way you can get time resolution down to say, 10 as,” says Burgdörfer. “This is where recently developed techniques such as streaking or RABBITT  come in. You no longer try to keep the pump and probe pulses apart, instead you want overlap between the pump and probe and you extract the time information from the phase information.”

Simulated streaking

The team also did numerical simulations of the expected streaking patterns at one energy and found that they were consistent with an analytical calculation based on their intuitive picture. “Within a theory paper, we can only check for mutual consistency,” says Burgdörfer.

The principal hurdle to actual experiments lies in generating the required XUV pulses. Pulses from high harmonic generation may not be sufficiently strong to excite the two-photon emission. Free electron laser pulses can be extremely high powered, but are prone to phase noise. However, the researchers note that entanglement between a photoelectron and an ion has been achieved recently at the FERMI free electron laser facility in Italy.

“Testing these predictions employing experimentally realizable pulse shapes should certainly be the next important step,” Burgdörfer says. Beyond this, the researchers intend to study entanglement in more complex systems such as multi-electron atoms or simple molecules.

Paul Corkum at Canada’s University of Ottawa is intrigued by the research. “If all we’re going to do with attosecond science is measure single electron processes, probably we understood them before, and it would be disappointing if we didn’t do something more,” he says. “It would be nice to learn about atoms, and this is beginning to go into an atom or at least its theory thereof.” He cautions, however, that “If you want to do an experiment this way, it is hard.”

The research is described in Physical Review Letters.  

Noodles of fun as UK researchers create the world’s thinnest spaghetti

While spaghetti might have a diameter of a couple of millimetres and capelli d’angelo (angel hair) is around 0.8 mm, the thinnest known pasta to date is thought to be su filindeu (threads of God), which is made by hand in Sardinia, Italy, and is about 0.4 mm in diameter.

That is, however, until researchers in the UK created spaghetti coming in at a mindboggling 372 nanometres (0.000372 mm) across (Nanoscale Adv. 10.1039/D4NA00601A).

About 200 times thinner than a human hair, the “nanopasta” is made using a technique called electrospinning, in which threads of flour and liquid are pulled through the tip of a needle by an electric charge.

“To make spaghetti, you push a mixture of water and flour through metal holes,” notes Adam Clancy from University College London (UCL). “In our study, we did the same except we pulled our flour mixture through with an electrical charge. It’s literally spaghetti but much smaller.”

While each individual strand is too thin to see directly with the human eye or with a visible light microscope, the team used the threads to form a mat of nanofibres about two centimetres across, creating in effect a mini lasagne sheet.

The researchers are now investigating how the starch-based nanofibres could be used for medical purposes such as wound dressing, for scaffolds in tissue regrowth and even in drug delivery. “We want to know, for instance, how quickly it disintegrates, how it interacts with cells, and if you could produce it at scale,” says UCL materials scientist Gareth Williams.

But don’t expect to see nanopasta hitting the supermarket shelves anytime soon. “I don’t think it’s useful as pasta, sadly, as it would overcook in less than a second, before you could take it out of the pan,” adds Williams. And no-one likes rubbery pasta.

Lens breakthrough paves the way for ultrathin cameras

A research team headed up at Seoul National University has pioneered an innovative metasurface-based folded lens system, paving the way for a new generation of slimline cameras for use in smartphones and augmented/virtual reality devices.

Traditional lens modules, built from vertically stacked refractive lenses, have fundamental thickness limitations, mainly due to the need for space between lenses and the intrinsic volume of each individual lens. In an effort to overcome these restrictions, the researchers – also at Stanford University and the Korea Institute of Science and Technology – have developed a lens system using metasurface folded optics. The approach enables unprecedented manipulation of light with exceptional control of intensity, phase and polarization – all while maintaining thicknesses of less than a millimetre.

Folding the light path

As part of the research – detailed in Science Advances – the team placed metasurface optics horizontally on a glass wafer. These metasurfaces direct light through multiple folded diagonal paths within the substrate, optimizing space usage and demonstrating the feasibility of a 0.7 mm-thick lens module for ultrathin cameras.

“Most prior research has focused on understanding and developing single metasurface elements. I saw the next step as integrating and co-designing multiple metasurfaces to create entirely new optical systems, leveraging each metasurface’s unique capabilities. This was the main motivation for our paper,” says co-author Youngjin Kim, a PhD candidate in the Optical Engineering and Quantum Electronics Laboratory at Seoul National University.

According to Kim, creating a metasurface folded lens system requires a wide range of interdisciplinary expertise: a fundamental understanding of conventional imaging systems, such as ray-optics-based lens module design; knowledge of point spread function and modulation transfer function analysis – both used in imaging and optics to describe the performance of imaging systems – as well as of imaging simulations; and a deep awareness of the physical principles behind designing metasurfaces and of the nanofabrication techniques for constructing metasurface systems.

“In this work, we adapted traditional imaging system design techniques, using the commercial tool Zemax, for metasurface systems,” Kim adds. “We then used nanoscale simulations to design the metasurface nanostructures and, finally, we employed lithography-based nanofabrication to create a prototype sample.”

Smoothing the “camera bump”

The researchers evaluated their proposed lens system by illuminating it with an 852 nm laser, observing that it could achieve near-diffraction-limited imaging quality. The folding of the optical path length reduced the lens module thickness to half of the effective focal length (1.4 mm), overcoming inherent limitations of conventional optical systems.

“Potential applications include fully integrated, miniaturized, lightweight camera systems for augmented reality glasses, as well as solutions to the ‘camera bump’ issue in smartphones and miniaturized microscopes for in vivo imaging of live animals,” Kim explains.

Kim also highlights some more general advantages of using novel folded lens systems in devices like compact cameras, smartphones and augmented/virtual reality devices – especially when compared with existing approaches – including the ultraslim and lightweight form factor and the potential for mass production using standard semiconductor fabrication processes.

When it comes to further research and practical applications in this area over the next few years, Kim points out that metasurface folded optics “offer a powerful platform for light modulation” within an ultrathin form factor, particularly since the system’s thickness remains constant regardless of the number of metasurfaces used.

“Recently, there has been growing interest in co-designing hardware-based optical elements with software-based AI-based image processing for end-to-end optimization, which maximizes device functionality for specific applications,” he says. “Future research may focus on combining metasurface folded optics with end-to-end optimization to harness the strengths of both advanced hardware and AI.”

Martin Rees, Carlo Rovelli and Steven Weinberg tackle big questions to mark Oxford anniversary

If you want to read about controversies in physics, a (brief) history of the speed of light or the quest for dark matter, then make sure to check out this collection of papers to mark the 10th anniversary of the St Cross Centre for the History and Philosophy of Physics (HAPP).

HAPP was co-founded in 2014 by Jo Ashbourn and James Dodd and since then the centre has run a series of one-day conferences as well as standalone lectures and seminars about big topics in physics and philosophy.

Based on these contributions, HAPP has now published a 10th anniversary commemorative volume in the open-access Journal of Physics: Conference Series, which is published by IOP Publishing.

The volume is structured around four themes: physicists across history; space and astronomy; philosophical perspectives; and concepts in physics.

The big names in physics to write for the volume include Martin Rees on the search for extraterrestrial intelligence across a century; Carlo Rovelli on scientific thinking across the centuries; and the late Steven Weinberg on the greatest physics discoveries of the 20th century.

I was delighted to also contribute to the volume based on a talk I gave in February 2020 for a one-day HAPP meeting about big science in physics.

The conference covered the past, present and future of big science and I spoke about the coming decade of new facilities in physics and the possible science that may result. I also included my “top 10 facilities to watch” for the coming decade.

In a preface to the volume, Ashbourn writes that HAPP was founded to provide “a forum in which the philosophy and methodologies that inform how current research in physics is undertaken would be included alongside the history of the discipline in an accessible way that could engage the general public as well as scientists, historians and philosophers,” adding that she is “looking forward” to HAPP’s second decade.

  • The HAPP Centre is now looking for financial support to allow it to continue its activities – donate here.

Top-cited authors from North America share their tips for boosting research impact

More than 80 papers from North America have been recognized with a Top Cited Paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors of papers published in IOP Publishing and its partners’ journals between 2021 and 2023 that are in the top 1% of the most cited papers.

Among the awardees are astrophysicists Sarah Vigeland and Stephen Taylor, who are co-authors of the winning article examining the gravitational-wave background using NANOGrav data. “This is an incredible validation of the hard work of the entire NANOGrav collaboration, who persisted over more than 15 years in the search for gravitational wave signals at wavelengths of lightyears,” say Vigeland and Taylor in a joint e-mail.

They add that the article has sparked unexpected “interest and engagement” from the high-energy theory and cosmology communities and that the award is a “welcome surprise”.

While citations give broader visibility, the authors say that research is not impactful because of its citations alone, but rather it attracts citations because of its impact and importance.

“Nevertheless, a high citation count does signal to others that a paper is relevant and worth reading, which will attract broader audiences and new attention,” they explain, adding that a paper often becomes highly cited because it tackles “an interesting problem” that intersects a variety of different disciplines. “Such work will attract a broad readership and make it more likely for researchers to cite a paper,” they say.

Aiming for impact

Another top-cited award winner from North America is bio-inspired engineer Carl White who is first author of the winning article about a tuna-inspired robot called Tunabot Flex. “In our paper, we designed and tested a research platform based on tuna to close the performance gap between robotic and biological systems,” says White. “Using this platform, termed Tunabot Flex, we demonstrated the role of body flexibility in high-performance swimming.”

White notes that the interdisciplinary nature of the work between engineers and biologists led to researchers from a variety of fields citing it. “Our paper is just one example of the many studies benefitting from the rich cross-pollination of ideas to new contexts,” says White, adding that the IOP Publishing award is a “great honour”.

White states that scientific knowledge grows in “irregular and interconnected” ways and tracing citations from one paper to another “provides transparency into the origins of ideas and their development”.

“My advice to researchers looking to maximize their work’s impact is to focus on a novel idea that addresses a significant need,” says White. “Innovative work fills gaps in existing literature, so you must identify a gap and then characterize its presence. Show how your work is groundbreaking by thoroughly placing it within the context of your field.”

  • For the full list of top-cited papers from North America for 2024, see here. To read the award-winning research click here and here.
  • For the full in-depth interviews with White, Vigeland and Taylor, see here.

Quantum error correction research yields unexpected quantum gravity insights

In computing, quantum mechanics is a double-edged sword. While computers that use quantum bits, or qubits, can perform certain operations much faster than their classical counterparts, these qubits only maintain their quantum nature – their superpositions and entanglement – for a limited time. Beyond this so-called coherence time, interactions with the environment, or noise, lead to loss of information and errors. Worse, because quantum states cannot be copied – a consequence of quantum mechanics known as the no-cloning theorem – or directly observed without collapsing the state, correcting these errors requires more sophisticated strategies than the simple duplications used in classical computing.
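
For contrast, here is a minimal sketch (in Python, purely illustrative) of the “simple duplication” that classical computers can rely on but that the no-cloning theorem rules out for unknown quantum states:

    # Classical 3-bit repetition code: copy the bit, then recover it by
    # majority vote. Unknown quantum states cannot be protected this way,
    # because they cannot be copied.
    def encode(bit: int) -> list[int]:
        return [bit, bit, bit]

    def decode(noisy: list[int]) -> int:
        return int(sum(noisy) >= 2)   # majority vote

    assert decode([1, 0, 1]) == 1     # survives one flipped copy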

One such strategy is known as an approximate quantum error correction (AQEC) code. Unlike exact QEC codes, which aim for perfect error correction, AQEC codes help quantum computers return to almost, though not exactly, their intended state. “When we can allow mild degrees of approximation, the code can be much more efficient,” explains Zi-Wen Liu, a theoretical physicist who studies quantum information and computation at China’s Tsinghua University. “This is a very worthwhile trade-off.”

The problem is that the performance and characteristics of AQEC codes are poorly understood. For instance, AQEC conventionally entails the expectation that errors will become negligible as system size increases. For random local noise, this can in fact be achieved simply by appending a series of redundant qubits to the logical state; the likelihood of the logical information being affected is then vanishingly small. However, this approach is ultimately unhelpful. This raises the questions: what separates good (that is, non-trivial) codes from bad ones? And is this dividing line universal?
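
As a toy illustration of why such padding trivially suppresses errors (our notation, not the paper’s): if a single local error strikes one of k + m physical qubits uniformly at random, where k qubits carry the logical state and m are redundant, then

    P(\text{logical information affected}) = \frac{k}{k+m} \longrightarrow 0 \quad \text{as } m \to \infty,

yet the code does nothing genuinely protective, which is why a sharper criterion is needed.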

Establishing a new boundary

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so.

To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity.

Circuit complexity, an important concept in both computer science and physics, represents the optimal cost of a computational process. This cost can be assessed in many ways, with the most intuitive metrics being the minimum time or the “size” of computation required to prepare a quantum state using local gate operations. For instance, how long does it take to link up the individual qubits to create the desired quantum states or transformations needed to complete a computational task?

The researchers found that if the subsystem variance falls below a certain threshold, any code within this regime is considered a nontrivial AQEC code and subject to a lower bound of circuit complexity. This finding is highly general and does not depend on the specific structures of the system. Hence, by establishing this boundary, the researchers gained a more unified framework for evaluating and using AQEC codes, allowing them to explore broader error correction schemes essential for building reliable quantum computers.

A quantum leap

But that wasn’t all. The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature.

One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. These conditions include long-range entanglement, which is a circuit complexity condition, and topological entanglement entropy, which quantifies the extent of long-range entanglement. The new framework clarifies the connection between these entanglement conditions and topological quantum order, allowing researchers to better understand these exotic phases of matter.

A more surprising connection, though, concerns one of the deepest questions in modern physics: how do we reconcile quantum mechanics with Einstein’s general theory of relativity? While quantum mechanics governs the behavior of particles at the smallest scales, general relativity accounts for gravity and space-time on a cosmic scale. These two pillars of modern physics have some incompatible intersections, creating challenges when applying quantum mechanics to strongly gravitational systems.

In the 1990s, a mathematical framework called the anti-de Sitter/conformal field theory correspondence (AdS/CFT) emerged as a way of using CFT to study quantum gravity even though it does not incorporate gravity. As it turns out, the way quantum information is encoded in CFT has conceptual ties to QEC. Indeed, these ties have driven recent advances in our understanding of quantum gravity.

By studying CFT systems at low energies and identifying connections between code properties and intrinsic CFT features, the researchers discovered that the CFT codes that pass their AQEC threshold might be useful for probing certain symmetries in quantum gravity. New insights from AQEC codes could even lead to new approaches to spacetime and gravity, helping to bridge the divide between quantum mechanics and general relativity.

Some big questions remain unanswered, though. One of these concerns the line between trivial and non-trivial codes. For instance, what happens to codes that live close to the boundary? The researchers plan to investigate scenarios where AQEC codes could outperform exact codes, and to explore ways to make the implications for quantum gravity more rigorous. They hope their study will inspire further explorations of AQEC’s applications to other interesting physical systems.

The research is described in Nature Physics.

Mechanical qubit could be used in quantum sensors and quantum memories

Researchers in Switzerland have created a mechanical qubit using an acoustic wave resonator, marking a significant step forward in quantum acoustodynamics. The qubit is not good enough for quantum logic operations, but researchers hope that further efforts could lead to applications in quantum sensing and quantum memories.

Contemporary quantum computing platforms such as trapped ions and superconducting qubits operate according to the principles of quantum electrodynamics. In such systems, quantum information is held in electromagnetic states and transmitted using photons. In quantum acoustodynamics, however, the quantum information is stored in the quantum states of mechanical resonators. These devices interact with their surroundings via quantized vibrations (phonons), which cannot propagate through a vacuum. As a result, isolated mechanical resonators can have much longer lifetimes than their electromagnetic counterparts. This could be particularly useful for creating quantum memories.

John Teufel of the US National Institute of Standards and Technology (NIST) and his team shared Physics World’s 2021 Breakthrough of the Year award for using light to achieve the quantum entanglement of two mechanical resonators. “If you entangle two drums, you know that their motion is correlated beyond vacuum fluctuations,” explains Teufel. “You can do very quantum things, but what you’d really want is for these things to be nonlinear at the single-photon level – that’s more like a bit, holding one and only one excitation – if you want to do things like quantum computing. In my work that’s not a regime we’re usually ever in.”

Hitherto impossible

Several groups such as Yiwen Chu’s at ETH Zurich have interfaced electromagnetic qubits with mechanical resonators and used qubits to induce quantized mechanical excitations. Actually producing a mechanical qubit had proved hitherto impossible, however. A good qubit must have two energy levels, akin to the 1 and 0 states of a classical bit. It can then be placed (or initialized) in one of those levels and remain in a coherent superposition of the two without other levels interfering.

This is possible if the system has unevenly spaced energy levels – which is true in an atom or ion, and can be engineered in a superconducting qubit. Driving a qubit using photons with the exact transition energy then excites Rabi oscillations, in which the population of the upper level rises and falls periodically. However, acoustic resonators are harmonic oscillators, and the energy levels of a harmonic oscillator are evenly spaced. “Every time we would prepare a phonon mode into a harmonic oscillator we would jump by one energy level,” says Igor Kladarić, who is a PhD student in Chu’s group.
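
In equation form (the standard textbook result, not specific to this experiment), a harmonic oscillator of frequency \omega has energy levels

    E_n = \hbar\omega\,\bigl(n + \tfrac{1}{2}\bigr), \qquad E_{n+1} - E_n = \hbar\omega \;\; \text{for every } n,

so a drive resonant with the 0 \to 1 transition is equally resonant with 1 \to 2 and the resonator cannot be confined to just two levels.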

In the new work, Kladarić and colleagues used a superconducting transmon qubit coupled to an acoustic resonator on a sapphire chip. The frequency of the superconducting qubit was slightly off-resonance with that of the mechanical resonator. Without being driven in any way, the superconducting qubit coupled to the mechanical resonator and created a shift in the frequencies of the ground state and first excited state of the resonator. This created the desired two-level system in the resonator.

Swapping excitations

The researchers then injected microwave signals at the frequency of the mechanical resonator, converting them into acoustic signals using piezoelectric aluminium nitride. “The way we did the measurement is the way we did it beforehand,” says Kladarić. “We would simply put our superconducting qubit on resonance with our mechanical qubit to swap an excitation back into the superconducting qubit and then simply read out the superconducting qubit itself.”

The researchers confirmed that the mechanical resonator undergoes Rabi oscillations between its ground and first excited states, with less than 10% probability of leakage into the second excited state, and was therefore a true mechanical qubit.

The team is now working to improve the qubit to the point where it could be useful in quantum information processing. They are also interested in the possibility of using the qubit in quantum sensing. “These mechanical systems are very massive and so…they can couple to degrees of freedom that single atoms or superconducting qubits cannot, such as gravitational forces,” explains Kladarić.

Teufel is impressed by the Swiss team’s accomplishment: “There are a very short list of strong nonlinearities in nature that are also clean and not lossy…The hard thing for any technology is to make something that’s simultaneously super-nonlinear and super-long lived, and if you do that, you’ve made a very good qubit”. He adds, “This is really the first mechanical resonator that is nonlinear at the single quantum level…It’s not a spectacular qubit yet, but the heart of this work is demonstrating that this is yet another of a very small number of technologies that can behave like a qubit.”

Warwick Bowen of Australia’s University of Queensland told Physics World, “the creation of a mechanical qubit has been a dream in the quantum community for many decades – taking the most classical of systems – a macroscopic pendulum – and converting it to the most quantum of systems, effectively an atom.”

The mechanical qubit is described in Science.

Top tips for physics outreach from a prize winner, making graphene more sustainable

In this episode of the Physics World Weekly podcast I am in conversation with Joanne O’Meara, who has bagged a King Charles III Coronation Medal for her outstanding achievements in science education and outreach. Based at Canada’s University of Guelph, the medical physicist talks about her passion for science communication and her plans for a new science centre.

This episode also features a wide-ranging interview with Burcu Saner Okan, who is principal investigator at Sabanci University’s Sustainable Advanced Materials Research Group in Istanbul, Turkey. She explains how graphene is manufactured today and how the process can be made more sustainable – by using recycled materials as feedstocks, for example. Saner Okan also talks about her commercial endeavours including Euronova.

Elevating brachytherapy QA with RadCalc

Want to learn more on this subject?

Join us for an engaging webinar in which we explore how RadCalc supports advanced brachytherapy quality assurance, enabling accurate and efficient dose calculations. Brachytherapy plays a critical role in cancer treatment, with modalities like HDR, LDR and permanent seed implants requiring precise dose verification to ensure optimal patient outcomes.

The increasing complexity of modern brachytherapy plans has heightened the demand for streamlined QA processes. Traditional methods, while effective, often involve time-consuming experimental workflows. With RadCalc’s 3D dose calculation system based on the TG-43 protocol, users can achieve fast and reliable QA, supported by seamless integration with treatment planning systems and automation through RadCalcAIR.
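
For reference, the dose-rate equation at the heart of the TG-43 formalism (written here in standard AAPM notation, not as a RadCalc-specific expression) has the general form

    \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),

where S_K is the air-kerma strength of the source, \Lambda the dose-rate constant, G_L the geometry function, g_L(r) the radial dose function and F(r,\theta) the anisotropy function, evaluated relative to the reference point r_0 = 1 cm, \theta_0 = 90°.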

The webinar will showcase the implementation of independent RadCalc QA.

Don’t miss the opportunity to listen to two RadCalc clinical users!

A Q&A session follows the presentation.

Webinar speakers: Michal Poltorak, Oskar Sobotka, Lucy Wolfsberger and Carlos Bohorquez

Michal Poltorak, MSc, is the head of the department of Medical Physics at the National Institute of Medicine, Ministry of the Interior and Administration, in Warsaw, Poland. With expertise in medical physics, he oversees research and clinical applications in radiation therapy and patient safety. His professional focus lies in integrating innovative technologies.

Oskar Sobotka, MSc.Eng, is a medical physicist at the Radiotherapy Center in Gorzów Wielkopolski, specializing in treatment planning and dosimetry. With a Master’s degree from Adam Mickiewicz University and experience in nuclear medicine and radiotherapy, he ensures precision and safety in patient care.

Lucy Wolfsberger, MS, LAP, is an application specialist for RadCalc at LifeLine Software Inc., a part of the LAP Group. She is dedicated to enhancing safety and accuracy in radiotherapy by supporting clinicians with a patient-centric, independent quality assurance platform. Lucy combines her expertise in medical physics and clinical workflows to help healthcare providers achieve efficient, reliable, and comprehensive QA.

Carlos Bohorquez, MS, DABR, is the product manager for RadCalc at LifeLine Software Inc., a part of the LAP Group. An experienced board-certified clinical physicist with a proven history of working in the clinic and medical device industry, Carlos’ passion for clinical quality assurance is demonstrated in the research and development of RadCalc into the future.

Copyright © 2025 by IOP Publishing Ltd and individual contributors