Cancer therapies successfully enable the excision of tumours or destruction of cancer cells. However, the presence of cancer stem cells (CSCs), which can reproduce themselves and initiate tumour progression, may lead to cancer recurrence and resistance to chemotherapies and radiotherapy. Identifying and enriching CSCs for active targeting by anti-cancer drugs could enhance the efficacy of cancer treatments.
CSCs are usually present in tumours in very low numbers, making them difficult to identify. But using hydrogels to create an artificial tumour environment can elevate the percentage of CSCs found in tumours. Scientists from Hokkaido University and the National Cancer Centre Research Institute have developed a novel double-network (DN) hydrogel that rapidly reverts cancer cells into CSCs within just 24 hours. They report their findings in Nature Biomedical Engineering.
High-performance materials for fighting cancer
DN gels contain two networks of polymers with different mechanical properties. In this study, the DN gels incorporate rigid, strong polyelectrolyte gels (the first network) and flexible, neutral polymer gels (the second network). The resulting material possesses both toughness and exceptional mechanical strength.
The researchers combined two differing polymers to create a tough double-network gel. (Courtesy: Hokkaido University/soft matter)
This DN gel serves as an artificial microenvironment for inducing cellular responses in CSCs. Principal investigator Shinya Tanaka, from the Institute for Chemical Reaction Design and Discovery, describes the gel as a “potential weapon to fight cancer, with unique applications in regenerative medicine”.
The hydrogel, which has an elasticity similar to the specific microenvironment required by CSCs, could increase stem cell-like behaviour (stemness). This could enable more efficient detection of CSCs, enhance cancer cell type diagnosis, and ultimately help to produce personalized medicines.
To evaluate the effect of DN gels on cancer cells, the team cultured six human cancer cell lines on the hydrogel: sarcoma, uterine cancer, lung cancer, colon cancer, bladder cancer and brain cancer. All the cell lines formed spherical structures within 24 hours of cell seeding.
The sphere-like shapes on the DN gel contained a large proportion of CSCs, which are seldom found in primary tumours. This indicates that reprogramming of differentiated cancer cells into CSCs is enabled by their interaction with the hydrogel.
Glioblastoma multiforme, an aggressive brain cancer, forms spherical structures when grown on DN hydrogel (left), but not when grown on a polystyrene dish. (Courtesy: Nat. Biomed. Eng. 10.1038/s41551-021-00692-2)
The researchers also examined glioblastoma – a malignant brain cancer with a five-year survival rate of only around 8%. The DN gels rapidly induced CSCs in four patient-derived primary brain cancer cell lines. They observed that a protein known as Sox2, which is responsible for cancer cell reprogramming, was highly expressed in the nuclei of sphere-forming cells. Uncovering this phenomenon helps to reveal the molecular mechanism behind hydrogel-induced stemness in cancer cells.
In addition, human brain cancer cells cultured on DN gels formed tumours efficiently when transplanted into mice.
The team is currently investigating how the intrinsic properties of the DN gels affect the cancer cells, in particular examining how the hydrogel’s chemical characteristics impact the resulting stemness.
An atomic-scale ion transistor that achieves the ultrafast opening and closing of ion channels to specific ions has been developed by researchers in China and the US. The team believes that the device could have a wide range of applications, ranging from brain–computer interfaces and sea-water desalination to precious metal extraction.
The key component of electronics is the transistor – a switch that either allows or blocks the flow of electrons between two terminals depending on the electric potential applied to a third terminal, called the gate. This is the fundamental component of digital logic, and more than 50 billion transistors can be crammed onto a single silicon chip. Like electronic devices, the human central nervous system uses electrical impulses to transmit and process information. The key difference between the two is that electronic devices rely on the flow of negatively charged electrons, whereas neural signals are carried by positive ions.
Researchers would like to create ion transistors inspired by biological systems – but this has proven to be very tricky indeed. In nature, the gating function is performed by channels that open in response to certain stimuli to allow specific ions to diffuse through. A nanoscale system that reproduces this efficiently in the laboratory, however, has eluded scientists, explains Xiang Zhang of the University of Hong Kong: “The development of artificial ion channels using traditional pore structures has been hindered by the trade-off between permeability and selectivity for ion transport. Pore sizes exceeding the diameters of hydrated ions cause ion selectivity to largely vanish.”
Electrically driven selectivity
This problem has hindered the development of interfaces between electronics and the human body, and achieving electrically driven selectivity of specific types of ion in nanoscale transport promises to be the key to progress. Success could potentially lead to a better understanding, diagnosis and treatment of diseases such as Alzheimer’s and epilepsy, as well as assisting in the control of artificial limbs and a host of other applications.
In this latest research Zhang, together with scientists in Hong Kong and at the University of California, Berkeley, produced an ion transistor by attaching a gold gate electrode to the back of a reduced graphene oxide flake mounted on a silicon nitride substrate. They set up their device between two reservoirs – one containing potassium ions, the other not. When no electric potential is applied to the gate, the potassium ions are prevented from entering the 0.3 nm-wide gaps between the layers in the reduced graphene oxide by the water molecules attracted to the positive ionic charge. When a potential of –1.2 V is applied, however, the electrostatic attraction between the negative graphene and the positive potassium ions is sufficient to draw the ions into the channels – either by distorting the “hydration shell” or by partially stripping it away. The ions can therefore flow down the concentration gradient into the other reservoir, turning on the transistor.
The researchers found that, as the applied potential increases, so does the flow rate. “Ions can diffuse more than 100 times faster in our graphene channels than in bulk water,” says Zhang. In fact, they moved through the channels even faster than in biological ion channels. When the voltage was turned off, the flow stopped again.
Ion selectivity
The researchers also demonstrated ion selectivity. They filled the feed solution with equal concentrations of potassium chloride, caesium chloride and lithium chloride and varied the gate voltage. They found significant, voltage-dependent changes in the concentration of the solution allowed through. “The beauty here is that, by applying a given voltage, you can select a size,” explains Zhang. “Why? Because if I have a bigger ion, by applying a different voltage I have a different ability to strip it or squeeze it – the egg becomes flatter, so it can go into the channel.”
As well as being fundamental to biological electronics, the ability to remove ions selectively from fluids could be useful in water treatment. “Elaborately designed atomic-scale channels only allowing particular ions or no ions to permeate can be used to efficiently extract precious or rare metals or pure water from seawater,” explains Zhang. “This work is a fundamental breakthrough in the study of ion transport that can be electrically selected through a given size of atomic scale solid pores.”
“I would certainly consider [Zhang’s work] a significant step forward in our ability to understand and control nanofluidic transport,” says biophysicist Aleksandr Noy of University of California, Merced and the Lawrence Livermore National Laboratory, who was not involved in the research. “The trend now is increasingly towards electric field-driven separations. This research is one of the steps that is enabling increasingly precise and efficient electric field-driven separations, and I for one definitely welcome that.”
Two-dimensional “puddles” of electrons that form inside a three-dimensional superconductor could be a way for some superconductors to reorganize themselves before undergoing an abrupt phase transition into an insulating state. The phenomenon, dubbed “inter-dimensional superconductivity” by the researchers who discovered it, might make it easier to fabricate 2D materials for electronics applications.
Superconductors are materials that, when cooled to below their superconducting transition temperature, Tc, can conduct electricity without any resistance. In the Bardeen-Cooper-Schrieffer (BCS) theory of conventional superconductivity, this occurs when electrons overcome their mutual repulsion and form so-called Cooper pairs that travel unimpeded through the material as a supercurrent.
The first superconductors to be discovered (beginning with solid mercury in 1911) had transition temperatures only a few Kelvin above absolute zero, meaning that expensive liquid helium coolant was required to keep them in the superconducting phase. Beginning in the late 1980s, however, a new class of “high-temperature” superconductors with Tc at liquid nitrogen rather than liquid helium temperatures began to emerge. These materials were not metals but insulators made of copper oxides, or cuprates.
“The other high-temperature superconductor”
In the new work, researchers led by Hari Manoharan of Stanford University and the Stanford Institute for Materials and Energy Sciences (SIMES) at the US Department of Energy’s SLAC National Accelerator Laboratory studied a somewhat similar material: a bismuthate known as BPBO that has the chemical formula BaPb1-xBixO3. Manoharan explains that unlike the cuprate high-temperature (d-wave) superconductors, BPBO is believed to be a conventional (s-wave) superconductor that behaves according to BCS theory. “BPBO does, however, have higher Tc than other conventional superconductors,” he tells Physics World. “In fact, it is sometimes called ‘the other high-temperature superconductor’.”
During experiments aimed at pinning down the temperature at which BPBO becomes an insulator – a point known as the superconducting-insulator transition (SIT) – the researchers observed that the electrons in the material behaved as if they were confined to ultrathin 2D layers or stripes. This was unexpected behaviour, since BPBO is a 3D superconductor in which electrons can move in any direction.
Further investigations with a scanning tunnelling microscope (STM), which can directly image individual atoms in the top few atomic layers of a material, revealed that the stripes were domains (or “puddles”) that formed in the material at its SIT. These domains were separated by distances short enough to allow the electrons in them to interact and coherently couple (pair up).
Emergent behaviour
The observation closely matches the predictions of so-called “emergent electronic granularity” theory, which is specific to 2D materials. This theory was first articulated by Nandini Trivedi of Ohio State University, US, and it describes what happens when superconducting domains on the scale of the material’s coherence length (one of the characteristic parameters for describing superconductors) are embedded in an insulating matrix and coupled via a phenomenon known as Josephson tunnelling.
Usually, the stronger the superconductor, the greater the energy required to break the bonds between its electron pairs. This bond-breaking energy is known as the “energy gap” and is related to the material’s coherence length. Trivedi and her colleagues predicted, however, that in certain disordered types of superconductor, like BPBO, the opposite would be true: the system would form emergent domains in which superconductivity was strong, but the pairs could be broken with much less energy than expected.
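For orientation, in textbook BCS theory the zero-temperature energy gap Δ sets both the pairing strength and the coherence length ξ₀ (these are the standard relations, not results from the new study):

$$\xi_0 = \frac{\hbar v_{\mathrm F}}{\pi \Delta}, \qquad 2\Delta \approx 3.5\,k_{\mathrm B} T_{\mathrm c},$$

where v_F is the Fermi velocity. A stronger superconductor therefore normally has a larger gap, a higher Tc and a shorter coherence length, which is what makes the emergent-granularity prediction – a large local gap coexisting with pairs that break at much lower energy – so counterintuitive.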
Collective reorganization
Trivedi says that it is “quite thrilling” to see her predictions being confirmed by the STM measurements from the Stanford group. These observations suggest that the electrons in a 3D superconductor collectively reorganize themselves into a 2D granular state before the material ultimately transforms to an insulator.
The results, which are detailed in PNAS, could have implications for crafting 2D materials, says team member Caroline Parra, who now heads the Nanobiomaterials Laboratory at the Universidad Técnica Federico Santa María, Valparaíso, Chile. “Most methods for fabricating 2D materials rely on growing films a few atomic layers thick or creating a sharp interface between two materials and confining a 2D state in these materials. The new finding offers an additional way to reach these 2D superconducting states.”
The only tricky part, she explains, would be to make sure that the composition of the superconducting material was just right.
Quantum-based system of units: the cold atom vacuum standard uses laser trapping and cooling of atomic Li in a magneto-optical trap. (Source: NIST)
In this webinar you will learn about measurements, standards and a bit of history leading up to now and why our national standards changed on 20 May 2019.
The presenter, Dr Jay Hendricks, will describe the role of NIST as a national metrology institute, talk about the NIST on a Chip program, a daring and innovative approach that seeks to utilize fundamental physics to develop quantum-based sensors and standards, and then take a slightly deeper dive into research developing new measurement methods of pressure and vacuum that are photonically quantum-based.
Finally, he will speak on how science and research are done at a government lab such as NIST and talk about the types of partnership opportunities that NIST can offer for researchers, students and private companies.
Dr Jay Hendricks is a world-class expert in low-pressure and vacuum metrology and serves as the deputy program manager for NIST on a Chip (NOAC), an innovative approach that seeks to utilize fundamental physics to develop quantum-based sensors and standards. He serves as the scientific director of IUVSTA (International Union of Vacuum Science, Technique and Application), an organization representing more than 15,000 physicists, chemists, materials scientists, engineers and technologists linked by their common study and use of vacuum science.
Dr Hendricks received a PhD and an MA in physical chemistry from Johns Hopkins University, and his BS in chemistry from Penn State University. He has 30+ years of vacuum science and technology experience and has authored more than 60 papers. He is a two-time winner of the US Department of Commerce Gold Medal, one of which was for an innovative quantum-based pressure standard.
Dr Hendricks has demonstrated leadership and chairs national and international vacuum standards meetings and symposia. He currently serves as the scientific director of IUVSTA, chair of IMEKO TC16, and is active with the AVS Recommended Practices Committee, and AVS Publication Committee.
Mind and matter: the May 2021 edition of Physics World is a special issue on AI and physics.
Theory and experiment have long served as the two pillars of the scientific method and our exploration of the natural world. But since the middle of the last century, physical science has been irrevocably changed and propelled forward by the power of computing.
Indeed, our computing ability and the idea of a “thinking machine”, or artificial intelligence (AI), have grown hand-in-hand. As the May 2021 special issue of Physics World makes clear, physicists have had a big impact on building better AI, and many end up working on AI research in industry.
But AI is not purely code and algorithms. The biases that exist within our society are mirrored and sometimes amplified by these systems, which have far-reaching and significant impacts on society in turn. Along with proper governance and regulation, it’s important that physicists recognize and tackle bias in AI.
For the record, here’s a run-down of what else is in the issue.
• Muon study hints at new physics – A measurement of the muon’s magnetic moment is at odds with the Standard Model – potentially hinting at new forces or particles – as Edwin Cartlidge reports
• Rooting for women in science – Arushi Borundia says that while stereotypes will always exist, more can be done to change the public’s perception of what a physicist is
• Licensing Arm – Arm Holdings’ chips power countless smart phones, tablets and TVs, but the company – a great intellectual-property success story – is facing an uncertain future. James McKenzie explains
• Combat robotics – TV robot fights are not just entertainment – they can also help turn students on to physics and engineering, as Robert P Crease finds out
• The Turing Test 2.0 – When Alan Turing devised his famous test to see if machines could think, computers were slow, primitive objects that filled entire rooms. Juanita Bawagan discovers how modern algorithms have transformed our understanding of the “Turing Test” and what it means for artificial intelligence
• Medical marvels – Irina Grigorescu, a medical physicist at King’s College London, explains how artificial intelligence can transform medical physics
• Powerful partnership – Experimental particle physicist Jessica Esquivel explores the beneficial collaboration between artificial intelligence and particle physics that is advancing both fields
• Intelligent drug discovery – Leonard Wossnig, chief executive of quantum drug-discovery company Rahko, describes the powerful capabilities of artificial intelligence and quantum computing when combined
• Fighting algorithmic bias – Physicists are increasingly developing artificial intelligence and machine learning techniques to advance our understanding of the physical world but there is a rising concern about the bias in such systems and their wider impact on society at large. Julianna Photopoulos explores the issues of racial and gender bias in AI – and what physicists can do to recognize and tackle the problem
• The science in science fiction – Ian Randall reviews Science Fiction by Sherryl Vint
• Judge, jury and AI-xecutioner – Achintya Rao reviews A Citizen’s Guide to AI by John Zerilli and co-authors
• Biophysics for personalized medicine – Nabiha Saklayen is co-founder and chief executive of US start-up Cellino Biotech, which uses biophysics technologies to advance personalized regenerative medicine. She talks to Julianna Photopoulos about the importance of multidisciplinarity for tackling real-world problems
• Ask me anything – Jony Hudson, AI research engineer at DeepMind, London
• More human than human – A short story by Kevlin Henney
A scattering-invariant mode of light is generated by a spatial light modulator. This modulated beam propagates through free space (top) or a strongly scattering medium (bottom) such that the pattern of the transmitted light is the same in both cases. (Courtesy: Allard Mosk/Matthias Kühmayer)
Scientists in Europe have experimentally created a new class of light waves that can pass through opaque materials as if they were not there. These “scattering-invariant modes of light” create the same light pattern whether they travel through a complex scattering structure or a homogeneous medium, such as air. According to the researchers, the mathematical formula used to calculate the optimal waveform for the disordered environment could also be applied to other types of waves, such as acoustics.
Physicists Stefan Rotter, at TU Wien in Austria, and Allard Mosk, at Utrecht University in the Netherlands, are developing mathematical methods to describe the light-scattering effects of disordered systems. Their analysis shows how a disordered medium affects light, allowing them to construct wave patterns that the material alters in a predictable way – for example, by scattering them exactly as expected.
Such insights could enable scientists to better control how light travels through complex scattering environments, with implications for various applications, particularly imaging. Recently, Rotter and Mosk demonstrated how the scattering of laser light through such a disordered system can be controlled to allow accurate measurements of objects hidden behind it.
In their latest research, the researchers “show how you can construct a light wave that goes through a medium in such a way that the wave is identical to the one you would get if the medium wasn’t there,” Rotter tells Physics World. These light waves “propagate through complicated medium and are transmitted in such a way that the output pattern is the same as the output pattern you would get if the object wasn’t there,” he explains.
To do this, the researchers first had to characterize the scattering medium precisely. At Utrecht University, they shone light through a layer of zinc oxide nanopowder deposited on a glass slide. They then used a detector on the other side to measure the light and analyse how it had been scattered by the powder. Once this has been done, Rotter explains, you can calculate “what kind of wave would be transmitted through this medium in the same way as through free space”.
Doing this, the researchers identified the scattering-invariant modes of light. They then demonstrated that they could successfully generate these light waves experimentally by sending a laser beam onto a spatial light modulator.
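The underlying calculation can be sketched in a few lines (an illustrative toy outline, not the authors’ actual analysis code; the matrices below are random stand-ins for measured data). If T is the measured transmission matrix of the scattering layer and T0 describes propagation through the equivalent stretch of free space, a scattering-invariant input ψ satisfies Tψ = τT0ψ for some complex constant τ, so its transmitted pattern matches the free-space one up to an overall attenuation factor:

```python
import numpy as np
from scipy.linalg import eig

n = 64                                   # number of controlled input/output modes (arbitrary)
rng = np.random.default_rng(0)

# Stand-ins: in the experiment T would be measured mode by mode with the
# spatial light modulator and a camera; T0 is idealized free-space propagation.
T = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
T0 = np.eye(n, dtype=complex)

# Scattering-invariant inputs are generalized eigenvectors of the pair (T, T0).
tau, psi = eig(T, T0)

# Check the first mode: its output through the medium equals its free-space
# output up to the complex attenuation factor tau[0].
out_medium = T @ psi[:, 0]
out_free = tau[0] * (T0 @ psi[:, 0])
print(np.allclose(out_medium, out_free))  # True to numerical precision
```

The modulus of τ measures how much the medium attenuates a given mode relative to free space, consistent with the slight attenuation the team observed.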
In a series of experiments, the researchers shone scattering-invariant modes of light in patterns shaped to resemble the stellar constellations of Ursa Minor and Ursa Major through a 5 μm-thick layer of zinc oxide powder and through air. They found that the patterns produced by the light travelling through the zinc oxide and through air were very similar, although the light that had propagated through the zinc oxide was attenuated slightly.
In principle, these scattering-invariant modes can be created for any type of wave, including visible light and sound waves, Rotter tells Physics World.
Rotter says that, as well as being interesting in their own right, such waves could also help improve imaging through complex scattering media, including in biomedical applications. Essentially, the technique could be used to deliver light deeper into scattering environments, such as the human body, than is currently possible. Rotter cautions, however, that there are challenges. Patients move, they breathe and blood flows through their bodies, meaning the medium is constantly changing, which makes it hard to precisely characterize its scattering effect on light.
What benefits does artificial intelligence (AI) add to the field of quantum technology that other tools don’t?
Quantum technology promises to revolutionize many aspects of life. However, we will need to employ many supporting technologies to realize its full potential, with AI being particularly important among these. In the field of drug discovery, we expect quantum computing to help tackle currently intractable problems that stand in the way of discovering better and safer drugs for more diseases. However, we will only be able to harness the powerful capabilities of quantum computing for drug discovery if we can embed them into an AI-based pipeline.
In drug discovery, we follow a well-established path to find the best candidates for a new drug. The first step is computational modelling to select drug candidates with a range of target properties. We then manufacture the most promising few candidates. The third step involves experimentally testing candidates to determine if they indeed have the target properties. These steps are then repeated until a candidate is confirmed to have the target properties. While effective, this process is extremely slow and expensive. Indeed, it takes an average of $2.6bn and 10 years to bring a single drug to market, largely because this process needs to be repeated so often due to our inability to identify high-quality drug candidates that have the necessary properties in the first step of this process.
So why do the computational methods in our modelling so often fail to produce candidates that can pass successfully through the subsequent steps? The success and failure of such modelling is based on two key criteria: prediction accuracy and screening speed. When it comes to the former, we need to predict the properties of each drug candidate with a certain degree of accuracy, to understand how each candidate will behave in the human body.
Today’s computational chemistry methods model how a drug will interact with a target in the body by calculating those interactions classically when in fact drug candidates are governed by the laws of quantum mechanics. What that means is that the predictions of computational chemistry methods are often highly inaccurate. This is one area where quantum computers hold great promise, as they will allow us to model the interactions of the candidates with the targets in the human body using quantum mechanical calculations, which are extremely accurate.
The other critical counterpart to prediction accuracy is the speed at which we are able to screen drug candidates. At the first step of the drug-discovery process, computational methods need to screen many candidates. In an ideal world we would be able to screen billions of candidates very quickly, modelling each one in a few milliseconds, thereby assessing a vast volume of candidates in a matter of days. Unfortunately, no quantum computer or classical computational chemistry method that calculates interactions will ever be fast enough to achieve such speed. This is where machine learning and AI become extremely important and valuable. With AI, we can take a few candidates, evaluate their properties selectively, then train a model to predict the properties of all remaining candidates, allowing us to screen large volumes of candidates at high speed.
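A minimal sketch of this surrogate-screening idea is shown below (generic scikit-learn code with randomly generated stand-in descriptors and properties, not Rahko’s actual pipeline): a small, affordable subset of candidates is evaluated with the expensive, accurate method, a fast model is trained on those results, and the trained model then ranks the entire library in seconds.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Placeholder candidate library: each row is a vector of molecular descriptors.
library = rng.normal(size=(100_000, 32))

def expensive_property(x):
    """Stand-in for a slow, accurate calculation (e.g. a quantum-mechanical simulation)."""
    return x[:, :3].sum(axis=1) + 0.1 * rng.normal(size=len(x))

# 1. Evaluate only a small, affordable subset with the expensive method.
subset = rng.choice(len(library), size=500, replace=False)
y_subset = expensive_property(library[subset])

# 2. Train a fast surrogate model on those few labelled candidates.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(library[subset], y_subset)

# 3. Screen the whole library quickly and keep the top-ranked candidates
#    for the next round of expensive evaluation or synthesis.
scores = surrogate.predict(library)
top_candidates = np.argsort(scores)[::-1][:100]
print(top_candidates[:10])
```

In practice the loop would be repeated, with each round’s expensive evaluations added back into the training set.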
What are the disadvantages of using AI?
We do not know of any obvious disadvantages in using AI in combination with quantum computing, when it comes to the discovery and development of new drugs. However, to combine the two technologies in practice is a complicated endeavour requiring diverse, multidisciplinary teams of scientists working together to be able to neatly integrate the two technologies.
While the four-step process described above seems straightforward, there are numerous additional complexities that need to be factored in, such as those inherent in collecting and working with experimental data. AI offers further benefits here, enabling the collection of higher quality data, which in turn enables the training of other AI models with smaller amounts of data. Automation with AI can allow cheaper and more reliable data collection.
Unlocking potential The Rahko team, led by Leonard Wossnig (far left). (Courtesy: Rahko)
How do you use AI in your research?
At Rahko, we work at the intersection of AI, quantum computing and computational chemistry. We are building a quantum drug-discovery pipeline to overcome the key challenges described above. Unlike standard deep learning approaches, the AI models we are developing are highly specialized for discovery. Deep learning methods, and more generally neural networks, have been immensely successful in areas such as image recognition, where data are abundant. However, due to the nature of the problem in drug discovery, we deal with very little data, and sourcing more is either very expensive or simply impossible.
In scenarios such as this where we have little data, neural networks absolutely fail to make good predictions as there are not enough data to train them. This is where our team at Rahko excels. We embed the laws of quantum mechanics into our quantum machine-learning methods, for example, in the way we represent data. This way, we are able to rely on minimal amounts of data to make good predictions that also generalize to many other candidates in the screening process.
This is particularly important when combining AI with quantum computing as quantum computers will be expensive to run for larger systems, and we will therefore need to be selective when running these high-precision calculations. Embedding quantum-mechanical calculations into our AI-based framework allows us to maximize the utility of each individual calculation we perform on a quantum computer.
In what specific areas of quantum tech will AI be most useful and most crucial in moving forward?
The synergies between quantum computers and AI will have the most obvious benefit to the discovery of drugs and other materials. Here, it is crucial to combine highly precise but still expensive quantum-mechanical calculations with AI, in order to be able to screen through billions of possible candidates to identify the most promising. Chemical simulation is also widely believed to be among the very first areas in which quantum computing will have a powerful impact, once the machines reach a sufficient scale.
Another area of significant potential is the correction of errors in quantum computers, which may allow us to accelerate the broad availability of quantum computers of sufficient scale for useful, practical applications. AI for quantum error correction and mitigation has been demonstrated to correct and learn a range of errors in current quantum computers. Our team is actively working in this area in partnership with industrial and quantum-computing hardware companies to test and understand the capabilities of AI for error correction (arXiv:1912.10063).
In 2011, during her undergraduate degree at Georgia Institute of Technology, Ghanaian-US computer scientist Joy Buolamwini discovered that getting a robot to play a simple game of peek-a-boo with her was impossible – the machine was incapable of seeing her dark-skinned face. Later, in 2015, as a Master’s student at Massachusetts Institute of Technology’s Media Lab working on a science–art project called Aspire Mirror, she had a similar issue with facial analysis software: it detected her face only when she wore a white mask. Was this a coincidence?
Buolamwini’s curiosity led her to run one of her profile images across four facial recognition demos, which, she discovered, either couldn’t identify a face at all or misgendered her – a bias that she refers to as the “coded gaze”. She then decided to test 1270 faces of politicians from three African and three European countries, with different features, skin tones and gender, which became her Master’s thesis project “Gender Shades: Intersectional accuracy disparities in commercial gender classification” (figure 1). Buolamwini uncovered that three commercially available facial-recognition technologies made by Microsoft, IBM and Megvii misidentified darker female faces nearly 35% of the time, while they worked almost perfectly (99%) on white men (Proceedings of Machine Learning Research 81 77).
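The core of such an audit is a simple disaggregated evaluation: error rates are computed separately for each intersection of gender and skin type rather than only in aggregate. A minimal sketch of that bookkeeping (illustrative pandas code with made-up column names and data, not the study’s actual analysis):

```python
import pandas as pd

# Hypothetical audit table: one row per test image, with the classifier's
# prediction, the true label and the subject's demographic attributes.
results = pd.DataFrame({
    "true_gender":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "predicted_gender": ["M", "F", "M", "M", "M", "M", "F", "F"],
    "skin_type":        ["darker", "darker", "lighter", "lighter",
                         "darker", "darker", "lighter", "lighter"],
})
results["error"] = results["true_gender"] != results["predicted_gender"]

# Aggregate accuracy can look acceptable...
print("overall error rate:", results["error"].mean())

# ...while disaggregating by the intersection of skin type and gender
# exposes the disparities between subgroups.
print(results.groupby(["skin_type", "true_gender"])["error"].mean())
```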
Machines are often assumed to make smarter, better and more objective decisions, but this algorithmic bias is one of many examples that dispels the notion of machine neutrality and replicates existing inequalities in society. From Black individuals being mislabelled as gorillas or a Google search for “Black girls” or “Latina girls” leading to adult content to medical devices working poorly for people with darker skin, it is evident that algorithms can be inherently discriminatory (see box below).
Our main goal is to develop tools and algorithms to help physics, but unfortunately, we don’t anticipate how these can also be deployed in society to further oppress marginalized individuals
Jessica Esquivel
Physicists are increasingly using artificial intelligence (AI) and machine learning (ML) in a variety of fields, ranging from medical physics to materials. While they may believe their research will only be applied in physics, their findings can also be translated to society. “As particle physicists, our main goal is to develop tools and algorithms to help us find physics beyond the Standard Model, but unfortunately, we don’t step back and anticipate how these can also be deployed in technology and used every day within society to further oppress marginalized individuals,” says Jessica Esquivel, a physicist and data analyst from the Fermi National Accelerator Laboratory (Fermilab) in Chicago, Illinois, who is working on developing AI algorithms to enhance beam storage and optimization in the Muon g-2 experiment.
What’s more, the lack of diversity that exists in physics affects both the work carried out and the systems that are being created. “The huge gender and race imbalance problem is definitely a hindrance to rectifying some of these broader issues of bias in AI,” says Savannah Thais, a particle-physics and machine-learning researcher at Princeton University in New Jersey. That’s why physicists need to be aware of their existing biases and more importantly, as a community, need to be asking what exactly they should be doing.
1 Accuracy across the spectrum Joy Buolamwini, Timnit Gebru, Deborah Raji and colleagues work on the Gender Shades project to evaluate the accuracy of AI gender-classification products. The study looked at three companies’ commercial products and how they classified 1270 images of subjects from African and European countries. Subjects were grouped by gender, skin type, and the intersection of gender and skin type. The study found that while the products appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. All the companies performed better on male than female faces; and on lighter subjects than those with darker skin colour. The worst recognition was on darker females, failing on over 1 in 3 women of colour. A key factor in the accuracy differences is the lack of diversity in training images and benchmark data sets. (Courtesy: Joy Buolamwini)
‘Intelligent being’ beginnings
The idea that machines could become intelligent beings has existed for centuries, with myths about robots in ancient Greece and automatons in numerous civilizations. But it wasn’t until after the Second World War that scientists, mathematicians and philosophers began to discuss the possibility of creating an artificial mind. In 1950 the British mathematician Alan Turing famously asked whether machines could think and proposed the Turing Test for measuring their intelligence. Six years later, the research field of AI was formally founded during the Dartmouth Summer Research Project on Artificial Intelligence in Hanover, New Hampshire. Based on the notion that human thought processes could be defined and replicated in a computer program, the term “artificial intelligence” was coined by US mathematician John McCarthy – replacing the previously used “automata studies”.
Although the groundwork for AI and machine learning was laid down in the 1950s and 1960s, it took a while for the field to really take off. “It’s only in the past 10 years that there has been the combination of vast computing power, labelled data and wealth in tech companies to make artificial intelligence on a massive scale feasible,” says Joy Lisi Rankin. And although Black and Latinx women in the US were discussing issues of discrimination and inequity in computing as far back as the 1970s, as highlighted by the 1983 Barriers to Equality in Academia: Women in Computer Science at MIT report, the problems of bias in computing systems have only become more widely discussed over the last decade.
Turning tides In the early days of computing, it was low-paid work largely performed by women. As the field became more prestigious, it was increasingly dominated by white men. This photo shows a US government employee using an NCR 796-201 video terminal, c. 1972. (Courtesy: National Archives at College Park)
The bias is all the more surprising given that women in fact formed the heart of the computing industry in the UK and the US from 1940s to the 1960s. “Computers used to be people, not machines,” says Rankin. “And many of those computers were women.” But as they were pushed out and replaced by white men, the field changed, as Rankin puts it, “from something that was more feminine and less valued to something that became prestigious, and therefore, also more masculine”. Indeed, in the mid-1980s nearly 40% of all those who graduated with a degree in computer science in the US were women, but that proportion had fallen to barely 15% by 2010.
Computer science, like physics, has one of the largest gender gaps in science, technology, engineering, mathematics and medicine (PLOS Biol. 16 e2004956). Despite increases in the number of women earning physics degrees, the proportion of women is about 20% across all degree levels in the US. Black representation in physics is even lower, with barely 3% of physics undergraduate degrees in the US being awarded to Black students in 2017. There is a similar problem in the UK, where women made up 57.5% of all undergraduate students in 2018, but only 1.7% of all physics undergraduate students were Black women.
This under-representation has serious consequences for how research is built, conducted and implemented. There is a harmful feedback loop between the lack of diversity in the communities building algorithmic technologies and the ways these technologies can harm women, people of colour, people with disabilities and the LGBTQ+ community, says Rankin. One example is Amazon’s experimental hiring algorithms, which – based as they were on the company’s past hiring practices and applicant data – preferentially rejected women’s job applications. Amazon eventually abandoned the tool because gender bias from those past hiring practices was so deeply embedded in the system that fairness could not be ensured.
Many of these issues were tackled in Discriminating Systems – a major report from the AI Now Institute in 2019, which demonstrated that diversity and AI bias issues should not be considered separately because “they are two sides of the same problem”. Rankin adds that harassment within the workplace is also tied to discrimination and bias, noting that it has been reported by the National Academies of Sciences, Engineering, and Medicine that over 50% of female faculty and staff in scientific fields have experienced some form of harassment.
Having diverse voices in physics is essential for a number of reasons, according to Thais, who is currently developing accelerated ML-based reconstruction algorithms for the High-Luminosity Large Hadron Collider at CERN. “A large portion of physics researchers do not have direct lived experience with people of other races, genders and communities, which are impacted by these algorithms,” she says. That’s why scientists from marginalized groups need to be involved in developing algorithms, to ensure the algorithms themselves are not riddled with bias, argues Esquivel.
It’s a message echoed by Pratyusha Kalluri, an AI researcher from Stanford University in the US, who co-created the Radical AI Network, which advocates anti-oppressive technologies, and gives a voice to those marginalized by AI. “It is time to put marginalized and impacted communities at the centre of AI research – their needs, knowledge and dreams should guide development,” she wrote last year in Nature (583 169).
Role of physicists
Back at Fermilab, Brian Nord is a cosmologist using AI to search for clues about the origins and evolution of the universe. “Telescopes scan the sky in multi-year surveys to collect very large amounts of complex data, including images, and I analyse that data using AI in pursuit of understanding dark energy which is causing space–time expansion to accelerate,” he explains.
But in 2016 he realized that AI could be harmful and biased against Black people, after reading an investigation by ProPublica that analysed risk-assessment software known as COMPAS, which is used in US courts to predict which criminals are most likely to reoffend and to inform decisions about bail. The investigation found that Black people were almost twice as likely as whites to be labelled a higher risk, irrespective of the severity of the crime committed or the actual likelihood of reoffending. “I’m very concerned about my complicity in developing algorithms that could lead to applications where they’re used against me,” says Nord, who is Black and knows that facial-recognition technology, for example, is biased against him, often misidentifies Black men, and is under-regulated. So while physicists may have developed a certain AI technology to tackle purely scientific problems, its application in the real world is beyond their control and may be used for insidious purposes. “It’s more likely to lead to an infringement on my rights, to my disenfranchisement from communities and aspects of society and life,” he says.
Nord decided not to “reinvent the wheel” and is instead building a coalition of physicists and computer scientists to fight for more scrutiny when developing algorithms. He points to companies such as Clearview AI – a US facial-recognition outfit used by law-enforcement agencies and other private institutions – that are scraping social-media data and then selling a surveillance service to law enforcement without explicit consent. Countries around the world, including China, are using such surveillance technology for widespread oppressive purposes, he warns. “Physicists should be working to understand power structures – such as data privacy issues, how data and science have been used to violate civil rights, how technology has upheld white supremacy, the history of surveillance capitalism – in which data-driven technologies disenfranchise people.”
To bring this issue to wider attention, Nord, Esquivel and other colleagues wrote a letter to the entire particle-physics community as part of the Snowmass process, which regularly develops a scientific vision for the future of the community, both in the US and abroad. Their letter, which discussed the “Ethical implications for computational research and the roles of scientists”, emphasizes why physicists, as individuals or at institutions and funding agencies, should care about the algorithms they are building and implementing.
Thais also urges physicists to actively engage with AI ethics, especially as citizens with deep technical knowledge (APS Physics 13 107). One reason for this, Thais explains, is that many physicists leave the field to work in computer software, hardware and data science companies. “Many of these companies are using human data, so we have to prepare our students to do that work responsibly,” she says. “We can’t just teach the technical skills and ignore the broader societal context because many are eventually going to apply these methods beyond physics.”
Both Thais and Esquivel also believe that physicists have an important role to play in understanding and regulating AI because they often have to interpret and quantify systematic uncertainties using methods that produce more accurate output data, which can then counteract the inherent bias in the data. “With a machine-learning algorithm that is more ‘black boxy’, we really want to understand how accurate the algorithm is, how it works on edge cases, and why it performs best for that certain problem,” Thais says, “and those are tasks that a physicist did before.”
Another researcher who is using physics to improve accuracy and reliability in AI is Payel Das, a principal research staff manager with IBM’s Thomas J Watson Research Center. To design new materials and antibiotics, she and her team are developing ML algorithms that can combine learning from both data and physics principles, increasing the success rate of a new scientific discovery up to 100-fold. “We often enhance, guide or validate the AI models with the help of prior scientific or other form of knowledge, for example, physics-based principles, in order to make the AI system more robust, efficient, interpretable and reliable,” says Das, further explaining that “by using physics-driven learning, one can crosscheck the AI models in terms of accuracy, reliability and inductive bias”.
The real-world impact of algorithmic bias
Big brother Algorithmic decision-making tools may be developed for scientific research and then later used in commercial surveillance situations where any biases in the data have real-life consequences. (Courtesy: Mark Schiefelbein/AP/Shutterstock)
In 2015 a Black software developer tweeted that Google Photos had labelled images of him with a friend as “gorillas”. Google managed to fix this issue by deleting the word “gorilla”, and some others referring to primates, from its vocabulary. By censoring these searches, the service can no longer find primates such as “gorilla”, “chimp”, “chimpanzee” or “monkey”.
When searching for the terms “Black girls”, “Latina girls” or “Asian girls”, the Google Ad portal would offer keyword suggestions related to pornography. Searches for “boys” of those same ethnicities also mostly returned suggestions related to pornography, but searches for “white girls” or “white boys” offered no suggested terms. As of June 2020 the Google Ad portal was still perpetuating the objectification of Black, Latinx and Asian people; Google has since addressed the issue by blocking keyword suggestions for these terms.
Infrared technology, such as that in pulse oximeters, does not work properly on darker skin because less light passes through the skin. This can lead to inaccurate readings that may mean not getting the medical care needed. The same infrared technology has also been shown to fail in soap dispensers.
Auditing algorithms
In 2020 the Centre for Data Ethics and Innovation, the UK government’s independent advisory body on data-driven technologies, published a review on bias in algorithmic decision-making. It found that there has been a notable growth in algorithmic decision-making over the last few years across four sectors – recruitment, financial services, policing and local government – and discovered clear evidence of algorithmic bias. The report calls for organizations to actively use data to identify and mitigate bias, making sure to understand the capabilities and limitations of their tools. It’s a sentiment echoed by Michael Rovatsos, AI professor and director of the Bayes Centre at the University of Edinburgh. “It’s very hard to actually get access to the data or to the algorithms used,” he explains, adding that companies should be required by government to audit and be transparent about their systems applied in the real world.
Some researchers, just like Buolamwini, are trying to use their scientific experience in AI to uncover bias in commercial algorithms from the outside. They include the mathematician Cathy O’Neil, who wrote Weapons of Math Destruction in 2016 about her work on data-driven biases and who in 2018 founded a consultancy to work privately with companies and audit their algorithms. Buolamwini also continues her work trying to create more equitable and accountable technology through her non-profit Algorithmic Justice League – an interdisciplinary research institute that she founded in 2016 to understand the social implications of AI technologies.
Mirror mirror Computer scientist Joy Buolamwini tested this image of herself across a few facial-analysis demos. Microsoft and Face++ did not detect her face, while IBM and Kairos misgendered her. (Courtesy: TJ Rak)
Following the 2018 Gender Shades study, which Buolamwini published with computer scientist Timnit Gebru, co-founder of Black in AI, the findings were sent to the companies involved. A year later a follow-up study reran the audits and added two more companies: Amazon and Kairos (AIES ’19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 429). Led by Deborah Raji, a computer scientist and currently a Mozilla Foundation fellow, the follow-up found that the two new companies had huge accuracy errors – Amazon’s facial-recognition software even failed to classify Michelle Obama’s face correctly – but the original three companies had improved significantly, suggesting their systems had been retrained on more diverse images.
These two studies had a profound real-world impact, leading to two US federal bills – the Algorithmic Accountability Act and No Biometric Barriers Act – as well as state bills in New York and Massachusetts. The research also helped persuade Microsoft, IBM and Amazon to put a hold on their facial-recognition technology for the police. “We designed an ‘actionable audit’ to lead to some accountability action, whether that be an update or modification to the product or its removal – something that past audits had struggled with,” says Raji. She is continuing her work on algorithmic evaluation and in 2020 developed with colleagues at Google a framework of algorithmic audits (arXiv:2001.00973) for AI accountability. “Internal auditing is essential as it allows for changes to be made to a system before it gets deployed out in the world,” she says, adding that sometimes the bias involved can be harmful for a particular population “so it’s important to identify the groups that are most vulnerable in these decisions and audit for the moments in the development pipeline that could introduce bias”.
In 2019 the AI Now Institute published a detailed report outlining a framework for public agencies interested in adopting algorithmic decision-making tools responsibly, and subsequently released an Algorithmic Accountability Policy Toolkit. The report called for AI and ML researchers to know what they are building; to account for potential risks and harms; and to better document the origins of their models and data. Esquivel points out the importance of physicists knowing where their data has come from, especially the data sets used to train ML systems. “Many of the algorithms that are being used on particle-physics data are fine-tuned architectures that were developed by AI experts and trained on industry standard data sets – data sets that have been shown to be racist, discriminatory and sexist,” she says, using the example of MIT permanently taking down a widely used, 80-million-image AI data set because existing images were labelled in offensive, racist and misogynistic ways.
Gebru and her colleagues also recently highlighted problems with large data sets such as the Common Crawl open repository of web crawl data, where there is an over-representation of white supremacist, ageist and misogynistic views (FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency) – a paper for which she was recently fired from Google’s AI Ethics team. Consequently, Esquivel is clear that academics “have the opportunity to act as an objective third party to the development of these tools”.
Removing bias
The 2019 AI Now Institute report also recommends AI bias research to move beyond technical fixes. “It’s not just that we need to change the algorithms or the systems; we need to change institutions and social structures,” explains Rankin. From her perspective, to have any chance of removing or minimizing and regulating bias and discrimination, there would need to be “massive, collective action”. Involving people from beyond the natural sciences community in the process would also help.
It’s not just that we need to change the algorithms or the systems; we need to change institutions and social structures
Joy Lisi Rankin
Nord agrees that physicists need to work with scientists from other disciplines, as well as with social scientists and ethicists. “Unfortunately, I don’t see physical scientists or computer scientists engaging sufficiently with the literature and communities of these other fields that have spent so much time and energy in studying these issues,” he says, noting that “it seems like every couple of weeks there is a new terrible, harmful and inane machine-learning application that tries to do the impossible and the unethical.” For example, the University of Texas at Austin only recently stopped using an ML system to predict success in graduate school; the system was trained on data from previous admissions cycles and so would have carried their biases forward. “Why do we pursue such technocratic solutions in a necessarily humanistic space?” asks Nord.
Thais insists that physicists must become better informed about the current state of these issues of bias, and then understand the approaches adopted by others for mitigating them. “We have to bring these conversations into all of our discussions around machine learning and artificial intelligence,” she says, hoping physicists might attend relevant conferences, workshops or talks. “This technology is impacting and already ingrained in so many facets of our lives that it’s irresponsible to not contextualize the work in the broader societal context.”
Nord is even clearer. “Physicists should be asking whether they should, before asking whether they could, create or implement some AI technology,” he says, adding that it is also possible to stop using existing, harmful technology. “The use of these technologies is a choice we make as individuals and as a society.”
An international team of researchers has succeeded in steering light waves deep into “forbidden” regions of photonic crystals by manipulating the shape of the waves. The technique, which was developed by scientists at the University of Twente in the Netherlands, the University of Iowa, US and the University of Copenhagen, Denmark, takes advantage of nanoscale channels created naturally when the crystals are fabricated, and could find use in a host of optoelectronics applications.
Photonic crystals are made by etching patterned nanopores into a substrate such as a silicon wafer. These patterned structures are specially designed to make the crystal’s refractive index vary periodically on the length scale of visible light. This periodic variation, in turn, produces a photonic “band gap” that affects how photons propagate through the crystal – similar to the way a periodic potential in semiconductors affects the flow of electrons by defining allowed and forbidden energy bands.
The presence of this band gap means that only light within certain wavelength ranges can pass through the crystal. Outside these ranges, the light is reflected due to an effect called Bragg interference. The prohibition on light travel at forbidden wavelengths is so restrictive that if a quantum dot that emits light at one of these wavelengths is placed inside the crystal, it stops emitting the forbidden colour of light.
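For context, the reflected wavelengths follow the standard Bragg condition (a textbook relation, not a result specific to this study): at normal incidence on a structure with lattice period a and effective refractive index n_eff, reflection is strongest when

$$m\,\lambda = 2\,n_{\mathrm{eff}}\,a, \qquad m = 1, 2, \dots,$$

and light at wavelengths inside the band gap that opens around these values is turned back by Bragg interference rather than transmitted.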
“Out of control”
Photonic crystals were discovered 30 years ago, and they are now routinely integrated into devices such as light sources, lasers, efficient solar cells and so-called invisibility cloaks. They are also used to trap light in extremely small volumes and to process optical information. In addition, the ability to tightly control their emission properties makes them attractive for advanced applications such as nonlinear processors for quantum computing and memories that store information encoded as light.
To date, all these applications have been static, because the structure of the crystals (and thus the path of the light transported within them) is fixed. New functionalities should be possible, however, if light can be controllably steered anywhere inside the crystals, beyond the depth set by Bragg interference.
“This depth is known as the Bragg length and is determined by the intentionally introduced periodic structural order in the crystal when it is fabricated,” explains study lead author Ravitej Uppu. “Disorder arising from unavoidable imperfections in the nanofabrication process, however, produces channels that penetrate deep into the crystal and through which the trajectory of incoming light waves can be deviated. These channels are usually detrimental for applications because they allow a small fraction of waves to ‘get out of control’ and randomly scatter into the crystal.”
Light-steering demonstration
Led by Willem Vos of the University of Twente, Uppu and colleagues have now turned these channels and the fact that light waves can travel through them into an advantage. They did this by shaping the wavefronts of light waves so that they selectively couple to these channels, thus allowing the waves to travel much further into the crystal. What is more, by programming the wavefronts correctly, they could interfere the waves such that their intensity concentrates at a single location deep inside the crystals.
In their work, published in Physical Review Letters, the researchers studied light propagation in two-dimensional photonic crystals consisting of large periodic arrays of pores (about 6 microns deep) etched in a silicon wafer. They began by directing unstructured, random, plane light waves onto the crystals and imaging the light that leaks through the structures’ top surface. This leaked light revealed the energy density of light at any given position inside the crystals, and as the researchers expected, they saw hardly any sign that light had penetrated the crystal at all. They confirmed this result by showing that 95% of the incident light was reflected.
Eight times the Bragg length
The researchers then repeated their experiment using light waves with wavefronts shaped using a device known as a spatial light modulator. By programming the shapes, they managed to steer the waves into otherwise forbidden regions of the crystal, travelling up to eight times the Bragg length. Focusing this light allowed them to create a bright spot up to 100 times more intense than that created by unshaped wavefronts.
Members of the team say they now plan to extend their experiments to 3D photonic band gap crystals, where they “eagerly expect” to see additional phenomena such as Anderson localization of light. “Such 3D control of light transport could be exploited for exotic light hopping across a lattice of cavities inside these crystals,” Uppu tells Physics World. “The combination of reconfigurable light transport and cavities could potentially allow us to realize nonlinear quantum operations for quantum computing.”
And that is not all. Since the observed phenomena are essentially exploiting wave interference, the team is confident that their results can be generalized to electron waves, magnetic spin waves or even sound waves. Indeed, Uppu notes that other researchers have recently made considerable advances in the latter two fields, so the required spatial shaping of these waves should be feasible.
Fourteen possible antimatter stars (“antistars”) have been flagged up by astronomers searching for the origin of puzzling amounts of antihelium nuclei detected coming from deep space by the Alpha Magnetic Spectrometer (AMS-02) on the International Space Station.
Three astronomers at the University of Toulouse – Simon Dupourqué, Luigi Tibaldo and Peter von Ballmoos – found the possible antistars in archive gamma-ray data from NASA’s Fermi Gamma-ray Space Telescope. While antistars are highly speculative, if they are real, then they may be revealed by their production of weak gamma-ray emission peaking at 70 MeV, when particles of normal matter from the interstellar medium fall onto them and are annihilated.
When it was announced in 2018 that AMS-02 had tentatively detected eight antihelium nuclei in cosmic rays – six of antihelium-3 and two of antihelium-4 – those unconfirmed detections were initially attributed to cosmic rays colliding with molecules in the interstellar medium and producing the antimatter in the process.
Subsequent analysis by scientists including Vivian Poulin, now at the University of Montpellier, cast doubt on the cosmic-ray origin, since the greater the number of nucleons (protons and neutrons) that an antimatter nucleus has, the more difficult it is to form from cosmic ray collisions. Poulin’s group calculated that antihelium-3 is created by cosmic rays at a rate 50 times less than that detected by the AMS, while antihelium-4 is formed at a rate 10⁵ times less.
The mystery of matter and antimatter
The focus has therefore turned back to what at first may seem an improbable explanation – stars made purely from antimatter. According to theory, matter and antimatter should have been created in equal amounts in the Big Bang, and subsequently all annihilated, leaving a universe full of radiation and no matter. Yet since we live in a matter-dominated universe, more matter than antimatter must have been created in the Big Bang – a mystery that physicists have grappled with for decades.
“Most scientists have been persuaded for decades now that the universe is essentially free of antimatter apart from small traces produced in collisions of normal matter,” says Tibaldo.
The possible existence of antistars threatens to turn this on its head. “The definitive discovery of antihelium would be absolutely fundamental,” says Dupourqué.
The 14 candidates were identified from a total of 5787 gamma-ray sources catalogued over 10 years by Fermi’s Large Area Telescope, and have allowed Dupourqué, Tibaldo and von Ballmoos to calculate constraints for the possible populations of antistars in the Milky Way.
If antistars formed in the spiral disc of the galaxy alongside normal stars, then they calculate that there is at most one antistar for every 400,000 ordinary stars. If, on the other hand, antistars are primordial – dating from the early universe when the Milky Way was just forming, and therefore located in the galaxy’s oldest region, the galactic halo – then as many as a fifth of the stars there could be antistars.
“Locking up antimatter in antistars would be a plausible way to spare antimatter from being annihilated,” says von Ballmoos. “Particularly if they hide away in regions of relatively low densities of normal matter, like the galactic halo.”
Balloon mission
Poulin, who was not involved in the detection of these 14 candidates, agrees that if antistars are real, then a primordial origin is most likely, since clouds of antihydrogen “would have annihilated on very short time scales,” he says. Instead, “they would have formed in the very early universe.”
Given the speculative nature of antistars, it is quite possible that all 14 candidates will turn out to be something more mundane. Dupourqué, Tibaldo and von Ballmoos suggest that a possible next step should be to check whether the 14 candidates emit electromagnetic radiation at other wavelengths that could reveal them to actually be active galactic nuclei or pulsars.
Meanwhile, the GAPS experiment, a balloon mission set to launch later this year, will join the search for antimatter cosmic rays. One of its aims is to independently confirm the AMS’ antihelium detections, which for now should be treated with caution because they are a statistically small sample, say Dupourqué, Tibaldo and von Ballmoos.
“The conventional claim is that the detection of antihelium-4 is a smoking gun for new physics, and the existence of antistars,” says Poulin. If antistars can be shown to be real, then they promise to alter our view of cosmology, astrophysics and particle physics.