
The flawed multiverse

According to the quantum-information theorist David Deutsch, our modern understanding of how the world works has provided us with “good explanations” that open up essentially infinite possibilities for future progress. One of these explanations is the idea of the quantum multiverse, which Deutsch discussed in the May issue of Physics World (pp34–38, print version only) and to which he devotes a chapter in his book The Beginning of Infinity.

In 1957 Hugh Everett III noted that if quantum mechanics is a universal theory, then it should be applicable to particle detectors, and indeed observers, as well as to individual particles. Consider an experiment where a photon interacts with a partially reflecting surface, and separate photon detectors are positioned to register transmitted and reflected photons. A straightforward quantum-mechanical calculation predicts that the resulting quantum state of this whole set-up should be a linear combination of one where the photon has been detected in the transmitted (but not the reflected) direction, and another where the reverse is true. Experimentally, of course, a photon is found in one or other of the detectors at random, with probabilities that depend on the properties of the reflecting surface.

To get agreement with experiment, we normally employ a further postulate, known as the Born rule, which states that a measurement causes the wavefunction of the system to “collapse”, so that only one of the above outcomes actually occurs. The relative probabilities of the possible outcomes are given by the modulus squared of the corresponding parts of the wavefunction.
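
In standard notation – textbook labels, not anything taken from Deutsch’s article – the state of the set-up and the Born-rule probabilities read:

```latex
% State of the set-up after the photon meets the partially reflecting
% surface: a superposition of the two detection outcomes, with
% amplitudes t and r fixed by the properties of the surface.
\[
|\psi\rangle = t\,|\text{transmitted}\rangle + r\,|\text{reflected}\rangle,
\qquad |t|^{2} + |r|^{2} = 1
\]
% The Born rule assigns the observed probabilities:
\[
P(\text{transmitted}) = |t|^{2}, \qquad P(\text{reflected}) = |r|^{2}
\]
```

For an exactly 50/50 surface the two probabilities are equal; any other surface biases them, and it is this bias that the many-worlds view must somehow recover.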

However, Everett proposed that the collapse postulate is unnecessary, thanks to decoherence. Once a photon has been detected, its quantum state becomes entangled with the state of the detector, and to perform an interference experiment, coherence would have to be maintained for all the phases associated with the huge number of particles making up the detectors. This is a practical impossibility, so even though the two outcomes coexist, they cannot affect each other in any way: each branch continues unaware of the other’s existence. In other words, the photon has gone both ways, and the detectors and everything that interacts with them have two different futures: one in which the photon was reflected and another in which it was transmitted.

An inevitable consequence of Everett’s theory is that this splitting occurs even if a human observer records the result of the experiment. The observer also evolves into a superposition of two states, each of which is completely unaware of the other, leading them to assume that collapse has occurred and only their branch exists. As Deutsch explains, this splitting spreads out from the experimental apparatus in what he calls a “wave of differentiation” until it eventually encompasses everything – hence the terms “parallel universes” or “many worlds”.

Because the assumption of collapse is no longer required, it has been said that the many-worlds interpretation is “economical with postulates although extravagant with universes”. This should surely be sufficient reason not to dismiss the idea out of hand. However, I believe the many-worlds theory is open to criticism for reasons other than extravagance. One of these concerns probabilities in a situation where both outcomes occur in parallel. If both options are happening, how can it be meaningful to say that one is more probable than the other – as is experimentally the case if the reflector is not exactly 50/50?

As he described in his Physics World article, Deutsch’s response is to propose that before the measurement, the photon is not just a single particle but is actually an (uncountable) infinity of identical or “fungible” particles. After interacting with the reflector, an infinite number of fungible photons exist in both output channels, but the ratio of these numbers is finite, so that each has a “measure” proportional to the squared modulus of the wavefunction. Even though an observer knows they are going to evolve into two copies of themself, they can apparently assign relative probabilities to which copy they expect to become. These probabilities are given by the Born rule.

Whether Deutsch’s fungibility formulation successfully resolves the question of probabilities is a moot point, but it seems to me that the multiverse concept raises another problem, which other supporters of many-worlds theories thought they had resolved some time ago. This is known as the “preferred basis” problem, and it arises from the fact that there is no unique way to express a quantum state as a superposition of two component states.

Consider a spin-half particle in an eigenstate of the x component of spin. This can be expressed as an equally weighted superposition of the positive and negative eigenstates of the z component – or of the y component, or, indeed, as an appropriately weighted superposition of any two linearly independent spin states. Together, these states constitute a “basis”. Suppose now that we pass such a particle through an apparatus oriented to measure some component of spin. According to the multiverse model, before the measurement, the ratio of the (infinite) numbers of instances of the particle that will appear in the two possible output channels corresponds to the relative probabilities given by the Born rule.
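
To spell out this basis freedom in standard notation (mine, not the article’s):

```latex
% The x-spin eigenstate written in the z basis:
\[
|{+}x\rangle = \tfrac{1}{\sqrt{2}}\,|{+}z\rangle + \tfrac{1}{\sqrt{2}}\,|{-}z\rangle
\]
% For an apparatus measuring spin along an axis at angle theta to z
% (in the x-z plane), the Born-rule weights for this same state are
\[
P(+) = \frac{1 + \sin\theta}{2}, \qquad P(-) = \frac{1 - \sin\theta}{2}
\]
% theta = 90 deg (measuring x) gives 1 and 0; theta = 0 (measuring z)
% gives 1/2 and 1/2. The ratio of the two branches depends entirely on
% the orientation chosen for the apparatus.
```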

However, this ratio is a function of the direction chosen for the measurement, so the initial state of the particle must depend on the nature of the measurement that is still to be performed. Choosing the basis beforehand to suit the properties of the subsequent measurement seems to me to destroy the objectivity of the description of the initial state and, indeed, of the multiverse. This process also implies an additional assumption, which means that we have lost the multiverse’s economy with postulates, while the extravagance with universes remains. Deutsch appears to recognize this difficulty to some extent; he indicates that it is related to the quantum electron “field”, but he does not explain how this could resolve the problem.

Deutsch’s belief in the existence of the multiverse inspired his ground-breaking contributions to quantum computing, and he believes that a successful implementation of a quantum computer would constitute incontrovertible evidence for it. He argues that a quantum computer can carry out some tasks very much faster than a classical one because it performs a large number of calculations simultaneously in parallel universes. However, I believe that this idea is also challenged by the preferred-basis problem.

To see how, let us take as an example the quantum Fourier transform, which is the core operation in Shor’s algorithm for efficiently obtaining the prime factors of large numbers using a quantum computer. This operation subjects a set of qubits – quantum objects such as spin-half particles that have two possible states – to a series of “unitary” operations, which in the spin-half example amount to subjecting the spins to a series of rotations. This creates an entangled state that is a linear superposition of binary representations of the components of the Fourier transform of whichever function was represented by the original configuration of qubits. These separate components certainly form a basis, but there is no obvious reason why this basis should be preferred over any other, or why this quantum process should not occur in a single universe.
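
For readers who want to see this concretely, here is a minimal numerical sketch (my illustration, not Deutsch’s) showing that the quantum Fourier transform is simply a fixed unitary change of basis on the 2^n-dimensional state space. Building the full matrix is classically exponential, so this is for a handful of qubits only:

```python
# Minimal sketch: the quantum Fourier transform on n qubits, written out
# as the 2^n x 2^n unitary matrix it implements on the state space.
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Return the unitary matrix of the quantum Fourier transform."""
    dim = 2 ** n_qubits
    omega = np.exp(2j * np.pi / dim)           # primitive dim-th root of unity
    j, k = np.meshgrid(np.arange(dim), np.arange(dim))
    return omega ** (j * k) / np.sqrt(dim)     # F[j, k] = omega^(j k) / sqrt(dim)

F = qft_matrix(3)                              # 3 qubits -> an 8 x 8 matrix

# Unitarity check: F preserves the norm of every state.
assert np.allclose(F.conj().T @ F, np.eye(8))

# Applied to the computational-basis state |5>, F produces an equal-weight
# superposition over all eight basis states (with varying phases).
state = np.zeros(8, dtype=complex)
state[5] = 1.0
print(np.round(F @ state, 3))
```

Nothing in this matrix singles out the computational basis as special, which is precisely the point at issue.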

Criticisms of many-worlds theories in general and of the quantum multiverse in particular have been around for a long time now, and it is a pity that Deutsch does not recognize and address some of them in his book. Instead, he devotes a chapter to attacking the “bad philosophy” underlying alternative interpretations, particularly the conventional “Copenhagen” interpretation, which relies on making a distinction between the quantum world of particles and the classical world of detectors and observers. Many of us can certainly see weaknesses in the Copenhagen approach, but this does not mean that the multiverse is immune to criticism.

Quantum physics occupies only two of the 18 chapters of this book, which also surveys a wide range of modern science and philosophy. Deutsch’s main theme is the possibilities for future progress that the ongoing scientific revolution has generated. As well as the multiverse’s alleged explanation of quantum physics, these possibilities include the evolution of a culture based on a democratic political system and our ability to achieve anything that is not forbidden by the laws of nature – including immortality. Illness and old age, he writes, “are going to be cured … certainly within the next few lifetimes”.

Deutsch’s ideas are expressed very clearly and the text is enlivened by a number of amusing anecdotes. This is a book that should interest anyone who likes thinking about the deep issues that underlie our understanding of the modern world – provided they maintain some scepticism over its dogmatic tone and reluctance to countenance alternative viewpoints. Deutsch willingly accepts that much of his inspiration comes from the work of Karl Popper, whose mantra “we have a duty to be optimistic” clearly underlies his thinking. However, he would have done well to remember that Popper was often dogmatic, to the point where some wags said that his book The Open Society and its Enemies should have been called “The Open Society by one of its Enemies”!

The geek shall inherit the Earth

“India was once one of the most powerful scientific nations of all,” writes Angela Saini in the preface to Geek Nation. She bases her claim on the fact that Indians were using algebra, the zero and square roots centuries before the West, and that Indian astronomers are believed to have been the first to realize that day and night are caused by the Earth’s rotation. Then Indian science went into a slump, Europe had its Enlightenment and modern science ended up being shaped mostly in the West. But now, Saini writes, “this impoverished tea- and cotton-growing backwater is starting to reclaim the scientific legacy that it lost thousands of years ago”, and she is “here to learn about this rediscovered nation of geeks”.

Unfortunately, this highlights one of the problems with Geek Nation, which is that Saini, a UK-based science journalist, seems to know from the outset what she wants to find in India. In her first interview on arriving in the country, she tells one of the pioneers of India’s space programme that her book is to be titled Geek Nation and explains that, to her, “geekiness is all about passion. It’s about choosing science and technology or another intellectual pursuit…and devoting your life to it.”

The first few people she meets do not live up to her expectations. One of them is a physics prodigy who turns out not to be curious or brilliant but just hard-working and dreary. She spends a week in one of the most selective engineering colleges in India, the Indian Institute of Technology, Delhi (IITD), but finds the place dilapidated and the students uncreative and unresponsive. She visits India’s largest software company, Tata Consultancy Services, and finds no trace of innovation. Saini is disappointed at not finding any geeks: “All I can see are drones.”

Then, just when Saini wonders if “India is genuinely turning into a geeky, scientific society, or whether it’s just hype”, she visits IBM’s India research lab and sees the development of the Spoken Web, a voice-based version of the Internet that can be accessed through a phone call by the underprivileged or illiterate. She visits a start-up in Bangalore where she finds innovation. And she goes back to IITD and finds that things there are not as bad as she initially thought. She even finds a student who is interested in design and is writing a novel about robots, and enthusiastically hails him as “a real geek”.

Despite her (belated) success in finding someone who conforms to it, this narrative created around “geekiness” feels forced. Even Saini’s constant use of the word “geek” – perhaps to justify the book’s title – becomes tiresome. Hers is a “geeky mission”; the founder of India’s space programme, Vikram Sarabhai, was a “good-looking geek”; and a tuberculosis researcher whose work may have damaged her health is saluted as a “true geek”.

A bigger problem, though, is Saini’s grasp of the facts. In her introductory chapter, for example, she writes that the first Indian prime minister, Jawaharlal Nehru, “even wedged a plea into the Indian constitution, announcing, ‘It shall be the duty of every citizen of India to develop the scientific temper’.” She repeats this claim in a later chapter, but although this line does appear in the Indian constitution, it was introduced more than a decade after Nehru’s death. Elsewhere, an environmental activist tells Saini that “quantum theory teaches you that things are connected”. Saini guesses that the activist is mixing up unrelated branches of science by invoking the idea of quantum entanglement, “which says that every particle in the universe is connected to every other, however distant it is, on a fundamental level” – yet this quasi-mystical definition is scarcely better than the activist’s claim. Then there is a stray factoid that turns out to be only three-quarters true: “[Tuberculosis] killed Chekhov, Kafka, Keats and Napoleon.”

In other places, Saini comes across as fanciful in her perceptions, writing of one interviewee that he “spends so much time designing computer programs that he speaks in peculiarly clipped sentences”. When she meets a professor who has chosen science over religion, and comes across as calm and well adjusted, Saini even appears to indulge in a bit of “hot reading” when she says that behind his “happy, baggy eyes, I can see how hard the tension between the two must have been for him”.

There are too many instances of error and imprecision in Geek Nation to list, but a particularly flagrant episode is Saini’s description of India’s first Moon mission. According to Saini, the probe Chandrayaan-1 was launched from “a remote island” and it “wandered the lunar surface for months” before being “forced to come back to Earth”. But the launch site, Sriharikota, is just off the Indian coast and is easily accessible by road. Chandrayaan-1 was an orbiter with an impact probe, had no rover capable of wandering and only its radio signals ever came back to Earth. These errors end up compromising the wider argument they are part of. In addition to factual errors, the book contains unattributed perceptions about the Chandrayaan-1 mission, such as this (p5): “Nobody was sure whether the project would be a success or a waste of time, but the reputation of Indian science depended on it. Many in the scientific community had always assumed that India would never be able to afford a space mission.” Statements such as these call upon the reader to trust Saini’s command of relevant facts and judgement, but this becomes increasingly difficult as the book progresses.

Despite its limitations, Geek Nation does manage to provide a broadly impressionistic view of Indian science today. The book’s sub-title, “How Indian Science is Taking Over the World”, is debatable, but there is no doubt that steady progress is being made. Saini has chosen material that is rich in its potential to yield insights about science and technology in India and the cultural battles being fought around it: the education system; genetically modified crops; the sometimes overlapping boundaries between science, superstition and religion; the possibilities created by electronic governance in a country reeling from corruption; open-source drug discovery; and nuclear power. She also meets an array of people – industrialists, scientists, students, academics and traditional scholars – whose ideas are often interesting or insightful. With some better fact-checking and with a less predetermined view, Geek Nation might have been a very good book.

El Niño marches to the same beat as seasonal change

The El Niño–Southern Oscillation (ENSO) occurs in the Pacific Ocean every few years and the resulting weather conditions can wreak havoc on people and the environment, particularly in Latin America and South East Asia. Predicting when an ENSO event will occur has confounded scientists because the phenomenon does not appear at regular intervals. But a new study by researchers at institutions in the US could provide an important step in our understanding of this phenomenon, by establishing a direct link between ENSO and the annual global weather cycle.

El Niño, meaning “the Christ child”, is so called because the first signs of its appearance are marked by a warm current off the coast of Ecuador just after Christmas. These rising sea temperatures are related to a weakening of the trade winds that usually transport warm surface waters to the western margin of the Pacific. During an ENSO phase – such phases occur every 2–7 years – these warmer waters accumulate in the eastern tropical Pacific.

Individual ENSO episodes can last up to two years and lead to severe flooding in Latin America and droughts in South East Asia. One extreme cycle in 1997–1998 had far-reaching consequences, including extensive fires in the Indonesian rainforests and mudslides in California. Another impact of El Niño is that the accumulated warm water acts to block cold-water currents, which usually transport nutrients from the deep ocean to ecosystems along the Latin American coast. This can have a devastating effect on the fish stocks that form an important part of the economy in countries such as Peru and Colombia.

Mysterious origins

Despite El Niño’s familiarity, scientists still do not fully understand what triggers these events, how they are sustained or what finally causes an ENSO cycle to subside. One thing that has been noted is that once ENSO episodes are under way, they all tend to follow a similar pattern of developing during summer or autumn in the northern hemisphere, then peaking during the northern winter. This interaction between ENSO and the annual cycle has now been established more firmly in a numerical study by Karl Stein and his colleagues at the University of Hawaii at Manoa.

Stein’s team has analysed observations of sea-surface temperature from the UK Met Office Hadley Centre spanning the period 1964–2007 and covering 20°S–20°N and 120–290°E. The extensive numerical analysis showed that ENSO events and the annual variation in temperature in the eastern Pacific are synchronized in a “2:1 Arnold tongue”. In simple terms, this means that during a positive phase, ENSO and the annual cycle run according to the same beat but the seasonal cycle is moving twice as fast as ENSO.
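
“Arnold tongues” come from the general theory of coupled oscillators and are usually illustrated with the sine circle map. The toy sketch below (my illustration, not the team’s analysis) shows how a locked frequency ratio is detected numerically, by looking for a plateau in the winding number:

```python
# Toy illustration of Arnold-tongue mode locking using the sine circle
# map, a standard model of one oscillator periodically forced by another.
import numpy as np

def winding_number(omega: float, k: float, n_steps: int = 5000) -> float:
    """Average phase advance per forcing period (the winding number).

    A value pinned at a rational p/q over a range of omega signals
    p:q mode locking -- an "Arnold tongue" in the (omega, k) plane.
    """
    theta = 0.0
    for _ in range(n_steps):
        theta += omega + (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)
    return theta / n_steps

# Scanning the bare frequency ratio omega near 0.5 at moderate forcing:
# a plateau where the winding number sticks at exactly 0.5 marks the 1:2
# tongue -- the analogue of the 2:1 annual-cycle/ENSO locking above.
for omega in np.linspace(0.45, 0.55, 11):
    print(f"omega = {omega:.2f}  winding = {winding_number(omega, k=0.9):.3f}")
```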

Stein told physicsworld.com that one of the ultimate goals in characterizing ENSO is to develop a means of predicting when the next large warming event might occur. “Understanding the relative importance of amplitude versus phase modulation should lead to a better understanding of the physics involved in synchronizing ENSO to the annual cycle, which should hopefully lead to better predictions,” he says. Stein believes that given the complexity of the climate system, realistically we could hope to predict the state of the equatorial Pacific only months or a year ahead of time at best.

Numerical connection

Rameshan Kallummal, a climate scientist at the Centre for Mathematical Modelling and Computer Simulation in Bangalore, India, is impressed by the fact that the new research establishes a quantitative relationship between ENSO and the annual cycle. However, he feels that gaining a better understanding of El Niño will also require improved climate monitoring. “The main limitations to our understanding come from the various practical constraints in setting up well-distributed observing systems capable of making measurements continuously,” he says.

Stein says that he and his colleagues intend to develop their research by investigating the influence that the tropical convergence zones have on the timing of ENSO events. He believes that the main outstanding questions relate to how ENSO will respond to future changes in the global climate. “The ENSO cycle is always going on; right now, we’re observing La Niña [cold] conditions that are likely to persist through the winter,” he says.

This latest research is published in Physical Review Letters.

Cyclotrons could boost technetium supply


The medical isotope technetium-99m (Tc-99m) could be made in hospitals rather than nuclear reactors. That is the conclusion of researchers in Canada, who have done theoretical modelling of how the material could be produced and processed in medical cyclotrons. Today Tc-99m is made centrally in a few nuclear reactors, and cyclotron-based production could help to alleviate shortages that can occur when a reactor shuts down.

Although Tc-99m is used in a wide range of nuclear medicine procedures, the isotope is produced at only five nuclear reactors worldwide. The fragility of supply was highlighted recently by a global shortage brought about by the unscheduled shutdown of a reactor in Canada. As a result, physicists are keen to develop alternative methods for making the material.

This latest study was done by Anna Celler of the University of British Columbia and colleagues and is part of a C$35 million initiative by the Canadian government to look for alternative manufacturing techniques for Tc-99m.

Unwanted isotopes

In principle, Tc-99m can be made using a hospital’s medical cyclotron to bombard molybdenum with a proton beam, causing the transmutation of some of the molybdenum-100 nuclei into Tc-99m. However, molybdenum targets are expensive and the technique produces other unwanted isotopes that reduce the diagnostic benefit for the patient. The viability of the technique must therefore be carefully scrutinized – however, doing experiments is extremely expensive.

Now Celler and colleagues have developed a theoretical model that predicts the viability of the method and estimates logistical parameters, such as the number of cyclotron runs needed to meet the daily demands of a typical nuclear medicine department. The reaction conditions needed for optimal yields, such as beam energy and target geometry, were also identified.

The researchers used the nuclear-reaction model code EMPIRE-3 to calculate the cross-section, or probability, of each of the possible molybdenum–proton reactions, across an energy range of 6–30 MeV. The simulation confirmed that the numerous molybdenum–proton reactions produced multiple contaminants, including several technetium, molybdenum, niobium and zirconium isotopes. Together with the yield calculations, the EMPIRE-3 simulation also demonstrated that only molybdenum targets enriched with molybdenum-100 were viable for efficient Tc-99m production. Natural molybdenum, with its composition of several isotopes, produced significant amounts of contaminant isotopes.
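
Yield estimates of this kind rest on the standard thick-target activation integral: the number of Tc-99m nuclei produced per incident proton is the cross-section, divided by the stopping power, integrated over the proton’s slowing-down path. The sketch below uses placeholder numbers throughout – the study’s actual EMPIRE-3 cross-sections are not reproduced here:

```python
# Sketch of the standard thick-target yield integral for Mo-100(p,2n)Tc-99m.
# All numerical inputs below are placeholders, NOT values from the study.
import numpy as np

def thick_target_yield(e_grid, sigma_mb, stopping_mev_cm2_per_g,
                       e_in=19.0, e_out=16.0):
    """Reactions per incident proton for a target that degrades the beam
    from e_in to e_out (MeV), given sigma(E) and the stopping power."""
    avogadro = 6.022e23
    molar_mass = 100.0                          # g/mol for a Mo-100 target
    mask = (e_grid >= e_out) & (e_grid <= e_in)
    # integrand: (atoms/g) * sigma (cm^2) / (dE/dx in MeV cm^2/g) -> 1/MeV
    integrand = (avogadro / molar_mass) * (sigma_mb[mask] * 1e-27) \
                / stopping_mev_cm2_per_g[mask]
    return np.trapz(integrand, e_grid[mask])    # integrate over energy

# Placeholder inputs spanning the 6-30 MeV range analysed in the paper:
e_grid = np.linspace(6.0, 30.0, 500)
sigma = 150.0 * np.exp(-((e_grid - 15.0) / 5.0) ** 2)  # hypothetical sigma(E), mb
stopping = 20.0 * np.ones_like(e_grid)                 # hypothetical dE/dx
print(f"{thick_target_yield(e_grid, sigma, stopping):.2e} reactions per proton")
```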

Ideal proton energy

The researchers also identified 16–19 MeV as the optimal proton energy range for Tc-99m production. In this range, relative Tc-99m yields were greatest when compared with contaminant isotopes. Multiple short molybdenum-100 irradiation cycles per day, each 3–6 hours long, also proved to be the most efficient production schedule.
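
The case for several short runs follows from the standard activation relation, given that Tc-99m decays with a half-life of about six hours (a textbook figure, not one quoted in the article):

```latex
% Activity built up during an irradiation at constant production rate R:
\[
A(t) = R\left(1 - e^{-\lambda t}\right),
\qquad \lambda = \frac{\ln 2}{t_{1/2}}, \quad t_{1/2} \approx 6\ \text{h}
\]
% One half-life of irradiation already yields 50% of the saturation
% activity R; a second half-life adds only a further 25%. Several short
% runs per day therefore deliver more usable activity than one long one.
```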

“We are very happy with these results: not only are our theoretical calculations in agreement with the existing experimental data, but also they provide us with guidance for future experiments and suggest what could be the optimal conditions for technetium production,” said Celler. “The yields are sufficient, so that even cyclotrons designed to produce PET [positron emission tomography] radionuclides can produce sufficient quantities of Tc-99m to meet local needs.”

The researchers are now using their results to calculate radiation doses to patients that will result from the cyclotron-produced technetium. “These dose calculations can then be compared with those related to reactor-produced technetium and will serve as guidance for the selection of target enrichment,” explained Celler.

The research is described in Phys. Med. Biol. 56 5469.

The rhythm is gonna get you – where you want to go

Cell in the entorhinal cortex (Credit: Journal of Neuroscience)

By Tushna Commissariat

Last week I wrote about physicists in Europe who have developed a model to better understand the neuron activity in the brain that occurs when we listen to music. Their simulations suggest that certain notes sound harmonious because of the consistent rhythmic firing of neurons in the auditory system. They quantified this effect by showing that neural signals are regularly spaced for frequencies that are pleasant sounding, but are erratic for those that are not. The same researchers also said that their model may provide insights into other senses, such as vision, that employ similar neural processing systems. Hot on the heels of that statement, this week I came across a paper in the Journal of Neuroscience discussing how neurons work to “code position in space” – simply put, the team looked at what mental processes occur while your brain perceives the space you are in and helps you to navigate within it.

Motoharu Yoshida and colleagues at Boston University in the US investigated how the rhythmic activity of nerve cells supports spatial navigation. The scientists showed that cells in the entorhinal cortex – which is located in the medial temporal lobe of the brain and acts as the main interface between the hippocampus and neocortex, playing an essential role in episodic and spatial memory formation – oscillate with individual frequencies, with the frequencies depending upon the positions of the cells within that cortex. Until now, it was believed that the frequency was modulated by the interaction with neurons in other brain regions, but in the light of these new data, this may be incorrect.

“The brain seems to represent the environment like a map with perfect distances and angles,” explains Yoshida. “However, we are not robots with GPS in our head. But the rhythmic activity of the neurons in the entorhinal cortex seems to create a kind of map.” The activity of individual neurons in this region of the brain represents different positions in space, according to the researchers. The rhythmic activity of each cell may enable us to code a set of positions, forming a regular grid in the brain. Researching the capacity that most animals have for spatial navigation is always of interest, as a thorough understanding of it could lead to a clearer picture of how our brains function in general.

Cells dine on nanotubes with dire results

A new study by researchers in the US reveals that nanometre-thick strands such as nanotubes and nanowires enter biological cells head-on and almost always at a 90° angle. This orientation means that a cell mistakes the long cylinder for a sphere and tries to ingest it – with dire consequences. The findings could be important for designing safer, less-toxic nanomaterials.

Biological cells ingest objects in their environment by engulfing them, in a process known as endocytosis. The particles are taken into the cell, or “internalized”, and subsequently degraded.

However, things do not quite work out this way when it comes to very thin tubes, wires and fibres, say Huajian Gao and colleagues at Brown University. Optical-microscope techniques such as in situ “spinning disk confocal microscopy” have already shown that multiwalled carbon nanotubes enter cells head-on, or tip first. Now Gao’s team has gone a step further by using ex situ field-emission scanning electron microscopy, which provides resolution on the nanoscale, to confirm these observations.

Similar to asbestos

The researchers have also found that most nanotubes enter a cell perpendicular to it. Indeed, a nanotube placed at a shallow angle (say 30°) to a cell membrane will actually rotate so that its tip ends up lying closer to 90°. Only then will the tip be engulfed by the cell. Such “tip entry”, as it has been dubbed, also occurs for other materials the team tested, such as gold nanowires and crocidolite asbestos fibres.

Gao and co-workers believe that the cells mistakenly treat the nanotubes as sphere-shaped particles rather than long cylinders. “But, by the time the cell realizes that the nanotubes are too long to be fully ingested, it is too late,” explains Gao. “Once the engulfing process begins, there is no turning back – within minutes, a cell senses that it cannot fully engulf the nanostructure and it calls for ‘help’; that is, it triggers an immune response that can cause inflammation.”

To test the theory, the team carried out so-called coarse-grained molecular dynamics (CGMD) simulations of how nanotube tips interact with a model lipid bilayer. The researchers found that receptors (which are made of proteins in real biological cells) diffuse along the bilayer and cluster around the nanotube. The receptors then adhere to the nanotube surface, pulling the tube into the cell as the bilayer wraps around the nanotube tip.

Minimizing energy

The likely reason why the cell rotates the tube to 90° is that this angle seems to reduce the amount of energy needed by the cell to engulf the tube. According to the researchers, the rotation is caused by a torque exerted by the curved bilayer on the partially wrapped tube.

The researchers repeated their simulations for nanotubes with different diameters and lengths, and found that in the main the entry mechanism was the same. They did notice, however, that nanotubes with open ends – that is, those with their caps cut off – did not enter cells at all but instead lay on the cell membrane. This is because open tubes lack the carbon atoms to bind to the receptors on a cell membrane, says Gao.

“Our work shows that tip geometry plays an important role in cell entry and can govern how cells and 1D nanomaterials interact with each other,” says Gao. “This leads us to pose the interesting question of whether modifying tube tips might help avoid such ‘frustrated’ uptake.”

Nanomaterials becoming ubiquitous

Understanding how nanomaterials interact with cells is crucial as nanotechnology becomes ever more present in our lives. Materials such as nanotubes, for instance, show great promise for next-generation microchips, composites, coatings, biosensors and, importantly, as drug-delivery vehicles.

Ken Donaldson of the University of Edinburgh in the UK, who was not involved in the work, says that more experiments are needed to confirm these results. “This study is interesting but is largely based on modelling – calculations on what nanofibres might do when they encounter a cell membrane. Would this be a mechanism that actually occurs in real life?” he asks.

Donaldson’s team has also studied nanotube–cell interactions and has obtained images that show some nanotubes entering a cell end-on but others that do so from the middle, and others still that do not enter at all.

Gao’s group published its results in Nature Nanotechnology 10.1038/nnano.2011.151.

Welcome to The Age of the Qubit


By Hamish Johnston

Many things have changed in physics since I was a postgraduate in the 1990s. But if you asked me what the most exciting change has been, I would say the emergence of quantum information and computing as a discipline.

Or, more precisely, the fact that physicists are now able to create devices that routinely exploit quantum properties such as superposition and entanglement to make calculations and exchange information.

Granted, the devices are primitive and the calculations are limited, but let’s not forget that less than 70 years ago the first transistor was a large lump of germanium with wires sticking out of it.

The Institute of Physics (IOP) has just come out with a glossy brochure that captures the excitement of this revolution. It’s called The Age of The Qubit and a PDF can be downloaded here. Most of the IOP’s members are in the UK, so the brochure focuses on research done in Britain and Northern Ireland.

A new world of physics books

Physics World podcasters. Left to right: Margaret Harris, James Dacey, Matin Durrani

By James Dacey

“What I have to say about this book can be found inside the book,” was Einstein’s reply to a New York Times reporter’s request for a comment on his book, The Evolution of Physics.

But books, from my experience, have always had the power to raise questions that can stay with you, sometimes for years after finishing the final page.

In a new move at Physics World we are venturing deep into the world of books by creating a series of podcasts devoted to physics books and the issues they cover. These programmes will uncover the stories behind the stories, as we’ll be discussing some of the books recently reviewed in Physics World. And hopefully some of the authors will be a bit more forthcoming than Einstein and we’ll hear them discuss their books. I’ll be presenting the shows alongside Physics World’s editor Matin Durrani and the magazine’s reviews editor Margaret Harris.

On Friday we went into a studio at the University of Bristol to record our first podcast, a snapshot of which you can see in the photo above. The books we discussed in this debut programme all raised some interesting questions around the theme of “women in science”. The titles were as follows:

Discoverers of the Universe: William and Caroline Herschel by Michael Hoskin
Science Secrets: the Truth About Darwin’s Finches, Einstein’s Wife and Other Myths by Alberto Martinez
Soft Matter: the Stuff that Dreams are Made Of by Roberto Piazza

The programme will be available on this website to listen to or download by mid-October, and it will also contain an interview with feminist historian Julie Des Jardins about her book, The Madame Curie Complex. And don’t forget that in the meantime you can continue to read book reviews each month in Physics World and on physicsworld.com.

TV's Sheldon bags Emmy

From left to right, Raj, Howard, Leonard and Sheldon build a robot to enter a fighting-robot competition. (Courtesy: Warner Bros Television Entertainment)

By Matin Durrani

Yes we know that physicsworld.com is probably not your first port of call for red-hot showbiz news, but congratulations to Jim Parsons for picking up an award for “outstanding lead actor in a comedy series” at last night’s Emmy Awards in Los Angeles.

Parsons, as you may well be aware (and if you’re not, then you really have been living under a stone), plays socially inept physics postdoc Sheldon Cooper on the hit CBS TV comedy show The Big Bang Theory.

Parsons, 38, bagged the same award last year, which marked the show’s first Emmy win. This time he beat his co-star Johnny Galecki, who plays fellow physicist Leonard Hofstadter.

On the show, a fifth series of which is set to start in the US on Thursday 22 September, the two physicists share an apartment in Pasadena, with Leonard being what my colleague Tushna (who’s a self-confessed Big Bang nut) calls “a quintessentially cute geek”, who stoically puts up with Sheldon’s comical antics.

The appeal of the show lies partly in the relationships between Sheldon, Leonard and their two pals Raj (another physicist) and Howard (an engineer), but also in their interactions with “near-normal” neighbour Penny (Kaley Cuoco), who is the foil to the others’ actions.

But what’s made the show so popular with scientists – 2005 Nobel laureate Jan Hall told us it is just so damn funny – is that it is peppered with references to physics, most of which are reasonably coherent, thanks in part to the contributions of the show’s science consultant – astrophysicist David Saltzberg from the University of California, Los Angeles.

I sat through about a dozen episodes on a long flight back to the UK from Australia a couple of months back and found the show moderately amusing – if not laugh-out-loud funny – and felt the writing was (like many other US sit-coms) a bit manufactured for my taste.

But, damnit, what do I know? Parsons has won an Emmy so he must be doing something right. And according to that trusted information source, Wikipedia, he, Galecki and Cuoco each earned $200,000 per episode in the last series, which is more than most postdocs earn in five years.

You can read more about the show in this great feature article we published last year, which includes interviews with Galecki, Saltzberg, Simon Helberg (who plays Howard) and the show’s creator, writer and executive producer Chuck Lorre.

Graphene bubbles could make better lenses

A tiny bubble of graphene could be used to make an optical lens with an adjustable focal length. That is the claim of physicists in the UK, who have shown that the curvature of such bubbles can be controlled by applying an external voltage. Devices based on the discovery could find use in adaptive-focus systems that try to mimic how the human eye works.

Graphene is a sheet of carbon just one atom thick and has a host of unique mechanical and electronic properties. It is extremely elastic and can be stretched by up to 20%, which means that bubbles of various shapes can be “blown” from the material. This, combined with the fact that graphene is transparent to light yet impermeable to most liquids and gases, could make the material ideal for creating adaptive-focus optical lenses.

Such lenses are employed in mobile-phone cameras, webcams and auto-focusing eyeglasses, and are usually made of transparent liquid crystals or fluids. Although such devices work well, they are relatively difficult and expensive to make. In principle, graphene-based adaptive optics could be fabricated using much simpler methods than those used for existing devices. They could also become cheaper to produce if industrial-scale processes to manufacture graphene devices become available.

Tiny bubbles

Now Andre Geim and Konstantin Novoselov – who shared the 2010 Nobel Prize for Physics for the discovery of graphene – have built tiny devices that show how graphene could be used in adaptive optical systems. Working with colleagues at the University of Manchester, the physicists began by preparing large graphene flakes on flat silicon-oxide substrates. When the air underneath the graphene cannot escape, a bubble of the material naturally forms. The bubbles are extremely stable and range in size from a few tens of nanometres to tens of micrometres in diameter.

To show that the bubbles could work as adaptive-focus lenses, the team made devices that contained titanium/gold electrodes contacted to the bubbles in a transistor-like arrangement. In this way, the researchers were able to apply a gate voltage to the set-up. They then obtained optical-microscope images of the structures while tuning the gate voltage from –35 to +35 V. As expected, they saw the shape of the bubbles go from being highly curved to more flat as the voltage changed.

Real, working lenses could be made by filling the graphene bubbles with a high-refractive-index liquid or by covering the bubbles with a flat layer of such a liquid, say the researchers.
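
As a rough back-of-the-envelope (mine, not the authors’), a liquid-filled bubble behaves as a plano-convex lens, so the familiar lensmaker’s formula connects the voltage-tuned curvature to a focal length:

```latex
% Thin-lens estimate for a bubble of curvature radius R filled with a
% liquid of refractive index n (flat on the substrate side):
\[
\frac{1}{f} = (n - 1)\frac{1}{R}
\quad\Longrightarrow\quad
f \approx \frac{R}{n - 1}
\]
% Flattening the bubble with the gate voltage increases R and hence f,
% giving an electrically adjustable focal length.
```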

So, what is next? “We have shown that controlling the curvature of these bubbles is an easy task,” says Novoselov. “We are now looking at performing other experiments where more complicated deformations in graphene would be created and controlled.”

The results are published in Applied Physics Letters 99 093103.
