
Arrived and ready


The 21-hour door-to-door trip is behind us now as we focus on the start of the conference tomorrow. We arrived at the hotel early on Sunday morning after a quick connection in Chicago. The whole trip from Heathrow to New Orleans went quite smoothly, except for the need to change planes in Chicago after we were all seated and ready to go — apparently there was a problem with the braking system, so I wasn’t complaining about changing planes. It was also on the flight from Chicago that it became apparent that there were possibly many physicists on board, most of them armed and ready with poster tubes.

Today, we had our first chance to see some of New Orleans. We had a brief walk around the French Quarter and along the Mississippi river, where most of the hotels are situated near the convention center. From these areas, though, the devastation inflicted by Hurricane Katrina in 2005 was not immediately apparent.

I popped into the convention center itself, and already saw a mass of physicists queuing up to register. It is only on arriving at the center that the scale of the APS March meeting hits you, with its massive halls where the exhibitions are held. Some people already have their hands on the thick conference book and are studying it meticulously, no doubt looking for the location of colleagues’ talks.

After a few too many shrimps this afternoon, we are ready for the conference tomorrow and look forward to keeping you updated on all the ins and outs of the 2008 APS March meeting.

Off to New Orleans


Michael and I are leaving for New Orleans bright and early tomorrow morning — along with five other colleagues from IOP Publishing. Our journey begins in Bristol at about 9.30 in the morning and if all goes well, we will arrive in New Orleans just before midnight (local time). I reckon that’s about 21 hours door-to-door. Unless we get snowed in at Chicago, that is!

We have just put the finishing touches on our battle plan for what promises to be an intensive week of condensed matter physics. Actually, more than just condensed matter is on the agenda. Michael will be looking into “econophysics” and the physics of the stock market, while I’m looking forward to learning about the physics of hurricane formation and climate change.

See you in the Big Easy!

Nanotubes measure DNA conductivity

Ever since the famous double-helix structure of DNA was discovered more than 50 years ago, researchers have struggled to understand the complex relationships between its structural, chemical and electrical properties. One mystery has been why attempts to measure the electrical conductivity of DNA have yielded conflicting results, suggesting that the molecule is an insulator, semiconductor, metal — and even a superconductor at very low temperatures.

DNA’s apparent metallic and semiconducting properties, along with its ability to self-replicate, have led some researchers to suggest that it could be used to create electronic circuits that assemble themselves. Now, however, a team of researchers in the US has shown that DNA’s electrical conductivity is extremely sensitive to tiny structural changes in the molecule — which means that it could be very difficult to make reliable DNA circuits.

Reliable connection

Colin Nuckolls of Columbia University, Jacqueline Barton of Caltech and colleagues were able to make reliable conductivity measurements by inventing a new and consistent way of connecting a single DNA molecule to two carbon nanotubes (Nature Nanotechnology 3 163). Past methods had struggled to make a reliable connection between a DNA molecule — which is only about 2 nm wide — and two electrodes. Poor connectivity is thought to be behind many of the inconsistencies in previous measurements.

The team began with a nanotube — a tiny tube of carbon about as thick as DNA itself — that was integrated within a simple electrical circuit. A 6-nm section of the nanotube was removed using plasma ion etching. This procedure not only cuts the tube, but also oxidizes the remaining tips, making it possible to bridge the gap with a DNA molecule whose ends have been designed to form strong chemical bonds with the oxidized tips.

Similar to graphite

The conductivity was determined simply by applying 50 mV across the DNA and measuring the current that flowed through it. In a standard piece of DNA, the conductivity was similar to that seen in graphite. This is consistent with the fact that the core of the double helix of DNA consists of stacked molecular rings that have a similar structure to graphite.

A benefit of having the DNA attached securely to the electrodes is that the conductivity can be studied under ambient conditions — in a liquid at room temperature. This allowed the team to confirm that they were actually measuring the conductivity of DNA and not something else in the experiment. This was done by adding an enzyme to the surrounding liquid that cuts DNA – and as expected the electrical circuit was broken.

Mismatched bases

The team were also able to investigate the effect of base mismatches on conductivity. DNA double strands are normally connected through interactions between particular bases—adenine to thymine and cytosine to guanine. If one of the bases in a pair is changed, the two strands will still stick together, but with an altered structure around the mismatched bases.

The team first measured the conductivity of a well-matched strand and then exchanged it for a strand with a single mismatch. This single mismatch boosted the resistance of the DNA by a factor of 300. According to Jacqueline Barton “this highlights the need to make measurements on duplex DNA that is well-matched, undamaged, and in its native conformation.”

An important implication of this sensitivity to small changes in structure is that DNA by itself might not be an ideal component for future electronic devices.

Indeed, this inherent sensitivity to structural change could allow living cells to detect DNA damage, which can accumulate in cells and lead to problems including cancer. Cells have ways of repairing this damage, but the mechanism they use to detect damage is still not completely understood. Barton says that this “whole body of experiments now begs the question of whether the cell utilizes this chemistry to detect DNA damage.” This is a question her group is now trying to answer.

Artificial black hole created in lab

Everyone knows the score with black holes: even if light strays too close, the immense gravity will drag it inside, never to be seen again. They are thought to be created when large stars finally spend all their fuel and collapse. It might come as a surprise, therefore, to find that physicists in the UK have now managed to create an “artificial” black hole in the lab.

Originally, theorists studying black holes focused almost exclusively on applying Einstein’s theory of general relativity, which describes how the gravity of massive objects arises from the curvature of space–time. Then, in 1974, the Cambridge University physicist Stephen Hawking, building on the work of Jacob Bekenstein, showed that quantum mechanics should also be thrown into the mix.

Hawking suggested that the point of no return surrounding a black hole beyond which light cannot escape — the so-called event horizon — should itself emit particles such as neutrinos or photons. In quantum mechanics, Heisenberg’s uncertainty principle allows such particles to spring out of the empty vacuum in pairs all the time, although they usually annihilate shortly after. But if two particles were to crop up on either side of a black hole’s event horizon, the one on the inside would be trapped while the one on the outside could break free. To an observer, the black hole would look like a thermal body, and these particles would be the black hole’s “Hawking radiation”.

This is all very well in theory, but in practice Hawking radiation from a black hole would be too low to be detected above the noisy cosmic microwave background (CMB) left over from the Big Bang. Simply put, black holes are too cold. Even the smallest black holes, which according to Hawking should have the warmest characteristic temperature, would still be about eight orders of magnitude colder than the CMB.
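To put a number on this, the Hawking temperature of a black hole of mass M is T = ħc³/(8πGMk_B). A quick sanity check in Python (taking three solar masses as a rough stand-in for the lightest stellar-mass black holes — my assumption, not a figure from the article) reproduces the roughly eight-orders-of-magnitude gap below the CMB:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.989e30        # solar mass, kg

def hawking_temperature(mass_kg):
    """Characteristic (Hawking) temperature of a black hole of given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T_CMB = 2.725  # temperature of the cosmic microwave background, K

# A ~3 solar-mass black hole: roughly the lightest stellar-mass black hole
T_bh = hawking_temperature(3 * M_sun)
print(f"Hawking temperature: {T_bh:.2e} K")
print(f"Orders of magnitude colder than the CMB: {math.log10(T_CMB / T_bh):.1f}")
```

A solar-mass black hole comes out at about 6 × 10⁻⁸ K, and heavier holes are colder still, so any Hawking glow is hopelessly buried in the CMB.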

Faced with the difficulty of observing Hawking radiation from astrophysical black holes, some physicists have attempted to make artificial ones in the lab that have a higher characteristic temperature. Clearly, generating huge amounts of gravity is both dangerous and next to impossible. But artificial black holes could be based on an analogous system in which the curved space–time of a gravitational field is enacted by another varying parameter that affects the propagation of a wave. “We cannot change the laws of gravity at our will,” Ulf Leonhardt at the University of St Andrews in the UK tells physicsworld.com. “But we can change analogous parameters in a condensed-matter system.” Leonhardt’s group at St Andrews is the first to create an artificial black-hole system in which Hawking radiation could be detected (Science 319 1367).


Fishy physics

The idea of using analogous systems to create black holes was first proposed by William Unruh of the University of British Columbia in 1981. He imagined fish trying to swim upstream away from a waterfall, which represents a black hole. Beyond a certain point close to the waterfall, the current becomes so strong — like an event horizon — that fish cannot swim fast enough to escape. In the same vein, Unruh then considered what would happen to waves flowing from the sea into a river mouth. Because the current gets stronger farther up a river, the waves can only progress so far upstream before being defeated. In this way, the river is a “white hole”: nothing can enter.

In the St Andrews experiment, which uses the refractive index of an optical fibre as the analogue of a gravitational field, there are actually both black and white holes. It relies on the fact that the speed of light in a medium is determined not only by the light’s wavelength, but also by the refractive index.

The group begins by sending a pulse of light through an optical fibre that, as a result of a phenomenon known as the Kerr effect, alters the local refractive index. A split-second later they send a “probe” beam of light, which has a wavelength long enough to travel faster through the fibre and catch up with the pulse. But due to the altered refractive index around the pulse, the probe light is always slowed enough to prevent it from overtaking — so the pulse appears as a white hole. Likewise, if the group were to send the probe light from the opposite end of the fibre, it would reach the pulse but would not be able to go through to the other side — so the pulse would appear as a black hole.
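The white-hole behaviour can be caricatured in a one-dimensional toy model (all parameters below are invented for illustration, not taken from the experiment): the pulse moves at a constant speed, while the probe's local speed is c/n, with n raised in the pulse's vicinity by the Kerr effect. The probe closes in on the pulse but stalls at an effective horizon:

```python
import math

# Toy parameters (arbitrary units, chosen for illustration only)
c = 1.0          # "vacuum" speed of light
n0 = 1.0         # background refractive index
dn = 0.5         # Kerr-induced index increase at the pulse
w = 1.0          # width of the index perturbation
v_pulse = 0.8    # pulse speed (slower than the far-field probe speed)

def n(dist_behind):
    """Refractive index a given distance behind the pulse."""
    return n0 + dn * math.exp(-(dist_behind / w) ** 2)

x_pulse, x_probe = 0.0, -5.0   # probe starts 5 units behind the pulse
dt = 0.01
for _ in range(5000):
    gap = x_pulse - x_probe
    x_probe += dt * c / n(gap)  # probe slows as it nears the pulse
    x_pulse += dt * v_pulse

print(f"final gap: {x_pulse - x_probe:.3f}")
```

The gap settles near √(ln 2) ≈ 0.83 units, the point where c/n equals the pulse speed: the analogue of an event horizon for the probe.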


Over the event horizon

Leonhardt and his colleagues proved that these black- and white-hole event horizons exist by monitoring the group velocity of the probe light, which never exceeded that of the pulse. More importantly, they have calculated that it should be possible to detect Hawking-radiation particles produced at either of the event horizons by filtering out the rest of the light at the far end of the fibre.

The detection of Hawking radiation would help physicists bridge the gap between quantum mechanics and general relativity, two presently incompatible theories. It might also help physicists investigate the mystery surrounding the wavelength of photons emitted at an event horizon, which is thought to start at practically zero before being stretched almost infinitely via gravity.

However, Renaud Parentani of University Paris-Sud in France thinks that, although it may be possible to glimpse radiation from an event horizon in future versions of the group’s system, the radiation might not possess all the expected properties of Hawking radiation generated by astrophysical black holes. For instance, the fibre-optic system is limited by dispersion, which means that the wavelength of photons produced at the event horizon will not be stretched very far. “What are the minimal properties required to induce Hawking radiation in a lab system the way we think it is induced by gravitational black holes?” he asks. “The answer, even on the theoretical side, isn’t clear. But these experiments will encourage us to consider the question more deeply.”

Entangled memory is a first

Physicists in the US are the first to store two entangled quantum states in a memory device and then retrieve the states with their entanglement intact. Their demonstration, which involves “stopping” photons in an ultracold atomic gas, could be an important step towards the practical implementation of quantum computers.

The basic unit of information in a quantum computer is the qubit, which can take the value 0, 1 or—unlike a classical bit — a superposition of 0 and 1 together. A photon could be used as a qubit, for example, with its “up” and “down” polarization states representing 0 or 1.

If many of these qubits are combined or “entangled” together in a quantum computer, they could be processed simultaneously and allow the device to work exponentially faster than its classical counterpart for certain operations. Entanglement could also play an important role in the secure transmission of information because the act of interception would destroy entanglement and reveal the presence of an eavesdropper.

Fragile states

However, this fragile nature of entanglement has so far prevented physicists from making practical quantum information systems.

Now, a team of physicists at the California Institute of Technology led by Jeff Kimble has taken a step towards this goal by working out a way to store two entangled photon states in separate regions of an extremely cold gas of caesium atoms (Nature 452 67).

The entangled states are made by firing a single photon at a beam splitter, in which half the light is deflected left into one beam, and the other half deflected right to form a second beam. These two beams are parallel, separated by about 1 mm and contain a pair of entangled photon states—one state in the left beam and the other in the right.

Slow-moving hologram

Once inside the cloud, a hologram-like imprint of the two entangled photon states on the quantum states of the atoms can be created using an effect called electromagnetically induced transparency (EIT). This imprint moves through the gas many orders of magnitude more slowly than the speed of light, effectively “stopping” the entangled states for as long as 8 µs.

EIT is initiated by a control laser that is shone through the gas to create the holograms. When the laser is switched off, the photon states vanish, leaving the holograms behind; when it is switched back on, the entangled photon states are recreated from the holograms. The storage time can be changed simply by varying the time that the laser is off.

When the recreated entangled photon states leave the atomic gas, one of the states passes through a device that shifts its phase, while the other does not. The two states then recombine at a detector. If the states remain entangled, adjusting their relative phase would create a series of bright and dark quantum interference effects at the detector.
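The interference test can be sketched numerically. A single photon shared between two paths and recombined with a relative phase φ between them is detected at one output with probability cos²(φ/2); this toy calculation (not the team's analysis code) shows the expected bright and dark fringes:

```python
import cmath
import math

def detection_probability(phi):
    """Probability at one output port after recombining the two paths,
    with a relative phase phi between them; equals cos^2(phi/2)."""
    amplitude = (1 + cmath.exp(1j * phi)) / 2
    return abs(amplitude) ** 2

for phi in (0.0, math.pi / 2, math.pi):
    print(f"phi = {phi:.2f}: P = {detection_probability(phi):.3f}")
```

High-visibility fringes as the phase is swept are the signature that entanglement survived storage; if entanglement were lost, the fringes would wash out.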

20% entangled

By repeating the experiment for a large number of single photons and measuring the interference intensity, the team concluded that about 20% of the entangled photon states were recovered from the atomic gas. While this might seem like a poor success rate, it is good by quantum-computing standards where entanglement efficiencies of 1-2% are common.

According to Lene Hau of Harvard University, who pioneered EIT, the Caltech technique could be improved by cooling the atomic gas below the current 125 mK to create a Bose-Einstein condensate in which all the atoms are in a single coherent quantum state.

Sharing secrets

Hau also believes that the Caltech memory device could be modified to store the entangled states in two different atomic gases. This, she says, would allow quantum keys to be shared securely between users of a quantum encryption system.

The storage and retrieval of individual photons in an atomic gas was first demonstrated in 2005 by two independent groups—one led by Hau and the other working at the Georgia Institute of Technology. The ability to store photons without destroying entanglement is crucial for the transport of single photons over large distances, where the memories would work as quantum repeaters that boost the optical signal without destroying its quantum nature.

Gravity-test constrains new forces

Physicists in the US have used a tabletop experiment to rule out the existence of strong, gravitational-like forces at short length scales. Such forces, which could hint at additional space–time dimensions or weird new particles, would cause Newton’s inverse square law of gravity to break down. By directly measuring the gravitational force on a micromechanical cantilever, however, Andrew Geraci and co-workers at Stanford University have found no evidence for such effects down to a distance of about 10 μm.


The result represents a small reduction in the amount of “wiggle-room” available in theories that attempt to incorporate gravity with the other three forces of nature, in particular string theory. “These are the most stringent constraints on non-Newtonian forces to date at this length scale,” says Geraci, who is currently based at the National Institute of Standards and Technology in Boulder.

Mysterious force

Gravity is the most mysterious of nature’s four known forces. Because it is so weak, researchers have only in the last few years been able to test Newton’s inverse square law — which states that the gravitational force between two masses is inversely proportional to the square of their separation — down to distances of about 0.1 mm. Compare this with electromagnetism, which is some 40 orders of magnitude stronger than gravity and has been tested at subatomic scales.

On the theoretical side, gravity poses even more of a challenge. Unlike electromagnetism and the strong and weak nuclear forces, which are described by quantum field theories, gravity is described by Einstein’s general relativity — a geometric theory which reduces to Newtonian gravity in everyday situations but which breaks down at the quantum scale. The lack of experimental constraints on gravity at distances less than a millimetre has given theorists broad scope to hypothesize how gravity relates to the other three forces. Testing gravity at short length scales therefore helps put these ideas on more solid ground.

Attonewton measurement

Geraci and co-workers have used an interferometer to measure how much a 0.25 mm-long silicon cantilever loaded with a 1.5 μg test mass is displaced by a source mass located 25 μm beneath it (arXiv:0802.2350; submitted to Phys Rev D). By keeping the apparatus at cryogenic temperatures to reduce thermal noise, the team was able to measure the attonewton (10^−18 N) forces between the two masses directly. This contrasts with the precision torsion-balance experiments that have tested Newtonian gravity at sub-millimetre scales by measuring torque, namely those performed by Eric Adelberger’s group at the University of Washington (see: Exclusion zone).

As is usual when testing for departures from the inverse square law, the team then looked for evidence of a corrected “Yukawa-type” potential: V(r) = V_N(r)[1 + α e^(−r/λ)], where V_N is the Newtonian gravitational potential, α describes the relative strength of the new force and λ is its range. Although the Stanford apparatus does not allow the Newtonian gravitational interaction between the test masses (i.e. corresponding to α ~ 1) to be measured, the experiment probes large forces in the regime where α = 10^4–10^8 at length scales of about 10 μm. It is here that some rather outlandish theories suggest powerful new forces will show up.
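A short numerical sketch shows how sharply a Yukawa-type force switches off beyond its range λ; the values of α and λ below are illustrative picks from the regime quoted above, not fitted parameters:

```python
import math

def yukawa_factor(r, alpha, lam):
    """Multiplicative correction to the Newtonian potential:
    V(r) = V_N(r) * (1 + alpha * exp(-r / lam))."""
    return 1 + alpha * math.exp(-r / lam)

alpha = 1e6    # illustrative strength, within the 1e4-1e8 regime probed
lam = 10e-6    # illustrative range: 10 micrometres

for r in (10e-6, 50e-6, 200e-6):
    print(f"r = {r * 1e6:.0f} um: correction factor = {yukawa_factor(r, alpha, lam):.3g}")
```

A force a million times stronger than gravity at 10 μm is essentially invisible at 200 μm, which is why probing ever-shorter distances matters.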

Extra dimensions

A decade ago, theorists suggested that all matter and the quantum fields of the strong, weak and electromagnetic interactions (i.e. our entire material universe) are confined to a 3D “brane” that floats around in a higher dimensional space where gravitons (the supposed mediators of gravity) roam wild. The apparent weakness of gravity would therefore be an illusion to inhabitants of the brane: gravity is merely diluted as it spreads out into additional dimensions that we cannot perceive.

In accordance with Gauss’s law, each extra dimension would increase the exponent in Newton’s law: if a single extra dimension turned up at a certain length scale, then the inverse square law would become an inverse cube law at that scale, for example. Provided the extra dimensions are large enough, they could also harbour new “gauge” particles that mediate forces many orders of magnitude stronger than gravity. Unlike gravity, the range of such forces (which are of the order of a fraction of a millimetre owing to the finite masses of the particles) might be independent of the number and size of the extra dimensions, and the forces can be repulsive rather than attractive.
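This change of exponent can be illustrated with a toy crossover model (my construction, not from any of the papers discussed): with n extra dimensions compactified at radius R, the force falls as 1/r² for r ≫ R but as 1/r^(2+n) for r ≪ R. Computing the logarithmic slope numerically confirms the switch from inverse-square to inverse-cube for n = 1:

```python
import math

def toy_force(r, R=1.0, n=1):
    """Toy gravitational force with n extra dimensions of size R:
    inverse-square above R, inverse-(2+n) power below, continuous at R."""
    if r >= R:
        return 1.0 / r**2
    return R**n / r**(2 + n)

def log_slope(f, r, eps=1e-6):
    """Numerical d(ln f)/d(ln r), i.e. the local power-law exponent."""
    return (math.log(f(r * (1 + eps))) - math.log(f(r))) / math.log(1 + eps)

print(f"slope at r = 10 R:  {log_slope(toy_force, 10.0):.2f}")  # inverse square
print(f"slope at r = 0.1 R: {log_slope(toy_force, 0.1):.2f}")   # inverse cube for n = 1
```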


Similar forces can arise in string theory, which is the inspiration behind such “braneworld” scenarios and the leading contender for a quantum theory of gravity. String theory demands six extra dimensions, and the exchange of certain “light moduli” particles that determine the way these dimensions are compactified at small scales could mediate forces at least 10,000 times as strong as Newtonian gravity. These forces may be present even if the dimensions are not large but curled up at the Planck length (about 10^−33 cm), as most string theorists think is the case.

Moduli constraints

According to Geraci, it is these light moduli that the Stanford experiment has constrained most dramatically. “We have taken out a sizeable chunk of the parameter space for moduli,” he says. “Of course we cannot rule out string theory, but we are able to place meaningful bounds on the mass [which determines the range, λ] and coupling strength of moduli [which determines α] that are quite generic.” This adds to constraints set in 2003 by researchers at the University of Colorado using a planar oscillator separated by a gap of 108 μm, which operates at length scales that are intermediate between the Stanford and Washington set-ups (see: Exclusion zone).

The results represent a four-fold improvement on previous experiments performed by the same team in 2003, but the researchers are currently working on a new “rotary” experiment that has a larger interaction area for the test and source masses. With this, the team expects to constrain the parameter space in α by a further one to two orders of magnitude within a year or so, which would allow the “strange-modulus region” to be surveyed almost entirely (see: Exclusion zone).

“This is a pioneering experiment,” says Stanford theorist Savas Dimopoulos, who was one of the first to propose braneworld scenarios in 1998.

Squeezed electrons shed light on silicon

A long-standing controversy about why tiny silicon crystals emit light appears to have been settled by a clever experiment that pins down the movements of electrons in the material. The work was done by researchers in Europe and reveals that one of two distinct effects is involved in producing the light — depending on the structure of the nanocrystals. This information could help in the development of silicon-based optical devices, which have hitherto been very difficult to make.

Silicon’s wonderful electronic properties mean that just about every high-tech gadget contains devices made from the semiconductor material. However, silicon is notoriously bad when it comes to creating or processing light, which is why other semiconductors such as gallium arsenide are used in optical devices such as light-emitting diodes (LEDs) or switches for optical communications networks.

Indirect band gap

Silicon’s optical inadequacies are related to an “indirect” gap between its electron energy bands, which makes it very difficult for an electron to jump directly from the conduction band to the valence band by simply emitting one photon.

One glimmer of hope for those pursuing silicon-based optical devices is that nanometre-sized crystals of silicon are known to emit light in a process called photoluminescence — something that is not seen in larger bulk samples. But exactly how and why this process occurs in tiny bits of silicon had been a hotly contested issue since the effect was first seen in porous silicon in 1990.

Defects or confinement?

Experiments done by several groups suggest two distinct possibilities: structural defects and quantum confinement. Defects are thought to change the energy bands of the silicon, making it easier for electrons to move between energy levels by emitting light. Quantum confinement arises because the size of the nanocrystals is on par with the wavelength of the electrons — which also modifies the energy bands. Light from defects and light from quantum confinement normally looks very similar, which had made it difficult to work out which effect was responsible for photoluminescence.

Now, Manus Hayne of the University of Lancaster in the UK and colleagues in the Netherlands, Germany and Belgium have shown that when defects are present, they are responsible for nearly all of the light emission — but if there are no defects, the light is a result of quantum confinement (Nature Nanotechnology doi:10.1038/nnano.2008.7).

Squeezing electrons

To do this, the team made their photoluminescence measurements in a strong magnetic field. The field “squeezes” the electrons, confining their motion to a “magnetic length” that depends on the field strength.

Confinement is also a feature of defects, from which conduction electrons stray no further than 1 nm. By contrast, quantum confinement occurs on length scales comparable to the size of the nanocrystal — about 3–5 nm in Hayne’s experiment.

If a field (about 50 T) with a magnetic length between 1 and 3 nm is applied to the nanocrystals, electrons associated with quantum confinement will be squeezed — causing a tiny shift in the wavelength of the light emitted — but electrons associated with defects are largely unaffected.
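For reference, the textbook magnetic length is l_B = √(ħ/eB). A quick calculation (not tied to the exact fields used in the paper) shows why such measurements need pulsed magnets of tens of tesla to squeeze electrons on nanometre scales:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
e = 1.602176634e-19     # elementary charge, C

def magnetic_length(B):
    """Magnetic length l_B = sqrt(hbar / (e * B)) in metres, for B in tesla."""
    return math.sqrt(hbar / (e * B))

for B in (10, 50, 100):
    print(f"B = {B:3d} T: l_B = {magnetic_length(B) * 1e9:.2f} nm")
```

Only at fields of tens of tesla does l_B shrink to the few-nanometre scale of the nanocrystals, while remaining well above the ~1 nm extent of defect states.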

One or the other

When such a field was applied to nanocrystals that were known to have lots of defects, no shift in the light was seen — suggesting that the photoluminescence is associated with defects. The team then removed the defects by heating the nanocrystals in pure hydrogen and again applied the magnetic field. This time they saw a clear shift in the wavelength of the emitted light — proof that quantum confinement was responsible for the photoluminescence. Finally, the hydrogen was removed by heating the nanocrystals in vacuum — which brought back the defects. And sure enough, the wavelength shift vanished.

So does Hayne think that these experiments will put a stop to the controversy? “We think that using a magnetic field is a pretty definitive test, but the answer could vary from sample to sample,” he said.

According to Hayne, understanding the underlying mechanisms for light emission is crucial for researchers who are attempting to make efficient light-emitting devices from silicon. “Even from silicon nanocrystals the luminescence is not very bright, compared to nanostructures made from compound semiconductors, for example,” Hayne told optics.org. “It is very important to understand where the light is coming from if you want to improve the efficiency.”

Physicists roll out nanotube paper

Rolling a small steel cylinder across an array of carbon nanotubes (CNTs) is a quick and easy way of preparing “buckypaper” — a thin material that is an excellent conductor of both heat and electricity. Invented by physicists at Tsinghua University, China, the new technique could be used to make materials that boost the performance of high energy density supercapacitors or remove heat from computer chips.

The team’s production method really is as simple as it sounds. They begin with arrays of millions of CNTs that have been grown on a silicon substrate using a well-established technique. The arrays, which resemble a forest with all the CNTs aligned perpendicular to the silicon surface, are about 10 cm in diameter and the CNTs are about 100 µm tall.

Flattened forest

To make a piece of buckypaper, Changhong Liu and colleagues place a very thin microporous membrane on top of a CNT array and then push a steel cylinder slowly across the sample — which knocks all the CNTs over in the same direction and flattens them between the membrane and silicon substrate. Next, the membrane and buckypaper are peeled off the silicon substrate and the membrane is removed by washing the sample with ethanol — leaving just the buckypaper (Nanotechnology 19 075609).

The team claim that their method is a significant improvement over previous attempts at making buckypaper that involved filtering a liquid suspension of CNTs in a high magnetic field. Paper made in this way often has poor mechanical, thermal and electrical properties because it is difficult to ensure that the process results in a uniformly-thick material in which all the CNTs point in the same direction.

Swanning about

Liu and his colleagues report that their dry technique produces a strong and flexible film and demonstrate the claim by folding their CNT material into an origami swan.

The team also put their material to more practical use by using it to make supercapacitors — devices that can store up to 1000 times more electrical energy than standard capacitors and are often used when a large but brief surge of energy is required, such as driving the starter motor of a large engine. Supercapacitors are also being used in some prototype fuel-cell and hybrid cars to improve acceleration.

Superior supercapacitors

Buckypaper shows great promise for use as capacitor electrodes because it has a rough surface with a very large surface area — and the capacitance of a device is proportional to the surface area of its electrodes. “From our comparison measurements, our buckypaper-based supercapacitors were at least twice as good as the commercially purchased carbon fiber-based capacitors,” said Liu.

The team also found that the buckypaper was a very good conductor of heat, having a thermal conductivity of about 330 W/(m K). This is the highest value of any known CNT film and about the same as copper. “The material’s high in-plane thermal conductivity means that it can be used to transmit heat from confined areas,” said Liu. “For example, our tightly aligned buckypaper could be used to solve thermal management problems in microelectronic packaging.”
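To get a feel for what 330 W/(m K) means in practice, Fourier's law q = kAΔT/L gives the steady-state heat carried along a thin strip; the strip dimensions below are hypothetical, chosen purely for illustration:

```python
def heat_flow(k, area, delta_T, length):
    """Steady-state conduction along a strip, Fourier's law: q = k * A * dT / L (watts)."""
    return k * area * delta_T / length

k_bucky = 330.0       # in-plane thermal conductivity of the buckypaper, W/(m K)
k_copper = 400.0      # typical value for copper, W/(m K)

# Hypothetical strip: 10 mm wide, 50 um thick, 10 mm long, 10 K temperature drop
area = 10e-3 * 50e-6  # cross-section, m^2
q_bucky = heat_flow(k_bucky, area, 10.0, 10e-3)
q_copper = heat_flow(k_copper, area, 10.0, 10e-3)
print(f"buckypaper: {q_bucky:.3f} W, copper: {q_copper:.3f} W")
```

A film only 50 µm thick carries heat nearly as well as an equally thin sheet of copper, which is what makes it attractive as a flexible heat spreader.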

A brief history of Hawking

One could say that Stephen Hawking is the epitome of the general public’s view of a scientist — someone who dedicates their life to science with a blind determination to unravel the mysteries of the cosmos. Struck down by motor neurone disease in his early twenties while doing his PhD, Hawking is now almost totally paralysed and can only communicate via his synthesized voice box. Yet it is this oracle-like voice and his dogged determination to understand our universe that have helped him become one of the most recognisable physicists alive today — even if he is not the latter-day Newton or Einstein the media likes to suggest.

Hawking can now communicate only via a single cheek muscle, which he flexes in response to characters on a screen, allowing him to type out sentences. Even though it takes him a minute to type three words, Hawking remarkably still undertakes a full week of teaching and research. Indeed, he still has four PhD students. The two-part television series Stephen Hawking: Master of the Universe, to be broadcast on 3 and 10 March on the UK’s Channel 4, looks at the life and work of Hawking, from his two failed marriages to his work on black holes and the beginnings of the universe. The series, like Hawking himself, doesn’t shy away from getting stuck into the biggest topics in physics, from string theory to colliding branes.

The first episode looks at Hawking’s early life, and his quest to unify quantum mechanics with general relativity through his work on black holes. It also delves into his first marriage to Jane Wilde — then a language student — and the eventual strains that occurred due to his work and the fame that was brought by his book A Brief History of Time, published in 1988.

The first instalment does a good job of explaining the concept of Hawking radiation, which arises from the creation of negative- and positive-energy particle pairs near the event horizon of a black hole. “It is one of the greatest papers of the 20th century,” is how Andy Strominger from Harvard University describes Hawking’s 1975 paper on particle creation by black holes.

I felt somewhat uneasy with the constant reference to God

The mind of God

Hawking appears throughout the two programmes, but I felt somewhat uneasy with the constant reference to God running throughout the first part, which starts with Hawking’s famous suggestion that he “wanted to know the mind of God” and continues with him questioning whether we need a god at all. As if to emphasize the link to God, heavenly-sounding choirs pipe up whenever we see old footage of Hawking wheeling himself around Cambridge or giving seminars.

Both episodes feature cameo appearances from various physicists describing physical concepts. Media darling Michio Kaku from the City College of New York is wheeled in to describe difficult topics such as supersymmetry, quantum mechanics and string theory, all with the help of props from a local fairground. String theorist Lisa Randall from Harvard University also appears, describing the concepts of extra dimensions using, bizarrely, calorie-filled doughnuts in a coffee shop.

Stephen Hawking

The second part focuses more on current issues in physics and has less about Hawking’s own work. You start to get a sense that a new generation of physicists has taken up his mantle in the quest to unify the four fundamental forces. Indeed, the programme almost turns into an episode of Michael Green: Master of String Theory when it describes how Green, another theorist at Cambridge who, together with John Schwarz, came up with the idea of superstring theory, believes it is possibly the best way of reconciling gravity with quantum mechanics. The viewer is left not knowing whether Green and Hawking are competitors or collaborators. Hawking, however, gives a rather subdued response to string theory: “If string theory is correct,” he says, “help may be on hand from extra dimensions [to unify the four forces].”

If viewers haven’t had enough of trying to understand and visualize 11 dimensions (helped in part by Kaku and a fishpond), the final 20 minutes of the second instalment go into the world of colliding branes and the spontaneous creation of universes, territory Hawking explored in his “no boundary” proposal for the start of the universe.

It gives a sense that time has run out for Hawking

Stubborn drive

With all the difficulties that Hawking has experienced, you may wonder what would have been possible had he remained fully able-bodied. But perhaps that is the point: that his drive to understand the universe was spurred on by his disability and his stubborn refusal to let it get in the way. Even so, he never gives the impression that he is frustrated with the cards life has dealt him.

Overall, the documentary explains Hawking’s theories well enough for the layperson to grasp. But it also gives a sense that time has now run out for Hawking and his quest to formulate a theory of everything. Even Hawking, who is now 66, hints at this: “I would have liked to have done more,” he concedes, “in particular to have found a complete theory of quantum gravity and the early universe.” Ever the jester, he points out wryly: “But that wouldn’t have left much for anyone else to do.”

The best years of your life?

If you are coming to the end of your undergraduate physics degree, the chances are that you have considered — at least for a brief moment — staying on at university to do a PhD. Careers advisors will give you a host of reasons to take this path: better long-term career prospects; the chance to get paid to do something that you are really interested in; and more time to decide what career path you eventually want to take. But are PhDs really all they are cracked up to be, and what is life actually like for physics graduates who take this option?

The postgraduate experience largely depends on where and what you choose to study — and how well those choices suit you. A PhD usually takes three to four years to complete and at least three of these years will be spent doing hands-on research, so it is important to pick a topic that really interests you, be it medical physics, particle physics or astronomy. You also need to decide whether you would rather spend your days taking measurements in the lab or doing computer modelling or theory; or whether you want to work in a small university department or at a large facility such as CERN. “Being a scientist is hard work, and the requirement is a genuine interest,” says Erik Olofsson, a first-year PhD student at the Royal Institute of Technology in Stockholm, Sweden, whose research involves modelling magnetohydrodynamic systems such as plasmas.

Equally important is that you end up in a location that you like, and that you are among people whom you get on with. “The worst part of my PhD was actually the weather,” says Bo Jayatilaka, who recently completed a PhD at the University of Michigan that involved measuring the mass of the top quark. “By choosing to work in hadron-collider physics, I was committing myself to living in the Chicago area — and so enduring brutal winters — for at least two years.”

Sheila Kanani, who is in the first year of an astrophysics PhD at University College London in the UK researching Saturn’s magnetosphere, believes that it is definitely worth visiting the places that you are thinking of applying to. “Talking to students already there will give you the best, unbiased, view,” she says.

Many physics students decide to embark on further study because they enjoyed their final-year undergraduate project — and indeed the option is often open to stay at the same university to do a PhD, sometimes even with the same supervisor. This has the advantages that you already know the place and at least some of the people whom you will be working with. Louise Wheatland, who is in the second year of a PhD with Lancaster University’s ultra-low-temperature group, chose to go down that path. “During my undergraduate physics degree at Lancaster I did some low-temperature work, which I really enjoyed,” she explains. “I wanted to stay in the city, and if I hadn’t got a place, I probably would have got a job for a year and then tried to reapply.”

Broadening your horizons is no bad thing, however, when it comes to finding the perfect PhD project, and there is no reason why you cannot apply to any university worldwide that has a research group in your area of interest. Indeed, a PhD offers a great opportunity to spend a few years living in a different country, experiencing a new culture and improving your language skills. Going abroad also means that you have a much wider variety of research groups and potential supervisors to choose from.

Subject matters

Having a research topic in mind is absolutely essential when applying for PhD positions in the UK and elsewhere in Europe, since you will usually begin working on your chosen research problem straight away. In the US, however, PhD students spend two years doing coursework and exams in all areas of physics and only then begin proper research.

“Most physics students in the US start their PhDs without a specific research field in mind,” says Jayatilaka. This adds at least an extra year to the process, but it makes the US a good option for those who want to learn a bit more physics before choosing an area to specialize in, or for students who want to undertake a PhD project in an area that they do not have much experience in.

As a PhD student, how you spend your days depends mostly on the nature of your project. Researchers in some theoretical fields spend much of their time working alone, but many PhD students — particularly those doing experimental projects — say that the best thing about their work is how social it is. “In the ultra-low-temperature group we all work together on the experiments, so it’s a nice little community,” says Wheatland.

According to Jayatilaka, teamwork is also at the heart of high-energy physics. “The collaborative and largely social nature of the field was an eye-opening experience for me,” he says. “Getting to live at the facility where the research is done and interact with dozens if not hundreds of people on a regular basis was fantastic.”

Some experiences are common across all locations and fields, however. All PhD students are usually expected to help teach undergraduates by taking tutorials and being lab demonstrators — when they are not being confused with them, that is. “I still get mistaken for a fresher every year and asked if I’m lost, and I’ve even been mistaken for an undergraduate in classes where I’m a teaching assistant,” comments Karina Williams, who is in the fourth year of a particle-physics PhD at the University of Durham in the UK. Although perhaps not the most exciting aspect of doing a PhD, teaching develops valuable communication skills and is a good way to earn extra money, with rates starting from about £10 per hour. “I’m quite a shy person so I’m not too happy about having to do teaching, but it is good personal training,” says Olofsson.

The high life

Money is no longer the problem for science PhD students that it once was, in the UK at least. Most projects will come with a tax-free allowance of at least £12,000 per year. Once tax is taken into consideration, this is nearly as much as many new graduates in the UK earn during their first few years in work. One of the best perks, however, is the opportunity for travel. PhD students are expected to attend lectures, workshops and conferences in their field, many of which take place away from home. “I have only had one full week in the lab so far — all the other weeks I’ve spent at least one day at a lecture or conference in different parts of the country,” says Kanani. “And next year I should get to go to Germany or the US for conferences.” Olofsson agrees: “In the six months since I started my PhD I’ve been to Oxford, Warsaw, and New York.”

If the postgraduate life sounds like the right choice for you, you can look at available projects on www.findaphd.com and among the recruitment adverts published by magazines like Physics World. You might also want to visit www.phdcomics.com, a US site featuring a regularly updated comic strip about life as a postgraduate student, which is a good source of light-hearted information about the problems and pitfalls encountered by PhD students.

“The comic strips often mirror exactly what’s happening to me and my office mates at that moment, for example fighting over office space or getting scooped,” says Williams. She also offers this advice to potential new PhD students: “The things that will help you get through the tough bits are your friends and co-workers, or possibly lots of cake. I’m especially lucky as my fellow PhD students are very good at baking cakes.”

Copyright © 2025 by IOP Publishing Ltd and individual contributors