
Graphene could be a perfect absorber of light

Physicists in Spain and the UK have calculated that graphene – a layer of carbon just one atom thick – could be used to create a perfect absorber of light if it is doped and patterned into a periodic array. The work could lead to improved light-detection devices, particularly in the infrared part of the electromagnetic spectrum, where current technologies struggle to function.

The claim is extraordinary because conventional materials normally need to be thousands of atoms thick to absorb light completely. “The prediction that a material layer only one atom thick can completely absorb light is remarkable and exciting,” says team leader F Javier García de Abajo of the Institute of Optics in Madrid.

“The layer in question is graphene patterned into a periodic array of nanodiscs,” explains García de Abajo. The structure absorbs light by confining it to regions that are hundreds of times smaller than the wavelength of the light. This is done by exploiting plasmons that occur within the individual nanodisc structures. Plasmons are quantized collective oscillations of the electrons within a nanodisc – and they interact strongly with light.

Doping with electrodes

Light confinement in graphene is possible only if the material is electrically charged, and the wavelength at which light can be confined and absorbed depends on how strongly it is charged. This charging is also known as doping, because it has an effect similar to introducing impurities into a conventional semiconductor, and it is easily achieved by placing electrodes near the graphene. The amount of charge can then be controlled by varying the voltage applied to the electrodes.
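As a rough guide to how this gate control works – the scaling below is a standard result from graphene plasmonics, not a formula quoted from the paper itself – the Fermi energy set by the gate-induced carrier density $n$ fixes the plasmon resonance of a disc of diameter $D$:

$$ E_F = \hbar v_F \sqrt{\pi n}, \qquad \omega_{\mathrm{res}} \propto \sqrt{\frac{E_F}{D}}, $$

where $v_F \approx 10^6\ \mathrm{m\,s^{-1}}$ is graphene’s Fermi velocity. Increasing the gate voltage raises $n$ and hence $E_F$, shifting the plasmon resonance – and with it the absorption peak – to shorter wavelengths.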

In their calculations, the team studied how the patterned graphene absorbed light in the near- to mid-infrared range of the electromagnetic spectrum. The researchers say that it would be easy to extend their results to longer wavelengths – further into the mid-infrared and towards the terahertz regime, for example – by directly applying the analytical equations that they employed. “All of these spectral regions are especially interesting, with potential applications in imaging, sensing and detection,” says García de Abajo. “We are in need of good light-absorbing devices in this range of wavelengths because existing detectors perform poorly here. Our work may even provide a way to bridge this infamous ‘terahertz gap’.”

Separation is just right

The researchers say that the nanodiscs can absorb so much light because the individual graphene structures are arranged at a well defined distance from one another. If the discs are too close together, the light is re-emitted and reflected rather than absorbed; if they are too far apart, it is not absorbed efficiently. A similar effect can also be obtained with other graphene patterns, notably ribbons, which the researchers say are easier to dope.

The light also produces induced fields near the nanodiscs. These fields are made up of evanescent waves – electromagnetic waves that decay exponentially away from a structure. “The mechanism is therefore not a diffraction effect in the classical wave sense in which two or more propagating waves interfere and form patterns limited in size to about half the wavelength of light,” explains García de Abajo. “It instead involves critical coupling.”
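Critical coupling has a compact expression in temporal coupled-mode theory. The form below is the standard textbook result for a single resonant mode fed by one radiation channel (for instance, an array backed by a mirror); it is offered as context rather than taken from the paper:

$$ A(\omega) = \frac{4\,\gamma_{\mathrm{rad}}\,\gamma_{\mathrm{abs}}}{(\omega-\omega_0)^2 + (\gamma_{\mathrm{rad}}+\gamma_{\mathrm{abs}})^2}, $$

where $\omega_0$ is the plasmon resonance frequency, $\gamma_{\mathrm{rad}}$ is the rate at which the mode re-radiates light and $\gamma_{\mathrm{abs}}$ is the rate at which it dissipates energy in the graphene. On resonance the absorbance reaches 100% precisely when $\gamma_{\mathrm{rad}} = \gamma_{\mathrm{abs}}$ – the critical-coupling condition; for a free-standing array open on both sides, the same argument caps single-pass absorption at 50%.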

The team, which includes scientists from ICFO in Barcelona and the Optoelectronics Research Centre at the University of Southampton, now plans to explore other extraordinary optical effects in graphene – possibly down to the quantum limit with studies on the effects of single photons. “We also hope to analyse alternative materials, such as topological insulators, that might produce similar effects,” reveals García de Abajo.

The work is described in Phys. Rev. Lett. 108 047401.

Will the scientific paper always be the gold standard for sharing new results?

By James Dacey


A new report released earlier this week concluded that physical scientists use and access information in very different ways depending on the precise field they work in. Based on interviews and focus groups with a range of physical scientists, Collaborative Yet Independent reports that researchers have started to use online tools such as social networking sites in relation to their work. It found, however, that when it comes to disseminating new scientific results, publication in a traditional scientific journal remains the “gold standard” for researchers.

We want to know whether you think this will remain the case looking to the future of science. In this week’s Facebook poll we are asking the question:

Do you believe that researchers will always view the scientific paper as the gold standard for sharing new results?

Yes
No, it will be replaced by other forms of communication

To cast your vote, please visit our Facebook page. And, as always, please feel free to explain your response by posting a comment on the poll.

In last week’s poll you may have clocked that we addressed the timely issue of timekeeping. It was the topic of the hour because last Thursday delegates at a meeting of the International Telecommunication Union in Geneva were debating whether or not we should scrap the “leap second”. This is a second that is added to (or, in principle, taken away from) Coordinated Universal Time (UTC) every few years to take account of the slight speeding up or slowing down of the Earth’s rotation.

Since the first leap second was inserted in 1972, people have deliberated over whether this is the most effective way of dealing with time. Some have suggested swapping the leap second for the addition of a larger chunk of time after a longer period – such as a leap hour roughly every millennium. Others have suggested abandoning astronomical time altogether and defining time purely by atomic clocks. Doing so would decouple time from the Earth’s rotation, so that traditional night hours would gradually become day hours and, over millions of years, the seasons would shift away from their traditional months.

We asked for your opinion on this issue and 72% of respondents believe that we should define time using an atomic clock. The remaining 28% would prefer to maintain our connection with the heavens by keeping astronomical time.

One commenter, Robert Minchin, believes that we should keep the leap second to save a stitch in time. “Getting rid of them would simply be storing up problems for the future, when a larger leap-something will need to be introduced before the night becomes the day,” he wrote. Another respondent, who goes by the name of Strum Cat, feels strongly that we should ditch astronomical time. He wrote: “Are you kidding? Defining time by the rotation of Earth is fine for getting to work on time, but useless for precise science.”

It appears, however, that the debate is set to continue for some time yet. Last Thursday – after our poll went live – officials at the ITU announced that they have sent the issue back to a panel of experts for further assessment. They say a revised proposal will be introduced no earlier than 2015.

Thank you for all of your votes and comments, and we look forward to hearing from you again in this week’s poll.

What is the scientific method?


By Hamish Johnston

Anyone who has trained as a scientist has learned about the “scientific method” – but the concept remains ill-defined and its origins are a topic of debate among philosophers and historians.

In this week’s instalment of In Our Time on BBC Radio 4, Melvyn Bragg and his cabal of intellectuals discuss the role of the English polymath Francis Bacon (1561–1626) in the development of the method. Through writings such as Novum Organum Scientiarum, Bacon championed the use of inductive reasoning in science. Indeed, Bacon had a very important influence on a later generation of scientists who founded the Royal Society in 1660.

Another character associated with the development of the scientific method is Isaac Newton. According to the historian Simon Schaffer of Cambridge University, Newton first developed his rules of scientific enquiry to study a very non-scientific subject: the Bible’s Book of Revelation. Newton then further developed his ideas by applying them to what we would now think of as science.

Rounding off Bragg’s panel are the philosophers John Worrall of the London School of Economics and Michela Massimi of University College London. The quartet go on to discuss how Charles Darwin’s 1859 On the Origin of Species was first received by Victorian scientists. Not very well, it seems – Darwin’s arguments appeared to fly in the face of the scientific method because the processes of evolution could not be observed in laboratory experiments.

The team also looks at how the overthrow of Newtonian physics in the early 20th century by relativity and quantum mechanics led to a rethinking of the scientific method. Leading the way were the philosopher Karl Popper, with his idea of falsifiability, and Thomas Kuhn, with his theory of paradigm shifts.

You can listen to the programme here.

Parting the clouds

A few decades ago, concerns about climate change focused on the “nuclear winter” scenario that scientists feared would result from a full-scale exchange of nuclear weapons. Now that this has, thankfully, become less likely, physical scientists (and others) have turned their attention to a less dramatic but still important aspect of climate change: the extent to which rising global temperatures are natural, or induced by increases in CO2 and other gases in the atmosphere.

What a difficult area. As someone who has worked on a variety of topics in particle physics and astrophysics, I well know that these are easier subjects to study, and their ramifications are less important (at least at present) than the behaviour of the atmosphere. Moreover, insofar as everyone is “affected by the weather”, most of us have opinions on what is going to happen to the climate in the future, and on whether we can (or should) modify it. The difficulty of the subject plus its widespread relevance has led to a polarization of opinion and great contention. In this respect, I am reminded of the “Big Bang versus continuous creation” arguments in cosmology half a century ago – except that the present subject is far, far more important.

We all agree that the climate is changing, and nearly everyone will concede that it is getting warmer. However, there remains a dichotomy of beliefs among members of the public as to whether this is a natural or man-made phenomenon, despite the great efforts of the Intergovernmental Panel on Climate Change (IPCC) to convince us of the latter. There are even a few eminent scientists, physicists included, who still “know” – I repeat, “know” – that the warming is natural. These scientists have not worked in the field themselves, nor have they read the literature in a comprehensive way, but they know the answer nevertheless. Remarkable!

So there is clearly education yet to be done, even in the scientific community, and a number of publications have attempted to address this need. The “bibles” in the field are the IPCC’s own famous reports, which summarize the research results of many climatologists and related scientists, most of whom have impressive pedigrees. My own view is that, apart from some rather muddled thinking over the question of probabilities – always a thorny issue – they are very good. However, they are inevitably rather technical, and this is where books such as Richard Alley’s Earth: the Operator’s Manual come in.

Alley’s book is the companion to a TV documentary that was broadcast in 2011 on US public television. He is a recognized world leader in the science of global warming, being a professor of geosciences at Pennsylvania State University and a member of the US National Academy of Sciences (NAS). Thus, he should know what he is writing about. Nevertheless, having ploughed through Earth, my first reaction was that it is a good example of how to cram 100 pages of information into an almost-500-page book.

This is perhaps not entirely fair, because the multitude of by-ways along which we pass are undoubtedly interesting. Almost every aspect of climate change is covered, from the fact that we are burning fossil fuels at a rate some million times higher than the rate at which nature saved them, through the intricacies of climate models, to the dire consequences of “doing nothing”. The pluses and minuses of possible techniques for alleviating warming are also well thought out; Alley covers carbon sequestration along with several renewable-energy sources, not forgetting nuclear power. He does baulk at making a specific recommendation, offering instead the standard “more research is needed” formula. He is probably right to do so.

Yet despite the book’s comprehensive coverage of all the facets of climate change, I must say that Alley’s descriptions of such incidental topics as the activities of the NAS and the manifold characteristics of Cape Cod, Massachusetts – not to mention the problem of disposing of human waste in 1750s Edinburgh – get a bit wearing. No doubt the last two were put in to encourage TV viewers seeking some sort of “human angle”. Perhaps we should be grateful that Alley’s carbon footprint is smaller than those of some British TV scientist-personalities, who seem to need to demonstrate simple physics from the tops of mountains, or simple biology from distant jungles.

Alongside these unnecessary additions, the book has, in my opinion, one big omission: population growth. Alley gears his discussion of our future power needs to a projected population of 10 billion, but surely efforts to limit population growth are at least as important as those to stem global warming per se. Indeed, the two are related. If a fraction of the effort currently being put into reducing global warming were instead applied to techniques for reducing human fertility (in a humane way), it would surely increase the sum total of human happiness.

Returning to the book, mention must be made of its illustrations. Most are interesting and relevant, but there are a few, such as “Dangers of the whale fishery” and “Greenland’s musk oxen”, that seem not to add much to the topic under study. The biggest disappointment with the illustrations, though, is their dull sepia tone. Surely colour could have been afforded? As to readership, this will be a good coffee-table book for many, particularly those who enjoyed the TV documentary. However, I think that most physicists would prefer a slimmed-down version containing the important messages that Alley, as an expert geoscientist, has to offer. A “Noddy” version for the doubters would be useful, too.

Online tools are ‘distraction’ for science

Physical scientists use and access information in very different ways depending on the precise field they work in, according to a report released today by the UK’s Research Information Network, the Royal Astronomical Society and the Institute of Physics, which publishes physicsworld.com. Google Scholar, for example, is used by 73% of Earth scientists and by 70% of nanoscientists to discover new research findings, but by just 13% of particle physicists and 7% of astrophysicists. Meanwhile, whereas all chemists and Earth scientists surveyed say they read online journals, only 38% of particle physicists do so, largely preferring preprint servers such as arXiv.

Entitled Collaborative Yet Independent, the new report is based on interviews with 51 researchers and focus-group sessions with 35 participants in seven different fields. It reveals that although physical scientists have led the way in using computers to analyse data, they are still fairly conservative when it comes to adopting new communications technologies, with formal publication in traditional journals “remaining the gold standard” for disseminating findings. Indeed, the report says that talking to peers and experts seems likely to remain one of the most important ways for such researchers to learn about new results.

Few physical scientists use blogs, Twitter, Open Notebook Science, social networks, public wikis or other “public-facing” technologies to share research information, the report finds, although some particle physicists and astrophysicists use internal, private wikis. Most physical scientists view these services as “distractions” from their communications with key colleagues – the only exception being researchers involved in “citizen-science” projects such as Galaxy Zoo, which rely on close collaboration with members of the public. Indeed, three-quarters of particle physicists still use e-mail lists to find new information.

Another issue the report highlights is the unwillingness of physical scientists to reference scientific databases, despite such scientific data increasing in volume and becoming ever easier to access. “There is little agreement on how to cite databases, or otherwise assign credit to the scientists and technicians responsible for the creation and maintenance of databases,” the report says. But finding ways to assign credit is important, it adds, because otherwise “those responsible for creating data have fewer career incentives to engage in such efforts”.

Monica Bulger from the Oxford Internet Institute, who co-authored the report, says that one problem with getting scientists to change how they access information is that they tend to have picked up the “tools” of their field in the lab as graduate students and later only learn new techniques in response to the needs of a particular project. “I wouldn’t classify senior figures as frowning on new techniques and tools, but they may be less likely to experiment for the sake of trying something new,” she says. Bulger was also surprised by the “fragility” of scientific research, which depends on “shrinking sources of funding and the use of often outdated computing systems due to lack of financial support”.

Plasmonic metamaterials could make ‘gecko toes’

A material that promises to stick to smooth surfaces and then release on demand has been designed by scientists in the UK. The plasmonic metamaterial has yet to be built and tested in the lab, but if successful, it could be used to create artificial “gecko toes” that mimic those used by the lizards to walk up smooth walls.

The technique works by shining light on a conducting material to create plasmons – quantized oscillations of electron density that occur in a conductor. Creating plasmons causes some regions of the conductor to become slightly positively charged and others to become negatively charged. If another surface is brought near enough to the material, there will be an attractive force drawing the two together. However, if the conductor is simply a flat sheet of metal film, the resulting force is tiny and not very useful.

Plasmons also occur in artificial metamaterials, which are made of tiny structures with specific electromagnetic properties. Such metamaterials can be designed to resonate at certain plasmon frequencies, which allows the plasmons to hold much more energy.

Greater attraction

Nikolay Zheludev and colleagues at the University of Southampton suspected that the attractive force produced by such a plasmonic metamaterial would be much greater than that of a flat conductor, so they constructed a computer model to test their ideas.

The team modelled a 2D plasmonic metamaterial comprising a thin film of evenly spaced gold structures that act as resonators. This is a common type of metamaterial that is simple to fabricate and has properties that are well understood. The researchers simulated the behaviour of the metamaterial when it is a few tens of nanometres from the surface of a conventional dielectric material. In particular, they looked at the forces that would result when the system is illuminated with light at or near the metamaterial’s resonant wavelength of 1370 nm.

If the light is fired from behind the metamaterial, so that it travels through it towards the interface between the two surfaces, the simulation shows that the two materials would feel a net attractive force drawing them together. However, the strength of the attraction surprised the researchers – it is far stronger than can be explained by radiation pressure alone pushing the metamaterial towards the other surface.

Trapped energy

“When you illuminate such a metamaterial structure with light, a lot of electromagnetic energy is trapped in the vicinity of the structure,” explains Zheludev. “We are talking about a distance that is less than the wavelength of light, so the density of trapped electromagnetic energy can become very high; and when you bring this structure close to the surface of something else, this trapped energy is affected by the presence of this other material. And in physics, a change in energy is always associated with a force.”

More interestingly, if the light first comes through the conventional dielectric material, the radiation pressure tends to push the metamaterial away. Nevertheless, the simulation predicts that, if the metamaterial were illuminated at or near its resonant frequency, the attraction caused by the optical adhesion force would be greater than the repulsion from the radiation pressure.

In either illumination scenario, the team calculates that the net force holding the metamaterial to the surface would be sufficient to overcome the downward pull of gravity on the metamaterial. As such, the metamaterial could in principle mimic a gecko’s toe – which can stick to a smooth surface when required and detach on command to allow the lizard to take a step. With the metamaterial, this could be achieved by simply turning the light source on and off.
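To get a feel for the orders of magnitude involved, the short sketch below compares the bare radiation-pressure force on a thin gold film with the film’s own weight. All of the input numbers (intensity, thickness) are illustrative assumptions, not values from the Southampton simulations, and the resonantly enhanced adhesion force described above would be larger still.

```python
# Back-of-the-envelope check: radiation pressure on a thin metamaterial film
# versus its weight per unit area. All inputs are illustrative assumptions,
# not numbers from the Southampton study.

c = 3.0e8           # speed of light (m/s)
g = 9.81            # gravitational acceleration (m/s^2)
rho_gold = 19300.0  # density of gold (kg/m^3)

intensity = 1.0e7   # assumed illumination intensity (W/m^2), i.e. ~1 kW/cm^2
thickness = 50e-9   # assumed film thickness (m)

# Radiation pressure is I/c for fully absorbed light (2I/c for perfect reflection)
radiation_pressure = intensity / c           # N/m^2
weight_per_area = rho_gold * thickness * g   # N/m^2

print(f"radiation pressure      : {radiation_pressure:.2e} N/m^2")
print(f"film weight per area    : {weight_per_area:.2e} N/m^2")
print(f"pressure / weight ratio : {radiation_pressure / weight_per_area:.1f}")
```

Even this non-resonant estimate comes out of the same order as the film’s weight, which is why a resonantly enhanced optical force can plausibly hold the structure against gravity.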

Remarkable study

“It is certainly an interesting and quite remarkable study,” says metamaterials expert Ortwin Hess of Imperial College London. “And I think it’s good to look into the very large opportunities that one has for structuring on the sub-wavelength scale – nanoscale in the case of visible light, or microscale in this case with infrared – and how these substructured materials can lead to completely new effects that are quite surprising.”

Before gecko-toe metamaterials can become useful, Zheludev says that we must find cheaper, more efficient ways to make plasmonic metamaterials on industrial scales. However, he is optimistic that such technical obstacles can be overcome with new innovations. In the short term, his team is working on achieving a “convincing demonstration and measurement of the effect in a laboratory system”.

A preprint about the research is available on arXiv.

Isotopic purity boosts graphene’s heat conduction

The thermal conductivity of graphene strongly depends on the material’s isotopic composition. So say researchers in the US and China, who have shown that graphene made from pure carbon-12 has a much higher thermal conductivity than normal graphene – which contains about 1% carbon-13. As well as helping to develop an accurate theory of heat conduction in 2D materials, the result means that isotopically pure graphene could be ideal for cooling tiny components in electronic circuits.

Graphene is a 2D sheet of carbon just one atom thick that has a range of unique electronic and mechanical properties. It is a zero-gap semiconductor that is often touted as a replacement for silicon as the material of choice for future electronics, thanks in part to its exceptional ability to conduct heat. As electronic devices become ever smaller, local heating becomes more of a problem, and silicon in particular suffers in this respect. Graphene’s much higher thermal conductivity means that it can remove this waste heat far more efficiently than silicon.

Graphene isotopes

Two stable isotopes of carbon occur in nature – carbon-12, which makes up about 99% of naturally occurring carbon, and carbon-13, which makes up about 1%. These concentrations are also found in the graphene made and studied in labs. Now, two teams, one led by Rodney Ruoff of the University of Texas and the other by Alexander Balandin at the University of California, Riverside, have discovered that removing the carbon-13 from normal graphene changes the isotopic make-up of the crystal lattice and significantly increases the material’s thermal conductivity.

Heat travels through crystalline materials such as graphene by way of lattice vibrations called phonons. Atoms with different masses scatter phonons differently, and therefore thermal-conductivity studies of graphene with varying isotopic compositions should help physicists gain a better understanding of how atomic mass affects the transport of heat. “Our result will help develop an accurate theory of heat conduction in graphene and other 2D crystals,” explains Balandin. “This isotope scattering is easier to describe theoretically than the scattering caused by impurity atoms in a sample, which not only differ by mass but also by size and many other parameters,” he adds.

Twice the conductivity

Using an optothermal laser Raman technique, originally developed in Balandin’s lab and subsequently modified by Ruoff’s group, the researchers found that the thermal conductivity of isotopically pure carbon-12 graphene (containing just 0.01% carbon-13) is higher than 4000 W m⁻¹ K⁻¹ at a temperature of 320 K, while graphene with 1% carbon-13 has a conductivity of 2500 W m⁻¹ K⁻¹. The thermal conductivity drops to about 2000 W m⁻¹ K⁻¹ in graphene sheets made up of half carbon-12 and half carbon-13. In comparison, bulk copper, which is widely used to cool computer chips, has a thermal conductivity of about 400 W m⁻¹ K⁻¹.
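To see what these numbers mean in practice, the sketch below applies Fourier’s law, q = kAΔT/L, to a strip of material carrying heat away from a hotspot. The geometry and temperature difference are assumed values chosen purely for illustration; only the conductivities come from the figures quoted above.

```python
# Illustrative steady-state heat flow through a narrow strip, via Fourier's law
# q = k * A * dT / L. Geometry and temperature drop are assumed values; the
# conductivities are those quoted in the article.

def heat_flow(k, area, dT, length):
    """Heat flow (W) through a strip of cross-section `area` (m^2) and length
    `length` (m), with temperature difference `dT` (K) and conductivity `k`."""
    return k * area * dT / length

area = 10e-6 * 0.335e-9   # 10-um-wide strip, one atomic layer (~0.335 nm) thick
length = 10e-6            # 10 um between hotspot and heat sink
dT = 10.0                 # 10 K temperature difference

for label, k in [("isotopically pure graphene", 4000.0),
                 ("natural graphene (1% C-13)", 2500.0),
                 ("50/50 C-12 / C-13 graphene", 2000.0),
                 ("bulk copper (same geometry)", 400.0)]:
    print(f"{label:30s}: {heat_flow(k, area, dT, length) * 1e6:6.2f} microwatts")
```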

The graphene samples studied were made using large-area chemical vapour deposition, which allowed the researchers to create regions of film with different ratios of carbon-12 to carbon-13. This meant that areas with differing isotope ratios could be studied in the same experimental run.

When an object is illuminated with a laser beam, part of the incoming energy is reflected by the solid, part is transmitted through it and the rest is absorbed by the material. The researchers were interested in the fraction of energy absorbed because it heats up the material. Raman-scattering signals can correspond to the emission or to the absorption of a phonon, and the ratio of these two signals can be used to determine the total number of phonons, which, in turn, gives the lattice temperature.
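The ratio referred to here is the standard anti-Stokes/Stokes intensity ratio, which reflects the thermal population of the phonon involved. Schematically (this is the textbook Boltzmann relation rather than a formula quoted from the papers),

$$ \frac{I_{\mathrm{anti\text{-}Stokes}}}{I_{\mathrm{Stokes}}} \;\propto\; \exp\!\left(-\frac{\hbar\omega_{\mathrm{ph}}}{k_B T}\right), $$

so measuring the two intensities for a phonon of known frequency $\omega_{\mathrm{ph}}$ gives the local lattice temperature $T$. In practice the temperature-dependent shift of the Raman peak positions can be used in the same way, as Balandin notes below; combined with the known absorbed laser power, the measured temperature rise yields the thermal conductivity.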

“The interesting feature of this technique is that the temperature rise in graphene in response to laser heating is simply measured from the position of the Raman peaks we observed,” says Balandin.

New material of choice

The result means that isotopically pure graphene may now be considered as the material of choice for some thermal-management applications because of its superior heat-transferring properties, claims Ruoff.

The team, which also includes researchers from Xiamen University in China, now plans to map out the thermal-conductivity characteristics of graphene below room temperature.

Balandin, for his part, says that he is also going to be busy developing an accurate theoretical description of phonon–isotope scattering. “This would provide more physical insights into 2D phonon transport,” he says.

The work is described in Nature Materials.

Neutrons revive Heisenberg’s first take on uncertainty

Physicists in Austria and Japan are the first to measure two physical quantities that were used in 1927 by Werner Heisenberg in an early formulation of his uncertainty principle – but then abandoned because the terms did not seem to agree with the rapidly evolving theory of quantum mechanics. The neutron experiment verifies a 2003 reformulation of the famous uncertainty principle that reintroduces the concepts of error and disturbance.

When Heisenberg first proposed the uncertainty principle, it was in terms of the back-action of a measurement made on an extremely small object. His thinking is summed up in the “Heisenberg microscope” thought experiment whereby a photon is used to determine the location of an electron. The photon is scattered from the electron and then detected.

Heisenberg pointed out that such a measurement must contend with an inherent uncertainty in measuring the position at which the scattering took place – called the “error” – and an inherent uncertainty about how the momentum of the electron is changed by the scattering process. The latter is called the “disturbance” and Heisenberg showed that for a quantum system, the product of the two must be no less than a certain value – which we now recognize as being related to Planck’s constant.

Deeper statistical significance

The concepts of error and disturbance soon fell out of vogue, however, because it became apparent that there was a deeper statistical interpretation of uncertainty in quantum mechanics. As a result, Heisenberg’s ideas could not be reconciled with the mathematical expression of quantum mechanics.

Heisenberg and others began to express the uncertainty principle using statistical concepts – the product of the standard deviations of the position and momentum must be no less than a certain value. While this formulation provides a more universal definition of the uncertainty principle, there has always been some lingering interest among physicists about Heisenberg’s original ideas of error and disturbance.

Then, in 2003, Masanao Ozawa at Japan’s Nagoya University derived a new universal expression of the uncertainty principle that includes error and disturbance – as well as the standard-deviation terms. Now, Ozawa has joined forces with Yuji Hasegawa and colleagues at the Vienna University of Technology to confirm the calculation using spin-polarized neutrons. Instead of looking at position and momentum, the experiment measures two orthogonal spin components of the neutron – quantities also governed by the uncertainty principle.
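For readers who want the inequalities themselves, the standard forms from the literature are summarized below (paraphrased here rather than quoted from the Nature Physics paper). Writing ε(A) for the error in a measurement of observable A and η(B) for the disturbance it imparts to a conjugate observable B, the naive Heisenberg-style relation is

$$ \varepsilon(A)\,\eta(B) \;\geq\; \tfrac{1}{2}\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|, $$

whereas Ozawa’s universally valid 2003 relation adds two terms involving the ordinary standard deviations σ:

$$ \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \;\geq\; \tfrac{1}{2}\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|. $$

For position and momentum the right-hand side is ħ/2. Because of the extra terms, the bare product ε(A)η(B) can dip below the naive bound – which is exactly the behaviour the neutron-spin experiment set out to test.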

Polarized neutrons

The experiment begins with a beam of mono-energetic thermal neutrons from a research reactor – the sort of neutrons that would be used in neutron-diffraction studies of solids. The neutron spins are aligned in the Z direction by passing the beam through a polarizing filter. The beam is then sent to an apparatus that determines the standard deviation in the measurement of the X-polarization, and then on to a similar apparatus that determines the standard deviation in the Y-polarization.

The error and disturbance are created by “detuning” the first apparatus so that it measures the polarization in a direction in the XY plane that is a small angular deviation from the X-axis. As well as creating a well-defined error in the measurement of the X-polarization, the rotation also causes a well-defined disturbance in the Y-polarization.

The error and disturbance are determined using data from the two polarization measurements – and the results agree with Ozawa’s theory.

Arbitrarily small

“The smaller the error in one measurement, the larger the disturbance of the other – this rule still holds,” explains Hasegawa. However, he points out that the experiment confirms Ozawa’s result that the product of error and disturbance can be made arbitrarily small – showing that Heisenberg was right to abandon his original formulation.

“This is certainly the first experiment to test Ozawa’s formulation, so I think this should draw more attention to Ozawa’s formulation, and how it is universally valid unlike a naive Heisenberg measurement-disturbance relation,” said Howard Wiseman of Griffith University in Australia.

“The naive idea that the uncertainty relation can be understood as arising because any measurement of quantity X ‘kicks around’ the value of a complementary quantity Y is still commonly presented in elementary presentations of quantum mechanics. Hopefully this experiment will help to dispel that idea.”

The experiment is described in Nature Physics.

Hawking exhibition opens in London


By Matin Durrani

I travelled up to London last night to attend the official opening of a new exhibition at the Science Museum celebrating the Cambridge University cosmologist Stephen Hawking, who turned 70 earlier this month.

Sadly, Hawking was too ill to attend in person, but he did deliver a “speech” via his trademark voice synthesizer, in which he said that “it has been a glorious time to be alive and doing research in theoretical physics”.

“Our picture of the universe has changed a great deal in the last 70 years, and I’m happy if I have made a small contribution,” he added.

Hawking went on to say that he wanted to share his “inspiration and enthusiasm” for science. “There’s nothing like the ‘eureka’ moment of discovering something that no-one knew before,” he claimed.

The exhibition, which is fairly small, includes a short letter that Hawking sent to the editor of Nature in 1974 accompanying his paper showing that black holes can emit radiation – a hypothesis that he warned “might cause quite a stir”.

There is also a drawing of Hawking by the artist David Hockney and some other memorabilia, including a copy of a baseball encyclopedia that was the subject of a bet with Caltech physicist John Preskill. Hawking gave Preskill the book in 2004 after conceding that information could be retrieved from a black hole, as Preskill had argued but Hawking had originally denied.

Also present last night was Hawking’s daughter Lucy, who paid tribute to her father and thanked the museum for putting on the display.

Spotted among the attendees was Graham Farmelo, author of a biography of that other great British theoretical physicist, Paul Dirac. Entitled The Strangest Man, it was Physics World’s Book of the Year 2010 and you can listen to an online lecture by Farmelo about Dirac here. Also present last night was Surrey University physicist Jim Al-Khalili, who recently delivered an online lecture for physicsworld.com about the scientific contributions of Muslim scholars.

More details about the exhibition can be found here.

How should time be defined?


By Hamish Johnston

For most people there are 86,400 seconds in a day – but astronomers have known for some time that days are gradually getting longer, because the Earth’s rotation is slowing and its rate also fluctuates slightly from year to year.

While most of us will live our entire lives oblivious to this tiny warping of time, it does mean that the time kept by super-accurate atomic clocks and the astronomical time calculated from Earth’s motion are drifting apart by up to one second per year.

To solve this problem, the International Telecommunication Union (ITU) maintains Coordinated Universal Time (UTC). The length of the second in UTC is defined by a fixed number of oscillations of an atomic clock, whereas the time of day is kept in step with the Earth’s rotation. This is done by adding “leap seconds” to UTC – or, in principle, subtracting them – when necessary.

For the past decade, however, various groups have been calling for the abolition of the leap second and the adoption of pure atomic time. The ITU will be meeting in Geneva over the next few weeks and the abolition of the leap second is on the agenda. Indeed, the first debate is scheduled for today.

If the ITU does do away with the leap second, it will end tens of thousands of years of astronomical timekeeping by humans. This bothers some scientists – including Markus Kuhn of the University of Cambridge in the UK. You can read more about the leap second, and Kuhn’s arguments, here.

What do you think? You can have your say by participating in this week’s Facebook poll, where the question is:

How should time be defined?

By the Earth’s rotation
By an atomic clock

In last week’s poll we found ourselves in rather gloomy territory following the news that the famous Doomsday Clock had swung one minute closer to midnight. We asked you to choose from a list of scenarios the one you believe is most likely to lead to the end of civilization as we know it.

Runaway climate change emerged as voters’ “favourite” choice, picking up 49% of the votes. In second place was a nuclear world war, receiving 27% of votes. In third place was an asteroid impact with 12% of votes. Fourth place went to an act of bioterrorism with 10% of votes. And just 6% of voters believe that we will meet our end at the hands of an alien invasion.

Once again, the poll attracted a lot of comments from our fans on Facebook, despite its rather depressing theme. And a lot of people appeared to have given the doomsday scenario some serious thought. That includes Bill Dortch, who warned: “I would say an act of bioterrorism, especially now that not one, but two researchers, with NIH funding, have demonstrated how very easy it would be.”

Cathy McHale Albano also believes that our fate will ultimately lie in our own hands. “I’m guessing it’s got to be something caused by humanity, so it’s runaway climate change, bioterrorism or nuclear world war,” she says. “The insidious nature of climate change makes it more likely, in my mind, although all it takes is one wrong move by one of the world’s wackos for the other two to happen.”

However, there were plenty of others who answered the poll in jest, including Lynette Fitch Blair: “Since there is no category for zombie apocalyse, then I guess alien invasion is the next best choice.” And Paul Tangney, who chipped in early to point out that we were offering “some post-festive cheer from the physics community”.

Thank you for all of your votes and comments, and we look forward to hearing from you again in this week’s poll.
