
June 2008 Archives

By Hamish Johnston

The British media were talking about physics today, and I’m afraid the news wasn’t good.

The BBC was reporting on a study of the state of physics teaching in England by Alan Smithers and Pamela Robinson of the Centre for Education and Employment Research at the University of Buckingham.

Robinson and Smithers found that in 2007 just 12% of scientists accepted on teacher training programmes were trained as physicists — down from 30% in 1983. If this trend continues, it could be very difficult for the government to hit its target of having 25% of all science teachers specialising in physics by 2014.

The decline in physics teachers has meant that many education authorities have opted for “general science” teachers who cover biology and chemistry as well as physics. Indeed, the researchers found that half the schools in inner London have no teachers specialising in physics.

However, all is not gloom and doom for physics teaching in England. Last week we had our summer company meeting, and Bob Kirby-Harris, chief executive of the Institute of Physics (which owns the company that publishes physicsworld.com), told us how the organization is tackling the problem. The IOP has set up the Physics Enhancement Project, which aims to boost the physics expertise of trainee science teachers who don’t have formal qualifications in physics.

Most of our readers are outside England, so please let us know about the state of physics teaching where you are.

By Hamish Johnston

There’s nothing more annoying than the sound of your own voice…

I have come to this conclusion after spending many painful hours transcribing hundreds of taped interviews that I have done with scientists, industrialists and other luminaries.

But today, the shoe was on the other foot (or ear). At noon I was in a radio studio speaking to the host of the BBC Radio Wales programme The Science Cafe about dark energy.

2008 marks the tenth anniversary of the discovery of dark energy and Adam Walton wanted an update on the stuff from Physics World.

I’ll be the first to admit that cosmology is not one of my strengths and I know that I don’t have a voice for radio — but no-one else on the editorial team seemed to be available, so I agreed.

I spent the last few days reading up on dark energy — indeed, Physics World recently published three articles to mark the 10th anniversary, all of which were very helpful.

I did my best to answer Adam’s questions and I hope that, after some skilful editing, his listeners will learn something about dark energy.

One thing I learned from the experience is that many of the things that I find interesting about dark energy — the huge discrepancy between the cosmological constant invoked to explain dark energy and the “cosmological constant” that can be derived from quantum field theory, for example — are very difficult to explain in sound bites.
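I still find it easier to show that problem than to say it. Here is the discrepancy reduced to a back-of-the-envelope Python calculation, using standard textbook numbers rather than anything from the radio programme:

```python
import math

# Order-of-magnitude comparison of the observed dark-energy density with the
# naive quantum-field-theory estimate obtained by cutting vacuum modes off at
# the Planck scale (both values are rough, standard textbook figures).
rho_observed = 2e-47  # GeV^4, roughly (2 x 10^-3 eV)^4
rho_qft      = 2e76   # GeV^4, roughly (1.2 x 10^19 GeV)^4

print(f"discrepancy ~ 10^{math.log10(rho_qft / rho_observed):.0f}")  # ~10^123
```

Now try fitting "our best theory overshoots the measurement by a factor of 10^123" into a ten-second sound bite.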

If you live in Wales, you can listen to the show on Sunday, 29 June at 5pm. The rest of us can listen online.

I was pretty nervous during the interview, so I can’t really remember half of what I said. It will be interesting to hear what they use and what ends up on the cutting room floor.

By Jon Cartwright

Reports of rumour-mongering, pettiness and mud-slinging may still be rife, but I think it’s safe to say that the fever surrounding the US primaries has at least partly subsided. Among those who have not been taking time to convalesce, however, are the folks at ScienceDebate 2008. According to an email they dropped into my inbox this morning, they’ve been busy working with a dozen national science organizations to prepare a list of 14 questions related to science policy for the presidential candidates. Get ready, they’ll be announcing it shortly.

Until then, check out this page on the Scientists and Engineers for America (SEA) website. Together with ScienceDebate 2008, the American Association for the Advancement of Science, the American Institute of Physics, the American Physical Society and 11 other organizations, the SEA has drawn up a list of seven questions on science policy for the 2008 congressional candidates.

Two candidates have already posted some responses. If you want to pester your local candidate, SEA gives you the option to send him or her an email.

By Hamish Johnston

Earlier this week I received a press release about a paper entitled ‘Abundant health from radioactive waste’, which was published today in the International Journal of Low Radiation.

Not surprisingly, this set the alarm bells ringing, but I couldn’t resist following it up.

The paper is by Don Luckey who is Emeritus Professor of Biochemistry at the University of Missouri. Luckey is a proponent of “radiation hormesis” — the idea that small doses of radiation can actually be good for you, even if much larger doses will kill you.

In his paper, Luckey goes so far as to suggest that schools be built “in used nuclear power plants”, and children be given sculptures that are impregnated with nuclear waste to boost their exposure to radiation (and their health). He does caution, “However, children should not ride [sculptures of] radioactive ponies for more than a few minutes every day”.

I had never heard of radiation hormesis, so I got in touch with several health physicists in the UK and I was genuinely surprised to get a mixed verdict on the theory. Although they all agreed that hormesis was at the fringes of health physics, some did say that there could be something to it.

Indeed, I was told that the theory has a small but very vocal group of supporters, particularly in France, Japan and the US, who have been lobbying the International Commission on Radiological Protection to look into revising its Linear No-Threshold (LNT) principle. The LNT maintains that there is no exposure level below which radiation has no harmful effects (although these effects are extremely small at very low levels).
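To see what the fuss is about, here is a toy sketch of the two competing dose-response shapes. The coefficients are made up purely for illustration; this is not a model from any of the studies mentioned:

```python
import math

def lnt_risk(dose, slope=0.05):
    """Linear no-threshold: excess risk stays proportional to dose, down to zero."""
    return slope * dose

def hormesis_risk(dose, slope=0.05, benefit=0.08, scale=1.0):
    """Toy J-shaped curve: a protective dip at low dose, rejoining LNT at high dose."""
    return slope * dose - benefit * dose * math.exp(-dose / scale)

for d in [0.1, 0.5, 1.0, 5.0, 10.0]:  # arbitrary dose units
    print(f"dose {d:>4}: LNT {lnt_risk(d):+.4f}  hormesis {hormesis_risk(d):+.4f}")
# The lowest dose comes out negative (a net benefit) on the hormesis curve --
# exactly the behaviour that the LNT principle rules out.
```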

The reality is that it is very difficult to understand the effects — good or bad — of very low levels of radiation. As a result, the literature is full of seemingly conflicting reports and scientists who have a passionate belief in radiation hormesis can pick and choose studies that support the theory, while dismissing those that don’t.

A case in point is the controversial 1995 study by Bernard Cohen, which suggested that people living in parts of the US with high levels of the radioactive gas radon tend to be less likely to die from lung cancer — strong evidence for radiation hormesis, according to Luckey. However, in 2003, Jerry Puskin showed that this could be explained by considering the different rates of smoking in these regions — something that Luckey seems to have ignored in his latest paper.

So, will my children be playing on a radioactive pony? I don’t think so!


By Matin Durrani

I was up in London yesterday at the headquarters of the Institute of Physics to listen to a talk by top quantum-information scientist Anton Zeilinger from the University of Vienna.

Zeilinger was giving the inaugural Isaac Newton lecture after being named the first recipient of the Institute’s Newton medal.

Unlike the Institute’s other medals, the Newton medal is awarded to “any physicist, regardless of subject area, background or nationality”, rather than to a physicist with specific links to the UK.

I’d say there were about 200 physicists in the audience to listen to Zeilinger whizz through topics like entanglement and decoherence — and how these have applications in quantum communication, quantum cryptography and quantum teleportation, some of which are being commercialized.

His basic message is that, thanks to various technological advances, we can now examine some of the fundamental questions in quantum mechanics that the likes of Heisenberg, Schrödinger, Bohr and Einstein posed as mere “thought experiments”, such as whether measurements on one particle can instantly affect an entangled partner a finite distance away. We are in fact living through a “quantum renaissance”.

Zeilinger and his colleague Markus Aspelmeyer are fleshing out these themes in an article to appear in the next issue of Physics World. I was delighted that he referred to the article several times, even flashing up a couple of figures that our art studio has redrawn from Zeilinger’s hand sketches.

After the lecture, I caught up with Zeilinger over champagne and quizzed him about how firmly he had put his neck on the line when it comes to decoherence — the loss of fragile quantum states when they interact with the environment.

Having described how molecules as large as buckyballs can demonstrate quantum behaviour, Zeilinger had told the audience that he thinks “there is no limit” to how heavy, complex or warm a molecule can be while still showing quantum phenomena. “Decoherence won’t be a problem for molecules as large as viruses even at room temperature,” he speculated. “The limit is only one of money.”

After the lecture, delegates were treated to a concert by the Abram Wilson Jazz Quartet. Zeilinger is, apparently, something of a jazz buff.


By Hamish Johnston

Summer can be a miserable time in Toronto. It can get very hot and humid, prompting folks to turn up their air conditioning, which in turn puts the region’s coal-fired generating plants into overdrive, blanketing the city in a sickly yellow smog that can harm those with breathing difficulties.

As a result, local utilities have begun to shut down aging coal-fired plants in an attempt to improve air quality — but this leaves some wondering where the city and the surrounding province of Ontario will get their electricity.

Now, Ontario’s Minister for Energy Gerry Phillips has given the go-ahead for two new nuclear reactors to be built at the Darlington generating plant just east of the city, which is already home to four reactors. They will be the first new power reactors to be built in Canada in over 15 years.

According to the Toronto Star, the reactors will come online in 2018, and the design will be chosen in November from a shortlist of three firms: Atomic Energy of Canada; US-based Westinghouse; and Areva of France.

The reactors are expected to generate about 3200 MW of power, which will double Darlington’s current capacity.

The move is part of Ontario’s CDN$26 billion plan to maintain its current nuclear capacity of 14,000 MW through a series of upgrades over the next 20 years.

Darlington is located in a part of Ontario that has been hard hit by lay-offs in the automotive industry, so Phillips may be hoping that the promise of 3500 new jobs will offset the concerns of environmentalists.

By Hamish Johnston

I spent an hour or so this morning trawling through the arXiv preprint server, where many physicists post their research results before they are published formally. It’s a great way to keep up with the latest breakthroughs — and a good source of more controversial or off-beat stories.

That’s where I spotted this gem: “Growth of Diamond Films from Tequila” by Javier Morales, Miguel Apátiga and Victor M Castaño, who are physicists based in (you guessed it) Mexico.

It seems that the three physicists have used the famous spirit in their chemical vapour deposition (CVD) machine to create tiny diamonds.

Although the paper notes that diamonds have already been made by CVD using a number of other precursors, the trio suggest that tequila provides “an excellent alternative to produce industrial-scale diamond thin films for practical applications using low-cost precursors”.

It’s not clear from the paper why tequila was used rather than vodka, gin or whisky — and of course, had the paper been entitled “Growth of Diamond Films from Water and Ethanol”, I wouldn’t have given it a second thought.

By Jon Cartwright

Several of you have asked when I’m going to give you an update on Yoshiaki Arata’s cold-fusion demonstration that took place at Osaka University, Japan, three weeks ago. I have not yet come across any other first-hand accounts, and the videos, which I believe were taken by people at the university, have still not surfaced.

However, you may have noticed that Jed Rothwell of the LENR library website has put online some figures, with explanations, relating to Arata’s recent work. I’ve decided to go over them and some others here briefly to give you an idea of how Arata’s cold-fusion experiments work. It’s a bit more technical than usual, so get your thinking caps on.

[Figure: diagram of Arata’s apparatus]

Above is a diagram of his apparatus. It comprises a stainless-steel cell (2) containing a sample, which in the case of the demonstration was palladium dispersed in zirconium oxide (ZrO2–Pd). Arata measures the temperature of the sample (Tin) using a thermocouple mounted through its centre, and the temperature of the cell wall (Ts) using a thermocouple attached to the outside.

Let’s have a look at how these two temperatures, Tin and Ts, change over the course of Arata’s experiments. The first graph below is from one of the control experiments (performed in July last year), in which hydrogen, rather than deuterium, is injected into the cell via the valve (5), which is operated by the controller (8):

[Figure: hydrogen control run — Tin and Ts versus time]

At 50 minutes — after the cell has been baked and cooled to remove gas contamination — Arata begins injecting hydrogen into the cell. This generates heat, which Arata says is due to a chemical reaction, and the temperature of the sample, Tin (green line), rises to 61 °C. After 15 minutes the sample can apparently take no more hydrogen, and the sample temperature begins to drop.

Now let’s look at the next graph below, which is essentially the same experiment but with deuterium gas (performed in October last year):

[Figure: deuterium run — Tin and Ts versus time]

As before, Arata injects the gas after 50 minutes, although it takes a little longer for the sample to become saturated, around 18 minutes. This time the sample temperature Tin (red line) rises to 71 °C.

At a quick glance the temperatures in both graphs appear, after saturation, to peter out as one would expect if heat escapes to the environment. However, in the case of deuterium there is always a significant temperature difference between Tin and Ts, indicating that the sample and cell are not reaching equilibrium. Moreover, after 300 minutes the Tin of the deuterium experiment is about 28 °C (4 °C above ambient), while Tin and Ts of the hydrogen experiment have converged at about 25 °C (1 °C above ambient).
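To get a feel for what those offsets mean in power terms, here is my own back-of-the-envelope estimate using Newton’s law of cooling. The heat-transfer coefficient and cell area below are pure guesses for illustration; this is emphatically not Arata’s calorimetry:

```python
# At steady state, the power escaping the cell is P = h * A * (T - T_ambient).
h = 10.0          # W/(m^2 K): typical free-convection coefficient (assumed)
A = 0.05          # m^2: guessed outer surface area of the cell
T_ambient = 24.0  # deg C, read off the graphs

for label, T in [("deuterium run", 28.0), ("hydrogen run", 25.0)]:
    watts = h * A * (T - T_ambient)
    print(f"{label}: {T - T_ambient:.0f} degC above ambient -> ~{watts:.1f} W")
# A watt or two sustained for thousands of minutes with no external input is
# the contentious part of the claim -- not the size of the offset itself.
```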

These results imply there must be a source of heat from inside the cell. Arata claims that, given the large amount of power involved, this must be some form of fusion — what he prefers to call “solid fusion”. This can be described, he says, by the following equation:

D + D → ⁴He + heat

(According to this equation, there should be no neutrons produced as by-products — thanks to those of you who pointed this out on the last post.)
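As a sanity check on the energetics, the Q-value of that reaction can be worked out from standard atomic masses. The numbers below come from nuclear data tables, not from Arata’s paper:

```python
U_TO_MEV = 931.494      # MeV per atomic mass unit
M_D      = 2.014101778  # atomic mass of deuterium, in u
M_HE4    = 4.002603254  # atomic mass of helium-4, in u

q = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q(D + D -> 4He) = {q:.2f} MeV")  # ~23.85 MeV per fusion event
# In conventional two-body D+D fusion this channel is rare and needs a gamma
# ray to carry the energy away -- one reason mainstream physicists are sceptical.
```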

If any of you are still reading, this graph below is also worth a look:

[Figure: long-duration temperature records for runs (A)–(D)]

Here, Arata also displays data from deuterium and hydrogen experiments, but starts recording temperatures after the previous graphs finished, at 300 minutes. There are four plots: (A) is a deuterium and ZrO2–Pd experiment, like the one just described; (B) is another deuterium experiment, this time with a different sample; (C) is a control experiment with hydrogen, again similar to the one just described; and (D) is ambient temperature.

You can see here that the hydrogen experiment (C) reaches ambient temperature quite soon, after around 500 minutes. However, both deuterium experiments remain 1 °C or more above ambient for at least 3000 minutes, while still exhibiting the temperature difference between the sample and the cell, Tin and Ts.

Could this apparently lasting power output be used as an energy source? Arata believes it is potentially more important to us than hot or “thermonuclear” fusion and notes that, unlike the latter, it does not emit any pollution at all.


By Hamish Johnston

I have just updated the “Featured Journal” slot on physicsworld.com to include a paper published today that presents a “feasible” recipe for creating a metamaterial that could completely cloak an object from sound — at least in two dimensions.

Instead of absorbing and/or reflecting sound like traditional acoustic insulation, an acoustic cloak would guide sound waves around an object, much like water flowing around a stone in the middle of a stream. This analogy makes it easy to see why those charged with hiding submarines from sonar would like to get their hands on such a material.
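The mathematical trick behind such cloaks is a coordinate transformation that opens up a “hole” in space for the object to sit in. Below is a minimal sketch of the standard linear radial map used in cylindrical cloak designs; it illustrates the general idea, not the specific recipe in the paper:

```python
def cloak_map(r: float, a: float, b: float) -> float:
    """Squeeze the disc 0 <= r <= b into the annulus a <= r' <= b."""
    if not 0.0 <= r <= b:
        raise ValueError("map is only defined inside the cloak, 0 <= r <= b")
    return a + r * (b - a) / b

a, b = 1.0, 2.0  # assumed inner and outer cloak radii, arbitrary units
for r in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"r = {r:.1f} -> r' = {cloak_map(r, a, b):.2f}")
# Rays that would have passed through the core are rerouted through the
# annulus; the anisotropic density and modulus profiles needed to do this in a
# real material follow from the Jacobian of the map.
```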

But don’t expect to be able to cover the walls of your bedroom with the stuff and finally get a good night’s sleep, because the authors haven’t actually made the material yet (although if your neighbour’s microwave oven is keeping you awake, physicists have made a microwave cloak).

The paper is by Daniel Torrent and José Sánchez-Dehesa of the Polytechnic University of Valencia, Spain. I spoke with Sánchez-Dehesa earlier this year when I wrote a news story about a paper they published in February. This earlier work suggested that an acoustic cloak could be made by surrounding an object with an array of cylindrical rods. If the rods had the right elastic properties and their radii and spacing were varied in a specific way, silence would reign.
However, it looks like they have decided that this earlier design cannot be built using real materials. Now, they have refined their design to a multilayered composite of two different “sonic crystals” — with each crystal being a lattice of cylindrical rods.

Torrent and Sánchez-Dehesa have calculated that about 200 metamaterial layers would be required to cloak an object over a wide range of acoustic frequencies — although it’s not clear from the paper how thick this would be if real materials were used.

Both papers are published in the New Journal of Physics, which will be putting out a special issue on “Cloaking and Transformation Optics” later this year.

By Hamish Johnston

Earlier this week our Paris correspondent Belle Dumé was back in her hometown of Liverpool, where she took in an exhibition of astronomy images on display around the city’s famous Albert Dock.

Belle took a selection of photos and reported back to us.

Called “From Earth to the Universe”, the exhibition runs until 29 June 2008. Belle says that it provides a taste of things to come during the International Year of Astronomy celebrations next year.

2009 has been proclaimed the International Year of Astronomy (IYA2009) by the International Astronomical Union (IAU) and UNESCO. Its mission is to bring astronomy into the wider public domain.


Belle tells me that the new exhibition, sponsored by the Science Photo Library and the ASTRONET Symposium, is probably the first real IYA2009 event and consists of 48 stunning images taken by professional as well as amateur astronomers. These include photographs of our Milky Way, the Andromeda galaxy, the Horsehead Nebula and the now famous image of the cosmic microwave background revealed by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite in 2003.

Belle’s hometown was chosen to host the exhibit because it is the European Capital of Culture this year. The exhibition also coincides with a major European astronomy meeting, the ASTRONET Symposium, which will take place from 16 to 19 June at Liverpool John Moores University.


Belle tells me that the display is a prototype for an exhibit that will tour the world next year. So it might be coming to a park, shopping centre, metro station or airport near you.

There will be special coverage of IYA2009 in upcoming issues of Physics World.


By Jon Cartwright

Most of you will never have raised an arm at Christie’s auction house. But, if you’re partial to the odd extravagance, there’s a first edition of Nicolaus Copernicus’s De Revolutionibus Orbium Coelestium (“On the Revolutions of Celestial Spheres”) up for grabs. It’ll probably cost you around a million dollars.

Bidding for the 1543 volume starts on 17 June, and I expect it will end up in the vault of some blasé collector. No-one will ever read it, but then it is in Latin, and who understands that these days? Nil desperandum, though, that’s what I like to say.

Still, I know of at least one physicist who would love to get his hands on it. Owen Gingerich, a historian of astronomy from Harvard University, has spent years tracing copies of Copernicus’s masterpiece, partly as an exercise for a book he wrote in 2004. A first edition would be the darling possession on his mantelpiece. “There aren’t that many copies in private hands these days,” he lamented on the phone to me a few moments ago.

Nowadays Gingerich finds solace in a second edition. Although considerably less valuable, it does have annotations by Rheticus, the young mathematician who persuaded Copernicus to publish his radical ideas. Gingerich did get the opportunity a few years ago to buy a bona fide first edition for $50,000, which would have been a good investment but which unfortunately would have required him to re-mortgage his house.

Will Gingerich put in a bid at Christie’s this time round? “I figure that even if I had it I’d have to rent a bank safety deposit box to keep it in,” he says. “So I’ll give it a pass.”

By Jon Cartwright

Here’s a statistic for you, taken from a website called Sense About Science. It claims that over a million scientific papers are published every year. If that’s right, there must be something in the region of 20,000 published a week. Even if physics accounts for only a small fraction of the sciences, that still means we’re looking at several hundred every day. (I could dig out a reliable figure, but this estimate is probably not far wrong.)
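For what it’s worth, the arithmetic runs roughly like this; only the million-a-year figure comes from Sense About Science, and the physics share is my guess:

```python
papers_per_year = 1_000_000   # Sense About Science's claim
physics_share = 0.10          # assumed: physics as ~10% of all output

per_week = papers_per_year / 52
physics_per_day = papers_per_year * physics_share / 365
print(f"~{per_week:,.0f} papers a week, ~{physics_per_day:,.0f} physics papers a day")
# ~19,231 a week and ~274 physics papers a day: "several hundred", as claimed.
```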

There’s no way we at Physics World can even hope to keep you up to date with that many papers. Nor would you want us to — let’s face it, most papers deal with minor developments that interest only those working in exactly the same field.

So, I would like to raise a question: should we bother to comb the journals for the biggest developments, or should we give up reporting research news altogether?

Actually, I’m not the first to raise it. I discovered the idea nestled at the bottom of an article written last week in the Guardian by Peter Wilby. He had been haranguing the Daily Mail for the way it reports “breakthrough” developments in health research. (It’s the same old story: this week they tell you a glass of wine a day will increase your life expectancy; next week they will tell you the opposite.) Wilby proposes that, instead of mindlessly regurgitating seesawing opinions from the medical community, the media should offer “state of knowledge” features that give an overview of where the present scientific consensus is headed.

Would this feature-only approach benefit physics too? Conclusions seen in physics papers are usually straightforward to interpret — particularly compared with, say, the vagaries of health surveys — which would suggest the answer is no. However, there are still many difficulties.

One is that small developments in research are seen as being less newsworthy than those that go against prevailing opinion. In the latter instance, unless there is enough context to show how the research fits into the grand scheme of things, a news story can be misleading. Another, as I showed in my recent article on the use of embargoes in science publishing, is that much (if not most) science news is artificially timed to fit in with publishers’ agendas; in a sense, the news is not “new” at all. A feature-only approach would avoid both of these.

The main point I can see in favour of science news is that there are certain developments that deserve to be brought to people’s attention immediately. Think of the recent claims by the DAMA experiment team in Italy that they had detected dark matter on Earth. Or the discovery by Japanese physicists of a new class of high-temperature superconductors based on iron. Should we only report on such critical research? If so, where do we draw the line?

Let’s hear your thoughts. But bear in mind that if we do decide to scrap science news, I’ll be out of a job.

By Jon Cartwright

We have Leon Lederman to blame. For the “God particle”, that is. Since he published his 1993 book, The God Particle: If the Universe Is the Answer, What Is the Question?, the layperson might be forgiven for believing the Large Hadron Collider (LHC) is not searching for a particle called the Higgs boson, but a path to spiritual enlightenment.

Many physicists hate referring to Him. For some particle physicists, the “God particle” belittles the hordes of other theoretical particles that might be detected at the LHC. They say it reveals little of the particle’s function, and is savoured by writers with little rhetorical flair. For some non-particle physicists, the God particle epitomizes the hype that surrounds particle physics. Then there are those who think divine connotations are always a bad idea.

Are they, though? When a furore about the use of “God particle” began bouncing around the blogosphere last August, mostly in response to an article written by Dennis Overbye of the New York Times in which he defended the term, several agreed that religious metaphors should be an acceptable part of our language. Einstein used them all the time (e.g. “Quantum mechanics…yields much, but it hardly brings us close to the secrets of the Old One”) yet historians tend to conclude he was not a theist. Even when I began writing this blog entry I thought I might be clever and refer to the Higgs as the light at the end of the LHC’s tunnel — before I reminded myself that the Higgs is not the only particle of import they expect to find.

As Sean Carroll noted on the Cosmic Variance blog, it is a fear of pandering to the religious right that is driving the expulsion of religious metaphors. If certain atheists succeed, religious metaphors will go the way of the dodo. The God particle is not one of the best, but it might be one of the last.

Which brings me to the point of this entry (not that Earth-shattering, I’ll warn you now). This morning I was looking at the news links posted on the Interactions website, only to find one from the Guardian newspaper headlined “The hunt for God’s particle”. That’s right — you read it right the first time. “God’s particle”? Where’s the metaphor in that? Have we now come full-circle, having compared the search for the Higgs boson to the path for spiritual enlightenment, only to reduce it to another of God’s creations?

Poor old Lederman must wonder what he started.


By Hamish Johnston

In my line of work I don’t usually get to talk to multi-millionaires — but a few weeks ago I had the pleasure of speaking with high-tech magnate Mike Lazaridis, who made his fortune developing the Blackberry handheld email/mobile phone device.

Lazaridis and I were in a conference call with Neil Turok, one of the world’s leading cosmologists, who had just been enticed by Lazaridis to leave Cambridge and become executive director of Canada’s Perimeter Institute for Theoretical Physics in Waterloo, Ontario.

The institute was founded in 2000 by Lazaridis, who put up about CDN$100m of his own money. Now, Lazaridis has donated a further $50m to Perimeter.

If you count the millions that he and his wife have given to the Institute for Quantum Computing at the nearby University of Waterloo, Lazaridis (who is not a physicist) has spent an amazing $200m on physics research!

When I asked Lazaridis why Turok was the right person to lead the institute he said: “We share deep convictions in the importance of basic science, the importance of funding basic science, and the importance of philanthropy in promoting basic science for the advancement of mankind”.

Lazaridis is one of a small but growing number of benefactors with deep convictions and deep pockets when it comes to the more esoteric disciplines of physics such as cosmology, astrophysics and particle physics.

Just two weeks ago an anonymous benefactor donated $5m to Fermilab, which has been particularly hard hit by US government cuts to physics spending.

And staying with the topic of funding cuts, during our conversation Turok told me that recent cutbacks in the UK made Perimeter’s offer all the more attractive — something that he has discussed in great detail in a recent interview with the Times.

By Michael Banks

I have been closely following events concerning a new class of iron-based superconductors ever since Physics World broke the story about their discovery in March. The new materials, containing planes of iron and arsenic separated by planes of lanthanum and oxygen, offer high temperature superconductivity without the need of copper-oxide planes as in the cuprates.

The challenge now is to understand how these superconductors work, i.e. what the responsible pairing mechanism is. Early calculations showed that the superconductivity cannot be described by electron-phonon coupling. The mechanism could therefore be similar to cuprate-based superconductors, which currently hold the record for the highest superconducting transition temperature (although the mechanism in the cuprates is still not understood).

Now, however, a paper published in Nature suggests that SmFeAsOF, which is the same as the material in the story we reported in March but with the lanthanum replaced by samarium, may behave quite differently to the cuprates. The paper’s authors, who are based in the US and China, show that SmFeAsOF has a ‘single gap’ in the density of states of the Cooper-pair fluid (an energy gap arises because a finite amount of energy is needed to break apart the electrons bound in a Cooper pair). The temperature dependence of the gap was found to obey conventional BCS predictions — the theory named after Bardeen, Cooper and Schrieffer that proposes that electrons attract one another, via phonons, to form Cooper pairs.
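For readers wondering what “conventional BCS predictions” look like in practice, here is a sketch of the standard weak-coupling results. These are textbook formulae, not data from the Nature paper, and the transition temperature below is an assumption for illustration:

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def bcs_gap(t_over_tc: float, delta0: float) -> float:
    """Common interpolation for the BCS gap at reduced temperature T/Tc."""
    if t_over_tc >= 1.0:
        return 0.0  # the gap closes at the transition
    return delta0 * math.tanh(1.74 * math.sqrt(1.0 / t_over_tc - 1.0))

tc = 50.0                      # K: assumed Tc, of the order reported for Sm-based samples
delta0 = 3.53 * KB * tc / 2.0  # weak-coupling ratio 2*Delta0 = 3.53*kB*Tc
for t in [0.2, 0.5, 0.8, 0.95]:
    print(f"T/Tc = {t:.2f}: gap ~ {1e3 * bcs_gap(t, delta0):.2f} meV")
```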

This is all different from the cuprates, which don’t follow BCS predictions and also have a so-called ‘pseudo-gap’, which, as I understand it, means that only certain electrons ‘see’ a gap, depending on how they travel with respect to the crystal lattice. The authors found no evidence of a ‘pseudo-gap’ in the new materials. So it seems that the materials follow BCS predictions, but with a superconducting transition temperature that is too high to be explained by electron-phonon coupling. The mystery deepens.

In another recent development, researchers in Switzerland have managed to grow single crystals of the Sm-based iron superconductor. All previous work was performed on polycrystalline samples, so the move to single crystals means that pinning down the elusive pairing mechanism may be a step closer.