
Frequency comb helps kill dangerous bacteria

Scientists in the US have used an optical-frequency comb – a laser that emits light at a range of equally spaced frequencies, like the teeth on a comb – to monitor how well a device designed to kill dangerous bacteria does its job. The comb was used to measure the concentrations of ozone, hydrogen peroxide and other reactive molecules in the stream of air and cold plasma produced by the decontamination device. The study reveals that decontamination is most efficient when both a plasma and hydrogen peroxide are present in the stream.

“Cold-air plasmas” – room-temperature gases of ionized air molecules – are widely used to kill dangerous bacteria in both medical and food-processing environments. While the technique is good at dealing with antibiotic- and heat-resistant bacteria, the devices can be even more potent if the plasma is combined with an antibacterial chemical such as hydrogen peroxide. But understanding why the combination works so well – and how it could be improved – is not easy, because accurately measuring the relative abundances of the different molecules in the stream, and how they interact, is tricky.

Sensitive teeth

Mark Golkowski and colleagues at the University of Colorado, along with Jun Ye and team at JILA in nearby Boulder, have shown that an optical-frequency comb – a device normally associated with atomic clocks and precision spectroscopy – can get round this problem to study molecules in the decontaminating stream. When light from the comb passes through the stream, the presence of a specific molecule or ion is signified by the absorption of a specific set of teeth. According to Golkowski, JILA’s frequency comb offers the “unique capability of an extremely sensitive measurement and one that also yields information about the interaction dynamics, since many molecules can be simultaneously observed on short timescales”.
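For reference, the comb’s “teeth” sit at a set of equally spaced optical frequencies. In the standard textbook description (not a formula quoted in the study itself), these are

\[ f_n = f_0 + n\,f_{\mathrm{rep}}, \qquad n = 1, 2, 3, \ldots \]

where f_rep is the laser’s pulse-repetition rate and f_0 is an offset frequency. A molecule in the stream absorbs only those teeth that coincide with its transition frequencies, and the depth of that absorption indicates how much of the molecule is present.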

As well as quantifying how much hydrogen peroxide is in the stream, the frequency comb also revealed that the addition of hydrogen peroxide did not affect the level of the toxic gas nitrogen dioxide in the stream. The team also found that the levels of ozone and nitrous oxide in the stream halved when hydrogen peroxide was introduced. According to the researchers it is difficult to predict the relative concentrations of these molecules using numerical modelling, and therefore the frequency comb has given them a unique insight into the chemistry of their stream.

The research also confirmed that a hydrogen-peroxide-rich stream can quickly kill bacteria up to 3 m from the source. The system proved very effective at disinfecting surfaces of potentially dangerous organisms including Staphylococcus aureus – a cause of pneumonia and other diseases – and Pseudomonas aeruginosa, which is often found on medical equipment.

The research will be published in IEEE Transactions on Plasma Science.

Planet-spotting


(Courtesy: Randall Munroe/Creative Commons)



By Tushna Commissariat

At first glance, the image above might remind you of a colour-perception test. But what one-time physicist and comic creator Randall Munroe has done is to create a to-scale visualization of all 786 known planets – including the eight in our own solar system.

He is careful to note that the sizes of some of these planets have been determined simply on the basis of their mass – meaning that they might, in reality, be smaller and denser. Interestingly, this is not the first time that Munroe, who is behind the hugely popular xkcd.com webcomic, has had something to say about the billions and billions of exoplanets that we now know exist. In an earlier comic, he imagined travelling through interstellar space, with one character agonizing over his partner’s apparent apathy towards the wonder of all those worlds.

While Munroe’s newest comic is excellent, unfortunately his count is already out of date. Just yesterday, the team behind NASA’s planet-finding Kepler telescope announced that it had found another two planets in its data. These two planets are caught in quite a clinch – they are closer to each other than the planets in any other system we have found to date.

The cosy system, mundanely called Kepler-36, contains two planets circling a subgiant Sun-like star that is several billion years older than the Sun. The inner world, Kepler-36b, is a rocky planet with a 14-day orbit. It is about 1.5 times the size of Earth and is 4.5 times as massive. The outer world, Kepler-36c, is a “hot Neptune” planet with a 16-day orbit that is 3.7 times the size of Earth and 8 times as massive.

The researchers point out that, as the planets are so close to each other, from the surface of the smaller planet its partner would appear much as the Moon does from Earth – only 4–5 times bigger – filling the sky and presenting quite a spectacular view.

If Munroe’s first exoplanet comic does prove to be correct, I vote we point our spaceships towards this rather interesting system.

CERN calls press conference for 4 July…

By Hamish Johnston

We have heard from a reliable source that CERN will be holding a press conference on 4 July. This is the first day of the International Conference on High Energy Physics in Melbourne, Australia, where physicists working on the Large Hadron Collider (LHC) are expected to unveil the latest results in their search for the Higgs boson.

Earlier this week the Physics World editorial team played out a few scenarios regarding how CERN would deal with the possibility that data presented in Melbourne would tip 2011’s preliminary sighting of the Higgs to “discovery status”.

A big problem with an “official announcement” in Australia is that the country is a “non-member” of CERN – and therefore it seems very unlikely that the discovery would be unveiled in a country that hasn’t paid a significant chunk of the LHC’s price tag. Also, CERN’s PR guru James Gillies is on the record as saying that the Higgs announcement will be made in Geneva.

The only option, it seemed, was for CERN to organize a press conference in Geneva before or during the Melbourne conference to announce the discovery – and it looks like that’s what it has done.

But this introduces another problem. The press conference is scheduled for 09.00 Geneva time, presumably because this is 17.00 Melbourne time. However, this is 02.00 at Fermilab in Chicago – which is keen to emphasize the huge role that lab has played in the hunt for the Higgs. Oh, and 4 July is a national holiday in the US. Apparently, the Americans are not pleased!

Well, that’s enough speculation…I’ve got to get my ticket to Geneva booked!

Plasmons spotted in graphene

Two independent teams of physicists have been able to create and control “plasmons” – collective oscillations of conduction electrons – on the surface of graphene for the first time. Their experimental approach, dubbed “plasmon interferometry”, could be used to study a wide range of materials including superconductors and topological insulators. As the plasmons interact so strongly with light, graphene could be used to create new optical devices and even materials for invisibility cloaks.

Surface plasmons can interact with light at certain wavelengths to create surface plasmon polaritons (SPPs), which are light-like excitations that propagate along the surface. SPPs often have shorter wavelengths than the associated light and so tend to confine the light to a smaller region than is possible in free space. This concentrating effect is particularly useful for shrinking the size of optical circuits and can also be exploited in “transformation optics” devices such as superlenses and invisibility cloaks. Structures that support SPPs have also been used as chemical sensors because they tend to enhance interactions between light and molecules.
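For a flat interface between a metal and a dielectric, the textbook dispersion relation for an SPP – a standard result, not one quoted in the papers described here – is

\[ k_{\mathrm{SPP}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}} \]

where ε_m and ε_d are the permittivities of the metal and the dielectric. Below the surface-plasmon resonance the factor under the square root exceeds ε_d, so k_SPP is larger – and the SPP wavelength correspondingly shorter – than for light of the same frequency travelling freely in the dielectric.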

Concentrating energy

Graphene – a honeycomb lattice of carbon just one atom thick – is expected to have plasmonic properties that are somewhat different from those of metals, where plasmons have been studied in detail. This is because the conduction electrons in graphene behave like massless “Dirac fermions”, moving through the material at a constant speed – roughly 300 times slower than light – regardless of their energy. Graphene plasmons therefore concentrate electromagnetic energy into a much smaller region, which could be useful to researchers developing plasmon-based technologies. Because of these differences, however, conventional techniques for using infrared light to create and study plasmons do not work with graphene.

The two teams of physicists have used the sharp tip of an atomic force microscope (AFM) as a nanoantenna, which focuses a beam of infrared light into the graphene. One team includes Zhe Fei, Dmitri Basov and colleagues at the University of California, San Diego, along with collaborators in the US, Singapore and Germany. The other team is led by Frank Koppens of ICFO in Barcelona and includes scientists from CSIC in Madrid and the nanoGUNE research lab in San Sebastian.

Sharp tip

The AFM tip also interacts with plasmons that are already in the graphene and this affects how infrared light is reflected back from the tip to a detector. The teams can therefore detect plasmons in the material by scanning the tip across the surface of graphene and making measurements at a large number of locations. At each location, plasmons created at the tip radiate across the graphene like ripples on a pond until they are reflected at the edges. This creates an interference pattern of standing waves – an image of which is built up by scanning the AFM tip across the samples.

In both experiments the physicists found that the wavelengths of the plasmons are much shorter than those created by shining infrared light on metals – confirming the idea that graphene plasmons concentrate electromagnetic energy into a much smaller region than metal plasmons. The researchers were also able to tune the wavelength and amplitude of the graphene plasmons by changing an applied gate voltage, which could lead to the development of transistor-like devices in which light can be controlled electrically. While the properties of metal plasmons can be controlled by changing the size and shape of metallic nanostructures, such electrical tuning has not been demonstrated in metals.

“The gate voltage changes the density of mobile electrons in graphene,” explains Basov. “With more free electrons in the graphene, its electronic liquid becomes more ‘rigid’ and the wavelength of plasmonic oscillations is reduced, whereas their amplitude is enhanced.” One important consequence of this is that the gate voltage can be used to switch the plasmons on and off. As a result, Koppens believes that the research could lead to ultrafast optical switches. Other applications include better sensors and new quantum-information processing systems.

As for Basov, he is keen to develop plasmon interferometry as a new tool for studying a wide range of materials. “This new approach will be equally useful to probe the surface physics of many other exotic and interesting materials, among them topological insulators that conduct on the surface while remaining insulating in the bulk,” he says.

The research is described in two papers in Nature.

Should CERN scientists be encouraged to discuss ongoing LHC analyses with the outside world?

By James Dacey

With the International Conference on High Energy Physics (ICHEP) in Melbourne just around the corner, the rumour mill has gone into overdrive over whether CERN scientists will be presenting findings that confirm last year’s initial sighting of the long-sought Higgs boson.

ICHEP will start on 4 July and will include presentations by scientists working on the two major experiments at CERN’s Large Hadron Collider (LHC) that are searching for the Higgs particle: CMS and ATLAS. It is presumed that these researchers will be discussing new data that either support or destroy the bumps that appeared in the datasets of both CMS and ATLAS last December, both of which corresponded to a Higgs particle with a mass of roughly 125 GeV/c².

Speculation about the state of play in the LHC analysis has been going on via the usual suspects in the blogosphere. This includes the mathematical physicist Peter Woit, based at Columbia University in the US, who writes in this post about how he has heard from both CMS and ATLAS that they are seeing data that strengthen the bump from last year. Meanwhile, the independent physicist Philip Gibbs, based in the UK, ponders the statistical significance of the new results. He concedes that he does not know how much new data have been analysed but speculates that if both experiments have reached the gold-standard “5-sigma significance” then they will not be able to resist combining their results for the Melbourne conference. If they do indeed do this, then by the standards of particle physics they will effectively be announcing the Higgs discovery in Australia.

Interestingly, there has been little on the blogs from the LHC researchers themselves about these latest developments in the Higgs hunt. CMS physicist Tommaso Dorigo, who is not usually one to shy away from informed speculation, prefers to discuss predictions for the existence of the Higgs made in 2010. Another CMS researcher and blogger, Seth Zenz, actively tries to ward off speculation. He is critical of the New York Times for running a recent article with the headline “New data on elusive particle shrouded in secrecy”. Zenz says that there is nothing to hide and asks politely if we can all wait patiently for another couple of weeks until the ICHEP conference.

The extent to which this silence is CERN-sanctioned is unclear, but it does appear that LHC scientists have a (possibly unspoken) agreement not to discuss their analyses with the outside world. You could argue, of course, that there are very good reasons for this, not least because this is an incredibly important and busy time in their scientific careers that requires complete focus.

From a scientific-communication point of view, I reckon you could argue it both ways. On the one hand, it will be a lot “neater” to wait until the finding is beyond any doubt before announcing the discovery to great fanfare, embarking on the Higgs boson grand tour, scripting the Hollywood film, etc. On the other hand, by keeping your thoughts from the general public (and by this I mean anyone who is not involved with the LHC), you are depriving them of a fantastic insight into how science really works. As any researcher knows, the scientific process is messy: it is about carefully tweaking experiments and rigorously testing statistical data. So for CERN to remain quiet while it carefully choreographs a public discovery announcement could create a false impression of science as a series of “Eureka moments” occurring among a secret society of knowledgeable folk.

Let us know what you think in this week’s Facebook poll.

Should CERN scientists be encouraged to discuss ongoing LHC analyses with the outside world?

Yes, they should discuss the scientific process in the open
No, they should wait until conclusions are firmly established

Let us know by visiting our Facebook page. As always, please feel free to explain your choice by posting a comment on the poll.


In last week’s poll we asked you to place yourself in a scenario that could soon become a reality for a select few people if the Higgs boson is confirmed. We asked you to tell us what you think would be the best thing about winning a Nobel prize, by selecting one option from a list. People responded as follows:

The recognition that my field would receive (43%)
Freedom to do the science that I want to do (30%)
Securing a place in the history of science (17%)
I wouldn’t want to win (6%)
The fame and all that comes with it (3%)

The poll also attracted some interesting comments, including some alternative benefits that could come from winning the prize. Alan Saeed wrote: “I think the most rewarding part of a Nobel prize is the inspiration that it will infuse in the young minds of the country from where the recipients came”. Robert Ley, in the UK, made the good point that “It would be interesting to see if this poll returns the same result if the votes were made anonymously!” One commenter whose answer probably would not change with anonymity is Alan Timme, who wrote (possibly with his tongue in cheek) that the best thing about a Nobel prize would be “Rubbing it in the face of my doubters!”.

Thank you for all your comments and we look forward to hearing from you in this week’s poll.

Introducing Agent Higgs

By Tushna Commissariat

“Dodging the physicists of the world is no easy task! You need all your wits about you, and the steady hand of a secret agent to stay out of sight.”


If that sentence has left you wondering who or what is hiding from physicists, apart from the so-called elusive Higgs boson, you have hit the particle on the head! In a bid to keep the Higgs even further from physicists’ clutches, physics educator turned science-related games designer Andy Hall has developed a game for iOS devices known as “Agent Higgs”.

“The whole world is after the elusive Higgs particle. Accelerators the size of cities are being used to create truly awesome concentrations of energy in a vast number of collisions. We’ve detected all the other particles of the Standard Model. Muons, quarks, neutrinos, the whole gambit. But not the Higgs. How is it avoiding detection?” asks the tantalizing game description. Well, apparently it’s thanks to the help of gamers the world over.

In the game, you are encouraged to help the Higgs evade nosy particle detectors by hiding “him” behind a slew of other known subatomic particles such as electrons, neutrinos and muons. Hall, who set up TestTubeGames in 2011 in an attempt to make complex scientific topics fun and interesting, hopes that Agent Higgs will introduce people to particle physics in an entertaining way.

According to Hall, “The rules of the game are based on the fundamental forces. Use the weak force to get particles moving, or to make them decay. Use the electromagnetic force to make particles attract or repel. The physics introduced in this game even extends to matter–antimatter annihilation and neutrino oscillation.”

The game has more than 100 levels that slowly and steadily introduce particle-physics laws that gamers can use to block the Higgs from the detector’s view. Hall says that he designed the game to be challenging yet engaging.

The game has been released worldwide through the iTunes Store and is priced at $0.99 in the US.

Financial weaponry

Almost every physicist or mathematician under a certain age knows former colleagues or classmates who now work in finance. Older scientists, however, can probably remember a time before such “quants” existed. What happened to create this new and lucrative profession over the course of a few decades?

In his new book on the history of options-pricing theory, Pricing the Future, George Szpiro sheds light on the answer by explaining the theoretical developments that led to new markets in complex financial instruments. Known generally as derivatives, these instruments are contracts between two parties that obligate one party to pay the other an amount determined by the outcome of some future event. For example, the pay-off might depend on the closing price of a particular stock at some specified date; the stock is termed the “underlying” security, because the price of the “derived” security, or derivative, depends on it. This particular type of derivative is known as a stock option.

What is the proper value (price) of such an instrument? Since the pay-off depends on future events, any pricing model must include, at minimum, a probability distribution over future outcomes. But in fact the problem is more subtle, because the value of a given probability distribution of future payouts to a particular individual depends on that individual’s attitude towards risk. For example, a gamble that resulted in a $10,000 gain 60% of the time and an equal loss 40% of the time might be very attractive to a millionaire, who might be willing to pay up to $2000 to play each time. However, that same gamble would be unappealing to a cash-strapped student, since they cannot afford the loss. This simple example shows that it is more than the expected return that determines the value of a gamble – the risk preferences of the gambler enter as well.
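To make the numbers explicit (the arithmetic here is mine, not Szpiro’s), the expected return of that gamble is

\[ E = 0.6 \times (+\$10\,000) + 0.4 \times (-\$10\,000) = +\$2000 \]

which is the most a risk-neutral player – in practice, someone wealthy enough to shrug off the occasional loss – would pay per round; a risk-averse player values exactly the same distribution of outcomes at less.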

Szpiro explains how both problems – modelling the future behaviour of the underlying security, and taking into account risk preferences – were solved by a collection of mathematicians, physicists and economists over the last 100 years. The list of contributors is too long to do it justice here – for that I urge the reader to consult Szpiro’s well-written and entertaining book – but I cannot resist mentioning a few of them.

One early notable was the French mathematician Louis Bachelier, a student of Poincaré who used a random walk to model stock prices (and, incidentally, anticipated Einstein’s description of Brownian motion by five years). However, Bachelier’s model was deeply flawed, as he assumed the stock price itself exhibited a random walk. In fact, historical data on stock prices fit much better when it is the logarithm of the price that fluctuates in this way. The US high-energy physicist M F M Osborne was the first to note the log-normal distribution of stock-price movements, and he seems to have been unaware of Bachelier’s earlier work, as he failed to cite him in a 1959 paper. Today, much more sophisticated models involving autoregression, time-dependent volatility and other refinements are used to model stock movements.
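A minimal sketch of the distinction, in Python (the parameter values are illustrative, not drawn from the book):

import numpy as np

rng = np.random.default_rng(seed=1)

def bachelier_walk(s0, sigma, n_steps):
    # Bachelier's assumption: the price itself takes normally distributed steps,
    # so it can wander below zero.
    return s0 + np.cumsum(rng.normal(0.0, sigma, n_steps))

def log_normal_walk(s0, sigma, n_steps):
    # Osborne's observation: the logarithm of the price performs the random walk,
    # so moves are proportional to the price level and it stays positive.
    return s0 * np.exp(np.cumsum(rng.normal(0.0, sigma, n_steps)))

print(bachelier_walk(100.0, 2.0, 5000).min())    # can go negative
print(log_normal_walk(100.0, 0.02, 5000).min())  # always positive

The second model is the geometric (log-normal) random walk that underpins most later work, including the Black–Scholes analysis discussed below.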

The most famous result in derivatives theory is the Black–Scholes equation, which is duly referenced in Szpiro’s subtitle Physics, Finance and the 300-year Journey to the Black–Scholes Equation. This partial differential equation governs the behaviour of a certain class of options, and was discovered in 1973 by the economists Fischer Black (who trained in physics at Harvard University) and Myron Scholes. However, their results were probably already known as early as 1967 to Ed Thorp, a mathematician who became a pioneer in hedge-fund management. In a 2011 interview with the Journal of Investment Consulting, Thorp said that he wrote down “what later became known as the Black–Scholes model” and started using it to invest in 1969, but “realized in retrospect that there was no chance I was going to get any recognition for an options formula because I was not part of the economic academic community”. One lacuna of Szpiro’s otherwise well-researched book is that he neglects to properly credit Thorp for his early work on options pricing.
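For reference, the equation in its standard textbook form reads

\[ \frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^{2}S^{2}\,\frac{\partial^{2}V}{\partial S^{2}} + rS\,\frac{\partial V}{\partial S} - rV = 0 \]

where V(S, t) is the price of the option, S the price of the underlying security, σ its volatility and r the risk-free interest rate.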

The reader may still be wondering how the problem of risk preference was overcome. In a sense, it was finessed. The option price obtained from the Black–Scholes equation is essentially just the expected return: the return averaged over all possible outcomes, weighted by their probabilities according to the model of the underlying security’s dynamics – and, if necessary, discounted back in time using a deflator, which corrects for the “risk-free” return that could have been earned by investing the assets in, for example, US Treasury notes.

This result is justified by the assumption of perfect hedging, which states that a trader can at any moment (and without cost) construct a “riskless” portfolio composed of the option, the underlying security (such as a stock) and some other securities, typically cash. Such a portfolio is riskless because, over a small time step, changes in the value of the option and of the underlying security track each other; by holding the two in the correct ratio, one can make those changes cancel so that the portfolio’s value stays constant – a process known as “delta hedging”. Because the values of this riskless portfolio, the stock and the cash are all known at the initial time, one can deduce the value of the option independently of individual attitudes towards risk.
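A minimal sketch in Python of where this leads: the closed-form Black–Scholes price of a European call option, a standard textbook result shown here for illustration rather than taken from the book. The delta, N(d1), is the hedge ratio – the number of shares held against each option in the delta-hedged portfolio described above.

from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(s, k, t, r, sigma):
    # s: current stock price, k: strike, t: time to expiry in years,
    # r: risk-free rate, sigma: volatility of log-returns.
    n = NormalDist().cdf
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    price = s * n(d1) - k * exp(-r * t) * n(d2)  # discounted risk-neutral expectation
    delta = n(d1)                                # shares per option in the riskless portfolio
    return price, delta

print(black_scholes_call(s=100, k=100, t=1.0, r=0.05, sigma=0.2))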

The financial utility of the Black–Scholes equation is not in doubt, but experts are divided on the general societal utility of advances in derivatives. The conventional view, taught in business schools and in economics departments, is that financial innovation enables economic dynamism and allows markets to allocate resources more efficiently. The opposing perspective, held by billionaire investor Warren Buffett, is that derivatives are “financial weapons of mass destruction” – speculative instruments in a complex global casino that carry more risk than benefit. When Buffett’s company, Berkshire Hathaway, bought the reinsurance firm General Re in 1998, the latter had 23,000 derivative contracts. Buffett later explained his attitude towards these contracts to a US government panel: “I could have hired 15 of the smartest people, you know, math majors, PhDs. I could have given them carte blanche to devise any reporting system that would enable me to get my mind around what exposure I had, and it wouldn’t have worked…Can you imagine 23,000 contracts with 900 institutions all over the world, with probably 200 of them names I can’t pronounce?” Ultimately, Buffett decided to unwind the derivative deals, even though doing so incurred some $400m in losses for Berkshire.

The recent financial crises suggest that Buffett’s attitude may be the right one, and that the potential benefits of derivatives and other complex instruments come with dangerous systemic risks. In 2010, while testifying before the US Senate Banking Committee, the former chief of the US Federal Reserve, Paul Volcker, commented “I wish somebody would give me some shred of evidence linking financial innovation with a benefit to the economy.” In Pricing the Future, Szpiro gives us a colourful history of derivatives, but does little to address Volcker’s fundamental question.

Gigapixel camera pushes resolution limit

Researchers in the US have unveiled a 1 gigapixel camera, which has about five times as many pixels as today’s best professional digital cameras and nearly 100 times as many as a compact consumer camera. Moreover, the camera has a much smaller aperture than other gigapixel devices – meaning that, unlike other sensors, this latest camera pushes the fundamental resolution limit of optical devices. The team has also shown how the device could be used in several applications including surveillance, astronomy and environmental monitoring.

In the past, gigapixel images have been formed by stitching together 1000 or more megapixel-scale images, or by scanning a sensor across a large-format image. Acquiring “snapshot” gigapixel images is trickier, but a few options exist or are in development. One of these is the 3.2 gigapixel digital camera that will sit within the Large Synoptic Survey Telescope (LSST), an optical telescope currently under construction in northern Chile.

Aperture effects

In theory, the smallest details resolvable by a lens are limited by diffraction, and the larger the lens – that is, the greater the aperture – the smaller the details it is possible to identify. A 1 mm aperture should be able to resolve about 1 megapixel, while a 1 cm aperture should be able to resolve 100 megapixels. The LSST, which will have an aperture of several metres, should be able to resolve not just images of gigapixels, but of terapixels.
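As a rough illustration of that scaling (the wavelength and field of view below are assumptions chosen for the sketch, not figures from the study): the smallest angle a lens of diameter D can resolve is of order λ/D, so over a field of view θ it can distinguish on the order of (θD/λ)² picture elements.

def diffraction_limited_pixels(aperture_m, fov_rad, wavelength_m=550e-9):
    # Order-of-magnitude estimate: smallest resolvable angle ~ wavelength/aperture,
    # so the number of resolvable spots across the field is ~ fov/(wavelength/aperture).
    resolvable_angle = wavelength_m / aperture_m
    return (fov_rad / resolvable_angle) ** 2

fov = 0.55  # about 31 degrees, an assumed field of view
print(diffraction_limited_pixels(1e-3, fov) / 1e6)  # ~1 megapixel for a 1 mm aperture
print(diffraction_limited_pixels(1e-2, fov) / 1e6)  # ~100 megapixels for a 1 cm aperture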

In practice, however, large lenses struggle to reach their diffraction limit. One problem is that big lenses are more likely to suffer from aberrations, which smudge the focusing, particularly around the extremities. Lens designers get round this problem by introducing more lens elements and reducing the field of view. Nevertheless, these devices still generally fail to get close to the diffraction limit.

Pushing the diffraction limit

Now, David Brady of Duke University in North Carolina and colleagues claim to have created a high-resolution camera that approaches the diffraction limit. Known as AWARE-2, their camera has an aperture of just 1.6 cm yet offers a resolution of 1 gigapixel. For visible light, that is half way to the diffraction limit of 2 gigapixels.

AWARE-2 uses a “multiscale” design, where one spherical objective lens projects a coarse image onto a sphere. On this sphere, an array of 98 microcameras, each with a 14-megapixel sensor, refocuses and samples the image. “The design approach is directly analogous to the development of supercomputers using arrays of microprocessors,” says Brady. “We build supercameras using arrays of microcameras.”
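A quick check of the pixel budget (the arithmetic is mine, and the suggestion that the difference is taken up by overlap between neighbouring microcamera fields is an assumption, although some overlap is needed for stitching): 98 microcameras at 14 megapixels each give

\[ 98 \times 14\ \mathrm{MP} \approx 1.4\ \mathrm{GP} \]

of raw sensor pixels – comfortably enough for a composite image of about 1 gigapixel.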

Drop in price

The Brady group’s approach works because it replaces one big camera with a composite of tiny cameras, which are less prone to aberrations. It also means a cut in cost: the mobile-phone market has brought the cost of sensors down to about $1 per megapixel, which suggests cameras of the AWARE-2 design could one day cost just $1000 per gigapixel. Brady thinks it should be possible to manufacture cameras for less than $100,000 per gigapixel by 2013. “We hope that our systems will reach $10,000 per camera within 5 to 10 years,” he says.

However, not everyone is impressed. Engineer David Pollock of the University of Alabama in Huntsville believes the AWARE-2 camera suffers from “a lack of focal length”. Lenses with shorter focal lengths tend to have lower “f-numbers” – that is, wider relative apertures – resulting in less thermal noise. On the other hand, a shorter focal length means less magnification: the subjects recorded on the camera sensor appear very distant.

Minimum focal length

This is true of the AWARE-2 camera, which offers a very broad field of view of 120°. But Brady does not consider this feature a disadvantage. “In my experience, lens designers universally regard the ability to design to the minimum focal length possible as a positive thing,” he says.

“I would say that this debate gets to the heart of the innovator’s dilemma of the battle between good and excellent,” he adds. “Current high-pixel-count imagers for aerial photography and astronomy are very good systems and their designers are naturally hesitant to believe that a better approach is possible…It’s a fun battle to fight.”

The team has also shown how the camera could be used in a number of different applications. In one example, the camera acquired an image of a lake in North Carolina that was then analysed to determine how many swans were on the lake. In another example, details such as car licence-plate numbers and individual faces were picked out of a surveillance image.

The work is described in Nature.

Silicene pops out of the plane

Researchers in Japan say that they have made 2D honeycomb crystals of silicon that resemble the carbon-based material graphene. This is the second potential sighting of the material dubbed “silicene”; the other was reported in April by an independent group in Europe. The Japanese research suggests it may be relatively easy to alter the structure of silicene by changing the substrate on which it is grown – which could allow different versions of silicene to be produced with a range of useful electronic properties. However, not all scientists agree that this latest material is actually silicene.

Graphene is a honeycomb lattice of carbon just one atom thick, and since its discovery in 2004 the material has proved to have a myriad of interesting and potentially useful properties. Silicon, as well as being the material of choice in the electronics industry, lies directly below carbon in the periodic table – which suggests that if silicon atoms were deposited in a layer on an appropriate surface, they could arrange themselves into a honeycomb lattice to make silicene, a material that should have properties similar to those of graphene.

Silicon on silver

This was first done by a team of researchers in Italy, Germany and France that included Paola de Padova of the Consiglio Nazionale delle Ricerche-ISM in Rome. De Padova and colleagues deposited silicon on a silver crystal and found the film to have the structural and electronic properties expected of silicene.

Now, a team in Japan led by Yukiko Yamada-Takamura of the Japan Advanced Institute of Science and Technology in Ishikawa says that it has created a modified form of silicene on a substrate of zirconium diboride. The crucial difference from the previous work is that while silver has a lattice constant very similar to that expected of silicene, the lattice constant of zirconium diboride is quite different. As a result, silicene grown on silver adopts a flat, graphene-like atomic configuration and has properties very similar to those of the carbon-based material. Silicene on zirconium diboride, on the other hand, has a distorted lattice structure.

It is this ability to distort the silicene lattice that appeals to Yamada-Takamura and colleagues, and it contrasts with graphene, whose structure is very stable. That stability is a drawback because the ability to alter graphene’s structure could give researchers a way of creating an electronic band gap in the material – a development that would allow graphene to be used in a much wider range of electronic applications.

Flexible on an atomic scale

The team describes silicene as being very “flexible” and says that it should be possible to manipulate its structure in ways that are virtually impossible with graphene. “You can roll a sheet of graphene or something like that,” explains Yamada-Takamura, “but what we mean by flexible is that silicene can be atomistically flexible, so the atoms can be displaced out of the plane.” This buckling out of the plane leads to very different electronic properties, and by depositing silicon on a variety of different substrates the researchers believe it may be possible to produce a whole family of silicenes offering a range of different electronic properties.

De Padova is impressed by this latest work, but is cautious about describing the new material as silicene. “I think the paper is interesting because there is, in principle, evidence for the growth of silicene on, for example, zirconium diboride and not only on silver.” She is reluctant, however, to endorse the researchers’ conclusion that their material is a modified form of silicene. She also has doubts about whether the work shows that the properties of silicene can be modified by epitaxial strain, suggesting instead that the substance produced on the surface of the zirconium diboride may not be silicene at all. Yamada-Takamura’s response: “That depends how you define silicene.”

The research is published in Physical Review Letters.

UK should lead on open-access publishing, says report

The UK should lead the way in transforming scientific publishing from a “reader pays” model to an “author pays” model. That is the main conclusion of a 140-page report released today by an independent working group of academics, publishers, librarians and representatives from learned societies. Led by the British sociologist Janet Finch, the 15-strong working group includes Steven Hall, managing director of IOP Publishing, which publishes physicsworld.com.

Commissioned by the UK government, the report notes that the Internet has had a profound impact on how scientists access peer-reviewed research papers, with nearly all articles now being available online. However, many journals are subscription based, which means that they can only be accessed by researchers working at institutions that have taken out a subscription or those who are willing to pay a one-off fee to access individual articles on a pay-per-view basis.

Some researchers therefore feel that subscription-based journals are preventing the results of government-funded research from being more widely disseminated, arguing that it should be freely accessible in the public domain – a view that the report describes as both “compelling” and “fundamentally unanswerable”. Proponents of this “open access” model say it would not only benefit researchers in smaller universities and poorer nations that cannot afford subscriptions, but also help inventors and small businesses by giving non-academics access to scientific and technical knowledge.

The challenge in making the transition to full open-access publishing will be to decide who should pay the not insubstantial cost of running peer-review systems, publishing the papers and maintaining and upgrading the complex online systems that underpin most modern journals. The Finch group has come down firmly in support of the “author pays” model, whereby scientists pay an article processing charge (APC) before a paper is published. This model is already used in part by a number of scientific publishers, including IOP Publishing, which has run New Journal of Physics in this way since it was launched in 1998 with the German Physical Society.

The report calls on UK research councils – which provide the bulk of public research funding – to “establish more effective and flexible arrangements to meet the cost of publishing in open-access and hybrid journals”. Based on an APC of about £1750, the group believes that a move to open access would cost the UK an additional £38m per year. The report also says that the UK government must spend an extra £10m per year to extend its current licences on reader-pays journals to provide wider access to this material in the higher-education and health sectors, with publishers also providing “walk in” access at public libraries at no charge.

A further £3–5m per year, the report argues, would need to be spent on open repositories of scientific reports that have not been subject to peer review. Such repositories, it suggests, could contain work done at a university or institute – or done UK-wide in a specific discipline. The report also cites a one-off transition cost of £5m, putting the total cost of the transition to full open access at about £50–60m per year. This, it says, is “modest” compared with the £10.4bn that the government spends every year on research and development in the UK.
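Summing the report’s own figures (the arithmetic is mine, not a table from the report):

\[ £38\mathrm{m} + £10\mathrm{m} + £(3\text{–}5)\mathrm{m} \approx £51\text{–}53\mathrm{m}\ \text{per year} \]

which, together with the £5m one-off transition cost, sits within the report’s £50–60m estimate – roughly 0.5% of the £10.4bn that the government spends on research and development each year.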

One challenge facing the UK if it leads the move to open access is how to apportion APCs when research is published by an international collaboration that includes one or more UK-based scientists. According to the report, about 46% of papers met this criterion in 2010 and a clear policy would have to be put in place to decide who pays for what – and what to do if foreign funding agencies refuse to pay their share.

Response and reaction

David Willetts, the UK’s minister for universities and science, has welcomed the report, saying that it will shape the government’s forthcoming policy on open-access journals. “Opening up access to publicly funded research findings is a key commitment for this government,” he says. “Proposed initiatives such as providing access to findings for small companies and making peer-reviewed journals available free of charge at public libraries would foster innovation, drive growth and open up a new area of academic discovery.”

The response from the publishing industry has generally been positive. David Hoole, marketing director of Nature Publishing Group, which publishes the Nature suite of journals, says that the company “welcomes the balanced approach of the Finch report, and its recognition of the need for a mixed economy, of licensing subscription content, self-archiving and open-access publication”. However, Hoole warns that the small number of papers published in highly selective journals such as Nature will require APCs higher than those acknowledged in the report.

Timothy Gowers, a mathematician at the University of Cambridge who is involved in a boycott of the commercial publisher Elsevier, told physicsworld.com that while he welcomes the general direction suggested by the report, he does not think it sufficiently acknowledges the “very large” profits that he says publishers make. “The report recommends moving to a more open system, which I strongly support,” says Gowers. “But I would have liked to have seen a bolder report that also recommended taking steps to move to a cheaper system that covers the costs of publishers but significantly reduces their profits.”

Any move to open access will also affect UK-based learned societies such as the Royal Society, the Institute of Physics and the Royal Society of Chemistry, all of which publish journals on a not-for-profit basis. “The report clearly recognizes the challenge that the transition poses to learned societies,” says Peter Knight, president of the Institute of Physics. “With more than two-thirds of the Institute’s charitable projects funded by the gift-aided profits from our publishing company, IOP Publishing, it’s crucial to us that the shift is managed carefully.”
