
Russia mulls over huge 60 m telescope

The rector of Moscow State University, Viktor Sadovnichy, has unveiled plans for a massive 60 m optical telescope on the Canary Islands. If built, the telescope would be the world’s largest and would hunt for Earth-like planets around other stars, says Sadovnichy. But the plans have divided researchers, with some Russian astronomers saying the country should not build its own facility but join the European Southern Observatory (ESO) instead.

Massive scope

According to Vladimir Lipunov, director of the Space Monitoring Laboratory at Moscow State University, the telescope would be built by Russia, Spain and possibly Switzerland and Germany, with Russia getting a quarter of the observing time at the facility. The so-far-unnamed telescope would dwarf all existing – and currently planned – facilities, including the 39 m European Extremely Large Telescope (E-ELT) and 25 m Giant Magellan Telescope, both to be based in Chile, as well as the Thirty Meter Telescope to be built in Hawaii.

“Astronomers are always happy to have a new instrument and will always find a use for it – as there are lots of objects out there to look at,” says Sergei Popov of the Sternberg Astronomical Institute in Moscow. However, Popov questions whether it is the best option for Russian astronomy because the country has been debating whether to join ESO since 2006. As a member state, its astronomers would gain access to the ESO’s various telescopes, including the Very Large Telescope (VLT) in Chile and the next-generation E-ELT.

All about the money

Membership, however, costs money – and Russia was told it would have to fork out €130m to join ESO, plus an annual subscription of €13m. While the Russian government is deciding whether to allocate the necessary funds, some researchers argue that the money should instead be spent on a facility partly owned by Russia. “The idea is to leave the money at home and use it to build [the 60 m telescope] on the Canary Islands in one of Russia’s factories,” says Lipunov.

Yuri Balega, director of the Special Astrophysical Observatory of the Russian Academy of Sciences, questions the timing of the proposal to build a 60 m telescope. “Such an instrument will cost at least €2–3bn to build and today we do not have the necessary technologies, engineering power and money to start such a project,” says Balega. “Even if we had all that in Russia, such a fantastic telescope would only be built 20–25 years from now.” He feels that Russian astronomers, who currently lag behind the rest of the world, would do best by gaining access to ESO’s world-class instruments.

3D printing food, 'Top 10' lists, teenage nuclear physicists and more


By Tushna Commissariat

Over the past few years, 3D printing has captured the imagination and interest of scientists and the public alike. Now, a €3 million EU-funded project known as “PERFORMANCE – PERsonalised FOod using Rapid MAnufacturing for the Nutrition of elderly ConsumErs” is adapting 3D printing technology to food in order to create easily digestible sustenance that is not only nutritious but also looks and tastes like the real thing. The proposed printer would work like its conventional inkjet counterpart – except the cartridges would be filled with liquefied food instead of ink! While that may not sound like the most appetising way of eating your five-a-day, it might come as a relief for those who suffer from a condition known as “dysphagia” that makes swallowing food difficult. You can read more about the proposed scheme on the EU’s Horizon magazine website and take a look at the video above.


First discovery of double star that brightens during eclipse

For the first time, astronomers have seen a double star brighten rather than fade when one star passes in front of its companion. Predicted decades ago, the phenomenon arises from gravitational microlensing as the great surface gravity of a white-dwarf star magnifies its partner’s light. The discovery by US researchers raises the hope that we will someday catch a neutron star or black hole doing the same thing, which would lend new insight into these extreme objects.

Many star systems are double, having two stars that orbit each other. In some cases, the orbit aligns edge-on to our line of sight, so that one star periodically eclipses the other and dims the light that we see. Astronomers have known of these eclipsing binaries for centuries. The best example is Algol – Arabic for “the ghoul” – which medieval astrologers considered to be the most dangerous star in the sky, probably because they knew that its light flickers. In 1782 British astronomer Edward Pigott correctly explained why Algol dims.

It was not until 1973 that Swiss astronomer André Maeder predicted that some binaries should exhibit the opposite phenomenon. According to both Newton’s theory of gravity and Einstein’s general theory of relativity, mass bends light. Therefore, Maeder reasoned, if a small but massive star eclipses its companion, the small star’s gravity should amplify the companion’s light so strongly that the magnification overwhelms the dimming caused by the eclipse itself.

Discovery at last

Now, four decades later, a pair of astronomers has discovered the first example, 2600 light-years away. “We found it by accident,” says Ethan Kruse, a second-year graduate student at the University of Washington in Seattle. “My main research is looking for new planets that other people have missed.”

In early December 2013 Kruse was examining KOI 3278, a star that NASA’s Kepler spacecraft had found to be fading every 88.18 days. This suggested that a planet circled the star with that periodicity and dimmed the light as it passed in front.

But Kruse noticed a strange feature. “The first thing I thought was that something had gone horribly wrong,” he says. “Instead of finding a new planet, I found what looked to be the same signal as a planet transiting its star except upside down, where the star got brighter instead of dimmer.” Each brightening was subtle, just 0.1%, and lasted five hours. The brightenings repeated every 88.18 days, out of phase with the dimmings.

In fact, KOI 3278 has no known planet. Instead, it consists of a star like the Sun coupled with a white dwarf, a small dense star. The system dims when the white dwarf passes behind the Sun-like star and brightens when the white dwarf passes in front, magnifying the light of its mate.
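The size of the brightening can be estimated from textbook lensing physics. The back-of-the-envelope sketch below computes the Einstein radius of the white dwarf at the orbital separation implied by the 88.18-day period, then the peak magnification in the small-lens approximation. The assumed masses (a 0.6-solar-mass white dwarf and a Sun-like companion) are illustrative guesses, not values from the discovery paper:

```python
import math

G, c = 6.674e-11, 2.998e8           # SI units
M_sun, R_sun = 1.989e30, 6.957e8    # solar mass (kg) and radius (m)

M_wd, M_star = 0.6 * M_sun, 1.0 * M_sun   # assumed masses (illustrative)
P = 88.18 * 86400.0                        # orbital period in seconds

# Kepler's third law gives the orbital separation
a = (G * (M_wd + M_star) * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Einstein radius of the white-dwarf lens at that separation
R_E = math.sqrt(4 * G * M_wd * a / c**2)

# For a lens much smaller than the source, the peak magnification
# during transit is approximately 1 + 2*(R_E/R_star)^2
boost = 2 * (R_E / R_sun) ** 2
print(f"separation ~ {a / 1.496e11:.2f} AU, brightening ~ {boost:.2%}")
```

With these assumptions the estimate comes out at roughly 0.1%, in line with the subtle brightening Kruse observed; a real analysis must also account for the light the white dwarf itself blocks and emits.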

More exotic objects

“This is a very nice surprise,” says Maeder, who is now 72 years old. “I must say, I more or less forgot about this effect, and as a matter of fact, I did not expect it would be found in my lifetime.”

Edge-on binaries containing more exotic objects – neutron stars and black holes – should also display periodic brightenings. “That’s what I think is most interesting,” Kruse says. “There are not a lot of people looking for such signals, and they might find them in the Kepler data.” Such systems would yield new information on the masses of neutron stars and black holes.

“It’s supercool!” says B Scott Gaudi, an astronomer at Ohio State University in Columbus. “It just shows the power of Kepler. You open up parameter space and you’re guaranteed to find really interesting stuff.” Last year, other astronomers reported a Kepler system with a white dwarf, but the gravitational brightening was too small to erase the eclipse.

Kruse and his university colleague Eric Agol have published their discovery online today in Science.

Edible lasers and death rays

“Optics is light work” was one of Arthur Schawlow’s favourite slogans, and it didn’t just appear on the T-shirt he wore while giving lectures at Stanford University. Schawlow, who shared the 1981 Nobel Prize for Physics for developing laser spectroscopy, was a playful physicist who concocted novel experiments to amuse as well as educate. So it’s probably inevitable that there’s a Schawlow story in How the Ray Gun Got Its Zap, Stephen Wilk’s collection of “odd excursions into optics”.

The Schawlow story Wilk picks is a gem, though. In the decade after the first laser was demonstrated in 1960, so many different materials were made into lasers that Schawlow concluded “anything will lase if you hit it hard enough”. To prove this, he and Theodor Hänsch (then a young postdoc, later a Nobel laureate himself) decided to fire pulses of light at brightly coloured Jell-O brand gelatine desserts, in the hope of making an “edible laser”. None of the 12 flavours they bought in the supermarket would lase, but they eventually succeeded by adding a dye, sodium fluorescein, to unflavoured gelatine. Schawlow called the result “almost non-toxic”, but declined to eat the experiment afterwards.

After recounting this tale, Wilk, an optical engineer, goes in search of the truly edible “gin and tonic laser” he heard about as a doctoral student. Eventually, he traces the story to an experiment at Eastman Kodak labs, where two researchers found that very fast flashlamps could indeed excite blue laser pulses from an unidentified brand of tonic water. It didn’t make a good laser, Wilk observes, but it does make a fun story.

Wilk’s interest in such fun stories makes his book an entertaining tour of history’s optical oddities. The most intriguing mystery of ancient optics concerns Archimedes: did the great proto-physicist really mastermind the burning of a fleet of Roman ships by focusing sunlight onto them with polished shields? The surviving written evidence is scanty, and scientists and engineers have tried to resolve the question for centuries. A long list of experiments shows disconcertingly wide variations. The results ranged from modest heating to conflagration, often in keeping with what the experimenters hoped to find, so the question may be among the great unanswerables. All history has to say is that, whether or not the Greeks burned some of the Roman ships, the Romans ultimately won.

Many of the book’s essays answer odd questions raised by curious minds, such as why we think the Sun is yellow despite the fact that sunlight is, by definition, white light. That’s a puzzler to ponder, and Wilk points out flaws in three common explanations before concluding that the Sun looks yellow to the eye because the atmosphere scatters blue light across the sky. But that’s not quite the whole story, he adds, because when the Sun is high in the sky, too little blue light is scattered to make the Sun look yellow. The impression of a yellow Sun comes when it is low enough in the sky to glance at briefly, but not so low that it looks orange or red to the eye – colours that we know to be wrong.

Another, similar, essay concerns the number of colours in a rainbow. As a child, Wilk was told there were seven, but as he writes, “My old Crayola crayon box held 64 colours.” To resolve this paradox, he digs back to – what else? – a three-volume 1858 treatise on the Greek poet Homer. This work devoted a full 42 pages to Homer’s use of colour, and thereby launched an ongoing debate over how the ancients described it. The division of the spectrum into seven colours is sometimes linked to an attempt to mirror the seven notes of the musical scale, but Wilk traces it instead to a decision Isaac Newton made while writing his definitive treatise Opticks. In some places Newton listed only the colours red, yellow, green, blue and violet, but in others he added “orange” and “indigo”. Orange was soon ensconced as a definitive colour, but the distinction between blue and indigo is so subtle that indigo is often lost – except when an “i” is needed to make the colour mnemonic “Roy G Biv” pronounceable.

The book’s title comes from a wonderful chapter in which Wilk traces the history of fictional “death rays” back more than 200 years, to an 1809 novel in which the author Washington Irving – best known for The Legend of Sleepy Hollow – armed his interplanetary invaders with beams of concentrated sunlight. The “heat rays” of H G Wells’ better-known Martian invaders did not arrive until 1896. “Disintegrator rays” soon followed, and death rays of various types became standards of pulp-era science fiction, comics and films. In most cases, these rays killed on contact, leaving dead bodies but not the blood and guts of the deadly mechanized warfare that began with the First World War. Quoting the science-fiction critic Peter Nichols, Wilk notes that their invention “may have resulted from a certain squeamishness, since it allows for a maximum of destruction with a minimum of bleeding pieces to sweep up afterwards”. As a card-carrying laser and SF geek, I couldn’t ask for more.

How the Ray Gun Got Its Zap is not a big-picture, big-issue or deep-thought book. It’s an old-fashioned cabinet of wonders in book form, offered in the spirit of intellectual fun. It sent me down to the kitchen to see if my violet laser pointer would stimulate bright fluorescence from any of the leftover Christmas food colouring. The only glimmer of hope was from red cinnamon nonpareils, but I may put some coloured Jell-O on my grocery list.

  • 2013 Oxford University Press £18.99/$34.95 hb 256pp

Setting the standard for Brazilian materials science

By Michael Banks in Rio de Janeiro, Brazil

As part of my road-trip round Brazil, I visited Inmetro – the Brazilian standards lab. Located around 50 km north of Rio de Janeiro, Inmetro certainly has the feeling of being well away from the hustle and bustle of one of Brazil’s major cities.

The first thing that you notice when you enter Inmetro’s vast campus is that the buildings have a unique architecture (see above). The bunker-like structures are built in such a way that they are protected from the Sun, which can deliver 40 °C temperatures in summer. (Thankfully, I am here in autumn, but the temperature is still a warm 30 °C.)

Inmetro’s campus was built about 40 years ago with the help of the PTB – the German standards lab. The buildings were also specially designed so that the labs are vibrationally separate from the offices. So, any wild jumping around at your desk won’t affect the sensitive measurements in the lab.

Inmetro has traditionally carried out all kinds of metrology, covering chemical, mechanical and optical standards. But in 2000 bosses at the lab decided to focus more on research rather than just doing precise measurements, so they bought new instruments and hired scientists and students to start up research groups.

Equipment at Inmetro, Brazil's standards lab

The materials section at Inmetro was one of those departments born in 2000. The government ploughed money into the centre, allowing researchers to buy a host of equipment for materials science, including scanning electron microscopes, X-ray diffractometers and transmission electron microscopes. The researchers now use these to study a range of materials from nanoparticles and graphene to ball joints that are used in hip replacements.

“Inmetro has to do everything and in the best possible way,” says Marco Cremona from the centre, who is also based at PUC-Rio. “It is a unique institution in Brazil where you can find all of this equipment under one roof.”

Yet all these microscopes and spectrometers are not just for Inmetro researchers. The centre runs a proposals programme where it offers time on its machines to external users. Researchers at other institutions write proposals and then either send their samples to Inmetro or go there themselves to carry out the measurements.

But the real benefit of the programme is that the central government does not have to put the best microscopes in every laboratory but can locate them centrally at Inmetro. Not only that, it also fosters collaboration by allowing people from different labs to work together.

You can read more about research in the country in the latest Physics World Special Report: Brazil.

Boosting innovation in Brazil

By Michael Banks in Belo Horizonte, Brazil

I’m writing this while on a week-long road trip across Brazil to gather information for a new report that IOP Publishing, which publishes Physics World, is producing for the Brazilian Materials Research Society.

While on my trip, I have visited a number of institutes that focus on materials research. But I also had the chance to talk a bit of policy when visiting FAPEMIG – the main state funder for research in Minas Gerais, which is the second most populous state in Brazil.

In Belo Horizonte, which is the capital of Minas Gerais, I met Evaldo Ferreira Vilela, who is director of science, technology and innovation at FAPEMIG.

FAPEMIG is about 30 years old and it mainly awards grants to researchers plus scholarships for students. Some of these grants are funded 50/50 together with the federal government.

Research for the region was boosted in 2007 when the state government of Minas Gerais began to pay 1% of its state income to FAPEMIG. “The state realized that it had to put money into research and to acknowledge the benefits that science brings,” says Vilela. “But we now have to deliver the results of this investment.”

Vilela has only been director of science at FAPEMIG for three months, but he is already preparing the agency for the next 10 years. “We have to do more than just publish papers,” he adds. “We have to go further.”

Indeed, part of that new focus is a programme FAPEMIG began last year to boost innovation in the region. The initiative aims to foster start-up companies at universities by providing them with facilities as well as $25,000 in funding to help get them started.

In the programme’s first call it received 1500 proposals from all areas of science. Indeed, you don’t even have to be based in Minas Gerais to apply, but if you win a grant, then you will have to develop some link with the state. In June, 40 of these proposals were selected, including eight from abroad.

The programme will also have a second round, in which the 10 most successful projects will receive another $25,000 as well as a mentor who could also provide more cash. The other 30 will not receive any further support from FAPEMIG, but they can seek it elsewhere if they need it. “This programme is to renew the economy and also to produce entrepreneurs,” says Vilela.

It is not the only FAPEMIG programme helping to nurture future entrepreneurs. The agency also runs the Programme of Incentives to Innovation (Pii), in which researchers apply to FAPEMIG, outlining how their work could have practical applications. FAPEMIG then analyses the proposals and discusses them with companies. If a firm feels that a proposal is interesting, the agency helps the researchers put together a business plan, which could then secure investment from the company. In addition, successful applicants also get an all-expenses-paid trip to MIT or Oxford University.

You can read more about research in the country in Physics World Special Report: Brazil, which is now out.

Nuclear waste heads into the virtual realm

A new computer-based tool designed to help find the best sites for nuclear-waste repositories and to win public confidence in them has been developed by researchers in Germany. The €3m VIRTUS virtual underground laboratory will allow scientists to explore the behaviour of highly radioactive materials inside specific rock formations, with the aim of making it cheaper to develop and build repositories. Critics, however, argue that the new software will do little to improve safety and might disrupt real laboratory studies of nuclear waste.

Underground disposal

Many scientists believe that the best way to dispose of spent nuclear fuel and other long-lived radioactive materials is to bury them hundreds of metres underground, with Sweden and Finland having both selected sites for national waste repositories next to existing nuclear power stations. France also plans to open its own facility in 2025, and, like Sweden, has built a major underground lab to test the geology and technologies to be used at the site.

However, there are severe technical and societal problems associated with repositories, not least that the waste they contain will remain harmful for hundreds of thousands of years. The development of a national repository in Germany, for example, has been mired in controversy. A formal site-selection process has still to be set up, even though exploratory work at the Gorleben salt mine in the north of the country began as far back as the 1970s. The nearby Asse mine, meanwhile, was set up in the 1960s as a research facility but was decommissioned in 1997 after a brine leak threatened to flood the complex and cause it to collapse.

Developed by the Fraunhofer Institute for Factory Operation and Automation (IFF) in Magdeburg, together with Germany’s nuclear-safety organization (GRS), the Federal Institute for Geosciences and Natural Resources and the waste-repository company DBE Technology, VIRTUS will attempt to partially address this issue. The software enables detailed models of specific rock formations or mine structures to be created and then fed into a simulation to calculate how a repository would evolve physically and chemically over time. The results of these calculations can then be visualized graphically, and it is planned that members of the public will in future be able to see those graphics inside a 360° projection system.

Modelling heat

Klaus Wieczorek from the GRS, who is head of VIRTUS, says that the software could, among other things, model the heat emitted by the radioactive decays taking place inside canisters, and the resulting temperature-induced stress that would build up in surrounding rocks. It could be used to better design laboratory experiments, he explains, and to simulate the performance of potential repositories – ensuring that safety criteria, such as maximum-allowed temperatures, are met and that the position of tunnels can be optimized to minimize mining costs.
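The kind of thermal question such software tackles can be illustrated with a textbook estimate. The sketch below is not VIRTUS’s model; it is simple steady-state radial heat conduction around a long cylindrical canister, with an assumed heat load of 300 W per metre of canister and an approximate thermal conductivity for rock salt, to gauge the temperature rise at the canister wall relative to the distant rock:

```python
import math

q_per_m = 300.0     # decay-heat output per metre of canister (W/m, assumed)
k_rock = 5.0        # thermal conductivity of rock salt (W/(m*K), approximate)
r_canister = 0.5    # canister radius (m, assumed)
r_far = 20.0        # radius at which the rock stays at ambient temperature (m)

# Steady-state conduction through a cylindrical shell of rock:
# dT = q' * ln(r_far / r_canister) / (2 * pi * k)
dT = q_per_m * math.log(r_far / r_canister) / (2 * math.pi * k_rock)
print(f"temperature rise at canister wall ~ {dT:.0f} K")
```

This gives a rise of a few tens of kelvin. A real simulation must also handle the time-dependent decay of the heat output and the coupled chemical and mechanical response of the rock, which is where the modelling difficulty lies.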

A prototype VIRTUS system was supposed to have been completed this spring, but unexpected difficulties in matching up the geological models with those simulating the behaviour of nuclear waste have now pushed that deadline back. “We are continually improving the prototype and we will present it to funding institutions in October this year,” says Wieczorek. In fact, he admits that it might be “another two or three years” before it is ready for public use.

Emerging uncertainties

Johan Swahn of Swedish nuclear-repository watchdog MKG believes that the new software has little or nothing to contribute to research on radioactive-waste disposal. He says that experiments carried out in underground laboratories continue to provide “a lot of surprises”. For example, new uncertainties have emerged regarding how copper canisters designed to hold Sweden’s spent fuel behave in low-oxygen environments, and as a result the licence application for the proposed national repository may not be approved. “Creating a generic safety case with a nice visualization will in my opinion only enhance a dangerous belief in modelling, creating a false impression that we have understood more than we actually have,” he says.

Could pulsars explain the positron excess?

Invoking dark matter as a means to explain the excess of positrons observed last year by the Alpha Magnetic Spectrometer (AMS-02) experiment might not be necessary, according to physicists in Europe. Instead, they say, the entire dataset can be explained purely in terms of known astrophysical processes such as pulsars and cosmic rays.

Dark explanations?

Last April, the AMS-02 experiment, which is mounted on the International Space Station, reported a spike in the fraction of positrons – the antimatter counterparts of electrons – coming from beyond the solar system. Conventional wisdom says that the dominant process for creating positrons in the Milky Way is high-energy protons scattering off gas in the galactic disc, a scenario in which the number of positrons should drop off at higher energies. However, the AMS-02 result seemed to show a rise in the number of positrons as the energy increased above 10 GeV. This backed up a similar high-energy positron excess uncovered by the PAMELA and Fermi satellites in 2008 and 2011, respectively.

Some saw the confirmed excess as possible proof for the existence of dark matter through the annihilation of “weakly interacting massive particles” or WIMPs – the leading candidate for dark matter – that would produce a swathe of positrons in that energy range. Others have cast doubt on whether the perceived excess exists at all. Now, a new model created by a team led by Mattia Di Mauro at the University of Turin, Italy, can recreate the AMS-02 data, including the spike, without resorting to dark matter, although that does not necessarily rule it out as an option. “It is still possible to vary the astrophysical parameters to include a dark-matter contribution and be compatible with observations,” Di Mauro told physicsworld.com. However, when it comes to explaining the unexpected peak, the team points the finger at pulsars instead.

Cascading positrons

A pulsar is a rapidly rotating neutron star with an intense magnetic field. Such a magnetic field can in turn generate an electric field that is capable of ripping charged particles from the surface of the neutron star. The charged particles are then accelerated, producing a cascade of particle/antiparticle pairs, including electrons and positrons. This forms part of an out-flowing “pulsar wind”, which injects the particles into the interstellar medium. Di Mauro and colleagues were able to model the electron/positron flux emitted by local pulsars listed in the Australia Telescope National Facility (ATNF) Pulsar Catalogue.

They then examined the way these particles propagate through the galaxy, losing energy through interactions with the Milky Way’s radiation field. That allowed them to calculate the number and energy of positrons arriving at Earth. “The high-energy part of the AMS-02 positron flux finds a remarkable solution in terms of pulsars present in the ATNF catalogue,” says Di Mauro. The team also recreated the low-energy region of the AMS-02 data by modelling the flux of positrons created by other sources, such as supernova remnants, and by cosmic rays interacting with interstellar material.
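The logic of combining components can be seen in a toy model. The sketch below adds a steeply falling “secondary” positron component to a harder pulsar-like component with a high-energy cutoff; all spectral indices and normalizations are invented for illustration and are not fitted to AMS-02 data:

```python
import numpy as np

E = np.logspace(0, 2, 200)           # positron energy, 1-100 GeV

# Toy power-law fluxes in arbitrary units (indices are illustrative)
electrons = E ** -3.1                # primary cosmic-ray electrons
secondary = 0.10 * E ** -3.4         # e+ from protons hitting interstellar gas
pulsar = 0.02 * E ** -2.5 * np.exp(-E / 500.0)   # harder pulsar component

positrons = secondary + pulsar                   # pulsars inject both e+ and e-
fraction = positrons / (electrons + pulsar + positrons)

# Secondaries alone would make the fraction fall steadily with energy;
# the harder pulsar term makes it turn up at high energies instead.
print(f"fraction at 1 GeV: {fraction[0]:.3f}, at 100 GeV: {fraction[-1]:.3f}")
```

With these made-up numbers the positron fraction rises with energy above roughly 10 GeV, mimicking the qualitative shape of the observed excess; the real analysis sums contributions from individual catalogued pulsars rather than a single generic component.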

Not everyone is convinced, however. “I believe the positron excess is most naturally explained by a nearby supernova remnant shock wave,” says Subir Sarkar, who divides his time between the University of Oxford and the University of Copenhagen. Along with Philipp Mertsch, of Stanford University, US, Sarkar first proposed in 2009 that the shock wave accelerates cosmic rays, causing them to bounce back and forth across the wavefront. Some of the positrons are also accelerated, boosting their energy and explaining the excess.

Mertsch and Sarkar have recently updated their proposal, and crucially, it makes a prediction that can distinguish between the two explanations. At higher energies, they expect future AMS-02 data to show a rise in both the proton/antiproton ratio and the boron/carbon ratio. The final answer might come with the release of the next AMS-02 dataset.

The research is published in the Journal of Cosmology and Astroparticle Physics.

Plasmonic waveguide stops light in its tracks

A schematic illustration of the design to stop light

A simple, solid-state waveguide that can “stop” light has been proposed by physicists in the UK. The researchers say that their device – which has yet to be built in the lab – would be straightforward to create and could be used as an interface between electronic and optical circuits. The waveguide could also lead to the development of new lasers and molecular-imaging systems.

Photons travel at the group velocity of a light wave, not its phase velocity. The phase velocity is the speed at which individual wavefronts move, whereas the group velocity is the speed at which a wavepacket as a whole advances as those wavefronts pass through it. If you want to hold a pulse of light still, therefore, you need to reduce the group velocity to zero. In principle, this can be achieved in photonic crystals, which are synthetic materials comprising periodic regions of high and low refractive index. However, unavoidable inhomogeneities in these structures have so far prevented light from being completely stopped in them.
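The distinction can be made concrete numerically. For a toy photonic-crystal-like band ω(k), the group velocity dω/dk vanishes at the band edge even though the phase velocity ω/k stays finite – exactly the condition needed to hold a pulse still. The dispersion relation below is invented purely for illustration:

```python
import numpy as np

a = 1.0                    # lattice period (arbitrary units, c = 1)
w0, amp = 1.0, 0.2         # toy band-structure parameters (illustrative)

k = np.linspace(0.1, np.pi / a, 500)
w = w0 + amp * (1.0 - np.cos(k * a))   # a toy photonic band w(k)

v_phase = w / k                 # speed of individual wavefronts
v_group = np.gradient(w, k)     # dw/dk: speed of the wavepacket

# At the band edge k = pi/a the group velocity drops to zero,
# while the phase velocity remains finite
print(f"band edge: v_phase = {v_phase[-1]:.2f}, v_group = {v_group[-1]:.4f}")
```

In the Imperial design the relevant modes have complex frequencies rather than this simple real band, but the stopped-light condition is the same: a mode whose group velocity is exactly zero.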

An ingenious alternative is electromagnetically induced transparency, in which two laser beams suppress an electronic transition excited by light at a certain frequency, making the material transparent to that particular light. If one of the lasers is suddenly turned off, light can be trapped inside the material and stored for up to a minute within coherent spin excitations of the material’s electrons, before being released when the laser is turned back on. However, this must be done at temperatures near to absolute zero in order to preserve the coherence of the spin excitations. Furthermore, it does not truly store photons, instead preserving the information of the photons in another form.

Complex frequencies

Now, Ortwin Hess and colleagues at Imperial College in London have unveiled a simpler scheme. They calculate that a 290-nm-thick silicon slab clad in 500 nm layers of indium tin oxide (ITO) would support optical modes with frequencies that are complex numbers. Furthermore, one of these modes would have a group velocity of exactly zero.

A practical question is how these modes could be excited, given that one cannot send light down a waveguide at zero velocity. The Imperial team argues that the solution lies in the fact that the zero-velocity mode is a leaky mode, which means that light in the silicon waveguide can escape through the ITO cladding as a type of non-propagating wave called an evanescent wave. This would be a disadvantage in an ordinary waveguide, but the researchers have now turned it to their advantage.

“In the same way that you can radiate out, you can also use that trick to radiate in,” explains Hess. The team calculates that near-infrared light shone on the waveguide at a specific angle would excite an evanescent wave in the ITO. This would then excite the desired zero-velocity mode in the silicon slab. Intriguingly, the wavepacket in the slab has almost no dispersion, which means that not only would it not move forwards, but its different wavelengths would also not spread out. This could be useful for boosting the amount of data that could be transmitted down a waveguide.

Going to California

The researchers calculate that the effect should survive realistic levels of imperfection in the surfaces of the ITO and silicon, and experimentalists in California are planning to realize the set-up. If successful, Hess believes that the work could lead to several applications. The group is currently working on a tiny “stopped-light laser”, in which a stationary pulse of light can be pumped and amplified with no need for a cavity or mirrors. Beyond this, holding light in one place would massively increase the probability of it interacting with matter. This could have applications in optical computing, high-efficiency solar cells and even biomolecular imaging. It might also be useful for creating an optical quantum memory.

Nicholas Fang of the Massachusetts Institute of Technology applauds the work. “This is a unique approach to squeeze light into deep sub-wavelength features,” he says. “This is probably the smallest optical fibre that one can engineer right now.” He is particularly impressed by the fact that the design does not need any exotic materials. “The core is a standard silicon material and the waveguide traps light using a conducting oxide that is very often used in the display business, so both materials are fairly familiar to the photonics industry.”

The research has been accepted for publication in Physical Review Letters.

Why electricity grids fail, what to do if your PhD is stolen, and what is a 'Suris tetron'?


By Hamish Johnston

It’s the nightmare scenario for any PhD student: losing all those research results that you carefully squirreled away for when you finally sit down to write your thesis. That’s just what happened to biologist Billy Hinchen, who lost four years’ worth of 3D time-lapse videos of developing crustacean embryos when his laptop and back-up drives were stolen. Find out what happened next in “What would happen if you lost all of your research data?” by Julia Giddings at the scientific software firm Digital Science. Hinchen also tells his tale of woe in the video above.


Copyright © 2026 by IOP Publishing Ltd and individual contributors