Scientists have long been fascinated by the ability of an octopus to grab objects. Indeed, we have covered several octopus-inspired gripping systems over the years. But according to August Domel and colleagues at Harvard University, these systems have either mimicked an octopus’s suckers or the curling of the creature’s arm – but not both.
Now Domel and colleagues claim to be the first to create a robotic arm that mimics both aspects of the creature’s grip. Check out the video above, but beware if monster movies give you nightmares. You can read much more about their research here.
Peppermint oil and walnut aroma are ingredients you might reach for the next time you bake cookies – so you may be surprised that researchers in Korea have used them to boost the performance of perovskite solar cells. The two substances were used to dissolve polymers that improve the performance of the cells and prevent them from leaking lead as they age – with the walnut aroma performing slightly better than the peppermint oil.
People have been travelling to space for nearly 60 years and it looks like we are in the early stages of an era of commercial human space travel. In “This is your brain on space: how gravity influences your mental abilities”, the psychologist Elisa Raffaella Ferrè of Royal Holloway University of London explains that the effect of gravity on human cognition has been a neglected field of research. She is trying to change this by doing experiments during parabolic flights that simulate the low-gravity environment of space. While each test lasts only about 20 s, Ferrè explains that a lot can be learned.
During this year’s edition of Photonics West in San Francisco, Photonis Scientific launched the Mantis3 camera. Evert van Gelder, global director of sales and business development, and René Glazenborg, product manager, explain the unique benefits of this nanosecond time-stamping camera, which is complete with the Cricket intensifier adapter from Photonis.
Evidence for the largest known explosion since the Big Bang has been reported by astronomers in the US and Australia. Using four telescopes, the team spotted a huge hole that the blast punched in the plasma that envelops a galaxy cluster.
The researchers reckon that the hole was made by a colossal burst of energy from a supermassive black hole at the centre of one of the cluster’s galaxies. They estimate that the explosion involved the release of 5×10⁶¹ erg (5×10⁵⁴ J) – the energy that 10²⁰ Suns would output in a year. This is five times more energy than the previous record holder for the biggest known explosion since the Big Bang.
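As a rough back-of-the-envelope check – the solar luminosity and year length used below are standard values, not figures from the study – the Sun-years comparison can be reproduced in a few lines of Python:

```python
# Back-of-the-envelope check of the energy scale quoted above.
# Solar luminosity and year length are standard values, not figures from the study.
E_blast = 5e54                   # J, estimated energy of the Ophiuchus outburst
L_sun = 3.846e26                 # W, luminosity of the Sun
year = 3.156e7                   # s, one year
E_sun_per_year = L_sun * year    # ~1.2e34 J radiated by the Sun in one year
print(E_blast / E_sun_per_year)  # ~4e20, i.e. of order 10^20 Sun-years
```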
The event occurred in the Ophiuchus galaxy cluster, which is 390 million light-years from Earth. Unlike the supernova explosion of a star, which occurs over a matter of months, the Ophiuchus event seems to have occurred over hundreds of millions of years.
Huge rift
Evidence for the explosion was observed using NASA’s Chandra X-ray Observatory, ESA’s XMM-Newton X-ray telescope, the Murchison Widefield Array radio telescope in Western Australia and the Giant Metrewave Radio Telescope in India. The multiple observations confirmed previous X-ray studies that suggested there was a huge rift in the cluster plasma. That X-ray evidence had been discounted because of the huge size of the structure. It was only when radio observations were made that astronomers became convinced of its existence.
“You could fit 15 Milky Way galaxies in a row into the crater this eruption punched into the cluster’s [plasma],” says Simona Giacintucci, from the Naval Research Laboratory in the US, who is one of six astronomers involved in the study.
Her colleague Melanie Johnston-Hollitt, from Curtin University, adds: “We’ve seen outbursts in the centres of galaxies before but this one is really, really massive…and we don’t know why it’s so big”.
The discovery was made using the first phase of the Murchison telescope, which comprised 2048 antennas. The facility is being expanded to 4096 antennas, which Johnston-Hollitt says will make the telescope “ten times more sensitive”. As a result, she believes that many more such explosions will soon be discovered.
Whole-body exposure to stray radiation can now be calculated accurately and efficiently for patients undergoing radiotherapy. Researchers in the US and Germany modified a treatment planning system (TPS) – the software used to predict patient dose distribution – to include unwanted doses from scattered and leaked radiation. The additional calculation, which adds an average of 7% to the computation time required for a standard treatment plan, could lead to better radiation treatments that avoid radiogenic secondary cancers and other side effects later in life. This is especially important for survivors of childhood cancer (Med. Phys. 10.1002/mp.14018).
External-beam radiotherapy has come a long way since its inception in the first half of the last century. In that time, most developments have been aimed at improving the way the technique targets tumours – typically by delivering greater doses with ever increasing accuracy. Nowadays, patients undergoing radiotherapy can usually expect to survive their primary cancers, with five-year survival rates exceeding 70%.
But as post-treatment lifespans grow longer, late side effects of radiotherapy – such as damage to the heart, fertility issues and secondary cancers – are becoming increasingly prevalent. These side effects can be caused by radiation that is delivered to non-target tissue outside of the main therapeutic beam, much of which is not modelled by clinical TPSs.
To address this shortcoming and model the stray-radiation dose for the whole body, Lydia Wilson at Louisiana State University (LSU) and colleagues (also from LMU Munich, PTB and BsF) set out to modify the research TPS CERR (Computational Environment for Radiotherapy Research).
“Typically, commercial systems are proprietary and we can’t get sufficient access to the source code to integrate our algorithms,” says Wayne Newhauser, at LSU and Mary Bird Perkins Cancer Center. “CERR is open and we can get our grubby little hands on every line of code.”
Various methods exist for calculating treatment doses. The most accurate way is to model the process using a Monte Carlo simulation, but the computational expense of this technique limits its utility, particularly for stray radiation doses to the whole body. As these are exactly what Wilson and colleagues intended to calculate, they chose a much more efficient method – a physics-based analytical model.
The team’s algorithm calculates, for every location, a total dose that is the sum of four components: the primary therapeutic dose intended for the tumour; the dose contributed by radiation scattered from the head of the linear accelerator (linac); the dose from radiation leaking from inside the linac; and the dose from radiation scattered in the patient’s own body.
For regions within the primary radiation field, the researchers used the dose calculated by the baseline TPS. For regions far from the target, not modelled by the unmodified system, they calculated the dose with their analytical algorithm alone. For regions near to the target, they used a combination of the two.
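As a rough illustration of this region-dependent bookkeeping, the sketch below (in Python/NumPy) sums the four components on a per-voxel basis. The function name, region labels and the simple near-field combination are hypothetical placeholders – this is an illustrative sketch, not the CERR-LSU algorithm itself.

```python
import numpy as np

def combined_dose(primary, head_scatter, leakage, patient_scatter, region):
    """Illustrative per-voxel combination of the four dose components.

    `primary` is the baseline TPS dose; the other three arrays come from an
    analytical stray-radiation model. `region` labels each voxel as "in",
    "near" or "out" relative to the primary radiation field. All names and
    the near-field blend are hypothetical, not the published algorithm.
    """
    stray = head_scatter + leakage + patient_scatter
    dose = np.empty_like(primary)

    in_field = region == "in"
    near_field = region == "near"
    out_of_field = region == "out"

    # Inside the primary field: keep the baseline TPS calculation
    dose[in_field] = primary[in_field]
    # Far from the target: use the analytical stray-dose model alone
    dose[out_of_field] = stray[out_of_field]
    # Near the field edge: combine baseline and analytical contributions
    dose[near_field] = primary[near_field] + stray[near_field]

    return dose
```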
To compare the performance of the baseline CERR and their extended version (which they call CERR-LSU), the researchers used both systems to calculate dose distributions for two X-ray energies – 6 and 15 MV – and two phantom geometries: a simple water phantom and a prostate-cancer treatment delivered to a realistic, human-shaped phantom. They then implemented the treatment plans on physical versions of the phantoms and measured the delivered doses at various locations.
Where the dose predictions of the two systems diverged – in regions outside of the treatment field – the team found that CERR-LSU was more accurate in every case. The locations where CERR-LSU offered the least improvement were those within the treatment field, where both systems used the baseline dose calculations. The improved performance of CERR-LSU came at a modest increase in computation time, with the extended system taking only 30% longer than CERR in the most extreme case.
So when can we expect these improvements to show up in the commercial TPSs used in the clinic? Newhauser thinks that it all depends on the priorities of the TPS vendors and their customers. When the demand is there, however, adoption could be quick – given that the physics is now well understood and other algorithms have been integrated successfully into multiple TPSs.
“The biggest challenge to commercialization has been the continuation of a historical focus on short-term outcomes,” says Newhauser. “As disease-specific survival rates gradually continue to increase, patients, clinicians and vendors will eventually become more interested in treatment-planning features that improve long-term outcomes. We could begin to see basic capabilities appear in commercial systems in two to three years.”
The world’s largest meeting of physicists will get underway in Denver, Colorado, at the beginning of March. More than 10,000 attendees will gather for the March Meeting of the American Physical Society, which promises to deliver a packed scientific programme as well as plenty of opportunities for networking, exploring options for careers and professional development, and catching up with colleagues and friends.
One highlight this year will be the Kavli Foundation Special Symposium on machine learning and quantum computation. Speakers include John Preskill from Caltech, Michelle Girvan from the University of Maryland, Google’s Patrick Riley, and Roger Melko from the University of Waterloo. During the session Eun-Ah Kim from Cornell University will also describe how machine learning has been used by her group to analyse complex experimental data derived from quantum matter.
All the exciting new research presented at the meeting would not be possible without the latest generation of experimental instrumentation, and the meeting’s technical exhibit will enable delegates to discuss their specific requirements with representatives from more than 150 companies. Many equipment vendors will be introducing new devices and systems designed specifically for cutting-edge physics research – some of which are detailed below.
A modular approach to next-generation cryogenics
Oxford Instruments, a pioneer in the development of cryogen-free dilution refrigerators, has released a next-generation system that provides a step change in modularity and adaptability for ultralow-temperature experiments in condensed-matter physics and quantum computing.
The Proteox system has been designed for a multi-user, multi-experiment environment. (Courtesy: Oxford Instruments)
The Proteox system has been fully redeveloped to provide a single, interchangeable, experimental unit that can support multiple users and a variety of experiments. This is achieved by a side-loading “secondary insert” module that allows samples, communications wiring and signal-conditioning components to be installed and changed whenever necessary.
“Our development team has recognized that to optimize for such a wide range of applications, adaptability needs to be designed in as the very foundation of the system,” says Matt Martin, engineering director at Oxford Instruments NanoScience. “This configurability allows us to offer more tailored solutions and experimental set-ups on standard lead times, and also provides our customers with maximized future-proofing against changing requirements in a dynamic research landscape.”
The system features a web-based software control system to provide remote connectivity as well as powerful visualization capabilities. Improved control is achieved by a patented gas-gap heat-switch system that actively adjusts the thermal conductivity between experimental plates.
This is the first release of a new family of Proteox dilution refrigerators that will all share the same modular layout to provide cross-compatibility and added flexibility for cryogenic installations.
Visit Oxford Instruments at Booth #1611 to find out how the Proteox system could help your research.
Ultracold experiments made simple
ICEoxford offers a range of cryogen-free cryostats that have been designed to take large experimental heat loads, which makes them perfect for everyday research and low-temperature experiments with quantum-based technologies.
The DRY ICE 0.8K Benchtop Cryostat is ideal for quantum experiments. (Courtesy: ICEoxford)
The latest addition to the range is the DRY ICE 0.8K Benchtop Cryostat, an upgrade from the DRY ICE 1.2K Benchtop. The new model combines continuous operation over the temperature range of 0.8 K to 425 K with a large sample space and a compact benchtop design.
With a cooling power of more than 400 mW at 2 K and above 80 mW at 1.7 K, the cryostat can reach its base temperature within 12 hours.
Various options are available, including five different window materials for optical access, custom wiring, and several alternative magnet technologies. Piezo-driven XYZ positioners, stacks and rotators can also be integrated into the cryostat.
Visit the ICEoxford team at Booth #1510 to explore their full range of research cryostats.
Complete solution delivers easy Hall analysis
Lake Shore Cryotronics will be demonstrating a new tabletop station for rapid, convenient Hall analysis that exploits the company’s patented FastHall measurement technology. The FastHall Station is a complete solution for researchers who are looking for a cost-effective way to add state-of-the-art Hall measurement capabilities to their lab.
FastHall Station offers a tabletop solution for fast and accurate Hall measurements. (Courtesy: Lake Shore Cryotronics)
In addition to a 1 T permanent magnet, high-precision sample holder, and a PC with application software, the station includes Lake Shore’s unique MeasureReady M91 FastHall measurement controller, which the company says is faster, more accurate and more convenient than traditional Hall solutions.
The M91 automatically executes measurement sequences, and provides better measurements more quickly – up to 100 times faster in many cases – especially when working with low-mobility materials. The M91 controller is also available as a standalone instrument for integrating into Hall measurement systems with existing electromagnets or superconducting magnets.
You can find out more about the M91 at Booth #1101, along with Lake Shore’s full range of measurement and control solutions for low-temperature and magnetic-field conditions.
China’s Chang’E-4 mission has given us the first detailed view of the subsurface geology on the far side of the Moon. Using ground-penetrating radar on the mission’s rover, scientists observed layers of dust and boulders formed by debris from past impacts on the lunar surface. The radar was able to probe about four times deeper than previous studies on the near side of the Moon.
Much of the surface of the Moon is covered in lunar regolith – a loose layer of pulverized rock and dust created by billions of years of meteorite impacts. While the regolith in parts of the near side of the Moon has been studied in detail by several missions, it had not been clear whether the surface geology is similar in underexplored regions of the Moon. The far side of the Moon, most of which is not visible from Earth, is of particular interest because this hemisphere has a thicker crust and consequently less volcanism. Until Chang’E-4, only relatively low-resolution satellite-based radar measurements had been made of far-side regolith.
In early 2019, the Chang’E-4 lander of the Chinese National Space Administration made history by being the first spacecraft to survive a landing on the far side of the Moon. Its landing site lies in the east of the Von Kármán impact crater, which is in the South Pole–Aitken Basin.
Two lunar days
The mission’s rover Yutu-2 was deployed after the landing and it scanned the lunar subsurface with its on-board radar. In a new study, a team of researchers from China and Italy present the results of Yutu-2’s first two lunar days (about 58 Earth days) probing the geology of the lunar far side.
The team found that the Chang’E-4 landing site sits atop a layer of loose deposits reaching up to 12 m in thickness. Beneath this, the radar found a second layer of progressively coarsening material with embedded boulders that extends to a depth of around 24 m. This is underlain by alternating layers of fine and coarse materials down to a depth of at least 40 m.
“The most plausible interpretation [of the subsurface geology] is that the sequence is made of a layer of regolith overlying a sequence of ejecta deposits from various craters, which progressively accumulated after the emplacement of the mare basalts on the floor of the Von Kármán crater,” the Chang’E-4 scientists write in Science Advances.
“Very high resolution”
“We can see for the first time at very high resolution an ejecta deposit on the moon — how it’s made, the main characteristics, the thickness of the regolith,” team member Elena Pettinelli of the Roma Tre University tells Physics World.
The far-side regolith differs in some ways from the regolith studied on the other side of the Moon. Apollo-era work had suggested that regolith is typically only a few metres thick and sits atop lava-flow surfaces. In 2013, a similar radar system used by the Chang’E-3 mission to the near side was only able to probe down to a depth of 10 m. The much deeper penetration achieved on the far side suggests that the regolith there is more porous and contains less ilmenite – a radar-absorbing mineral commonly found in the Moon’s volcanic basalts.
Geophysicist Wenzhe Fa of Peking University — who was not involved in the study — notes that the “exciting” Chang’E-4 radar data shine a light on the geological evolution of the far-side landing site. The regolith structure, he adds, “is the combined result of volcanic eruptions and multiple impact cratering events. All of these show that the geological history of the Moon’s far side (especially the South Pole–Aitken Basin) is complex”.
“The tentative identification of buried regolith layers developed on top of ancient crater ejecta deposits is especially interesting,” adds planetary scientist Ian Crawford, of Birkbeck College London, who was not involved in the study. While accessing buried material is a task for future missions, he says these layers “may preserve ancient solar wind and galactic cosmic ray particles which could potentially provide information on the past evolution of the Sun and the solar system’s galactic environment”.
With their initial study complete, the Chang’E-4 researchers will be applying the lessons they learnt in optimizing the processing of the Yutu-2 data to revisit the data collected by the earlier Chang’E-3 mission on the near side of the Moon. They will also continue to study the ongoing readings from the far side. Pettinelli is hopeful that the rover may pass over an area of thinner regolith, where it might be possible to see more layers and even the underlying lava deposits.
In this latest episode of the Physics World Weekly podcast, we discuss the science of sand dunes, and find out how dunes interact with each other as they move across the landscape.
After that, we have a chat about our recent skiing trips and examine the physics underlying the formation and migration of moguls on the piste. We also take a look at snowmaking cannons, and whether the recent “warm” weather in ski resorts could preclude their use.
Finally, we talk about a new variant of the classic Young’s double-slit experiment, which uses photoelectrons emitted via two different paths from rubidium atoms, and the implications of this research.
Whenever bad science appears on screen, a physicist is likely to declare “that’s wrong!”, making it hard to just sit back and enjoy the action along with your popcorn. As a physicist who writes about science on screen, I myself have pointed out Hollywood’s errors in physics and other sciences. But now, having reviewed some 150 science-based films, I’ve learned that the usual rationale for distorting the science is to maintain the flow of the story, which does not automatically make these films scientific disasters. They can still provide vivid teaching moments, publicize real science–society issues, and point young people toward science. Ideally though, the science should receive its proper weight too.
Fortunately, despite Hollywood’s tendency to put “story” over “science”, there are a number of independent filmmakers who make science an integral part of their stories. Without the publicity and distribution machinery that brings Hollywood features to many millions around the world, however, such independent films typically reach far smaller audiences. But in compensation there are lots of these films, supported by organizations that value their fresh approaches – and some indie efforts become films that are indeed seen by millions.
Physics is well represented among these independent films. With roots in a film festival held by scientist-filmmaker Alexis Gambis in 2006 at The Rockefeller University, his New York-based Imagine Science Films (ISF) is a successful non-profit devoted to merging science and film. ISF sponsors varied festivals that show independent science-based films around the world, and encourages scientist-filmmaker collaborations. In 2016, Gambis began Labocine, an online digital platform with 3000 science-based fiction, documentary and animated films, accompanied by curated comments.
Many of the films at Labocine.com convey what physics is really like. For example, The Researcher’s Article (2014) entertainingly shows the process of publishing a physics paper, and how important this is to its authors. Conservation (2008) dramatizes what happens when credit for a physics breakthrough is stolen. In Strange Particles (2018), a young theoretical physicist, frustrated by his lack of research progress and inability to inspire students, faces a hard question: is there any point in being a scientist if you’re not brilliantly talented?
Some films express physics ideas. Stuck in the Past (2016) shows how the finite speed of light brings us cosmic history, as an astrophysics student looks down the length of Manhattan and imagines historic moments carried by light that has been travelling since New York City was founded. Touching on general relativity, in Einstein–Rosen (2017) two brothers with a soccer ball show that a wormhole allows travel in time as well as space. In (a)symmetry (2015), quantum theorist David Bohm talks about the deep meaning of quantum physics; and in Bien Heureux (All is Well, 2016), a young physicist has no luck in explaining quantum entanglement to a friend, but educates us, the viewers.
The Alfred P Sloan Foundation also supports independent science films. Doron Weber, who directs Sloan’s programme in Public Understanding of Science, Technology and Economics, sees film as one way to bring science to people. As he describes it, film, together with books, theatre and other media, “support and reinforce each other to showcase stories about science and scientists”. The programme has provided more than 600 screenwriting and production grants to develop science films, and presents awards to outstanding science films. Weber also works with the Sundance Film Institute and other film schools to “influence a generation of aspiring filmmakers to integrate science and technology” into their work by exposing them to science. He finds that most of the 263 Sloan film school awardees continue to work in entertainment media and include science and tech in their creative efforts.
Many of the Sloan-supported films can be viewed online at scienceandfilm.org. Since 2000, about 140 of these have covered physics, astronomy and space science, and mathematics. They include documentaries such as Particle Fever (2013), about the first experiments at CERN’s Large Hadron Collider, and Chasing the Moon (2019), covering the early days of the Space Age. Biographical films include Dear Miss Leavitt (2018), about the pioneering astronomer Henrietta Leavitt, and Adventures of a Mathematician (2019), the story of Polish mathematician Stanislaw Ulam and his contributions to designing the hydrogen bomb and to early computation. Some films with roots in Sloan support have reached millions through wide theatrical release, such as the Oscar-nominated hit Hidden Figures (2016), which started as a Sloan book grant.
Asked about the importance of supporting independent films outside the Hollywood mainstream, Gambis and Weber give remarkably similar answers. Gambis notes the varied scientific fields that Labocine films cover and their cultural diversity. For instance, the nine films I described represent five different countries and include four women among their writers and directors. Weber also cites the range of subject matter and genres, and the varied ethnicities and nationalities and high proportion of women among Sloan filmmakers.
For us as physicists, these films have another special value: view one, and you just might learn something new about your science and yourself.
I am a semiconductor physicist by training – though this is very different to where I ended up. For my PhD, I studied the optical response of semiconductors in high magnetic fields and at liquid helium temperatures. I did a postdoc in quantum optics at KTH in Stockholm, then I went back to semiconductor physics at the University of Toronto.
You next worked in the film industry – that sounds intriguing. How did that happen?
I applied for what I thought was a software job in the film industry. I went into this Victorian ex-workhouse in Soho, only to find that the guys there were making world-leading celluloid film scanners. I saw the enormous film scanner, and thought ‘this is cool, I want to do this’. We were designing film scanners using military-grade satellite image sensors to scan celluloid film with extremely high resolution and supplying these scanners to all the major studios. That role took me all over the world, including Hollywood and South America.
But in 2008, Sony produced a new type of CMOS image sensor that was head and shoulders above the rest. Suddenly, what you could capture digitally was equivalent to, if not better than, what you could capture onto film. The whole motion picture industry changed – the post-production that used to take months now had to be turned around in weeks. The market for film scanners died overnight.
So what prompted your move into medical imaging?
That was an end of an era and I had to find another job. As an expert in imaging, I thought about areas where imaging will always be relevant – and chose medical imaging. I ended up at a medical imaging company as the engineering manager for their CMOS detector line. The irony being, of course, that it was CMOS that destroyed my motion picture career. I did that for about two years, but I thought I’d be more comfortable in a small business, so I decided to set one up myself.
What was the idea behind Unitive Design and Analysis?
I’d seen a lot of the challenges that large companies face in terms of performing technology development while also trying to deliver a product. I thought we could help people like that to explore new technologies.
Another issue is the regulatory landscape for medical devices. This changed a great deal in the last 10 years and I had learnt a lot about designing for compliance. You’ll see a lot of SMEs design a product and then try to get it through regulatory compliance. Suddenly they realise that they need evidence and feasibility data from the early design stages, and they end up having to redo all their design work.
So we can benefit large companies in terms of exploring new technologies. Small companies that don’t have the product-development experience can leverage the expertise of those who have already done it. And for the investors backing these technologies, it means that they can get a return on investment faster. Those three opportunities are why I set up the company.
While 60% of our business is consultancy and contracting, we also perform product development and sponsor a biochemical engineering student at UCL. We are completely self-funded, so the money we make in consulting and contracting is reinvested in our own product development.
How did you get involved with the IOP’s Medical Physics Group?
I’ve been an IOP member since 1993, and was most active when I was an academic. As I started working within medical devices, I became a member of the Medical Physics Group. When I started my own business, I realised that I needed to strengthen my networks and meet more people. So I got involved in the group again, joined the committee in 2016, and in 2017 I was elected as its chair.
The remit of the group is to raise awareness of medical physics and medical physicists. People don’t realise there’s a huge team of medical physicists in every hospital making sure that everything works to the standard you expect, and that this standard is the same across clinics, hospitals and trusts — they are the unsung heroes. There are a lot of radiotherapy machines across the UK, and with techniques like proton therapy becoming more popular, the number of medical physicists is actually increasing. Our role as a group is to promote that.
How does the group achieve this?
We organize five or six meetings a year. The December meeting is always focused on clinical translation – looking at the enormous challenges involved in translating technologies into the clinic. There are clinical pathway challenges and regulatory challenges; just getting technology into the NHS is a challenge. It’s an enormous task to bring medical devices to market.
The other thing that we do as a group, which I think we do quite well, is bring together a community that’s normally quite siloed. Medical physicists don’t just work in hospitals: there are a huge number of academic and industrial medical physicists as well – the people designing kit and producing big systems like accelerators, as well as devices like surgical lasers. Then there are those at the boundary between healthcare engineering and medical physics, who repair and service these systems. We all wear the hat of medical physicist, but work in very different environments. We want to bring all of these people together into a core community to discuss the evolving challenges.
A novel “decentralized” protocol makes it possible to share secret information among multiple senders and receivers using quantum teleportation. According to the South Korea-based team of researchers who developed it, the new method is the first of its kind, and might be used to make the first networked quantum computers.
Quantum teleportation – a way of transferring a quantum state between distant parties without physically sending a particle in that state through space – is a fundamental building block of quantum computation and communication, and works thanks to quantum entanglement. This “spooky action at a distance”, as Albert Einstein called it, allows two or more interacting particles to remain linked in a manner not possible in classical physics – no matter how far apart they are.
Conventional quantum teleportation begins when the sender and the receiver share a pair of entangled particles (for example, photons). The sender then performs a joint measurement on her half of the entangled pair and a third particle in an unknown state, and communicates the result to the receiver via a classical channel. Armed with this information, the receiver applies a corresponding operation to his half of the entangled pair and thereby recovers the unknown state that has been teleported.
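To see the standard two-party protocol in action, here is a minimal NumPy sketch of the textbook circuit (shared Bell pair, joint measurement by the sender, classical correction by the receiver). It is purely illustrative and is not the Korean team’s multi-party scheme or its implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Tensor product of a list of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Random unknown state |psi> to be teleported (qubit 0, held by the sender)
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

# Bell pair |Phi+> shared between sender (qubit 1) and receiver (qubit 2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Three-qubit state |psi> (x) |Phi+>, index ordering q0 q1 q2
state = np.kron(psi, bell)

# Sender's joint (Bell-basis) measurement: CNOT(q0 -> q1), then H on q0
CNOT4 = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=complex)
state = kron(H, I2, I2) @ kron(CNOT4, I2) @ state

# Measure qubits 0 and 1: sample an outcome (m0, m1) and collapse the state
probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2).ravel()
m0, m1 = divmod(rng.choice(4, p=probs), 2)
bob = state.reshape(2, 2, 2)[m0, m1]
bob = bob / np.linalg.norm(bob)        # receiver's (collapsed) qubit

# Classical correction on the receiver's qubit: X if m1 = 1, then Z if m0 = 1
bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

# Fidelity with the original unknown state should be 1 (up to rounding)
print("fidelity =", abs(np.vdot(psi, bob)) ** 2)
```

Running the script prints a fidelity of 1, confirming that the receiver’s qubit ends up in the original unknown state even though that state was never transmitted directly.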
From one party to many
The first experimental demonstration of quantum teleportation came in 1997, when researchers succeeded in teleporting the spin (or polarization) of a photon. Since then, various groups have teleported the states of atomic spins, nuclear spins and trapped ions – to cite but three examples.
According to the South Korea team, however, there is no scheme that allows quantum information to be teleported to (and securely shared by) multiple parties at the same time. The researchers – Sang Min Lee and Hee Su Park of the Korea Research Institute of Standards and Science in Daejeon; Seung-Woo Lee of the Quantum Universe Center at the Korea Institute for Advanced Study in Seoul; and Hyunseok Jeong of the Department of Physics and Astronomy at Seoul National University – now propose such a protocol. Importantly, their scheme involves teleporting these “quantum secrets” in a decentralized way, so that the information does not have to be concentrated at a single location (a so-called “trusted node”).
“Unlike all previous teleportation protocols, our scheme allows quantum information shared by an arbitrary number of senders to be transferred to another arbitrary number of receivers,” they tell Physics World. “If any unauthorized group or individual tries to access the hidden secret, this break-in attempt is detected by the other parties.”
Proof-of-principle experiment
The researchers say they have already performed a proof-of-principle experiment between two senders and two receivers using a four-photon entanglement network. Unlike in previous techniques, no single party or subset of the senders and receivers can fully access the secret information, they explain. The results clearly indicate that the full information cannot be owned by individual parties and remains hidden until everyone involved agrees to reveal it.
The scheme facilitates quantum information relay over a network without requiring fully trusted central – or even intermediate – nodes, and the researchers say it could be further extended to include error corrections against photon losses or even quantum bit- or phase-flip errors. Such “decoherence” phenomena must be accounted for if quantum computation is to succeed. The work could thus eventually become a building block for a distributed network of quantum computers.