
Katsuhiko Sato: from inflation to science policy

Katsuhiko Sato.

Your first paper was with the physics Nobel laureate Hans Bethe. How did that come about?

Yes, it was my very first paper and was published in 1970 when I was still a graduate student at Kyoto University (Astronomy and Astrophysics 7 279). Bethe stayed in Kyoto for around four months and I was very fortunate to work with him at that time on the melting of nuclei in neutron stars. Bethe always explained things very kindly, using easy words. During this time I realized what a great physicist he was. It was a great start to my career.

You worked on the early universe next. What was interesting about this?

In 1979 I was invited to the Nordic Institute for Theoretical Atomic Physics in Copenhagen for a year to work on supernova explosions. At that time I was becoming more interested in cosmology, in particular what effects phase transitions had in the early universe. I found that cosmic expansion becomes exponential, and this is what we call inflation.

At the same time, physicists Alan Guth and Alexei Starobinsky independently came up with inflation and the theory had an immediate impact. What was it like watching the idea take hold?

It is very natural for scientists to come up with the same idea at the same time. Researchers read articles from around the world and from this, new ideas are born. What surprised me regarding inflation theory is that today many scientists are still working on new types of inflation. Alan Guth and I proposed inflation as a result of grand unified theories, but it was found there are some difficulties with this model. Now many types of inflation are proposed from various other points of view such as superstring theory. In that sense, the situation is a little complicated and confusing.

What did you think when you first heard that BICEP2 had detected signatures of inflation in the cosmic microwave background, only for it to be later disproved?

When I first heard the news, I was very pleased. I told Japanese newspaper and television reporters that this was a historic discovery. But when I learned that this important result for inflation theory had disappeared I was very sad. So now I’m greatly looking forward to a Japanese-led satellite mission called LiteBIRD, which has just been approved by the Japanese government. It involves people from NASA and the European Space Agency and will be 100 times more sensitive than BICEP2. This kind of co-operation is becoming stronger and stronger and is very important.


In the 1990s you switched into science administration, including two stints as president of the Physical Society of Japan. What attracted you to make this move?

Theoretical physicists usually make their contributions when they are young. I was getting old and felt that to continue making a contribution to science and Japanese society, I should move into science administration. At least then my research position could be used by an early-career scientist.

You’ve just stepped down as director of the Research Centre for Science Systems at the Japan Society for the Promotion of Science – a position you held since 2016. What does the research centre do?

Our primary mission is to advise the government on scientific research funding. Our secondary mission is to select scientists to review applications from researchers for competitive grants. There are approximately 100,000 applications each year and we have about 7000 reviewers. These are huge numbers. Although I stepped down as director at the end of March, I am staying on as a consultant.

Do you think the International Linear Collider (ILC) should be built in Japan?

I think the ILC should be built. Not only could the ILC produce large quantities of Higgs bosons, but the signal should be very clean. However, it is difficult for the ILC to be approved by the Japanese government as the total construction cost is estimated at more than 800 billion yen ($7bn). Of course, we will make many efforts to help the government understand the long-term scientific significance of this project.

How important is it that physicists communicate with the public to get their support for big projects like the ILC?

This is a very important role for scientists. Government support for science comes from people’s taxes so we must tell people about the results and achievements of science, and we must make it interesting. The Subaru Telescope in Hawaii is a good example. It has been very successful at communicating its results to the public and Japanese society has shown great support for this project.

Are you confident that Japan’s rich history in physics will continue?

The Japanese science budget has been decreasing for a number of years, which has been challenging for Japan’s academic community. On the other hand, although government support for universities has been hurt by cuts, the money given directly to researchers through competitive grants is actually increasing. So it is not easy – scientists must make great efforts – but we should still have hope for Japanese science.

Image quality analysis for MR in radiotherapy using the MagphanRT system

Want to learn more on this subject?

MRI manufacturers have made great strides in reducing MR system distortion. Maintaining acceptable levels of distortion relies on properly controlling a long chain of conditions. A robust system of quality control for crucial imaging performance characteristics is critical for detecting significant deviations before they affect clinical operations. The Phantom Laboratory’s MagphanRT® phantom design meets the specific QA needs for MR imagers in radiotherapy applications. MagphanRT’s modular configuration allows QA measurements over the wide fields of view found in radiotherapy applications.

This webinar, presented by Richard Mallozzi, will discuss:

  • The design and accompanying automated analysis of the phantom.
  • Setting up an automated QA system with the accompanying cloud-based Smári image analysis system.
  • Clinical experience and findings using the MagphanRT.


Richard Mallozzi earned an AB in physics from Harvard University and a PhD in physics from the University of California, Berkeley in 1998, where he studied high-temperature superconductivity.

After graduating from UC Berkeley, Mallozzi joined GE Global Research as a magnetic resonance scientist, where he worked on diverse projects ranging from MRI acoustic noise reduction and interventional MRI to gradient and RF coil development and applications of MRI to neuroimaging.

While at GE Global Research, Mallozzi collaborated with The Phantom Laboratory and GE scientist Daniel Blezek to develop the phantom and analysis technique that formed the basis of the control method.

Mallozzi joined ONI Medical Systems in 2007, where he helped develop the first high-field (1.5 Tesla) commercial extremity MRI system. He then joined The Phantom Laboratory and Image Owl in 2014, where he works on a variety of physics and application issues pertinent to medical imaging. He co-designed the MagphanRT system and developed the automated analysis for the phantom.

Cyanobacteria and nanomaterials give solar cell a boost

Strategically designed nanomaterials have been used to optimize the performance of a solar cell that incorporates photosynthesizing cyanobacteria. The work was done by Jae Ryoun Youn, Young Seok Song and colleagues at Seoul National University and Dankook University in South Korea. What is especially impressive about their new technology is that it exploits a broad region of the solar spectrum while simultaneously boosting the photosynthetic activity of the cyanobacteria.

The Sun offers a supply of clean and renewable energy, but how to utilize this limitless yet decentralized energy source as efficiently and practically as possible is a significant engineering challenge. The Korean team is pursuing a biological solution to this problem in cyanobacteria, which are ancient organisms that carry out photosynthesis and respiration in almost every environment on Earth.

Optimized biophotovoltaic cell

The device, described in Nano Letters, achieved enhanced efficiency by employing three separate active materials, each covering different regions of the solar spectrum.

Zinc oxide nanorods are highly photoactive in the ultraviolet region, but the researchers extended this range to include visible light by coating the nanorods with another functional nanomaterial: gold nanoparticles. These exhibit localized surface plasmon resonance, a phenomenon capable of increasing the photoactivity of semiconductors through strong light absorption and scattering, and an enhanced local electromagnetic field at a specific frequency. Essentially, the gold nanoparticles act as tiny light-harvesting antennas.

By loading this hybrid nanostructure with cyanobacteria, the scientists made a third light-harvesting addition to the system and achieved a further boost in performance. Using electromagnetic field simulations and measurements, the authors showed that the zinc oxide nanorods scattered light across the spectral regions favoured by the cyanobacteria towards the organisms at the top of the photoanode. Furthermore, this scattered light was amplified by the gold nanoparticles. They concluded that the zinc oxide nanorods and gold nanoparticles not only harvested light themselves – through interband transitions and hot-electron injection, respectively – but also improved the power output of the cyanobacteria.

Dark current and a sunny outlook

In addition to their environmentally friendly nature and self-healing ability, cyanobacteria have another trick up their microscopic sleeves when it comes to biophotovoltaics. The researchers demonstrated that their solar cell even produced photocurrent in the dark. This is because cyanobacteria can continue to break down stored carbon intermediates accrued during light periods. Given the cyclic nature of the energy provided by the Sun, this so-called “dark current” could help to reduce demand on energy storage systems.

In light of their results, the authors believe that this study paves the way for further development in the field of biophotovoltaics, enabling energy generation which is both efficient and sustainable.

Full details of the research are reported in Nano Letters.

Ammonia emissions can drive urban smog formation

Nitric acid and ammonia vapours can condense onto new aerosol particles and rapidly accelerate their growth, the CLOUD experiment (Cosmics Leaving Outdoor Droplets) at CERN has found. This may explain the smog that can engulf megacities on cold winter days, the researchers say, and provides evidence that tighter controls on ammonia emissions from vehicles and other sources are needed. 

Winter urban smog occurs when pollution particles form and build up in cold air that is trapped in and over a city beneath warmer air at higher altitudes. This temperature inversion suppresses convection, preventing the dispersal of the air pollution. But how these new particles continue to form has been a mystery, as theory suggests that they should be rapidly scavenged by the high concentration of pre-existing particles.

“When there is a winter smog episode, we’re seeing new particles continuously forming and these are making the clouds thicker and more opaque,” explains Jasper Kirkby, head of the CLOUD experiment. “It was not understood how these particles could form, because as soon as a small particle forms within a highly polluted environment it will collide with the pre-existing particles and basically be removed from creating new particles.”

The CLOUD experiment is able to replicate the atmosphere anywhere in the world in a special, ultra-clean chamber. It was designed to investigate the impact of cosmic rays on aerosol, cloud droplet and cloud formation, by taking advantage of CERN’s proton synchrotron, an adjustable source of cosmic rays. But it can be used to explore other atmospheric processes.

CLOUD experiment

In the latest work, published in Nature, Kirkby and colleagues mimicked the range of atmospheric conditions typical of polluted megacities to investigate the role of ammonia and nitric acid in particle formation. They varied the temperature of the chamber from 20°C to −25°C, and adjusted the levels of sulphuric acid, ammonia and nitric acid, as well as aromatic precursors, to cover the ranges typically found in such environments.

They found that at low temperatures, below about 5°C, nitric acid and ammonia vapours can condense onto freshly nucleated particles that are just a few nanometres in diameter. Due to the high abundance of these vapours, the resulting particle growth rates can be extremely high, reaching well above 100 nanometres per hour. This speeds the particles through the so-called “valley of death”, where very small particles are most vulnerable to loss, in a few minutes.
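As a rough sanity check on that timescale, the growth rate can be combined directly with the size window the particles must cross. The particle diameters below are illustrative assumptions chosen for the sketch, not figures taken from the study:

```python
# Back-of-envelope check: how long does a freshly nucleated particle
# take to grow through the "valley of death" at the growth rates
# CLOUD reports? (Illustrative numbers, not from the paper itself.)

def growth_time_minutes(d_start_nm, d_safe_nm, rate_nm_per_hour):
    """Minutes to grow from d_start_nm to d_safe_nm at a constant rate."""
    return (d_safe_nm - d_start_nm) / rate_nm_per_hour * 60

# Assume particles nucleate at ~2.5 nm and become relatively safe from
# scavenging above ~10 nm (typical values, assumed for illustration).
t = growth_time_minutes(2.5, 10, 100)   # rate "well above 100 nm/h"
print(f"{t:.1f} minutes")               # 4.5 minutes
```

At 100 nm/h the window is crossed in under five minutes, consistent with the "few minutes" quoted above; at the even higher rates the team measured, it would be faster still.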

Kirkby says that nitric acid and ammonia are volatile vapours that are continuously exchanging with particles in the atmosphere, “so they previously were thought to be playing just a passive role, adding a bit of mass to smog, but not really driving the processes”. He adds: “What we found is that they are actually key players and they are helping new particles form in highly polluted environments.”

Due to the strong temperature dependence, the researchers expect the conditions necessary for rapid particle growth to occur in inhomogeneous urban settings, especially in wintertime, driven by vertical mixing and strong local sources of emissions such as traffic. Although the rapid growth may only last for a few minutes at a time, across an urban area this effect could still lead to the build-up of high concentrations of visible particles and dense smog.

The study also found that at temperatures below −15°C, ammonia and nitric acid can condense together and form their own ammonium nitrate particles – which then rapidly grow.

Global emissions of ammonia are dominated by farming. In cities, however, ammonia and nitric acid emissions are largely due to vehicles. Vehicle emissions of nitric acid, which derives from nitrogen oxides, are currently controlled. Ammonia emissions, however, are not. In fact, Kirkby says that such emissions are increasing in cities, as catalytic converters on vehicles are creating ammonia.

According to Kirkby, the study results are significant for human health and urban pollution. “It has highlighted the importance of controlling ammonia inside urban environments,” he explains.

Why ultrafast detection is ultra-good

I once watched a YouTube video that blew my mind. Created in 2011 by Ramesh Raskar and colleagues at the Massachusetts Institute of Technology (MIT) in the US, it showed a pulse of visible laser light less than one-trillionth of a second (10⁻¹² s) long, travelling down an empty, horizontal plastic fizzy-drink bottle (see video below). The light bounced off the inside of the cap, creating a blinding flash as it did so.

Involving millions of repeated measurements, the filming technique was complicated, but it allowed the MIT group to create videos at an effective rate of a trillion frames per second. So although the light took just a billionth of a second (10⁻⁹ s) to travel down the bottle, you could see the movement and reflection of the pulse in all its glory. Obviously, to allow the event to be watched properly, the video was shown in super-slow motion, with the light’s journey lasting about 10 seconds when viewed online.
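The arithmetic behind that playback is straightforward. A quick sketch using the round numbers quoted above (the ~1 ns transit time and the 10-second clip are approximate):

```python
# Rough arithmetic behind the slow-motion playback, using the round
# numbers quoted in the text (approximate, not from the MIT paper).

real_duration_s = 1e-9      # light transit through the bottle: ~1 ns
playback_s = 10.0           # on-screen duration of the clip
frame_rate = 1e12           # effective capture rate, frames per second

slowdown = playback_s / real_duration_s
frames_captured = frame_rate * real_duration_s

print(f"slow-motion factor: {slowdown:.0e}")                # 1e+10
print(f"frames spanning the event: {frames_captured:.0f}")  # 1000
```

In other words, the clip runs about ten billion times slower than real life, built from roughly a thousand effective frames.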

The video was just one of the latest attempts in the quest for “ultrafast photography”, an endeavour that began in 1878 with the recording of a galloping horse and was followed nine years later by the photograph of a bullet moving faster than the speed of sound. Decades later, the development of charge-coupled devices (CCDs) and then complementary metal-oxide-semiconductor (CMOS) technology let researchers study moving objects at rates of up to 10⁷ frames per second. But the 2011 video showed that events could now be captured at a trillion (10¹²) frames per second.

The big snag with the MIT filming method was that the researchers had to repeat the experiment over and over again, gradually building up their animation of the light. Synchronizing the camera and the laser that generates the pulse – to ensure the timing of every exposure was the same – also needed lots of advanced optical kit and delicate mechanical control. Indeed, it took only a nanosecond for light to scatter through the bottle but an entire hour to collect all the data, prompting Raskar to joke that his system was “the world’s slowest fastest camera”.

However, in 2014 researchers at Washington University in St Louis, US, led by Liang Gao (now at the University of California), developed a way of directly imaging ultrafast events in a single go, rather than having to film them repeatedly. Their technique, dubbed “compressed ultrafast photography”, operates at rates up to 10¹¹ frames per second (Nature 516 74). They used it to take a snapshot of a single photon, meaning they had video-recorded the fastest physical entity in the universe (see video below).

Medical magic

With such ultrafast imaging techniques constantly evolving, researchers have naturally wondered about other potential applications, such as studying ultrafast phenomena in biological cells. But could we extend such techniques so they don’t just work with visible light (roughly 1 eV) but detect much higher-energy X-rays (0.1–100 keV) and gamma rays (>100 keV) too? Being able to track such photons with high temporal and spatial resolution would have a massive impact in medical physics.

High-energy photons are already vital in X-ray computed tomography (CT), positron emission tomography (PET) and single photon emission computed tomography. While some of these techniques simply take snapshots of structures within the body (think of a standard X-ray of a broken bone) others look at physiological functions such as blood flow, metabolism and chemical absorption. Being able to detect high-energy photons with higher spatial and temporal resolution would allow us to follow dynamic processes in the body even more closely and continuously.

Take PET, which can measure how fast glucose is consumed around the body, or provide information about blood flow and oxygen use. The technique involves injecting patients with a radioactive tracer, such as fluorine-18. Positrons emitted by the radioisotope collide with electrons in the patient’s tissue, annihilating each other to produce two gamma-ray photons with an energy of 511 keV. Travelling in opposite directions, they’re captured by “scintillator” crystals, triggering the release of lower-energy photons that a photodetector converts into an electric signal.

In revealing the distribution in time and space of the annihilation events, PET therefore indicates roughly where the radioisotope tracer has spread to at a particular time. That information is vital because it tells us how fast the tracer has been able to travel to and from various tissues and organs in the body. PET, in other words, helps us to understand biochemical processes in the body and distinguish between normal and abnormal behaviour.

figure 1

Today’s state-of-the-art clinical PET scanners can achieve a resolution of about 3–4 mm in a scan that lasts a couple of minutes, and they offer reasonably good (10%) energy resolution, thereby providing vital information about the biochemical behaviour of the radiotracer in the body. Better images are possible using a new variety of machine known as time-of-flight PET (TOF-PET) scanners, which can measure the tiny time difference between when the two annihilation photons are detected. The world’s current fastest commercial TOF-PET scanner, which is built by Siemens, can resolve processes down to just above 200 picoseconds (200 × 10⁻¹² s) (figure 1).

That kind of timing resolution is excellent and, what’s more, the signal-to-noise ratio is 2.5 times higher than with PET scanners that don’t use timing information. So, if – and it’s a big if – detectors could operate at still faster speeds of, say, 10 ps, we’d get even better maps of where the photons originated. Indeed, the signal-to-noise ratio of the images is expected to rise by a factor of 12 according to a formula developed by Maurizio Conti and colleagues from Siemens in 2013 (IEEE Transactions on Nuclear Science 60 87).
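To see why picoseconds matter, note that timing error maps directly onto position error along the line between the two detectors (Δx = cΔt/2), and the sensitivity gain from timing is often approximated by a square-root formula of the kind used by Conti and colleagues. A rough sketch follows; the 20 cm effective object size is my own assumption, chosen purely for illustration:

```python
import math

C = 2.998e8  # speed of light, m/s

def localization_mm(ctr_s):
    """Spatial uncertainty along the line of response: dx = c*dt/2."""
    return C * ctr_s / 2 * 1e3

def tof_snr_gain(diameter_m, ctr_s):
    """Commonly used approximation for the TOF sensitivity gain,
    gain ~ sqrt(2D / (c*dt)), where D is the effective object size."""
    return math.sqrt(2 * diameter_m / (C * ctr_s))

print(f"{localization_mm(200e-12):.0f} mm")  # ~30 mm at 200 ps
print(f"{localization_mm(10e-12):.1f} mm")   # ~1.5 mm at 10 ps
# Assuming an effective object size of ~20 cm:
print(f"gain ~ {tof_snr_gain(0.20, 10e-12):.0f}")  # ~12
```

At 200 ps the annihilation point is pinned down to about 3 cm; at 10 ps it shrinks to around 1.5 mm, and the square-root formula then gives a gain of roughly the order-of-magnitude quoted above.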

Access all areas

Huge improvements of that kind, brought about by high-resolution, ultrafast gamma-ray detectors, could bring many practical benefits in medicine. Clinicians would be able to examine patients faster, reduce the isotope doses they need, and improve the quality, resolution, accuracy and precision of the images that are so vital for medical diagnoses. But such detectors could benefit many other areas too, which is what prompted the European Union (EU) to fund the Fast Advanced Scintillation Timing (FAST) project.

Led by Etiennette Auffray Hillemanns, a physicist at the CERN particle-physics lab near Geneva, FAST ran from 2014 to 2018. Also known as Action TD1401, this multidisciplinary project (of which I was communications manager) was part of the European Cooperation in Science and Technology (COST). Bringing together hundreds of academics and industrialists from 25 different nations, our aim was to create fast, scintillator-based detectors with sub-100 ps timing precision.

In medical physics, such fast photodetectors could help develop TOF-CT scanners (Phys. Med. Biol. 65 085013), Cherenkov imaging and neutron imaging. In biology, they could be vital for high-throughput microscopy, time-gated optical tomography, fast cell sorting, and applications where you want to use very low levels of light so you don’t destroy a live sample. Such detectors could also benefit astronomy by boosting our attempts to image gamma-ray bursts and other cosmic sources of high-energy radiation. Future high-luminosity particle colliders – such as the upgrade to CERN’s Large Hadron Collider – could reap the reward too. At a more everyday level, fast detectors will help in automotive LIDAR (light detection and ranging), which is vital for self-driving cars.

With so much potential, members of the FAST project decided to focus on three key areas. The first was to find materials that can produce light more quickly (and given that there are many different types of scintillating crystals, this was a big job). The second was to find ways to transport the light faster through the material to ensure it’s collected more quickly by the photodetector. This is a key issue because the crystal in a PET scanner usually has to be 2–3 cm long to stop enough high-energy photons on their journey to the photodetector and so get a big enough signal. Finally, and most importantly, we wanted to develop ways to detect the scintillation light faster and convert it more quickly into an electric signal, while still measuring the deposited energy of the high-energy photon with good precision.

In reaching this final goal, it’s worth noting that fast detectors have been made possible thanks to solid-state silicon photomultipliers, which have replaced conventional vacuum-based photomultiplier tubes as the best way of recording signals from scintillator crystals in PET scanners. Consisting of thousands of single-photon avalanche diodes (SPADs), silicon photomultipliers have continually smashed their time-resolution records. But as Stefan Gundacker from the University of Milano-Bicocca and colleagues at CERN showed last year (Phys. Med. Biol. 64 055012), if we could read these photomultipliers at high frequency, we’d be able to manipulate the signal from each diode and still extract timing and energy information with high resolution.

The key thing about the PET technique is that it produces many scintillation photons, but they don’t all appear at the same time. Silicon photomultipliers help by lowering the threshold between the signal and electronic noise, thereby letting you measure more accurately when the very first scintillation photons in the crystal are produced – and therefore boost the timing resolution. Indeed, the quest to improve PET and other medical-physics applications is what inspired the early developers of silicon photomultipliers in the first place.

In the case of scintillator crystals made from lutetium oxyorthosilicate doped with cerium and calcium (LSO:Ce:Ca), specially adapted front-end electronics can measure the difference in the arrival time of the two annihilation photons – the so-called “coincidence time resolution” (CTR) – to 98 ps for a 2 cm-thick crystal (the size you typically get in an ordinary PET scanner) and just 58 ps for a 3 mm-thick crystal. Even better, high-frequency readout means you can exploit another signal too – Cherenkov light emitted by scintillator electrons that have picked up energy from the 511 keV gamma-ray photons and so started moving faster than the speed of light in the material. This light gives you a way of “time stamping” the arrival of the very first scintillation photons with high timing precision.

FAST team

For another common PET scintillator – bismuth germanate (Bi4Ge3O12 or BGO) – the CTR was 277 ps for 2 cm-thick crystals and 158 ps for 3 mm crystals. These figures are interesting because although BGO appears slower for PET imaging, it is much cheaper to make than LSO. The other plus-point of BGO is that it’s denser than LSO, which means that a bigger fraction of the incident gamma-ray photons lose all their energy when they interact with the crystal, thereby strengthening the signal. The downside of BGO is that it has traditionally had poor time resolution, so you can’t usually distinguish between events less than a few nanoseconds apart. But with the possibility of extracting timing information using Cherenkov radiation, we might see TOF-PET scanners fitted with BGO crystals at a lower cost.

Another way we could measure photons in PET scanners with a CTR approaching 10 ps would be to use direct-band-gap semiconductors that emit light from excitons (electron–hole pairs). One FAST project, for example, developed a material based on highly luminescent semiconducting nanocrystals made from gallium-doped zinc oxide embedded in a scintillating polystyrene matrix (Optics Express 24 15289). It emits lots of high-intensity light when the excitons in the polystyrene decay over sub-nanosecond times and transfer their energy to the nanocrystals, which then scintillate. A second group, meanwhile, showed that a multiple-quantum-well heterostructure based on InGaN/GaN can result in ultrafast luminescence excitation rise and decay times within picoseconds (J. Luminescence 208 119).

Another promising scintillation mechanism for ultrafast timing is known as “hot intraband luminescence”, or IBL (J. Luminescence 198 260). It occurs when a high-energy photon deposits its energy in a scintillator, which within picoseconds spits out a handful of photons over a broad range of energies. Although the IBL yield is far too low to use on its own, it could possibly be combined with other fast-scintillation techniques to time-tag scintillation events and significantly improve time resolution. The IBL yield could also possibly be increased further by engineering the band structure of the material to boost the number and probability of allowed radiative intraband transitions.

In addition to investigating ultrafast scintillation processes, FAST investigated the key parameters that photodetectors and electronics need to achieve good timing resolution. For photodetectors, high photodetection efficiency and a fast single-photon response are essential for catching the scintillation photons from the SPADs and providing an electrical signal with the lowest timing jitter. The project also saw researchers develop electronics that could resolve events at picosecond time scales while also ensuring the detector used as little power as possible yet still had a large bandwidth and low noise.

Crystal-clear future


We’ve come a long way from that early 1878 recording of a galloping horse. Indeed, when it came to improving the timing precision of detectors, the FAST project exceeded our expectations. We showed it’s perfectly possible to achieve a CTR of 58 ps and indeed we are now targeting 10 ps timing levels, which would be invaluable in many areas of fundamental and applied science. I believe this objective may be demonstrated in the lab in the foreseeable future, though it’s hard to predict when we’ll see an entire PET scanner built from such fast high-energy photon detectors, let alone when one would become financially viable. Still, I can’t wait to see the first images from a whole-body PET scanner operating with 10 ps CTR detectors (Med. Phys. 10.1002/mp.14122).

The EU FAST project ended in November 2018, but its activities are still alive via the Crystal Clear Collaboration, co-ordinated by researchers at CERN, which played a big role in FAST. Members of the collaboration have already launched a competition, challenging research teams to build the first detector that can resolve events with a time resolution of below 10 ps. Inspired by great historical contests of the past, such as the challenge to find a way to determine longitude at sea, this competition will, I hope, speed up the development of nuclear-instrumentation techniques and applications. And the more teams that join in, the likelier we are to succeed in this grand quest.

Dynamic stimulation of visual cortex lets blind people ‘see’ shapes

A US-based research team has successfully demonstrated how dynamic stimulation of the visual cortex enables blind and sighted people to “see” shapes – a technique that could one day be used to convey entire visual scenes to patients.

Neuroscientists and neurosurgeons have long known that passing small currents through electrodes implanted in the visual cortex produces the perception of a small flash of light, known as a phosphene. This process could serve as the basis for a visual cortical prosthesis (VCP), a device that could restore some visual abilities to blind patients. Although some VCPs were tested in the 1960s and 1970s, they had limited effectiveness and were constrained by the technology of the time. But now a new wave of teams is attempting to produce a modern VCP, using improved electrodes and better wireless data and power transfer technology.

Two such teams, based at Baylor College of Medicine (BCM) and the University of California, Los Angeles (UCLA), have carried out clinical trials and tests of a VCP device called Orion, produced by Second Sight Medical Products. The results, recently published in the journal Cell, show that the Orion device is a safe and effective means of providing patients with some visual experience.

“Our research team is specifically trying to understand how to make the subjects be able to see and discriminate between visual forms such as simple objects or letters. Our key finding is that we can more effectively communicate visual forms to the subject if we use a dynamic electrical stimulation protocol,” says William Bosking, assistant professor of neurosurgery at BCM, who co-wrote the article with senior author Daniel Yoshor, professor of neurosurgery at BCM.

Put simply, this means that instead of treating the electrodes on the array like pixels in a video display, and sending various current levels to all of them at once in an attempt to convey a particular form or shape to the patient, the device instead stimulates only the electrodes that outline the shape it is trying to convey, and stimulates them in a rapid dynamic sequence.
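As a toy illustration of that idea (entirely hypothetical code, not Second Sight’s actual firmware or electrode layout), an outline can be turned into a timed sequence of single-electrode stimulations rather than one simultaneous pattern:

```python
# Toy sketch of dynamic stimulation: rather than driving all electrodes
# at once like pixels, stimulate the electrodes along a shape's outline
# one after another, like tracing a letter on someone's palm.
# (Hypothetical electrode indices and parameters, for illustration only.)

def dynamic_sequence(outline_electrodes, dwell_ms=50, current_ma=1.0):
    """Return (time_ms, electrode, current_ma) triples tracing the outline."""
    return [(i * dwell_ms, e, current_ma)
            for i, e in enumerate(outline_electrodes)]

# Hypothetical indices outlining the letter "L" on a 60-channel array:
letter_l = [0, 10, 20, 30, 31, 32]
for t, e, i_ma in dynamic_sequence(letter_l):
    print(f"t={t:3d} ms  electrode {e:2d}  {i_ma} mA")
```

The key design choice is the ordering in time: each electrode fires briefly in turn, so the percept sweeps across the visual field instead of appearing as a cloud of simultaneous phosphenes.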

Dynamic stimulation

“This is analogous to how you might trace out a letter on someone’s palm or forearm if you were trying to convey a letter to them by touch,” explains Bosking. “We find that using this dynamic sweeping of activity across the visual cortex allows the subjects to perceive and discriminate letters reliably.”

Visual scenes

The Orion VCP system consists of a camera that captures an image of the scene in front of the patient; a visual processing unit, worn on the subject's belt, that performs some filtering of the camera image; and a transmitter worn on a headset that delivers wireless data and power to a receiving coil implanted under the skin. The implant also contains circuitry to convert the incoming signals into the currents sent to the electrodes, as well as the electrode array itself: a flexible sheet with 60 embedded electrodes that lies on the surface of the visual cortex.

“Our research team focuses on understanding how to use the implanted electrodes to produce the best visual experience for the patients. Second Sight is the hardware manufacturer in this case, and we are trying to understand how to best use that hardware to allow the patients to see and discriminate visual forms,” says Bosking.

Bosking notes that with VCPs tested in the past, subjects did not really see or perceive coherent visual forms; instead, they tended to see blobs of light at various locations in front of them. “Our new electrical stimulation protocol … produces more coherent perception of visual form, and sends less current to the brain,” he adds.

Looking ahead, the team hopes to test its stimulation protocol in VCPs that have a greater number of implanted electrodes – hundreds to thousands as opposed to the 60 used in the current device. The researchers also want to work with computer vision specialists, engineers and visual neuroscientists to optimize algorithms that could be used to convert a camera image into dynamic stimulation sequences, which could convey entire visual scenes to the patients.

“This would mean continuously updating and identifying each of the salient objects and contours in the scene, and converting these into dynamic sequences that would be delivered to the electrode array in an interlaced fashion,” Bosking explains. “We hope that these techniques will prove useful in future VCPs no matter what type of electrodes or other stimulation technologies are used.”

Bio-bots with spinal cords have natural walking rhythm

A research team from the University of Illinois at Urbana-Champaign has successfully integrated an intact spinal cord with muscle cells to make moving “bio-bots”. The integration of muscle and spinal cord gives the bio-bots’ movement a natural walking rhythm.

Three-dimensional structures that mimic parts of the human body can be used as models to help researchers understand how the actual systems in the body work, or to test new therapies or drugs. Previous versions of bio-bots could move when their muscles contracted. Now, with a spinal cord, the bio-bots can move with a more natural rhythm.

These bots were made for walking

To make the bots, the researchers 3D-printed a small skeleton consisting of two supports and a flexible bridge between them. They then added muscle cells to the skeleton, which grew into a ring of muscle.

One of the more difficult parts of this work was integrating an intact rat spinal cord: no one had previously managed to keep an intact rat spinal cord growing once it was removed from the animal. After extracting the spinal cord, the researchers added it to the skeleton along with the muscle cells. After one week, they saw that junctions had formed between the spinal cord and muscles. These junctions looked like those formed when neurons connect to muscles in the human body and appeared to show that the spinal cord and muscles were communicating.

To make the muscles contract, the researchers added glutamate to the outside of the bio-bots. Glutamate is a molecule that nerve cells use to send signals to other cells, and adding it to the bio-bots caused their muscle cells to contract in a walking rhythm. Once the glutamate was washed away, the movement stopped.

Taking the next steps

This research marks the first time that a rat spinal cord has been integrated with muscles to make a primitive nervous system. The researchers hope their bio-bots could be used to study the peripheral nervous system – a difficult part of the body to examine in live animals or humans. Their next step will be to improve the bots’ gait to make it resemble a normal walk.

“The development of an in vitro peripheral nervous system – spinal cord, outgrowths and innervated muscle – could allow researchers to study neurodegenerative diseases such as ALS [amyotrophic lateral sclerosis] in real time with greater ease of access to all the impacted components,” says Colin Kaufman, the paper’s lead author. “There are also a variety of ways that this technology could be used as a surgical training tool, from acting as a practice dummy made of real biological tissue to actually helping perform the surgery itself. These applications are, for now, in the fairly distant future, but the inclusion of an intact spinal cord circuit is an important step forward.”

The bio-bot is described in APL Bioengineering.

Microwave timing signals get hundredfold boost in stability

The best-ever conversion of an optical clock’s time signal to a microwave signal has been demonstrated by Takuma Nakamura, Frank Quinlan and colleagues at the National Institute of Standards and Technology (NIST) in the US. The team achieved the result through a setup involving optical frequency combs and state-of-the-art optical detectors. Their achievement could have significant benefits for fields as wide-ranging as navigation, astronomy, and tests of fundamental physics.

By exploiting the highly stable time signals produced as electrons oscillate between atomic energy levels, today’s optical clocks enable researchers to measure time with unparalleled degrees of accuracy. Indeed, optical clocks produce much better time signals than conventional atomic clocks, which operate at much lower microwave frequencies.

However, optical clocks are difficult to use in most practical timing applications because their optical signals oscillate much too quickly to be counted by electronic devices. To take full advantage of the clock’s stability, the optical signal must be converted to an electronics-friendly microwave signal. Two decades ago, the “optical frequency comb” (OFC) was developed to make this conversion. An OFC produces femtosecond laser pulses with an optical frequency spectrum containing sharp peaks (or tones) that are evenly spaced in frequency – much like the teeth of a comb.

Frequency gap

Importantly, the frequency gap between these tones is the same as the frequency at which the pulses are emitted by the laser – and this frequency is as stable as the optical time signal itself. When the train of comb pulses is fired at an optical detector, the output is a stable timing signal at a microwave frequency.
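The comb relation behind this can be written in a couple of lines. Each tone sits at f_n = f_ceo + n × f_rep, where f_rep is the pulse repetition rate and f_ceo is the comb's offset frequency; the numbers below are illustrative assumptions, not the actual parameters of the NIST setup.

```python
# Minimal sketch of the frequency-comb relation; all values are
# illustrative assumptions, not the parameters of the NIST experiment.
# Each comb tooth sits at f_n = f_ceo + n * f_rep, so the gap between
# neighbouring optical tones equals the pulse repetition rate f_rep --
# the microwave frequency a photodetector extracts from the pulse train.

f_rep = 1.0e9    # pulse repetition rate, 1 GHz (assumed)
f_ceo = 0.35e9   # carrier-envelope offset frequency (assumed)

def tooth(n):
    """Optical frequency of the n-th comb tooth."""
    return f_ceo + n * f_rep

# A tooth index around 200,000 puts the tone near 200 THz, i.e. in the
# optical band where clock transitions live.
n = 200_000
assert tooth(n + 1) - tooth(n) == f_rep  # tone spacing = repetition rate
print(f"tooth {n}: {tooth(n)/1e12:.3f} THz, spacing {f_rep/1e9:.1f} GHz")
```

Because the spacing inherits the stability of the optical reference, detecting the pulse train directly yields a microwave signal locked to the clock.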

While this conversion is straightforward in principle, in practice it has proven very difficult to develop technology that delivers a microwave signal with the desired accuracy and stability. For one thing, the optical detectors themselves can introduce instability to the microwave signal.

Now, Nakamura, Quinlan and colleagues have made an important step forward by improving both their OFC design and optical detectors. The team used signals from two optical clocks as inputs to two independent OFCs that each produced a pulse train. These pulses were sent to two different photodiode detectors to create two independent microwave signals. By comparing these two signals, the team concluded that the microwave timing signals remained in phase with the clocks’ signals to within an error of just 1 part in 10¹⁸. This makes the signal around 100 times more stable than the current best microwave time sources.

The team envisages numerous possible applications for these stable sources. The ability to maintain stable and accurate microwave signals over large distances would enable more effective synchronization between radio telescope arrays; improvements to radar accuracy; and better communications between satellites. Elsewhere, it could allow for more effective tests of fundamental physics and could even bring about long-awaited improvements to the current SI definition of the second.

The research is described in Science.

Innovation must continue in a COVID-19 world

Can a fusion-reactor design be tested from home? Can your Japanese collaborator understand your mass-spectrometer widget via a shared drawing on a Zoom call? Is an inventor allowed back into their lab to pick up their memory stick? Does your broadband connection work when you need it to? Given the ongoing COVID-19 pandemic, these are just some of the questions now being asked as we encounter a “new normal” in our working lives.

We are in a situation of huge uncertainty. And uncertainty is never great for any business, particularly when it comes to innovation – especially now as most staff are working at home, or at least trying to. Every well-run hi-tech business should ensure that innovation is part of a wider business model, which means that a patent-filing strategy – with its associated cost and time-management – also needs to fit in with that model. If there’s a disconnect here, then there is an increased risk of wasting money, time and effort with little impact.

I work with physicists and engineers from a wide range of entities, including start-ups, spin-outs, universities, small-to-medium enterprises (SMEs) and global corporates. Reliable data about what is happening to hi-tech firms are hard to come by, but as a snapshot there seem to be “pockets” in play: some firms are carrying on as if nothing untoward were happening, some are even increasing their innovative activity, while others are doing less. Physics-based companies arguably lag behind their chemical and biotech counterparts when it comes to protecting and exploiting intellectual property, but the protection of new ideas remains as crucial now as it was before the COVID-19 pandemic.

From small to large

Before the pandemic, companies had systems and procedures in place to drive, capture and protect innovation. For universities, that might be to help justify tax-payer investment in research and eventually generate income from spin-outs. For SMEs, a good patent strategy might go hand-in-hand with protecting market share and cementing collaborations. With corporates, the same may be true, but might also form part of a wider “numbers game” where thickets of patent filings could have different tactical use.

It seems that some start-ups and spin-outs are now in a particular bind. Intellectual property might be their main, or perhaps only, asset. If they are lucky enough to have long-term funding, then day-to-day operations are likely to be okay, for now, and it could even be time to push for more protection. But what if funding was already tight or upcoming funding looks shaky?

This is tricky. Pressing pause can only last so long with patent-office deadlines still looming, even if temporarily extended. It might well be that the protection of certain innovations needs to be ramped down, or jurisdictional coverage restricted. Or perhaps patent rights will have to be abandoned, which would hit the value or strength of the business.

For universities, the short-term risks and rewards are perhaps less pronounced. But similar problems do exist. Universities will face financial challenges over the next few years as government funding gets squeezed and international student numbers fall away, which is why many see the protection of innovation as being key to revenue generation. Only time will tell if the short-term pain will justify the possible long-term gain.

For many SMEs, meanwhile, it is common to protect core innovations and also invest in more speculative technology that might be interesting in the future. As a result, such firms might have more options in terms of what to do next. Some will want to push innovation and protection to get ahead of the competition, while others will prefer to pull back and protect their core technology. Both approaches can currently be seen.

Big corporates quite commonly follow an “amplified SME” strategy. They tend to carry out more speculative patent filings to protect against future changes in the direction of their sector. In this case, they will likely ride out the storm, cutting spending on patents if necessary or trimming research investment in more peripheral areas of interest. They may even throw resources behind a particular area of emerging interest. For example, we are seeing an enormous impetus in the medical-devices sector, in terms of physics-related technology, and also in pharmaceuticals.

The new normal

So what will happen next? Removing more of the uncertainties will bring about increased stability. That might be in terms of more concrete timelines being put in place for a new normal – for example, getting used to working from home or a home–lab combination. Maybe in future researchers will do lab work and then go home and study the data. Perhaps this will now happen, regardless of lockdown measures? Stability is never a bad thing for investors and long-term bets will still be in play.

I still think there will be a shift to an even more electronic and connected world, and there’s clearly going to be a vast amount of innovation to go along with that. Will physics companies be involved in this and benefit from it? Yes. After all, new innovations will still be sought in the telecoms, telemedicine and consumer-electronics industries, as well as others.

We are in a state of huge flux and the world has more to worry about than just physics-based innovation. But if we want a brighter future we cannot simply press pause forever on innovation, protection or exploitation and forget about them entirely. Innovating and protecting your innovation is rarely, if ever, a short-term strategy. It’s long term, critical and needs to fit in with a business model. While the COVID-19 pandemic might change some of the inputs and outputs to the strategy, these messages remain true.

Floating visual displays, ultrafast imaging and far-UVC light: the June 2020 edition of Physics World is now out

Cover of Physics World June 2020 issue

Hovering 3D images are the stuff of science fiction. Just think of R2D2 projecting a message from Princess Leia in Star Wars, or perhaps Iron Man designing his latest tech in Marvel movies.

But in a Faraday cage at the University of Bristol in the UK, fiction is becoming reality with the help of some speakers and polystyrene balls.

As Michael Allen finds out in the cover feature of the June 2020 issue of Physics World, which is now out in print and digital formats, floating visual displays created using acoustically levitated particles could lead to galleries of singing heads, and even to advances in contactless manufacturing.

If you’re a member of the Institute of Physics, you can read the whole of Physics World magazine every month via our digital apps for iOS, Android and web browsers. Let us know what you think about the issue on Twitter, Facebook or by e-mailing us at pwld@ioppublishing.org.

For the record, here’s a rundown of what else is in the issue.

• US students hit by extra demands – Graduate students under COVID-19 lockdown complain they are being unfairly asked to carry out more tasks such as online teaching, as Peter Gwynne reports

• From “techno-turkey” to iconic observatory – Launched in 1990, the Hubble Space Telescope has almost single-handedly transformed astronomical research and altered the public’s perception of space. Now, three decades on, attention is turning to its successor, as Keith Cooper reports

• From inflation to science policy – Katsuhiko Sato from the University of Tokyo talks to Graham Jones about his work on inflation and the future of physics in Japan

• Shock and awe – How do you react when something unexpected happens in physics? Robert P Crease runs through the gamut of emotions

• Kicking the habit – We all knew that we should have travelled less and video conferenced more. But with COVID-19, we have no other choice says James McKenzie

• Don’t press pause on innovation – Richard Bray says it’s as vital now as it was before the COVID-19 pandemic for physics-based firms to protect their innovation

• The potential of far-UV for the next pandemic – According to the physicist Charlie Ironside, our ability to deal with future deadly pandemics could be better – if we look to the far-ultraviolet. Jon Cartwright reports on a call to arms for the LED industry

• Making images with sound – Floating visual displays created using acoustically levitated particles could lead to galleries of singing heads, and advances in contactless manufacturing. Michael Allen investigates

• Imaginative intersection – Rachel Brazil reviews Entangle: Physics and the Artistic Imagination edited by Ariane Koek

• Explorer versus salesman – Ian Randall reviews Adventures of a Computational Explorer by Stephen Wolfram

• Safe training: the linac simulator – Marco Carlone, founder of Linax Technologies, talks to Tami Freeman about his motivation for creating an online learning tool that teaches medical physicists how to use and understand linear accelerators

• Ask me anything – Sara Cruddas, space journalist, TV host, author and director

• From the lab to the courtroom – Sidney Perkowitz on crime-solving physicists Wilmer Souder and John H Fisher, who helped further the field of forensics with their research.

Copyright © 2025 by IOP Publishing Ltd and individual contributors