
Ships can monitor and predict ocean waves using new algorithm

The safety and efficiency of ocean-going vessels could soon get a boost from a new algorithm that can monitor and predict incoming ocean waves. Developed by a team led by Zhengru Ren at the Norwegian University of Science and Technology, the system relies only on information about the motions of ships, with no need for external sensor data. Their mathematical approach could benefit global maritime industries by being cheaper and more accurate than existing techniques.

Ocean waves constantly influence the operation of ships and the safety of their crews. To streamline the efficiency of maritime activities, operators must continually monitor surrounding “sea states”, which contain information about the heights, frequencies, and directions of incoming waves. This is often done using information from meteorological sensors, including satellites and floating buoys. However, each of these measurement techniques has shortcomings, relating either to cost or to the real-time accuracy of its measurements.

Ren’s team introduces a more advanced approach, which predicts future sea states based on real-time observations taken aboard a ship. In developing their algorithm, the researchers aimed for a “nonparametric” approach, which can reconstruct sea states based on their influence on a ship’s motion. This would be far more flexible than existing sensor-based methods, but would first require the team to apply several different mathematical techniques to ensure the best possible accuracy.

Bobbing up and down

To reconstruct surrounding sea states, a vessel’s motions are analysed using Fourier transforms, which yield “cross-spectra” describing how the ship bobs up and down. Ren’s team then applied a smoothing function called a Bézier surface, before incorporating an optimization technique to minimize errors originating from a vessel’s unique responses to waves.

Finally, the researchers applied pre-calculated functions named “response amplitude operators”, which can account for the unique geometries of ship hulls. This enabled their calculations to accurately represent the relationship between vessel motions and specific wave heights. With these combined techniques, Ren and colleagues could faithfully reconstruct the motions of incoming waves, based purely on the motions of a simulated ship.
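As a rough illustration of the first step, the cross-spectra of a vessel’s motions can be estimated with Welch’s method. The sketch below uses synthetic heave and pitch signals and SciPy’s `scipy.signal.csd`; all signal names and parameters are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import csd

# Synthetic ship-motion records: heave and pitch responding to a 0.1 Hz swell
# (illustrative signals only, not the authors' data)
fs = 10.0                       # sample rate (Hz)
t = np.arange(0, 600, 1 / fs)   # ten minutes of motion data
rng = np.random.default_rng(0)
heave = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
pitch = np.sin(2 * np.pi * 0.1 * t + 0.5) + 0.1 * rng.standard_normal(t.size)

# Cross-spectral density between the two motion channels (Welch's method)
f, S_hp = csd(heave, pitch, fs=fs, nperseg=1024)

# Wave energy should concentrate near the swell frequency
peak_freq = f[np.argmax(np.abs(S_hp))]
print(f"cross-spectrum peak at {peak_freq:.3f} Hz")
```

In the full method, a grid of such cross-spectra would then be smoothed and inverted through the response amplitude operators to recover the wave spectrum.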

Without any need to carefully tune the parameters of a model, ship operators could drastically reduce both the time and cost required to monitor surrounding sea states. These advantages are enhanced even further because the techniques can be readily applied in real-time scenarios, without any external sensors. Ren’s team now hopes that their algorithm could soon be widely implemented, improving both the safety and efficiency of shipping industries worldwide.

The algorithm is described in Marine Structures.

Deep learning helps doctors predict gastric cancer metastasis


Early detection of metastasis, in which cancerous cells spread through the body, could turn the tide on cancer and enable clinicians to provide suitable therapies. Importantly, the introduction of artificial intelligence (AI) and enhanced image analysis has helped improve diagnostic accuracy. To assist pathologists in identifying metastatic lymph nodes (MLNs), researchers at Xidian University and Changhai Hospital in China have developed a computational approach to predicting the clinical outcomes of patients with gastric cancer.

Stomach cancer, also known as gastric cancer, occurs when cells in the inner lining of the stomach begin to grow abnormally. If left untreated for several years, these abnormal cells may develop into a tumour. Traditionally, experienced pathologists examine excised lymph nodes for the presence of gastric cancer metastases and evaluate the tissue morphology with the aid of an optical microscope. While this is an acceptable standard of practice, the process can be tedious and may lead to human errors.

The multidisciplinary group, led by Guanzhen Yu and Xiyang Liu, developed a deep-learning framework for identifying and analysing micrometastases (with a diameter of less than 2 mm) in lymph nodes. The framework was designed to uncover the tumour-area-to-MLN-area ratio (T/MLN) from whole slide images. The team tested the approach on two independent datasets of gastric cancer patients.

Diagnostic accuracy with AI assistance

The researchers point out that while pathologists possess better specificity in the detection of tumour tissues, AI offers scalability of performance due to its sensitivity and speed. Combining the two could provide the most clinically meaningful outcome.

The researchers first digitized MLN pathology samples and annotated them to create a training dataset. They then subjected this dataset to a deep-learning algorithm for classification and segmentation. These steps resulted in a precise calculation of the proportions of tumour components and lymph nodes in the samples.
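The area calculation behind the T/MLN ratio can be sketched on a toy segmentation mask; the label scheme and geometry below are illustrative, not the study’s actual pipeline.

```python
import numpy as np

# Toy segmentation mask for one lymph-node slide tile (labels are
# illustrative): 0 = background, 1 = lymph-node tissue, 2 = tumour tissue
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:90, 10:90] = 1    # lymph-node region
mask[30:50, 30:50] = 2    # tumour region inside the node

tumour_area = np.count_nonzero(mask == 2)
node_area = np.count_nonzero(mask >= 1)    # tumour pixels are part of the node

t_mln = tumour_area / node_area            # tumour-area-to-MLN-area ratio
print(f"T/MLN = {t_mln:.4f}")
```

In practice the masks come from the deep-learning segmentation of whole-slide images, but the final ratio is exactly this kind of pixel count.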

Clinical pathologists reviewed the deep-learning results and reported a 94.5% consistency in lymph node detection between the AI diagnosis and the original diagnosis. Importantly, the researchers demonstrated that, with AI assistance, a pathologist required an average of 2–6 min to diagnose a patient’s lymph node; without AI assistance, the diagnosis required 3–15 min.

This potential to enhance performance with AI-assisted analysis will improve patient prognosis and shorten the time required for making therapeutic decisions, say the researchers, who report their findings in Nature Communications.

Gastric cancer prediction

One challenge when predicting cancer prognosis is the insufficient information acquired during diagnostic evaluation. However, the deep-learning architecture developed in this study was able to efficiently identify MLNs, thereby reducing the rate of missed diagnoses by human pathologists. The researchers note that cancer patients’ outcomes were correlated with the area of metastatic tumour in the MLN. They could therefore use the AI algorithm to calculate the precise number of tumour cells within the MLN and deploy this as a prognostic marker for gastric cancer.

Prediction results

The researchers point out that the prediction results in this study are representative of a gastric cancer cohort from a single nation. They therefore propose that the AI should be tested in a large-scale clinical trial across several countries. They believe that this would validate the algorithm and enable clinicians to improve treatment outcomes.

Ultrasound detector uses optomechanical silicon photonics to boost sensitivity by 100 times

A highly sensitive optomechanical ultrasound detector integrated onto a silicon photonic chip has been developed by researchers in Belgium and Germany. The team, led by Wouter Westerveld at the Interuniversity Microelectronics Centre (IMEC) in Leuven, showed that its device is 100 times more sensitive than state-of-the-art piezoelectric detectors of identical sizes. Their design could substantially improve the performance of ultrasound detectors in a wide variety of biomedical applications.

Arrays of up to 10,000 piezoelectric ultrasound detectors are widely used to build up non-invasive images of living tissues. Unfortunately these detectors have three major limitations. First, there is a fundamental trade-off between the sizes and sensitivities of each sensor element – the smaller the element, the lower the sensitivity. This makes them unsuitable for the large, intricate arrays required to obtain low-noise, high-resolution ultrasound images.

Second, these sensors rely on mechanical resonance at specific ultrasound wavelengths to enhance the amplitude of their signals – restricting the devices to a narrow range of operational wavelengths. Finally, each sensor in the array requires its own electrical wire to transmit its signal to a computer, significantly driving up the cost of large detectors.

Split-rib waveguide

In this latest study, Westerveld’s team overcame each of these challenges using a new optomechanical ultrasound sensor (OMUS). Their design was based on a “split-rib” silicon photonic waveguide, containing a main part attached to a moveable membrane and a thinner “rib” attached to a fixed substrate. The main part of the waveguide was arranged in a ring shape, causing it to act as a photonic resonator. The two parts were separated by a tiny gap just 15 nm across, which contained an intense electric field.

When an ultrasound wave distorted the membrane even slightly, the electric field generated a large change in the waveguide’s refractive index – altering the resonant wavelength of the ring-shaped rib in turn. Using a tuneable laser, the researchers could then read out this wavelength in real time, producing a highly accurate signal.
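The scale of this effect can be illustrated with the standard first-order estimate for a ring resonator, Δλ ≈ λ·Δn_eff/n_g. The group index and index change below are assumed, illustrative values, not figures from the paper.

```python
# First-order estimate of a ring resonator's response to an index change:
#   delta_lambda = lambda_res * delta_n_eff / n_g
# All numbers are illustrative assumptions, not values from the paper.
lambda_res = 1550e-9    # resonant wavelength (m), typical telecom band
n_g = 4.2               # group index of a silicon waveguide (assumed)
delta_n_eff = 1e-4      # effective-index change from membrane motion (assumed)

delta_lambda = lambda_res * delta_n_eff / n_g
print(f"resonance shift ~ {delta_lambda * 1e12:.1f} pm")
```

A tuneable laser parked on the flank of such a resonance converts picometre-scale wavelength shifts into large intensity changes, which is what makes the readout so sensitive.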

Silicon integration

Westerveld’s team determined that an OMUS measuring 20 µm across is over 100 times more sensitive to ultrasound waves than an identically sized piezoelectric counterpart. In addition, the sensors could be operated over a broad range of ultrasound wavelengths, while the signals produced by multiple devices could be read out using a single optical fibre. Taking advantage of these improvements, the researchers demonstrated how large, low-cost OMUS arrays could be integrated onto a silicon photonic chip.

The team describes its sensor as a game changer for deep-tissue imaging. With such a low ultrasound detection limit, the OMUS is highly suitable for biomedical applications including mammography and tumour detection. It could even be used in miniaturized catheters, and to carry out non-invasive brain imaging through the skull – which was highly impractical in the past due to the strong ultrasound attenuation of bone.

The research is described in Nature Photonics.

Advanced X-ray imaging creates sound-frequency maps of the human inner ear

Have you ever wondered how auditory information is transmitted from the inner ear to the brain, or where exactly in the inner ear this takes place? Thanks to work by a research collaboration between Uppsala University and Western University, it is now possible to find out.

Using a novel imaging technique called synchrotron radiation phase-contrast imaging (SR-PCI), the team has performed the first three-dimensional frequency analysis of the human cochlea, showing where the various sound frequencies are captured.

The human cochlea is a spiral structure of the inner ear. Sound vibrations are transmitted to the cochlea and then transduced into electrical activity along the basilar membrane (BM). The BM is a soft-tissue structure that categorizes different acoustic vibrations based on their frequency and produces a spatial frequency map in the cochlea.

Since the late 1990s, researchers have attempted to image the fine structures of the human inner ear using synchrotron radiation, but the technique could not resolve the boundaries between the BM and the rest of the cochlea. To overcome this, one solution was to use contrast agents for better soft-tissue visualization; however, non-uniform distribution of contrast and tissue shrinkage caused problems. And while other researchers have used SR-PCI, they could not develop complex 3D frequency maps of the cochlea. Although some attempted 3D reconstruction from two-dimensional histological sections, the process was laborious and prone to artefacts.

The working principle

Now, the research collaboration – led by Helge Rask-Andersen at Uppsala University and by Hanif Ladak and Sumit Agrawal at Western University – has successfully created a three-dimensional representation of sound-frequency mapping in the human cochlea, using SR-PCI to image adult human cadaveric cochlea. The team performed the SR-PCI study at the Canadian Light Source in Saskatoon, publishing the results in Scientific Reports.


SR-PCI is unique because it can enhance soft-tissue contrast while minimizing artefacts that may be introduced through staining, sectioning and decalcification in histopathology. In SR-PCI, varying material properties within the sample cause phase shifts that are then transformed into detectable variations in X-ray intensity. These variations can help to provide edge contrast to highlight soft tissues.

The new 3D cochlear model shows where the various frequencies of sound are captured and reveals the detailed anatomical structure of the intact cochlea. This offers many advantages. First, accurate tonotopic frequency distributions could result in improved surgical outcomes for cochlear implant recipients. In addition, this new knowledge could help to better individualize the programming of cochlear implants for future patients, so that each area in the ear can be stimulated with the correct frequency. This will help to improve the sound quality for cochlear implant users.
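A standard way to express such a frequency-place relationship is Greenwood’s function for the human cochlea. The sketch below uses the textbook parameters from Greenwood’s 1990 fit – a generic map, not the model derived in this study.

```python
# Greenwood's frequency-position function for the human cochlea:
#   f(x) = A * (10**(a * x) - k),
# where x runs from 0 at the apex to 1 at the base. The parameters are
# Greenwood's standard human fit -- a textbook map, not the model
# derived in this study.
A, a, k = 165.4, 2.1, 0.88

def greenwood(x):
    """Characteristic frequency (Hz) at relative cochlear position x."""
    return A * (10 ** (a * x) - k)

print(f"apex (x=0): {greenwood(0.0):.0f} Hz")    # lowest audible frequencies
print(f"base (x=1): {greenwood(1.0):.0f} Hz")    # around 20 kHz
```

Individualized 3D maps of the kind produced here would refine exactly this position-to-frequency assignment for each implant recipient.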


CZT detector technology: ready to shine in next-generation medical imaging systems

H3D is a US technology and research-focused business that provides high-performance imaging spectrometers for real-time identification and localization of gamma-ray sources. Over the past decade, the Ann Arbor, Michigan-based manufacturer has made its name serving a diverse base of end-users with specialist measurement solutions for gamma-ray imaging and spectroscopy. That customer base includes government agencies and nuclear first-responders, radiation safety officers at nuclear power plants, and international inspectors tasked with safeguarding and compliance in accordance with nuclear non-proliferation treaties.

“Our commercial instruments are based around cadmium zinc telluride (CZT) radiation detectors – a technology that combines industry-leading energy resolution and spatial resolution with room-temperature operation,” explains Willy Kaye, founder and CEO of H3D. As well as 10 full-time, PhD-level staff working on CZT device development and optimization, H3D draws on a broader pool of engineering and technical talent across the regional supply chain – most notably the Detroit car industry. “The emphasis at H3D is on robust, fully integrated radiation measurement solutions ready for field deployment,” Kaye adds. “To make this possible, we leverage expertise from a range of industry partners to ensure best practice in areas like manufacturability, packaging, mechanical testing, control electronics and software.”

The logic of diversification

If that’s the back-story, the next chapter in H3D’s development is already taking shape, building on those solid foundations to address CZT growth opportunities in the medical imaging market. Front-and-centre in H3D’s diversification effort is the M400 Series, a compact and customizable module capable of high-resolution gamma spectroscopy in a range of medical imaging applications. “The M400 is an off-the-shelf commercial product that can be easily ‘tiled’ to form imaging arrays with enhanced sensitivity,” says Kaye. “We anticipate broad applicability across multiple imaging modalities and clinical use cases.”

Right now, H3D’s engagement with the medical imaging community manifests in a targeted programme of R&D collaboration. It looks like a win-win: proof-of-concept experimental projects allow H3D engineers to gather custom requirements at scale, while simultaneously educating medical imaging specialists about the versatility and capability of CZT technology. One such collaboration is with the Maryland Proton Treatment Center in Baltimore. This six-year R&D effort, funded by the US National Institutes of Health (NIH), is evaluating the potential of CZT imaging in proton cancer therapy, an advanced form of radiotherapy that allows precise radiation delivery to complex tumour volumes while sparing healthy tissue and organs-at-risk.

Specifically, the M Series forms the basis of a high-energy, high-flux spectrometer that’s able to image high-energy gamma rays produced during proton therapy (the spectrometer being tiled up with 16 M400 units – i.e. 64 CZT crystals and a total crystal volume of 310 cm³). Although this is still early-stage evaluation work, the long-term objective is an on-board imaging system to ensure that the bulk of the radiation payload from the proton treatment beam is deposited into the tumour rather than adjacent healthy tissue.


“The prompt gamma-ray emissions from proton interactions with tissue can be used to monitor the proton beam in vivo, providing real-time knowledge of tumour location, beam position, and beam penetration depth during treatment,” explains Hao Yang, a product development engineer at H3D.

“Worth noting,” he adds, “that the readout system is optimized for the high-flux conditions of proton cancer therapy, though ultimately it’s the system integration aspects that will be key to successful commercial engagement with the proton therapy OEMs.”

Functional imaging

In a separate collaboration, H3D engineers are investigating opportunities in small-animal positron emission tomography (PET), a functional imaging technique that uses radioactive tracers to track changes in the metabolism of diseased tissue (e.g. cancerous tumours) and physiological processes such as blood flow and chemical absorption. As such, small-animal PET provides a key imaging modality for the preclinical evaluation of new drugs, allowing biomedical scientists to track a drug’s behaviour over time versus a range of metrics such as treatment efficacy, biodistribution, toxicity and excretion. These small-animal studies, in turn, inform the regulatory approvals process ahead of advanced clinical trials in human patients.


With this in mind, H3D has developed a prototype imaging detector (based on four specially arranged M400 modules) for applications in small-animal PET studies. The custom system, which is currently being evaluated by researchers at the University of Illinois at Urbana-Champaign, is capable of achieving 0.5 mm spatial resolution (versus 1 mm for the best commercial scintillators). “This is a significant step forward for small-animal PET and the best positional resolution of any traditional PET detector,” claims Kaye. Separately, the Urbana-Champaign scientists have also purchased 50 M400 modules to evaluate novel detector arrays for next-generation functional imaging systems.

Another application of interest for H3D is the monitoring of radioisotope distribution within the body during diagnostic or therapeutic medical procedures. The gamma-emitting technetium-99m, for example, is used for functional imaging of the skeleton and a range of organs (including the heart, liver, kidney and gall bladder), while iodine-131 is widely deployed for the treatment of an overactive thyroid gland or as a post-surgical follow-up when treating thyroid cancer.

In this scenario, H3D’s gamma-ray imaging spectrometers enable clinicians to validate their organ transport models for radiopharmaceuticals by providing precision overlay of gamma and optical images of the patient in real-time. “While we are confident that we have developed a truly unique measurement tool,” concludes Kaye, “as sensor developers we will need to rely on the creativity of our potential partners to figure out how to improve patient outcomes based on this technology.”

Product focus: the M400 Series

Versatile by design

H3D’s M400 Series is a custom integrable CZT detector module for high-resolution gamma spectroscopy and imaging. The product, which can be used as a single detector or tiled together for increased sensitivity, is suitable for a range of OEM system applications including drones, robots and medical imaging arrays. Technical specifications include:

  • Spatial resolution: <0.5 mm (≥140 keV)
  • Energy resolution: ≤1.1% FWHM at 662 keV (coincident interactions combined); enhanced resolution version also available (≤0.8% FWHM at 662 keV)
  • Crystal volume: >19 cm³ CZT (484 detector pixels)
  • Spectroscopy range: 50 keV to 3 MeV
  • Compton imaging range: 250 keV to 3 MeV (optional)
  • Dimensions: 10.2×5.3×5.3 cm and 0.6 kg

Free-space laser link beats the stability of optical clocks

Physicists in Australia have demonstrated how to create an exceptionally stable laser link to send frequency information through the atmosphere. The researchers say that fluctuations in the laser’s frequency are so minuscule that after just a few seconds of averaging such a link could be used to flawlessly transmit timing signals from the world’s most accurate optical clocks. This, they argue, offers the prospect of a global timing network that uses satellites to synchronize optical frequencies between continents.

This ability to connect optical clocks globally could potentially allow physicists to test general relativity, search for dark matter and detect any variabilities in the fundamental constants. It might also be used to improve satellite-based navigation and timing, as well as geodesy – thanks to the effects of gravitational time dilation at varying altitudes.

Optical clocks can now achieve uncertainties about 100 times lower than microwave-frequency, caesium-based atomic clocks – at roughly 1 part in 10¹⁸ – while their frequencies have been compared by linking them through optical fibre at distances of up to nearly 2000 km. But extending such comparisons across the globe will be tough, given the huge cost of laying a dedicated fibre with suitable amplifiers. Satellite radio links, on the other hand, are fine for microwave clocks but are orders of magnitude too imprecise for optical timekeepers.

As David Gozzard, Lewis Howard and colleagues at the University of Western Australia in Perth explain in a preprint uploaded to the arXiv server, any link must have a more stable frequency than the optical clocks it connects. Otherwise, the supreme accuracy of those clocks will go to waste. Stability can be raised by averaging a signal over longer times, but the time available is limited – with some experts anticipating that optical clocks might soon become stable to one part in 10¹⁸ after running for just 100 s.
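Stability of this kind is conventionally quantified with the Allan deviation, which for white frequency noise falls as one over the square root of the averaging time – the reason longer averaging yields a better comparison. A minimal sketch on synthetic data (not the experiment’s):

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency data y at an
    averaging factor of m basic sample intervals."""
    csum = np.cumsum(np.insert(y, 0, 0.0))
    ybar = (csum[m:] - csum[:-m]) / m     # overlapping m-sample averages
    diffs = ybar[m:] - ybar[:-m]          # adjacent averages, m samples apart
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(1)
y = 1e-18 * rng.standard_normal(100_000)  # white frequency noise (illustrative)

# The Allan deviation falls as 1/sqrt(tau) for this noise type
for m in (1, 100, 10_000):
    print(f"tau = {m:>6} samples: sigma_y = {allan_deviation(y, m):.2e}")
```

Real links show more complicated noise, but the principle is the same: the longer the averaging window, the deeper the fractional stability.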

Atmospheric turbulence

The challenge for developers of free-space links is overcoming atmospheric turbulence. Fluctuations in the refractive index along the path of the laser slightly speed up or delay the light’s arrival, leading to phase instability. What is more, turbulence also causes the beam to wander off target and scintillate. This diminishes the beam intensity very briefly but repeatedly, with the loss of signal limiting the averaging time and with it the frequency stability.

To demonstrate how to overcome these problems, Gozzard and colleagues set up a laser transmitter and receiver on the rooftop of their university’s physics department. They then measured the stability of a beam bounced off a corner-cube reflector on another roof 1.2 km away. That 2.4 km horizontal round trip, they say, had about the same level of turbulence as would a link established between the ground and a satellite about 500 km up in a low-Earth orbit.

To maximize the system’s stability, the researchers continually realigned the reflected beam using a “tip-tilt” mirror moving in response to the fluctuating output from a photodetector that could capture intensity drops lasting less than a millisecond. In tandem, they used a phase-locked loop together with an acousto-optic modulator to shift the light’s frequency.

Free-space attempts

This is not the first attempt to stabilize frequency transfers in free space. The Australian group and colleagues in France reported in January having achieved a stability of 1.6 parts in 10¹⁹ after just 40 s of averaging. This compares well with 6 × 10⁻¹⁹ after about 30 h of averaging, reported last year by researchers at the National Institute of Standards and Technology in the US when using a 1.5 km open-air link with optical clocks at either end. A group at the Korea Advanced Institute of Science and Technology has also carried out similar measurements on an 18 km open-air link.

However, the latest work pushes up stability significantly. Carrying out their experiment for two weeks in September 2020, Gozzard and colleagues found that the phase-stabilization technology alone allowed a fractional stability of 1 × 10⁻¹⁹ after averaging for a minute. By also stabilizing the amplitude, they were able to reduce signal loss and raise the stability to about 6 × 10⁻²¹ after 5 min. As they point out, this happens to be about the length of time that contact can be maintained with a satellite in low Earth orbit.

They then worked out what these figures would mean when linking up with a satellite in orbit. They found that the lower signal bandwidth due to the greater distance would slightly lower stabilities but still keep the system extremely competitive with the best optical clocks. They calculate that frequency comparisons would be limited by the clocks’ own instabilities after just a few seconds of averaging.

To test the technology for real, Gozzard and his colleagues are currently building a 0.7 m telescope on the ground and are hoping to get access to a satellite from either the French space agency CNES or a private company. He points out that they will have to contend with quite severe Doppler shift – a satellite’s movement towards and away from the ground station will cause the incoming signal to abruptly rise and fall by about 10 GHz. But he reckons that experience gained by the team dealing with high-precision microwave shifts on the Square Kilometre Array radio telescope should help them maintain current precision. “We’re confident we can adapt to the Doppler shift,” he says.
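The quoted ~10 GHz swing follows from the first-order Doppler formula Δf = f·v/c. The carrier wavelength and line-of-sight speed below are assumed, illustrative values rather than numbers from the team.

```python
# First-order Doppler check for a low-Earth-orbit pass. The carrier
# wavelength and line-of-sight speed are assumptions, not the team's values.
c = 299_792_458.0       # speed of light (m/s)
wavelength = 1550e-9    # optical carrier wavelength (assumed), m
v_los = 7.5e3           # peak line-of-sight speed of a LEO satellite (m/s)

f_carrier = c / wavelength
doppler_swing = 2 * f_carrier * v_los / c    # sweep from +v to -v over a pass

print(f"carrier ~ {f_carrier / 1e12:.0f} THz, "
      f"Doppler swing ~ {doppler_swing / 1e9:.1f} GHz")
```

With these assumptions the carrier sits near 193 THz and the swing comes out just under 10 GHz, consistent with the figure Gozzard quotes.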

Promethean Particles spins out continuous process for making nanoparticles

Promethean Particles produces nanoparticles using a technique called hydrothermal synthesis; how does this process work?

Hydrothermal just means hot water. If you ever made a crystal garden when you were younger, you know that it involves dissolving large amounts of coloured metal salts in a small jar of water. What this does is create a super-saturated solution where the metal ions are ready to effectively “crash out” at the first opportunity. Put a piece of string into this solution and impressive-looking crystals grow onto the string over the next few days or weeks.

Hydrothermal synthesis has been used for hundreds of years to make large crystals (on long timescales) using a very similar process but in large batch autoclaves. In contrast, our crystallization process takes seconds because we are only growing nanocrystals, which are very small clusters of atoms, maybe 100 to 10,000 atoms in size. The nucleation of these particles is instantaneous and occurs in a continuous flow process. This flow process is designed to create a super-saturated solution just for the briefest moment, allowing nanoparticles to form.

How did you develop and commercialize the process?

The original idea for our continuous process came from Japan in the early 1990s, when a very well-known academic called Tadafumi Adschiri described a continuous process using two flows (a very hot flow and a cold flow containing the dissolved metal salts) introduced together in a high-pressure “T” reactor arrangement. At first, we struggled to make nanoparticles in a way that avoided blockages. It took a few years to solve this problem because it was tricky to understand what was happening inside our steel reactor during the high-temperature, high-pressure process.

Once we perfected the process and reactor design, we filed a patent and started generating interest from companies that might want to take it forward. Eventually we took the step to commercialize the technology ourselves and formed Promethean Particles at the end of 2007. The route to commercialization is always an interesting one and the choice of whether to license intellectual property or create a spin-out company is one that academics with good ideas must wrestle with.


Did you get guidance from the University of Nottingham when making that decision?

Like most universities, Nottingham has a tech transfer office, and we discussed the possibility of a spin-out with them. We had a market survey commissioned to try to understand the potential marketplace for our products. The results were overwhelmingly positive, and so this gave us the initial confidence that there was a market for this technology.

In particular we discovered that industrial users of nanomaterials were frustrated with the quality of the nanoparticles they were buying and the lack of available industrial scale for other nanoparticles they would otherwise be interested in. One of the key things we offer our customers is the ability to enable them to achieve production scale. It was clear that we could meet their needs in different market sectors from coatings to medical applications, but the technology needed developing and scaling up to meet industrial demand. This positioning was key to our decision to spin out the technology rather than to license it.

Promethean Particles opened a manufacturing facility in Nottingham in 2016. What type of nanoparticles do you make there and what are they used for?

The first products that we made were ceramics – metal oxides that can be used in reflective coatings, high-temperature materials, catalysts or as strengthening materials for fabrics. Today, we can make materials in eight different materials classes including metals, metal oxides and metal-organic frameworks. Furthermore, within some of those classes (e.g. doped metal oxides), there is a virtually limitless combination of materials that we can produce. These can be used in everything from batteries to antiviral coatings.

We produce both standardized and bespoke materials. We work closely with our customers to understand their needs and take them through our new product development process that first designs the nanomaterial solution, then develops it so it can be scaled, before delivering them the products that they are looking for.


Can you give a flavour of some of the products you are currently working on?

We are working in several different market sectors and we have a few products that are either in or close to market.

We are working on a product that stops ice from building up on aeroplanes. Currently, de-icing is an expensive, time-consuming process that can lead to flight delays. It is done by spraying chemicals on aircraft, which removes ice and creates a temporary coating that prevents ice from forming – but nanoparticles could offer a better, more sustainable route to keeping the outside of the plane ice free. Nanomaterials are already used in exterior coatings to help aeroplanes fly more efficiently. By altering the morphology of these particles and their surface chemistry, we can stop water molecules building up and prevent the formation of ice. Essentially, this is the creation of a textured super-hydrophobic coating, which is also how self-cleaning glass works.

Because of the COVID-19 crisis, there has been great interest in the antiviral and antimicrobial additives that we can produce. We have been developing copper and silver nanoparticle-based inks for printing circuit boards, but it turns out that these metallic nanoparticles are also very good at inactivating SARS-CoV-2, the virus that causes COVID-19. We have other healthcare/PPE applications that are currently undergoing testing. Healthcare has become a much bigger thing for us in the past year.

We also get a lot of interest in our metal-organic frameworks (MOFs), which are porous materials that have an extremely high surface area. Indeed, a sample of MOF powder that you can hold in your hand has the same surface area as an office block. This property means these MOFs can be used for gas storage, gas capture or chemical filtration.
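The office-block comparison is easy to sanity-check. Both numbers below are illustrative assumptions: high-performing MOFs reach several thousand square metres of internal surface per gram, and a small handheld sample is of order 10 g.

```python
# Sanity check on the office-block comparison. Both numbers are
# illustrative assumptions, not Promethean's figures.
specific_area = 5000.0    # m^2 per gram (assumed, typical of high-end MOFs)
sample_mass = 10.0        # grams (assumed handheld sample)

total_area = specific_area * sample_mass
print(f"total internal surface ~ {total_area:,.0f} m^2")
```

At 50,000 m², that is indeed comparable to the total floor area of a large office building.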

We also make a dispersion that can be used as a thermal fluid to improve the efficiency of heating and cooling systems including air conditioning units. Reducing energy consumption is a key sustainability goal. More efficient heating and cooling systems would make a significant difference to global energy demand.

Bad metals turn over a new leaf

Bad metals may not be so bad after all – at least, not for theorists.

Unlike conventional metals, where electrons travel freely with few interactions and little resistance, bad metals contain electrons that move slowly and exhibit strong correlations with their neighbours. This behaviour is so atypical for metals that physicists have long regarded it as incompatible with existing theories.

Now, however, an international team led by researchers from the Vienna University of Technology in Austria have found that it may be possible to describe bad metals with conventional theories after all. This finding, based on optical spectroscopy of metallic crystals that can be made into insulators by slightly changing their chemical composition, could also advance our understanding of materials such as high-temperature superconductors, for which a complete theory is still lacking.

At the metal-to-insulator transition

Led by Andrej Pustogow, the team focused on metallic materials known as molecular charge-transfer salts. These plate-like single crystals are roughly 1 × 1 × 0.3 mm in size and can be grown in the laboratory using electrochemical techniques. While they typically take on the properties of a metal, if small amounts of selenium are incorporated into their structure, they become insulators. Notably, at the transition point between metal and insulator (known as the Mott transition), the crystals’ electrical resistance becomes extremely large – much larger, in fact, than should be possible according to conventional theories of metals.

Pustogow and colleagues suspected that this effect could be frequency-dependent. To test their hypothesis, they studied the material’s optical conductivity, which is the electrical conductivity in the presence of an alternating electric field. (“Optical” in this context covers the entire frequency range, not just the visible part of the electromagnetic spectrum.) In their technique, they took advantage of the fact that, in their insulating form, the crystals are transparent to infrared light over a large range of wavelengths, like glass.

When they measured how much light this bad metal reflected and transmitted at different infrared frequencies, they found that while it conducts hardly any optical current at low light frequencies, it behaves like a conventional metal at higher frequencies, conducting infrared current well.
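The conventional baseline against which this behaviour looks anomalous is the textbook Drude model, in which the real part of the optical conductivity is largest at low frequency and rolls off above the scattering rate. The sketch below uses that standard formula with assumed parameter values (not the team's data or analysis) to show the ordinary trend that the bad metal inverts:

```python
import numpy as np

# Textbook Drude model of optical conductivity (illustrative baseline only;
# the measured bad metal deviates from this at low frequency):
#   sigma(omega) = sigma_dc / (1 - i * omega * tau)
# All parameter values below are assumed for illustration.
sigma_dc = 1.0e5        # d.c. conductivity, S/m (assumed)
tau = 1.0e-14           # electron scattering time, s (assumed)

omega = np.logspace(11, 15, 5)                 # angular frequencies, rad/s
sigma = sigma_dc / (1.0 - 1j * omega * tau)    # complex optical conductivity

# In a conventional metal Re(sigma) is largest at low frequency and falls
# off above 1/tau -- the opposite of the suppressed low-frequency optical
# conductivity measured in the bad metal.
print(np.real(sigma))
```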

Defects may be responsible

According to Pustogow and co-workers, low levels of defects, or impurities, in the material may be responsible for this behaviour. As the material transitions to an insulating state, the impurities are no longer adequately shielded by the metallic phase, so they start to prevent some areas of the crystal from conducting electricity. This occurs because electrons remain localized (correlated) in these areas instead of moving through the material.

“Our results show that optical spectroscopy is a very important tool for answering fundamental questions in solid-state physics,” Pustogow says. “Many observations for which it was previously believed that exotic, novel models had to be developed could very well be explained by existing theories if they were adequately extended.”

The new work could also shed fresh light on the physics of high-temperature, or “unconventional”, superconductors. These materials, which are related to bad metals in that their electrons are also strongly correlated, were discovered more than three decades ago, but are still not fully understood.

The researchers, who report their work in Nature Communications, say they now plan to perform similar studies on “strange” metals and other materials classed as non-Fermi-liquids.

Quantum computer based on shuttling ions is built by Honeywell

A quantum charge-coupled device – a type of trapped-ion quantum computer first proposed 20 years ago – has finally been fully realized by researchers at Honeywell in the US. Other researchers in the field believe the design, which offers notable advantages over other quantum computing platforms, could potentially enable quantum computers to scale to huge numbers of quantum bits (qubits) and fully realize their potential.

Trapped-ion qubits were used to implement the first quantum logic gates in 1995, and the proposal for a quantum charge-coupled device (QCCD) – a type of quantum computer with actions controlled by shuffling the ions around – was first made in 2002 by researchers led by David Wineland of the US National Institute of Standards and Technology, who went on to win the 2012 Nobel Prize for Physics for his work.

Quantum gates have subsequently been demonstrated in multiple platforms, from Rydberg atoms to defects in diamond. The quantum computing technology first adopted by the IT giants, however, was based on solid-state qubits. In these, the qubits are superconducting circuits, which can be mounted directly onto a chip. These rapidly surpassed the benchmarks set by trapped ions, and are used in record-breaking machines from IBM and Google: “Working with trapped ions, I would be asked by people, ‘Why aren’t you working with superconducting qubits? Isn’t that race pretty much already settled?’,” says Winfried Hensinger of the UK’s University of Sussex.

Progress is slowing

Recently, however, the progress made using superconducting circuits appears to be slowing as quantum computers integrate more and more qubits. To interact properly, the qubits must be identical and, whereas two copies of the same ion are guaranteed by quantum mechanics to be indistinguishable, fabricating identical circuits is near-impossible. Fabrication directly onto a chip also places superconducting circuits in thermal equilibrium with the chip: “If you build a superconducting qubit-based quantum computer, you have to cool that machine all the way to millikelvin temperatures,” says Hensinger. “That works fine if you have 10, 100…maybe 1000 qubits, but it’s going to be really challenging when you go to really large numbers.”

Some large companies have recently shown interest in the trapped-ion platform, among them the multinational technology conglomerate Honeywell, which formed Honeywell Quantum Solutions in 2020 to focus solely on the technology.

The firm’s latest result, unveiled in Nature, is the first demonstration of a fully functional QCCD. The device uses ytterbium-171 ions as qubits, which are chilled to their quantum ground states by barium-138 ions using a process called sympathetic cooling. The setup is contained in a linear trap above a chip cooled to around 10 K in a vacuum chamber. Ions held within the trap are shuffled between positions by dynamic electric fields, while quantum logical operations on the ions are performed by laser beams.

Teleported CNOT gate

The researchers demonstrate a sufficient set of gates to perform universal quantum logic. In addition, they created a teleported CNOT gate, which allows for non-destructive mid-circuit measurement – a crucial component for quantum error correction.
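For readers unfamiliar with the CNOT (controlled-NOT) gate, its action is easy to state concretely: it flips the target qubit exactly when the control qubit is in state |1⟩. The sketch below shows the abstract gate as a matrix acting on basis states; it is not a model of Honeywell's ion-trap implementation or of gate teleportation:

```python
import numpy as np

# The CNOT gate flips the target qubit when the control qubit is |1>.
# Basis ordering |control, target>: |00>, |01>, |10>, |11>.
# Abstract gate only -- not Honeywell's trapped-ion implementation.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket_10 = np.array([0, 0, 1, 0], dtype=complex)   # control=1, target=0
result = CNOT @ ket_10                            # amplitude moves to |11>

print(result)
```

Because CNOT is its own inverse, applying it twice returns any state to where it started, which is one quick correctness check on the matrix.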

Their device has only six qubits, compared to 53 superconducting qubits in Google’s Sycamore – the machine with which Google claimed quantum advantage in 2019. However, Honeywell’s computer is arguably more powerful because of the flexibility of the QCCD architecture: “These ions are fully connected,” explains team member David Hayes; “With superconducting qubits or things like them, you can’t have a qubit over here talk to a qubit over there if there’s a whole bunch of qubits in the way – you have to move that information through there, and there’s a whole bunch of errors that will accumulate along the way.”
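Hayes's point about connectivity can be made quantitative with a toy routing model. On a nearest-neighbour layout, two distant qubits must first be brought together with a chain of SWAP gates, each of which adds error; with full connectivity no routing is needed. The one-dimensional layout and cost model below are simplifying assumptions for illustration (Sycamore's actual layout is a 2D grid, and real compilers are cleverer than this):

```python
# Toy model of two-qubit gate routing cost under different connectivities.
# Illustrative only -- not Google's or Honeywell's actual compiler costs.

def swaps_needed(i: int, j: int, full_connectivity: bool) -> int:
    """SWAP gates needed before qubits i and j can interact directly.

    On a 1D nearest-neighbour chain (an assumed, simplified layout), one
    qubit must be moved via SWAPs until it sits next to the other, costing
    |i - j| - 1 swaps.  With full connectivity, as in the trapped-ion
    QCCD, no SWAPs are needed.
    """
    if full_connectivity:
        return 0
    return max(abs(i - j) - 1, 0)

# A gate between qubits 0 and 52 on a 53-qubit nearest-neighbour line:
line_cost = swaps_needed(0, 52, full_connectivity=False)   # 51 swaps
ion_cost = swaps_needed(0, 52, full_connectivity=True)     # 0 swaps
print(line_cost, ion_cost)
```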

Hensinger is impressed with the Honeywell device: “This is really a phase change now we have a complete machine built on a shuttling-based approach,” he says; “It has been demonstrated with all the key ingredients. People often ask me when we can have a million-qubit machine: obviously there are still many, many challenges to be overcome, but I think this [research] demonstrates that it is a straight engineering path.”

Chris Monroe of the University of Maryland, College Park, a co-author of Wineland’s on the original 2002 paper, who now runs the spin-off company IonQ, agrees: “In this field, every single little piece has been demonstrated separately. One of the important features of this work is that it integrated lots of them in one system. I love the QCCD idea: I actually coined that phrase myself.” He cautions, however, that, “the QCCD works great with six or eight ions, but when you get to 80, 200 or 300 ions, to enjoy that full connectivity you’re going to be spending a lot of time separating chains, moving ions around, getting them into position, doing the gate and then returning them to where they were.”

Accelerometer sensitivity gets a laser boost

An accelerometer that uses laser light instead of just mechanical strain can register changes as small as tens of billionths of the acceleration due to Earth’s gravity, making it far more sensitive than commercial devices. With further improvements, the developers of the new optomechanical sensor say it might be used to orient aircraft, satellites and submarines, and could even serve as a portable reference to calibrate accelerometers already on the market.

Accelerometers – sensors that detect changes in velocity – have many applications. Among other things, they help trigger the deployment of airbags in cars, keep rockets and aeroplanes on the correct flight path, provide navigation for self-driving vehicles and rotate images so that they stay the right way up on your mobile phone. In general, they work by tracking the position of a freely moving “proof” mass with respect to a fixed reference point in the device. The distance between this proof mass and the reference changes whenever the device slows down, speeds up or switches direction, producing a signal that can then be detected.

Distance change between two micromirrors

The new accelerometer developed by Jason Gorman, Thomas LeBrun, David Long and colleagues at the US National Institute of Standards and Technology (NIST) uses infrared light to measure the change in distance between two micromirrors in a configuration known as a Fabry–Perot cavity. In their set-up, the proof mass is a single crystal of silicon with a mass of between 10 and 20 mg, and it is suspended from the first mirror using a set of 1.5 μm-thick flexible silicon nitride (Si3N4) beams. Being suspended in this way allows the proof mass to move freely, with nearly ideal translational motion. The second mirror, which is concave, acts as the accelerometer’s fixed reference point and thus cannot move.

When the team directs infrared laser light into the optical cavity, most frequencies of light are entirely reflected. However, light of a certain frequency can resonate – or bounce back and forth – between the two mirrors in the cavity, increasing its intensity. When the team makes the device accelerate, the proof mass displaces relative to the concave mirror, and this displacement produces a change in the intensity of light reflected from the cavity.
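The sensitivity of this scheme comes from the fact that a Fabry–Perot cavity of length L resonates at frequencies ν_m = mc/2L, so a tiny length change δL shifts each resonance by δν = ν·δL/L. The cavity length and laser wavelength below are assumed round numbers (the paper's actual values may differ), chosen only to show the scale of the effect:

```python
# Why a femtometre displacement is detectable: resonance shift of a
# Fabry-Perot cavity.  Cavity length and wavelength are assumed values,
# not figures from the NIST paper.
c = 2.998e8              # speed of light, m/s
wavelength = 1.55e-6     # infrared laser wavelength, m (assumed)
L = 1.0e-3               # cavity length, m (assumed millimetre scale)

nu = c / wavelength      # optical frequency, ~193 THz
dL = 1.0e-15             # femtometre-scale mirror displacement
d_nu = nu * dL / L       # resulting resonance shift, Hz

print(d_nu)
```

Even a one-femtometre displacement shifts the resonance by a few hundred hertz out of ~10^14 Hz, a fractional change that a laser locked to the cavity can track.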

The NIST researchers track this change using a stable single-frequency laser “locked” to the cavity’s resonant frequency. By continually matching the laser’s frequency to the resonant frequency of the cavity, they can determine how much the device has accelerated. The result is a device that can sense displacements of the proof mass that are smaller than a femtometre (10⁻¹⁵ m) and detect accelerations as low as 3.2 × 10⁻⁸ g, where g is the acceleration due to Earth’s gravity. This is better than any accelerometer on the market today of comparable size and bandwidth, the team says.

A simple spring

While the concept of an optomechanical accelerometer may sound simple, being able to accurately convert the displacement of the proof mass into an acceleration has proved challenging. In the new work, however, the proof mass and supporting beams are designed so that they behave like a simple spring (or harmonic oscillator) that vibrates at a single frequency in the operating range of the accelerometer. This approach, say the researchers, makes the setup easy to model using first-principles calculations.
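The simple-spring model gives the conversion directly: driven well below its resonance f₀, a harmonic oscillator is displaced by x = a/(2πf₀)², so a measured displacement maps straight onto an acceleration. The resonance frequency below is an assumed value for illustration (not quoted from the paper), chosen so the numbers land near the reported sensitivity:

```python
import math

# Harmonic-oscillator conversion from displacement to acceleration:
#   x = a / (2*pi*f0)**2   for drive frequencies well below resonance f0.
# The resonance frequency here is an assumed illustrative value.
f0 = 2.8e3                      # proof-mass resonance, Hz (assumed)
x_min = 1.0e-15                 # smallest resolvable displacement, m
g = 9.81                        # standard gravity, m/s^2

a_min = (2 * math.pi * f0) ** 2 * x_min   # smallest resolvable acceleration
print(a_min / g)                          # of order 10^-8 g
```

With a kilohertz-scale resonance, femtometre displacement resolution translates to accelerations of order 10⁻⁸ g, consistent with the sensitivity reported above.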

With this technique, which they report in Optica, Gorman, LeBrun, Long and colleagues achieved measurement uncertainties of around 1% over a wide range of frequencies (from 100 Hz to 15 kHz). Their device also does not need to be calibrated before use, since it uses laser light of a known frequency to measure acceleration. It might thus ultimately serve as a portable reference standard for other accelerometers on the market (all of which do need to be calibrated) and so help make them more accurate.

In the future, the NIST group plans to refine its system so that it can be deployed in the field as an accurate sensor and intrinsic standard for acceleration. “Work is also under way on advanced applications of the technology ranging from searches for new physics to medical diagnostics and satellite measurements for climate change studies,” LeBrun tells Physics World.

Copyright © 2026 by IOP Publishing Ltd and individual contributors