Data acquisition and analysis

New algorithm puts time-scrambled data into chronological order

04 May 2016
Wrinkles in time: scrambled (left) and unscrambled time sequences. (Courtesy: Allie Kilmer/UWM)

An international team of scientists has developed an algorithm that can put data with large timing uncertainty into chronological order. After applying statistical techniques to data acquired with a 300 fs (300 × 10⁻¹⁵ s) timing uncertainty, the team was able to describe the laser-driven explosion of a nitrogen molecule with 1 fs resolution – an improvement in time resolution of more than two orders of magnitude. Because the algorithm is purely statistical, it could potentially be applied to other disciplines that face timing uncertainty, such as climate science and astronomy.

Abbas Ourmazd of the University of Wisconsin-Milwaukee and colleagues used data from an imaging experiment at the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in the US. The experiment involved firing two consecutive pulses at nitrogen molecules – each of which consists of two nitrogen atoms that each share three of their outermost electrons in a triple chemical bond. The first pulse (called the trigger) comprises infrared light. It rips away the inner unshared electrons from each atom in the molecule, leaving the chemical bond intact but both atoms positively charged. The two positively charged atoms then repel each other with a force that blows the molecule apart. Within picoseconds of the first pulse, a second pulse (called the flash) comprising X-rays is fired at the nitrogen, allowing the team to measure the momenta of the fragments as the molecule explodes.
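
To get a feel for the energy scale of such a Coulomb explosion, one can estimate the kinetic energy released as the two singly charged ions repel from the molecule's bond length. The back-of-envelope Python sketch below is purely illustrative and is not taken from the paper; the +1 charge states and the textbook N₂ bond length are assumed values.

```python
# Back-of-envelope estimate of the kinetic energy released when two
# singly charged nitrogen ions fly apart from the N2 bond length.
# Illustrative only: the +1 charge states and the 1.10 angstrom
# equilibrium bond length are assumed values, not figures from the paper.

COULOMB_CONST_EV_ANGSTROM = 14.3996   # e^2 / (4*pi*eps0), in eV*angstrom
BOND_LENGTH_ANGSTROM = 1.10           # N2 equilibrium bond length
CHARGE_1 = CHARGE_2 = 1               # charge of each ion, in units of e

# Once the ions are far apart, essentially all of the initial Coulomb
# potential energy has been converted into fragment kinetic energy.
energy_release_eV = (COULOMB_CONST_EV_ANGSTROM * CHARGE_1 * CHARGE_2
                     / BOND_LENGTH_ANGSTROM)
print(f"Kinetic energy release: {energy_release_eV:.1f} eV")  # about 13 eV
```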

Femtosecond snapshots

The shortest flash pulses were less than 10 fs long, meaning the team could take 10 fs “snapshots” of the nitrogen molecules as they exploded. Despite the shortness of the pulses, however, the sequence of snapshots could resolve the explosion only to within 300 fs.

“The difficulty with the process is that you can’t quite control the delay between the trigger and the flash,” Ourmazd explains. The uncertainty in the timing of each laser pulse means that the sequence of snapshots is slightly out of chronological order and is not evenly spaced in time, which “blurs” the details of the molecular explosion.

In the past, physicists have attempted to reduce timing uncertainty by adding extra hardware to the experiment. However, this approach is expensive and can reduce the uncertainty only by a factor of 10 at best, Ourmazd says.

Poorly shuffled cards

Instead, the team created a “re-ordering algorithm”, which Ourmazd says is similar to re-organizing poorly shuffled playing cards. “Imagine you have a perfectly ordered deck of cards and then you shuffle them,” he says. “The order is scrambled somewhat, but the original order never completely goes away.” By examining enough sequences of the cards, or data, you can extract a pattern that can tell you the original order. The algorithm used 100,000 snapshots of the nitrogen explosion up to 5 ps (5 × 10⁻¹² s) after the first laser pulse, taken over many runs of the experiment.
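
As a toy illustration of the idea – not the team's actual algorithm – the sketch below scrambles snapshots of a simple one-dimensional signal and recovers their order by spectral seriation: it builds a graph of pairwise snapshot similarities and sorts the snapshots along the first nontrivial eigenvector of the graph Laplacian. All parameters and the synthetic data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "snapshots": each row is a noisy 1-D signal whose peak drifts
# smoothly in time, standing in for the measured momentum distribution
# of the exploding ions. All parameters are invented for illustration.
n_snapshots, n_pixels = 400, 64
true_times = np.sort(rng.uniform(0.0, 5.0, n_snapshots))      # "ps"
grid = np.linspace(-3.0, 3.0, n_pixels)
snapshots = np.exp(-(grid[None, :] - 0.5 * true_times[:, None]) ** 2)
snapshots += 0.05 * rng.standard_normal(snapshots.shape)      # shot noise

# Scramble the rows, mimicking the timing jitter that destroys the order.
perm = rng.permutation(n_snapshots)
scrambled = snapshots[perm]

# Spectral seriation: build a similarity graph between snapshots and sort
# along the first nontrivial eigenvector of its Laplacian (the Fiedler
# vector), which parametrizes position along a 1-D data manifold.
d2 = ((scrambled[:, None, :] - scrambled[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / np.median(d2))            # Gaussian affinity matrix
L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
recovered_rank = np.argsort(np.argsort(eigvecs[:, 1]))

# Compare with the true time rank of each scrambled row (perm[i]); the
# direction of the recovered order is arbitrary, so take the absolute
# value of the correlation.
corr = np.corrcoef(recovered_rank, perm)[0, 1]
print(f"rank correlation with the true order: {abs(corr):.3f}")
```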

The algorithm’s description of the molecular explosion matched theoretical calculations to 1 fs – shorter than the duration of the flash pulse itself. The physical reason the researchers could resolve details finer than the pulse length, Ourmazd explains, is that the pulses have sharp peaks, and so can achieve higher resolution than their average duration would suggest.

This is new to physics, but it shouldn’t be. It’s used all the time by Google
Abbas Ourmazd, University of Wisconsin-Milwaukee

The algorithm’s statistical methods – for example, first mapping the data onto a higher-dimensional curved mathematical surface – originate from the field of data science. “We’re really just applying the latest data-science techniques to physics,” Ourmazd says. “Remarkably, this is new to physics, but it shouldn’t be. It’s used all the time by Google.”
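
One widely used data-science embedding of this kind is the diffusion map, which places high-dimensional snapshots onto a low-dimensional curved manifold. The sketch below follows the standard Coifman–Lafon construction as an illustration of the general idea; it is not the team's published algorithm, and the function name and parameters are invented.

```python
import numpy as np

def diffusion_map(X, n_coords=1, alpha=1.0):
    """Minimal sketch of a diffusion-map embedding (Coifman & Lafon).

    Rows of X are snapshots; the returned coordinates parametrize a
    low-dimensional manifold, and sorting along the first coordinate
    gives a candidate time ordering. Not the published algorithm.
    """
    # Gaussian kernel on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / np.median(d2))          # median bandwidth heuristic

    # Divide out the sampling density (alpha = 1), then row-normalize
    # to obtain a Markov transition matrix over the snapshots.
    q = K.sum(axis=1)
    K = K / np.outer(q, q) ** alpha
    P = K / K.sum(axis=1, keepdims=True)

    # The top eigenvector of P is trivially constant; the next ones are
    # the diffusion coordinates.
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order[1:n_coords + 1]]

# Hypothetical usage, e.g. on the scrambled snapshots from the sketch above:
#   coords = diffusion_map(scrambled)
#   time_order = np.argsort(coords[:, 0])
```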

Because timing uncertainty is a problem in many scientific disciplines, the algorithm could potentially be applied to other big scientific questions, says Charlotte Haley of the Argonne National Laboratory, who was not involved in the work. For example, many astronomical and climate-science models use data that predate the digital age. “We might be looking at time-series data collected by hand,” she says. “You can never be sure if somebody recording the data did so at that particular time.”

However, the algorithm still lacks a formal mathematical framework. “They’ve established that it works well with this experiment and numerical simulation,” Haley says. “But we can’t [generally] quantify the exact reduction in uncertainty. There’s a lot of mathematics that still needs to be done.”

The research is described in Nature.
