
Pouring cold water on particle X17, flying suits for humans, why that Tesla window shattered   

If you have been scratching your head about curious reports of the discovery of a fifth (or possibly sixth) force and a mysterious particle called X17, check out Matt Strassler’s recent blog post “Has a new force of nature been discovered?”.

“I thought I’d better throw some cold water on that fire; it’s fine for it to smoulder, but we shouldn’t let it overheat,” writes Strassler. 

When will we be able to slip on a flying suit and soar in the sky like Tony Stark in Iron Man? 

Physics student Daria Stekolnikova has written a book called The Flying Humans to answer that question. She is trying to raise a little over £3000 to get the book published and is looking for backers.  

It looks like a donation will get you a signed copy of the book, and you can find out more here.

I’m sure that by now just about everyone has seen that bizarre video showing Elon Musk unveiling the Tesla pickup truck. The infantile design of the vehicle – a four-year-old could draw a better truck – combined with the easily-shattered “Tesla Armor Glass” made me think that the event was some kind of joke. But apparently it was for real.  

Rhett Allain has looked at the physics of why the truck’s window broke so easily. You can read more in Wired.    

A decade of Physics World breakthroughs: 2009 – the first quantum computer

“A tour de force” is how physicist Boris Blinov from the University of Washington described research carried out at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, in 2009. For decades physicists had dreamt of building a quantum computer that can solve problems faster than a conventional counterpart. Then in August 2009, a NIST team led by Jonathan Home unveiled the first small-scale device that could be described as a quantum computer. The work represented a huge step forward – so much so that we chose it as the very first Physics World Breakthrough of the Year, 10 years ago in 2009.

Building up to the breakthrough, Home’s team had used ultracold ions to demonstrate separately all of the steps needed for quantum computation – initializing the qubits; storing them in ions; performing a logic operation on one or two qubits; transferring the information between different locations in the processor; and reading out the qubit results individually. But in 2009, the group made the crucial breakthrough of combining all these stages onto a single device. Home’s set-up had an overall accuracy of 94% – impressive for a quantum device – but not good enough to be used in a large-scale quantum computer.

At the time, the field of quantum computing was very much in its infancy and more work needed to be done before quantum computers could become a commercial reality. Since Home’s work, researchers have made a number of big steps forward in quantum computing, including the first implementation of a quantum version of the “von Neumann” architecture found in personal computers, making quantum computers more robust as well as programmable and reconfigurable.

Several major technology firms have entered the field or ramped up their efforts too – notably Google, Microsoft and IBM – while the Canadian firm D-Wave Systems has been selling quantum computers with an increasing number of “quantum bits”, or qubits; its latest model has 2000.

The most recent and – to date – highest-profile result was the announcement in October that Google had used a 53-qubit quantum computer to reach “quantum supremacy” – a term denoting that a quantum computer can solve a problem in a significantly shorter time than a conventional (classical) computer. Although Google’s machine outperformed a classical computer on a very specific problem, the move marked a major milestone for the field.

A decade on from the world’s first quantum computer on a chip, we have now entered the era of quantum supremacy – or “quantum advantage”, as some would rather have it. The coming decade promises to be equally exciting.

Plasmonic nanocubes make an ultrafast thermal camera

Researchers at Duke University in the US have developed a cheap and easy-to-construct thermal camera that can capture a multispectral image half a million times faster than existing broad-spectrum detectors. The new camera owes its speed to advanced plasmonic and pyroelectric materials, and its inventors say it might find applications in medicine and food inspection as well as the up-and-coming field of precision agriculture.

Thermal cameras can sense radiation across a wide range of frequencies within the electromagnetic spectrum, but they suffer from relatively slow response times, on the order of a few milliseconds. They are also bulky, difficult to make and can cost hundreds of thousands of dollars.

Now, researchers led by Maiken Mikkelsen have constructed a new type of photodetector that can be integrated into a single chip, and is capable of recording a multispectral image in just 700 picoseconds (1 ps = 10⁻¹² s). With a materials cost of potentially just $10, the device is also a fraction of the price of conventional thermal cameras. These are costlier, among other reasons, because of the expensive materials used in their lenses, which are usually made of germanium.

Plasmonic metasurface and pyroelectric thin film

The new camera contains a metallic material with a structure that can be fine-tuned to interact with light in very specific ways. It works by exploiting the physics of plasmons, which are quasiparticles that arise when light interacts with the electrons in a metal and causes them to oscillate. The shape, size and arrangement of nanoscale structures within so-called plasmonic materials make it possible to support plasmons at specific frequencies. Hence, by adjusting these structural parameters, the researchers can dictate which frequencies of light the material will absorb and scatter.

In the Duke group’s camera, the plasmonic metasurface is made from silver cubes a few hundred nanometres in size. These cubes are placed a few nanometres above a layer of gold (just 75 nm thick) that rests on a thin film of aluminium nitride (105 nm thick). The aluminium nitride is a pyroelectric, meaning that it produces a voltage when it is heated.

When light hits the surface of a nanocube in the thermal camera, it excites the electrons in the metal, trapping the light energy at a frequency that depends on the size of the nanostructure and its distance from the base layer of gold. The heat from this trapped light energy is enough to change the crystal structure of the pyroelectric aluminium nitride below the nanocube, creating a voltage that can then be measured.

Fast process

This process is very fast because over 98% of the light falling on the camera is converted into localized surface plasmons that are confined between the silver nanocubes and the gold film. These plasmons decay in just femtoseconds (1 fs = 10⁻¹⁵ s), generating heat through the scattering of electrons with vibrations of the material lattice (phonons) – a process that takes several picoseconds. This heat then quickly diffuses through the gold film into the underlying aluminium nitride, again in just tens of picoseconds.

For the camera in this work, which is described in Nature Materials, Mikkelsen and her colleagues constructed four photodetectors, each tuned to respond to wavelengths between 750 and 1900 nm. In principle, however, the researchers could create systems that would respond to frequencies ranging from the ultraviolet region (by substituting platinum, silver or aluminium for the gold layer) to the mid-infrared (by fabricating larger nanocubes and adjusting the spacings between them).

Although photodetectors with pyroelectric materials have been made before, and commercialized, these earlier devices could not focus on specific electromagnetic frequencies. This, coupled with the fact that thick layers of pyroelectric material were needed to produce a sufficiently strong electrical signal, meant that they operated at much slower speeds. The new plasmonic detector can be tuned to specific frequencies and traps so much light that it generates a lot of heat. “That efficiency means we only need a thin layer of material, which greatly speeds up the process,” says study lead author Jon Stewart.

A host of applications

Mikkelsen and colleagues believe the new device could have a host of applications. Physicians, for instance, might use a fast, cheap multispectral imager to measure the spectral “fingerprint” of a biomedical sample, and thereby distinguish between cancerous and healthy tissue during surgery. Food safety inspectors might use the device to determine whether a food sample contains harmful contaminants, such as bacteria.

Another important area is precision agriculture. To the naked eye, a plant may simply appear green or brown, but Mikkelsen notes that the light reflected from its leaves at frequencies outside the visible part of the spectrum contains a cornucopia of valuable information. Multispectral images of plants might indicate the type of plant and its condition, such as whether it needs watering, is stressed or has a low nitrogen content, which would indicate that it needs more fertilizer. “It is truly astonishing how much we can learn about plants by simply studying a spectral image of them,” she says.

Hyperspectral imaging could thus allow farmers to apply fertilizer, pesticides, herbicides and water only where needed, saving valuable resources, reducing costs and pollution. For example, a camera might be mounted on an inexpensive drone that maps a field and transmits that information to a tractor designed to deliver fertilizer or pesticides at variable rates across the field, Mikkelsen adds.

Microneedle patch delivers long-acting reversible contraception

In an effort to increase access to long-acting contraception, a US-based research team has developed a microneedle patch that slowly releases contraceptive hormone for more than a month, and generates no biohazardous sharps waste (Science Advances 10.1126/sciadv.aaw8145).

According to team member Mark Prausnitz, from the School of Chemical and Biomolecular Engineering at Georgia Institute of Technology, there is a particular need for contraceptive methods that are both long-acting and capable of being self-administered. This is because most current long-acting contraceptives, for example subcutaneous implants and IUDs, must be administered by a healthcare worker. Meanwhile, most self-administered contraceptives, such as daily pills, need to be taken frequently.

“To enable women to achieve the convenience and reliability of a long-acting contraceptive that can be self-administered, we have developed a microneedle patch that can be painlessly administered to the skin and deposit microneedles below the skin surface to slowly release contraceptive hormone,” says Prausnitz.

Effervescent microneedles

The microneedles are made out of biodegradable polymer that encapsulates the contraceptive. To facilitate rapid separation of the microneedles from the patch backing, Prausnitz and his team – including colleagues at Georgia Tech plus scientists from the University of Michigan and the non-profit human development organization FHI 360 – incorporated effervescent material at the patch–microneedle interface. When the patch is applied to the skin, an effervescent chemical reaction occurs and weakens the interface between the microneedles and the patch backing.

“The final result is that a woman can press a patch to the skin, leave it in place for less than one minute, during which time effervescence is occurring, and then remove and discard the patch backing,” Prausnitz explains.

Microneedles deposited below the skin surface slowly biodegrade and release the contraceptive hormone levonorgestrel for approximately one month. “We did a series of in vitro experiments to develop the patch and confirmed its performance in vivo in a study using rats,” Prausnitz adds.

Towards human tests

In Prausnitz’s view, the key challenges in the study related to how best to formulate microneedles for slow release of the contraceptive and enable rapid separation of the needles from the patch backing. Slow-release systems using biodegradable polymer such as polylactic-co-glycolic acid are well known, but not in the form of microneedles, which adds additional constraints to the design.

“A notable issue for us was that, when we cast the solution to make the microneedles onto a silicone mould, there are complex transport processes taking place,” he says. “While we have a lot of experience making microneedles that dissolve quickly, and therefore involve casting with aqueous formulations, these microneedles required the use of organic solvents, which added new complexities.”

Other challenges included the fact that, upon casting onto the mould, the solvent can be absorbed into the mould and evaporate. The polymer and drug dissolved in the solvent can also precipitate.

“Solvent wetting of the mould surface matters, as does contact and adhesion of the precipitating polymer and drug to the mould surface. It is difficult to fully understand and control these transport phenomena carried out on the micron scale,” says Prausnitz.

Ultimately, the team overcame these issues by using a co-solvent system that minimizes solvent penetration into the mould and controls the rate of precipitation of the polymer and drug. To generate the effervescence required to separate the microneedles from the patch, the team incorporated sodium bicarbonate and citric acid into the patch backing. Contact with the skin’s interstitial fluid initiates a chemical reaction that generates carbon dioxide bubbles. This bubble formation mechanically weakens the interface between the microneedles and the patch.
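As a rough illustration of the gas volumes involved, the ideal-gas law can be used to estimate the CO2 released when sodium bicarbonate reacts with citric acid. A minimal sketch; the 10 mg bicarbonate loading is an assumption for illustration, not a figure from the paper:

```python
# Ideal-gas estimate of CO2 released by the effervescent reaction
#   citric acid + 3 NaHCO3 -> sodium citrate + 3 H2O + 3 CO2
# (one CO2 per bicarbonate). The 10 mg loading is an illustrative
# assumption, not a value from the study.
M_NAHCO3 = 84.01e-3      # molar mass of sodium bicarbonate, kg/mol
R = 8.314                # gas constant, J mol^-1 K^-1
T = 310.0                # skin temperature, K (~37 degrees C)
P = 101325.0             # atmospheric pressure, Pa

mass = 10e-6             # 10 mg of NaHCO3, in kg (assumed)
n_co2 = mass / M_NAHCO3  # moles of CO2 released
volume_ml = n_co2 * R * T / P * 1e6   # m^3 -> mL

print(f"{n_co2*1e3:.2f} mmol CO2 -> {volume_ml:.1f} mL at skin temperature")
```

Even milligram quantities of effervescent material generate millilitre-scale gas volumes, which is ample to mechanically disrupt a thin interface.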

While the researchers have not yet tested the patch’s contraceptive efficacy in humans, evaluation on 10 women showed that it was easily and painlessly applied.

“The next steps for us in this project are to make larger patches that contain more drug, suitable for giving human doses rather than the dose used in the current study for rats, and to prepare the patches with the quality control needed to be used in a human study,” adds Prausnitz. “We hope to carry out our first human study using these patches within the next two to three years.”

Physics and film, a match made in Hollywood

“Physics at the movies” is the theme of the November issue of Physics World magazine. In this star-studded episode of the Physics World Stories podcast, Andrew Glester interviews a trio of people who have worked on – or inspired – Hollywood sci-fi blockbusters.

First up, Glester travels to MCM Comic Con in London to meet Paul Franklin, a member of the team that won the 2014 Oscar for Best Visual Effects for its work on Interstellar. Franklin is the creative director of DNEG, which has worked with director Christopher Nolan on other films including Inception, The Dark Knight trilogy and Dunkirk. But the conversation focuses on Interstellar and what it was like to work with science advisor Kip Thorne, a process that even led to a scientific paper about previously unseen details of black holes.

Next up, Glester is in conversation with Jill Tarter, former director of the Search for Extraterrestrial Intelligence (SETI). Tarter is said to be the inspiration for Ellie Arroway, the lead character in Carl Sagan’s novel Contact, which was adapted into the 1997 blockbuster of the same name starring Jodie Foster. Tarter describes how she reentered astronomy thanks to a government scheme, and shares anecdotes about working with Foster to portray her personality on screen.

Finally, Glester catches up with Andy Weir, author of the book The Martian, which was adapted into the 2015 film directed by Ridley Scott and starring Matt Damon. Weir speaks about the calculations and thought-experiments that underpinned some of the book’s plot. He admits that he never expected the story to appeal to such a wide audience and that Mark Watney – the story’s lead character – is a version of himself with all the good traits magnified.

To find out more about physics at the movies take a look at the November special issue of Physics World, which features interviews with the actors Benedict Cumberbatch and Daniel Radcliffe.

Celebrating 10 years of the Physics World Book of the Year

This episode of the Physics World Weekly podcast is a special celebration of our Physics World Book of the Year award, to mark its 10th anniversary. Ahead of announcing this year’s shortlist of Top 10 physics books (which will be out on 6 December), we look back at the last decade of popular physics books and award-winning science writing.

In the podcast, Physics World’s current and previous reviews editors Tushna Commissariat and Margaret Harris, together with editor-in-chief Matin Durrani, discuss some of their favourite books from the 100 that made it to our shortlists this past decade – How to Teach Physics to Your Dog, How the Hippies Saved Physics, Serving the Reich, Furry Logic, Inferior, The Glass Universe, Hidden Figures, The Dialogues and many more – as well as chat about some pet peeves and personal favourites of science writing.

Oh, and make sure you tune into the December Physics World Stories podcast to find out the winner of the 2019 Book of the Year!

‘Extraordinary’ black hole weighs in at 70 solar masses

A 70-solar-mass black hole has been discovered orbiting a star in the Milky Way. The object is the heaviest stellar-mass black hole detected so far in our galaxy. Its very existence is puzzling astronomers because black holes of this size are not expected in the Milky Way.

Formed by the gravitational collapse of massive stars, black holes are difficult to observe because light cannot escape their powerful gravitational pull. As a result, astronomers look for the effects of a black hole on its surroundings. This is relatively easy to do for some supermassive black holes, which weigh in at millions or billions of Suns and light up the centres of galaxies including the Milky Way.

Much more difficult to see are stellar-mass black holes, which have tens of solar masses. Since 2015, the LIGO (and subsequently Virgo) gravitational wave detectors have spotted the merger of several pairs of stellar-mass black holes – suggesting that black holes of at least 70-80 solar masses existed long ago in distant galaxies.

Gas gobblers

Closer to home in the Milky Way, astronomers have found several black holes by looking for X-rays produced when gas from a nearby star is sucked into the objects. While there are an estimated 100 million stellar-mass black holes in the Milky Way, only about two dozen have been spotted using X-rays – suggesting that most black holes are not gobbling gas from companions. These black holes all weigh in at under 30 solar masses.

Now, an international team of astronomers has used the LAMOST telescope in China to detect a stellar-mass black hole by its effect on a companion star in a binary system. The binary system is about 15,000 light-years away in the Milky Way and has been dubbed LB-1.

The black hole was identified by detecting a periodic change in the radial velocity of the companion star as the two objects orbit each other. Every 79 days, the star accelerates towards and then away from Earth – and this can be detected as a Doppler shift of the light from the star. By studying other aspects of the light from the star, the team could also conclude that the star is about eight solar masses.
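A radial-velocity curve like this can be turned into a mass constraint through the binary mass function, f(M) = PK³/(2πG). A minimal sketch, assuming an illustrative velocity semi-amplitude of 50 km/s (not a value quoted in the article):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

# Observables: P from the article; K is an illustrative assumption.
P = 79 * 86400.0       # orbital period, s
K = 50e3               # radial-velocity semi-amplitude of the star, m/s (assumed)
M_star = 8 * M_SUN     # companion star mass, from the article

# Binary mass function: f(M) = P K^3 / (2 pi G)
f_M = P * K**3 / (2 * math.pi * G)

# f(M) = M_bh^3 sin^3(i) / (M_bh + M_star)^2. For an edge-on orbit
# (i = 90 deg) this gives the *minimum* black-hole mass; solve for
# M_bh by bisection (left side increases monotonically with M_bh).
lo, hi = 0.1 * M_SUN, 1000 * M_SUN
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mid**3 / (mid + M_star)**2 < f_M:
        lo = mid
    else:
        hi = mid

print(f"mass function: {f_M / M_SUN:.2f} solar masses")
print(f"minimum black-hole mass: {lo / M_SUN:.1f} solar masses")
```

Because the orbital inclination is unknown, the radial-velocity curve alone sets only a lower limit on the black-hole mass, which is why tracking the motion of the black hole itself (via the hydrogen disc, described below) was needed to pin down the full 70 solar masses.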

“Extraordinary” mass

The team also observed light coming from a disc of hydrogen surrounding the black hole, allowing them to track the motion of the black hole itself. Putting this information together, the astronomers worked out that the black hole is 70 solar masses – something they describe as “extraordinary”.

The team was led by Jifeng Liu of the National Astronomical Observatories of China, Chinese Academy of Sciences, and he explains why he was so surprised: “Black holes of such mass should not even exist in our galaxy, according to most of the current models of stellar evolution”.

Liu adds, “We thought that very massive stars with a chemical composition typical of our galaxy must shed most of their gas in powerful stellar winds, as they approach the end of their life. Therefore, they should not leave behind such a massive remnant [black hole].”

“LB-1 is twice as massive as what we thought possible,” adds Liu. “Now theorists will have to take up the challenge of explaining its formation.”

The observation is described in Nature.

Confocal Raman microscopy: correlate and accumulate

WITec, a German manufacturer of nano-analytical microscope systems, is on a mission. Put simply, the Ulm-based vendor wants to democratize the use of confocal Raman microscopy across diverse research and industry applications spanning surface chemistry, solid-state physics, food science, environmental research, pharmaceuticals R&D and geosciences – and that’s just for starters.

This powerful diagnostic technique, which is fast losing its outsider status, provides label-free and non-destructive imaging (qualitative and quantitative) to map the chemical composition of heterogeneous samples at the submicron scale. What’s more, owing to the high confocality of WITec’s Raman microscopes, it’s possible to generate volume scans and 3D images using 2D image slices from a series of stepped focal planes.

Modular thinking

WITec’s immediate challenge, in terms of its product roadmap for confocal Raman microscopy, is to balance and prioritize user requirements that can sometimes appear contradictory: ease of use versus diverse functionality; optimized sensitivity across a range of applications; low cost plus leading-edge performance. With this in mind, WITec is pioneering a modular design strategy that gives users the ability to reconfigure their confocal Raman microscope across multiple unique research applications.

“We can optimize our systems for a range of requirements using specific combinations of optical building blocks – lasers, filters, lenses, spectrometers and detectors,” explains Olaf Hollricher, director of R&D and co-founder of WITec. “It’s a little bit like Lego in that users can upgrade and adapt the instrument as their research extends in new directions.”

For the customer, that clear upgrade path ultimately translates into cost savings, productivity gains and enhanced research outcomes. Equally significant, claims Hollricher, is the opportunity to combine confocal Raman microscopy with other microscopy techniques – for example, scanning electron microscopy (Raman-SEM) and atomic force microscopy (Raman-AFM).

The rationale is clear: by integrating different modalities into a single instrument platform, the user can build a more comprehensive picture of the sample under study. The Raman-AFM technique, for example, uses Raman imaging to obtain information about the type and distribution of molecules in a sample, while the AFM determines the surface characteristics. In this way, users are able to visualize chemical and morphological properties using a single instrument.

“The demand for these correlative microscopy techniques is growing among research and industry end-users,” says Hollricher. “Our customers recognize that they cannot get all the sample information they need using a single technique.”

The key to success is integration of complementary techniques into a unified, hybrid instrument design. “In order to correlate the data from these different approaches, it is essential to match and examine the same sample region in each case,” explains Hollricher. “If different instruments are used, finding and matching that sample region is a challenging and time-consuming exercise.”

Capabilities aside, the roll-out of these new correlative microscopy techniques is all about assimilation and education when it comes to the wider commercial opportunity. “After we introduced our Raman-SEM combination, it immediately triggered new sales,” says Hollricher. “However, you also have to be proactive and educate your current and prospective user base about the opportunities opened up by such innovations.”

Know your market

Right now, Hollricher and his WITec colleagues are focused on the final promotional push of the year at the Materials Research Society (MRS) Fall Meeting in Boston, Massachusetts (1–6 December). “You have to be present,” says Hollricher, pointing out that an international forum like MRS Fall is a great place to meet face-to-face with new and existing customers, R&D partners, as well as component and subsystem suppliers.

“After all,” he adds, “it’s much easier to do business if you know who you’re talking to and you’ve already met at a show like MRS Fall. These days, it’s also noticeable that prospective customers do their homework online before heading to a show, so their questions come from an informed perspective rather than a general fact-find.”

In anticipation of those questions, WITec will showcase a number of workflow innovations at MRS Fall. A case in point is TrueSurface, an optical correction technique that enables 3D chemical characterization of rough, inclined or irregularly shaped samples – including pharmaceutical coatings, composite emulsions and complex semiconductor structures – while retaining the strong rejection of out-of-focus light afforded by confocality.

TrueSurface

Put simply, the TrueSurface module ensures the surface of large or coarsely textured samples remains in constant focus during a measurement. The sensor actively monitors and maintains a set distance between the objective and sample surface with submicron resolution. “This closed-loop operation can eliminate any temperature-induced focus drift during measurements with long integration times – for example, if you have to use a very low excitation intensity to measure overnight because your sample is sensitive,” Hollricher explains.

Also set to feature prominently on the WITec booth is ParticleScout, an analysis tool that enables researchers to find, classify, quantify and identify particle contaminants quickly and easily on a sample surface. ParticleScout employs a multistep process, beginning with a bright- and dark-field illumination survey of particle contaminants.

Image stitching then combines many measured regions into a large-area overview, while focus stacking allows larger particles to be sharply rendered for accurate outline recognition. Categorization of particles of interest into a ranked list is followed by automatic acquisition of a Raman spectrum for each particle, while the integrated TrueMatch Raman database software provides streamlined evaluation and identification.

“There are a growing number of applications where users need to analyse organic and microplastic particle contaminants,” notes Hollricher. “This is a hot topic for scientists in public water companies, breweries, bottled-water manufacturers, the food industry and in environmental agencies.”

Looking further ahead, Hollricher sees automation as the “next big thing” in confocal Raman microscopy. “Until recently, users needed a lot of expertise and domain knowledge to get the most out of Raman microscopy. What I see down the line is a move towards database-enabled search that yields valuable information and insights from a system that’s easy to operate for specialist and non-specialist users alike.”

Figures of merit

Speed, sensitivity and resolution are three must-haves in a high-quality confocal Raman microscope – and they should not be mutually exclusive. The ideal Raman imaging system should therefore be configured such that high-resolution images with a high signal-to-noise ratio can be acquired in a short period of time.

Speed: In the past, exposure times from minutes to hours were common for acquiring a single Raman spectrum. Today, it takes just one second to record more than 1000 Raman spectra – i.e. a Raman image can be generated in a few minutes using optimized optics and an electron-multiplying CCD camera. Fast acquisition times are particularly important for measurements on sensitive and valuable samples in which the excitation energy must be kept as low as possible.
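At the quoted rate of roughly 1000 spectra per second, per-image acquisition time is a straightforward count of pixels. A back-of-envelope sketch (the image sizes are illustrative, not from the text):

```python
# Back-of-envelope acquisition times for a confocal Raman image,
# assuming the ~1000 spectra per second quoted in the text and
# one spectrum recorded per image pixel. Image sizes are illustrative.
SPECTRA_PER_SECOND = 1000

for side in (100, 250, 500):        # square image, side x side pixels
    n_spectra = side * side
    t = n_spectra / SPECTRA_PER_SECOND
    print(f"{side}x{side} image: {n_spectra} spectra, {t:.0f} s ({t/60:.1f} min)")
```

A 250 × 250 pixel map thus takes about a minute, consistent with the “few minutes” figure above; at the old one-spectrum-per-second rates the same map would have taken the better part of a day.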

Sensitivity: A confocal beam path (i.e. using a diaphragm aperture) is essential for achieving the best possible sensitivity – eliminating light from outside the focal plane to increase the signal-to-noise ratio. The entire Raman imaging system should also be optimized for high light throughput. That means a spectrometer with a throughput in excess of 70%, designed for measurements with low light and low signal intensity. CCDs optimized for spectroscopy (greater than 90% quantum efficiency in the visible range) are most commonly used as detectors, while almost lossless photonic fibres ensure efficient light and signal transmission.

Resolution: Spatial resolution of a confocal Raman microscope is determined by the numerical aperture of the objective and the excitation wavelength – the larger the numerical aperture and the shorter the wavelength, the finer the spatial resolution. Lateral resolution is typically about 200–300 nm; depth resolution is less than 1 micron. Spectral resolution defines the ability of a spectroscopic system to separate Raman lines adjacent to each other. A spectrometer design free of coma and astigmatism ensures symmetric spectral peaks, while the type of grating, focal length of the spectrometer, pixel size of the CCD camera and the aperture size all affect the spectral resolution. At room temperature, the width of the Raman lines is typically 3 cm⁻¹, though some applications (low-temperature studies or stress analysis) may require significantly higher resolution.
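The lateral figure quoted above can be estimated from the Rayleigh criterion, d ≈ 0.61λ/NA. A minimal sketch, with illustrative wavelength and numerical-aperture values (not specific to any WITec instrument):

```python
# Diffraction-limited lateral resolution via the Rayleigh criterion:
#   d = 0.61 * wavelength / NA
# Wavelength and NA values below are illustrative assumptions.
def lateral_resolution_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    return 0.61 * wavelength_nm / numerical_aperture

for na in (0.9, 1.25, 1.4):
    d = lateral_resolution_nm(532.0, na)   # assume 532 nm excitation
    print(f"NA = {na}: d = {d:.0f} nm")
```

With green excitation and a high-NA oil-immersion objective this lands in the 200–300 nm range quoted in the text.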

Bread machines get a much-kneaded physics makeover

Could physics help us make better bread? Yes, say researchers at the Technical University of Munich in Germany. Their findings – based on a 3D simulation of dough kneading in an industrial kneader – reveal that radial mixing techniques work better than vertical mixing, and that a device with a highly curved spiral arm or two spiral arms that mimic kneading by hand could make dough that is well-aerated, absorbs water well and is elastic.

Bread dough contains four main ingredients: flour, water, salt and a leavening agent such as yeast. Kneading develops the dough’s gluten network and produces a material that behaves in a way that is somewhere between a viscous liquid and an elastic solid when it is deformed. Kneading also incorporates air into the dough, which is important for making it rise once it’s in the oven.

As regular readers of Physics World will recall, both professional and skilled amateur bakers – physicists or not – know that bread dough must be kneaded for just the right amount of time, and in a particular way, to produce the desired texture. Overkneading produces a dense and tight dough that absorbs water less well and does not rise in the oven. Underkneading is just as catastrophic, reducing the dough’s ability to hold onto those precious air bubbles.

Although humans have been making bread for 8000 years, precise information on the changes that occur during kneading and their effect on the dough’s quality is still lacking. Now, however, researchers led by Natalie Germann have performed 3D computer simulations of bread dough that take into account both its viscous and its elastic properties, while also factoring in the free surface that forms between the air and dough when it is kneaded in an industrial 3D spiral kneader.

Model ingredients

To simulate the viscosity of the dough, Germann and colleagues used a single-mode White-Metzner model, which is good at predicting the rheological (flow) behaviour of viscoelastic materials under high shear rates and in all dimensions. They combined this model with a modified Bird-Carreau model, which describes the dough over a wide range of shear rates. This latter model simulates how the dough deforms depending on its viscosity as well as the time it takes for it to relax.
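The shear-thinning behaviour such Carreau-type models capture can be sketched in a few lines; the parameter values below are illustrative placeholders, not the constants fitted by the Munich team:

```python
# Shear-thinning viscosity in the Bird-Carreau form:
#   eta(g) = eta_inf + (eta_0 - eta_inf) * (1 + (lam * g)**2) ** ((n - 1) / 2)
# where g is the shear rate. All parameter values are illustrative.
def carreau_viscosity(shear_rate, eta_0=1e4, eta_inf=1.0, lam=10.0, n=0.4):
    """Viscosity in Pa*s for a shear rate in 1/s (illustrative parameters)."""
    return eta_inf + (eta_0 - eta_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

for g in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {g:6.2f} 1/s -> viscosity {carreau_viscosity(g):10.1f} Pa*s")
```

The viscosity plateaus at low shear rates and falls steeply at high ones, which is why the fast-moving spiral arm experiences a much “thinner” dough than the slowly deforming bulk.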

To make their model’s predictions as realistic as possible, the team applied it to computerized geometries based on the dimensions and structures of real-world industrial kneaders. They also conducted experiments aimed at generating realistic input parameters for the model and testing its predictions.

These experiments were carried out using an industrial kneader that consists of a rotating spiral arm and a stationary rod. The researchers prepared their bread dough by mixing 500 g of type-550 wheat flour, 296 g of decalcified water and 9 g of salt in a Diosna SP12 spiral mixer. They pre-mixed the dough for 60 seconds at a speed of 25 Hz before mixing it for 300 seconds at 50 Hz. The kneading arm moved in the same direction as the bowl but at a rotational speed that was 6.5 times higher. To prevent moisture loss, the finished dough was covered with a plastic film and allowed to rest for 20 minutes before rheology and tensiometry measurements were performed.
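The recipe figures quoted above can be restated in the baker's percentages that bakers use, where every ingredient is expressed relative to the flour weight. This quick sanity check uses only the numbers reported in the text:

```python
# Restate the reported recipe in baker's percentages
# (all ingredients expressed relative to flour weight).
flour_g, water_g, salt_g = 500.0, 296.0, 9.0

hydration = 100.0 * water_g / flour_g  # water as % of flour
salt_pct = 100.0 * salt_g / flour_g    # salt as % of flour

print(f"hydration: {hydration:.1f} % of flour weight")  # ~59.2 %
print(f"salt:      {salt_pct:.1f} % of flour weight")   # ~1.8 %
```

A hydration around 59% and salt around 1.8% are typical of a firm wheat bread dough, which is consistent with a material stiff enough to climb the kneader's rod.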

Although Germann and co-workers were able to use a commercial rheometer (an Anton Paar MCR 502) to measure how their dough flowed at 24 °C, measuring the dough’s surface tension proved more difficult. Such measurements could not be made directly on the dough because surface-tension measurements require a liquid-air interface. To overcome this problem, the researchers placed a layer of liquid salt solution on the dough’s surface and measured the surface tension of this solution as it diffused into the liquid phase of the dough.

Migrating and rod-climbing behaviour

The resulting simulations provided valuable insights into the processes occurring inside the dough and on its surface, such as the way air gets incorporated into the dough, and how “dough pockets” – or lumps – form and break up. The model also reproduced some macroscopic dough behaviours that the team observed in their experiments. For example, the elasticity of dough enables it to overcome gravitational and centrifugal forces during kneading, meaning that the dough “migrates” towards the rotating rod before climbing up it. This rod-climbing phenomenon is well described by the Munich team’s models.

As a final step, the team compared the results from their simulations with still frames from a high-speed video recording of the dough-kneading process in the laboratory. In these frames, they observed the dough convecting around the inner stationary rod thanks to the rotation of the outer cylindrical bowl. They also observed spiral flow patterns created by the spiral kneading arm located between the stationary rod and the bowl.

In their paper, which has been published in Physics of Fluids, the researchers report that their model accurately predicts the experimentally observed values for the curvature of the free surface of these spiral flow patterns. They also report being able to predict the formation, extension and breakup of dough pockets using their numerical approach.

Radial mixing is better

The researchers say that their work represents an advance on previous studies that only considered the purely viscous properties of bread dough. Earlier works also restricted their simulations to simplified geometries, such as a concentric cylinder setup, Germann explains. These simplifications meant that the normal stress effects responsible for the rod-climbing phenomenon were absent, because the elasticity of the material was not considered.

“Our computer simulations showed that vertical mixing isn’t as good as the radial mixing in the spiral kneader we considered in our work,” says Germann. “In the future, mixing performance may be enhanced by using a more highly curved spiral arm or two spiral arms similar to kneading by hand.”

MRI safety: an urgent issue for an increasing crowd


The cornerstone of a safe MRI workplace is repeated and updated MRI safety training and awareness. The number of MRI scanners is increasing, and scanners are also moving toward higher field strengths, both in private practice and at hospitals and institutions all over the world. Consequently, there is a large and increasing crowd of radiology staff and others who need MRI safety education to keep our working environment a safe place – and one that no one should be afraid to enter.

When we discuss MRI safety education, we must remember that the knowledge and skills need to be repeated and reactivated regularly, just as we are expected to take part in cardiopulmonary resuscitation (CPR) refreshers and fire drills at regular intervals.

What are the risks?

The MRI-associated risk most people are familiar with is the projectile risk: ferromagnetic objects can be pulled violently into the magnet. There are several well-known and well-proven recommendations for preventing metal and ferromagnetic items from being brought into the examination room.


Screening for metal is important. When it comes to screening patients, a number of routines should be used, including filling in a dedicated screening form and changing from street clothes into safe clothing. The MR radiographer also needs to interview the patient immediately before entering the examination room to check that the patient has fully understood the information, and no uncertainties must remain; if any do, they must be investigated further before the examination proceeds. These procedures are very important and must never be skipped.

It is also possible to use a ferromagnetic detector as a support to the screening procedure. Such a detector is a good asset if you want to reduce the risk of something being accidentally taken into the room. At the same time, it is important to know that while a ferromagnetic detector may increase MRI safety, it should never replace any of the ordinary screening procedures used.

Awareness of implants

Another risk, one that has grown in recent years, has to do with the deposition of radiofrequency energy, which can heat the patient and/or the patient’s implants. Heating injuries have increased with the use of more efficient and powerful methods and scanners. Occasionally, they are also caused by a lack of MRI safety competence, for example in how the patient is positioned.

It is of the utmost importance to identify every implant – a task that can be both time-consuming and difficult. We must remember that it is essential to find out if the MRI examination can be performed on a patient with a certain implant and, if so, how it can be done safely.

Working with MRI requires clear and well-founded working routines that are never abandoned. It is of great importance to be alert and never take things for granted in our special environment where a lot of different professions are involved. It is a truly challenging environment, and teamwork is necessary. Working alone with MRI examinations and equipment should never be an option, and all members of the scanning team must have a high level of MRI safety skills.

How to minimize risk

There are several ways to improve MRI safety besides sufficient safety training and education and installing helpful devices. We must not forget that equipment vendors play an important role when it comes to improving the safety situation, and their support and collaboration are greatly appreciated.

We need solid recommendations regarding education and routines to be followed by everyone working in the scanner environment. A better understanding of MRI safety risks and more resources for safety education are needed to maintain a high level of competence among the growing group that needs MRI safety training.

Another area for improvement is the reporting of incidents. Today, there are several different incident reporting systems, locally and sometimes nationally, but most of us working with MRI safety know that the reported incidents to date only show the tip of the iceberg.

What we really need is a general, efficient, easy-to-access reporting system. A dream scenario would be if every single accident could be reported and analysed, and any improvements made would also be registered. This information would then be available in a database so the whole MRI community could learn from it. The reporting system would be a most welcome tool for everybody working to improve MRI safety locally as well as globally.

Titti Owman is an MR radiographer/technologist and a member of the Safety Committee of the Society for MR Radiographers & Technologists (SMRT) and a past member of the Safety Committee of the International Society for Magnetic Resonance in Medicine. She is a founding member of the national 7-tesla facility within the Lund University Bioimaging Center, Sweden, and is also a research coordinator/lecturer at the Center for Imaging and Physiology, Lund University Hospital. She is also past president of the SMRT.

  • This article was originally published on AuntMinnieEurope.com ©2019 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.
Copyright © 2025 by IOP Publishing Ltd and individual contributors