
Physical cues are crucial to neuronal differentiation

The extracellular matrix is an organized network of fibres that acts as a support structure for cells and directly influences their behaviour. The interaction between a cell and the extracellular matrix is crucial to tissue-specific cell behaviour. Indeed, scientists have manipulated this cell-matrix relationship, using biomaterials to build environments that facilitate cell growth and so engineer tissue for research or transplantation purposes.

The use of human stem cells presents exciting opportunities for tissue engineering. Pluripotent stem cells enable the growth of multiple different cell types, which contain the same genetic code as the person from whom the cells were originally extracted. Stem cells can be directed to grow into a certain cell type by selecting an appropriate extracellular environment – through specialized materials – to provide physical cues that direct growth. Hydrogels are commonly used due to their biocompatibility and tuneable physical properties.

Researchers from the University of Akron have studied the effect of different extracellular environments on promoting differentiation of neural stem cells into neurons (Biomed. Mater. 13 024102). They performed the investigation using hydrogels with different physical properties, namely stiffness and the availability of cell-binding domains.

The researchers found that neuronal differentiation was promoted when cells were grown on soft surfaces, with a stiffness of 0.1–0.8 kPa being favourable over 4.2–7.9 kPa. In addition, they observed an increased expression of a key neuron protein marker (β-III tubulin), as well as significantly increased neurite growth, in the softer hydrogels (0.1 and 0.8 kPa). The stiffer hydrogels, on the other hand, promoted glial cell differentiation instead of neuronal.

On top of investigating the effect of stiffness on neural stem cell differentiation, the researchers also studied the impact of physical binding of cells to their surrounding environment on differentiation. They established that the amount of binding sites available to cells is important – with differing levels affecting neural differentiation – but not as important as the physical stiffness of the extracellular environment.

To investigate binding, the researchers introduced a specific peptide sequence, Arg-Gly-Asp (RGD), to the synthetic hydrogel used in this study. RGD is a peptide sequence present in multiple biopolymers of the extracellular matrix – fibronectin and collagen, for example. It enables cell binding through multiple integrin receptors at the surface of cells.

This study echoes seminal papers showing that a physical stiffness below 1 kPa is favourable for neuronal differentiation from stem cells, while stiffness above 1 kPa promotes glial cell formation. It builds on previous work by establishing that a small difference in stiffness can switch the lineage of neural stem cells from neuronal to glial; previous investigations had only tested 1 and 10 kPa hydrogels for neuronal and glial growth.

This research presents a general guideline for differentiating neural stem cells to neuronal cells, which will aid engineering of brain tissue in future. This information could be manipulated to create multicellular brain constructs from patient stem cells, potentially creating translational and stratified models for medical research in the lab. The authors suggest that an interesting progression from their study would be to investigate the cell-to-cell communication and interactions within the neural stem cell niche, as opposed to the cell-to-extracellular interactions exhibited here.

US National Science Foundation clamps down on misconduct

One of America’s leading research-funding agencies has announced new steps to eliminate sexual harassment and similar transgressions in science and engineering. Responding to increasing reports of sexual misconduct by individual grantees and in institutions that employ them, the National Science Foundation (NSF) will now require that every grantee organization report cases of sexual harassment. The agency will also update its web resources on harassment policies and has clarified how NSF employees should report and handle complaints of sexual harassment.

France Córdova, NSF director, outlined the steps in a letter to presidents of universities and colleges, and the heads of other grant-receiving organizations. “NSF is committed to promoting safe, productive research and education environments for current and future scientists and engineers,” Córdova wrote. “The Principal investigator (PI) and co-PI and all grant personnel must comport themselves in a responsible and accountable manner, including during the performance of award activities conducted outside the organizations, such as field sites or facilities, or during conferences and workshops.”

Community effort

The NSF policy comes at a time of increased concern about sexual harassment, which has also hit the scientific community. Dartmouth College, for example, recently put three psychology professors – and recipients of NSF grants – on paid leave pending the results of a criminal probe into sexual misconduct. In February, the NSF removed Boston University geologist David Marchant as a PI of a grant following charges – which Marchant denies – that he had sexually harassed graduate students during field studies in Antarctica two decades ago.

The upgraded NSF policy now requires that grantee organizations report findings of sexual harassment, or any other kind of harassment, involving a PI, co-PI or any other grant personnel. It also expects all grant-receiving organizations to establish and maintain clear and unambiguous standards of behaviour to ensure harassment-free workplaces. Finally, the policy requires NSF’s Office of Diversity and Inclusion to ensure that NSF-funded programmes and projects are free of discrimination.

“NSF is working to make certain that awardee organizations respond promptly and appropriately to instances of sexual and other forms of harassment,” says Córdova. “A community effort is essential to eliminate sexual and other harassment in science and to build scientific workspaces where people learn, grow and thrive.”

Reports of coal’s terminal decline are premature

The rapid expansion of coal power plants in Turkey, Indonesia and Vietnam means that climate targets will require active policy.

While fewer new coal-fired power plants are now being built in China and India, the planned expansion of coal use in fast-growing emerging economies, such as Turkey, Indonesia and Vietnam, will partly cancel out the reduction. Only if the countries of the world actively counteract this trend can they achieve the climate goals agreed in the Paris Agreement.

These are the results of the study “Reports of coal’s terminal decline may be exaggerated,” by researchers from the Mercator Research Institute on Global Commons and Climate Change (MCC) and the Potsdam Institute for Climate Impact Research (PIK), published in the journal Environmental Research Letters.

“The coal problem is by no means solving itself, despite all the advances in renewable energy. If the international community wants to achieve its greenhouse gas emission reduction goals to avoid the greatest climate risks, then it must act decisively,” said MCC Director Ottmar Edenhofer, who is also Chief Economist at PIK.

“It would take a coal exit, worldwide. From an economic point of view, the best way to achieve this is substantial carbon pricing. It may look different from one country to another, but a coalition of pioneers should take the first step – this very decade.”

In 2016, China and India each cancelled more than 50 percent of their plans to build new coal-fired power plants, according to the study. Globally, however, coal investments are still increasing. Turkey, Indonesia and Vietnam, for example, plan to increase their capacity by about 160 gigawatts altogether. This is about as much as the output of all existing coal-fired plants in the 28 EU countries.

In addition, other countries’ planned future investments in coal were massively expanded in 2016. Investment plans in Egypt, for example, have increased almost eightfold, while they have nearly doubled in Pakistan. These developments jeopardize countries’ ability to meet their Nationally Determined Contributions (NDCs), as CO2 emissions from coal-fired power plants would increase almost tenfold from 2012 to 2030 in Vietnam, for example, and almost quadruple in Turkey.

“It is true that China has recently invested less in coal and has perhaps even passed its peak in carbon emissions,” said Edenhofer. “This has rightly received a lot of attention – but to speak of the end of coal is premature. Recent data also show that China is increasingly investing in coal-fired power plants abroad.”

If current plans are implemented, emissions from coal would nearly exhaust the remaining global carbon budget, which is determined by the Paris Agreement’s target to limit global warming to less than two degrees Celsius. According to the Intergovernmental Panel on Climate Change (IPCC), if the world wants a likely chance of staying below the two-degree threshold, it can only release another estimated 700 to 800 gigatons (Gt) of CO2 into the atmosphere.

However, the existing infrastructure including, for example, power plants and buildings, is expected to emit about 500 Gt already if used to the end of its lifecycle. The coal-fired power plants currently under construction and those additionally planned would amount to another 150 Gt.

Under these circumstances, additional emission growth – resulting, for example, from growth in the transport sector or agriculture – would then exceed the total budget. The newly published study is based on data from the US-based organisation CoalSwarm and the International Energy Agency (IEA), as well as on subsequent research by the authors.
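The budget arithmetic quoted above can be sketched in a few lines; the figures are those cited in the article, and the calculation is purely illustrative:

```python
# Illustrative budget arithmetic using the figures quoted in the article.
# All values are gigatons (Gt) of CO2; the 700-800 Gt range is the IPCC's
# estimated remaining budget for likely staying below two degrees.
budget_low, budget_high = 700, 800      # remaining carbon budget (Gt CO2)
existing_infrastructure = 500           # lifetime emissions already committed
planned_coal_plants = 150               # plants under construction or planned

committed = existing_infrastructure + planned_coal_plants
headroom_low = budget_low - committed   # pessimistic budget estimate
headroom_high = budget_high - committed # optimistic budget estimate

print(f"Committed emissions: {committed} Gt")
print(f"Headroom for transport, agriculture, etc.: {headroom_low}-{headroom_high} Gt")
```

With only 50-150 Gt left over, any substantial emission growth in other sectors would push the total past the budget, which is the study's central point.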

“Although the costs of renewables have recently fallen, they still can’t compete with cheap coal in many parts of the world,” said Jan Steckel, head of the MCC working group Climate and Development.

“The financial costs for renewable energy in these countries are stagnating at a relatively high level. In order to incentivize investments in renewables, capital costs would have to be reduced by means of intelligent policies, such as the use of credit default swaps.”

The researchers advocate politically feasible solutions for a global coal exit. For example, coal could be pushed out of the energy markets by means of a roadmap to shut down coal mines, stricter power plant regulations and higher carbon prices worldwide. This could be combined with using the revenues from carbon pricing for a socially just transition of tax systems or the expansion of socially necessary infrastructure.

Silicon qubits show promise for quantum computers

A fully programmable two-qubit quantum processor, and single electron spins coherently coupled to individual microwave-frequency photons: these are two of the latest advances in the world of solid-state spin-based quantum computing. The breakthroughs could help in the development of large-scale spin-based processors in the future.

While classical computers store and process information as “bits” that can have one of two logic states – “0” or “1” – a quantum computer exploits the ability of quantum particles or bits (qubits) to be in a “superposition” of two or more states at the same time. Such a device could, in principle, outperform a classical computer on certain tasks, such as factoring large numbers and searching large unsorted lists, thanks to this massive parallelism.
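The idea of superposition can be made concrete with a minimal state-vector sketch (using numpy; this is a classical simulation, not anything from the papers discussed here). A qubit is a normalized two-component complex vector, and the Hadamard gate puts the |0⟩ state into an equal superposition:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit state is a normalized 2-component
# complex vector. The Hadamard gate maps |0> to (|0> + |1>) / sqrt(2).
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2    # Born rule: measurement probabilities

print(probs)                  # each outcome occurs with probability 0.5
```

Measuring such a state yields 0 or 1 with equal probability; the power of a quantum processor comes from interfering many such amplitudes at once.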

In recent years, researchers have succeeded in making qubits from a number of solid-state materials, including semiconducting quantum dots and superconductors. Semiconductor spin qubits appear to be better for a number of reasons. For one, they last for a relatively long time before decohering (interacting with their environment). They can also be controlled electrically and can be integrated with high density on a chip.

The problem, however, is that it is still difficult to control the state of individual spin qubits and entangle multiple qubits in a controlled way.

A complete set of operations in arbitrary combinations

A team at QuTech and the Kavli Institute of Nanoscience Delft led by Lieven Vandersypen and another led by Jason Petta at Princeton University have now succeeded in overcoming these problems.

Vandersypen and colleagues have made a new two-qubit device based on silicon that they can program to perform a complete set of operations in arbitrary combinations. “Such programmability means that the processor can run any algorithm the user designs and this is the idea people have in their minds of what a useful future quantum computer would look like,” explains team member and lead author of the study Thomas Watson.

A naturally “quiet” environment for qubits

The new device looks very much like a transistor, he says. “We apply control signals to metallic electrodes on a silicon chip to isolate, measure and manipulate the two electron spin qubits on the chip. As a proof of principle, we show that we can use these (electrical) control signals to run two different algorithms – the Deutsch-Jozsa algorithm (which tests whether a function is constant or balanced) and the Grover-search algorithm (which searches for the right answer in an unsorted set of data).”
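The Deutsch-Jozsa algorithm decides, with a single oracle query, whether a function (promised to be either constant or balanced) is constant. A minimal classical state-vector simulation, using a phase oracle for simplicity, looks as follows; this is an illustration of the algorithm, not the team's device-control code:

```python
import numpy as np

def hadamard_n(n):
    """Tensor product of n single-qubit Hadamard gates."""
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)
    return H

def deutsch_jozsa(f, n):
    """Classically simulate Deutsch-Jozsa on n qubits.
    f is promised to be constant or balanced; returns True if constant.
    The oracle is applied as a phase: |x> -> (-1)^f(x) |x>."""
    H = hadamard_n(n)
    state = np.zeros(2 ** n)
    state[0] = 1.0                                   # start in |00...0>
    state = H @ state                                # uniform superposition
    phases = np.array([(-1) ** f(x) for x in range(2 ** n)])
    state = phases * state                           # one oracle query
    state = H @ state                                # interfere
    # All amplitude returns to |00...0> if and only if f is constant
    return abs(state[0]) ** 2 > 0.5

constant = lambda x: 1                       # f(x) = 1 for every input
balanced = lambda x: bin(x).count("1") % 2   # parity: balanced on 2 qubits

print(deutsch_jozsa(constant, 2))    # True  (constant)
print(deutsch_jozsa(balanced, 2))    # False (balanced)
```

A classical computer would need up to 2^(n-1) + 1 queries to decide the same question with certainty, which is why the algorithm is a popular proof-of-principle for small quantum processors.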

The processor has the added advantage of being made from silicon, which provides a naturally “quiet” environment for qubits, he adds. Silicon is also the most widely used material in the semiconductor industry, so there is a great opportunity to scale up the number of qubits in the device, which would allow it to run still more complex algorithms.

Transferring information

Meanwhile, Petta’s team, which includes researchers from the University of Konstanz, has succeeded in coherently coupling a single electron spin with a single microwave-frequency photon. This has been difficult to achieve until now since spin qubits only weakly interact with their environment (a property that allows them to have a long lifetime, as mentioned).

“Such coupling is important because it allows us to transfer information from the spin qubit (which is stuck on a wafer) to a photon (which can be routed over relatively long distances on a chip),” explains Petta. “In the future, we should be able to couple any spin qubit on a processor to any other spin qubit on a processor using the photons as an information ‘link’.”

Faster than “bad” processes

The key here is for the spin to communicate with the photon in an extremely clean material system that allows for long spin coherence times, he says. “What happens naturally in most systems is that the photon leaks from its cavity, and the qubit quantum state collapses. We need to be able to couple the spin to the photon faster than the rate at which these ‘bad’ processes occur.

“In our work, we combined silicon and superconducting niobium to couple semiconducting spin qubits to superconducting circuitry, a setup that allows us to trap the photon for a relatively long period.”

Fast spin-photon coupling rates

The operation time of such a device is slow, however, at around 0.1 seconds (which is like having a computer with a 10 Hz clock speed), explains Petta, so the team sped up the interaction using a two-step process that couples the charge of the spin electron to the electric field of the photon cavity. “The spin of that same electron is coupled to its position due to a magnetic field gradient, which generates a spin-orbit interaction,” he says. “Combining the electric field coupling and spin-orbit coupling is what allows us to achieve the fast spin-photon coupling rates (of more than 10 MHz) we observed in our experiments.”

And that is not all. As well as demonstrating this spin-photon coupling, the Princeton-Konstanz researchers also found they could use light to measure the orientation of a single electron spin. “In the short term, we believe that the technology might be useful for high-fidelity readout of single quantum states,” Petta tells nanotechweb.org. “As mentioned, spin-photon coupling should also allow us to couple spins separated by large distances, and possibly even allow for chip-to-chip coupling.”

So, where next? Watson and colleagues say they will now be increasing the number of qubits in their devices and Petta’s team will couple one spin to another across a chip roughly 1 cm in size using a single microwave-frequency photon.

The research from both groups is published in Nature.

Simulations reveal how sharp boundaries endure in soft tissue

Sharp boundaries between different tissues have been modelled by a group of physicists led by Daniel Sussman at Syracuse University in the US. The team used simulations to calculate values of surface tension in cell populations, showing how different cell types maintain soft yet distinctive borders between each other. Their work could make it easier for scientists to model a variety of complex living systems.

Biological processes including embryo development, tumour metastasis and wound healing, all rely on different types of cells becoming compartmentalized with clear boundaries between different cell populations. These systems were previously modelled by treating cells like immiscible fluids such as oil and water. However, living cells can adapt their properties based on their surrounding environments and this makes them very complicated to model as fluids.

Perfectly-fitting polygons

Sussman’s team simplified the process by creating 2D simulations of a population of one type of cell, surrounded by cells of another type. Based on the “vertex model”, the cells in the simulations were modelled as perfectly-fitting polygons, whose shapes were determined by the positions and types of their neighbours. The researchers altered the simulation so that on the borders between different populations, the cells would respond to the different types of neighbouring cell by adapting their shapes to form smooth, distinct boundaries.
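In vertex models of this kind, each cell is typically assigned an energy with quadratic penalties on deviations from a preferred area and perimeter; a minimal sketch of that energy for a single polygonal cell is below. The parameter values are illustrative, not taken from the paper (though a preferred shape index P0/√A0 near 3.8 is the commonly quoted marker of the solid-fluid transition in these models):

```python
import numpy as np

def cell_energy(vertices, A0=1.0, P0=3.8, K_A=1.0, K_P=1.0):
    """Energy of one polygonal cell in a standard vertex model:
    E = K_A (A - A0)^2 + K_P (P - P0)^2, with preferred area A0 and
    preferred perimeter P0. `vertices` is an (N, 2) array, ordered
    around the cell."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    # Shoelace formula for the polygon's area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: sum of edge lengths around the polygon
    perimeter = np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))
    return K_A * (area - A0) ** 2 + K_P * (perimeter - P0) ** 2

# A unit square: area 1 matches A0 exactly, perimeter 4 vs preferred 3.8,
# so the energy is (4 - 3.8)^2 = 0.04
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(cell_energy(square))
```

Minimizing the total energy over all vertex positions, with boundary cells responding to the type of their neighbours, is what produces the smooth, distinct interfaces described above.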

To measure the properties of the system, the team determined the surface tension across the “droplet” formed by the central cell population. The surface tension across small fluctuations on the droplet’s surface was also calculated. They then simulated what would happen if the droplet were squeezed between two solid plates, which would normally increase the surface tension across droplets of regular fluids.

Low tension

However, Sussman’s team found that the surface tension across the squeezed droplet was, in fact, lower than in the fluctuations. This meant that while bulk tissues can remain soft and squishy under pressure, cells on the tissue boundary will maintain a shape that sustains sharp boundaries between tissues. The researchers’ work is the first to reveal these effects of surface tension and could prove invaluable to simulations of biological tissues in the future.

The simulations are described in Physical Review Letters.

Climate change to shift timing of glacial runoff

The distribution of freshwater runoff from glaciers throughout the year may change as the climate warms, predicts a study in Nature Climate Change. During this century, for around half of the 56 basins examined, annual runoff is expected to increase to a peak and then decline, whilst annual runoff for the remainder appears already to have peaked and is predicted to continue to fall. What’s more, the proportion of the runoff taking place in the melt-season looks set to decrease in more than a third of the basins.

Matthias Huss of the ETH Zürich and the University of Fribourg, Switzerland, together with Regine Hock from the University of Alaska Fairbanks, US, simulated the development of glaciers under climate change for all 56 large drainage basins containing glaciers outside Antarctica and Greenland.

The researchers’ predictions indicate that annual freshwater runoff from glaciers will increase with rising global temperatures, before falling again as the glacier shrinks. Ultimately, each glacier will either disappear or settle at a smaller size but with the same annual runoff as before, having reached a new size equilibrium. Melt-water will no longer contribute to overall runoff as an equivalent amount of water will accumulate in the glacier in winter. If other factors, such as precipitation, have remained unchanged over time then total runoff will return to its initial level.

This annual pattern contrasts with the runoff taking place in summer, during the melt-season. The pattern is similar initially; projected runoff increases with global temperature and reaches a peak. It then falls, stabilizing below its pre-warming level. This is because some of the melt-season runoff represents the melting of a percentage of the glacier itself. As the glacier has reduced in size, this volume is smaller. The annual runoff has returned to its original level thanks to the glacier’s new size equilibrium, yet melt-season runoff has fallen, so a greater proportion of runoff must occur outside the melt-season. This probably results from winter rains and snow, as the glacier is not contributing melt-water and has little effect on winter runoff.

Although there was substantial variation between basins, the researchers came to some key conclusions. More than one third of the basins are set to experience an overall reduction in runoff greater than 10% during at least part of the melt-season by the end of the century, they found. It is important to note that these glacier-containing basins house almost one-third of the global human population. The disruption of glacial runoff could have serious effects on these communities, as well as countless natural habitats downstream from melting glaciers.

Huss and Hock used the Global Glacier Evolution Model (GloGEM) to predict glacier size changes and runoff. The researchers based their calculations on three different greenhouse gas concentration predictions developed by the Intergovernmental Panel on Climate Change (IPCC). RCP2.6 is the closest to the targets set out in the 2015 Paris Climate Agreement, while RCP4.5 and RCP8.5 both predict higher emissions and later peaks in atmospheric greenhouse gas concentration. The simulations ran until the year 2100.

Contact allergy: where are the allergens found?

Nickel remains the primary cause of contact allergy, despite efforts to minimize exposure. Nickel is classified as a hapten, a small molecule that, upon combining with proteins, evokes an immune response. To determine the mechanism involved in contact allergy, a team of scientists from Chalmers University of Technology and the University of Gothenburg, Sweden, has investigated the penetration and distribution of this hapten through human skin using imaging mass spectrometry. Their results demonstrate a new method for investigating the skin distribution of contact allergens, providing an alternative to animal experiments (Contact Dermatitis 78 109).

Exposure to nickel

Imaging mass spectrometry is a valuable analytical tool that uses a beam of primary ions to visualize the spatial distribution of chemical molecules on a surface by their molecular mass. The researchers chose time-of-flight secondary ion mass spectrometry (ToF-SIMS) for this study as it provides high-resolution images.

They exposed samples of human skin (obtained from breast reduction surgery) for 24 h to a nickel sulphate solution; control samples were exposed to deionized water. Both the samples from the tissue exposed to nickel and the control tissue were frozen in liquid nitrogen, sliced and analysed by ToF-SIMS.

Nickel distribution

Analysis of the ToF-SIMS images generated a visualization of the distribution of nickel ions as a function of depth into the sample. The results indicated that the highest intensity of nickel ions was observed in the stratum corneum (the outermost layer of the epidermis, regarded as the major barrier to chemical transfer through the skin). The authors observed a lower density of nickel ions in the upper epidermis, as well as a rapid decrease in the number of nickel ions with skin depth.

Moreover, the authors reported that collagen (one of the most abundant proteins in the body and a key structural component of skin) is not found in the stratum corneum but at the interface between the epidermis and the dermis (the inner layer of the skin). Collagen was found there together with low-mass lipids, such as cholesterol and the phosphatidylcholine (PC) headgroup. Surprisingly, these remained unaffected by the nickel sulphate exposure. In contrast, lipids with higher mass were found to have an altered biomolecular composition compared with control tissue, indicating a physiological effect due to nickel sulphate exposure.

Figure: ToF-SIMS image showing Ni+ (red), cholesterol (green) and the PC headgroup (blue).

This study is the first to offer information regarding penetration of a hapten in human skin ex vivo and validates ToF-SIMS as a tool to acquire high-resolution images of ion distribution in different layers of the skin. This approach may be expanded for investigation of other skin sensitizers, thus providing new avenues for chemical testing while representing a way to reduce the number of animal experiments.

TRAPPIST-1 exoplanets could harbour significant amounts of water

The chances that several exoplanets in the TRAPPIST-1 system could be habitable have been boosted by new measurements that push the envelope of what exoplanet science can do with today’s telescopes.

The TRAPPIST-1 system is just 39.6 light-years away and comprises seven small worlds that orbit a lone red dwarf star. The inner three worlds were discovered in 2016 by astronomers using the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) at the European Southern Observatory in Chile. The outer four planets were spotted a year later and it is possible that all seven worlds could be habitable.

Now, new results have narrowed down the masses of the planets, confirming that they are all likely to be rocky, without the extended atmospheres that miniature versions of Uranus and Neptune would have. Furthermore, five of the planets have densities that suggest a significant amount of water, which is vital for life as we know it.

Transit timing variations

Astronomers led by Simon Grimm of the Centre for Space and Habitability at the University of Bern in Switzerland have produced the most accurate calculations of the planetary masses yet by taking advantage of a phenomenon known as transit timing variations, or TTVs. The seven worlds gravitationally push and pull on each other, resulting in the timing of their transits in front of their star being delayed or advanced by up to 1 h.

In the TRAPPIST-1 system, “the TTV are much stronger and more complicated than in many other systems that have fewer exoplanets,” says Grimm. The challenge of disentangling the data to calculate the planetary masses required new code written to handle 35 different parameters, five for each exoplanet. These are mass; orbital period; eccentricity; the argument of perihelion (the angle between its perihelion position and where the exoplanet’s inclined orbit passes through the ecliptic plane); and the mean anomaly (an angle required to calculate an exoplanet’s location on its elliptical orbit at any given time). The code produced a range of different solutions, from which Grimm’s team determined the configuration that best fits the observational data.
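In essence, a TTV signal is the set of deviations of observed mid-transit times from a strictly periodic (linear) ephemeris. The sketch below illustrates that extraction step only; the period, epoch and sinusoidal perturbation are made up for illustration, and the team's actual 35-parameter dynamical fit is far more involved:

```python
import numpy as np

# Illustrative only: extract TTVs as residuals from a linear ephemeris.
# The period and the 0.01-day sinusoidal perturbation are hypothetical.
epochs = np.arange(6)                        # transit numbers 0..5
observed = (2450000.0                        # reference epoch (days)
            + 1.51 * epochs                  # strictly periodic transits
            + 0.01 * np.sin(2 * np.pi * epochs / 4))  # injected TTV

# Fit the best-fitting linear ephemeris t = t0 + P * epoch
P_fit, t0_fit = np.polyfit(epochs, observed, 1)
ttv = observed - (t0_fit + P_fit * epochs)   # residuals = the TTV signal

print(np.round(ttv * 24 * 60, 2))            # TTVs in minutes
```

In a multi-planet system like TRAPPIST-1, the amplitudes and periods of these residuals encode the planetary masses, which is what the team's 35-parameter fit recovers.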

Water worlds

The most massive of the seven worlds is exoplanet “c”, the second from the star, with a mass 1.156 times that of Earth. The least massive is exoplanet d, with less than a third of Earth’s mass. The magnitude of the transits tells astronomers the radii of the exoplanets, and from their radius and mass, their densities can be calculated.
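The density step is simple in Earth units: density scales as mass over radius cubed. A minimal sketch (the radius value below is hypothetical, chosen only to illustrate the calculation, not a measured TRAPPIST-1 value):

```python
# Bulk density relative to Earth: rho / rho_Earth = M / R^3,
# with mass M and radius R in Earth units.
def relative_density(mass_earths, radius_earths):
    """Density relative to Earth for a planet of given mass and radius."""
    return mass_earths / radius_earths ** 3

# e.g. a planet of 1.156 Earth masses (exoplanet c's mass from the study)
# with an assumed radius of 1.10 Earth radii would be less dense than Earth
print(relative_density(1.156, 1.10))
```

A relative density below 1 hints at a lighter bulk composition than Earth's, which is how the water fractions quoted below are inferred.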

This is where things start to become interesting. Based on their densities, all seven worlds are predominantly rocky, but contain up to 5% water. This is much more water than Earth’s oceans (which amount to just 0.02% of Earth’s mass). However, it remains to be seen whether the TRAPPIST-1 water is present on the surface of the exoplanet in vast, deep oceans, or whether it is as vapour in a dense, steamy atmosphere, or whether it is spread around inside the exoplanet, much as Earth’s mantle is thought to contain about as much water as the oceans.

Based on its temperature, exoplanet e would be the most similar to Earth according to Grimm. This world has 77% of the mass of Earth, but is a little denser, indicating a large iron core and a thin atmosphere, possibly even thinner than Earth’s.

Hubble finds no hydrogen

Meanwhile, new observations by the Hubble Space Telescope support the conclusions of the TTV calculations, confirming the likely terrestrial status of the TRAPPIST-1 exoplanets by ruling out the existence of the extended hydrogen envelopes found in the atmospheres of Uranus and Neptune. Pushing Hubble’s powers of resolution to the limit, a team led by Julien de Wit of the Massachusetts Institute of Technology used Hubble’s Wide Field Camera 3 to make infrared spectroscopic observations and did not detect any large, puffy atmospheres around exoplanets d, e, f and g, “leaving many more terrestrial-like possibilities to be explored with future telescopes such as the James Webb Space Telescope,” says Hannah Wakeford, a member of de Wit’s team from the Space Telescope Science Institute in Baltimore.

Hubble had previously searched for hydrogen-rich atmospheres around the innermost exoplanets b and c in 2016, while observations of the outermost world, h, remain inconclusive. The next step is to look in the ultraviolet for hydrogen escaping from the exoplanets’ atmospheres. For the innermost worlds, this would be a sign of a runaway greenhouse effect, whereby rising temperatures boil the oceans, filling the atmosphere with water vapour, which is then broken down into oxygen and hydrogen by ultraviolet light from the star, allowing the hydrogen to escape into space. It is the same scenario that has taken place on Venus.

Best opportunity

However, Wakeford says that while it is difficult to draw any direct analogues to the Sun’s planets, TRAPPIST-1 “still represents the best opportunity we have for studying Earth-sized worlds outside of our own solar system”.

The research is described in Nature Astronomy and in an upcoming paper in Astronomy and Astrophysics.

Ultrasound stimulates peripheral nerves in vivo

Focused ultrasound (FUS) provides non-invasive, targeted therapy for a wide range of clinical applications. Recently, FUS has proved effective at stimulating or inhibiting neuronal activity in both the central nervous system (CNS) and peripheral nervous system (PNS), offering the potential to replace existing neuromodulation therapies, which are either targeted but invasive, or non-invasive and non-specific.

To date, in vivo studies examining the physiological effects of FUS stimulation have only targeted structures in the CNS. Now, Columbia University researchers have demonstrated the first successful in vivo FUS stimulation of the PNS (Phys. Med. Biol. 63 035011).

“Neurostimulation of the CNS excites (or inhibits) many neurons, while with PNS stimulation, we are targeting either the axon or nerve body of a specific, well understood, nerve branch,” explained first author Matthew Downs. “FUS could be a powerful tool to target multiple nerve types including the vagus nerve, which has the potential to treat diseases such as epilepsy, depression and metabolic disorders.”

It’s also easier to deliver FUS to peripheral nerves, as the ultrasound does not need to pass through the skull, which leads to energy loss and scattering. “If we can achieve similar effects by stimulating the peripheral nerves as we do the CNS, then the technique can be more easily implemented,” noted Downs.

Physiological response
To demonstrate FUS stimulation of peripheral nerves in vivo, the researchers stimulated the sciatic nerve in anesthetized mice. They targeted the nerve using ultrasound imaging and stimulated it with a 3.57 MHz transducer. At the same time, they recorded electromyography (EMG) signals via needle electrodes placed into the tibialis anterior muscle, which is activated by sciatic nerve stimulation.

Preliminary experiments revealed a set of FUS parameters that successfully elicited EMG signals and observable muscle contractions. These included a 35–100% duty cycle (DC) and a stimulation duration of 0.8–10.5 ms. An 8 ms stimulation of the sciatic nerve typically produced a single EMG spike, and occasionally a second EMG signal, plus an electromagnetic field (EMF) artefact from the transducer. Reducing the stimulation duration to 0.8 ms at 100% DC (continuous wave) produced single EMG responses with reduced EMF noise.
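As a rough illustration (not part of the study), the reported parameters can be related with a little arithmetic: at 3.57 MHz, even the shortest 0.8 ms continuous-wave burst delivers thousands of acoustic cycles, and the duty cycle sets the effective insonation time within a burst. The function names below are our own, not from the paper.

```python
# Illustrative arithmetic relating the reported FUS parameters
# (3.57 MHz transducer, 0.8-10.5 ms bursts, 35-100% duty cycle).
# Function names are hypothetical, not from the paper.

FREQ_HZ = 3.57e6  # transducer centre frequency

def cycles_per_burst(duration_s, duty_cycle=1.0):
    """Acoustic cycles delivered during one stimulation burst."""
    return FREQ_HZ * duration_s * duty_cycle

def on_time_ms(duration_ms, duty_cycle):
    """Effective ultrasound 'on' time within a burst, in ms."""
    return duration_ms * duty_cycle

# Shortest burst used: 0.8 ms at 100% DC (continuous wave)
print(cycles_per_burst(0.8e-3))  # ~2856 cycles
# Longest burst at the lowest reported duty cycle
print(on_time_ms(10.5, 0.35))    # ~3.7 ms of actual insonation
```

This also makes clear why a 0.8 ms FUS burst remains far longer, in cycle count, than a typical sub-millisecond electrical stimulus is in pulse count.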

Stimulation success rates with pressure and pulse length

Varying the duration between 1 and 10.5 ms did not affect the latency (time between stimulation and response) or intensity of the EMG signal, thus the researchers combined results from this range. “From there, we wanted to see if we could achieve stimulation with a shorter duration, so we pushed the software to as fast as it could go, which turned out to be 0.8 ms,” explained Downs. “This was also an attempt to get as close as we could to normal electrical stimulation duration.”

Moving the FUS focal spot away from the sciatic nerve eliminated both observable muscle activation and EMG activity. To further verify that these effects were due to stimulation of the nerve, rather than the surrounding tissue, the researchers clipped the nerve downstream of the FUS target; this transection eliminated the EMG signal.

Comparing ultrasound with electrical nerve stimulation revealed that FUS could elicit comparable EMG spikes. The latency of the electrically stimulated EMG response (average 2.1 ms) was comparable to that of both the 0.8 ms and the 1–10.5 ms FUS stimulation groups. These findings suggest that FUS could serve as an alternative or complementary treatment for peripheral nerve conditions currently treated with electrical stimulation.

Safety first
To assess the safety of this procedure, the researchers performed an open field test, in which they observed mice in a 30 cm² box before and three days after FUS stimulation. The motion of the mice did not significantly change after stimulation and was similar to that of a control group – implying that FUS had not damaged the nerve or surrounding tissue.

They also monitored the time mice spent in the box centre and along its walls, to determine anxiety levels. The behaviour of the stimulated group was similar to the control and baseline groups, indicating that the FUS stimulation parameters used to elicit EMG responses are safe. H&E staining revealed no damage to the FUS stimulated nerves, reinforcing the safety of the technique.

Underlying cause
To determine whether FUS elicited a thermal effect, the researchers embedded thermocouples in an ex vivo mouse hind limb adjacent to the sciatic nerve. Stimulation at higher FUS pressures caused a 1.09°C increase in temperature – significantly lower than the 20°C increase shown previously to block nerves from firing.

They also measured the acoustic radiation force generated from the transducer and used this to determine the deformation at the focus. FUS parameters employed in this study generated a large enough displacement (up to 422 µm) to facilitate firing of an action potential that (according to prior studies) will elicit EMG activity. These in vivo findings agree with published ex vivo results stating that FUS excitation of the PNS is a mechanical, not thermal, effect.

The researchers are currently working to develop elastography-based targeting techniques for real-time monitoring of the transducer’s focal area to ensure accurate nerve targeting. They are also investigating whether cavitation is generated during stimulation, to further elucidate the mechanics of peripheral nerve stimulation. “We have also begun preliminary experiments with stimulation of human peripheral nerves using this technique,” Downs told medicalphysicsweb.

 

3D culture system enables stem cell expansion

Human pluripotent stem cells can be differentiated into many different cell types, providing a variety of human cells for use in tissue modelling, cell therapies and even tissue transplantation. However, a major factor limiting their effective use is the difficulty of “scaling up” cultures to produce the large cell numbers needed for effective studies, a logistical challenge compounded by the expense of the techniques involved. Additional issues arise from the methods used to culture stem cells: most protocols rely on 2D surfaces, which do not replicate the 3D microenvironment native to cells in the body.

AlgTube

Researchers at the University of Nebraska have developed a potential solution to these issues: a cost-effective 3D culture platform, named the AlgTube 3D cell culture system, that enables both expansion and differentiation of cells within a 3D environment (Biofabrication 10 025006). This offers advantages over traditional 2D culture methods, which fail to replicate the native environment in which stem cells grow and differentiate within the body.

In this system, cells are suspended within alginate hydrogel tubes, which allows the encapsulated cells to interact with one another and to expand within the gel. Because the 3D environment provides a much greater area for growth than a flat surface, the AlgTube system offers far larger scope for expansion. Hydrogels are polymer-based biomaterials (alginate-based, in this system) that are highly water-saturated and porous, and are widely used for 3D cell culture. Alginate, in particular, is affordable and biocompatible, making it well suited to widespread use in hydrogel form.

Improving on cell factories

Whilst “cell factories” (cell culture flasks with large surface areas and multiple plastic layers) enable 2D expansion of stem cells, their size makes them arduous to handle and they require multiple incubators. AlgTube can provide similar expansion rates in a much smaller system, reducing the reliance on large lab incubators while producing cells grown in a physiologically relevant environment.

The research group developed spatial configurations that allow efficient oxygen and nutrient diffusion throughout the 3D system. Cells are encapsulated within tubes of 400 µm diameter, which avoids physical stress on the cells and enables the culture medium to diffuse to them easily.

The process of cell growth and expansion within AlgTube requires an initial clustering phase – where cells aggregate into clusters – followed by the cell expansion phase, where a large cell mass develops within the device.

The researchers also tested the effect of different initial cell seeding densities on the rates of expansion. They observed similar final cell counts when seeding 1 or 10 million cells, and all cells expressed the important pluripotency marker OCT4. This allows researchers employing the AlgTube system to use their stem cells more sparingly without compromising time or final cell numbers.

Growth kinetics of cells at various seeding densities

The researchers also demonstrated that the AlgTube system can support growth of different stem cells, with induced pluripotent stem cells (iPSCs), mesenchymal stem cells (MSCs) and embryonic stem cells (ESCs) all grown within the system for 10 days without loss of stemness (retention of stem cell pluripotency). These cells could also still be differentiated into the three germ layers (ectoderm, mesoderm and endoderm) accessible to pluripotent stem cells.

Future opportunities

The AlgTube system provides a useful tool for producing large numbers of stem cells for a variety of purposes, while growing and expanding the cells within a physiologically relevant 3D microenvironment. It may enable straightforward mass expansion of stem cells without the money and resources of a large pharmaceutical company, opening up important studies to university groups and smaller companies. The authors note that future AlgTube research would benefit from investigating the integration of other human cell types into the system, such as adult stem cells or T-cells.

Copyright © 2026 by IOP Publishing Ltd and individual contributors