
Multisource CBCT enhances extremity imaging

Cone-beam CT (CBCT) scanners offer the advantages of being more compact, less costly and more portable than multidetector CT systems. Image quality, however, is often degraded by artefacts such as streaking or shading associated with cone-beam effects, and by nonuniformity near the ends of the field-of-view (FOV).

Research is underway to improve CBCT image quality. Biomedical engineers at Johns Hopkins University are working with collaborators in radiology and at Carestream Health to develop a multisource CBCT scanner dedicated to extremity imaging. The team has now demonstrated that the prototype research scanner reduces cone-beam artefacts compared with a single-source CBCT scanner and provides a larger longitudinal FOV (Med. Phys. 45 144).

Three-source configuration

The scanner’s gantry enclosure is a C-shaped, self-shielded carbon fibre hull enclosing the source and detector, enabling an arm or leg to be placed through a door within the “C”. The gantry has an inner diameter of 20 cm and includes an expanded region for placement of a foot. The circular detector orbit extends to 210°, with the flat-panel X-ray detector (FPD) passing within the closed door during a scan. The source-to-axis distance is 40.4 cm, and the source-to-detector distance is 53.8 cm.

The scanner’s X-ray source consists of three separate anode-cathode units evenly distributed along the longitudinal direction with 12.7 cm spacing. Each anode-cathode axis is oriented along the longitudinal direction and angulated to direct its beam towards the centre of the detector. Each source has a separate primary collimator, and the X-ray tube housing has three output windows, allowing each source to cover a large FOV at the FPD. The FPD was read in 2 x 2 binning mode at a pixel size of 0.278 mm, with a nominal readout rate of 25 frames per second.

Image protocol and acquisition

[Image: The prototype CBCT scanner]

The researchers investigated the image quality and dose characteristics of the prototype scanner. For the study, they acquired images in both single-source and three-source configurations. The scan protocol for all acquisitions used 90 kV, 6 mA tube current and a 20 ms pulse width for each source, with a total of 600 projections acquired (200 per source). Reconstruction used a custom filtered back projection (FBP) algorithm in which the three single-source FBP reconstructions were combined linearly via a voxel-based weighting “map”. A soft-tissue reconstruction kernel was used to visualize details of muscle, cartilage, tendons and ligaments around the knee and joint space.
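As an illustration of that combination step, here is a minimal sketch assuming hypothetical volume shapes, source positions and an inverse-distance weighting; the actual weighting map used in the study is not reproduced here.

```python
import numpy as np

# Hypothetical example: blend three single-source FBP volumes with a
# voxel-wise weight map favouring, at each longitudinal position z,
# the source whose central plane is nearest. Shapes and source
# positions are illustrative, not the scanner's actual values.
nz, ny, nx = 128, 256, 256
recons = [np.random.rand(nz, ny, nx) for _ in range(3)]   # stand-ins for FBP outputs
z = np.arange(nz) * 0.5                                    # voxel z-positions (mm), assumed
source_z = np.array([10.0, 35.0, 60.0])                    # central planes of the 3 sources (mm), assumed

# Inverse-distance weights, normalised so they sum to 1 at every voxel
dist = np.abs(z[:, None] - source_z[None, :]) + 1e-6        # shape (nz, 3)
w = 1.0 / dist
w /= w.sum(axis=1, keepdims=True)

# Linear combination: broadcast each source's weight over its volume
combined = sum(w[:, k][:, None, None] * recons[k] for k in range(3))
print(combined.shape)  # (128, 256, 256)
```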

Senior author Jeffrey Siewerdsen and colleagues measured radiation dose using a Farmer chamber in three CTDI phantoms stacked end-to-end. They performed measurements at the central and four peripheral locations in the axial plane and along the longitudinal direction.
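For context, the conventional weighted CTDI combines one central and four peripheral chamber readings; a short sketch of that standard formula (with placeholder readings, not values from the study) is:

```python
# Standard weighted CTDI: CTDI_w = (1/3)*centre + (2/3)*mean(periphery).
# The readings below are placeholders, not measurements from the study.
def ctdi_weighted(centre_mgy, periphery_mgy):
    return centre_mgy / 3.0 + 2.0 * sum(periphery_mgy) / (3.0 * len(periphery_mgy))

print(ctdi_weighted(10.0, [12.0, 11.5, 12.5, 11.0]))  # ~11.2 mGy
```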

For visual assessment of cone-beam artefacts in the reconstructed images, the researchers used a modified Defrise phantom and anthropomorphic knee and hand phantoms. To characterize image performance, they focused on the uniformity, noise, noise-power spectrum (NPS) and cone-beam effects in 3D images reconstructed from the three-source configuration and the three individual sources.

Comparative findings

The dose distribution for the three-source configuration exhibited radial asymmetry due to the short-scan geometry, with dose higher at the periphery closer to the X-ray source; longitudinally, the three-source arrangement gave a more uniform dose distribution than a single source. The authors note that such asymmetric dose distributions are important to consider in rigorous dose characterization of this and other CBCT systems.

Sampling characteristics of the three-source configuration were superior overall. The greatest reduction in noise and cone-beam artefacts occurred near the central plane of each X-ray source, with both increasing somewhat at locations intermediate between the sources. The authors reported that this configuration provided more complete 3D sampling, with an overall reduction in the “null cone” of unsampled frequencies.

The three-source configuration also reduced cone-beam/3D sampling effects in a modified Defrise phantom, exhibiting high-fidelity reconstruction at the midplanes of the central, upper and lower sources. Its longer FOV (up to about 30 cm) offered better visualization of long-bone fractures. While cone-beam artefacts were visible in images from both configurations, the three-source images were of superior quality, enabling clear visualization of joint spaces, cartilage and tendon.

“The multiple-source configuration appears particularly beneficial for imaging scenarios in which the anatomy contains high-contrast surfaces at large distance from the central axial plane and/or throughout the FOV, as with the hand, knee or foot,” wrote the authors. The system is now undergoing additional clinical studies of diagnostic performance and utility.

Cloud quantum computing calculates nuclear binding energy

Cloud quantum computing has been used to calculate the binding energy of the deuterium nucleus – the first-ever such calculation done using quantum processors at remote locations. Nuclear physicists led by Eugene Dumitrescu at Oak Ridge National Laboratory in the US used publicly available software to achieve the remote operation of two distant quantum computers. Their work could lead to new opportunities for scientists in many fields who want to use quantum simulations to calculate properties of matter.

In previous research, scientists have worked alongside quantum computer hardware developers to create quantum simulations. These typically use between two and six qubits to calculate a quantum property of matter – calculations that can be extremely time-consuming on a conventional computer. As the number of qubits available in quantum computers increases, the hope is that quantum simulations will be able to do calculations well beyond the reach of even the most powerful conventional computers. However, working alongside quantum computer specialists for every simulation can be inefficient, and research would be much more streamlined if scientists were able to operate quantum computers themselves.

Rigetti and IBM

In response to this issue, two companies have released software that allows their quantum computers – the Q Experience from IBM and the 19Q from Rigetti – to be operated remotely through cloud services. The IBM quantum processor has 16 qubits, while the Rigetti device has 19 qubits. Dumitrescu’s team used the software to calculate the binding energy of the deuterium nucleus – the energy required to prise apart the proton and neutron that comprise the nucleus.

The team’s novel method required some precautions. Because they were working via the cloud, the rate at which calculations could be made was limited, so the researchers needed to adjust their quantum measurements to account for the slower rate. With these measures in place, Dumitrescu’s team calculated the binding energy on both quantum computers to within 2% of the actual measured value.
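The structure of such a calculation can be sketched classically: a small trial state with one variational angle is optimized against a two-state Hamiltonian, with the expectation value here computed exactly rather than estimated from qubit measurements as on the real hardware. The matrix elements below are illustrative values chosen so the minimum lands near the deuteron’s roughly 2 MeV binding energy; they are not taken from the preprint.

```python
import numpy as np

# Illustrative 2x2 Hamiltonian (MeV) in a tiny two-state basis.
# These numbers are stand-ins, not the preprint's Hamiltonian.
H = np.array([[-0.44, -4.29],
              [-4.29, 12.25]])

# One-parameter trial state |psi(theta)> = cos(theta)|0> + sin(theta)|1>,
# mimicking the single-angle ansatz a small quantum processor would prepare.
def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

thetas = np.linspace(0.0, np.pi, 2001)
best = min(energy(t) for t in thetas)
print(f"variational ground-state energy ~ {best:.2f} MeV")

# On the cloud devices the energy at each theta is estimated from repeated
# qubit measurements, so shot noise and gate errors enter the minimisation.
```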

Non-specialists should succeed

The success of Dumitrescu and colleagues’ experiment demonstrates that scientists do not need to be quantum computer specialists to operate quantum computers themselves. Soon, scientists in many different fields could be making use of cloud quantum computing.

The calculations are described in a preprint on arXiv.

In praise of (total) demand response

“If we could manage to adjust all energy demand to variable solar and wind resources, there would be no need for grid extensions, balancing capacity or overbuilding renewable power plants. Likewise, all the energy produced by solar panels and wind turbines would be utilised, with no transmission losses and no need for curtailment or energy storage.”

So says an interesting, wide-ranging but well-referenced article in Low Tech Magazine. It goes on: “Of course, adjusting energy demand to energy supply at all times is impossible, because not all energy using activities can be postponed. However, the adjustment of energy demand to supply should take priority, while the other strategies should play a supportive role.”

The Low Tech Magazine article first looks at the problems with trying to balance the variable inputs from renewables by adjusting supply. On back-up supply it says: “For a power grid based on 100% solar and wind power, with no energy storage and assuming interconnection at the national European level only, the balancing capacity of fossil fuel power plants needs to be just as large as peak electricity demand. In other words, there would be just as many non-renewable power plants as there are today.” It concludes: “Such a hybrid infrastructure would lower the use of carbon fuels for the generation of electricity, because renewable energy can replace them if there is sufficient sun or wind available. However, lots of energy and materials need to be invested into what is essentially a double infrastructure. The energy that’s saved on fuel is spent on the manufacturing, installation and interconnection of millions of solar panels and wind turbines.”

Another way to avoid energy shortages is to have overcapacity: “If solar power capacity is tailored to match demand during even the shortest and darkest winter days, and wind power capacity is matched to the lowest wind speeds, the risk of electricity shortages could be reduced significantly. However, the obvious disadvantage of this approach is an oversupply of renewable energy for most of the year. During periods of oversupply, the energy produced by solar panels and wind turbines is curtailed in order to avoid grid overloading.”

That has certainly been a problem in China, where wind curtailment has run at 15% or more, and also in the UK, where large constraint payments have been negotiated to compensate wind generators for lost income. Although provocative, this may be cheaper in the short term than building extra grid links, but Low Tech Magazine says that, longer term, “Curtailment has a detrimental effect on the sustainability of a renewable power grid. It reduces the electricity that a solar panel or wind turbine produces over its lifetime, while the energy required to manufacture, install, connect and maintain it remains the same. Consequently, the capacity factor and the energy returned for the energy invested in wind turbines and solar panels decrease.”

Curtailment would become even more of an issue if renewables expanded to the point of meeting peak load without any balancing (via back-up plants, storage and/or imports) being available. That is an unrealistic assumption: some balancing would exist. But if not: “In the case of a grid with 80% renewables, the generation capacity needs to be six times larger than the peak load, while the excess electricity would be equal to 60% of the EU’s current annual electricity consumption. Lastly, in a grid with 100% renewable power production, the generation capacity would need to be 10 times larger than the peak load, and excess electricity would surpass the EU annual electricity consumption.”

What about supergrids to trade this excess and balance shortfalls with imports? It says that for a European grid with a share of 60% renewable power, grid capacity would need to be increased at least sevenfold – and 12 times for a 100% share. And even so, that wouldn’t deliver full reliability: there would still be a need for some backup – supergrids would reduce this to at best 15%, at very high cost and with some transmission losses. So more renewable inputs, or supply/storage backup, would be required to compensate.

Finally, storage. Well, like supergrids, storage systems are expensive and lossy. And the scale needed to balance renewables fully would be gigantic: “It has been calculated that for a European power grid with 100% renewable power plants (670 GW wind power capacity and 810 GW solar power capacity) and no balancing capacity, the energy storage capacity needs to be 1.5 times the average monthly load and amounts to 400 TWh, not including charging and discharging losses. To give an idea of what this means: the most optimistic estimation of Europe’s total potential for pumped hydro-power energy storage is 80 TWh, while converting all 250 million passenger cars in Europe to electric drives with a 30 kWh battery would result in a total energy storage of 7.5 TWh. In other words, if we count on electric cars to store the surplus of renewable electricity, their batteries would need to be 60 times larger than they are today (and that’s without allowing for the fact that electric cars will substantially increase power consumption). Taking into account a charging/discharging efficiency of 85%, manufacturing 460 TWh of lithium-ion batteries would require 644 million terajoules of primary energy, which is equal to 15 times the annual primary energy use in Europe. This energy investment would be required at minimum every twenty years, which is the most optimistic life expectancy of lithium-ion batteries.”
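The battery arithmetic quoted above can be checked directly from the article’s own figures; a quick sketch:

```python
# Quick check of the storage figures quoted above, using only the
# article's own numbers.
cars = 250e6                 # passenger cars in Europe
battery_kwh = 30             # assumed battery capacity per car (kWh)
ev_storage_twh = cars * battery_kwh / 1e9
print(ev_storage_twh)        # 7.5 TWh, as stated

batteries_twh = 460          # lithium-ion capacity the article says must be manufactured
print(batteries_twh / ev_storage_twh)   # ~61, i.e. batteries ~60x larger than today

round_trip = 0.85            # charging/discharging efficiency quoted
print(batteries_twh * round_trip)       # ~391 TWh deliverable, near the 400 TWh target
```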

So all these options require major effort and simply raise the cost of renewable supply; by contrast, not using power does not. Low Tech Magazine then looks at the prospects for demand management, which it says is “usually limited to so-called ‘smart’ household devices, like washing machines or dish-washers that automatically turn on when renewable energy supply is plentiful”. In fact there are many other approaches, not just domestic time-of-use tariffs but also large company-based demand-response and interruptible-supply schemes. However, it is true that, as the article says, “these ideas are only scratching the surface of what’s possible”. For example, it says: “If the UK would accept electricity shortages for 65 days a year, it could be powered by a 100% renewable power grid (solar, wind, wave and tidal power) without the need for energy storage, a backup capacity of fossil fuel power plants, or a large over-capacity of power generators.”

Well, maybe, but here’s where it gets a bit dubious – and extreme. Could we really adjust personal, domestic and industrial activities in response to natural energy flows? As a second article notes, we certainly did in the past: corn-grinding mills only ran when there was wind, and ships were often becalmed. Yet an advanced global economy was nevertheless sustained, for example by using the trade winds and, increasingly, water power.

New energy storage technology might make parts of this less problematic now, and some aspects might actually be socially welcome – days off work during wind lulls! But modern economies are based on 24/7 production and consumption. Can we undo some of that? Make hay while the sun shines?

Overall, the article is a bit OTT and pessimistic on supply-side balancing, knocking down each option separately. In reality, a synergistic mix of the options, including flexible smart-grid demand-side response, backup supply and storage (including of Power-to-Gas-derived hydrogen), along with supergrid balancing, may be able to reduce system costs. Indeed, one study suggested that “flexibility can significantly reduce the integration cost of intermittent renewables, to the point where their whole-system cost makes them a more attractive expansion option than CCS and/or nuclear”. But it’s fun and paradigm-challenging! And here’s a full-on off-grid decentralist view, living fully on the wild side, with a lot of daylight-only power use.

Or try this for a somewhat different, and arguably equally contentious, view. Surely energy waste is always bad: yes, the renewable energy resource is large and free, but the technology is not, and using it has costs. We need to use renewable energy efficiently – to limit the use of materials and land.

Physicists bag Australian and Senior Australian of the Year, Doomsday Clock ticks closer to midnight

By Hamish Johnston

Today is Australia Day, when the prestigious Australian of the Year award is conferred. This year’s winner is the quantum physicist Michelle Simmons of the University of New South Wales, who famously built a transistor from just one atom and also created what could be the world’s thinnest wire. Also honoured today is biophysicist Graham Farquhar of the Australian National University. He is Senior Australian of the Year for 2018 and an expert in photosynthesis. It looks like this will be a bonzer year for physics in Oz.

On an entirely unrelated note, the imminent destruction of the world became a bit more likely this week according to the Science and Security Board of the Bulletin of the Atomic Scientists, who have moved their Doomsday Clock ahead 30 s to two minutes to midnight. This is the closest the clock – which gauges the possibility of a nuclear Armageddon or other technologically-driven catastrophes – has been to midnight since the height of the Cold War in 1953. Explaining the move, the board cites ongoing nuclear tensions on the Korean Peninsula along with growing animosity between the US and Russia, conflict in the Middle East and territorial disputes in the South China Sea. An insufficient response to climate change is also identified as a major threat to global stability.

Perovskites help windows become solar cells

Photovoltaic windows, which switch from being transparent to opaque and can convert absorbed sunlight into electricity, are a promising green technology. One such new window, based on the reversible phase transition in thermochromic halide perovskite cells, could be integrated into smart buildings, cars, large-screen displays and potentially many other technologies.

Windows that go from being transparent to opaque have been around in one form or another since the 1970s. These devices are made from electrochromic, thermochromic and liquid-crystal materials, and their transparency is controlled simply by absorbing or reflecting sunlight. Although they provide shade when it is sunny and hot outside and become transparent when it is dark or cold, they do not convert incoming solar energy into electricity, which means that it is, to all intents and purposes, wasted. Although photovoltaic windows that can harvest and exploit incoming solar energy are beginning to appear, most of the semiconductors used in these structures cannot be switched reversibly between a transparent phase and a non-transparent phase without degrading their electronic properties.

A team led by Peidong Yang of the University of California at Berkeley, the Lawrence Berkeley National Laboratory and the Kavli Energy NanoScience Institute has now utilized the reversible phase transition in purely inorganic mixed halide perovskites to produce two intrinsic characteristic states – one that is highly transparent (and which can thus function as a window) and the other that strongly absorbs light (and which can thus function as a shade). The structure can also generate electricity from incoming sunlight and the two states can be switched back and forth automatically or when needed, explains team member Jia Lin.

“We are able to reversibly switch between the transparent and non-transparent phases in the halide perovskite semiconductor without deteriorating its electronic properties – something that has not been achieved before,” he tells nanotechweb.org. “The concept could have broad applications in smart optoelectronics, such as windows in buildings and cars, information displays and many more.”

Inorganic halide perovskites

Halide perovskites have the chemical formula ABX3 (where A is typically methylammonium, formamidinium or caesium, B is lead or tin, and X is iodine, bromine or chlorine). They are among the most promising thin-film solar-cell materials around today because they can absorb light over a broad range of solar-spectrum wavelengths. Researchers have in fact succeeded in increasing the power-conversion efficiency (PCE) of solar cells made from these perovskites from just 3% to more than 22% in the last five years. This means that their PCE is now comparable to that of silicon-based solar cells.

Recent experiments revealed that substantial structural changes occur in the purely inorganic perovskites caesium lead iodide/bromide when they undergo phase transitions – often between a room-temperature non-perovskite phase (low-T phase) and a high-temperature perovskite one (high-T phase). These two phases have distinct optoelectronic properties (which is not the case for the better-known organic-inorganic hybrid halides, such as the perovskite methylammonium lead iodide).

“We used these inorganic mixed perovskites as the photoactive materials and added the inorganic oxide materials nickel oxide and zinc oxide as charge (electron and hole) extraction layers,” explains team member Letian Dou. “We then added semi-transparent indium tin oxide (ITO) and fluorine-doped tin oxide (FTO) layers as electrodes to make an inverted planar p-i-n heterojunction architecture.”

“We fabricated all the layers (except for the FTO and aluminium) using solution-processing, which is an easy and cost-effective technique. We then applied heat and moisture to control the phase transition of the perovskite layer.”

Window when cool or dark to solar cell when hot and sunny

The researchers found that the material did not degrade even after 10 switching cycles, and that 85% of the efficiency was retained after 40 cycles. In these cycles, they heated the samples to 190°C before cooling them to room temperature. Switching to the solar state takes about 30 minutes, while the reverse process (becoming transparent) takes several hours.

Ideally, windows made from this structure would be self-adaptive, switching from window mode when it is cool or dark outside to solar-cell/opaque-shade mode when it is hot and sunny, says Lin.

The team, reporting its work in Nature Materials doi:10.1038/s41563-017-0006-0, is now busy trying to better understand the mechanism behind phase transitions in inorganic perovskite systems in general. “We will also be trying to enhance the power output when the device is in the high-T phase solar cell mode and better control the switching process, including lowering the phase transition temperature,” adds Lin. “We are also improving the device architecture so that it is stable over a longer period of time.”

Nanowire LEDs require combination modelling

LEDs based on nanowire structures could enable flexible devices and displays covering a wider range of colours, as well as easing lattice-mismatch strain when integrating with silicon. Despite the progress made from investigations of these devices over the past 15 years, further theoretical and computational studies are needed to understand how they can be optimized. Researchers in Finland and Sweden have now demonstrated that certain assumptions commonly made when modelling planar LEDs are not valid for their nanowire counterparts, and that sophisticated models are needed that integrate both electronic and wave-optic effects.

Effective models are a delicate balance of what can and cannot be safely assumed. When characterizing LEDs, researchers have typically assumed an internal quantum efficiency of 100% at low temperatures and a light extraction efficiency that does not change as a function of temperature. However, as Pyry Kivisaari started to discuss nanowire LEDs with Nicklas Anttu during a postdoc at Lund University in Sweden, they began to question the validity of these assumptions.

By integrating a model for wave optics with carrier dynamics, Kivisaari, Anttu and co-author Yang Chen at Lund University found that, despite the prevalence of assumptions to the contrary, temperature did affect the light extraction efficiency and emission enhancement of their LED devices. “I believe the assumption is justified in LEDs that have a planar active region, where there are not such strong wave optical effects in the extraction efficiency,” says Kivisaari. “So this is really a nanowire- or nanostructure-specific finding.”

The results also identified significant effects from surface recombination at recombination velocities – a measure of the recombination probability – of 10⁴ cm s⁻¹. Kivisaari highlights previous experimental work from researchers at the Paul-Drude-Institut in Berlin and the Russian Academy of Sciences, who reported high rates of recombination at temperatures as low as 10 K, resulting in low photoluminescence efficiencies. “We confirm that based on theory this might happen if you have a large enough surface recombination velocity, so you can get low efficiency even at really low temperatures.”

In addition, they found that the angular distribution of the emitted light changed depending on the temperature and the diameter of the nanowires in the array, suggesting for the first time that temperature-controlled angle-resolved measurements could provide important insights for studies of the optical response of these structures.

[Image: Kivisaari and Anttu]

Combining models

Kivisaari describes Anttu as an expert in the scattering matrix method, a numerical approach for solving the Maxwell equations to calculate how an electromagnetic field propagates in a system. Fortunately, Kivisaari’s own expertise lies in carrier dynamics, so they were able to integrate the calculations for the wave-optical response of the system with the drift-diffusion equations for electrons and holes in the structure.

At the heart of their approach is the Lorentz reciprocity theorem – that the position of a light source and the resulting electromagnetic field can be interchanged without affecting the relationship between them. “We use the reciprocity principle to define how much the emission of light is enhanced in a specific mode,” says Kivisaari. “The larger the electromagnetic field of the mode in the nanowire structure, the stronger the emission, so we calculate an enhancement factor based on the electric field, which we then plug into the electron-hole recombination model or formulae.”
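A minimal sketch of that coupling, under assumed functional forms and constants (the enhancement profile below is invented, not computed from a scattering-matrix solution): a wave-optics-derived factor scales the local radiative recombination rate that a drift-diffusion model would then use.

```python
import numpy as np

# Illustrative coupling of a wave-optics enhancement factor into the
# radiative recombination term R = B * F(z) * n * p used by a
# drift-diffusion model. All profiles and constants are made up.
z = np.linspace(0.0, 2e-6, 200)          # position along the nanowire (m)
B = 1e-16                                 # radiative coefficient (m^3/s), assumed
n = 1e24 * np.ones_like(z)                # electron density (m^-3), assumed uniform
p = 1e24 * np.ones_like(z)                # hole density (m^-3), assumed uniform

# Stand-in enhancement factor: emission is boosted where the optical
# mode's field is strong (computed by the scattering-matrix step in
# the real model; here just a smooth bump).
F = 1.0 + 2.0 * np.exp(-((z - 1e-6) / 4e-7) ** 2)

R = B * F * n * p                         # local radiative recombination rate (m^-3 s^-1)
print(f"peak enhancement {F.max():.2f}x, peak rate {R.max():.2e} m^-3 s^-1")
```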

Using the reciprocity principle in this way, the model closely resembles what might be calculated for a solar cell, and Kivisaari believes there is likely much that can be learned about LEDs from solar cell research and vice versa. “The authors present an impressive computational method to analyse and design optoelectronic nanowire devices,” says Bernd Witzigmann – a world expert in nanowire research who heads the Computational Electronics and Photonics Group at the University of Kassel in Germany, and was not involved with the current research. He adds, “Their results highlight the importance of a multi-physics method that includes electromagnetic, electronic and thermal phenomena. Even more, in nano-devices, some of these phenomena are closely coupled, as shown in the emission features of a nanowire array LED. Computational design is a pre-requisite in order to exploit the full potential of an exciting technology.”

Next steps

Kivisaari is keen to come up with new device structures, especially based on diffusion-driven charge transport, and develop better simulation models. He is now a member of a group at Aalto University focused on electroluminescent cooling, where light carries away heat. He hopes to develop his model to understand the optoelectronics in those structures too, as well as emerging materials such as molybdenum disulphide. He remains motivated by technology that benefits the environment, such as photocatalytic water splitting to produce hydrogen fuel, which can also exploit nanowires and is another area he finds interesting for future work.

Full details of the combination model are available in Nano Futures.

Urban forests add to cities’ health and wealth

Planting more urban forests is a simple way not only to improve the health of a city’s people, but to make them wealthier too.

Climate scientists who calculated the value of urban forests to the world’s great cities have now worked out how town planners can almost double their money. Just plant 20% more trees.

More than half the world now lives in cities, and one person in 10 lives in a megacity: one that is home to at least 10 million people.

The trees that shade the parks and gardens and line the urban streets – London planes, limes, magnolias, pines and so on – are known to add to property values and to make living conditions better for millions who must endure the increasing heat extremes of the urban world.

Last year researchers put a value on the contributions of the urban forest: $500 million to the average megacity, they calculated, in pollution absorbed, temperatures lowered and moisture taken up.

More needed

Now Theodore Endreny, professor of environmental resources engineering at the State University of New York, and colleagues from Parthenope University in Naples, Italy, report in the journal Ecological Modelling that there is more to be done.

Tree canopies already cover 20% of the area of the 10 sample megacities, which are spread across five continents. The researchers combined models of tree cover, human population, air pollution, energy use, climate and spending power, and found room for improvement: the same cities could find room for 20% more forest.

“By cultivating the trees within the city, residents and visitors get direct benefits,” Professor Endreny said. “They’re getting an immediate cleansing of the air that’s around them.

“They’re getting that direct cooling from the trees, and even food and other products. There’s potential to increase the coverage of urban forests in our megacities, and that would make them more sustainable, better places to live.”

Cities are afflicted by the notorious heat island effect, and climate scientists have repeatedly warned that extremes of heat and humidity could rise to potentially lethal levels in many of the world’s great cities.

The latest study is part of a wider shift in approach by urban planners and civic authorities to seek ways to mitigate the climate change driven by ever more profligate consumption of fossil fuels, without actually adding to this consumption by installing ever more air conditioning plant.

And on the same day, a second team of scientists emphasised the same conclusion: work with nature to confront climate change and improve the lives of people in the developing world, who are put at risk by climate change driven in part by the despoliation of forests and the degradation of land.

They argue in the journal Science that a better understanding of the way nature – in the form of forests, wetlands, savannahs and all the creatures that depend on the natural world – underwrites human wellbeing should inform political and economic decisions.

Local knowledge

In many cases, this would involve attending to the wisdom and experience of local communities and indigenous people who depend more directly on nature’s riches.

“Nature’s contributions to people are of critical importance to rich and poor in developed and developing countries alike. Nature underpins every person’s wellbeing and ambitions – from health and happiness to prosperity and security.

“People need to better understand the full value of nature to ensure its protection and sustainable use,” said Sir Robert Watson, chair of the Intergovernmental Platform on Biodiversity and Ecosystem Services.

“This new inclusive framework demonstrates that while nature provides a bounty of essential goods and services, such as food, flood protection and many more, it also has rich social, cultural, spiritual and religious significance – which needs to be valued in policymaking as well.” – Climate News Network

• This report was first published in Climate News Network

Small modular nuclear reactors are a crucial technology, says report

Small modular nuclear reactors (SMRs) offer a way for the UK to reduce carbon dioxide emissions from electricity generation, while allowing the country to meet the expected increase in demand for electricity from electric vehicles and other uses. That is the claim of Policy Exchange – a UK-based centre-right think tank – which has published the report Small Modular Reactors: The next big thing in energy?. Written by the energy-policy specialist Matt Rooney, the report calls on the UK government to support the development of an SMR.

SMRs are usually considered to have electrical outputs of about 300 MW or less. In comparison, the Hinkley Point C facility currently under construction in southwest England will comprise two reactors, each capable of generating 1630 MW of electricity. SMR components would be standardized and manufactured at central facilities before being assembled on site. While the first few SMRs would be expensive to build, standardized mass production would bring down the price of subsequent units – according to proponents of the technology.

Falling prices

This, says Rooney, is unlike conventional large-scale power reactors, which have become more and more expensive over the years. In his report he argues that SMRs offer a much more cost-effective way of generating electricity. “Each unit would require a smaller investment than large reactors and their modular nature means that they can be built in a controlled factory environment where, with increased deployment, costs can be brought down over time through improved manufacturing processes and economies of volume,” writes Rooney.
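One common way to model such a claim (not taken from the report) is an experience curve, in which unit cost falls by a fixed fraction for each doubling of cumulative units built; a small sketch with assumed numbers:

```python
import math

# Illustrative experience-curve ("learning rate") sketch of how SMR unit
# costs might fall with cumulative deployment. The 10% learning rate and
# normalised first-of-a-kind cost are assumptions, not figures from the report.
def unit_cost(n_built, first_cost=1.0, learning_rate=0.10):
    b = math.log2(1.0 - learning_rate)   # cost falls by learning_rate per doubling
    return first_cost * n_built ** b

for n in (1, 2, 4, 8, 16):
    print(n, round(unit_cost(n), 3))     # 1.0, 0.9, 0.81, 0.729, 0.656
```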

Rooney claims that SMRs would be useful for smoothing-out fluctuations in solar and wind-generated energy. He points out that shortfalls in solar and wind are a significant problem in the winter, when demand is high and the UK can experience week-long periods of weak sunlight and light winds. Such fluctuations could be smoothed-out using batteries, but Rooney claims that creating sufficient battery capacity would be extremely expensive.

Heat and hydrogen

The report also claims that SMRs offer flexibility in terms of the type of energy they produce. When renewable output is high, Rooney says that SMRs could switch over to producing hydrogen by the electrolysis of water. The hydrogen could be injected into the UK’s natural gas grid to reduce carbon dioxide emissions from domestic boilers and other gas appliances. He also points out that waste heat from an SMR could be used to heat local buildings.

In December 2017, the UK government announced that up to £100m will be made available for the development of SMRs. Rooney says that the government should move swiftly to develop at least one SMR under this initiative.

Liver cancer organoids could enhance drug development

Researchers at the University of Cambridge have developed human liver cancer organoids – small three-dimensional (3D) versions of the human cancer grown in the laboratory – which behave the same outside of the human body as they would inside. The researchers isolated liver cancer organoids from patient samples to successfully grow seven “tumouroid” lines for about one year in culture without losing the cancer’s characteristics (Nature Medicine 23 1424).

Even after months of being cultured outside the human body, the human tumouroids exhibited their original metastatic potential when transplanted into a mouse. This 3D culture method is an attractive alternative to using animal models, which are expensive and highly time-consuming. These 3D models are also preferable to the two-dimensional models currently in use, in which cells start behaving very differently from how they would inside the body and their genetic profiles drift drastically from that of the original cancer.

The tumouroids expressed genes that corresponded well with those in the cancer from which the line originated. Cancers are known to develop random genetic alterations, which can differ from patient to patient – these tumouroids preserved each individual patient’s cancer-related genetic profile.

Correctly assessing genetic alterations is important for patient-specific therapies and for establishing the prognosis of the disease. The researchers demonstrated the effective use of the model by identifying several genes associated with liver cancer that had not been reported before. Furthermore, they found that the overexpression of four particular genes (C19ORF48, UBE2S, DTYMK and C1QBP) was associated with a poor survival prognosis.

The team also used the tumouroid lines as a drug-screening platform against 29 anti-cancer drugs that are currently in clinical use or under development. They applied the platform to identify patient-specific drug sensitivities. While many tumouroids were resistant to the majority of the drugs, some were sensitive to particular drugs depending on their specific genetic alterations.

The researchers have developed a valuable 3D model – by retaining the characteristics of the original cancer, the tumouroids can effectively be used for liver cancer research, drug screening and discovery of new biomarkers for drug development. This novel system could enable personalized medicine in liver cancer treatment.

Bacterial DNA sequencing could help design ‘friendlier’ nanoparticles

Metagenomic analyses of large groups of microorganisms can be used to evaluate how bacteria and other microbes behave when exposed to nanomaterials. The results from such studies could help screen novel nano-products with respect to their impact on the environment and focus future testing. They could also help in the design of more environmentally friendly nanoparticles based on a particle’s shape or its surface coating.

Studying the effect of nanomaterials on living cells is important since they are increasingly being used in consumer goods, say the researchers who led this new study, Peter Vikesland and Amy Pruden of Virginia Tech in the US. Sequencing the DNA of microorganisms after they have been exposed to nanomaterials is one way of doing this and it could even help in the design of nanoparticles that would be safer down the road.

Microbial communities play an important role in the ecosystem. They are now also routinely used in wastewater treatment, for example to remove pollutants such as pathogens, pharmaceuticals and other molecules, and are thus increasingly being exposed to nanoparticles. Until now, most microbial nanotoxicity studies have focused on pure cultures, but assessing real-world microbial communities provides richer ecological information.

Metagenomics is the high-throughput sequencing of microbial community DNA extracts and, because it directly sequences this DNA, it can be used to characterize changes in the genes themselves and their function.
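As a toy illustration of the kind of comparison this enables, the sketch below contrasts the relative abundance of a few functional gene categories in a control versus an exposed community; the categories and counts are invented, not data from the study.

```python
# Invented read counts per functional gene category, control vs exposed.
control = {"ammonia_oxidation": 520, "organic_degradation": 880, "stress_response": 140}
exposed = {"ammonia_oxidation": 310, "organic_degradation": 860, "stress_response": 410}

def relative(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

ctrl_rel, exp_rel = relative(control), relative(exposed)
for gene in control:
    shift = exp_rel[gene] - ctrl_rel[gene]
    print(f"{gene}: {shift:+.3f} change in relative abundance")
```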

Gold nanoparticles ideal for this study

Vikesland and Pruden and colleagues have now employed this technique to evaluate how nanoparticles, and especially their surface coating and morphology, affect bacterial populations. The researchers looked at how microbial communities (from waste-water-activated sludge) behaved when exposed to gold nanoparticles with various surface coatings and shapes. The nanoparticles studied were either spherical or rod-shaped and were functionalized with either cetyltrimethylammonium bromide (CTAB) or polyacrylic acid (PAA).

“Gold nanoparticles were ideal for this study because they are generally inert at their core,” explains Vikesland. “This allows us to manipulate and separate out the effect of morphology (for example, of rod versus sphere shapes) and surface coating (PAA as opposed to CTAB). In our experiments, we dosed particles with varying properties into a complex microbial community that is involved in purifying wastewater through the breakdown of ammonia and organic matter.”

[Image: TEM images of the nanoparticle suspensions]

“Our metagenomic sequence analyses revealed that the taxonomic and functional microbial community structure underwent a unique succession pattern in the presence of gold nanoparticles, and in particular spheres, relative to controls that were not exposed to nanoparticles,” he tells nanotechweb.org.

Metagenomics is a “sensitive tool”

The result illustrates that metagenomics is a very sensitive tool that might be used in future to screen novel nanomaterials for their potential environmental impact, and so help develop more focused testing, adds Pruden. “Surprisingly, we also learnt that sphere-shaped nanoparticles impact the microbial community more than rod-shaped ones and that the shape of nanoparticles is more important than their surface coating.”

The team, reporting its work in Nature Nanotechnology doi:10.1038/s41565-017-0029-3, says that it is now busy further developing tools for metagenomics-based risk assessment – in particular with respect to antibiotic-resistance genes and their relation to environmental stressors. “For example, we want to establish benchmarks for which types of genes and which profiles in specific microbial communities may be of concern and assign relative ranking criteria to inform future testing and monitoring.”

