The efficiency of optical rectennas made from carbon nanotube diode arrays can be improved by adding a double insulator layer to them. Doing this also makes the devices stable in air for the first time. The new rectennas could be used to harvest light and even waste heat to operate low-power devices – for example in Internet of Things (IoT) applications.
A rectenna is a combination of an antenna and a rectifier and converts alternating current (AC) into direct current (DC). An optical rectenna converts electromagnetic fields at optical frequencies into electrical current.
In this work, carried out by a team of researchers led by Baratunde Cola at the Georgia Institute of Technology, the antenna is an array of vertically aligned multiwalled carbon nanotube (MWCNT) diodes with open ends.
“We first made this device in 2015 and have now improved on it,” explains Cola. “We did this by sequentially depositing two different oxides (Al2O3 and HfO2) on the aligned CNT arrays and then depositing a top metal film at the tips of the array. We made electrical contact with leads connected to the metal films and to the substrate on which we grew the CNTs.”
Efficient broadband electromagnetic absorbers
The devices work by exciting AC waves in the CNTs with light, heat or any energy in the form of electromagnetic waves, he tells nanotechweb.org. CNTs are efficient broadband electromagnetic absorbers, and the arrays are particularly good at increasing the antenna’s light-absorbing efficiency.
“Since the AC waves move on the scale of femtoseconds, the diodes must open and close on this timescale too if they are to act as switches. The only diodes that can do this are tunnelling diodes, which work thanks to quantum mechanical tunnelling, so these are the ones we employed.”
The double insulator structure helps optimize the efficiency of this tunnelling: when the AC electron wave produced in the CNTs reaches the diode junction, the junction opens and electrons pass through. As the wave moves back in the opposite direction, the junction quickly closes, trapping many of these electrons. The result is a net unidirectional flow of electrons that generates current and a voltage in the millivolt range, which is enough to do useful work in the circuit and operate low-power devices.
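To get a feel for how an asymmetric tunnelling junction turns a zero-mean optical-frequency wave into a net current, here is a deliberately simple toy calculation. It is only a sketch of the general rectification principle, not the team’s device model: the I–V curve, drive frequency and amplitude below are all assumed for illustration.

```python
import numpy as np

# Toy rectification sketch (illustrative only, not the Georgia Tech device model):
# a zero-mean AC drive applied to an asymmetric, diode-like I-V curve produces
# a non-zero time-averaged (DC) current.

def diode_current(v, i0=1e-9, v_scale=0.2):
    """Idealized asymmetric I-V characteristic; i0 and v_scale are arbitrary toy values."""
    return i0 * (np.exp(v / v_scale) - 1.0)

t = np.linspace(0.0, 10e-15, 2001)                 # 10 fs window
v_ac = 0.05 * np.sin(2 * np.pi * 500e12 * t)       # 50 mV drive at 500 THz (assumed)

i = diode_current(v_ac)
print(f"Mean drive voltage: {v_ac.mean():.2e} V")  # essentially zero
print(f"Mean (DC) current:  {i.mean():.2e} A")     # positive, thanks to the asymmetry
```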
Powering IoT devices
Since the principle for converting heat into electricity is the same as that for light, Cola says that the rectenna could power IoT devices. This would be especially attractive if it were to harvest waste heat (from industrial pipes and smoke stacks, for example).
“With the growth of IoT, the need for power sources is huge and our technology could harvest radio-frequency energy, heat and light, making it the most versatile choice for powering a connected world,” he adds.
Wildfire, climate and ecosystems are interactive components of the Earth system (Bowman et al 2009, Andela et al 2017). Climate and fuel moisture, which is heavily influenced by atmospheric conditions, are primary drivers of fire occurrence and behavior, while vegetation provides the necessary fuels for combustion (Pyne et al 1996). On the other hand, fires can feed back on climate and ecosystems by emitting carbon and aerosols (Kloster et al 2010, Ward et al 2012, Urbanski 2014), which affect the global carbon cycle and atmospheric radiation. Removal of trees by fires and the subsequent multi-year vegetation regeneration modify albedo and leaf area index (Gitas et al 2012, French et al 2016), which further change the land–air fluxes of heat, water and momentum.
Research has traditionally focused on the impacts of climate and vegetation on wildfire, using approaches developed mainly from empirical and statistical weather–fire behavior relationships as well as empirical and process-based vegetation–fire relationships. Recent studies have turned more attention to the feedbacks of fires to climate and ecosystems (Liu et al 2013). One of the most sophisticated tools for understanding these complex interactions is Earth system modeling, such as the Community Earth System Model (CESM) (Hurrell et al 2013). An Earth system model includes atmospheric models to provide the environmental conditions for wildfires, such as droughts, to simulate the atmospheric radiation and climate effects of fire carbon and particle emissions, and to calculate the disturbances in land–air fluxes due to fire-induced changes in vegetation coverage, albedo and roughness. An Earth system model also includes vegetation models, such as dynamic global vegetation models (DGVMs) (Bachelet et al 2001), to simulate the carbon, water and nitrogen cycles in terrestrial ecosystems driven by atmospheric chemistry, climate, land-use and land-cover types, and disturbances such as fires.
An urgent issue in fire–climate–ecosystem interactions is the future trend of fires under climate change. Many general circulation models (GCMs) have projected significant climate change by the end of this century due to the greenhouse effect. This would affect weather conditions important to fire ignition and spread, including drought and heat-wave frequencies (IPCC 2013), wind strengths (McVicar et al 2012), and the potential for changes in lightning (Clark et al 2017). Climate change would also affect fuel loading and moisture (Flannigan et al 2015), which in turn affect all aspects of fire behavior (burned area, occurrence, duration, intensity, severity, seasonality, etc) (Spracklen et al 2009). A particular strength of Earth system models is their capacity to project future climate change together with its ecological and environmental consequences and feedbacks (Kloster et al 2012, Li et al 2014, Ward et al 2016).
In a recent study, Wotton et al (2017) extended the fire impacts of climate change from fire behavior to fire suppression in the Canadian boreal forest. The authors projected future fire intensity based on climate change projections from three GCMs and the Canadian Forest Fire Behavior Prediction System and found that the number of crown fires would likely increase. They examined future operational fire intensity thresholds used to guide fire suppression decisions and showed that the fraction of fires that are beyond the capacity of suppression would increase substantially, even doubling by the end of this century in some climate change scenarios.
These findings suggest the need for new developments and applications in Earth system modeling of fire–climate–ecosystem interactions. First, the human factor in fire termination should be treated more dynamically. Fire termination is an essential process, besides fire ignition and spread, in any fire module of a DGVM. It is determined both by natural factors such as weather, fuel availability and geographic barriers and by human causes such as suppression. The fraction of unsuppressed fires in most vegetation models is assumed to be inversely related to population density, with constants determined empirically or from historical data (Pechony and Shindell 2010). The finding of Wotton et al (2017) that more fires will escape suppression under a changing climate suggests that a climate factor needs to be included in the calculation of this fraction, as sketched below.
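As a purely illustrative sketch of the kind of modification being proposed here, the snippet below shows a density-only unsuppressed-fire fraction alongside a version scaled by a fire-weather factor. The functional form and all constants are hypothetical, chosen only to make the idea concrete; they are not taken from Pechony and Shindell (2010) or any specific DGVM.

```python
import math

def unsuppressed_fraction(pop_density, fire_weather_index=None,
                          c=0.05, k=0.02, gamma=0.5):
    """Hypothetical parameterization, for illustration only.

    Baseline: the fraction of fires escaping suppression decreases with
    population density. Optional extension: when a normalized (0-1)
    fire-weather index signals more intense, harder-to-suppress fires,
    the fraction is scaled upward, as suggested in the text.
    """
    base = c + (1.0 - c) * math.exp(-k * pop_density)
    if fire_weather_index is None:
        return base
    return min(1.0, base * (1.0 + gamma * fire_weather_index))

# Same population density, with and without a severe fire-weather signal
print(unsuppressed_fraction(50.0))        # density-only formulation
print(unsuppressed_fraction(50.0, 0.9))   # climate-aware formulation
```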
Second, post-fire vegetation restoration could differ between the present and the future. Besides DGVM simulation of post-fire dynamic tree regrowth, an Earth system model can also predict the long-term climatic impacts by specifying fire-induced disturbances in land-surface properties based on historical data. More crown fires and a larger fraction of unsuppressed fires, as revealed in Wotton et al (2017), mean longer periods of tree regeneration in the future under climate change than current estimates assume. The fire feedbacks to climate, and the impacts on the carbon cycle related to fire emissions and uptake by newly generated trees, would therefore be more significant in the future. This raises a potential problem with using the historical approach in Earth system modeling to investigate the feedbacks of fires to climate and ecosystems in the future.
Finally, the black carbon (BC)–albedo–snow feedback induced by wildfire could be weaker than previously estimated. Boreal fires contribute more BC to the Arctic than anthropogenic sources during the summer (Stohl et al 2006), and the deposition of BC reduces albedo and increases the solar radiation absorbed by the surface, which in turn accelerates snow and ice melting (Hansen and Nazarenko 2004). However, wildfires play an opposite role by removing trees, leading to more snow coverage and therefore larger albedo (French et al 2016). This would partially offset the snow- and ice-melting role of the BC–albedo–snow feedback. The finding of more crown fires under climate change from Wotton et al (2017) suggests an even greater importance of this tree-removal and snow-increase effect in the future. Simulations with Earth system models are needed to quantitatively compare these two opposing roles.
Regional differences need to be considered when drawing global implications from the findings of Wotton et al (2017). In tropical forests there are no snow-related feedbacks, and with more water and energy supply, tree regeneration might be less affected even with more crown fires in the future. In savannas, the above issues could be less significant because almost all fuels are removed by fires. Even in boreal regions outside Canada, fire suppression management and thresholds could differ, which would lead to varied impacts of climate change on crown fires and the unsuppressed fire fraction. This concern could be addressed through Earth system modeling and comparison of the impacts of climate change on fire severity and suppression across geographic regions.
Another concern is the magnitude of the projected climate change and its fire impacts. Among the three GCMs used in this study, the Hadley GCM typically projects wetter conditions across much of the forested area of Canada. The fire danger levels are therefore lower, and the summary results for fire behaviour indices, which emphasize the extremes, are less remarkable. Thus, the future increase in crown fires and the fraction of unsuppressed fires is generally much smaller for the Hadley GCM than for the other two GCMs. Analyses of climate change scenarios from other GCMs would provide more robust evidence and quantitative estimates of the magnitude of the fire impacts of climate change.
Acknowledgments
The author wishes to thank an ERL editorial board member for the valuable comments that have led to substantial improvement to the description of fires and the Earth system. This study was supported by the USDA National Institute of Food and Agriculture (NIFA) Agreement 2013‐35100‐20516.
In PET scans, patients are injected with radioactive tracers, which emit positrons that annihilate upon collision with electrons in tissue. The result is the release of photon pairs, which trigger signals in external detectors – enabling medical physicists to create maps of internal biological processes. Cherry says the process is “a beautiful example of Einstein’s E = mc2 equation”, before explaining how tumours can be diagnosed from their high uptake of the tracer.
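The arithmetic behind that remark is straightforward: each annihilation converts the rest mass of the electron-positron pair entirely into two photons, so each photon carries roughly 511 keV. A quick check using the standard constants:

```python
# Back-of-the-envelope check of the E = mc^2 remark: each electron-positron
# annihilation releases the pair's rest-mass energy as two back-to-back photons.
m_e = 9.109e-31       # electron mass, kg
c = 2.998e8           # speed of light, m/s
J_per_eV = 1.602e-19  # joules per electronvolt

energy_per_photon_keV = m_e * c**2 / J_per_eV / 1e3  # one photon per particle's rest mass
print(f"Energy per annihilation photon: {energy_per_photon_keV:.0f} keV")  # ~511 keV
```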
Depending on the scale of the affected area, patients often require a sequence of scans so that clinicians can piece together 3D images of their body. Therefore, the ability to image an entire body in one scanning procedure would bring many advantages. It could reduce running costs and the time required for scans, as well as the amount of radioactive tracer material required. Cherry also speaks about how the technology could be used to monitor and diagnose other medical conditions that affect large areas of the body, including infections and inflammations.
This is the final interview in a three-part series that profiles pioneering medical physicists. Last week’s video featured Katia Parodi of Ludwig-Maximilians University in Germany speaking about using acoustic signals to track ion-beam therapy. The previous week, we profiled Bas Raaymakers from UMC Utrecht, speaking about using magnetic resonance imaging (MRI) alongside radiotherapy. All three scientists are board members of Physics in Medicine & Biology, a journal published by IOP Publishing, which also publishes Physics World.
Cancer is a critical societal issue. Worldwide, in 2012 alone, 14.1 million cases were diagnosed, 8.2 million people died and 32.5 million people were living with cancer. These numbers are projected to rise by 2030, reaching 24.6 million newly diagnosed patients and 13 million deaths. While the rate of cancer diagnoses is growing only steadily in the most developed countries, less developed countries can expect a two-fold increase in the next 20 years or so. The growing economic burden imposed by cancer – amounting to around $2 trillion worldwide in 2010 – is putting considerable pressure on public healthcare budgets.
Radiotherapy, in which ionizing radiation is used to control or kill malignant cells, is a fundamental component of effective cancer treatment. It is estimated that about half of cancer patients would benefit from radiotherapy for treatment of localized disease, local control, and palliation. The projected rise in cancer cases will place increased demand on already scarce radiotherapy services worldwide, particularly in less developed countries.
In 2013, member states of the World Health Organization agreed to develop a comprehensive global monitoring framework for non-communicable diseases (NCDs). The aim is to reduce premature mortality from cardiovascular and chronic respiratory diseases, cancers and diabetes by 25%, relative to 2010 levels, which means 1.5 million deaths from cancer will need to be prevented each year.
Advanced cancer therapy techniques based on beams of protons or ions are among several tools that are expected to play a significant role in this effort (see Therapeutic particles). In addition, advanced imaging and detection technologies stemming from high-energy physics research – many driven by CERN and the physics community – are needed. These include in-beam positron emission tomography (PET) and prompt-gamma imaging, as well as treatment planning based on the latest Monte Carlo simulation codes.
Optimal dose
The main goal of radiotherapy is to maximize the damage to the tumour while minimizing the damage to the surrounding healthy tissue, thereby reducing acute and late side effects. The most frequently used radiotherapy modalities employ high-energy (MeV) photon or electron beams. Conventional X-ray radiation therapy is characterized by almost exponential attenuation and absorption, delivering the maximum energy near the beam entrance but continuing to deposit significant energy at distances beyond the cancer target. For X-ray beams with energies of about 8 MeV, the maximum energy deposition is reached at a depth of 2–3 cm in soft tissue. To deliver dose optimally to the tumour while protecting surrounding healthy tissues, radiotherapy has progressed rapidly through the development of new technologies and methodologies. The latest developments include MRI-guided radiotherapy, which combines simultaneous MRI imaging and photon irradiation. Such advanced radiation-therapy modalities are becoming increasingly important and offer new opportunities to treat different cancers, in particular in combination with other emerging areas such as cancer immunotherapy and the integration of sequencing data with clinical-decision support systems for personalized medicine.
Figure 1: The Bragg peak of protons and other ions
However, looking at the dose-deposition profile of photons compared with that of other particles (figure 1), the conspicuous feature of the graph is that, in the case of protons and carbon ions, a significant fraction of the energy is deposited in a narrow depth range near the endpoint of the trajectory, after which very little energy is deposited. It was precisely this difference in dose deposition – the so-called Bragg-peak effect – that led the visionary physicist and founder of Fermilab, Robert Wilson, to propose the use of hadrons for cancer treatment in 1946.
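To make the contrast in figure 1 concrete, here is a deliberately simplified numerical sketch. The two curves are toy functional forms chosen only to reproduce the qualitative shapes described above (photon build-up followed by near-exponential fall-off versus a sharp Bragg peak); they are not clinical beam models.

```python
import numpy as np

# Toy depth-dose curves (qualitative shapes only, not clinical beam models).
depth = np.linspace(0.0, 25.0, 251)  # depth in soft tissue, cm

# Photon beam: short build-up (maximum near ~3 cm), then near-exponential fall-off
photon = (1.0 - np.exp(-depth)) * np.exp(-0.05 * depth)
photon /= photon.max()

# Proton-like beam: modest entrance dose, sharp Bragg peak near 15 cm,
# and essentially no dose beyond the peak
proton = 0.3 + 0.02 * depth + 0.7 * np.exp(-((depth - 15.0) / 0.8) ** 2)
proton[depth > 16.5] = 0.0
proton /= proton.max()

for d in (3, 10, 15, 20):
    i = np.argmin(np.abs(depth - d))
    print(f"depth {d:>2} cm: photon {photon[i]:.2f}, proton-like {proton[i]:.2f}")
```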
Several advantages
Hadron or particle therapy is a precise form of radiotherapy that uses charged particles instead of X-rays to deliver a dose of radiation to patients. Radiation therapy with hadrons (protons and other light ions) offers several advantages over X-rays: not only do these particles deposit most of their energy at the end of their range, but particle beams can also be shaped with great precision. This allows for more accurate treatment of the tumour, destroying the cancer cells while causing minimal damage to surrounding tissue. Radiotherapy exploiting the unique physical and radiobiological properties of charged hadrons also allows highly conformal treatment of various kinds of tumours, in particular those that are radio-resistant.
Figure 2: Hadron therapy facilities worldwide
Over the past two decades, particle-beam cancer therapy has gained huge momentum. Many new centres have been built, and many more are under construction (figure 2). At the end of 2016 there were 67 centres in operation worldwide and another 63 under construction or in the planning stage. Most of these are proton centres: 25 in the US (protons only); 19 in Europe (three dual centres); 15 in Japan (four carbon and one dual); three in China (one carbon and one dual); and four in other parts of the world. By 2021, 130 centres are expected to be operating in nearly 30 countries. European centres are shown in figure 3, while figure 4 shows that the cumulative number of treated patients is growing almost exponentially.
At the end of 2007, 61,855 patients had been treated (53,818 with protons and 4,450 with carbon ions). By the end of 2016 the number had grown to 168,000 (145,000 with protons and 23,000 with carbon ions). This is due primarily to the greater availability of dedicated centres able to meet the growing demand for this particular form of radiotherapy, and the growth rate will most probably increase further as patient throughput per centre rises.
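A quick calculation makes the “almost exponential” growth concrete: going from roughly 62,000 treated patients at the end of 2007 to 168,000 at the end of 2016 corresponds to the cumulative total growing by about 12% per year on average.

```python
# Implied average annual growth rate of the cumulative patient totals quoted above.
patients_2007 = 61_855
patients_2016 = 168_000
years = 2016 - 2007

annual_growth = (patients_2016 / patients_2007) ** (1 / years) - 1
print(f"Average annual growth of the cumulative total: {annual_growth:.1%}")  # ~12%
```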
Figure 3: European hadron therapy facilities
Particle-physics foundation
High-energy physics research has played a major role in initiating, and now expanding, the use of particle therapy. The first patient was treated with hadrons at Berkeley National Laboratory in the US in September 1954 – the same year CERN was founded. The treatment was made possible by Ernest Lawrence’s invention of the cyclotron and his subsequent collaboration with his brother John, a medical doctor. The first hospital-based particle-therapy centres opened in 1989 at Clatterbridge in the UK and in 1990 at the Loma Linda University Medical Center in the US. Before this time, all research related to hadron therapy and patient treatment was carried out in particle-physics labs.
In addition to the technologies and research facilities coming from the physics community, the culture of collaboration at the heart of organizations such as CERN is finding its way into other fields. It inspired the European Network for Light Ion Therapy (ENLIGHT), which has now been running for 15 years and promotes international discussions and collaboration in the multidisciplinary field of hadron therapy (see Networking against cancer).
Figure 4: Patients treated with protons and carbon ions worldwide
Were it not for the prohibitively large cost of installing proton-therapy treatment in hospitals, it would be the treatment of choice for most patients with localized tumours. Proton-therapy technology is significantly more compact today than it once was, but when combined with the gantry and other necessary equipment, even the most compact systems on the market occupy an area of a couple of hundred square metres. Most hospitals lack the financial resources and space to construct a special building for proton therapy, so we need to make facilities smaller and cheaper, with costs of around $5–10 million for a single room, similar to state-of-the-art photon-therapy systems. An ageing population, and the need for a more patient-specific approach to cancer treatment and other age-related diseases, present major challenges for future technologies to control rising health costs, while continuing to deliver better outcomes for patients. Scientists working at the frontiers of particle physics have much to contribute to these goals, and the culture of collaboration will ensure that breakthrough technologies find their way into the medical clinics of the future.
Most treatments targeting Alzheimer’s disease focus, to no avail, on directly reducing the accumulation of cognitive function-impairing amyloid plaques in the brain. A research team from EPFL, working in animal models and human neuronal cells, has shown that tackling mitochondrial defects might instead be the key to designing new efficient treatments (Nature 552 187).
A hallmark of Alzheimer’s disease is the accumulation of toxic plaques formed by the abnormal aggregation of beta-amyloid protein in the brain. While no definitive cure exists, many treatments have tried to reduce the development of these plaques. Recently, a tentative vaccine managed to bind virus-like particles to the plaques, opening the door to a potential immunogenic therapy for Alzheimer’s disease (Towards a vaccine for Alzheimer’s disease).
However, none of these approaches has been validated in humans or proved effective. Consequently, other methods are being investigated, and Johan Auwerx and his team at EPFL are taking an original path: approaching Alzheimer’s as a metabolic disease.
Under this paradigm, Alzheimer’s disease would be caused by defects in the chemical reactions of the body’s cells, which alter their normal metabolism – for example, the process of converting nutrients into energy at the cellular level.
To support this view, the researchers point to the fact that mitochondria, the energy-producing powerhouses of cells, are dysfunctional in the brains of Alzheimer’s patients. As cells age, they are exposed to increasing levels of damage, which affects their mitochondria and renders them dysfunctional. Such mitochondria would normally be replaced through autophagy, but over time cells become less efficient at removing defective mitochondria – and less able to defend themselves against Alzheimer’s disease.
A problem of quality control?
The researchers identified two key mechanisms controlling the quality of mitochondria: mitophagy, a process that recycles defective mitochondria; and mitochondrial unfolded protein response (UPRmt), which protects mitochondria from stress stimuli. These actions are key to delaying or preventing excessive mitochondrial damage. The team therefore hypothesized that boosting the activity of these two mechanisms might help to slow the progression of the disease.
To test their hypothesis, the researchers attempted to switch on the UPRmt and mitophagy processes by administering two commonly used drugs – the antibiotic doxycycline and the vitamin nicotinamide riboside – to various in vitro and in vivo models.
Comparison between diseased and treated mouse brains
Next step: human trials
In a worm model of amyloid-beta disease, treated animals showed a remarkable improvement in health, performance and lifespan compared with the control group. Protein analysis also revealed a significant reduction in plaque formation in worms exposed to the drugs. The same results were observed in cultured human neuronal cells.
Experiments in a mouse model of Alzheimer’s disease were even more encouraging. Not only did the mice exhibit the same improvement in their mitochondrial function and a reduction in the number of amyloid plaques, but the researchers also observed a striking normalization of their cognitive function, the main objective for any treatment of Alzheimer’s disease.
According to the World Health Organization, Alzheimer’s disease affects 35 million people around the world, with the figure expected to exceed 100 million by 2050. With life expectancy increasing, finding effective approaches to tackle Alzheimer’s is a necessity. The results of Auwerx’s work are yet to be validated in humans, but they may pave the way for a metabolic approach to treating the disease. The fact that the two mitochondrial quality-control processes identified can be activated in the same way in worms, mice and cultured human cells is cause for optimism.
A shark skin-inspired design can dramatically improve the lift of an aerofoil, according to researchers in the US. The tiny tooth-like scales on a shark’s skin, called denticles, have previously been shown to reduce drag; this latest research shows that they also boost the lift-to-drag ratio of an aerofoil. As well as offering paths to improved aerodynamic design, the researchers say that their work provides important insight into the role of shark morphology in swimming efficiency.
Like most fish, sharks have skin covered in scales. But shark scales are different to those of most fish. Known as dermal denticles, they resemble teeth, and their top surfaces feature ridges that run from front to back. “Their structure varies from species to species and even within different regions of a shark,” explains August Domel at Harvard University.
Thrust generation
Previous studies have shown that the denticles generate vortexes that reduce drag, improving the hydrodynamic performance of sharks and allowing them to swim faster. But Domel and colleagues wondered if there was more to the story. “We hypothesized that on a shark, these denticles may also be beneficial for thrust generation by enhancing the force normal to the denticle,” Domel told Physics World. “With this in mind, we wanted to know if these denticles could, on an aerofoil, enhance the force normal to the denticle – generating lift – and reducing drag.”
To test their theory, the researchers looked to the fastest shark in the world, the shortfin mako, Isurus oxyrinchus. They took micro-computed tomography scans of the shark’s denticles and then 3D printed 20 aerofoils with idealized models of the denticles on the top edge. The model denticles had a curved front-to-back profile and three ridges: a central ridge and two shorter, lower-profile outer ridges that curved inwards slightly at the front. The denticles were the same shape across the different aerofoils, but varied in arrangement (single or multiple rows), position, size and tilt angle.
Enhanced performance
When the researchers tested these aerofoils in a water flow tank they found that while most behaved similarly to a denticle-free control, some exhibited significantly enhanced performance. The best performing aerofoil featured a single row of roughly 2 mm wide denticles spaced 1 mm apart, positioned a quarter of the way along the length of the aerofoil from the front.
On this best-performing aerofoil, lift was increased and drag reduced at almost all angles of attack, compared with the denticle-free control. The denticle aerofoil even generated lift at zero angle of attack, while the control generated none. Overall, the denticles produced improvements in lift-to-drag ratio of up to 323% compared with the control.
Separation bubble
According to the researchers, there are two mechanisms responsible for these results. The denticles generate a short separation bubble that provides extra suction to the aerofoil, enhancing lift, and the curvature of the denticles creates low-profile vortexes that reduce drag and prevent lift losses at higher angles of attack.
By increasing lift, the researchers say, the shark skin probably enhances thrust and increases the shark’s self-propelled swimming speed. They add that the mechanisms discovered could also be used to improve aerodynamic design.
“All lifting surfaces in aerial devices, such as drones, airplanes, and wind turbines, are composed of aerofoils,” explains Domel. “Enhancing lift and reducing drag on an aerofoil ultimately leads to lower energy consumption for these aerial devices, and our designs have shown a lot of potential so far in improving these aerodynamic features on an aerofoil.”
Quasiparticles within a rotating sample of superfluid helium-3 create unexpected friction in a material that is supposed to undergo frictionless flow, according to Jere Mäkinen and Vladimir Eltsov at Aalto University in Finland. Their discovery could have a wide range of applications from neutron stars to quantum computers.
Helium-3 atoms are fermions, which pair up at ultracold temperatures to form bosons that can then create a superfluid – a fluid with zero viscosity that undergoes frictionless flow. This pairing interaction is somewhat similar to what happens to electrons in superconductors and could also occur for neutrons within the cores of neutron stars.
Physicists know of two distinct phases of superfluidity in helium-3. The B phase occurs at low pressure and low temperature, whereas the A phase occurs at higher pressures and temperatures.
In a spin
In their experiment, Mäkinen and Eltsov rotate a cylindrical container of the B-phase superfluid, which creates vortexes within the helium-3. The container is then stopped from rotating and the pair monitor the continuing rotation of the superfluid using nuclear magnetic resonance and two quartz tuning forks immersed at the bottom of the container.
These vortexes should rotate in a smooth and stable manner and should not be able to exchange kinetic energy with their surroundings. Instead, the physicists observe deviations from vortexes with perfect cylindrical symmetry that lead to turbulence. This indicates that the helium-3 is not flowing in a perfectly frictionless manner.
Writing in Physical Review B, Mäkinen and Eltsov suggest that the source of this friction is quasiparticles that become trapped within the cores of vortexes. As the vortexes accelerate, the quasiparticles gain energy, which they can then dissipate to their surroundings in the form of friction.
Neutron star glitches
Understanding how to minimize this type of friction could be important to those trying to boost the performance of components for superconductor-based quantum computers, which involve the flow of superfluid-like supercurrents. The energy-dissipation process observed at Aalto could also provide insight into the physics of neutron stars, which are believed to have a superfluid component in their cores. Unexplained sudden changes in the rotation speeds of some neutron stars – called “glitches” – could be caused by a similar phenomenon, for example.
The research could even provide insights into why turbulence occurs in everyday materials, something that has proven very difficult to calculate.
Here at Physics World we’re planning to publish special issues later this year on time (in July) and on SI units and measurement (in November). So to find out more about the latest work in these fields, we took up a long-standing invitation to visit the National Physical Laboratory (NPL) in Teddington, just outside London, yesterday.
The NPL is the UK’s national measurement laboratory and, as we found out on our visit yesterday, it’s home to almost 900 staff. Or, to be precise, there are 879 staff – after all, accuracy is the name of the game for the NPL, which provides vital measurement services to industry and plays a key part in global attempts to revise the SI measurement system.
It was a busy day for us and we came away with some useful ideas for possible articles for those two Physics World special issues. But rather than giving you a detailed run-down of everything we saw, here instead are 12 “fun” facts we picked up during our visit. Sometimes it’s those little details that make lab visits so intriguing.
Cool stuff
1 The NPL houses what it calls “the most accurate thermometer on Earth”, which yields Boltzmann’s constant by measuring the speed of argon atoms.
2 The experiment is run by Michael de Podesta, who says the most famous person to visit his experiment has been the actor and writer Stephen Fry.
3 Until 1960, the metre was defined using etchings on a metal bar with an X-shaped cross section, made from 90% platinum and 10% iridium.
4 The NPL’s copy of the “Tresca bar”, which is now redundant, has a scrap-metal value of £150,000 and is hidden in a vault that can only be accessed by four staff, who have special keys.
5 The metre is now defined in terms of the speed of light, and the NPL has five helium-neon lasers against which companies can have their own lasers calibrated.
6 The lasers are called Mildred, Donald, R18, Sydney and G14. Donald is the best of the lasers and Andrew Lewis – the lab’s “head of length” – joked that this Donald was “the most stable thing we have”.
Metre master – Andrew Lewis, the National Physical Laboratory’s wonderfully titled “head of length”, actually has to be more of an expert in measuring frequencies
7 The NPL has a secret lab where scientists are trying to create a working “indoor GPS” system, which would locate the position of objects to micrometre accuracy using the same principle by which mobile phones are pinpointed using satellites.
8 The NPL is the birthplace of the “Watt balance”, which measures mass by balancing gravitational and electrical forces (see the sketch after this list), and is also known as the “Kibble balance” after its inventor Bryan Kibble (1938-2016), who worked at the lab for many years.
9 The NPL has housed two of the 10 or so Watt balances that have ever been built, but the magnet from the first balance can’t be moved out of its home in the historic Bushey House because it weighs six tonnes.
11 Ian Robinson and colleagues at the NPL are trying to build a small version of a Watt balance that can be used by companies once the kilogram has been redefined.
Mass masters – Ian Robinson (right) and Stuart Davidson with their test version of a small-scale Watt balance.
12 The NPL also works on airborne particle metrology, with Paul Quincey among those studying “black carbon” – soot particles spewed out from vehicles. These particles and others are responsible for about a quarter of anthropogenic climate change.
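Returning to fact 8, the principle of the Watt (Kibble) balance can be boiled down to one line. In the weighing mode the electromagnetic force on a current-carrying coil balances the weight of the mass (mg = BLI); in the moving mode the same coil is swept through the field and its induced voltage measured (U = BLv). Dividing the two eliminates the coil geometry and gives m = UI/(gv). The numbers in the sketch below are illustrative, not real NPL values.

```python
# Minimal sketch of the Watt (Kibble) balance principle from fact 8.
# Weighing mode:  m * g = B * L * I   (electromagnetic force balances weight)
# Moving mode:    U = B * L * v       (same coil swept through the same field)
# Combining them: m = U * I / (g * v), with the geometric factor B*L cancelled.
# All numbers below are illustrative, not NPL measurements.

g = 9.80665      # local gravitational acceleration, m/s^2 (assumed)
U = 1.0          # induced voltage in moving mode, V (illustrative)
v = 2.0e-3       # coil velocity in moving mode, m/s (illustrative)
I = 0.0196133    # balancing current in weighing mode, A (illustrative)

m = U * I / (g * v)
print(f"Inferred mass: {m:.4f} kg")  # ~1 kg with these toy numbers
```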
We learned a lot more than the above of course – and there’s a huge amount of work at the lab we didn’t get time to see. Still, for more on metrology, stay tuned for our upcoming special issue. In the meantime, don’t miss Michael de Podesta’s feature on measuring temperature and this great overview about the idea behind redefining SI units.
In a world where post-truth, alternative facts and fake news are everyday terms, a team of researchers have developed a strategy to help debunk misinformation about climate change.
Dr John Cook, from the Centre for Climate Change Communication at George Mason University, Virginia, is the paper’s lead author. He said: “Misinformation spreads easily, and can have profound consequences for society if left uncorrected. Climate science is particularly problematic because it describes such a complex system.”
Dr Cook explained: “When people lack the expertise to evaluate the science, they tend to substitute judgment about something complex (i.e., climate science) with something simpler (i.e., the character of people speaking about climate science). This can leave them vulnerable to misleading information. The advantage of our approach is that you don’t need to be an expert in argumentation or climate science to put it to use.”
The team’s six-step process, based on critical-thinking methods, helps people detect and analyse poor reasoning. It includes detailing argument structures, determining the truth of premises, and checking for validity, hidden premises and ambiguous language.
Co-author Peter Ellerton, from the University of Queensland, said: “Often, refuting denialist arguments focuses on scientific information – showing that temperatures are in fact rising, or that there is indeed a scientific consensus that human activity is responsible.
“We complement this approach by helping find the flaws in misinforming arguments and explain how the reasons they offer don’t support their conclusions.”
The researchers applied their approach to 42 common climate science denialist claims, and found that, in a variety of different ways, all demonstrated erroneous reasoning.
Dr Cook explained: “A common denialist claim that we’ve seen recently goes: ‘It’s cold outside, so global warming isn’t happening’. If we break the logic of this argument down, we see that it implies that because some parts of the world are experiencing record cold temperatures, the world cannot be warming. While the first premise is true – some places are seeing record cold temperatures – the argument as a whole is false, because global warming doesn’t mean that cold will never happen anywhere.
“Instead, it means cold events are less likely to happen over time. Global warming is like rigging the weather dice, making it more likely to get hot days.”
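Laid out in the premise-conclusion style the researchers advocate, the example looks like this (our own paraphrase for illustration, not code or text from the paper):

```python
# The "it's cold outside" claim written out as premises and a conclusion,
# in the spirit of the six-step approach (our own paraphrase, not the authors' material).
premises = {
    "P1": "Some locations are currently recording unusually cold temperatures.",  # true
    "P2 (hidden)": "If any location is cold, the global average temperature cannot be rising.",  # false
}
conclusion = "Global warming is not happening."

# The argument only becomes formally valid once the hidden premise P2 is made
# explicit, and it still fails because P2 is false: warming refers to the
# long-term global average, which is entirely compatible with local cold snaps.
for label, text in premises.items():
    print(f"{label}: {text}")
print(f"Conclusion: {conclusion}")
```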
Co-author David Kinkead from the University of Queensland added: “Despite finding fallacies of reasoning in every one of the 42 claims we analysed, we need to recognise how effective they can be in misinforming the public. So, it’s essential that we refute and neutralise the influence of misinformation.
“We hope our work will act as a building block for developing educational and social media resources, which teach and encourage critical thinking through the examination of both misinformation and fallacious reasoning.”
The use of tissues with cells previously removed (decellularized) is gaining traction in tissue engineering research. These decellularized grafts serve as a scaffold for new cells to grow in an environment similar to native conditions, because decellularization minimally affects the tissue microstructure. The preservation of this microstructure and other functional compounds is believed to enhance cell attachment and proliferation and, hence, healing of the tissue.
Now, researchers from the Department of Plastic Surgery at Shanghai Jiao Tong University, led by Guangdong Zhou, have shown that these constructs promote faster and better healing of cartilage defects in a porcine model (Biomed. Mater. 13 025016). Furthermore, when they added cells from the host into the decellularized construct before implantation, they observed even better results.
Schematic representation of decellularization process
Is animal tissue a suitable material?
Although the use of decellularized tissues for cartilage repair has been reported in numerous small-animal studies, fewer groups have studied their implantation in large animals. In this study, the researchers produced decellularized cartilage from pigs, characterized it and implanted it into cartilage defects of different pigs – a process known as allotransplantation (transplantation between individuals of the same species). They used polyglycolic acid (PGA), a polymer widely used in research and the clinic, as a control.
After decellularization, the processed tissues maintained their structure and some biofunctional compounds, such as growth factors – molecules that promote cell metabolism and proliferation. This finding was crucial for proving that the microstructure and biofunctionality of the processed tissue were preserved. In addition, the processed tissues showed higher compatibility with cells than the polymer scaffolds.
Acellular cartilage and PGA scaffolds with and without stem cells
Decellularized cartilage performs the best
Next, the researchers implanted three materials into surgically created cartilage defects: decellularized cartilage either loaded or unloaded with stem cells derived from the bone marrow of the same animal, or PGA scaffolds loaded with stem cells. This enabled them to analyse three different conditions, together with a non-treated group used as a control. Interestingly, the unloaded decellularized tissue provided better healing than the cell-loaded PGA material, as evidenced by better tissue regeneration and recovery of mechanical compressive properties. This suggests that unloaded decellularized scaffolds trigger a better host response than polymer scaffolds, even ones loaded with the animal’s own cells.
However, the most promising results came from the implanted decellularized cartilage loaded with stem cells. After implantation, this material promoted the conversion of stem cells into cartilage cells, a process known as chondrogenic differentiation, which the authors linked to superior healing compared with the other conditions. This confirms the potential of allografts combined with autologous cells (cells coming from the patient) for treating degenerated cartilage.
Future perspectives: a long road to walk
Despite the positive results observed in this study, the authors point out that “the underlying mechanisms … are still unknown”. In addition, the use of decellularized tissues is a double-edged sword, and many issues such as “biosafety and long-term fate” still need to be investigated. Nonetheless, the current study marks an important step on the path towards achieving complete regeneration after treatment of cartilage defects, although there is still a long way to go.