A new scanning technique that uses hyperpolarized carbon-13 MRI to monitor the metabolism of different types of breast cancer can identify how rapidly a tumour is growing. The technique, developed by researchers at Cancer Research UK Cambridge Institute and the University of Cambridge, could help doctors prescribe the best course of treatment for a patient and follow how they respond.
Breast cancer accounts for roughly a quarter of all cancer cases worldwide and is the leading cause of cancer death in women. There are many subtypes of breast cancer, with some being more aggressive than others. The new technique is the first to be able to detect differences in a tumour’s size, type and grade (a measure of how fast it is growing), while also yielding information on the variations in metabolism between different regions within the tumour.
Measuring pyruvate metabolism
The technique works by measuring how fast a tumour metabolizes a naturally-occurring sugar-like molecule called pyruvate. In their experiments, the researchers used pyruvate that they had labelled with carbon-13, a heavier isotope of carbon. They then hyperpolarized, or magnetized, this carbon-13 pyruvate by cooling it to -272°C and exposing it to an extremely strong magnetic field and microwave radiation in a special machine called SPINlab. The frozen pyruvate sample was then thawed, dissolved into a solution and injected into the patient, who immediately underwent an MRI scan.
Tumours make use of large amounts of sugar and take up more pyruvate than normal tissue, explains team member Ramona Woitek. Inside the tumour, pyruvate is converted into lactate as part of a natural metabolic process. Since magnetizing the carbon-13 pyruvate molecules increases the strength of the MRI signal by 10 000 times, the researchers can monitor this process and visualize it dynamically in MRI scans.
The rate of pyruvate metabolism – and the amount of lactate produced – varies not only between different tumours, but also between different regions of the same tumour. By monitoring this conversion in real-time, the researchers say they are able to determine the type of cancer being imaged. They can also determine how aggressive a tumour is, as faster growing tumours convert pyruvate more rapidly than less aggressive ones.
MR imaging of a breast tumour. Left panel: anatomic image of a breast cancer (arrows); middle panel: pyruvate in the tumour; right panel: lactate in the tumour. (Courtesy: R Woitek)
Monitoring chemotherapy
The technique will be useful for monitoring patients undergoing chemotherapy, says Woitek. By imaging a tumour before and after therapy, and repeatedly during the course of treatment, clinicians could determine how effective a given treatment is. “Identifying patients that do not respond to a treatment will thus allow us to change the therapeutic strategy early on,” she says. “Conversely, we may even be able to reduce a treatment dose if a patient is responding well, thus sparing them unnecessary side effects.”
“Researchers are beginning to understand that the many different types of breast cancer respond differently to different treatments,” Woitek tells Physics World. “The new hyperpolarized carbon-13 MRI approach could allow us to identify the optimal treatment for each individual patient.”
The researchers, who report their work in PNAS, have tested their technique on seven patients, all with different types and grades of cancer. They say they now hope to study larger groups of patients.
The first highly transparent, touch-responsive and conducting ultra-flexible thin sheets of indium tin oxide (ITO) have been made by researchers in Australia. Made using a new liquid metal printing technique, the ITO sheets are just 1.5 nm thick and can be deposited onto a variety of substrates – which can then be rolled up like a tube. They might be used to make the touchscreens of the future and could potentially be manufactured via roll-to-roll (R2R) processing – just like newspapers.
ITO is a transparent semiconductor and is used in applications such as touchscreens, smart windows and displays. The conventional way to make ITO involves evaporating the material in high vacuum and condensing it onto a surface such as a glass sheet using sputtering or pulsed laser deposition. These techniques are costly and time consuming. What is more, a relatively thick layer of ITO needs to be deposited to obtain a fully conductive film, which makes it brittle and thus unsuitable for flexible electronics applications.
Liquid ITO alloy
The new method was developed by a team led by Torben Daeneke of RMIT University in Melbourne and Dorna Esrafilzadeh and Kourosh Kalantar-Zadeh from the University of New South Wales, Sydney. The technique uses a liquid ITO alloy that melts between 150°C and 200°C. The researchers allowed this melted alloy to oxidize and cool down in air and tuned its composition so that the natural surface oxide has the same composition as commercial ITO.
They discovered that the oxidation process is self-limiting, meaning that the oxide always has the same thickness – of about 1.5 nm. If the surface of this liquid metal is then brought into contact with a substrate such as glass or plastic, the nanometre-thick 2D ITO sheets adhere to the substrate.
The team also found that when they printed two layers of ITO, they observed a van der Waals gap between them, indicating that they had made a new type of ITO. The films can be laid on top of each other like thin paper and the greater the number of layers, the more conductive the material is. “You can bend hundreds of these ‘papers’ together and they don’t break,” explains Kalantar-Zadeh.
Much more transparent
The liquid metal printed ITO is much more transparent than conventional ITO while still being highly conductive (it has a sheet resistance of just 5.4 kΩ), adds Daeneke. Indeed, a single layer of 2D ITO absorbs only about 0.7% of visible light – roughly 8-10 times less than a single layer of graphene (another highly transparent 2D material made of a sheet of carbon atoms) and well below the 5-10% absorbed by standard conductive glass. And since 2D ITO is extremely thin, its mechanical properties change, making it highly flexible. This will allow the creation of a new generation of flexible, transparent and printed electronics, say Kalantar-Zadeh and Daeneke.
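One common yardstick that combines those two numbers is Haacke's figure of merit for transparent conductors, T¹⁰/Rs. The short sketch below is only a rough illustration using the values quoted above – it assumes the 0.7% figure is the total optical loss, so the transmittance is about 99.3% – and is not a calculation from the Nature Electronics paper.

```python
# Rough, illustrative figure-of-merit estimate (assumed values, not from the paper)
transmittance = 1.0 - 0.007   # single 2D ITO layer absorbs ~0.7% of visible light
sheet_resistance = 5.4e3      # quoted sheet resistance, in ohms

haacke_fom = transmittance**10 / sheet_resistance   # Haacke figure of merit, in 1/ohm
print(f"Haacke figure of merit ~ {haacke_fom:.2e} per ohm")
```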
The fabrication process is extremely easy and accessible to all, they add. The fact that the 2D ITO can be printed at low temperatures and in air not only makes it cheaper than conventional methods but also resolves the size limitations dictated by techniques that require a vacuum.
Fully functional touchscreens
The researchers showed that their technique can produce centimetre-sized samples that are of a high enough quality to make fully functional touchscreens. They have also applied for a patent for their technology.
“Our technique could change the way we make transparent electronics”, they tell Physics World. “In the future we could simply print displays and touchscreens like we print newspaper. And since the 2D ITO is highly flexible, we could also create a new generation of displays that can be rolled up or folded.”
The researchers report their work in Nature Electronics and say they are now working on up-scaling their process. “We expect that automation will allow us to produce much larger samples than the centimetre-sized 2D ITO sheets we have produced thus far,” say Kalantar-Zadeh and Daeneke. “To this end, we are now looking for commercial partners that will help us move towards metre-scale production.”
An optical fibre tipped with an inorganic scintillator makes an effective real-time dosimeter for small-field radiotherapy. Researchers at Aix-Marseille University and the Paoli-Calmettes Institute in France created such a device and compared its performance with a pair of commercial small-field dosimeters. The team found that the new device has a much smaller sensitive volume and is less susceptible to detector noise from Cherenkov radiation. It also exhibits excellent dose–response linearity and its output is stable over time (Med. Phys. 10.1002/mp.14002).
Using radiotherapy to treat early-stage tumours or tumours surrounded by critical organs requires small, sharp-edged radiation fields. Verifying the dose delivered in these cases means using a dosimeter with a correspondingly small sensitive volume: larger radiation detectors lack the spatial resolution needed to capture the high lateral dose gradients at the field margins.
To produce a dosimeter suitable for the job, Sree Bash Chandra Debnath and colleagues used silver-doped zinc sulphide (ZnS:Ag) as a scintillator – a compound long known to emit visible light when irradiated by X-rays. The team chose an inorganic scintillator because organic compounds generate more charged particles under ionizing radiation, contaminating the signal with high levels of Cherenkov radiation.
The researchers fabricated the dosimeter by dipping the end of an optical fibre into a mixture of powdered ZnS:Ag and poly(methyl methacrylate) (PMMA) dissolved in an organic solvent. After drying, the result was a ZnS:Ag-filled PMMA sphere about 200 µm across, which they coated in silver to keep out ambient light. At the other end of the optical fibre, the researchers fitted the photon counter and read-out electronics, which translated the scintillation signal into equivalent dose in real time.
To test their inorganic scintillator detector (ISD), the researchers used it to measure the dose inside a water phantom, which they exposed to X-ray fields as small as 0.25 cm². They compared the ISD’s performance to that of two small-field dosimeters currently used in the clinic, one employing an ion chamber, the other a synthetic diamond.
Debnath and colleagues estimate the sensitive volume of the ISD to be a disc whose diameter and thickness correspond, respectively, to the width of the fibre core (100 µm) and the distance that light travels between emission and reabsorption by the scintillator (1.5 µm). This corresponds to a volume of about 1.2 × 10⁻⁵ mm³ – far smaller than that of the ion-chamber (0.016 cm³) and diamond (0.004 mm³) dosimeters.
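As a quick sanity check on that figure (a back-of-the-envelope calculation, not taken from the paper), the quoted disc dimensions do indeed give a volume of roughly 1.2 × 10⁻⁵ mm³:

```python
import math

# Sensitive volume modelled as a disc: diameter = fibre-core width,
# thickness = distance light travels before reabsorption (values quoted above).
diameter_um = 100.0    # fibre core width, in micrometres
thickness_um = 1.5     # emission-to-reabsorption distance, in micrometres

volume_um3 = math.pi * (diameter_um / 2)**2 * thickness_um
volume_mm3 = volume_um3 * 1e-9          # 1 mm^3 = 1e9 um^3
print(f"sensitive volume ~ {volume_mm3:.1e} mm^3")   # about 1.2e-05 mm^3
```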
Another advantage of the ISD is its high signal-to-noise ratio. This comes about because of the low levels of Cherenkov radiation produced in the small inorganic scintillator and the narrow optical fibre. The researchers think that they can reduce the Cherenkov noise even further in a future version of the device by using an entirely plastic-free fibre.
In all other respects, the ISD performed as well as required for a clinical dosimeter, and comparably to the two benchmark devices. Its dose response was linear over a wide range of dose rates and was consistent over multiple irradiations.
As inorganic scintillators are not tissue-equivalent in terms of radiation absorption, the team’s new device might be best suited to external dosimetry; used internally it would itself risk altering the dose distribution. The actual effect is likely to be small, however, and Debnath is optimistic about its potential in both situations.
“Indeed, we have already tested it for brachytherapy, and probably one of our next articles will focus on this application,” he says. “It might perturb the dose distribution, but negligibly, due to the small volume of the sensor.”
The team also intend to compare the ISD to a range of water-equivalent devices such as plastic scintillator detectors and radiosensitive films. Replacing the scintillator material with a more water-equivalent organic compound is a possibility but would, they think, come at the cost of increasing the detector’s size.
Deep question: Why do some people still think global warming is a conspiracy? (Courtesy: Shutterstock/Bernhard Staehli)
Global warming is a plot manufactured by a global community of scientists. United Nations panels deliberately understate the radiation levels of the Fukushima and Chernobyl disasters. US media outlets contrive “fake facts” to refute Tweets of Donald Trump. Venal politicians are behind Ebola and other epidemics.
Groundless conspiracy theories are now an established feature of the political landscape. They resemble epidemics themselves, appearing from nowhere, spreading like wildfire, disrupting normal life, and being all but impossible to stop. They threaten democracy by poisoning the ability of voters to lucidly deliberate issues of human life, health and justice.
In her recent book Democracy and Truth, the University of Pennsylvania historian Sophia Rosenfeld argues that conspiracy theories thrive in societies with a large gap between the governing and the governed classes. Such conditions, Rosenfeld writes, allow some of the governed to reject the advice of experts as out of touch with “the people”, and to create a “populist epistemology” associated with an oppositional culture.
Populists, Rosenfeld continues, “tend to reject science and its methods as a source of directives”. Instead, such people prefer to embrace “emotional honesty, intuition and truths of the heart over dry factual veracity and scientific evidence, testing and credentialing”. Modern science accentuates the gap between experts and non-experts, making it possible for populists to interpret “factual veracity” as tainted.
Galileo’s gap
In my book The Workshop and the World: What Ten Thinkers Can Teach Us about Science and Authority, I argued that this scientific gap emerged with Galileo. Writing in his 1623 book The Assayer, Galileo used a striking image to defend his seemingly heretical studies of nature. The book of nature, he wrote, “is written in mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it”.
The use of mathematics creates a rift between those unable to understand this special language and those who do, making it easy for the former to distrust the latter. Galileo’s Gap, as I call it, has widened in size and consequence in the four centuries since then, feeding the frequency and severity of conspiracy theories.
Hard to believe, but I received hate mail after The Workshop and the World came out. Some concerned what I’d written about The Preaching of St Paul – a 1649 painting by Eustace Le Sueur that now hangs in the Louvre museum in Paris. This dramatic and imposing work shows St Paul looming above a pile of burning books, some with geometrical figures on their pages. The not-so-subtle intent was to portray heretics who read the book of nature as dangerous criminals.
Contemporary conspiracy theories, I wrote, show that St Paul is back.
My critics were furious. The painting is not about Galileo, they chastised me, but about a passage in the Book of Acts (19:19), in which mystics, prompted by St Paul’s preaching, “brought their books together, and burned them”. Besides, the critics added, this issue can be settled factually by noting that the figures on the pages of the burning books resemble nothing found in maths texts. What’s more, no trace exists of Le Sueur’s intent, or that of the religious authorities who commissioned the painting. I must surely therefore be part of a conspiracy to slander the good saint.
I responded that of course the figures in the burning books were not in modern maths texts; they are what a religious firebrand of 1649 might think geometrical figures looked like. I also said that no factual information about the painting’s creation could help us to understand its meaning, which can be understood only in the light of its historical context.
Le Sueur, a religious painter funded by church commissions, composed the work at a time when the most fundamental issue confronting the Catholic Church was that its claim to have the sole authority to interpret the Bible was being torpedoed by growing evidence in support of Galileo’s mathematically based findings. Only that explains why a devout Catholic painter would devote enormous time and resources to create a 4 m high work about a handful of words in the Bible that mention book-burning – and then paint geometrical figures on the books’ pages.
In a similar vein, the playwright Arthur Miller did not compose the 1953 play The Crucible because he had an interest in the Salem witchcraft trials. He did so to address the persecutions of supposed communist subversives taking place in the US in the 1950s. I probably did not convince my respondents. But their accusations that I had joined an anti-Christian conspiracy stopped.
The critical point
Modern anti-science conspiracies differ from their 17th-century antecedents, which emerged principally from the Church. Contemporary sponsors of conspiracy theories are multiple, spread not by preachings and paintings but by the Internet, and are energized by the ability to self-select information. But then, as now, conspiracy theories are not a sign of irrationality. Instead, they spring from the attempt by non-experts to make sense of often overwhelming and contradictory information based on personal values, available evidence, whom one trusts, and experience.
To reduce the impact of conspiracies, there’s little point quoting mainstream experts, citing scientific papers, appealing to facts, or even teaching more science, for all these things will be said to belong to the conspiracy.
Far more effective is to provide people with better tools to make sense of their personal, political and social experience. Yet the disciplines that cultivate these interpretive tools, collectively called the humanities, are largely having their resources redirected to the sciences.
Ironically, the dazzling and visible successes of the 21st-century sciences are overshadowing and undermining the 21st-century humanities that ground the authority of the sciences themselves.
When time crystals were first proposed in 2012 by physicist Frank Wilczek they seemed like an exotic consequence of quantum mechanics in systems of many interacting particles. Wilczek argued that such systems broke symmetry in time, changing so as to return periodically to the same state just as ordinary crystals exhibit periodicity in space.
Subsequent experimental work has found that quantum time crystals can exist in systems maintained out of equilibrium by some driving force. Now Norman Yao of the University of California at Berkeley and colleagues suggest that time crystals can arise without the need for quantum physics at all. They argue that purely classical systems of oscillators such as coupled pendulums could have the same time-crystal order as their quantum counterparts. What is more, time crystals could be made experimentally, and might even exist in nature.
A non-equilibrium (or discrete) time crystal responds to a periodic driving force by showing some kind of oscillation in time with a period different to – generally some whole multiple of – the driver. In the 1830s Michael Faraday showed in theory that a class of periodically driven oscillators now known as parametric resonators can undergo “period-doubling”, meaning that they oscillate at half the driving frequency. This is precisely the kind of so-called subharmonic response that characterizes time crystals.
Sparking discussions
This long history has sparked discussions of whether purely classical systems might show time-crystal behavior. Last year, a team from the Swiss Federal Institute of Technology (ETH) in Zurich showed experimentally that two coupled oscillating strings displayed period-doubling. They pointed out that there is a close analogy between this behaviour and that seen in quantum many-body time crystals.
But a true time crystal needs something more, says Yao’s team. Discrete time crystals (DTCs) are “open” systems that are kept out of equilibrium by some energy input from the environment. In general, this input causes the system to slowly heat up. Over time its temperature would rise without limit, eventually “melting” the time crystal so the periodic order disappears.
In quantum DTCs this heating is prevented by “many-body localization” (MBL), whereby disorder in the arrangement of component parts inhibits energy exchange between their energy levels, preventing the spread and equilibration of heat.
Heat bath
There is no known classical analogue of MBL and so it was not clear if classical DTCs would be stable against heating. One way to avoid heating classically is by dissipation: coupling the system to a heat bath, for example via friction for a mechanically oscillating system.
This is what happens in Faraday’s parametric resonators and the ETH vibrating strings. But Yao and colleagues point out that this adds noise to the system, and a crucial question is whether the time-crystal oscillations can withstand it. This is because true time crystals must also be stable against perturbations and noise in the driving – just as a space crystal is resilient to fluctuations.
The researchers have now identified a simple classical system that could have DTC behaviour in the presence of noise. It is a series of pendulums or oscillators, arranged in a row and connected to one another as if by springs.
Slightly nonlinear
The oscillators must be slightly nonlinear, which means that they do not undergo perfectly harmonic oscillation. Meanwhile, the dissipative coupling to the environment could be achieved by viscous friction.
“We argue that classical DTCs could exist in principle even if there’s noise”, says Yao’s Berkeley colleague Michael Zaletel. “That hadn’t been shown before.”
The team show that this classical DTC will crystallize from a time-symmetric (non-periodic) state as the noise is reduced and the strength of the coupling between pendulums is increased, in an abrupt phase transition. It is closely analogous to the way a space crystal freezes from a liquid as it is cooled (lowering the noise) and/or the intermolecular forces get stronger. In computer simulations, the researchers see their period-doubled time crystal “nucleating” out of the time-symmetric state like a crystal growing from a seed.
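To make the idea concrete, the sketch below is a minimal, hypothetical toy model – not the authors’ simulation code – of the kind of system described above: a ring of damped, weakly nonlinear pendulums coupled to their neighbours, parametrically driven at twice their natural frequency and subject to weak random kicks. All parameter values are illustrative choices.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): a ring of coupled,
# damped, weakly nonlinear pendulums, parametrically driven at twice their
# natural frequency and subject to weak random kicks. A period-doubled
# (subharmonic) response -- oscillation at half the drive frequency -- is
# the classical time-crystal-like behaviour described in the text.
N = 32              # number of pendulums in the ring
w0 = 1.0            # natural frequency of each pendulum
w_drive = 2.0 * w0  # drive at twice w0 (first parametric resonance)
drive = 0.3         # strength of the parametric drive
gamma = 0.05        # viscous damping (dissipative coupling to a bath)
k = 0.1             # nearest-neighbour spring-like coupling
noise = 1e-3        # amplitude of random thermal kicks
dt, steps = 0.002, 200_000

rng = np.random.default_rng(0)
theta = 0.01 * rng.standard_normal(N)   # small random initial angles
vel = np.zeros(N)
trace = np.empty(steps)                 # record pendulum 0

t = 0.0
for i in range(steps):
    lap = np.roll(theta, 1) - 2 * theta + np.roll(theta, -1)   # coupling term
    acc = (-(w0**2 + drive * np.cos(w_drive * t)) * np.sin(theta)
           - gamma * vel + k * lap)
    vel += acc * dt + noise * np.sqrt(dt) * rng.standard_normal(N)
    theta += vel * dt
    t += dt
    trace[i] = theta[0]

# Spectrum of the steady state: the dominant peak sits at w_drive / 2,
# i.e. the response repeats every *two* drive periods (period doubling).
tail = trace[steps // 2:]
freqs = 2 * np.pi * np.fft.rfftfreq(tail.size, dt)
peak = freqs[np.argmax(np.abs(np.fft.rfft(tail - tail.mean())))]
print(f"dominant response frequency ~ {peak:.3f} (drive frequency = {w_drive})")
```

A semi-implicit Euler step with small random kicks is enough to see the subharmonic peak; a more careful study would refine the integration and scan the noise strength to locate the melting transition described above.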
This behaviour does not persist indefinitely, however. At any finite temperature, it will decay very slowly and eventually “melt”; some quantity (noise or temperature) controls the lifetime of the time crystal. “We don’t have a true classical DTC,” Yao says. “Ours dies at very long times, we think.” But the system looks like one unless you watch it long enough. The researchers call such a system an “activated time crystal”.
The surface waves studied by Faraday and the mechanical model studied experimentally by the ETH team, they say, were probably like this – but because the influence of thermal noise is so tiny for macroscopic oscillators, the DTC would have decayed only extremely slowly, making it last much longer than the experimental timescale.
Not a new phase of matter
Because this new time crystal is not infinitely long-lived, says Vedika Khemani at Stanford University in California, it cannot be considered a new phase of matter. “As far as we know, MBL quantum systems are still the only examples of many-body time-crystals”, she says.
Yao and colleagues argue, however, that an indefinitely persistent classical DTC might be possible if the oscillators or pendulums are coupled together in a more complicated way. That suggestion stems from their analysis of a different kind of system called a cellular automaton, made from many identical components whose states depend on those of their neighbours. Yao and Zaletel admit they have no rigorous proof of this yet – and that making a mechanical system governed by such rules could be “insanely complicated.”
Yao and colleagues believe the system they have simulated might occur in real systems such as coupled oscillating Josephson junctions, or quasi-classical excitations of electrons called charge-density waves. “Experiments on charge-density waves were done in the 1980s that showed what looks to the eye like period doubling”, says Zaletel. “It would be very interesting to go back to these experiments and check.”
The researchers speculate that systems like theirs in which time-crystal oscillations exist for long if not infinite times might even be found in living systems such as colonies of interacting cells. Such periodicity at a subharmonic frequency determined by the internal dynamics of the system might be useful in biology, they say – and their relatively simple prescription for an activated DTC could be the preferred one. “It’s definitely useful to get oscillations in biology”, says Zaletel, “and it’s usually enough to have them for finite but long times.”
When the leaves of the aquatic lotus plant, Nelumbo nucifera, float on water they are large and flat, with short ruffles along their edges. Intriguingly, though, lotus plants that grow on stems elevated above the water produce cone shapes with larger, longer undulations. Now a team of engineers in China has found that the difference isn’t genetic but is instead due to mechanical effects created by the water on which the leaves float.
Fan Xu, a mechanical engineer at Fudan University in Shanghai, started studying the shapes of lotus leaves when he noticed the different leaf shapes growing in and around ponds in China.
In the latest study, published in Physical Review Letters, Xu and colleagues turned to mathematical models and a leaf-like material to test the hypothesis that water conditions influence leaf shape.
To accurately model how leaves grow, the team cut leaf shapes out of a rubber material that grows when in contact with water. Because sunlight stimulates lotus-leaf growth, plants tend to curve towards the sun and grow at different rates in different parts of the leaf, depending on where the light hits. To mimic this, the researchers wetted the fake leaves at points where growth would be expected. They also floated some of the model leaves on water, to observe how this affected their growth.
Energy differences emerge
Both their model leaves and the mathematical simulations produced leaves that matched those seen on ponds and waterways – flat, floating leaves with tight ruffles around their edges and non-floating cone-shaped leaves with larger undulations flowing towards the centre of the leaf. The explanation for these patterns turns out to be a combination of the biophysical effect of the layer of water on the leaf’s underside, plus the leaf’s natural tendency to grow in as energy-efficient a manner as possible.
When the lotus leaves are free from the water, Xu and colleagues explain, the models show that the most energy efficient way for lotus plants to grow is to produce a cone shape with long, large wave-like oscillations. For leaves growing on water, though, producing such a shape would require the leaves to lift the water that adheres to their undersides. Instead, leaves grow flat and ruffle at the edges, which is much more energy efficient. Leaves on water also produce wave-like undulations, rather than growing flat, because they produce more material and surface area when they are growing than can be contained in a flat sheet – meaning that they buckle and wrinkle.
“We find, both theoretically and experimentally, that the short-wavelength buckled configuration is energetically favourable for growing membranes lying on liquid, while the global buckling shape is more preferable for suspended ones,” the study authors conclude.
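The energy balance behind that conclusion can be illustrated with a standard scaling argument from thin-sheet mechanics (a textbook result, not taken from the Physical Review Letters paper). For a sheet of bending stiffness B compressed while resting on a liquid of density ρ, the liquid acts as an elastic foundation and selects a short wrinkle wavelength of roughly

```latex
\lambda \;\sim\; 2\pi \left( \frac{B}{\rho g} \right)^{1/4} ,
```

where g is the gravitational acceleration: deflecting the liquid penalizes long-wavelength undulations, so floating sheets ruffle finely at their edges, whereas a suspended sheet with no foundation buckles most cheaply at the longest wavelength the geometry allows – the global cone shape.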
The researchers say their work highlights how biophysical effects can affect plant morphogenesis. Such knowledge could, they add, be harnessed to control the morphology of human-made materials.
Skin cancer is one of the most common cancers worldwide and early detection, particularly of melanoma, is crucial to improve survival. With this objective in mind, there has been an influx of new dermatology smartphone apps that aim to help people with suspicious skin lesions decide whether to seek further medical attention.
Many of these apps use artificial intelligence algorithms to classify images of lesions into high or low risk for skin cancer (usually melanoma) and then provide a recommendation to the user. But how accurate are these algorithm-based smartphone apps? And how valid are the studies used to assess their accuracy? A research team led by Jon Deeks at the University of Birmingham and Hywel Williams at the University of Nottingham, aimed to find out (BMJ 10.1136/bmj.m127).
The researchers identified nine relevant studies that evaluated six different skin cancer detection apps. Six studies evaluated the diagnostic accuracy of the apps by comparison with histology, while three verified the app recommendations against a reference standard of expert recommendations.
The team found that the studies were small and overall of poor quality. For instance, studies included suspicious moles chosen by clinicians not app users, and used images taken by experts on study phones, rather than by users on their own phones. Images that could not be evaluated by the apps were excluded. And many studies did not follow up on lesions identified as “low risk” by the apps, removing the opportunity to identify any missed cancers.
“This is a fast-moving field and it’s really disappointing that there is not better quality evidence available to judge the efficacy of these apps,” says Jacqueline Dinnes from the University of Birmingham’s Institute of Applied Health Research. “It is vital that healthcare professionals are aware of the current limitations both in the technologies and in their evaluations.”
Future studies, the researchers suggest, should be based on a clinically relevant population of smartphone users with concerns about a skin lesion. Studies must include follow-up of all lesions, not just those referred for further assessment. It’s also important to report all of the data, including failures due to poor image quality.
Poor regulation
Despite the limitations of this evidence base, two of the apps have obtained European CE marking: SkinScan and SkinVision. SkinScan was evaluated in a single study of 15 moles including five melanomas, with 0% sensitivity for detection of melanoma. SkinVision, meanwhile, was evaluated in two studies (252 lesions including 61 cancers) and achieved a sensitivity of 80% and a specificity of 78% for detecting malignant or premalignant lesions. Three studies verifying SkinVision against expert recommendations showed its accuracy was poor. While SkinVision produced the highest estimates of accuracy, its actual performance is likely to be worse, because studies were small and did not evaluate the app as it would be used in practice.
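As a rough, back-of-the-envelope illustration of what those headline figures imply – using the pooled numbers quoted above, not a calculation from the BMJ review – the sketch below converts the sensitivity and specificity into approximate counts of missed cancers and false alarms:

```python
# Illustrative arithmetic based on the pooled figures quoted above (assumed exact).
lesions, cancers = 252, 61
benign = lesions - cancers
sensitivity, specificity = 0.80, 0.78

true_positives = sensitivity * cancers          # cancers correctly flagged
missed_cancers = cancers - true_positives       # false negatives
false_alarms = (1 - specificity) * benign       # benign lesions flagged as high risk

print(f"~{true_positives:.0f} cancers flagged, ~{missed_cancers:.0f} missed, "
      f"~{false_alarms:.0f} false alarms among {benign} benign lesions")
```

In other words, roughly one in five cancers in those studies would have been reassured as “low risk” – one reason the reviewers stress following up all lesions, not just those referred for further assessment.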
The researchers point out that smartphone apps are defined as class 1 devices (the European classification for low-risk devices such as plasters and reading glasses) for CE marking. They note that no skin cancer assessment app has received regulatory approval in the US, where the FDA has a stricter assessment process for smartphone apps.
“Regulators need to become alert to the potential harm that poorly performing algorithm-based diagnostic or risk monitoring apps create,” says Deeks. “We rely on the CE mark as a sign of quality, but the current CE mark assessment processes are not fit for protecting the public against the risks that these apps present.”
The researchers conclude that their review “found poor and variable performance of algorithm-based smartphone apps, which indicates that these apps have not yet shown sufficient promise to recommend their use.” They emphasize that healthcare professionals must be aware of the limitations of such apps and inform potential app users about these limitations.
“Although I was broad minded on the potential benefit of apps for diagnosing skin cancer, I am now worried given the results of our study and the overall poor quality of studies used to test these apps,” says Williams. “My advice to anyone worried about a possible skin cancer is ‘if in doubt, check it out with your GP’.”
Arctic anomaly: A map showing the cosmic microwave background (CMB) temperature as observed by ESA’s Planck satellite. While fluctuations in the CMB were expected, and were observed by Planck, an unforeseen anomaly is the cold spot (circled), which extends over a large patch of sky and has a much lower temperature than expected. (Courtesy: ESA/Planck Collaboration)
Significant events in time and space tend to leave indelible marks on the cosmos. It’s fair to say that the Big Bang – the cataclysmic event that gave rise to our universe some 14 billion years ago – has undoubtedly left its footprint on everything we observe today. The most permanent of marks, though, is its afterglow, in the form of the cosmic microwave background (CMB) – the primordial microwave radiation that fills the universe.
For the first few hundred thousand years, our new-born universe was a teeming, hot, dense plasma, made up of nuclei, electrons and photons. But once it was 380,000 years old, the universe had expanded and cooled to below 3000 K, allowing neutral atoms, including atomic hydrogen, to form. With free electrons no longer scattering photons, light could finally stream unimpeded through the universe (from the “surface of last scattering”) and the CMB emerged. This relic radiation is therefore a perfect record of our universe during its infancy. The CMB as we detect it today has been stretched to its current microwave wavelength due to the universe’s expansion, and cooled to a temperature of just 2.7 K.
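That stretching can be read off the two temperatures directly (a standard textbook estimate, not a figure quoted from the CMB missions): since the radiation temperature falls in proportion to the universe’s expansion, the last-scattering light reaches us from a redshift of roughly

```latex
1 + z \;=\; \frac{T_{\mathrm{emit}}}{T_{\mathrm{obs}}} \;\approx\; \frac{3000~\mathrm{K}}{2.7~\mathrm{K}} \;\approx\; 1100 .
```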
Hot and cold
At first glance, the CMB has a nearly perfect black-body spectrum (uniform temperature), and looks isotropic to scales of around 10⁻⁵ K. But at micro-kelvin scales we begin to see variations in temperature, in the form of hot and cold patches. Essentially, tiny quantum density fluctuations that occurred when the universe was just born meant that matter was not evenly distributed. Instead, some areas of the universe are more densely packed than others, giving rise to the large-scale “cosmic web” network of matter we see today. Thanks to this variation, light travelling through a densely populated region has to overcome a deeper gravitational pull, and so appears redshifted; while light passing through a less dense region will appear blueshifted.
This increase or decrease in wavelength of the CMB photons is reflected as temperature variations on the μK scale – in other words, there exists a correlation between the temperature anisotropies in the CMB and the large-scale structure of the universe. This gravitational effect, which causes the large-scale anisotropy of the CMB, is known as the Sachs–Wolfe effect, and is broken into two parts. There is the ordinary (or non-integrated) Sachs–Wolfe effect, which applies to the early universe only, and describes the gravitational redshift of light from the last scattering surface. Then there is the integrated Sachs–Wolfe (ISW) effect, which depends on the effects of changing gravitational potentials (due to matter-density variations) on CMB photons.
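In the usual textbook notation (a general summary, not specific to the studies discussed here), writing Φ for the Newtonian gravitational potential and c for the speed of light, the two contributions are approximately

```latex
\left(\frac{\Delta T}{T}\right)_{\mathrm{SW}} \approx \frac{\Phi}{3c^{2}},
\qquad
\left(\frac{\Delta T}{T}\right)_{\mathrm{ISW}} \approx \frac{2}{c^{2}} \int \frac{\partial \Phi}{\partial t}\, \mathrm{d}t ,
```

where the first term is evaluated at the surface of last scattering and the integral runs along the photon’s path, so only potentials that change while the photon is crossing them leave a net imprint.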
The primordial fluctuations in the CMB afford strong support for the theory of cosmic inflation – the extremely rapid expansion that cosmologists believe our universe underwent when it was a mere 10⁻³⁵ s old. These fluctuations, along with gravitational lensing (the bending of light by matter, as predicted by Albert Einstein) and the Sunyaev–Zel’dovich effect (the boost in energy that CMB photons receive from high-energy electrons in the galaxy clusters they pass through), help us to determine the relative abundance of dark matter and dark energy in the universe.
Perhaps the most intriguing mystery concerns a large and unusually cold patch on the CMB, more than a billion light-years across. First observed by NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) in 2004, and later confirmed by the European Space Agency’s Planck satellite, the so-called “CMB cold spot” is about 70 μK colder than the average CMB temperature, and appears in the southern celestial hemisphere.
While it’s possible that the cold spot could have originated from the primordial density fluctuations that created the rest of the CMB’s temperature anisotropies, it’s unlikely. Those anisotropies have a Gaussian distribution, which allows for small variations (on the scale of 18 μK), but not large ones. But in some places, the cold spot is nearly 150 μK colder than the mean CMB temperature, far in excess of that expected from a Gaussian distribution. Also, the radius of the cold spot subtends about 5°, whereas the largest fluctuations of the primordial CMB temperature occur on angular scales of about 1°, making it even more unnatural.
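To get a feel for just how unlikely that is, treat the quoted 18 μK as the rms of Gaussian fluctuations – an assumption made purely for illustration, which also ignores the angular-scale dependence:

```python
import math

# Illustrative only: how improbable is a 150 uK dip if fluctuations are
# Gaussian with an rms of 18 uK? (Values taken from the text, assumed exact.)
rms_uk = 18.0
dip_uk = 150.0

z = dip_uk / rms_uk                             # number of standard deviations (~8.3)
tail_prob = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided Gaussian tail probability
print(f"{z:.1f} sigma, tail probability ~ {tail_prob:.1e}")
```

Under those assumptions the dip sits more than eight standard deviations below the mean – far outside what a simple Gaussian distribution allows.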
It should come as no surprise then that researchers the world over have posited a number of theories regarding the CMB cold spot, and many studies are being done to uncover its origins. Some early explanations suggested that the cold spot could merely be foreground contamination (in the form of galactic dust and synchrotron radiation) from within the Milky Way, or the result of unusual celestial objects. But observations by the NRAO VLA Sky Survey (NVSS), the 2 Micron All-Sky Survey (2MASS) and the Sloan Digital Sky Survey (SDSS), along with detailed images from the Hubble Space Telescope, all showed no such objects.
1 It all started with a Big Bang
(Courtesy: ESA/C Carreau)
An illustrated history of our universe, going back 14 billion years to the Big Bang, showing the main stages of its evolution.
Veiled void?
One promising explanation for the phenomenon is that there exists a vast cosmic “super-void” between us and the cold spot, thanks to the large-scale structure of the universe. While galaxies still exist within voids, their density of matter is much less (one-tenth of the average) than in other regions of the universe. On the flip side, the universe also contains superclusters – huge regions with many more galaxies than normal.
As it turns out, both super-voids and superclusters have a significant impact on CMB photons due to a variation of the integrated Sachs–Wolfe effect, known as the late-time ISW effect. Essentially, CMB photons passing through superclusters – gravitational “valleys” – gain energy as they fall into the valley, and so heat up. All things being equal, this energy should be given back by the photon as it climbs back out of the valley when leaving the supercluster. But all things aren’t equal, because dark energy, which causes the accelerated expansion of our universe, comes into play.
In the time it takes the photon to travel across the supercluster, dark energy stretches the valley, somewhat flattening it, such that the exiting photon does not need all of the extra potential energy to climb back out. The CMB light therefore holds on to some of the gained heat. In a similar way, super-voids are considered gravitational “hills”, and CMB photons lose more energy while entering a void and climbing uphill than they regain as they roll downhill while leaving the (now somewhat bigger) void. This loss in energy chills the photons. The ISW effect could therefore be a promising explanation for the CMB cold spot, should a super-void exist in the direction of the spot.
Hide and seek
Indeed, in 2007 a study by Lawrence Rudnick and colleagues from the University of Minnesota claimed to observe a super-void in the region, thanks to a significant dip in extragalactic brightness and number-counts of radio sources in the region, as seen in the NVSS radio catalogue. But a 2010 follow-up study, by Kendrick Smith of the University of Cambridge and Dragan Huterer from the University of Michigan, found no statistically significant evidence for the aforementioned dips in NVSS maps, and so discounted the existence of a super-void. A 2014 study, led by Seshadri Nadathur of the University of Helsinki, does not debate the existence of a large void in the area, but claims that the ISW impact on the CMB from the proposed super-void is nowhere near enough to explain the cold spot.
The Helsinki researchers further claim that in order to see the necessary photon-cooling via the ISW, the super-void would have to be so massive and so empty that its very existence would contravene the standard model of cosmology for our universe, known as ΛCDM. This model suggests that the universe is governed by the competing forces of dark matter (which accounts for 26.8% of the universe’s mass/energy) and dark energy (68.3%). The gravitational tug of dark matter works against the accelerated expanding force of dark energy. Nadathur’s work finds that any structure capable of explaining the cold spot would be an anomaly itself.
On the other hand, also in 2014, a team of astronomers led by István Szapudi of the Institute for Astronomy at the University of Hawaii claimed to find a rare super-void, almost 1.8 billion light-years across, in the region. At the time, Szapudi described it as “the largest individual structure ever identified by humanity”. Using optical observations from Hawaii’s Pan-STARRS1 (PS1) telescope, combined with infrared data from NASA’s Wide Field Survey Explorer (WISE) 2MASS catalogue, the team surveyed relatively nearby galaxies that lie within the cold spot’s boundaries, and detected a super-void. According to Szapudi’s team, the void is a mere three billion light-years away from us, which may be why it was not detected in previous searches that focused on the distant, early universe. Despite its discovery, the researchers found that even this large super-void does not fully account for the CMB cold spot’s temperature drop, as the cooling ISW effect would be a maximum of 20 μK.
Temperature test
In an attempt to test Szapudi’s discovery, a 2017 study led by Ruari Mackenzie of the University of Durham used spectroscopic data from the 2dF-VST ATLAS Cold Spot Redshift (2CSz) survey at the Anglo-Australian Telescope in New South Wales to study the redshift of galaxies in the line of sight to the cold spot. As a control for their work, the researchers also made the same measurements along a different line of sight. Mackenzie’s team found three voids out to a distance of three billion light-years, and a possible fourth void beyond that. Although no individual void was as large as Szapudi’s super-void, the four combined showed an ISW cooling effect of 31 μK – still not enough to explain the cold spot.
Unfortunately, the researchers found a similar density of voids along the control line too, which led them to believe that voids are not the answer. Instead, they conclude that the cold spot may indeed have somehow originated in primordial density fluctuations. As it stands, the presence of Szapudi’s super-void is difficult to explain in the standard ΛCDM, while simulations show that a random, non-Gaussian quantum fluctuation in the CMB has a 1 in 50 chance of birthing the cold spot.
Unusual structures?
Another explanation for the CMB cold spot could be the unusual motion of galaxies in the region, due to extreme gravitational effects. This could be similar to the effects observed due to phenomena such as the “Great Attractor” – an apparent gravitational anomaly at the centre of the local Laniakea Supercluster (within which our Milky Way galaxy is located) caused by an enormous concentration of mass – and the related “Dipole Repeller”, which is a centre of effective gravitational repulsion in the large-scale flow of galaxies in our Local Group, caused by the likely presence of a large super-void. Both of these have their own imprints on the CMB dipole, but so far no such unusual structures have been observed in the CMB cold spot area or the surrounding regions.
A November 2019 study by Qi Guo and colleagues at the Chinese Academy of Sciences reported the existence of 19 dwarf galaxies that are dark-matter deficient, which is surprising because these small galaxies are usually dominated by dark matter. Of these galaxies, 14 are isolated, and not satellite galaxies to large ones like ours, which means that their lack of dark matter was not the result of some interaction with larger galaxies or other dwarf galaxies. These low-dark-matter dwarf galaxies may affect the results of other ISW studies, all of which are impacted by dark matter. It will be interesting to see if a significant number of galaxies in the CMB cold spot region exhibit such behaviour.
Perhaps the answer to the CMB cold spot lies within the theory of inflation itself, instead of beyond it. It might be that during the inflationary epoch in our universe’s infancy, a local patch of the universe underwent a longer period of inflation, which resulted in the formation of a cold spot in that region. That’s the solution put forth in 2016 by Yi Wang, of the Hong Kong University of Science and Technology, and Yin-Zhe Ma, of South Africa’s University of KwaZulu-Natal, in which they proposed a “feature-scattering” inflationary mechanism that predicted localized cold spots (and no hot ones). However, questions will arise about the validity of such theories if we look at other phenomena, including the effect of such anomalies on density perturbations, and the evolution of the stars and galaxies in that region.
Several solutions
As researchers attempt to uncover which solution is the right one for the cold spot, it’s important to make sure that each explanation is considered in the larger context of other cosmological data, such as observations of type Ia supernovae, baryonic acoustic oscillations (BAO) and the CMB as a whole. It has also recently been suggested that a more accurate measurement of the Hubble constant could be made using the gravitational waves from the merger of heavy objects like neutron stars. These observations, with some variations in the values of the measured cosmological parameters, support ΛCDM, and so any workable solution must agree with them.
However, alternatives to the standard model of cosmology will also play an important part in exploring anomalies such as the cold spot. For example, in a 2019 study Eleonora Di Valentino of the University of Manchester and colleagues carefully analysed the Planck 2018 CMB data, and their findings challenge the usual ΛCDM assumption of a flat universe. Their results point to a “closed universe”, which contradicts our current assumptions and deviates from our usual understanding of inflation theory. While their results were inconclusive (just over 3σ), this issue undoubtedly needs to be further investigated.
2 Compare and contrast
(Courtesy: ESA/Planck Collaboration)
The Planck satellite observes the CMB in detail, at large angular scales (≥ 5°). The top map depicts the CMB’s temperature fluctuations, and the bottom map shows the polarization amplitude fluctuations. While the temperature map clearly shows the cold spot, the anomaly was not detected, at least with any statistical significance, in the polarization map. The lack of statistically significant anomalies in the polarization map does not rule out the potential relevance of those seen in the temperature map, but makes it more challenging to understand the origin of this puzzling feature.
A more exotic explanation?
Thanks to the lack of a clear-cut explanation for the CMB cold spot, a more unusual and out-there possibility has been suggested – that the cold spot might be evidence of a “collision” between our universe and a parallel universe. This falls under the multiverse theory, according to which our universe is one of many that occasionally collide or interact with one another. Parallel universes could have interacted with ours thanks to quantum entanglement between the universes before they were separated by cosmic inflation, and such interactions would leave a mark on the CMB. But again, extraordinary claims such as this one require extraordinary evidence, and should be consistent with other cosmological observations.
In this case, such a collision between universes should produce an identifiable polarization signal in the cold spot, as suggested by Tom Shanks of the Centre for Extragalactic Astronomy at Durham University in 2017. Incidentally, the latest results from the Planck team in 2019 involved a further analysis of the polarization of the CMB radiation (which is almost completely independent of its temperature profile) to further probe the nature of anomalies like the cold spot. Planck’s multi-frequency data are designed to eliminate foreground sources of microwave emission, including gas and dust in our galaxy. Despite careful analysis, the Planck team saw no significant traces of anomalies in the polarization maps (figure 2). According to the team, these latest results neither confirm nor deny the nature of anomalies like the cold spot, leaving the door open to many possibilities.
Improving our understanding of the cosmological parameters will provide better possible explanations of unusual phenomena like the cold spot. For this, we require new data from highly sensitive telescopes such as the MeerKAT array and upcoming facilities like the Giant Magellan Telescope and the Square Kilometre Array. We also need to develop a better understanding of the nature of dark energy and how it affects the evolution of our universe, to gain a deeper understanding of how the ISW effect works.
Better view: The Giant Magellan Telescope, currently under construction in Chile, is just one of the advanced facilities that may help us understand the cold spot. (Courtesy: Giant Magellan Telescope – GMTO Corporation)
As it stands, our current understanding of the CMB cold spot doesn’t lead us to any clear conclusions. Fully understanding it either requires much better observations, or a revision of our understanding of the universe. Hopefully, future observations from more advanced ground- and space-based telescopes can guide us towards a better explanation of this enigmatic scientific phenomenon.
The European Space Agency (ESA) has launched a new mission to take the most detailed view yet of the Sun and its polar regions. The Solar Orbiter spacecraft will get as close as 42 million kilometres to the Sun – about a quarter of the distance between the Sun and Earth – to capture regions that have never been seen before and probe its electromagnetic environment. It was launched today on an Atlas V rocket from Cape Canaveral in Florida at 04:03 GMT.
Once in orbit around the Sun – with a maximum heliographic latitude of 24° – the spacecraft will have a close-up high-latitude view of the star, including its poles. The hope is that this positioning will improve our understanding of how the Sun creates and controls the heliosphere, the vast bubble of charged particles around the Sun and its planets. New information on the Sun and its atmosphere will also improve predictions of solar storms, which can disrupt satellites and infrastructure on Earth.
‘A big beast’
Solar Orbiter consists of six remote-sensing and four in situ instruments. The remote-sensing equipment will perform high-resolution imaging of the Sun’s atmosphere and solar disc, while the in situ instruments will measure the solar wind, electric and magnetic fields and waves, and energetic particles. During orbit, the in situ instruments will run continuously, while the remote-imaging instruments will operate when the craft is at its closest approach to the Sun as well as at the minimum and maximum heliographic latitudes. As the mission progresses, the orbital characteristics will change, with individual orbits being dedicated to specific science questions.
The UK space industry has been heavily involved in the development of the Solar Orbiter mission, investing £20m in the development and building four of the instruments. “I am incredibly excited by the Solar Orbiter,” says Chris Lee, chief scientist at the UK Space Agency. “It is the most important UK space-science mission for a generation, both in terms of our leading industrial role on the satellite itself and our key academic roles on the science payload.” Lee adds that the mission is a “big beast” for the UK space community, with the mission also contributing to space-weather forecasting.
The Solar Orbiter will now unfold its 18 m-long solar array and fly past Earth once and Venus several times, using the gravity of the planets to adjust its trajectory and place it into its tilted, highly elliptical orbit around the Sun. It is expected to reach operational orbit in just under two years, with the mission scheduled to last seven years (including the initial two-year cruise), with a possible three-year extension.
A biodegradable nerve guide embedded with growth-promoting proteins that can regenerate long sections of damaged nerves has been developed by researchers in the US. The technology, which has been tested in monkeys, could offer an alternative to nerve grafts for patients who have experienced nerve injury, and help restore their motor function and control, the team claim (Sci. Transl. Med. 10.1126/scitranslmed.aav7753).
Every year millions of people experience peripheral nerve damage, with the resulting gaps in their nerves impacting movement and daily life. In the US, trauma-related injuries to peripheral nerves account for around 5% of people entering trauma centres. In addition to traumatic accidents, nerve damage can have other causes, such as medical treatment, diabetes and birth trauma.
With assistance, nerves can regrow and be repaired across small gaps. This is limited, however, to gaps of around 8 mm. For larger injuries, the current standard treatment is to remove nerve tissue from elsewhere on the patient – often from the back of their leg – and use it to bridge the gap.
But such autografts require additional surgery, with risks of complications, to remove the donor nerve. In addition, explains Kacey Marra, an expert in tissue engineering at the University of Pittsburgh, this method of repair is not optimal “because often what you are replacing is a large motor nerve and the nerve that the surgeons primarily use is a small sensory nerve, so that is not a good match and leads to about 50–60% functional recovery”.
Looking for an alternative to autografts, Marra and her colleagues have spent more than a decade developing a biodegradable guide that releases a growth factor to stimulate nerve regrowth. Building on previous studies in rodents, the team’s latest work suggests that the conduit could be used to bridge large nerve gaps.
Microspheres containing a neural growth factor adhering to the nerve conduit during the manufacturing process. (Courtesy: N B Fadia et al. Science Translational Medicine 2019)
The guide is a hollow tube made from polycaprolactone, a biodegradable polymer used for dissolvable stitches and other biomedical applications, with microspheres embedded in its walls. These capsules slowly release a natural neurotrophic factor that aids in nerve repair.
The researchers tested the technology in macaques, removing a 5 cm section of nerve from the forearm and replacing it with their conduit. They compared this approach with autografts – where they removed and flipped around a 5 cm section of nerve – and a guide containing empty microspheres.
After a year, the recovery of the monkeys treated with the nerve guide was similar to those who received autografts, with both groups recovering around 75% of their pre-operative function – based on their ability to pinch and pick up a sugar pellet. And the nerves repaired using the guides conducted signals faster than the autografts. Functional recovery and conduction velocity of those treated with the empty conduit were significantly lower.
The team’s aim is to produce an off-the-shelf guide that can be used to regenerate large peripheral nerve gaps. “That is the goal,” Marra explains, “that you pull it out of the refrigerator, and you cut it to the size that you need… Over time the nerve will regenerate, but the material will slowly dissolve away.”
They are now working towards their first in-human clinical trial, Marra tells Physics World.
Not everyone is convinced, however, that alternatives to autografts are needed, or that conduits are the answer. Lars Dahlin, professor and senior consultant in hand surgery at Lund University, tells Physics World that although there may sometimes be drawbacks with the use of the person’s own nerve, “basically nerve autografts are working very well across long and short defects”.
Dahlin adds that the idea of nerve guides with growth factors has been around for a while, but there are still many questions without clear answers, such as which growth factor or factors are best, what concentrations to use, and when to add the growth factors, as some are needed early and others late in the regeneration process.