
US eclipse spoiler alert

Later today, parts of 14 US states will witness a total solar eclipse. The umbra – the shadow cast when the Moon fully eclipses the Sun – will first be seen on the coast of Oregon at 10:18 PDT. The 70-mile-wide path of totality will then sweep across the US, seen last in South Carolina. The event offers solar scientists a unique chance to study the Sun’s outer atmosphere – the corona – which will be visible in much greater detail while the Sun’s brightness is blocked by the Moon. The eclipse also allows scientists to test their predictions of the Sun’s behaviour.

A team at Predictive Science, Inc. in the US has been running large-scale simulations of the Sun’s surface, and hence corona, since 28 July, working alongside NASA, the Air Force Office of Scientific Research and the National Science Foundation. Using multiple massive supercomputers, the researchers have created highly detailed solar simulations timed to the moment of the eclipse, including visualizations of what people in the path of totality will see. They will then compare their predictions to observations made during the eclipse to test the viability of their models.

Better than powerful telescopes

“The solar eclipse allows us to see levels of the solar corona not possible even with the most powerful telescopes and spacecraft,” says Niall Gaffney of the Texas Advanced Computing Center, which is home to one of the supercomputers used, Stampede2. “It also gives high-performance computing researchers who model high-energy plasmas the unique ability to test our understanding of magnetohydrodynamics at a scale and environment not possible anywhere else.”

To create the models, the researchers used magnetic-field maps, solar rotation rates, data from the Helioseismic and Magnetic Imager (HMI) on board NASA’s Solar Dynamics Observatory, and mathematical models of how magnetohydrodynamics – the interplay of electrically conducting fluids like plasmas and powerful magnetic fields – affects the corona.

Coronal prediction

One of the team’s simulations predicted a coronal mass ejection – a large release of plasma and magnetic fields – during today’s total eclipse. If it occurs, it will be near the east limb of the Sun and provide a spectacular sight for the millions of people watching in the US.

But as well as potentially providing spoilers for spectators, the simulations are important for predicting powerful solar storms, such as one that occurred in 1859. Known as the Carrington Event, the storm caused telegraphs to short and catch fire, and auroras were visible from the Caribbean. In 2008, the National Academy of Sciences predicted that such an event today would cause more than $2 trillion in damages. “The ability to more accurately model solar plasmas helps reduce the impacts of space weather on key pieces of infrastructure that drive today’s digital world,” Gaffney says.

The work will be presented at the Solar Physics Division meeting of the American Astronomical Society.

Squid see clearly in low light using linked proteins

The eyes of squid and fish are miracles of evolution, producing bright, sharp images with limited light. How they do this, however, has been a mystery. But now researchers in the US have analysed the lenses in the eyes of a common squid species and found that proteins in different cells gel at different concentrations, giving the eye a refractive-index gradient that removes aberrations. The research could help to produce self-assembling nanoparticles.

A perfect lens should focus all parallel incident wavefronts at a single point. In spherical lenses, wavefronts strike the edge of the lens at a much steeper angle than those striking nearer the centre. They are therefore refracted more and brought to a closer focus – an effect called spherical aberration. The eyes of humans and some other land-dwelling animals have evolved non-spherical lenses to remove these aberrations. “The trade-off there is that our eyes aren’t as sensitive,” explains biophysicist Alison Sweeney of the University of Pennsylvania in Philadelphia. “But given that we’re living on land where there’s lots of sunlight available – that’s a good trade-off.” In the dimmer conditions in the oceans, however, capturing every possible photon becomes crucial.

In 1854, James Clerk Maxwell showed that spherical aberrations could be eliminated by creating a lens with a variable refractive index that decreased from the centre towards the edge, so that wavefronts passing closer to the middle of the lens would experience a higher refractive index. Subsequently, biologists discovered that the eyes of fish and cephalopods (including squid and octopuses) had lenses exactly like this, but how they were formed was unclear.

Protein gradient

The refractive index of an eye lens depends principally on the protein concentration. Maintaining a refractive-index gradient, therefore, entails maintaining a concentration gradient without random concentration fluctuations or the proteins sticking together into clumps, both of which could scatter light and turn the lens opaque.

Sweeney and colleagues dissected lenses from the eyes of the longfin inshore squid into four concentric layers. They confirmed that the refractive index of the layers increased from around 1.34 in the outermost layer to nearly 1.6 in the centre – consistent with Maxwell’s ideal spherical lens. They calculated from this that the outermost layer of the lens was a water-rich gel containing only 4% protein, whereas the innermost layer was a dense network of proteins with no free water.
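To give a feel for the kind of gradient involved, a simple parabolic profile pinned to the two reported values can be sketched as follows. The quadratic shape and the function name here are illustrative assumptions, not the fitted profile from the paper:

```python
# Illustrative parabolic refractive-index profile for a spherical gradient
# lens, interpolating between the reported centre (~1.6) and edge (~1.34)
# values. The quadratic form is an assumption for illustration only.

def refractive_index(r, radius=1.0, n_centre=1.6, n_edge=1.34):
    """Index at radial position r (0 at the centre, radius at the surface)."""
    return n_centre - (n_centre - n_edge) * (r / radius) ** 2

for r in (0.0, 0.5, 1.0):
    print(f"r/R = {r:.1f}  ->  n = {refractive_index(r):.3f}")
```

Any monotonically decreasing profile of this general shape bends rays passing near the edge less strongly, which is what removes the spherical aberration.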

The team isolated messenger RNA – the molecule a cell uses to trigger the building of particular proteins – to characterize the protein mixture in each lens section. Squid-lens proteins have “a pair of arms that flexibly and floppily stick off the protein into space”, says Sweeney. The researchers concluded that over 40 different proteins were present. “The mixture of proteins found at every position in the lens seems complex,” says Sweeney. However, the length of the arms tended to increase towards the outside of the lens. The team concluded that the arms, whose function has previously remained unexplained, were used to link the molecules together.

Arms linked

The team used multiple independent analytical techniques to estimate that, at the edge of the lens, each protein molecule was bonded to only about two others, forming long protein chains with occasional cross-links. In the centre, however, each protein was bonded to nearly six others, forming a dense network. The team suggests that the long arms of the outermost molecules form links at very low concentrations, allowing the solution to gelate at a very low density. The short-armed molecules in the centre, however, have to pack more densely to form a gel. The researchers hypothesize that each cell stops protein synthesis when the proteins gelate, thereby allowing it to achieve the appropriate density and refractive index for its position.
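One classical way to relate bonding number to gelation – not invoked explicitly here, but a standard result – is the Flory–Stockmayer criterion. For molecules that can each form up to $f$ bonds, an infinite gel network first appears when the fraction $p$ of possible bonds actually formed exceeds

```latex
p_c = \frac{1}{f - 1}
```

On this picture the centre proteins, with roughly six bonding partners ($f \approx 6$), gel once only a fifth of their bonds are made ($p_c = 0.2$), whereas near-linear edge proteins with $f \approx 2$ have $p_c \to 1$ and can only gel with the help of the occasional cross-links the team observed.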

Sweeney is sceptical, however, about direct optical applications of the discovery: “I think we probably have better optics than this,” she says. She thinks, however, that nature may teach engineers more general lessons about how to assemble nanoparticles. “I don’t know that anybody can make a centimetre-scale hunk of material out of nanoparticles where all the nanoparticles are well ordered, but that’s exactly what a squid lens does,” says Sweeney. “What can we learn from these genetically encoded linkers that allows a squid lens to make a bulk, ordered material out of [nanoscale proteins]?”

“How squid and fishes can achieve such a perfect lens is a research question which has been around for quite some time, and this is the first explanation of how it could work in principle,” says structural biologist Tobias Madl of the Medical University of Graz in Austria. He says the large number of proteins present needs confirmation, however. “It could be that not every RNA is translated into a protein.”

Natural selection

Condensed-matter physicist Francesco Sciortino of Sapienza University of Rome calls the work “outstanding”. “In Rome we have developed new concepts to explain the ability of some systems to form stable gels. What Alison and co-workers have been able to show is that natural selection had already been able to exploit the particular physics, which we have only recently understood.”

The research is described in Science.

‘Call to arms’ for bioengineers to reduce animal testing

The burden of meeting the food and healthcare requirements for a growing and ageing world population lies on the shoulders of the pharmaceutical and agribusiness industries. A recent review in Biofabrication highlights the substantial cost of chemical development in agribusiness, pharmaceuticals and even consumer goods, and how this can be minimized by utilizing cutting-edge technology from various fields (Biofabrication 9 033001).

The authors point out that much of the cost of chemical development stems from the need to pass rigorous safety tests before a candidate compound is labelled as safe for consumers and the environment. At the moment this relies on animal research to predict the effect on humans. But while animal research has been the gold standard for many years, its cost and relevance have now started to be challenged.

Despite researchers’ best efforts, the models used in animal testing are never entirely indicative of potential human toxicity, and high attrition rates in drug development are partly caused by improper translation from animal research to the clinic. The review, authored by members of various chemical development industries, academic researchers, and the NC3Rs (an organization focused on the discovery and application of new technologies and approaches to replace, reduce and refine the use of animals for scientific purposes), gives several examples where recent advancements in biofabrication and bioprinting may provide suitable tissue-engineered models to replace animal use.

The number of alternatives is booming

As an example, liver toxicity is a common benchmark to assess the suitability of novel agricultural chemicals before they are introduced to the market. In vitro models are inadequate due to their simplicity, which to date has dictated the use of animal research. Instead, laboratory tests based on bioengineered human tissue could provide a stepping-stone in the development process, improving the reliability of in vitro tests and reducing the need for animal research.

Likewise, drug development for respiratory diseases such as asthma is hampered by high attrition rates. In this case many new drugs fail at the clinical trial stage, even though they proved effective in prior in vitro or animal studies, since current in vitro disease models fail to reproduce the complexity of the respiratory system. New technologies such as bioprinting could enable these intricate tissues to be manufactured in the laboratory, which may prove an invaluable tool for drug development.

The authors note that consumer goods is another industry sector that may benefit from collaboration with bioengineers. As of 2013, European Union law has forbidden the sale of products that have been tested on animals, which leaves cosmetics and personal care manufacturers with limited means of testing new products. One of the most active fields in bioengineering today is the production of biomimetic and relevant human skin tissue, and these techniques offer promising alternatives to over-simple in vitro skin models and animal research.

Collaboration will be key

Biofabrication has grown into a very dynamic field over the past 20 years and will soon produce many technologies that industry can exploit to improve product development. As the review suggests, opening channels of communication between bioengineers and chemical developers will be key to stepping further away from animal research.

Indian independence, Doppler effect on a train, contagious science

By Michael Banks

This week India celebrated 70 years of independence. So what better way to mark the occasion than a music video? Step forward 20 or so scientists from the Indian Space Research Organisation (ISRO), who dub themselves the Rocket Band. Over the space of 18 months, they worked feverishly to create a seven-minute music video entitled “I am an Indian”. Mostly shot on the coast of the Arabian Sea, the video features the researchers walking along the beach as well as an animation of the Indian flag being planted on the surface of the Moon. “We have a lot of talent in ISRO, making rockets comes naturally to many of us while making music is tough but it is not rocket science,” aerospace engineer Shiju G Thomas told NDTV.


Tiny traps bring quantum magnetic memory under control

Superconducting materials offer a promising substrate for computers of the future because of their resistanceless operation at low temperatures. Fluxons, quantum tubes of magnetic field that thread through a superconductor, could act as tiny, efficient memory elements in such devices – but only if their arrangement can be controlled. Now, researchers in Austria have developed nanostructured superconductors that can trap and arrange fluxons in non-uniform patterns. These patterns could provide the basis of information storage in superconductor-based computers.

To improve electronic devices while decreasing their energy consumption, scientists are searching for new information-processing technologies using new materials. Superconductors are one solution because they have zero resistance – and therefore no energy loss – when cooled to low temperatures. Unfortunately, many phenomena used in classical electronics work differently in superconductors, including the magnetism that is so useful for information storage.

When a certain type of superconductor is placed in a magnetic field, the field enters the material in tiny tubes of magnetic field, called fluxons, surrounded by current vortices. Fluxons naturally form a uniform hexagonal lattice inside the superconductor, which in terms of information is equivalent to a blank piece of paper. If we are to encode information in this arrangement, we need to be able to control the pattern.
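Each fluxon threads exactly one quantum of magnetic flux, Φ₀ = h/2e, which is straightforward to evaluate from the (exact) SI values of the constants:

```python
# Magnetic flux quantum carried by a single fluxon in a superconductor.
# h and e have exact values under the 2019 SI redefinition.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

phi_0 = h / (2 * e)  # flux quantum, Wb
print(f"Phi_0 = {phi_0:.4e} Wb")  # about 2.0678e-15 Wb
```

The smallness of this quantum is part of what makes fluxons attractive as dense, low-energy memory elements.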

Caught in a trap

In research that could represent the first step towards such an achievement, Wolfgang Lang and his team at the University of Vienna and Johannes-Kepler-University Linz have produced a nanoscale array of traps that can cause the fluxons to get stuck in non-uniform arrangements. These traps could be the bits for a fluxon computer, where the fluxon occupation, controlled by a magnetic field, defines each trap as a 0 or 1.

The traps even help stabilize the non-uniform arrangements for longer-term information storage. Georg Zechner, the lead author of the paper, says: “Even after days, we have observed precisely the same arrangement of fluxons – a long-term stability that is rather surprising for a quantum system.”

The fluxon trap technique was developed with the industrial partner IMS Nanofabrication AG, Austria. The researchers created the structure by bombarding the superconducting material with helium ions through a mask, patterning an array of defects. The process is quick, easy to perform at an industrial scale, and avoids contact with the material surface.

The next step is to find a way to change and detect the arrangement of fluxons easily, including more complex nanostructured patterns. Building on the progress made by Lang and the team, these methods could become the writing and readout techniques for a superconductor-based memory device.

Full details of the research are reported in Physical Review Applied (DOI: 10.1103/PhysRevApplied.8.014021).

Light is seen to scatter off light

The idea that particles of light can interact with one another – known as light-by-light scattering – has finally been observed some 80 years after it was first predicted. That’s the claim of members of the ATLAS collaboration at CERN in Geneva, who have combed through data they took in 2015 when lead ions collided with each other in their detector. Some scientists, however, dispute the priority of the finding, arguing that light-by-light scattering was observed by an experiment at the SLAC National Accelerator Laboratory in California 20 years ago.

Classically, light cannot interact with light because photons – even though they mediate interactions between charged particles – do not themselves carry charge. However, Heisenberg’s uncertainty principle, a cornerstone of quantum mechanics, says that photons can briefly transform into “virtual” pairs of particles and antiparticles, such as electrons and positrons. There is then a tiny chance that these virtual particles can recombine to create pairs of real photons.

The upshot is that two photons, each producing a virtual particle-antiparticle pair in the process, can scatter off one another. In doing so, they change direction but do not lose any energy. The interaction, in other words, is elastic.

Seeking the light

The idea of looking for this phenomenon at the LHC was put forward in 2012 by CERN’s David d’Enterria and Gustavo Da Silveira, now at the Federal University of Rio Grande do Sul in Brazil. They proposed studying those events in which lead ions do not physically collide with one another but nevertheless pass by close enough that their electromagnetic fields interact strongly. Any light-by-light scattering that does take place would be revealed by two photons flying away from the centre of the detector in opposite directions (to conserve momentum), while the lead ions would continue on an almost undisturbed path around the LHC ring. Technically, those photons, being force mediators rather than particles in a beam of light, are virtual. But because the lead ions travel close to the speed of light, the electromagnetic fields associated with them become relativistically compressed. The squeezed field lines at that point therefore resemble a single line, which is characteristic of a real photon. The photons can thus be regarded as “quasi-real”.

Flash of inspiration

Putting the proposal into action, members of the ATLAS collaboration analysed data from lead-ion collisions taking place in their detector during 2015. As they report in Nature Physics, out of a total of four billion events they identified just 13 that could have been due to light-by-light scattering. These were events comprising flashes of light at two diametrically opposed points in the ATLAS calorimeter, but with no sign of any other particle emission – and in particular no curved tracks from charged particles travelling through the detector’s magnetic field.

The researchers also worked out how many background events would be likely to have produced the same signal during the data-taking period. Such events could include the rare occasions when electrons inside the detector radiate almost all of their energy away in the form of photons. Concluding that the combined background would, on average, yield only 2.6 events, they calculated that their 13 candidate events had a statistical significance of 4.4 standard deviations, just a little short of the 5 standard deviations conventionally required to claim a discovery in particle physics.

Priority claims

Writing a “news and views” piece to accompany the latest paper, Spencer Klein of the Lawrence Berkeley National Laboratory in California points out that ATLAS is not the first experiment to provide evidence of light-by-light scattering. In 1975 physicists in Germany observed photons elastically scattered by the electromagnetic field of a nucleus. However, in that case the photons in the nuclear field were entirely virtual. As such, according to Klein, the ATLAS collaboration “reports the first direct evidence for light scattering from light”.

Adrian Melissinos of the University of Rochester in the US, however, disputes this. In 1997 he was part of a group that published what he regards as direct evidence of light-by-light scattering at the E144 experiment at SLAC. The experiment involved firing photons from an intense laser at high-energy electrons to boost the former to gamma-ray energies, and then recording the few times when those gamma-ray photons interacted with the laser photons.

The experiment was not set up to monitor elastic scattering, but instead detected positrons generated during the inelastic scattering of photons. Nevertheless, Melissinos argues that E144 provided just as direct an observation of light-by-light scattering as has ATLAS. He also points out that the photons in their case, being produced by a laser, were fully real.

Looking ahead

ATLAS should start collecting new data from lead-lead collisions at the end of next year, and should also benefit from an LHC intensity upgrade due for the middle of the next decade. With more statistics, scientists will be better able to work out the contribution of various different charged particles in the scattering process – be they electrons and positrons, muons or even heavier particles from beyond the Standard Model that would signify new physics.

For the moment, however, ATLAS deputy spokesperson Andreas Hoecker is happy simply to have seen the long-predicted effect. “Even without any new physics, light-by-light scattering is already very interesting,” he says. “It is a very beautiful phenomenon.”

Tunable lens brings adjustable focus to fluorescence-guided surgeries

Davide Volpi and collaborators at the Oxford Institute for Radiation Oncology in the UK have developed an electrical lens system for fluorescence-guided laparoscopic surgery, a minimally invasive procedure that exploits fluorescent biomarkers to help clinicians remove cancerous or abnormal tissues from the abdomen.

Such minimally invasive techniques have become increasingly popular over the last few years, since they reduce recovery time and the overall risk to the patient, and the addition of fluorescence imaging offers improved contrast to allow smaller lesions to be treated.

In many cases, however, surgeons must work with very small areas of fluorescence in the tissue. A laparoscope that can zoom into an area of interest would therefore help clinicians to distinguish and investigate these smaller areas, which should lead to more successful surgeries.

Improving image quality without distorting light

Volpi and colleagues have developed a tunable lens system (TLS), a device that exploits electronics to change focus during operation, that can be integrated into commercially available laparoscopes (Biomed. Opt. Express 8 3232). While common in consumer cameras, such adjustable focus control has not yet been incorporated into a clinically viable setup. Furthermore, the research team made their device suitable for fluorescence-guided surgeries, for which it is vital to keep the light free of distortion as it passes through the lens.

The optical performance of the Oxford team’s TLS offers two key advantages. First, its fast response time of less than 7.5 ms allows it to switch focus extremely quickly, potentially allowing a full autofocus capability or the use of two or three preset focus points (see visualization below).

Second, the TLS has been specifically designed to prevent the light distortion caused by chromatic aberrations. This is important for fluorescence-guided surgeries, since the most effective biomarkers generate fluorescence in the near-infrared (NIR). Optical tests showed that the TLS offered achromatic performance in the visible and NIR, enabling simultaneous imaging of white-light reflectance and fluorescence from the biomarkers.

In their paper, Volpi and collaborators also tested the performance of the TLS on animal models, obtaining in vivo images of tumours in mice. Fluorescence images obtained with a laparoscope fitted with the TLS appear noticeably sharper than those from a similar system equipped with a standard visible-NIR lens, suggesting that the TLS offers a viable option for fluorescence-guided laparoscopic surgeries in the clinic.

Automatic image algorithm separates overlapping cells

Macrophages – the ‘big eaters’ within the immune system – could offer new insights into cell migration if it were possible to accurately visualize their interactions. But researchers still struggle to distinguish overlapping cells in microscope images, which makes it difficult to locate and track these cells. While various methods have been used to segment overlapping cells, a UK team of biomedical engineers, computer scientists and biologists have now worked together to develop an automatic algorithm that could improve these techniques, and also enable more accurate cell location and tracking (MIUA 2017).

The accurate segmentation of biomedical images is a key focus for researchers who are trying to improve image analysis of overlapping cells. José Alonso Solís-Lemus and co-workers at City University of London and King’s College London have now developed an algorithm that improves the detection and segmentation of fluoresced overlapping cells of fruit fly embryos (Drosophila melanogaster). The algorithm, called Anglegram, offers new insights into how macrophages use signals to help them migrate.

Anglegram reveals junctions between overlapping cells

In the fluorescence images analysed by the team, the red signal corresponds to the cell nuclei, while the green represents the intracellular proteins, known as microtubules. The key challenge when analysing these images is that the green fluorescence from the microtubules is not strong enough to accurately determine the cell segmentation.

Anglegram combined with junction splicing provides the most accurate cell segmentation

The Anglegram algorithm can help when a cluster of two or more overlapped macrophages is detected by the presence of two or more red nuclei. The algorithm works by automatically detecting the junctions between overlapping cells, as determined by the inner angles of the cell boundaries. This produces a 2D matrix resembling a heat map, which is known as the Anglegram. Boundary points representing the junctions can then be located based on the maximum intensity projection of the Anglegram matrix.
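A minimal sketch of that idea follows. It is an illustration built from the description above, with hypothetical function names, not the authors' implementation; in particular the paper works with signed inner angles (so concave junction points exceed 180°), whereas this sketch uses unsigned angles for simplicity:

```python
# Sketch of the core Anglegram idea: for each point on a closed cell
# boundary, measure the angle subtended by neighbours at a range of
# separations, building a (boundary point x separation) matrix whose
# maximum projection highlights corner-like junction candidates.
import math

def anglegram(boundary, max_sep=10):
    """boundary: ordered (x, y) points on a closed contour.
    Returns matrix[i][s-1] = angle (degrees) at point i between the
    neighbours s steps behind and s steps ahead."""
    n = len(boundary)
    matrix = []
    for i, (px, py) in enumerate(boundary):
        row = []
        for s in range(1, max_sep + 1):
            ax, ay = boundary[(i - s) % n]   # trailing neighbour
            bx, by = boundary[(i + s) % n]   # leading neighbour
            v1, v2 = (ax - px, ay - py), (bx - px, by - py)
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            norm = math.hypot(*v1) * math.hypot(*v2)
            row.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
        matrix.append(row)
    return matrix

def max_projection(matrix):
    """Maximum over separations for each boundary point; points where the
    boundary turns sharply deviate from the ~180 degrees of smooth runs."""
    return [max(row) for row in matrix]
```

On a simple polygonal contour, corners stand out clearly against the flat-boundary baseline in the projection, which is the cue the full algorithm exploits to place junctions.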

The researchers tested the performance of the Anglegram algorithm by combining it with three different segmentation methods, and comparing the results with a fourth technique that was used on its own as a benchmark. All segmentation methods differ in the way they operate, but the junction splicing technique produced the most accurate segmentation when combined with Anglegram.

The results suggest that Anglegram offers a promising method for improving the segmentation of overlapping cells. Further development of the algorithm could eventually make it possible to detect overlapping cells when more than two cells are clumped together.

Celestial companion

It’s safe to say that finding a book about eclipses is an easy task – they number in the hundreds and vary from tiny pocketbooks to glossy coffee-table books with incredible pictures. In this saturated market, it can be difficult to pick out a useful guide to learning about eclipses and how to view them. If you are lucky enough to be in the path of the upcoming eclipse this month but don’t feel fully prepared when it comes to, say, photographing this seemingly rare event, or if you are keen on getting to grips with the science in detail, then Totality: the Great American Eclipses of 2017 and 2024, written by Mark Littmann and Fred Espenak, is the book for you.

At first glance, it may come off looking too much like a science textbook, and in some ways it is. But don’t let that put you off. Littmann is an award-winning astronomy writer, while Espenak is better known as “Mr Eclipse” and between the two of them, Totality covers pretty much everything you would possibly want or need to know about eclipses. The book is detailed, but the language is clear, simple and even poetic at times, as the authors describe each aspect of an eclipse.

The chapters cover everything from the mythology and lore of eclipses; how our understanding of these events has grown over the millennia; the scientific impact of eclipses, including the eclipse of 1919, which the authors describe as the “eclipse that made Einstein famous”; and of course step-by-step guides on how to safely observe and photograph a total solar eclipse. The chapter on photography is especially useful for those who may be keen on capturing this event but are unsure of what works and what equipment is necessary. Littmann and Espenak explain how even the simplest of cameras or a smartphone can be used, while also detailing techniques for those with more photographic equipment and/or ability. Each chapter is full of tables, charts, diagrams and maps, so if an in-depth study on all things eclipse-related is what you are looking for, then get a copy of Totality.

  • 2017 Oxford University Press 347pp £25hb

To the ends of the Earth

A total solar eclipse

On 30 September 1131 BC, according to physicist and science writer Frank Close, the prophet Joshua looked to the heavens and witnessed one of several solar eclipses mentioned in the Old Testament. But what makes this event special for Close is Joshua’s description of the Sun stopping in the sky and the Moon reversing direction during the eclipse. As a scientist, Close knows that the Moon does not go into reverse, so he was convinced that Joshua experienced an optical illusion – and he wanted to see it too.

Eclipse: Journey to the Dark Side of the Moon is the story of Close’s obsession with solar eclipses, which began 63 years ago in East Anglia, UK, where an eight-year-old Close experienced a partial eclipse. Since then, he has travelled to the ends of the Earth to witness total eclipses – racing across the sands of Libya and chasing the darkening Sun on both the Pacific and Atlantic Oceans. Indeed, it was while bobbing off the coast of Fiji in less-than-perfect weather conditions that Close finally experienced the “Joshua illusion”.

Much of the book is travelogue, and provides a taste of the package tours that take enthusiasts to some of the most improbable places in search of totality. Close also includes a few amusing anecdotes about his fellow fanatics – including one man who seemed convinced that he was going to be plucked off a boat by an alien spacecraft during an eclipse. Close stresses that every eclipse is unique, with an event in the desert being very different from one witnessed at sea.

The hook for this book is the total eclipse that will sweep across a vast arc of the US on 21 August. Close wraps up his book with a few tips on how to best view the event – avoiding clouds and traffic jams when possible. And if you miss this month’s eclipse but happen to be in North America in 2024, an eclipse will sweep across Mexico, the US and Canada.

  • 2017 Oxford University Press 240pp £12.08
Copyright © 2026 by IOP Publishing Ltd and individual contributors