
New knit theory could help make smart self-folding materials

As well as being a popular and pleasant hobby, knitting is a thousand-year-old technology and, unlike weaving, it can produce loose yet extremely stretchable fabrics. Researchers in France have now developed a model to describe how individual stitches in a knitted fabric deform when stretched. The work could help in the design of thread-based smart self-folding materials with specific and complex elastic properties for use in applications such as sports textiles or soft robotics.

“Our model and experiments have identified which aspects are purely structural (like the shape of the knitted fabric under tension) and those that depend on the material itself (such as the amplitude of the applied force),” explains Samuel Poincloux of the École Normale Supérieure in Paris. “Our model is based on a Lagrangian approach, which means that we want to find the shape of the fabric that minimizes energy, which is assumed to depend on the bending energy of the yarn, while respecting certain constraints. This assumption implies that the yarn is inextensible, so the first limit in our model is to keep the length of the yarn constant when deforming the entire fabric.

“The second constraint ensures that the topology of the yarn (the stitch pattern) is conserved during the deformation.”
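Poincloux’s description can be written compactly as a constrained variational problem. The notation below is an illustrative sketch, not taken from the paper:

```latex
% Yarn centreline \mathbf{r}(s), arclength s, curvature \kappa(s),
% bending modulus B. The fabric shape minimizes the bending energy
E = \frac{B}{2} \int_0^{L} \kappa(s)^2 \, \mathrm{d}s
% subject to two constraints:
%   (1) inextensibility: |\mathbf{r}'(s)| = 1, so the total yarn length L is fixed;
%   (2) fixed topology: the stitch pattern (the loop crossings) is preserved.
% Enforcing (1) with a Lagrange multiplier field \lambda(s) gives the functional
\mathcal{L} = \frac{B}{2}\int_0^{L} \kappa(s)^2 \,\mathrm{d}s
  + \int_0^{L} \lambda(s)\left(|\mathbf{r}'(s)|^2 - 1\right)\mathrm{d}s ,
% whose stationary points give the equilibrium stitch shapes under load.
```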

Looking at the behaviour of individual stitches

Unlike woven fabrics, which are made using multiple threads that cross each other, and which can be modelled by the so-called Chebyshev net, knitted materials consist of a single thread that forms intertwined loops, or stitches. Although the constituent yarn does not deform much when stretched, the individual stitches can deform by a great deal because they are curved and because the yarn can slide from one stitch into neighbouring stitches. This is what makes knitted materials so flexible (they can stretch to twice their length) and explains why they can easily drape over other objects.

Previous models of knitted material assumed that it stretches uniformly when deformed, but these theories neglect the position of individual stitches in the fabric. The new model takes into account the fundamental mechanical behaviour of interlocking threads in each stitch.

Knit behaves similarly to a rubberlike material

The researchers crafted a fabric using a model elastic yarn knitted into the common stockinette, or “point Jersey”, pattern in which the stitches are organized along rows and columns (called “course” and “wale” respectively). The 27-cm-wide sample was made from a thin nylon-based filament and consisted of a grid of 51 x 51 stitches. They then subjected the material to mechanical tests in which they stretched it along the wale direction at a constant speed of 0.1 mm/s while clamping it along the course direction. They followed the mechanical behaviour using a traction bench equipped with a dynamometer and imaged the stitch pattern using a digital camera.

The team, which is led by Frédéric Lechenault, found that the knit behaves similarly to a rubberlike material: It is very stretchable and has a geometric Poisson ratio of nearly 0.5. The individual stitches also deform by elongating in a way that minimizes the energy associated with bending the thread.
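The geometric Poisson ratio quoted here is simply the negative ratio of transverse to axial strain. A minimal sketch with made-up sample dimensions (not the paper’s measurements):

```python
def geometric_poisson_ratio(L0, L, W0, W):
    """Negative ratio of transverse to axial strain for a stretched sample."""
    eps_axial = (L - L0) / L0   # strain along the pull (wale) direction
    eps_trans = (W - W0) / W0   # strain across it (course direction)
    return -eps_trans / eps_axial

# Hypothetical dimensions in mm: stretched 20% lengthwise, narrowed 10%.
nu = geometric_poisson_ratio(L0=100.0, L=120.0, W0=270.0, W=243.0)  # → 0.5
```

A value near 0.5 is what the team measured, matching rubberlike behaviour.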

Although the model was specifically applied to the stockinette pattern, it provides a general framework for studying a large class of knitted materials, including those used in advanced engineering and biomedical applications, say the researchers.

Detailing their work in Physical Review X 10.1103/PhysRevX.8.021075, they say that they now plan to study the effects of friction at the stitch crossing points in their knit. “We discovered that this friction adds a fluctuating component to the mechanical response and is similar to avalanching phenomena such as earthquakes or granular materials,” Poincloux tells Physics World. “A paper on these results has just been accepted in Physical Review Letters and a preprint is available at arXiv:1803.00815.”

Sonablate HIFU treats prostate cancer, minimizes side effects

SonaCare Medical has reported five-year outcomes from a study of focal therapy using its Sonablate high-intensity focused ultrasound (HIFU) system. The multicentre study included 625 patients with clinically significant non-metastatic prostate cancer (Eur. Urol. 10.1016/j.eururo.2018.06.006).

With a median follow-up of five years, failure-free survival at one, three and five years was 99%, 92% and 88%, respectively, equivalent to that achieved with surgery. Metastasis-free, cancer-specific and overall survival at five years were 98%, 100% and 99%, respectively. The study reports that 98% of men maintained pad-free urinary continence after their procedure and 85% maintained erectile function – improved outcomes compared with those seen for surgery and radiation therapy.

“Focal HIFU is a major shift in treating men with early prostate cancer,” says contributing author Hashim Ahmed from Imperial College London. “Our study shows that cancer control in the medium term is very good and, importantly, men can expect a low risk of side effects. All men who are suitable for focal HIFU should be told about this treatment option so they might consider it as an alternative to radical prostatectomy or radiotherapy.”

The study concluded that failure-free survival with HIFU can be equivalent to that of surgery, but with a side-effect profile far more beneficial to the patient’s quality of life post-procedure. Mark Carol, CEO of SonaCare Medical, notes that the study encompasses the largest focal treatment patient population to date, followed for the longest period of time.

“Until now, otherwise healthy men with prostate cancer faced the prospect of leaving the hospital after treatment with their cancer treated but with a compromised quality of life,” Carol explains. “This study shows it is possible to achieve whole-gland equivalent cancer control rates without the concomitant side-effect profile of whole-gland treatments. Now, otherwise healthy men with prostate cancer can leave the hospital post focal HIFU treatment with their cancer under control yet still healthy. They can even return to work and activities of daily living the very next day instead of having to wait the weeks required with surgery.”

Magnetic particle spectroscopy may improve stroke diagnosis

A recent research paper in the journal Medical Physics sheds new light on the use of non-invasive magnetic particle spectroscopy to evaluate blood clots associated with a number of different diseases – ranging from stroke to myocardial infarction to deep vein thrombosis – holding out the promise of significantly earlier and more accurate diagnosis. So, how exactly does magnetic particle spectroscopy work? How can it be used to assess blood clots? And how might this approach be implemented in clinical settings?

The paper describes an investigation into how magnetic spectroscopy of nanoparticle Brownian rotation, a type of magnetic particle spectroscopy that’s particularly sensitive to the Brownian rotation of magnetic nanoparticles, was used to detect and characterize blood clots (Med. Phys. 10.1002/mp.12983).

As co-author John Weaver, professor of radiology at the Dartmouth‐Hitchcock Medical Center, explains, nanoparticle spectroscopy is used to measure the magnetization produced by magnetic nanoparticles in an alternating magnetic field – with an in-depth knowledge of the spectrum allowing users to measure “how free the nanoparticles are to follow the magnetic field”.

“Bound nanoparticles are less free than unbound nanoparticles, so we used spectroscopy to measure the number of bound nanoparticles,” Weaver explains. “If the nanoparticles are coated with antibodies, you can measure the molecular concentration of the molecule the antibody binds.”
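One simple way to turn a measured spectrum into a bound-nanoparticle count, as Weaver describes, is linear unmixing against reference spectra. Everything below – the harmonic amplitudes and the helper function – is invented for illustration and is not the paper’s method or data:

```python
# Hypothetical amplitudes of the 3rd, 5th and 7th harmonics for fully
# unbound and fully bound nanoparticles (bound particles rotate less
# freely, which damps the higher harmonics) -- illustrative numbers only.
s_unbound = [1.00, 0.40, 0.15]
s_bound = [0.60, 0.15, 0.04]

def bound_fraction(measured):
    """Least-squares fit of f in: measured[i] = f*s_bound[i] + (1-f)*s_unbound[i]."""
    d = [b - u for b, u in zip(s_bound, s_unbound)]
    num = sum((m - u) * di for m, u, di in zip(measured, s_unbound, d))
    den = sum(di * di for di in d)
    return min(max(num / den, 0.0), 1.0)

# Synthetic spectrum for a sample in which 30% of nanoparticles are bound:
measured = [0.3 * b + 0.7 * u for b, u in zip(s_bound, s_unbound)]
f = bound_fraction(measured)  # → 0.3
```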

The ties that bind

As part of their experiments, the research team coated magnetic nanoparticles with molecules that bind thrombin, the molecule that initiates the clotting process in blood. Thrombin helps bind both cells and molecules in the initial stage of clotting. New clots have a great deal of thrombin on their surface and, as the clots age, molecules of fibrin – an insoluble protein formed from fibrinogen during the blood clotting process, which forms a fibrous mesh to impede the flow of blood – become organized and clots present less surface thrombin.

In vitro spectrometer

“We reported an initial study where we measured the number of nanoparticles bound to the clot and how tightly they were bound. In this preliminary study, we found that new clots have more nanoparticles bound to them and that the nanoparticles bound to the clots are more tightly bound,” says Weaver.

Put simply, this means that the “relaxation time” of the bound nanoparticles decreases as clots become more mature and organized. As such, the total number of nanoparticles that are capable of binding the clot decreases as clots age. This discovery allowed the research team to confirm that the way nanoparticles interact with thrombin in clot formation differs over time. On older clots, the nanoparticles bind the thrombin on the surface, whereas in clots still under development they bind a number of thrombin molecules or even become “trapped” in the clot matrix during formation.

At this stage, Weaver says that he and the rest of the research team are developing technology to evaluate the clot composition and structure in patients who are suffering acute ischemic stroke. When clots have lodged in the large arteries supplying the brain, it is now becoming the standard of care to remove the clot with endovascular surgery. However, he stresses that the success of this therapy may depend “on the nature of the clot, especially its mechanical stability”.

“The clinical application of nanoparticle spectroscopy will require the development of in vivo methods to deliver the nanoparticles, obtain sufficient signal from the nanoparticles on the clot and characterize the clot accurately,” Weaver adds.

Carlo Rovelli: the author of The Order of Time discusses ‘perhaps the greatest mystery’

Writing a popular-science book is not easy: it takes time, effort and dedication. Writing a popular-science book that sells well is even harder. It helps to be famous, of course, which is why the likes of Brian Cox, Michio Kaku and Neil deGrasse Tyson have their books plastered all over the media and piled deep on tables in bookstores. But writing a bestselling popular-science book that drills into some of the most profound questions in physics – and does so with lightness, technical accuracy, brevity and grace – is harder still.

That feat, however, is precisely what theoretical physicist Carlo Rovelli achieved with his breakthrough book Seven Brief Lessons on Physics. It was first published in Italian in 2014 and has since been translated into more than 40 languages and sold more than a million copies. Barely 70 pages long, it is a magnificent example of the adage “less is more”. Tackling everything from quantum physics and cosmology to particle physics, space–time and black holes in so few words might daunt most writers, but Rovelli – who is based at Aix-Marseille University in France – managed that task with aplomb.

In his new book The Order of Time, Rovelli has adopted broadly the same approach. Focusing here on just one fundamental topic – the nature of time – the book is a little more conventional (and longer) than Seven Brief Lessons on Physics. The first part will be familiar territory to physicists, covering topics such as time dilation, the arrow of time, relativity, synchronization and the notion of the Planck time – the smallest possible length of time, 10⁻⁴⁴ s. The second part imagines a world without time, while the third is more speculative, in which Rovelli wonders how we perceive a flow of time in a timeless world.
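The Planck time quoted here follows directly from the three constants ħ, G and c; a quick consistency check:

```python
import math

# CODATA values, SI units.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)  # ~5.4e-44 s, the Planck time
l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m (~1.6e-33 cm), the Planck length
```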

To me there are two secrets to Rovelli’s success as a writer. First, he has a deep technical knowledge of the subject – basically, he has known what the hell he’s talking about ever since hanging a sheet in his student bedroom in Bologna in the 1970s with the Planck length (10⁻³³ cm) painted on it as inspiration to understand the world at such tiny scales. Second, Rovelli can condense complex ideas into beautifully written prose, gently guiding the reader through mind-bending ideas without resorting to cliché or stale metaphors. Anecdotes, history, art, philosophy and culture pepper the text.

That’s not to say that The Order of Time is easy; truly fundamental physics rarely is. I got lost several times when Rovelli discussed loop quantum gravity (his own field of research), time emerging in a world without time, the non-existence of space–time (what, really?), and (especially) the idea of “thermal time”. I also felt Rovelli could have been reined in here and there, with his analogies between apple cider and time along with references to “our fear of death [being] an error of evolution”. But then as the book is so short, re-reading it wouldn’t hurt or even take much time – assuming we can agree on what time really is.

To find out more about Rovelli’s writing process and thoughts on time, I put some questions to him.

You write in your book that “the nature of time is perhaps the greatest mystery”. What attracts you to this subject?

I got interested in the nature of time because of quantum gravity. It is well known that the basic equations of quantum gravity can be written without a time variable, and I wanted to fully understand what this means. Getting to understand the various sides of this question has been a long journey.

In a nutshell, how do you understand time?

I think that the key to understanding time is to realize that our common concept of “time” is multi-layered. Most mistakes about the nature of time, and much of the confusion, come from taking the full package of properties we attribute to time as forming a unique bundle that is either there or not. Now we understand that many properties we attribute to time come from approximations and simplifications.

Many properties we attribute to time come from approximations and simplifications.

Carlo Rovelli

Can you give an example?

For instance, our common idea that time is one and the same for everybody comes from the fact that we usually move at speeds much smaller than the speed of light with respect to one another. As we drop approximations, time loses properties that we instinctively attribute to it. So we can use the word “time” to mean various things, depending on the generality of the context.
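The approximation Rovelli means can be made quantitative with the Lorentz factor, which measures how much two observers’ clocks disagree (an illustration added here, not part of the interview):

```python
import math

C = 2.99792458e8  # speed of light, m/s

def lorentz_gamma(v):
    """Time-dilation factor between observers at relative speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

g_car = lorentz_gamma(30.0)      # ~110 km/h: gamma exceeds 1 by only ~5e-15
g_fast = lorentz_gamma(0.9 * C)  # 90% of light speed: gamma ~ 2.29
```

At everyday speeds the factor is indistinguishable from 1, which is why time feels universal; the differences only become dramatic near the speed of light.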

You claim that “divergences of opinion regarding the nature of time have diminished in the last few years”. What are physicists starting to agree upon?

Until a few years ago there were still physicists who thought that the difficult questions raised by quantum gravity about the nature of time could be circumvented simply by using an expansion of the gravitational field around Minkowski geometry to define the fundamental theory. Today few believe this.

Do you think physicists will ever solve the mystery of time?

Yes, I am optimistic. Why not? Physics has solved so many puzzles that appeared mysterious in the past. But I think that a full understanding of why time looks to us the way it does will not be a result that physicists will reach alone. Neuroscientists have to play their part. There are aspects of our intuitive sense of time that, I believe, it is a mistake to search for in physics alone. They depend on the specific structure of our brain.

A full understanding of why time looks to us the way it does will not be a result that physicists will reach alone.

Carlo Rovelli

You say our sense of the direction of time is due to the universe becoming increasingly disordered. Why do you think the cosmos began in such an ordered state?

This is one of the biggest open problems we have today. In the book, I suggest one possible speculative solution to this problem, but this is far from being established or clear. The solution I suggest is that there is a perspectival aspect in the direction of time. The notion of order depends on the ways two physical systems interact, and this may play a role. Remember that we have understood the rotation of the sky as a perspectival fact: our planet and the rest of the cosmos are in relative rotation. I suspect that something of this sort could be in play here.

The Order of Time touches a lot on the philosophy of science; how much philosophy have you studied?

I am not a philosopher, but I have studied philosophy, read philosophy and go to philosophy conferences. The best physicists of the past read a lot of philosophy. Einstein, Heisenberg, Schrödinger, Bohr, Newton – they were all nourished with philosophy. There is a current anti-philosophical fashion in physics, which I think is detrimental for the advancement of science.

Given that time is such a slippery notion, what challenges did you face in writing a book about it?

I had to keep in mind different audiences. I wanted a book that could be read by everybody, but was also meaningful and of interest for the scientists and philosophers immersed in these problems. The challenge was to keep talking to both audiences.

How would you describe your writing style?

I delete more than I write. I keep deleting. I want to say as little as possible, compatible with the main idea I want to transmit. I struggle for clarity, for myself before the reader. I think that metaphors help. We always think metaphorically. If you read scientists like Feynman or Einstein, they had a concrete visual understanding of what they were doing. I try to get there.

I delete more than I write. I keep deleting. I want to say as little as possible, compatible with the main idea I want to transmit.

Carlo Rovelli

How conscious were you of distinguishing between accepted science and speculation?

Very much so. This is a book that covers both accepted science and new ideas. At the price of repetition, I keep repeating in the book “this is something established”, or “this is something uncertain”, or “this is just an idea I am proposing”, and so on. Before the end of the book, a short chapter summarizes the path made and once again makes the distinctions. I have complained in the past that popular science sometimes forgets this distinction, and I have been careful not to fall into this same mistake.

As a native Italian speaker, do you write in English or Italian?

I write in English when I do science, in Italian when I write for the large public. In spite of having lived outside Italy for 25 years, I find that my mother language is still the one I control better.

So what’s your verdict on the translation?

It was difficult for me to find a good translator because my writing style is unusual, as it mixes scientific precision with literary freedom. The first translators I found were either missing one side or the other: the translation was either imprecise, or flat and boring. Then my UK publisher found Simon Carnell and Erica Segre, a couple of translators who work together, joining their scientific and literary competencies, and I have found their translation perfect. It fully renders the subtleties of the original style without ever losing precision. In fact, there are passages where I like their English version more than my original Italian.

What’s your favourite book about time, not counting your own?

The Direction of Time by Hans Reichenbach. It is full of correct ideas that have not yet been absorbed by everybody.

What’s the most common question you get when speaking about your book?

Can we travel back in time?

And what’s the most surprising question you’ve been asked?

I was once talking about the role of memory, and coherent traces of the past, in building our idea of temporality, and I mentioned something about my father. At the end of the talk, an elderly person raised their hand and asked whether my father had been on stage in a theatre as a young man with a certain theatre company. It was a surreal moment where past time seemed to be constructed from converging memories, making concrete what I was talking about.

What did you make of Benedict Cumberbatch’s audio recording of The Order of Time?

His voice and his interpretation add depth and meaning to the text, and make it much better.

You say in the book you don’t fear death. What, if anything, do you worry about?

Oh, plenty of things! Getting old, getting weak, losing love. Plus global warming, increasing belligerence, increasing social inequalities.

If we live in a timeless world, how did you find time to write a book?

I do not know how I find time to write. It is just because I like writing.

  • 2018 Allen Lane 199pp £12.99hb

UK’s access to European funding under threat from ‘third-country’ status

The European Commission (EC) has called for €100bn to be spent on Horizon Europe – the region’s next seven-year framework programme, which runs from 2021 to 2027. The successor to the €80bn Horizon 2020, the budget for Horizon Europe will need to be ratified by the European parliament and member states before it can come into force. Outlining the programme in a speech last month, Carlos Moedas, European Union (EU) commissioner for research, science and innovation, noted that Horizon 2020 had been one of the EU’s success stories. “The new Horizon Europe programme aims even higher,” he said. “As part of this, we want to increase funding for the European Research Council to strengthen the EU’s global scientific leadership, and reengage citizens by setting ambitious new missions for EU research.”

As part of Horizon Europe, the EC is proposing a new European Innovation Council to “modernize funding for ground-breaking innovation in Europe”. It will aim to bring the most promising high-potential and breakthrough technologies from lab to market application, and help the most innovative start-ups and companies scale up their ideas.

Another change to the funding programme regards international partners, dubbed “third countries”. In Horizon Europe, non-EU third countries, such as Canada, Japan and South Africa, will pay as they go, providing they have a free-trade agreement with the EU. But crucially they will not get out more than they put in. The new rules will not apply to countries such as Norway and Iceland, which belong to the European Economic Area.

The UK’s possible third-country status means it could lose access to a lot of extra funding. Indeed, UK scientists have done exceptionally well from EU research programmes. Estimates suggest the UK contributed €5.4bn between 2007 and 2013, but got back €8.8bn.

Close association

The Horizon Europe announcement came after the UK government noted that it would like to “fully associate” with European science programmes after the country leaves the EU. In a speech at Jodrell Bank in Cheshire on 21 May, UK prime minister Theresa May unveiled the government’s vision for EU–UK scientific co-operation. She stated that the UK is willing to pay for a “full association” with Horizon Europe and “close association” of the European Atomic Energy Community (Euratom). “I want the UK to have a deep science partnership with the EU,” she noted. “In return, we would look to maintain a suitable level of influence in line with that contribution and the benefits we bring.”

Not everyone is optimistic. The Nobel laureate Andre Geim from the University of Manchester doubts that the UK government can deliver the proposed plan. “The gang of 27 will place conditions favourable for their own countries,” he told Physics World. “Things for UK science are expected to turn bad for a generation, after the formal Brexit takes place.”

Scientists are particularly concerned about the UK’s future relationship with Euratom. The UK government had ruled out continued involvement with it as members are subject to the jurisdiction of the Court of Justice of the European Union (CJEU). However, the UK’s membership of the ITER fusion experiment is through Euratom, while the Joint European Torus (JET) at the Culham Centre for Fusion Energy (CCFE) in Oxfordshire, is largely funded by Euratom.

To get around the impasse, the UK states that it will now respect the remit of the CJEU when it comes to participating in EU programmes. “Reaching such an association would allow the continuation of all aspects of our programme,” notes Ian Chapman, chief executive of the UK Atomic Energy Authority, which operates the CCFE.

Analysis: The UK is already losing out

Brexit was always going to be mostly bad news for British scientists. The UK currently gets around 4% of its science funding from the European Union (EU), but wins far more back in return than it gives. Those benefits could evaporate when the UK exits the EU in March 2019 if the UK becomes a “third country” in Horizon Europe. As the European Commission announced last month, third countries will receive only what they put in from the €100bn pot. That seems fair from an EU perspective – why should the UK be out of the club but then get all the benefits?

While the political wrangling will no doubt continue until the UK has struck a final deal, it’s hard to see any scenario where UK science will benefit. UK university departments that are particularly adept at tapping into European funds will either have to find funding elsewhere or scale back activity in some areas.

The space sector is likely to be hit too. There has already been fallout regarding the UK’s participation in the Galileo satellite navigation system. The European Space Agency has approved the procurement of the next batch of spacecraft but, with no deal reached between the UK and the EU, British firms are bound to find it harder to win any contracts. The impact of Brexit is already starting to be felt.

Michael Banks is news editor of Physics World magazine

Renewables – limited or big and fast?

In a paper published in the journal Joule, researchers at Imperial College London (ICL) claim that studies that predict whole systems can run on near-100% renewable power by 2050 may be flawed as they do not sufficiently account for the reliability of the supply.

Using data for the UK, the team tested a model for 100% power generation using only wind, water and solar (WWS) power by 2050. The ICL researchers found that the lack of firm and dispatchable “backup” energy systems, such as nuclear or power plants equipped with carbon capture systems (CCS), means the power supply would fail often enough that the system would be deemed inoperable. They found that even if they added a small amount of backup nuclear and biomass energy, creating a 77% WWS capacity system, around 9% of the annual UK demand could remain unmet, leading to considerable power outages and economic damage.

Lead author Clara Heuberger, a PhD student from the Centre for Environmental Policy at Imperial, said: “Mathematical models that neglect operability issues can mislead decision makers and the public, potentially delaying the actual transition to a low carbon economy. Research that proposes ‘optimal’ pathways for renewables must be upfront about their limitations if policymakers are to make truly informed decisions.”

Co-author Niall Mac Dowell, also from the CEP, and director of the Clean Fossil and Bioenergy Research Group, said: “A speedy transition to a decarbonised energy system is vital if the ambitions of the 2015 Paris Agreement are to be realised. However, the focus should be on maximising the rate of decarbonisation, rather than the deployment of a particular technology, or focusing exclusively on renewable power. Nuclear, sustainable bioenergy, low-carbon hydrogen, and carbon capture and storage are vital elements of a portfolio of technologies that can deliver this low carbon future in an economically viable and reliable manner. Finally, these system transitions must be socially viable. If a specific scenario relies on a combination of hypothetical and potentially socially challenging adaptation measures, in addition to disruptive technology breakthroughs, this begins to feel like wishful thinking.”

The study made use of Mark Jacobson’s 100% WWS generic global scenario, but the ICL team had to modify it for their UK version so that they could test its reliability using a system optimisation model. That model could not retain the wave and tidal inputs that Jacobson had included (2.5% and 1.8% respectively), so proxies were used. The high level of solar PV (~40%) was also not viable in the test model. The final result was that high levels of curtailment were identified (33%), though it was noted that “this could be a loss or an opportunity for processes using this excess power”, e.g. power-to-gas conversion of surpluses, but this wasn’t followed up. It was also noted that “further, demand-side management, which is not included in this model, could alleviate curtailment levels”. There were significant shortfalls, but it might be that the omissions above could go some way to avoiding them.
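Curtailment and unmet demand in such models come down to hour-by-hour accounting of supply against demand. A toy sketch with invented numbers (far cruder than the ICL model):

```python
# Toy hourly balance between variable renewable supply and inflexible
# demand (made-up numbers, GW) -- just to show where "curtailment" and
# "unmet demand" come from in system models.
supply = [50, 80, 120, 30, 10, 90, 140, 60]
demand = [70] * len(supply)

curtailed = sum(max(s - d, 0) for s, d in zip(supply, demand))  # surplus thrown away
unmet = sum(max(d - s, 0) for s, d in zip(supply, demand))      # shortfall

curtail_share = curtailed / sum(supply)  # fraction of generation curtailed
unmet_share = unmet / sum(demand)        # fraction of demand not served
```

Storage, interconnectors or power-to-gas would shift energy from the surplus hours to the deficit hours, reducing both numbers at once.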

That’s not to say that full balancing is easy, or that every model does it right, and earlier, more extensive ICL work had pointed out the issues. However, the assertion in this new study that nuclear and fossil CCS are vital may raise some eyebrows. Nuclear seems unlikely to be able to make a major contribution to grid balancing (see my earlier post) since it is too inflexible, and CCS, with many projects around the world abandoned, seems increasingly unlikely to play a major role – an issue I will be exploring shortly in a series of posts.

The promotion of nuclear is, of course, still relentless, despite all the problems. Some claim that nuclear power can scale up quickly enough to meet the global climate threat, while renewables cannot. However, according to US energy guru Amory Lovins, “global and national data show the opposite”. He and his co-authors identify a range of errors, biases and misinterpretations in the studies claiming to prove that nuclear growth has been faster than for renewables. Some focus on programmes in individual countries on a per capita basis. For example, looking at a paper by Cao, Hansen et al., Lovins et al. say “Swedish nuclear power (which in 1976–86 grew 4.4× to 70 TWh/y) is shown as scaling 55× faster than Chinese wind-power (which in 2004–14 grew 124× to 158 TWh/y) – because Sweden’s population averaged 1/158th of China’s. Conversely, China’s unique addition in less than a decade (through 2016) of 25% of global solar photovoltaic (PV) and 35% of global wind-power capacity is shown as the slowest national achievement – an odd description of the nation that in 2016 added over 40% of new global renewable electric capacity, because it’s divided by 1.4 billion Chinese”.

Looking at the global data in absolute terms gives a very different picture. Even comparing specific programmes in specific countries, renewables expansion has been relatively faster than nuclear expansion, as the paper shows in the case of France (nuclear) and Germany (renewables). The former took 30 years to get going, the latter nine years – as did China’s renewables programme. The timescale comparisons are sometimes even used against renewables. Nuclear has been getting support for many decades, and although that led to growth early on, it has now stalled. Renewables were ignored until recently. So decadal comparisons are not very helpful, since they dilute the impact of renewables’ recent expansion – and in some cases, by using earlier cut-off dates, ignore it altogether, along with the recent nuclear stasis. Similar issues emerge in relation to cost estimates and project completion rates. Nuclear scores poorly on both, the paper claims, with few credible signs of improvement, whereas the costs of renewables are clearly falling rapidly and implementation rates are rising.
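The normalization effect Lovins describes is easy to reproduce with the figures quoted above (outputs and population ratio as given in the text; the exact 55× factor also involves time-spans, which this sketch omits):

```python
# Annual outputs reached over roughly a decade (TWh/y), as quoted above.
sweden_nuclear = 70.0     # Swedish nuclear, 1976-86
china_wind = 158.0        # Chinese wind, 2004-14
pop_ratio = 1.0 / 158.0   # Sweden's average population relative to China's

# Absolute comparison: China's wind build-out delivered more energy per year.
absolute_winner = "china_wind" if china_wind > sweden_nuclear else "sweden_nuclear"

# Per-capita comparison: dividing each output by its population flips the ranking.
sweden_per_capita = sweden_nuclear / pop_ratio   # in China-population units
china_per_capita = china_wind / 1.0
per_capita_winner = ("sweden_nuclear" if sweden_per_capita > china_per_capita
                     else "china_wind")
```

The same raw data thus support opposite headlines depending on the normalization chosen, which is precisely the objection Lovins et al. raise.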

There is some room for debate on the cost claims. Variable renewables need balancing, which adds to the cost, and grid integration problems have led to wasteful curtailment of output, notably in China. However, these are not fundamental problems: upgraded smart grid systems with storage and supergrid links can balance variable supply and demand, making the overall system more efficient and cheaper to run than the existing inflexible system, while avoiding the high cost of using nuclear and fossil fuels – points made by Lovins in an earlier study.

In this new paper, Lovins et al. end by saying that they hope their exposition of “how up-start, granular, mass-produced technologies can overtake a powerful centralized incumbent may illuminate whether the pace of global decarbonization must inevitably be constrained by incumbents’ inertias, could be sped by insurgents’ ambitions, or perhaps both”.

Clearly they think renewables can expand fast, which goes against an earlier study by the equally authoritative Vaclav Smil, who concluded that “replacing the current global energy system relying overwhelmingly on fossil fuels by biofuels and by electricity generated intermittently from renewable sources will be necessarily a prolonged, multidecadal process”.

That led to some challenges, including from Ben Sovacool at the Science Policy Research Unit at the University of Sussex, UK, drawing on some historical examples of rapid change, and from a range of other academics.

But perhaps the best challenge comes from the actual progress being made: see the new REN21 annual review, which I look at in my next post.

SUPERINSULATION – Heat Transfer and Influences on Insulation Performance

Date: 25 July 2018, 2 p.m. BST

Presenter: Holger Neumann – head of the cryogenics division at the Institute for Technical Physics (ITEP), Karlsruhe Institute of Technology (KIT)


Applications of Cryogenics: Past, Present, Future

Date: 27 July 2018, 2 p.m. BST

Presenter: Jorge Pelegrín Mosquera – research fellow at University of Southampton

Jorge Pelegrín attended the University of Zaragoza (Spain), where he completed a Master’s in physics and physical technology and a PhD in physics with the thesis “Study of thermal stability processes in MgB2 and REBCO wires and tapes”. During this period, he was a visiting researcher at the University of Southampton (UK) and the University of Genoa (Italy), studying and manufacturing superconducting wires and coils.

Jorge is currently a research fellow at the University of Southampton, where he has been involved in projects such as the application of superconductors in wind turbines and the low-temperature properties of materials used in the construction of cryogenic tanks.


Newborn planet spotted by the Very Large Telescope

A newborn planet orbiting a star just 370 light-years from Earth has been spotted by astronomers using the European Southern Observatory’s Very Large Telescope (VLT) in Chile. Dubbed PDS 70b, the huge planet is the first ever to be seen orbiting within a disc of planet-forming material. Its discovery could provide important clues as to how systems of planets form around stars.

PDS 70b is a gas giant with a mass that is believed to be several times that of Jupiter. It orbits a very young star called PDS 70, which is about 10 million years old and is surrounded by a dense protoplanetary disc of dust and gas. The disc appears to have a void near its centre, which has probably been cleared by the young planet. Astronomers have known about such voids for decades and have long speculated that they are associated with young planets.

Birthplaces of planets

“These discs around young stars are the birthplaces of planets, but so far only a handful of observations have detected hints of baby planets in them,” explains Miriam Keppler of Germany’s Max Planck Institute of Astronomy in Heidelberg, who led the team that discovered PDS 70b. She adds, “The problem is that until now, most of these planet candidates could just have been features in the disc”.

The discovery inspired a follow-up study that was led by Keppler’s Heidelberg-based colleague André Müller and looked more closely at PDS 70b and how it interacts with the planetary disc. This revealed that the planet is orbiting in the middle of the void at a distance of about 22 au from the star – which in the solar system would put it just beyond Uranus.

The surface temperature of PDS 70b is about 1000 °C and the radius of the planet is 1.4–3.7 times that of Jupiter. According to Müller and colleagues, the upper limit is somewhat greater than expected for the age of the planet – which they estimate to be 5.4 million years. Spectroscopic studies of light from the planet suggest that it has a cloudy atmosphere.

600 young stars

The observations were made using SPHERE, a planet-hunting instrument on the VLT that was used by two astronomical survey programmes to study PDS 70. One, called SHINE, aims to take near-infrared images of 600 nearby young stars in a search for new planets. The other, called DISK, looks at young planetary systems and protoplanetary discs.

SPHERE detects the faint light from planets by first blocking the much brighter light from the parent star using a coronagraph. Then a series of images is taken of the system over time. The position of the planet will change slightly as it moves in its orbit, while the star will appear stationary. By looking at how the image changes with time, astronomers can extract the light from the planet and reject light from the star.
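The extraction step can be illustrated with a toy NumPy sketch – purely illustrative, not the actual SPHERE pipeline, and with made-up pixel values. Because the star’s residual light is the same in every exposure, a median over the time series models it, and subtracting that model leaves only the moving planet:

```python
import numpy as np

# Five toy exposures: a bright stationary star residual and a faint
# planet that shifts by one pixel between frames (hypothetical values)
frames = np.zeros((5, 9, 9))
for t in range(5):
    frames[t, 4, 4] = 100.0    # star residual, identical in every frame
    frames[t, 2, 2 + t] = 1.0  # planet, moving along its orbit

star_model = np.median(frames, axis=0)  # planet averages out of the median
residuals = frames - star_model         # star removed, planet remains
```

In each residual frame the star pixel is now zero while the planet’s signal survives, because the planet occupies any given pixel in only a minority of the exposures.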

The studies will be described in two papers to be published in Astronomy & Astrophysics and preprints are now available: Miriam Keppler et al; and André Müller et al.

What type of physicist are you: leader, successor or toiler?

Only around 20% of highly cited physicists can be classed as “leaders”, with the rest being “successors” and “toilers”, according to a new bibliometric study. Carried out by Pavel Chebotarev from the Institute of Control Sciences of the Russian Academy of Sciences and Ilya Vasilyev from the Moscow Institute of Physics and Technology, it examined citation statistics for top physicists, mathematicians and psychologists, finding that researchers can be broadly grouped into these three distinct categories.

The researchers used citation data from Google Scholar, looking at a number of indicators, including the citations a researcher receives each year and in total, as well as the author’s h-index – a measure of a researcher’s productivity and the impact of their publications. They then performed cluster analysis to identify groups of researchers with similar characteristics.
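The h-index has a simple definition: a researcher has index h if h of their papers each have at least h citations. A minimal sketch, with illustrative citation counts:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    # A paper at rank r contributes while it still has >= r citations
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

h_index([10, 8, 5, 4, 3])  # four papers each have at least 4 citations
```

So an author with papers cited 10, 8, 5, 4 and 3 times has an h-index of 4: the fifth paper, with 3 citations, falls short of the 5 needed to push the index higher.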

“We wanted to ask whether we can automatically form clusters when describing the recognition that scientists receive by the scientific community and whether that varies from one discipline to another,” Chebotarev told Physics World.

Extended analysis

When looking at the citation data for mathematicians, psychologists and physicists, the authors identified three broad clusters that are “loosely based” on how a researcher’s citations per year change over time. Leaders tend to be experienced scientists who are widely recognized in their fields and whose annual citations keep increasing. Successors tend to be early-career scientists who have had a surge in citations in recent years. Toilers, meanwhile, may have a high citation count, but it stays mostly constant and may even drop slightly.
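As a toy illustration of how yearly-citation trends could separate the three groups – a crude hand-made rule, not the authors’ cluster analysis, with invented thresholds and citation series:

```python
import numpy as np

def classify(yearly_citations):
    """Crude label from early- and late-period citation trends."""
    y = np.asarray(yearly_citations, dtype=float)
    half = len(y) // 2
    # Linear-fit slopes over the early and recent halves of the record
    early = np.polyfit(np.arange(half), y[:half], 1)[0]
    late = np.polyfit(np.arange(len(y) - half), y[half:], 1)[0]
    if early > 1 and late > 1:
        return "leader"      # steady long-term rise
    if late > 1:
        return "successor"   # recent surge only
    return "toiler"          # flat or declining
```

Under this rule a steadily rising series reads as a leader, a flat series with a recent jump as a successor, and a high but static series as a toiler – matching the qualitative descriptions above.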

In physics, the researchers found that 48.5% of the 500 physicists analysed were classified as toilers, with 31.7% as successors and 19.8% as leaders. This compares with 52.0% of mathematicians being toilers, with successors and leaders making up 25.8% and 22.2%, respectively. For psychology, 47.7% are toilers, with 18.3% successors and 34.0% leaders.

The researchers say that they are now going to extend their analysis to other disciplines including literature, genomics and economics.

Copyright © 2025 by IOP Publishing Ltd and individual contributors