
Build a bot: new book covers history and future of robotics

They grow up so fast. It seems like only yesterday that the cherubic little darling was gazing wide-eyed at the world, waving its arms incoherently as it figured out how to move. Now it reaches out with those pudgy little arms, purposefully picking up toys and stacking them on top of each other as it actively plays. However, this change didn’t happen over months, or weeks, or even since “only yesterday” – it’s been mere hours. The reason is that this is not a child, but iCub, a wide-eyed, one-metre tall robot built by researchers at the Italian Institute of Technology in Genova that is designed to resemble a small child, as well as learn like one.

The robotic youngster starts out only able to move its eyes, learning to focus on objects of interest. As time progresses, skills develop and motor restrictions are unlocked, to simulate muscle development, and iCub learns to point, play with toys and even use objects as crude tools to push buttons.

iCub was developed to explore the so-called “embodied cognition hypothesis”, the notion that the development of human-like cognition is dependent on learning to physically interact with one’s environment – and that, by extension, the development of a truly human-like artificial intelligence is dependent on its having a physical body. That hypothesis, and the rationale behind it, is at the heart of How to Grow a Robot: Developing Human-Friendly, Social AI, a new book by computer scientist Mark Lee of the University of Aberystwyth.

The first third of Lee’s work explores the history of and current developments in robotics and artificial intelligence (AI) – from pallet-carrying bots in warehouses to computer chess champions – and highlights the issues that arise from trying to derive generalized AI from the prevailing task-based approach to developing AI. The middle section moves on to how robots like iCub might be taught to grow and learn through developmental interactions with their surroundings.

The final section speculates on the future of robotics, considering where and how fast AI might develop and touching on topics including the risk of the singularity, a concern that Lee emphatically dismisses. These areas are rich enough that more could easily have been made of them. It was also a pity to see transhumanism defined reductively as being solely about “downloading the brain”. It strikes me that less extreme concepts, such as technological augmentation to expand the limits of the human body, could have been explored in a book that examines how we are in many ways defined by the nature and extent of our embodiment.


Overall, How to Grow a Robot is a rich and comprehensive introduction to robotics and artificial intelligence, with a very clear message at its heart. However, it has one central flaw – in my opinion, it is sadly not a sufficiently engaging read.

Both of Lee’s previous books – on intelligent robotics and assembly systems, respectively – appear to have been intended for a specifically academic readership, rather than the popular audience at which this book is aimed. Perhaps Lee, like many scientists before him, found it a challenge to reshape his material for non-specialists. Yes, the explanations are largely there – and I appreciated the periodic jargon-busting fact boxes (although I remain none the wiser as to Lee’s distinction between consciousness and self-awareness). However, the work has the feel of “recommended class reading”, down to the end-of-chapter bullet lists repeating key takeaways for the reader. Personally, I prefer my casual non-fiction not to be presented as if there might be a pop quiz on the material later. More images of some of the commercial robots described in the opening chapter would also have been welcome for the general reader, and were conspicuous by their absence.


Other structural aspects, meanwhile, bring to mind a different academic format: the dissertation or thesis. On the positive side, the work has one well-explained central argument around which the whole book is constructed – a strength some of its popular-science peers could stand to learn from. Unfortunately, this is countered by repeated instances where the author sets up an interesting area of exploration before punting it to a reference text. The clichéd academic phrase “beyond the scope of the present work” is not used, but it might as well have been – variations on “we do not have the space” recur in its place, creating the looming impression of a word limit narrowly met and avenues curtailed. Taking time to direct the reader to further material is admirable, but such links belong in footnotes, not as disruptions to the main text that leave the average reader feeling underserved – especially if they do not have access to the kind of academic library needed to follow up on such references.

In short, How to Grow a Robot is a detailed and informative read – but one whose style and framing might better recommend it to a computer science syllabus rather than your coffee table.

  • 2020 MIT Press 384pp £22.50hb

Giant Magellan Telescope receives cash injection from the National Science Foundation

The National Science Foundation has awarded the GMTO Corporation – the organization overseeing construction and management of the $1bn Giant Magellan Telescope (GMT) – a grant of $17.5m over the next three years to accelerate the construction of the 25 m-wide telescope.

The GMT will be located at Las Campanas in Chile’s Atacama Desert and is on-track for first light in 2029. Hard rock excavation at the site is complete and in October 2019 the GMTO signed a $135m contract with German company MT Mechatronics and US-based Ingersoll Machine Tools in Illinois to design, build, and install the GMT’s telescope structure. This is set to be delivered to the Chilean site at the end of 2025.


The GMT will have seven circular mirrors, each 8.4 m in diameter. When put together, they will create a telescope equivalent to one mirror 25.4 m wide. Two of the mirrors are complete and in storage in Arizona, while three are in various stages of polishing. The final two haven’t been started yet, but the sixth mirror will be cast in early March 2021.

Acting as one

Each primary mirror is flexible, but they must remain in a precise shape for all seven to function together as one. “The mirror itself is supported, like on a bed of nails, where we have about 160 actuators behind each of these mirrors,” says GMTO project manager James Fanson. “We measure the shape of these mirrors and we adjust them every 30 seconds.” The NSF grant provides the funding to test a full-size primary mirror and actuator system and, Fanson adds, “to demonstrate we can control the primary mirrors the way we need to.”
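
To make the measure-fit-adjust cycle Fanson describes concrete, here is a toy closed-loop correction sketch in Python. Everything in it (the influence matrix, the sensor count, the loop gain) is invented purely for illustration; it is not GMTO’s control software, only a sketch of how actuator commands can be fitted to a measured surface error each cycle:

import numpy as np

# Toy active-optics loop (illustration only; the influence matrix, sensor
# layout and gain below are made up, not GMTO's actual control system).
rng = np.random.default_rng(1)
n_sensors, n_actuators = 300, 160          # ~160 actuators per GMT mirror

# Influence matrix: surface change at each sensor per unit actuator push.
influence = rng.normal(size=(n_sensors, n_actuators))

true_deformation = rng.normal(size=n_sensors)   # initial surface error
commands = np.zeros(n_actuators)
gain = 0.5                                       # fraction of correction applied

for cycle in range(10):                          # ten correction cycles (30 s each in the article)
    measured = true_deformation + influence @ commands          # sensor reading before this update
    # Least-squares actuator update that best flattens the measurement.
    update, *_ = np.linalg.lstsq(influence, -measured, rcond=None)
    commands += gain * update
    print(f"cycle {cycle}: rms surface error = {np.std(measured):.3e}")

The residual shrinks over successive cycles and then plateaus at the part of the error the actuators cannot reach, which is the generic behaviour of any such correction loop.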

Each GMT primary mirror reflects light to a corresponding 1 m-diameter secondary mirror with 675 actuators, which alter its shape every millisecond to counteract Earth’s atmospheric blurring effect. With the NSF grant, the GMTO will also build a portion of one of the secondary mirror systems. The grant also provides funding for a laboratory bench test that simulates the mirrors, actuators and disturbance sources, and that will demonstrate that the seven primary mirrors can all phase together as one – technology not used before.

“One of the areas of great emphasis the team has had from the beginning is to tackle the riskiest, most difficult questions early on to make sure they can be surmounted,” says GMTO president Robert Shelton, who adds that the NSF award will “enable us to accelerate our progress on critical components of the telescope”.

Evidence for life is found on Venus, wider access to the best radiotherapy

In this episode the astronomy writer Keith Cooper is on hand to chat about the surprising discovery of phosphine in the atmosphere of Venus. He explains that here on Earth, microbial life is the only natural source of phosphine – which could mean that life exists in the clouds of Venus. Cooper also speculates about how future missions to the “habitable zone” of the Venusian atmosphere could search for life.

Today, there is a huge disparity in cancer care across the world with people living in low- and middle-income countries having limited access to the best radiotherapy treatments. This imbalance is the focus of the social enterprise company EmpowerRT, which was founded by the University of North Carolina medical physicist Sha Chang. In this episode, Chang and her colleague Cielle Collins talk to Physics World’s Tami Freeman about what can be done to provide greater access to the best treatments.

Industrial lasers generate attosecond light pulses

Studies of ultrafast processes could become more widely accessible thanks to researchers at the University of Central Florida (UCF) in the US, who have shown that commercially available, industrial-grade lasers can generate attosecond pulses of light. Until now, such pulses could only be created at large laboratories boasting complex laser systems.

Researchers make attosecond-scale measurements by passing an attosecond light pulse through a material. When this pulse interacts with electrons inside the material, it gets distorted. By monitoring these distortions, scientists can create 3D maps of the electrons and make movies of their motion. As an example, the classical Bohr model of hydrogen indicates that an electron takes roughly 150 attoseconds (10⁻¹⁸ s) to orbit the hydrogen nucleus. Measurements with attosecond precision therefore enable researchers to study motion at a subatomic scale, which is vital for understanding fundamental physics phenomena such as interactions between light and matter.
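
That 150-attosecond figure follows directly from the Bohr model: the ground-state electron moves at speed αc around an orbit of radius a0, so its period is T = 2πa0/(αc). A quick Python check:

import math

# Worked check of the Bohr-model figure quoted above.
a0 = 5.29177e-11        # Bohr radius, m
c = 2.99792458e8        # speed of light, m/s
alpha = 7.29735e-3      # fine-structure constant

v = alpha * c                    # electron orbital speed, ~2.19e6 m/s
T = 2 * math.pi * a0 / v         # orbital period in seconds
print(f"orbital period = {T:.2e} s = {T / 1e-18:.0f} attoseconds")   # ~152 as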

Such measurements are, however, currently only possible in world-class laser facilities. While UCF houses such a facility, and another dozen or so exist worldwide, team leader Michael Chini explains that none of them truly operate as user facilities – that is, institutions that allow scientists from other fields to come in for a short period and use their equipment for research. This lack of access creates a barrier for chemists, biologists, materials scientists and others who could benefit from applying attosecond science techniques to their work, he says.

Obtaining few-cycle pulses from industrial-grade lasers

The extremely short light pulses employed in attosecond-scale experiments consist of a single oscillation cycle of an electromagnetic wave. Such pulses are typically generated by propagating femtosecond (10⁻¹⁵ s) laser pulses through tubes filled with noble gases such as argon or neon. The interaction between the light pulses and the gas broadens their spectrum, making it possible to compress them further in time.

Chini and colleagues have now developed a way of obtaining such few-cycle pulses from industrial-grade lasers, which could previously only produce pulses of much longer duration. They achieved their feat by compressing approximately 100-cycle pulses in tubes that contained molecular gases instead of noble gases and varying the length of the pulses sent through the tubes. This procedure made it possible to compress the pulses by a factor of 45, squeezing them down to just 1.6 oscillation cycles. At that point, Chini says, they showed that they could use these compressed pulses to produce attosecond pulses by generating an extreme ultraviolet supercontinuum – something he describes as “a hallmark of attosecond pulse generation”.
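
For a sense of scale, the quoted cycle counts can be turned into pulse durations. The article does not give the laser wavelength, so the sketch below assumes a Yb-based industrial laser near 1030 nm; the exact numbers shift with that assumption:

# Rough durations implied by the compression figures above,
# assuming an industrial laser near 1030 nm (an assumption, not stated in the article).
c = 2.998e8                 # m/s
wavelength = 1030e-9        # m, assumed
cycle = wavelength / c      # one optical cycle: ~3.4 fs

compressed = 1.6 * cycle            # the 1.6-cycle output pulse
original = 45 * compressed          # implied input pulse, given a factor-45 compression

print(f"one cycle    : {cycle * 1e15:.1f} fs")
print(f"output pulse : {compressed * 1e15:.1f} fs (1.6 cycles)")
print(f"input pulse  : {original * 1e15:.0f} fs (~{original / cycle:.0f} cycles)")

On those assumptions the output pulse lasts roughly 5 fs and the input a few hundred femtoseconds, of the same order as the “approximately 100-cycle” figure quoted above.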

Choice of gas and pulse duration are key

Study lead author John Beetar notes that the duration of the initial laser pulse is key. Filling the tube with a molecular gas – and especially a gas of linear molecules, such as the nitrous oxide used in this work – enhances the compression effect because the molecules tend to rotate into alignment with the laser field. However, this alignment-induced enhancement is only present if the pulse is long enough to rotationally align the molecules. The choice of gas is important, too, because the rotational alignment time depends on the molecule’s moment of inertia. To maximise the enhancement, the researchers aim to match this alignment time to the duration of their light pulses.

The UCF researchers, who report their work in Science Advances, say that single-cycle pulses are within reach using their technique. With further refinements, Beetar adds, the reduction in complexity offered by commercial, industrial-grade lasers should make attosecond science more approachable and could enable more interdisciplinary applications.

CERN accelerator technology to underpin FLASH radiotherapy facility

How can technology developed at CERN for high-energy physics bring state-of-the-art radiotherapy to a hospital just along the lakeside in Lausanne?

The technologies in question include high-performance electron accelerator components and simulation tools originally designed for CERN’s Compact Linear Collider (CLIC). Now, a collaboration between CERN and Lausanne University Hospital (CHUV) plans to use these to create a system for clinical delivery of FLASH radiotherapy.

FLASH radiotherapy, which involves delivering therapeutic radiation at ultrahigh dose rates of 40 Gy/s and above, vastly decreases normal tissue toxicity while maintaining anti-tumour activity. Of particular note, FLASH should enable dose escalation, potentially offering a new option for cancers that are resistant to treatment. “In all experiments so far, we observed that normal tissues are spared with this type of radiation,” says Jean Bourhis, head of radiation oncology at CHUV. “It’s a really reproducible effect. And there is no sparing of the tumour.”
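
The practical difference between the two regimes is easiest to see as delivery times. The figures below are assumptions chosen only for illustration (a 10 Gy dose and a conventional dose rate of order 0.1 Gy/s); only the 40 Gy/s threshold comes from the article:

# Back-of-the-envelope delivery-time comparison (assumed dose and conventional rate).
dose = 10.0                 # Gy, assumed single delivery
conventional_rate = 0.1     # Gy/s, typical order of magnitude for a clinical linac (assumption)
flash_rate = 40.0           # Gy/s, lower end of the FLASH regime quoted in the article

print(f"conventional delivery: {dose / conventional_rate:.0f} s")   # ~100 s
print(f"FLASH delivery:        {dose / flash_rate:.2f} s")          # 0.25 s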

Bourhis pioneered the development of FLASH radiotherapy, leading the team at CHUV that performed the first FLASH treatment in a human patient in 2018. The patient in question had a resistant superficial skin cancer and was treated with low-energy electrons of roughly 10 MeV. Next, Bourhis would like to translate the impressive observations seen in an experimental setting into clinical trials. Treating larger tumours at depths of up to 20 cm in the patient, however, will require much higher-energy electron beams.


“A clinical FLASH system must have a high accelerating gradient to achieve the beam energies needed to access deeper-seated tumours, energies in the range of 100 MeV,” explains Walter Wuensch, a senior researcher at CERN. This ability to accelerate beams in a very short distance, he notes, was one of the technologies designed for CLIC. The other key aspect of the high-energy physics study was to deliver a high current in a well-controlled and extremely stable beam – another important requirement for FLASH.

“For some years, CERN has been studying accelerator technology for a possible high-energy physics facility,” says Wuensch. “We have developed prototypes and shown their feasibility and performance. So it was with real excitement that we found out about the needs of CHUV. After some initial discussions it became clear that what we had developed for CLIC seemed an almost perfect match for what is needed for a FLASH facility.”

“The clinical need that we have really converges with the technological answer that CERN has,” adds Bourhis. “This is really powerful.”

The CERN–CHUV partnership has now finished the first phase of its study: moving from an initial idea to a conceptual design for the proposed FLASH facility. The next step will be to develop this baseline design in more detail to optimize the system for patient treatments. The team also hopes to collaborate with an industry partner in the radiotherapy field. In parallel, while the machine is being built, CHUV will prepare the required teams and infrastructure, and submit applications to regulatory agencies so that the treatment can reach patients as soon as possible.

Bourhis predicts that the FLASH facility should be operational within two to three years, at which point the team plans to embark on proof-of-concept clinical trials. He notes that after these trials, the system could be transferable to other hospitals.

The system will be 2–2.5 times larger than a conventional radiotherapy machine, but should still be compact enough to fit into existing hospital infrastructure. The cost of the first prototype system (being installed at CHUV) is estimated to be about €25m, though if manufacturing scales up, this price should come down. FLASH treatments, however, only require the patient to undergo two or three radiation fractions, compared with 20 or 30 for standard radiotherapy. As such, Wuensch suggests that the eventual cost-per-treatment could be competitive in absolute terms with classical radiotherapy.

“We really appreciate the opportunity to work on something that matches so well and is new on both sides,” Wuensch concludes. “It’s a wonderful opportunity to be able to work in the medical field.”

Physics in the pandemic: ‘Our event had grown from a hub for the UK to a truly global event within a matter of days’

Condensed Matter Physics in the City (CMPC) is a long-standing conference series that has become a focal point for researchers in the UK studying strongly correlated materials. First held in 2010, the meeting was conceived by the Hubbard Theory Consortium – a confederation of condensed-matter groups from Royal Holloway, University of London, the University of Kent, the London Centre for Nanoscience at UCL, Imperial College London and the Rutherford Appleton Laboratory (RAL).

This year’s edition of the conference was scheduled to take place in July 2020 at its traditional Royal Holloway location in Bedford Square in central London, organized by a committee from across the Hubbard Theory Consortium and collaborators. It was to be complemented by a summer school on “Foundations of Quantum Matter” on the Isle of Skye, organized by our UCL colleagues Andrew Green and Frank Krüger.

Then the COVID-19 pandemic struck.

With excellent speakers lined up for both events, we still wanted to go ahead with the conference. But given the uncertainty of whether an in-person meeting would be at all possible, the organizing committee quickly agreed to take the meeting online.

Traditionally, CMPC has provided delegates with lots of time for discussion, offered an informal setting and struck a balance between theory and experiment, while involving junior and senior participants alike. This ethos is epitomized by our colleague Piers Coleman, based at Royal Holloway and Rutgers University, whose enthusiasm for getting theorists and experimentalists together derives from the early days of the field at Bell Labs, Bristol and Cambridge universities, and the Landau Institute.

Thanks to the previous events in the series, including last year’s 10th-anniversary conference, which had events spanning Paris and London, the core group of academics in the Hubbard Theory Consortium felt they were ready to tackle the challenge of recreating some of this conference experience online.


Through a number of Zoom meetings, our committee considered how best to ensure that the spirit of free-flowing discussion of our live events could be salvaged online. We therefore decided to deliver the conference as an interactive Zoom session, rather than a webinar, which would have allowed only a text-based Q&A channel from participants. To avoid the possibility of any unwanted “Zoom bombing”, we carefully screened conference registrations for authenticity and circulated session links only to registered participants.

From our initial e-mail announcement of the online conference going ahead, the uptake was rapid, with more than 2000 visits to the conference website within two weeks, and registrations from across the globe. To advertise, we only used established e-mail lists from previous events, while organizers e-mailed collaborating groups and people on RAL’s e-mail lists, courtesy of Devashibhai Adroja.

So quick was the response, in fact, that one of my collaborators from Iran had signed up even before I had a chance to mention the meeting in a personal conversation. Lots of registrations came from the UK, the US, India, Germany and Japan, with more from across Europe, Asia, the Americas, the Middle East, Africa and a few from Australia.

Overall, we counted some 670 registrations from people in 36 countries. Unfortunately, we did not collect information about nationalities, which may have been even more diverse. Our event had grown from a hub for the UK to a truly global event within a matter of days.

Encouraging online discussion

So how was the atmosphere of the online format? Unlike at a live conference, we scheduled only two or three talks per day, held during UK afternoons so that attendance was not too difficult for delegates in Asia and the US. Additionally, despite providing talks on YouTube, both live and as recordings, we were overwhelmed by the enthusiasm of many of our more remote participants, who braved early mornings or late nights to attend live (the talk listings are still available via our schedule page).

With ample time allocated for discussions, the conversation really did take off despite the lack of face-to-face contact, and the event felt very interactive. There were almost no technical interruptions to report – the exception being when our Zoom call crashed on the opening day, as numbers soared and we may have encouraged too many cameras to remain open.

The audience was extremely disciplined – especially thanks to the continual efforts of my colleague Sam Carr from the University of Kent to remind everyone of the meeting etiquette (adopted from the PQM group meetings at Kent) and occasional help from co-hosts to mute the odd microphone that had been left open unintentionally.

If anything, the option to ask questions either live, by raising a hand, or via text-based chat seems to have lowered the barrier to making interventions, as many participants confirmed in our post-meeting questionnaire – and especially so for the student participants.

To engage students, who made up around 40% of the audience, we created new mechanisms, as the time-limited schedule did not allow the inclusion of additional live talks. Instead, students could submit pre-recorded talks, with the incentive of a prize for the best one, which went to Alexandra Ziolkowska from the University of Oxford.

Overall, we were extremely impressed with the quality of the student submissions, and there clearly was an audience for them, with several videos viewed more than 200 times at the time of writing. We also allowed students to lead additional discussion time after the formal sessions, which again drew good participation and further lowered the barriers to getting involved.

Meeting new people

But what about informal discussions? Online conferences make it trickier to meet new people, get introduced to your long-time physics idols, or hear and share the latest rumours and gossip. Nonetheless, we found the Zoom session provided a decent workaround, as the list of participants let you see who was there and then chat to selected individuals or a small group of people.


We’ve heard that many delegates used this opportunity to talk privately in the background, to follow up on discussion topics raised in the main session, or just make new friends. We also tried to stimulate face-to-face conversation by splitting the conference into break-out rooms during longer breaks, for which Zoom unfortunately only allowed random allocations. Personally, I found this worked surprisingly well on occasion, allowing me to meet a few random people and maybe discover common interests.

However, it really depended on having enough people actually willing to use the break-out rooms, rather than using the online format to “sneak out”, switch to another task or take a comfort break. Unfortunately, only a small number of participants engaged with us on social media, either via our Twitter channel or conference hashtag, though many indicated that our conference notice board had provided them with some helpful information.

From the responses to our post-meeting questionnaire, more than half saw their expectations towards online conferences raised (54%) and only a tiny minority had theirs lowered (4%). Advantages that were often mentioned included not having to spend time travelling and being able to fit in both personal work and conference attendance. Other benefits included not needing to hurry from one conference room to another, not having to worry about disturbing anyone if you arrived late, being able to have a drink, or quickly and discreetly switching to another task if needed. Also, delegates could drop in on the most relevant talks and discussions, while feeling fewer barriers to asking questions.

Several respondents from the developing world highlighted the fact that the online format had given them the chance to attend an event with top experts that they could otherwise not have afforded to attend in person (the event was free of charge). Negative responses were much scarcer, though quite a few people said that the 90 minutes we had allocated for talks – and 30 minutes for breaks – was too long, which meant we may have over-estimated people’s attention span. Excessive screen-time can be quite tiring in the long run, so maybe even shorter conference days would work out better.

Let’s meet again

In the future, we would love to meet colleagues in person again, and I hope our conference funding from the Institute of Complex Adaptive Matter (ICAM-I2CAM), EPSRC and the Institute of Physics can be carried over into 2021 to make this happen. But with the tremendous success of the online meeting this year, any future event, even if held “for real”, will surely have to have an online element associated with it too.

Indeed, our survey indicated a healthy appetite for such a “hybrid” model, with about 30% indicating that they would actually prefer to attend the event online, and 40% wanting to attend at least in part virtually, maybe due to the reduced impact on time and travel commitments. Anyone wanting to keep abreast of announcements for next year can follow our Twitter channel, which also has coverage from the 2020 online edition.

Most of all, what this year’s meeting has confirmed is that there is a global appetite for a deeper understanding of novel materials properties at the quantum level. With topics ranging from spin liquids and neutral Fermi surfaces to novel superconducting states in twisted graphene bilayers and machine learning, we hope we have inspired researchers well beyond the people who usually attend our meetings.

Nifty noise trick makes quantum states live longer

All particles have a wave-like nature, but in the everyday, macroscopic world their quantum behaviour is hidden thanks to interactions with their surroundings – for example via gravity, electromagnetism or heating. Such interactions also mean that a quantum system will quickly lapse into classical behaviour – a process known as decoherence – unless it is isolated from its environment. Scientists at the University of Chicago have now developed a simple strategy that allows quantum systems to fend off decoherence for 10 000 times longer. The technique, which has been tested on solid-state qubits (quantum bits) made from silicon carbide defects, could advance many areas of quantum science, including quantum computing, communications and sensing.

Quantum computing has made significant progress in recent years, and in 2019 researchers at Google unveiled a basic quantum processor that performs certain tasks faster than a conventional supercomputer. In practice, however, the problem of decoherence, which destroys any stored quantum information, must still be overcome before such devices become widespread and able to tackle significant real-world problems. This is because quantum computers work by exploiting the ability of a quantum particle to be in a superposition of two or more states at the same time, and the fragile nature of these superpositions makes them easy to destroy and hard to control.

Silicon carbide point defects

A team of researchers led by David Awschalom recently discovered that silicon carbide – a material already widely employed in high-power electronics – hosts point defects that might help solve this problem. Such defects are attractive because their decoherence time is much longer than the time required to perform a logical operation in a quantum processor – even at room temperature. The defects also contain electron spin states that can be controlled as qubits and manipulated using light.

In their earlier work, the researchers studied a specific crystal structure of silicon carbide known as 4H-SiC that contains naturally-occurring defects called divacancies. These defects correspond to a missing silicon atom next to a missing carbon atom in the material’s crystal lattice, and they are similar to nitrogen-vacancy centres in diamond (which also boast long decoherence times and have already been used as qubits that can be controlled by light at room temperature). Both types of defect form a multi-electron system with a net angular momentum, or spin, that can be aligned either parallel (“1”) or antiparallel (“0”) to an applied magnetic field.

A new trick

Researchers in several groups have explored different strategies for extending the decoherence times of this and other systems. One common approach is to physically isolate the system from its noisy surroundings. Another technique is to make all the materials as pure as possible. Neither task is easy in practice, and Awschalom and colleagues have now devised a very different protocol.

Instead of trying to eliminate noise in the systems’ surroundings, study lead author Kevin Miao says that he and his colleagues, in effect, “trick” the system into thinking that the noise isn’t there. They achieved this by applying an alternating magnetic field to the 4H-SiC divacancy in addition to the electromagnetic pulses (oscillating magnetic fields at microwave frequencies, in this case) employed to control the spin states in quantum systems. These pulses cause the spin of the divacancy to oscillate between its two qubit states (via electron spin resonance), and this oscillation can then be used to “write” quantum information to the sample.

By precisely tuning their additional magnetic field, Awschalom’s team demonstrated that they could rapidly rotate the electron spins in the system and allow it to “tune out” surrounding noise. “It’s like sitting on a merry-go-round with people yelling all around you,” Miao explains. “When the ride is still, you can hear them perfectly, but if you’re rapidly spinning, the noise blurs into a background.”
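
The merry-go-round picture can be captured in a toy numerical model. The sketch below is not the published protocol (which adds an alternating magnetic field on top of the microwave drive); it only illustrates the generic mechanism, namely that a continuous drive much stronger than slow environmental noise keeps a spin coherent far longer than free evolution would:

import numpy as np

# Toy spin-1/2 model: the qubit splitting fluctuates slowly (quasi-static
# noise delta). We compare free dephasing with a continuously driven
# ("spin-locked") qubit whose drive strength greatly exceeds the noise.
rng = np.random.default_rng(0)
n_real = 2000
times = np.linspace(0.0, 50.0, 200)     # arbitrary time units
noise_sigma = 0.5                       # r.m.s. quasi-static detuning
drive_rabi = 20.0                       # assumed drive strength >> noise

deltas = rng.normal(0.0, noise_sigma, n_real)

# Undriven: a spin prepared along x simply precesses at its random detuning,
# so the noise-averaged <Sx> decays rapidly (Gaussian dephasing).
free_signal = np.mean(np.cos(np.outer(deltas, times)), axis=0)

# Driven: H = (delta/2) sz + (Omega/2) sx. The spin, prepared along x,
# precesses about a tilted effective field; <Sx(t)> = cos^2(theta)
# + sin^2(theta) cos(omega_eff t), which stays near 1 when Omega >> delta.
omega_eff = np.sqrt(drive_rabi**2 + deltas[:, None]**2)
cos2 = (drive_rabi / omega_eff[:, 0])**2
locked_signal = np.mean(cos2[:, None] + (1 - cos2[:, None]) * np.cos(omega_eff * times), axis=0)

print(f"undriven <Sx> at t = {times[-1]:.0f}: {free_signal[-1]:.3f}")   # ~0, dephased
print(f"driven   <Sx> at t = {times[-1]:.0f}: {locked_signal[-1]:.3f}") # stays close to 1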

Divacancy ignores environmental noise

The new technique allowed the researchers to increase the decoherence time of the system to 22 milliseconds – four orders of magnitude longer than it would be without their modification, and far longer than any previously reported electron spin system. This is because the 4H-SiC divacancy, under the influence of the alternating magnetic field, is able to almost completely “ignore” some forms of temperature fluctuations, physical vibrations and electromagnetic noise, all of which are the bane of quantum coherence.

According to Awschalom, the approach creates a pathway to scaling up the numbers of qubits in a quantum processor. “It should make storing quantum information in electron spin practical,” he explains. “Extended storage times will enable more complex operations in quantum computers and allow quantum information transmitted from spin-based devices to travel longer distances in networks.”

The researchers say their approach could also be tested in quantum systems other than 4H-SiC divacancies, such as superconducting quantum bits and molecular quantum structures.

“There are a lot of candidates for quantum technology that were pushed aside because they couldn’t maintain quantum coherence for long periods of time,” Miao stated in a press release issued by the University of Chicago. “Those could be re-evaluated now that we have this way to massively improve coherence.” The best part, he adds, is that “it’s incredibly easy to do. The science behind it is intricate, but the logistics of adding an alternating magnetic field are very straightforward.”

The new coherence protection technique is detailed in Science.

Jupiter-sized planet found orbiting tiny white dwarf star

A huge planet about the size of Jupiter has been spotted orbiting a tiny white dwarf star 80 light-years away. The discovery by an international team of astronomers is puzzling because the planet should have been swallowed up long ago, when the star expanded to become a red giant before contracting to a white dwarf.

This is the first planet known to orbit a white dwarf and its existence suggests that at least some of the Sun’s planets could survive when our star becomes a red giant in five billion years.

The white dwarf (called WD 1856+534) and the giant planet (WD 1856b) were studied using NASA’s Transiting Exoplanet Survey Satellite – which looks for fluctuations in starlight that occur when a planet passes in front of its star. TESS spotted the planet whizzing around the star once every 34 h in an extremely tight orbit that is about 20 times closer to its star than Mercury is to the Sun. Another extraordinary thing about this system is that the white dwarf is about the size of Earth, so the planet is much larger in size than the star it orbits.
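
Those orbital figures hang together under Kepler’s third law. The white dwarf’s mass is not quoted in the article, so the check below assumes roughly half a solar mass purely for illustration:

import math

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2).
G = 6.674e-11               # m^3 kg^-1 s^-2
M = 0.5 * 1.989e30          # kg, assumed white-dwarf mass (~0.5 solar masses)
P = 34 * 3600.0             # s, the 34 h orbital period quoted above
AU = 1.496e11               # m

a = (G * M * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius = {a / AU:.3f} AU")                        # ~0.02 AU
print(f"Mercury (0.387 AU) is ~{0.387 / (a / AU):.0f}x farther out")

With that assumed mass the orbit comes out at about 0.02 AU, roughly 20 times smaller than Mercury’s, in line with the figure quoted above.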

No signs of destruction

“We were using the TESS satellite to search for transiting debris around white dwarfs, and to try to understand how the process of planetary destruction happens,” says Andrew Vanderburg at the University of Wisconsin-Madison, who is part of the team that made the discovery. “We were not necessarily expecting to find a planet that appeared to be intact.”

The team then studied the system in more detail using the Gemini Near-Infrared Spectrograph (GNIRS) on the Gemini North telescope in Hawaii. While the star could be seen by GNIRS, no light from an orbiting debris field, nor from the giant planet, was detected.

Siyi Xu of the Gemini Observatory explains, “Because no debris from the planet was detected floating on the star’s surface or surrounding it in a disc we could infer that the planet is intact”. She adds, “because we didn’t detect any light from the planet itself, even in the infrared, it tells us that the planet is extremely cool, among the coolest we’ve ever found”. Indeed, NASA’s Spitzer Space Telescope was used to put an upper limit of 17 °C on the temperature of WD 1856b – which is similar to the average temperature of Earth. As a result, it is possible that life exists on the planet.

Hospitable planet

“I think the most exciting part of this work is what it means for both habitability in general – can there be hospitable regions in these dead solar systems – and also our ability to find evidence of that habitability,” says Vanderburg.

WD 1856+534 was once a star like the Sun before it ballooned out to become a red giant – which would have consumed WD 1856b had it been in its current orbit. Instead, astronomers believe that the planet was in a much larger orbit when the star was in its red-giant phase, so instead of being consumed, it was knocked into an eccentric orbit.

Eventually, the red giant burnt out, leaving the cool white dwarf behind. At that point, the planet could have wandered into its current tight orbit – possibly through gravitational interactions with other surviving planets. This journey would have taken billions of years, but astronomers believe that WD 1856+534 is almost 6 billion years old. Because the system is relatively close to Earth, it’s possible that astronomers could spot the other planets in the future.

The observations are described in Nature.

Nanoscale LED shines brighter

An innovative fin-shaped design for light-emitting diodes (LEDs) could not only overcome the devices’ limited brightness, it could also help turn them into lasers. The new scheme, from researchers in the US, could prove valuable in applications including chemical sensing, hand-held communication technologies, high-definition displays and disinfection.

Wide bandgap semiconductor LED technology has developed substantially over the last few decades and is now in common use – for general lighting and displays, but also for a range of applications in areas such as photodetection and optoelectronics. The technology does, however, suffer from a major drawback: as an LED’s current density increases, its internal quantum efficiency (IQE) declines.

This decline in efficiency means that although an LED shines more brightly when supplied with stronger electrical currents, it does so only up to a certain point. After that, its brightness begins to drop off – a phenomenon known as efficiency droop. The extra heat generated in the LED as the applied current increases exacerbates the problem, with IQEs dropping by about 30% as the temperature increases from 23 °C to 177 °C. As a result of these effects, the power of LEDs with an area less than a square micron tops out in the nanowatt (nW) range.

Two mechanisms are believed to contribute to efficiency droop. In the first, known as non-radiative recombination, excited charge carriers (electrons and holes) recombine without emitting light. This unwanted process lowers the efficiency of light generation and increases heat losses, since the electrons and holes recombine by producing phonons – thermal vibrations of the crystal lattice – instead of photons. In the second mechanism, known as Auger recombination, the energy of the electron-hole pair is transferred to another electron or hole, again without a photon being emitted. This charge carrier then normally loses its excess energy to thermal vibrations.
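
A common way to see how these two loss channels produce droop is the so-called ABC recombination model, in which the non-radiative (defect-mediated) term scales with carrier density n, radiative emission with n², and Auger losses with n³. The coefficients in the sketch below are arbitrary illustrative values, not measurements of any device discussed here:

import numpy as np

# Illustrative "ABC" model of internal quantum efficiency versus carrier density.
A = 1e7        # 1/s      non-radiative (defect-mediated) recombination
B = 1e-10      # cm^3/s   radiative recombination
C = 1e-29      # cm^6/s   Auger recombination

n = np.logspace(16, 20, 9)                          # carrier density, cm^-3
iqe = B * n**2 / (A * n + B * n**2 + C * n**3)      # fraction of recombinations emitting light

for ni, eff in zip(n, iqe):
    print(f"n = {ni:8.1e} cm^-3   IQE = {eff:5.2f}")
# IQE rises with carrier density, peaks, then 'droops' as the Auger (C n^3)
# term takes over at high injection.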

Tiny comb

The new LED design, which was created by researchers at the National Institute of Standards and Technology (NIST), the University of Maryland, Rensselaer Polytechnic Institute and the IBM Thomas J Watson Research Center, had a serendipitous start. The research team did not set out to solve the efficiency droop problem directly. Instead, they were exploring ways of creating micron-sized LEDs for applications such as a miniaturized lab-on-a-chip.

NIST team member Babak Nikoobakht, who conceived the new design, explains that he and his colleagues used the same materials as in conventional LEDs, but formed into a different shape. Unlike the flat, planar design commonly used, Nikoobakht and colleagues built their light source out of long, thin zinc oxide (ZnO) strands, or fins, measuring around 5 microns in length and approximately 160 nanometres in width. The resulting fin LED array, grown on a gallium nitride (GaN) substrate, looks like a tiny comb that extends over areas as large as a centimetre or more.

“We saw an opportunity in fins, as I thought their elongated shape and large side facets might be able to receive more electrical current,” Nikoobakht says. “At first we just wanted to measure how much the new design could take. We started increasing the current and figured we’d drive it until it burned out, but it just kept getting brighter.”

The group’s new ZnO-GaN fin LED emits light at violet to ultraviolet wavelengths and generates about 100 to 1000 times as much power as a typical micron-sized LED – up to 20 mW. The new design does not exhibit efficiency droop, even at record-high current densities of 1000 kA/cm². Measurements of the device’s total spectral radiant flux also show that its output power increases linearly with drive current.

“One of the most efficient solutions”

Grigory Simin, a professor of electrical engineering at the University of South Carolina, US, who was not involved in the project, says that the new design is one of the most efficient solutions he has seen. “The community has been working for years to improve LED efficiency, and other approaches often have technical issues when applied to submicrometre wavelength LEDs,” he comments in a NIST press release. “This approach does the job well.”

Nikoobakht and colleagues also discovered that as they increased the current supplied to the LED to 1000 kA/cm², the LED’s comparatively broadband emission (with a wavelength in the ultraviolet range, around 385 nm) narrowed to just two wavelengths (403 and 417 nm) of an intense violet colour. At this point, the device started to lase, with an output power of over 20 mW.

According to the researchers, their nanoLED’s enhanced light-emitting performance comes from the fin shape mitigating nonradiative pathways. The large side facets of the fins also allow for effective electrical injection and form a laser cavity.

Nikoobakht notes that converting an LED into a laser usually requires a lot of effort, and typically involves coupling the device to a resonance cavity that allows the light to bounce around and increase in intensity. In this case, however, “it appears that the fin design can do the whole job on its own, without needing to add another cavity,” he says.

While the nanoLEDs and nanolasers described in this work operate in the near UV range, the researchers say their concept could be applied to different materials systems, such as aluminium gallium nitride (AlGaN), boron nitride (BN) or their heterostructures, to develop far brighter deep-UV devices. They report their work in Science Advances.

Physics in the pandemic: ‘Cancer patients in Nepal have been affected miserably by this COVID-19 pandemic’

Tirthraj Adhikari

I started work at the BP Koirala Memorial Cancer Hospital in Nepal at the beginning of this year as a fresh radiation oncology physicist. I had left Italy after graduating from the ICTP, just a month before the start of the COVID-19 pandemic. I was the first person to be appointed by the Ministry of Health & Population to work as a clinical medical physicist in Nepal.

17 January was my first day in the radiation oncology department, which has seven physicians and four physicists. The hospital has three linacs, a simulator and a brachytherapy system for intracavitary treatment. It is one of the busiest cancer centres in Nepal, with cancer patients visiting from all over the country. I interacted with patients from remote parts of Nepal, some of whom have to travel 16–20 hours by bus, after having trekked for one or two days. I was eager to chat with them, as I myself come from a remote village in far-west Nepal. For patients from my region, it takes on average 18 hours by bus to reach the cancer hospital.

During treatments, I noticed that – due to a lack of screening and diagnosis – patients would visit the hospital in the late stages of their disease. The major factor is poverty. When someone in a remote village feels sick, they usually visit the pharmacy and buy medicine without a prescription from a physician. This makes them feel well for some time, but after a while the problems return. After several such attempts, the patient’s condition becomes serious and they decide to visit the hospital. Unfortunately, by this time the cancer has reached a late stage and treatment becomes difficult.

The patient may have to undergo major surgery, but they cannot afford the cost. If they manage to afford surgery, they cannot afford radiotherapy, which is expensive. It is sad that for many patients who need treatment with a complex radiotherapy technique, we have to shift to a simple technique just because of the cost. For example, one patient with head-and-neck cancer was a candidate for volumetric-modulated arc therapy (VMAT), but we treated him using cobalt-60 radiotherapy because it is relatively inexpensive.

During this period, I created treatment plans for about 230 patients. Most of these were cobalt-60, 2D and 3D conformal radiotherapy (CRT) plans for head-and-neck, cervix and breast cancers. In addition, I created a few intensity-modulated radiotherapy (IMRT) and VMAT treatment plans for prostate and oesophagus cancers, and brachytherapy plans for cervical cancer. I remember that I created more plans for palliative than curative cases.

In addition to the treatment planning, I was involved in quality assurance and quality control in the department. Every day, I’d commute to the hospital early in the morning, do the daily quality assurance of the linacs, and make the CT simulator and brachytherapy machine ready for treatment and simulation. At our department, frequent problems with the machines arise during treatment delivery. I was engaged in troubleshooting of the linacs and CT simulator with senior physicists in the department.

The impact of COVID-19

The first COVID-19 patient in Nepal was identified in early February. He had flown from Wuhan in China to Nepal, and he soon recovered. Then the government declared us a corona-free state, while cases were growing in China, Italy and then the US. However, in March a girl who fled from Qatar to Nepal was diagnosed with COVID-19, and cases grew over time. The government then imposed a lockdown to prevent transmission to the community. This harshly affected all patients, including cancer patients.

As we did not have personal protective equipment (PPE) in our department, the hospital initially stopped treatments for three days. Treatments soon resumed, but some patients had already left the hospital, as there was uncertainty about when treatments would restart and difficulties with accommodation. Those patients who were living in the hospital wards and in the hospital periphery received the remaining fractions of their radiation treatments. But some patients had already travelled more than 600 km to their homes.

Under normal conditions, the department used to treat 180–200 patients per day. But with lockdown, the number of patients declined rapidly, to 30–40 per day. This happened first because it was difficult for patients to travel to the hospital – the lockdown meant there was no public transport and travel by ambulance was not affordable. Second, hospital personnel were afraid to touch or take care of patients without having PPE. During the first period of prolonged lockdown, around 30 patients who were having radiotherapy died at their homes.

After four months of lockdown, the hospital resumed normal activity and our department started to treat normally again. Patients who survived the lockdown came back to the hospital for their remaining treatments. For us, it was difficult to decide whether to treat patients with plans created four months earlier. Some patients had re-simulation after changes in their diagnosis and plans were recreated for them. For those who had already received some radiation fractions, the gap was calculated and dose was managed accordingly.
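
For readers unfamiliar with how such treatment gaps are handled: one standard bookkeeping tool, offered here only as an illustration (the article does not say which method the department used), is the biologically effective dose,

BED = n d (1 + d / (α/β)),

where n is the number of fractions, d the dose per fraction and α/β a tissue-specific parameter. The remaining fractions can then be rescheduled or adjusted so that the total BED of the interrupted course stays as close as possible to the original prescription.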

Following a spike in COVID-19-infected patients, the government imposed a second lockdown from the middle of August. At the time, some hospital personnel were also infected, though fortunately none in our department. Treatment was stopped, again for some days, with the same problems – it was a high risk for us to treat patients without proper PPE and testing. We demanded mandatory PCR tests for patients and their visitors to keep ourselves safe. Then the treatments resumed, but with fewer patients than under normal conditions.

A patient from Bardiya, a district of West Nepal, had started treatment in February. His head-and-neck cancer was being treated by 3D CRT. With three fractions remaining, his treatment was interrupted by the pandemic, and he has not been able to resume it since. He contacted me recently saying that he had difficulty in breathing. I suggested that he visit the department soon for follow-up. But in the meantime, the government had imposed the second lockdown. I am wondering how he will manage to travel 600 km. The cancer patients in Nepal have been affected miserably by this COVID-19 pandemic.

Time management

This pandemic has adversely affected everyone’s lives. To keep myself physically active, I do regular yoga in the morning, wearing a mask, and I walk in the evening, maintaining physical distance.

In addition to the usual departmental clinical tasks, I used to talk with seniors, physicians and technicians about treatments, innovations, politics and science-economics. During lockdown, I followed more than 80 national and international webinars. In addition, I composed articles in my native language about nuclear laws in Nepal, radiation protection, cancer radiotherapy, managing errors in radiation therapy, the importance of radiation dosimetry in cancer care, and the increasing needs of the public cancer hospital in Nepal. I also prepared a proposal to establish a cancer hospital in my region, where there is no cancer centre and people have to travel at least 16 hours to the nearest cancer hospital.

To engage myself, I have taken 25 online/remote training courses during this pandemic, covering themes including radiation protection, nuclear safety, radiotherapy safety, image-guided radiation therapy and stereotactic radiosurgery. More importantly, to stay updated, keep myself safe from COVID-19 infection and answer people’s questions, I took 10 online training courses from the World Health Organization. With this training, I gained confidence in how to deal with COVID-19.

Finally, I attended the 2020 AAPM|COMP Virtual Meeting. It was possible to attend this conference as a friend of mine from the US paid the registration fee for me. Because of the time difference between Nepal and the US, I stayed up the whole night to follow the presentations.

Being a fresher, I have never felt nervous in dealing with radiation treatment during the pandemic, despite the risks. We continue to treat cancer patients with limited protective resources from COVID-19. But because of the pandemic, I have had plenty of time to learn and implement ideas and techniques that I learnt while abroad. I personally took this pandemic as an opportunity – keeping myself busy by learning and teaching others remotely. Nobody knows what will happen tomorrow, so keeping yourself ready to face any unpredicted situation is wise.
