The latest move marks the closest point to midnight since the clock was established in 1947, when it was set at 23:53. The board says this is largely due to the war in Ukraine, which is now entering its second year.
“Russia’s thinly veiled threats to use nuclear weapons remind the world that escalation of the conflict – by accident, intention or miscalculation – is a terrible risk,” the group writes. “The possibility that the conflict could spin out of anyone’s control remains high.”
The board also says that the war is undermining efforts to combat climate change, given that countries now or previously dependent on Russian oil and gas have diversified their supplies, leading to increased use of natural gas.
Other factors in the decision include the “breakdown of global norms and institutions needed to mitigate risks associated with advancing technologies and biological threats such as COVID-19”.
When I think of the General Electric Company, more commonly known as GE, I think of the foundations of modern electronics; of the intellectual property war waged between the giants of electricity, Nikola Tesla and Thomas Edison; and of the establishment that went on to electrify the US. GE is the company of jet engines, the X-ray machine and the light bulb, but that is just the beginning, as William Cohan explains in his detailed history of GE, Power Failure: the Rise and Fall of General Electric.
Cohan, a financial writer and former GE employee, has given himself no easy task. Power Failure is a catalogue of GE’s perilous rise and its precipitous fall, and the story involves a vast cast of characters, details of deals both dodgy and doomed, and no small amount of personal drama. With clarity and precision, Cohan takes us from GE’s official birth in 1892 – a merger of Edison General Electric and rival electric company Thomson-Houston – to its collapse and fragmentation nearly 130 years later in 2021.
Formerly one of the most powerful companies in the world, GE has been drawn and quartered, divesting from much of its core business and getting rid of its capital division. In 2022, after years of severe underperformance, the company split into three: GE Aerospace, GE HealthCare and GE Vernova.
Power Failure begins with John Godfrey Saxe’s poem The Blind Men and the Elephant, chosen by Cohan to outline his central message. There will be many different men in this tale, we are told, and though a few of them might have got some things right, in the end, they will all have been wrong. At its heart, Power Failure is an examination of the cults of personality that were allowed to form around GE’s chief executives. The company became known not only as a hotbed for innovation and business, but also as a nursery for the greatest executives in American business, one in which a “baronial” environment was cultivated that ultimately contributed to the company’s downfall.
Cohan looks at the lives, psychology and economic decisions of each member of this powerful group, recording in great detail not just their biggest deals and decisions, but also their minute interactions with the management board, investors and even spouses. He focuses particularly on two of them: Jack Welch (chair and chief executive 1981–2001) and Jeff Immelt (chief executive 2001–2017). In examining their lives and circumstances, Cohan positions Power Failure as a “whodunit”, where the murder victim is none other than GE itself, and these two men, he argues, are the main suspects.
Power Failure also presents the tussle between good innovation and good business. From a scientific point of view, one of the things I found most interesting about the book is its discussion of how business can predict and influence trends in science and society. One example Cohan gives is GE’s first president, Charles Coffin, who predicted the rural electrification of the US. Cohan presents GE’s early success as a showcase of American capitalism, reiterating how a company can drive technological growth and the development of infrastructure through competition, when the system functions as intended.
Detailed but dismissive
When I picked up Power Failure, I was expecting it to focus on science and technology, but while this is not the case, Cohan’s writing is very readable. In what might otherwise be a dry and technical narrative, he explains things clearly without being patronizing. At times, Power Failure has the intrigue and rollicking mystery of a thriller, while at others it delves deeper into stocks, shares and corporate governance. Cohan is also thorough, drawing on his own years at GE in the 1980s to present a rich account of the company’s operations during the 20th century.
Part of Cohan’s skill is getting out of the way of his own narrative. He presents the important facts in a linear manner, allowing snippets of remembered conversations, anecdotes or phrases from news reports to heighten the drama. But at over 800 pages, the level of detail and sheer number of anecdotes cause the pace to languish. I found myself wondering – at around the 350-page mark – why I was learning about the dining habits of a relatively minor player in GE’s history, only for them to disappear from the narrative two pages later.
One noteworthy sticking point for me is Power Failure’s lack of social commentary. Cohan presents some of the unsavoury attitudes of men working in finance throughout the 1900s – such as lavish parties, drug abuse, fatphobia and womanizing – but hastens to remind the reader that “There was no #MeToo movement” back in those days and that we should not judge these actions by today’s standards. Perhaps the choice to mention these behaviours at all could be seen as a sort of condemnation, but without pausing to reflect on how rampant and endemic these attitudes were, and what kind of culture that left in place at GE, it feels as though we’re being told to let the perpetrators off the hook. It strikes a rather dismissive tone, especially in a book that lingers so much on personal details.
Power Failure describes itself as a “cautionary tale about hype, hubris, blind ambition, and the limits of believing…a flawed corporate mythology”. Today, with tales such as Elizabeth Holmes’ Theranos scam, Anna Delvey’s fraudulent foundation or Billy McFarland’s fake Fyre Festival, we see the book’s themes echoed throughout the world of venture capitalism. Unquestioning belief and trust in an individual can lead to financial collapse that leaves shareholders and ordinary employees alike struggling. Cohan does not pull his punches when he talks about self-delusion in Power Failure but, make no mistake, he is honest about the real victors. “As usual,” he writes, “the moneymen won out.”
Gamma band brain rhythms, especially those at 40 Hz, have been associated with large-scale brain network activity, working memory, sensory processing, spatial navigation, attention and more. Research has also shown altered 40 Hz rhythms in patients with Alzheimer’s disease, epilepsy and schizophrenia, says Li-Huei Tsai, a professor at MIT and director of the Picower Institute.
In the late 2000s, a collaboration including Tsai’s lab demonstrated a way of stimulating greater 40 Hz rhythm power in the mouse brain. Tsai’s research group then posited that they could harness 40 Hz stimulation to impact the course of Alzheimer’s disease.
Their initial experiments were successful – 40 Hz stimulation improved gamma rhythm power and synchrony, and reduced amyloid and tau levels (hallmark Alzheimer’s proteins) – but the experiments had used an invasive technology, optogenetics, to fuel 40 Hz power and synchrony.
“MIT colleague and collaborator Emery N Brown suggested that for us to advance a therapy, we would need to find a less invasive method of increasing 40 Hz rhythms,” Tsai says. “Our team tried sensory stimulation in mice, and it worked.”
Subsequent experiments using 40 Hz light and sound exposure not only reduced Alzheimer’s-related pathology but also preserved neurons, synaptic connections, and learning and memory in mice.
Sensory stimulation is based on the concept of neural entrainment, the process by which neural activity phase locks to sensory rhythms and improves various aspects of cognitive processing. The researchers’ first 40 Hz sensory stimulation studies in mice were published in the mid- to late-2010s. Encouraged by the results of that work, they moved on to early-stage clinical studies that tested the safety, feasibility and efficacy of 40 Hz sensory stimulation in humans.
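For readers curious what a 40 Hz sensory stimulus looks like in practice, the sketch below generates a 40 Hz amplitude-modulated tone and a matching 40 Hz light-flicker pattern in Python. It is purely illustrative: the sample rate, carrier frequency and waveform shapes are assumptions made for this example, not the specifications of the device used in the studies.

```python
import numpy as np

# Illustrative parameters only; these are assumptions for this sketch,
# not the specifications of the device used in the clinical studies.
fs = 44_100           # sample rate in Hz
duration = 1.0        # seconds of stimulus to generate
gamma_freq = 40.0     # target entrainment frequency in Hz
carrier_freq = 1_000  # audible carrier tone in Hz

t = np.arange(int(fs * duration)) / fs

# 40 Hz amplitude-modulated tone: a 1 kHz carrier whose loudness
# rises and falls 40 times per second
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * gamma_freq * t))
audio = envelope * np.sin(2 * np.pi * carrier_freq * t)

# 40 Hz visual flicker: a square wave that toggles a light panel on and off
flicker = (np.sin(2 * np.pi * gamma_freq * t) > 0).astype(float)

print(f"{int(flicker.sum())} of {len(t)} samples have the panel on")
print(f"audio peak amplitude: {np.abs(audio).max():.2f}")
```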
40 Hz sensory stimulation enters clinical trials
The researchers’ latest work, published in PLOS ONE, shares results from the group’s Phase I and IIA clinical studies, which were led by Diane Chan, a neurologist at Massachusetts General Hospital (MGH) and postdoctoral clinical fellow in Tsai’s lab.
Study participants were exposed to 40 Hz stimulation for one hour a day for at least three months using a home-based light panel synchronized with a speaker. Electroencephalogram (EEG) electrodes measured 40 Hz rhythm and synchrony after exposure. The Phase IIA study (which included 15 people with early-stage Alzheimer’s disease) also incorporated follow-up visits, MRI scans of brain volume, cognitive testing and sleep monitoring. Treatment and control groups in the Phase IIA study were matched by age, gender, APOE status and cognitive scores.
Participants reported no serious adverse effects from the 40 Hz stimulation and were over 90% compliant in their use of the home-based equipment. EEG scalp electrode measurements showed significant increases in 40 Hz rhythm power at frontal and occipital sites among cognitively normal participants and volunteers with mild Alzheimer’s disease. The eight Phase IIA participants who received the treatment did not experience significant reductions in hippocampal volume or increases in ventricle volume, while controls did. Treated patients also exhibited better connectivity across brain regions involved in cognitive and visual processing networks.
Tsai says the results from these early human trials should be interpreted with cautious optimism. After three months, neither the treatment nor the control group showed any differences on most cognitive tests (the treatment group improved only on associating names and faces), but the Phase I and Phase IIA studies were conducted in small cohorts and with limited follow-up due to the COVID-19 pandemic. The group’s studies in mice had demonstrated that for cognitive benefits to be long-lasting, 40 Hz sensory stimulation should be delivered chronically (consistently over a long period of time).
“[Our] results are not sufficient evidence of efficacy, but we believe they clearly support proceeding with more extensive study of 40 Hz sensory stimulation as a potential non-invasive therapeutic for Alzheimer’s disease,” Tsai says in an MIT press release.
Cognito Therapeutics, a start-up company founded by Tsai and Ed Boyden, a neurotechnology professor at MIT, is now moving forward with Phase III clinical trials of 40 Hz sensory stimulation.
In another line of research, the Picower Institute group and MGH are launching a study to test whether 40 Hz sensory stimulation might be an effective preventative measure in people at risk for developing Alzheimer’s disease. Another study will test for benefits in individuals with Down syndrome. They are also planning to investigate the use of 40 Hz sensory stimulation in people with Parkinson’s disease and are performing ongoing cell culture experiments to better understand the cellular and molecular basis of 40 Hz stimulation effects.
Tsai says that this research is a testament to curiosity-driven science and she’s encouraged to see other researchers testing non-invasive 40 Hz stimulation as a potential therapy for Alzheimer’s disease and publishing the results of their own studies.
“Our original experiments in 2009 enhancing 40 Hz rhythms were purely curiosity-driven and not conducted with a clinical endpoint in mind,” Tsai explains. “The story of all that has transpired in this research programme since then might turn out to illustrate, yet again, that basic, curiosity-driven scientific research can produce important, societally beneficial, practical results.”
Safety in numbers: Photographed after the Soviet Union exploded its first atomic (fission) bomb in 1949, Leo Szilard believed that peace could only be maintained if both sides had equal numbers of such hugely destructive weapons. (Courtesy: Argonne National Laboratory/AIP Emilio Segrè Visual Archives)
One day in September 1933, Leo Szilard was walking along Southampton Row in London, musing about an article he had just read in The Times. It had reported a speech given by Ernest Rutherford, who had rejected the idea of using atomic energy for practical purposes. Anyone who was looking for a source of power from the transformation of atoms, Rutherford had famously said, was talking “moonshine”.
As he waited at a set of traffic lights at Russell Square, a terrible thought suddenly struck Szilard. If a chemical element were to be bombarded with neutrons, a nucleus could absorb a neutron, split into smaller parts and emit two neutrons in the process. Those two neutrons could divide two further nuclei, releasing four neutrons. When the lights changed and Szilard stepped into the road, the horrific consequences became apparent.
Szilard saw that if you have enough of the element, you could create a sustained nuclear chain reaction that could release vast amounts of energy. With such a “critical mass” as we now call it, the reaction would lead to a nuclear explosion. As a physicist who was always aware of the impact of scientific research, Szilard realized to his horror that a path lay open to a new generation of incredibly powerful bombs.
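To see why Szilard was so alarmed, it helps to put the doubling into numbers. The following is a standard back-of-the-envelope formalization, not something drawn from Szilard’s own notes: if each fission consumes one neutron and releases two, the neutron population grows geometrically with each generation, and the chain sustains itself once enough material is assembled.

```latex
% Idealized neutron bookkeeping: each fission consumes one neutron and
% releases two, so after n generations
N_n = 2^{\,n}, \qquad N_{80} = 2^{80} \approx 1.2\times 10^{24}.
% More generally, the chain is self-sustaining when the effective
% multiplication factor k (neutrons per fission that go on to cause
% another fission) satisfies k \ge 1, which is what assembling a
% critical mass guarantees.
```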
Working at the time as a medical physicist at St Bartholomew’s Hospital in London, Szilard had various thoughts about which element could be used for such a device. Beryllium was one idea; iodine another. However, a lack of research funds prevented him from carrying out any systematic search. Instead, Szilard filed for – and was awarded – a patent for the neutron-induced nuclear chain reaction, which he assigned to the British Admiralty in 1934 to try to keep the notion of an “atomic bomb” out of the public eye.
Eventually, the nuclear chain reaction was discovered in 1939 by Frédéric Joliot-Curie and colleagues in Paris, and by two groups at Columbia University in New York. One of these was led by Enrico Fermi and the other by Walter Zinn and Szilard himself, who had moved to the US in 1938. As Szilard realized, the neutrons released when uranium nuclei break apart through fission could trigger the self-sustaining chain reactions needed for an atomic bomb.
Such weapons were now a real possibility and, with war in Europe looming, Szilard went on to play a key role in calling for their development. In fact, he later joined the Manhattan Project, which saw the Allies build the atomic bombs that they dropped on Japan in 1945. And yet, despite his seeming pro-nuclear stance, Szilard’s attitude to these weapons – as it was to many matters – was far more subtle than one might think.
World-wide awareness
Born into a Jewish family in Budapest on 11 February 1898, Szilard was a complex character who often foresaw global political developments long before professional politicians ever did. He was someone who would consider the long-term implications of science and would analyse the links between scientific discoveries and world events. But, unlike many physicists, Szilard actively sought to influence the direction of those events.
After the First World War, sickened by the virulently antisemitic atmosphere in his native Hungary, he emigrated to Germany. There Szilard studied physics in Berlin, where he got to know Albert Einstein and other top physicists, carrying out pioneering work linking thermodynamics with information theory. But when Adolf Hitler and the Nazis came to power in 1933, Szilard realized that life would become dangerous for a Jew such as himself.
Although, for expediency, he had converted to Christianity, Szilard knew he had to get out of Germany, moving to London in 1933. As it turned out, Szilard was later glad he did not start his search for the nuclear chain reaction while in Britain. Had he done so, he knew his work might have led to Germany developing the atomic bomb before the UK or US.
Ground-breaking thought: It was where Southampton Row meets Russell Square in central London, near the original Imperial Hotel (left), that Szilard suddenly realized in September 1933 that a nuclear chain reaction could liberate vast amounts of energy, with potentially huge consequences for the world – both good and bad. (Courtesy: postcard from 1915, public domain, via Wikimedia Commons (left); Istvan Hargittai (right))
To alert the US authorities that the Germans might be working on such a weapon, Szilard persuaded Einstein – who was then at the Institute for Advanced Study in Princeton – to write to President Franklin Roosevelt. His letter, dated 2 August 1939, eventually led to the creation of the Manhattan Project. Aware of the unprecedented destructive power of nuclear weapons, Szilard wanted the world to know exactly how dangerous these devices could be.
Indeed, as the Second World War rumbled on, he began to realize that atomic bombs had to be deployed. Despite his opposition to these weapons, Szilard’s view was that if people saw how much destruction they would cause, the world might stop developing such devices. He even thought that a pre-emptive war might be needed to shock the world and prevent the proliferation of nuclear weapons.
But he also knew that the most important requirement for any nation wanting to build an atomic bomb was to have access to uranium itself. On 14 January 1944, Szilard therefore wrote to Vannevar Bush – the head of the US Office of Scientific Research and Development – calling for all deposits of uranium to be rigidly controlled, if necessary by force.
“It will hardly be possible to get political action along that line,” he wrote, “unless high-efficiency atomic bombs have actually been used in this war and the fact of their destructive power has deeply penetrated the mind of the public.”
Open to change
Szilard was not, however, someone who would hold on rigidly to pre-existing beliefs. In fact, after Nazi Germany surrendered in May 1945, he began to wonder if atomic weapons should be deployed at all. Szilard organized a petition by 70 prominent scientists urging President Truman not to drop an atomic bomb on Japan. Those efforts proved unsuccessful – the US bombed Hiroshima and Nagasaki on 6 and 9 August – but (if nothing else) Szilard found it important to have the opposition to the bomb recorded.
And yet despite his new aversion to nuclear weapons, Szilard saw a potentially huge peaceful use of nuclear power. After the Second World War, he even started to believe that nuclear explosions could be put to positive effect. It was a topic he discussed with an illustrious group of intellectuals at the New York home of Laura Polanyi (1882–1957), who – like Szilard – was a Jewish émigré from Hungary.
At one of these events, Szilard spoke, for example, about the seemingly crazy possibility of using nuclear explosions to make the rivers in northern Siberia and northern Canada flow backwards. Rather than travelling in a northerly direction out into the Arctic Sea, the water would flow south, irrigating the huge, inhospitable wastelands of central Asia and central Canada. The climate would be changed, allowing everything from palm trees to dates to grow in these previously barren regions.
Social benefits: Szilard’s views on the peaceful applications of nuclear power had begun to evolve when he held discussions with groups of intellectuals, like here at Laura Polanyi’s home in Manhattan. (Courtesy: Hungarian National Museum, Budapest)
Szilard’s views on the matter only came to light many years later when the literature historian Erzsebet Vezer spoke to the Hungarian poet, writer and translator Gyorgy Faludy in May 1982. Faludy, who had met Szilard after the Second World War, was favourably impressed by anything nuclear. Having served in the US Army, he had been due to take part in an invasion of the Japanese Islands. His life may have been saved because the invasion was called off after America bombed Japan, ending the war sooner than expected.
Not everyone at that meeting of intellectuals in Polanyi’s house was impressed by Szilard’s ideas, however. One notable opponent was the Hungarian–American social scientist and historian Oszkar Jaszi (1875–1957). He warned that such explosions could cause sea levels to rise by 20 metres, flooding not just coastal cities like New York but also those further inland, such as Milan. His environmental foresight is to be applauded – more so given that we now know that methane and other harmful gases can be released when permafrost regions melt.
Jaszi felt that nuclear weapons had made the world an intolerable and uncertain place. If it could be blown to pieces at any moment, why would anyone bother to care for our planet or preserve it for our descendants? We don’t know if Jaszi’s warnings influenced Szilard’s change of heart over nuclear explosions, but he certainly came to realize they had huge environmental and health consequences, however peaceful their original purpose might have been.
What is also interesting about Szilard’s views on the peaceful use of atomic explosions is that they came almost a decade before similar ideas were championed by another émigré Hungarian physicist – Edward Teller. Having masterminded America’s development of the hydrogen (fusion) bomb – a weapon even more powerful than the atomic bomb – Teller had been put in charge of Project Plowshare. It was set up in 1957 by the US Atomic Energy Commission to see if such devices could be used to shift vast quantities of Earth to carve out, for example, new harbours or canals. Szilard was not involved in Teller’s plans, having lost interest in the idea by this stage, which is perhaps just as well given the sheer lunacy of doing civil engineering with hydrogen bombs.
To arm is to disarm
One final example of how Szilard’s views often evolved concerns the hydrogen bomb itself. Given that he was by nature a pacifist, one might think that Szilard would have been against the development of such a device. But then, on 29 August 1949, the Soviet Union exploded its first atomic bomb, prompting Szilard to immediately warn of a potential race for hydrogen bombs. He argued that if such a race were to start, America should not be left behind and must therefore start work on an equivalent device.
Szilard, however, was extremely worried about whether the US had the ability or motivation to build one. American scientists, he felt, had lost trust in the US government since the Second World War, especially as it had done the very same things for which it had previously condemned Germany, such as indiscriminately bombing civilian targets.
Change of heart: Although atomic bombs had devastated Japan at the end of the Second World War, Leo Szilard supported their peaceful use – even toying with the idea of nuclear explosions being used to reverse the flow of rivers in northern Siberia so that the water would return inland and irrigate otherwise inhospitable regions, opening them up for crop-growing. He later abandoned the notion. (Courtesy: iStock/Pro-syanov)
Despite this weakened trust, even the hydrogen bomb’s harshest critics – such as the theorist Hans Bethe – returned to Los Alamos to work on it once President Truman had given it the green light in January 1950. However, Szilard noted, the US would not have succeeded had it not been for Teller, who carried on working alone on such a device even when others were against it. The fact that no-one else was involved put the US in a dangerous position – and Szilard decided to warn the White House of his concerns.
But the official he spoke to failed to grasp the significance of what Szilard told him. Szilard was also shocked to be told not to disclose the name of the person (Teller) who was still working on the bomb. There was so much anti-Communist fervour in the US at the time that should the Russians become aware of Teller’s identity, the official warned, they might paint him as a Communist to such an extent that even President Truman would be powerless to keep Teller in his job. The US, in other words, might lose the very person who could build them a bomb.
We know about Szilard’s views on the hydrogen bomb thanks to a speech he later gave for Brandeis University in Los Angeles in December 1954. His wife Gertrud Weiss gave a copy of his speech to the Hungarian-born Swedish immunologist George Klein and it was later included by the Hungarian physicist George Marx in Leo Szilard Centenary Volume (Eötvös Physical Society 1998). But we also know of Szilard’s support for the hydrogen bomb thanks to a conversation I had in 2004 with the geneticist Matthew Meselson, who had chaperoned Szilard during his 1954 visit to Los Angeles. A record of the conversation appears in a book I edited with Magdolna Hargittai entitled Candid Science VI: More Conversations with Famous Scientists (Imperial College Press 2006).
Szilard’s decision to back America’s development of the hydrogen bomb did not mean he approved of the arms race. He merely wanted the US to start work on such a weapon because he feared the Soviet Union was probably developing one too – as indeed it was, testing its first hydrogen bomb in August 1953. As Szilard made clear when speaking at the Pugwash Conferences on Science and World Affairs in the late 1950s, the world had, perversely, become a more geopolitically stable place now that both sides were armed to the hilt.
He once even suggested enveloping nuclear bombs with a layer of cobalt, which would enormously enhance the radioactive fallout from the bomb. Just as with fission bombs, Szilard felt that the world would be a safer place if we developed hydrogen bombs that are as terrible as possible because this would deter anyone from using them. He, in other words, saw the advantage of “mutually assured destruction” in maintaining peace between the Soviet Union and the US.
Szilard’s attitude reminds me of a remark once made by Alfred Nobel – the founder of the Nobel prizes – that the chemist Linus Pauling cited after being awarded the Nobel Peace Prize in 1963. “The day when two army corps can annihilate one another in one second,” Nobel had said, “all civilized nations, it is to be hoped, will recoil from war and discharge their troops.” Szilard, just like Nobel, realized the power of deterrence in making the world a safer place.
A quasicrystal that was likely formed by a strong electrical discharge through a sand dune has been found by researchers based in the US and Italy. The team, led by Paul Steinhardt at Princeton University, hopes that its discovery could lead to the development of new techniques for creating artificial quasicrystals and help scientists find other naturally occurring samples.
Quasicrystals are solid materials with atomic structures that have long-range order, but lack the translational symmetry found in regular crystals. Instead, they exhibit rotational symmetry alone, and this curious arrangement gives quasicrystals a range of exotic mechanical, electrical, and optical properties. Once thought to be impossible, quasicrystals were first identified in 1982 and since then several different techniques for synthesizing these materials have been developed – including vapour deposition and the slow quenching of liquid states.
In nature, however, the conditions required to generate quasicrystals are exceptionally rare and the first naturally occurring sample was identified by Steinhardt and colleagues in 2009. What followed was an expedition to Siberia led by Steinhardt, seeking the source of that sample and confirming that it was part of a meteorite.
“Fossilized lightning”
Once it was established that quasicrystals do exist in nature, the race was on to find new examples. Now, Steinhardt and colleagues have discovered a new type of quasicrystal within a sample of fulgurite. Dubbed “fossilized lightning”, fulgurites are tubes of fused material created when a large electrical current travels through sand. Their sample comes from the Sand Hills of north-central Nebraska and was discovered close to a downed power line, which contributed traces of metal to the sample.
With the chemical composition Mn72.3Si15.6Cr9.7Al1.8Ni0.6, the quasicrystal was in a millimetre-sized grain trapped inside the fulgurite. There, the quasicrystal coexisted with a more conventional cubic lattice. The quasicrystal has equally-spaced atomic layers, each with a 12-fold rotational symmetry – something that is impossible in ordinary crystals with translational symmetry.
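Why 12-fold symmetry cannot occur in an ordinary crystal follows from the crystallographic restriction theorem; the short argument below is standard textbook material rather than anything from the paper. In a lattice with translational symmetry, any rotation symmetry can be written as an integer matrix in a lattice basis, so its trace must be an integer, which limits the allowed rotation axes.

```latex
% Crystallographic restriction (standard textbook argument): a rotation
% R(\theta) that maps a lattice to itself has an integer matrix
% representation, so its trace is an integer.
\operatorname{Tr} R(\theta) = 2\cos\theta \in \{-2,-1,0,1,2\}
\quad\Rightarrow\quad n = \frac{2\pi}{\theta} \in \{1,2,3,4,6\}.
% A 12-fold axis would require 2\cos 30^\circ = \sqrt{3}, which is not
% an integer, so such order can only appear where translational
% symmetry is absent, i.e. in a quasicrystal.
```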
By studying the sample, Steinhardt and colleagues could piece together clues about its formation. They believe that the quasicrystal likely formed during a strong electrical discharge through sand. This could have been the result of the downed power line, a lightning strike, or a combination of both. Regardless of its source, the discharge would have generated extreme temperatures greater than 1710 °C. This, they say, would have created conditions necessary for a quasicrystal to form in the region between traces of aluminium alloy from the power line and the fused silicate glass from the sand.
Steinhardt’s team hopes that its discovery could lead to new techniques for quasicrystal synthesis through controlled electrical discharges in the lab. This could enable researchers to engineer exotic new properties and may even help them to better identify places where natural quasicrystals can be found, both on Earth and in space.
Paul Steinhardt describes his journey to Siberia in search of quasicrystals in his book The Second Kind of Impossible: the Extraordinary Quest for a New Form of Matter, which has been reviewed in Physics World.
Scientists and engineers will soon be gathering in San Francisco for Photonics West‘s unique combination of scientific conferences, industry symposia, and world-class technical exhibitions. The week-long event kicks off on Saturday 28 January with the BiOS conference and exhibition, which focuses on biophotonics, imaging, and biomedical optics, with its popular Hot Topics session highlighting some of the most exciting research trends.
The main Photonics West event runs throughout the week, with conferences on lasers and optics, a dedicated industry programme, and a technical exhibition featuring more than 1100 companies from all over the world. Plenary sessions for the OPTO and LASE conferences include presentations on high-performance electronic–photonic interfaces, fully integrated photonic systems, and the recent breakthrough in laser fusion at the National Ignition Facility.
This year’s programme also sees an expanded remit for Quantum West, which features two parallel conference sessions as well as a two-day industry forum focusing on the market developments needed to build a commercial quantum ecosystem. A plenary session features an overview of quantum imaging by Miles Padgett of Glasgow University, as well as a perspective on the opportunities for quantum technology by Catherine Foley, Australia’s Chief Scientist.
High-profile industry events during the week include the Start-Up Challenge and 2023 Prism Awards, as well as the co-located AR | VR | MR conference on hardware solutions for augmented and virtual reality. Meanwhile, the exhibition floor will be open from Tuesday to Thursday, allowing delegates to connect with suppliers and learn about the latest innovations in components, instruments, and systems. Some of the latest advances are highlighted below.
Quantum system enables ultra-secure communications
QTI (Quantum Telecommunications Italy), an Italian start-up founded in 2020 to develop industrial-grade systems for secure quantum networks, will be showcasing its Quell-X system for quantum key distribution. Quell-X has been designed to distribute quantum keys for ultra-secure communications, enabling applications such as crypto-key distribution, data-centre security, and medical data protection for civil and governmental use.
Quell-X is versatile enough to be used in any network configuration, ranging from point-to-point links through to trusted-node configurations and ring or star network topologies. The system can be fully integrated into existing telecoms networks, and is compliant with all standards for key management.
Robust and reliable: The Quell-X system from QTI is secure against classical and quantum attack. (Courtesy: QTI)
The Quell-X family comprises three main versions. The core Quell-X product provides reliable and high-performance quantum key generation, includes the standardized key management system, and is fully compatible with third-party encryptor units.
Quell-XC, meanwhile, has been developed in partnership with Telsy, a specialist in cybersecurity and cryptography that made a major investment in QTI in 2021. Quell-XC is natively ready for integration with Telsy’s high-speed encryptors, which offer a throughput of up to 1 Gbps and a latency of around 1 ms. Quantum keys generated by Quell-XC are injected into the encryptors to provide an end-to-end encryption system that is fully compatible with the current telecommunication infrastructure for civil and military applications.
Finally, Quell-XR has been designed for academic and research activities. It generates raw key data for custom post-processing protocols and to facilitate development work, and offers an open platform for customization that can also be interfaced with third-party detectors.
Find out more by visiting QTI at booth number 5347.
Quantitative time-resolved fluorescence techniques such as fluorescence-lifetime imaging microscopy (FLIM) are increasingly being used in cell biology to monitor processes such as phase separation, conformational changes, and the interactions between proteins. So far, however, expert knowledge has been needed to obtain accurate and reproducible results from these tools, which has slowed down their widespread adoption.
Streamlined imaging: The Luminosa confocal microscope from PicoQuant has been designed to make it easier to capture high-quality data using time-resolved fluorescence techniques. (Courtesy: PicoQuant)
PicoQuant’s new Luminosa confocal microscope overcomes that challenge by combining state-of-the-art hardware with cutting-edge software to deliver high-quality data while also simplifying daily operation. The software includes context-based workflows to improve the reproducibility of experiments, while features such as sample-free auto-alignment and calibration of the excitation laser power make experiments more efficient. At the same time, every optomechanical component is fully accessible to enable the development of new methods.
During Photonics West, senior scientist Felix Koberling will present two use cases of the Luminosa instrument. First, he will show how the instrument can make it easier for researchers to use single-molecule fluorescence resonance energy transfer (smFRET) in their experimental studies. For example, the FRET efficiency and stoichiometry can be calculated during the experiment, corrected according to standard protocols, and displayed in real time. (Talk 12386-7, 28 January 2023 • 2:00 PM – 2:20 PM PST).
Second, Koberling will describe how Luminosa can be used to streamline FLIM experiments. The instrument’s rapidFLIM module can record images at several frames per second with high photon-count rates, which the software handles with a novel dynamic binning format to enable high-speed automated analysis of FLIM images. (Invited Talk 12384-18, 30 January 2023 • 8:20 AM – 8:40 AM PST).
For a demonstration of Luminosa, and to find out about other innovations from PicoQuant, talk to company representatives at BiOS exhibition booth number 8325 or Photonics West booth number 3325.
Fibre laser offers broadest tuning range
A new tunable laser from NKT Photonics, the SuperK CHROMATUNE, offers gap-free tuning across all wavelengths from 400 to 1000 nm. With a standard power output of 1 mW, the SuperK CHROMATUNE offers a versatile solution for applications such as spectroscopy and microscopy, fluorescence and lifetime imaging, and optical characterization.
Single solution: According to NKT Photonics, the SuperK CHROMATUNE offers the broadest tuning range of any laser on the market. (Courtesy: NKT Photonics)
The CHROMATUNE is based on a fibre-laser platform that ensures excellent stability, reliability, and a lifetime of thousands of hours, without any need for maintenance or service. No alignment or adjustments are needed to deliver the required wavelength, which can be produced at the press of a button. Meanwhile, an advanced mode provides the flexibility to change the linewidth of the laser, increase the power in different wavelength ranges, and automate wavelength sweeps and other functionalities.
By default the laser runs at quasi-continuous-wave output with megahertz repetition rates, but that can also be altered for applications such as studying lifetime phenomena. Everything is controlled by a user-friendly software interface that also provides dedicated workflows for specific applications, while a free software development kit offers even more flexibility for control or integration.
For more information about the system, visit NKT Photonics at booth number 3201 or contact the company at sales@nktphotonics.com.
MKS Instruments has introduced Newport’s ODiate optical coatings to its broad portfolio of industry-leading optical filters, with applications ranging from life sciences and medical instrumentation through to chemical and material analyses. ODiate coatings are fabricated using a next-generation platform for thin-film deposition, delivering exceptional precision, productivity, and repeatable spectral performance for Newport’s optical filters and dichroic beamsplitters.
Next generation: ODiate coatings enhance the spectral performance of Newport’s optical filters. (Courtesy: MKS Instruments)
The enhanced spectral performance offered by ODiate coatings provides better signal quality, optimized wavelength separation, and consistent filter performance for any instrumentation. Customized coatings in the ultraviolet, visible, and near-infrared enable demanding applications such as high-throughput screening, fluorescence and Raman imaging, molecular diagnostics, flow cytometry, and blood analysis. MKS uses proprietary dynamic optical monitoring and custom metrology to measure the most demanding spectral transitions.
“MKS offers a value-added product development service to our OEM customers alongside our expansive Newport component portfolio,” says Dan Lawrence, general manager of Newport’s Optical Component Business. “From initial design through metrology and testing, we collaborate closely with our partners to develop and deliver innovative and effective optical sub-system solutions. The ODiate coating platform is the next step to ensure our long-running success and continual emphasis on innovation.”
To find out more about ODiate optical filter technology, visit MKS Instruments at booth number 927 or go to www.newport.com/odiate.
Firing a laser beam into the sky can divert the path of a lightning strike, an international team of scientists has found. The researchers say that their work could lead to better lightning protection for airports and other critical infrastructures, as well as paving the way for new atmospheric applications of ultrashort lasers.
Satellite data suggest that across the world there are between 40 and 120 lightning flashes – including cloud-to-ground and cloud lightning – every second. Such electrostatic discharges between clouds and the Earth’s surface are responsible for thousands of deaths and billions of dollars’ worth of damage every year.
The most common protection against lightning strikes is the lightning rod, also known as a Franklin rod. This electrically conducting metal mast offers a preferential strike point for lightning and guides the electrical discharge safely to the ground.
But Franklin rods don’t always work perfectly, and they provide only limited coverage. The area they protect has a radius that is roughly equivalent to their height: a 10 m rod will protect an area with a 10 m radius. This means that reliable protection of large areas of infrastructure requires multiple or unfeasibly tall rods.
As an alternative, scientists have proposed that intense laser pulses could be used to guide lightning strikes. The idea, which has previously only been explored in laboratory conditions, is that the laser beam would act as a large movable rod.
The basic theory behind a laser-based lightning rod is that intense and short laser pulses are fired into the air, where they become sufficiently intense to ionize air molecules. Along these long narrow channels of ionizing laser pulses, air molecules are rapidly heated and expelled at supersonic speeds. This leaves behind long-lived channels of air with reduced density that are more electrically conductive than surrounding regions, offering an easier path for the electric discharges of lightning to travel along.
“When very high-power laser pulses are emitted into the atmosphere, filaments of very intense light form inside the beam,” explains Jean-Pierre Wolf, a physicist at the University of Geneva. “These filaments ionize the nitrogen and oxygen molecules in the air, which then release electrons that are free to move. This ionized air, called plasma, becomes an electrical conductor.”
To test this idea, Wolf and a team of researchers from Europe and the US headed to one of Europe’s lightning hotspots: Säntis mountain in north-eastern Switzerland. On the top of this 2500 m mountain is a 124 m-tall telecommunications tower that is struck by lightning around 100 times a year.
The team installed a specially developed laser near the communications tower. The size of a large car and weighing more than three tonnes, the laser emitted pulses of picosecond duration and 500 mJ energy at a rate of around a thousand pulses per second. Between July and September in 2021 the researchers operated the laser during a total of 6.3 h of thunderstorm activity occurring within 3 km of the tower.
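Those figures translate into striking powers. The short calculation below is a rough sketch: the 500 mJ pulse energy and roughly 1 kHz repetition rate come from the article, while the 1 ps pulse duration is an assumed round number, since the article says only that the pulses are of picosecond duration.

```python
# Rough arithmetic for the Saentis laser's average and peak power.
pulse_energy = 0.5      # J (500 mJ per pulse, from the article)
rep_rate = 1_000        # pulses per second (from the article)
pulse_duration = 1e-12  # s (~1 ps, assumed for illustration)

average_power = pulse_energy * rep_rate      # ~500 W delivered on average
peak_power = pulse_energy / pulse_duration   # ~5e11 W, i.e. ~0.5 TW per pulse

print(f"average power ~ {average_power:.0f} W")
print(f"peak power    ~ {peak_power:.1e} W")
```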
Over the two-month experimental period the tower was hit by at least 16 lightning flashes, four of which occurred during laser activity. All four of these upward lightning strikes were diverted by the laser. The scientists used lightning current measurements on the tower, electromagnetic field antennas and X-ray sensors to capture details of electromagnetic waves and X-ray bursts generated by the lightning discharges to confirm the location of the strikes.
The path of one of the strikes was also recorded by two high-speed cameras. The images show that the lightning strike initially followed the path of the laser for around 50 m.
“From the first lightning event using the laser, we found that the discharge could follow the beam for nearly 60 m before reaching the tower, meaning that it increased the radius of the protection surface from 120 m to 180 m,” says Wolf.
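A quick calculation shows what that extension means for coverage. This is simple geometry applied to the radii Wolf quotes, not an analysis taken from the paper.

```python
import math

# Extra area protected by the laser-extended "rod", using the radii
# quoted by Wolf (120 m without the laser, 180 m with it).
r_without = 120.0  # m
r_with = 180.0     # m

area_without = math.pi * r_without**2   # ~45,000 m^2
area_with = math.pi * r_with**2         # ~102,000 m^2

print(f"protected area grows by a factor of {area_with / area_without:.2f}")
# (180/120)**2 = 2.25, i.e. more than double the coverage
```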
Light-based logic: The optical chirality logic gate is made of a nonlinear optical material that generates an output signal that’s dependent on the chirality of the two input beams. (Courtesy: Yi Zhang/Aalto University)
Light-based optical logic gates operate much faster than their electronic counterparts and could be crucial for meeting the ever-growing demand for more efficient and ultrafast data processing and transfer. A new type of “optical chirality” logic gate developed by researchers at Aalto University works about a million times faster than existing technologies.
Like electrons and molecules, photons have an intrinsic degree of freedom known as chirality (or handedness). Optical chirality, which is defined by left-handed and right-handed circularly polarized light, shows great promise for fundamental research and applications such as quantum technologies, chiral nonlinear optics, sensing, imaging and the emerging field of “valleytronics”.
Nonlinear optical material
The new device works by using two circularly polarized light beams of different wavelengths as the logic input signals (0 or 1, according to their specific optical chirality). The researchers, led by Yi Zhang, shone these beams onto atomically thin slabs of the crystalline semiconductor material MoS2 and onto bulk silica crystals. These nonlinear optical materials can generate light at a different frequency to that of the input beams.
Zhang and colleagues observed the generation of light at a new wavelength, which serves as the logic output signal. By adjusting the chirality of the two input beams, four input combinations – corresponding to (0,0), (0,1), (1,1) and (1,0) – are possible. The output is read as logic 1 or logic 0 depending on whether or not the nonlinear optical process generates this new wavelength.
Chiral selection rules
The system works because the crystalline materials are sensitive to the chirality of the input beams and obey certain chiral selection rules (related to the MoS2 monolayer’s threefold rotational symmetry). These rules determine whether or not the nonlinear output signal is generated.
Using this approach, the researchers were able to make ultrafast (less than 100 fs operating time) all-optical XNOR, NOR, AND, XOR, OR and NAND logic gates, as well as a half-adder.
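As a toy illustration of the bookkeeping involved, and not a model of the physics, the sketch below treats the handedness of each input beam as a bit and shows how an XNOR-style gate would read out. In the real device, which input combinations produce an output is fixed by the material’s chiral selection rules; the particular truth table used here is an assumption chosen purely for illustration.

```python
# Toy bookkeeping for chirality-encoded logic: the handedness of each
# input beam (left = 0, right = 1) is a bit, and "output" means that a
# new-wavelength signal is generated. The XNOR assignment below is an
# illustrative assumption, not the published selection rules.
from itertools import product

def xnor_gate(a: int, b: int) -> int:
    """Output present (1) only when both inputs share the same handedness."""
    return int(a == b)

for a, b in product((0, 1), repeat=2):
    print(f"inputs ({a},{b}) -> output {xnor_gate(a, b)}")
```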
And that’s not all: the team also showed that a single device could contain multiple chirality logic gates operating at the same time in parallel. This is radically different to conventional optical and electrical logic devices that typically perform one logic operation per device, says Zhang. Such simultaneous parallel logical gates could be used to construct complex, multifunctional logic circuits and networks.
The chirality logic gates can also be controlled and configured electronically in an electro-optical interface. “Traditionally, the connection between electronic and optical/photonic computing has mainly been realized through slow and inefficient optical-to-electrical and electrical-to-optical conversion,” Zhang tells Physics World. “We demonstrate electrical control of the chirality logical gates, opening up an exciting prospect for the first and direct interconnection between electrical and optical computing.”
“Based on this, we hope that all-optical computing modalities can be realized in the future,” says Zhang.
The researchers, who report their work in Science Advances, now hope to improve the efficiency of their chirality logic gates and reduce their power consumption.
Wind energy could help power human missions on Mars, according to a study that used the NASA Ames Mars Global Climate Model to calculate the short-term and seasonal variability of wind power that would be generated by wind turbines on the Red Planet. Led by NASA’s Victoria Hartwick, the research team suggests that the wind could supply sufficient energy on its own or be used in conjunction with solar or nuclear power.
The success of a crewed mission to Mars would rely on many factors, including site selection. Previous studies of site viability have focused on access to physical resources, such as the availability of water or shelter, and have not necessarily accounted for the energy-generation capabilities of potential locations. While there has been a lot of research on solar and nuclear energy as Martian power sources, nuclear power harbours potential risks to humans, and current solar-power systems lack the energy storage needed to compensate for day/night (diurnal) and seasonal variations in generation. It is therefore prudent to consider an alternative source, such as wind, for stable energy production.
Less forceful, but still useful
Wind power is most efficient when atmospheres are thick, but Mars’ low atmospheric density means that wind on the planet produces significantly less force than wind on Earth. For this reason, the Martian wind had not been regarded as a viable energy resource. Hartwick and colleagues have challenged this assumption and shown that diurnal and seasonal fluctuations in solar energy could be compensated for by wind energy. Hartwick says that they “were surprised to find that, despite Mars’ thin atmosphere, winds are still strong enough to produce power across large portions of the Martian surface”.
The study suggests that wind could work in combination with other energy resources such as solar to boost power generation. This could be especially helpful during local and global dust storms, when solar power decreases and available wind power increases. Wind would also be a useful resource at night and around the winter solstice.
Combined system
The team looked at a hypothetical generation system that comprises solar panels and an Enercon E33 wind turbine. The latter is a medium-sized commercially available system that has a rotor diameter of 33 m and is rated at a power output of 330 kW on Earth. Hartwick and colleagues calculate that the turbine could operate at an average power output of about 10 kW on Mars.
The team’s calculations show that the turbine would increase the percentage of time that the power from the combined system exceeds 24 kW from 40% (solar arrays alone) to 60-90% (solar plus wind). The value 24 kW is significant because it is considered the minimum power requirement to support a six-crew mission.
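To get a feel for why a turbine rated at 330 kW on Earth delivers only around 10 kW on Mars, the sketch below applies the standard wind-power formula with rough numbers. This is a back-of-the-envelope illustration, not the NASA Ames climate-model calculation reported in the study: the densities, wind speed and power coefficient are assumptions chosen only to show how the output scales with atmospheric density.

```python
import math

# Back-of-the-envelope wind-power scaling between Earth and Mars.
rho_earth = 1.22   # kg/m^3, near-surface air density on Earth
rho_mars = 0.020   # kg/m^3, typical near-surface density on Mars (assumed)
rotor_diameter = 33.0               # m, Enercon E33 (from the article)
area = math.pi * (rotor_diameter / 2) ** 2
wind_speed = 8.0   # m/s, assumed for illustration
cp = 0.35          # power coefficient, assumed

def wind_power(rho, v):
    """Power extracted by the rotor: P = 0.5 * rho * A * v**3 * Cp."""
    return 0.5 * rho * area * v**3 * cp

p_earth = wind_power(rho_earth, wind_speed)
p_mars = wind_power(rho_mars, wind_speed)
print(f"Earth: {p_earth/1e3:.0f} kW, Mars: {p_mars/1e3:.1f} kW at the same wind speed")
# At equal wind speed the output scales linearly with density (roughly a
# factor of 60 lower on Mars), which is why the ~10 kW Martian average is
# so far below the turbine's 330 kW Earth rating.
```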
While the study shows that wind generation is possible, it would only be useful if it could be done in locations on Mars that are suitable for human habitation. Previous work considered geology, resource potential and engineering limitations to evaluate landing sites. Using these criteria, the NASA Human Landing Site Study has identified 50 potential regions of interest. This study did not consider regional energy availability beyond simple latitude and shading considerations for solar. Hartwick therefore believes that wind power could allow more regions to be considered for exploration and settlement.
More opportunities
“By utilizing wind in combination with other energy resources,” says Hartwick, “it may be possible to access some regions of the planet that were previously dismissed, for example, the Mars midlatitudes and polar regions which are scientifically interesting and are closer to important subsurface water ice reservoirs.” These sites would not be viable with solar power being the predominant energy resource.
Hartwick suggests that stability is the most important consideration for powering future crewed missions to Mars – a lot of uninterrupted power must be produced. Using a combination of wind turbines and solar arrays could allow missions to be located across a large portion of the planet.
Wind power could also revolutionize how humans obtain energy elsewhere in the solar system. Hartwick says she is “particularly interested to see the power potential on a moon like Titan, which has a very thick atmosphere but is cold”. Nonetheless, there is still interdisciplinary work to be done – especially from an aerospace and engineering standpoint – to determine operational efficiency and technical viability.
Different turbines
While the main part of the research focused on the Enercon E33, the team also looked at different sizes of turbines, ranging from microturbines used for small single-family power needs to industry-standard turbines rated at 5 MW (on Earth) and beyond. The use of such systems could vary from providing energy for surface habitats and life-support systems to maintaining scientific equipment. Another factor that must be considered is transporting wind turbines and associated materials to Mars – a process that would have to minimize the mass sent through interplanetary space. While this transport would have to include excavation equipment, there is some suggestion that Martian soil could be used as a replacement for the concrete that anchors turbines on Earth.
As more potential Martian landing sites are identified, future studies could involve high-resolution simulations with the aim of better understanding how specific topography and surface conditions affect the wind. This could change the capabilities of future space operations. Hartwick says that this “is really the gold standard when we consider the energy requirements for a potential human mission to Mars.”
Why is physics different from other sciences? While our perceptions of who does science are broadening – at least in terms of gender – our default image of a physicist remains firmly stuck in the past. Recent studies have shown that we still picture physicists as people who are more innately brilliant, more socially awkward, and less collaborative than those in other scientific disciplines.
These stereotypes are not only wrong – they can be damaging and can even put prospective physics students off entering the subject. For those already in the field, such stereotypes can make them feel more uncertain about their place in physics altogether. But where do these stereotypes come from? While the media are often blamed for caricatured depictions of physicists, and sometimes rightly so, the persistence of the brilliance stereotype may be down to us.
In one recent study, undergraduate physics students in the UK – alongside their non-physics peers – described physics as requiring more intelligence and being harder than any other science subject. Perhaps we shouldn’t be surprised given that you need such good grades to study physics at university, which embeds the notion of physics as an elite subject and an option only for the very best.
It is not all of physics, however, that is seen as elite. A study carried out in 2020 found that some physics disciplines, such as theoretical physics, were seen as more difficult – and therefore as requiring more intelligence – than other areas such as experimental physics. In the study, Master’s students linked intelligence with credibility, and so physicists were not celebrated unless they were studying theoretical physics.
This hierarchy of intelligence makes it almost impossible to gain the prestigious “genius status”. Although often described as a “bad habit” of physicists, such comparisons can affect students, who are forced to defend themselves against the view of “being not smart enough” for topics such as theoretical physics.
But although we think physicists are highly intelligent, few would describe themselves in this way. Studies have found imposter syndrome is especially prevalent among women and ethnic-minority students in physics who do not fit the typical physics genius stereotype. In trying to attain such status, it has been suggested that “passion” is the key ingredient for being recognized as a “proper physicist”.
However, the ability to devote oneself to physics in this way depends on long working hours, which not only leads to a poor work–life balance but is simply impossible for some people with a disability or caring responsibilities. So while high intelligence, or genius status, is almost synonymous with physics, it is inaccessible to most.
Praising collaboration
But what does “being smart” even mean in physics? Physicists, and in particular theoretical physicists, are seen as geniuses because much of the cognitive work they do is hidden from the view of both the public and students. To dispel the myth of the lone genius, a team of US researchers led by astronomer Mike Verostek from the University of Rochester sought to reveal these hidden cognitive processes – all of which get missed when we praise intelligence.
The team showed that theorists use a myriad of skills and processes, such as using analogies and assumptions when carrying out a task. Intelligence also means problem-solving, picking up a new skill quickly, having the curiosity to try new ideas or the resourcefulness to reassess old problems. In focusing on skills and processes, we encourage the notion that physics is something you can get better at and is not determined by some fixed level of intelligence when you are born.
In another study, the same team found that collaboration is essential for generating ideas in theoretical physics. When we talk about a physics genius, we often separate the individual from the collective efforts that may have helped in their achievements. As physicists we know that the advancement of physics can only be achieved by communicating with others, having cross-cultural awareness and being able to work in a team.
Praising individual intelligence and celebrating individuals hides this collaboration from the public and recreates the view that physics is a lone endeavour. We must therefore celebrate the interpersonal skills that are required within physics.
So, next time you discuss the Nobel prize, the newest scientific discovery, or the award for best physics undergraduate – think about what you really want to praise and what message you are sending. Are you praising a skill or an effort, a group or an individual, and is what you’re celebrating achievable for anyone with the right support?