
Laser-induced graphene for ‘edible electronics’

Graphene (a sheet of carbon just one atom thick) can be produced on a wide variety of substrates and even materials such as food, cloth and paper, using a simple multi-pass laser technique. Potential applications include flexible, wearable and even “edible electronics”.

A team led by James Tour of Rice University in the US recently developed a way to transform the top layer of a polymer film (polyimide) into 3D graphene foam using a laser beam. This laser-induced graphene (LIG) consists of microscopic, cross-linked flakes of graphene and can be used to make electronic devices, such as supercapacitors, electrocatalysts for fuel cells, radio-frequency identification (RFID) antennas and biosensors, to name a few.

The researchers have now applied their technique to a wide variety of carbon precursors, including polymers, cardboard, cork, cloth and even certain foods, such as toast, coconut shells and potato skins. Unlike the earlier method, which only required a single laser beam pass, the new technique needs multiple “defocused” passes. “This is because the first laser shot converts the substrate into amorphous carbon,” explains Tour, “and the subsequent shot or two photothermally converts this carbon into graphene as the material selectively absorbs infrared light (in the 10.6 micron range).”

Potatoes and coconuts

The wavelength of the light is all-important here, and the selective absorption of IR light by amorphous carbon is one reason why mere thermal treatments, or irradiation of carbon precursors with other wavelengths, do not produce LIG, he adds.

A defocused laser beam is one that is wider and allows each spot on a target to be lased many times in a single raster scan, he explains. “The common denominator in all the wood materials appears to be lignin, a complex organic polymer that forms rigid cell walls. Cork, coconut shells and potato skins have an even higher lignin content (25, 30 and 36% respectively) than oven-dried wood (18-30%), so it is easier to convert them into graphene. What is more, these materials can be processed under ambient conditions.”

As a proof of principle, Tour and colleagues fabricated a micro-supercapacitor on a coconut using their multiple-lase method. They compared twice-lased coconut with single-lased polyimide processed at the same laser fluence (around 5.5 J/cm²) and found that the coconut-derived material had a higher areal capacitance.

“Being able to produce graphene so easily on such a wide variety of substrates extends our hope for this carbon material,” Tour tells nanotechweb.org. “It means that graphene electronics could be made on a host of different materials, and applications such as edible electronics and sensors, and electronics on paper and boxes, could see the light of day.

“Very often, we don’t see the advantage of something until we make it available,” he adds. “Perhaps all food will one day have a tiny RFID tag that gives you information about where it’s been, how long it’s been stored, its country and city of origin and the path it took to get to your table.

“LIG tags could also be sensors that detect E. coli or other microorganisms on food. They could change colour and give you a signal that you don’t want to eat this. All that could be placed not on a separate tag on the food, but on the food itself.”

The LIG work is detailed in ACS Nano DOI: 10.1021/acsnano.7b08539.

Ozone loss may have caused mass extinction

The loss of ozone may have caused the extinction of most life on Earth many millions of years ago, scientists believe.

Californian scientists have found a new way to account for extinction and to explain mass murder on a planetary scale.

Seven out of 10 land animals perished at the end of the Permian, 252 million years ago. So did 95% of marine species. And the deadly factor at work may have been the destruction of atmospheric ozone, the protective screen in the stratosphere that filters out harmful ultraviolet light.

Jeffrey Benca of the University of California Berkeley and colleagues report in the journal Science Advances that they irradiated a series of dwarf pines with doses of ultraviolet-B radiation up to 13 times stronger than any on Earth today.

They used 60 pines of the species Pinus mugo, irradiated them for 56 days, and then spent three years examining 57,000 pollen grains produced over that period.

UV-B wavelengths are associated with mutations in DNA, the inheritance mechanism of all life on Earth. The dose chosen was the one to which creatures might have been exposed at the close of the Permian period, an episode characterised by immense volcanic eruptions that would have damaged the upper atmosphere.

Exposed to sterility

And, the researchers found, after two months’ exposure the trees survived, but at a cost: they had become sterile. Their cones shrivelled within days of emerging. Once restored to present-day, open-air conditions, the pines all recovered.

Plants underwrite all animal life: repeated bouts of forest sterility could, researchers think, have played a role in the collapse of the planet’s biosphere.

Research like this is at the heart of climate science: it is a tenet of earth sciences that the present is key to the past. So it follows that what happened in the past could be relevant to the present.

And since biologists have argued that the double punch of habitat destruction and climate change could be precipitating a sixth great extinction, there has always been intense interest in the triggers of the previous five. So far, no other bout of extinction has been on the scale that occurred at the end of the Permian.

That doesn’t mean the latest study has identified the smoking gun: it does, however, add immediacy to new concerns about the present state of the ozone layer.

Even before the first evidence that global warming had already begun, British and US scientists confirmed that human action – in the release of a suite of industrially-important gases called chlorofluorocarbons – had begun to erode the invisible shield of stratospheric ozone that has always sheltered life on Earth.

In a prompt response 30 years ago, the world’s nations banned the use of such gases. Concerted action on the other contemporary alarm, about global warming, has been more difficult to achieve.

Ozone however is not the only suspect in the search for the Permian mass murder mechanisms. Other researchers have already suggested that high atmospheric carbon dioxide levels, driven by enormous, slow volcanic eruptions, could have turned the oceans increasingly acidic.

Dependent on plants

Biologists may never arrive at clinching evidence from the scene of a crime that happened even before the first dinosaurs colonised the planet. And, since the Permian extinction took place over a 500,000-year period, there may be no single murder weapon.

Such studies, once again, illuminate the intricate dependence of all animals on plant life, and all plant life on atmospheric conditions. The research has potent lessons for those already concerned about worldwide forest loss, so far largely due to human action.

“Paleontologists have come up with various kill scenarios for mass extinctions, but plant life may not be affected by dying suddenly as much as through interrupting one part of the life cycle, such as reproduction, over a long period of time, causing the population to dwindle and potentially disappear,” said Cindy Looy, an integrative biologist at Berkeley, and a co-author.

And a third author, Ivo Duijnstee, from the same research team, said: “Jeff, who used his plant growth chambers as a time machine to test the potential of a hypothesis about what may have happened 252 million years ago, provides an excellent example illustrating how the slowly unfolding extinction on land over maybe tens or hundreds of thousands of years may have been caused by reproductive troubles at the base of the food chain.” – Climate News Network

• This report was first published by Climate News Network

Is CT or MR perfusion better for diagnosing CAD?

Recent advances in scanner technology have made the use of CT stress myocardial perfusion imaging feasible. Just how well it matches up to MR perfusion in the diagnosis of coronary artery disease (CAD) is the central question of a German/US study led by Marc Dewey from Berlin.

Senior author Dewey collaborated with colleagues from various healthcare institutions to investigate the diagnostic performance of CT perfusion and MR perfusion in patients suspected of having CAD at the US National Institutes of Health (NIH) and Charité-Universitätsmedizin Berlin in Germany. They found that CT and MR perfusion had similar accuracies in detecting CAD, and that either imaging modality was suitable for evaluating the functional relevance of coronary stenosis.

“The results of this study provide reassurance that clinicians have two viable options in assessing the functional significance of coronary artery disease,” said co-first author Marcus Chen. “The decision regarding which test to select may be institutional and clinician-dependent or vary depending upon regional reimbursement patterns.”

Comprehensive coronary assessment

To accurately diagnose and comprehensively assess coronary artery disease, it is important to evaluate the heart both anatomically and physiologically, i.e., how well it functions. The gold standard for this process has been invasive coronary angiography (ICA) combined with fractional flow reserve (FFR) measurements. But clinicians have been leaning toward different, noninvasive means of detecting obstructive CAD, not only because noninvasive imaging tests can improve clinical outcomes, but also because they save economic resources, according to the researchers (Radiology 286 461).

Coronary CT angiography is one of the most popular noninvasive methods to spot the presence of CAD due to its high diagnostic yield and capacity to shorten hospital stays. And for the functional assessment of CAD, one of the most common noninvasive techniques is myocardial perfusion imaging via SPECT or MR perfusion. With recent technological advancements to CT scanners – such as shortened scanning time, increased area of coverage, and improved temporal and spatial resolution – CT perfusion has also become a viable option.

“A convenient advantage of CT perfusion imaging is given by the fact that it can be performed immediately after CT coronary angiography in the same scan setting, and that no further imaging modality needs to be included,” said co-first author Matthias Rief. “This may also be particularly preferable for the patient, as only one appointment is necessary.”

Yet the viability of CT perfusion in clinical practice is still in question. The medical community is particularly at odds over its use due to safety concerns stemming from radiation exposure.

Though the technique only increases the radiation dose by about 50%, that dose is restricted to a much smaller volume of tissue, thus accentuating the potential harm it could cause, the authors noted. Other techniques such as MR perfusion can determine whether or not a coronary stenosis causes myocardial ischemia without exposing patients to radiation.

Comparable diagnostic accuracy

Seeking to address these concerns with CT perfusion, the researchers compared its ability to that of MR perfusion in diagnosing obstructive CAD in a substudy of the Combined Noninvasive Coronary Angiography and Myocardial Perfusion Imaging Using 320-Detector Computed Tomography (CORE320) multicentre trial.

For the prospective substudy, 92 patients each underwent four imaging exams: invasive coronary angiography with quantitative coronary analysis (QCA), SPECT, CT perfusion (Aquilion One, Canon Medical) and MR perfusion (Magnetom Avanto or Magnetom Sonata, Siemens Healthineers). The median effective radiation dose for the patients was 5.3 mSv for CT perfusion and 19.6 mSv for combined angiography and SPECT.
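For a sense of scale (a straightforward comparison of the two figures just quoted, not an analysis from the study itself): 5.3 mSv is roughly 27% of 19.6 mSv, so the CT perfusion protocol exposed patients to about a quarter of the median effective dose of the combined invasive angiography and SPECT workup.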

Overall, CT perfusion and MR perfusion displayed comparable diagnostic performance in the detection of CAD when compared with either the primary reference standard of ICA and SPECT (p = 0.11) or the secondary reference standard of ICA alone (p = 0.27).


“With a comparable diagnostic performance, CT and MR perfusion imaging seem to be equally suited for decision-making about the functional relevance of coronary artery stenosis and act as an alternative to SPECT,” said co-author Patricia Bandettini.

Though the differences in accuracy between CT perfusion and MR perfusion were not statistically significant, there were significant differences for sensitivity and specificity. CT perfusion had a higher sensitivity than MR perfusion based on the secondary reference standard (p < 0.01), but a lower specificity than MR using the primary reference standard (p < 0.01).

The higher sensitivity of CT perfusion may be a result of its enhanced spatial resolution, and its lower specificity might be tied to its susceptibility to beam-hardening artefacts, which can look like perfusion defects, he said.

Promising performance

Though the study included one of the largest cohorts of patients who underwent myocardial perfusion imaging through multiple modalities, it still had a limited patient population from the two sites, the investigators said. The study also demonstrated the wide variability in CAD detection when referring to different standards. The group reported a coronary artery stenosis with 50% or more obstruction in 64% of the patients using invasive angiography alone, but in only 39% of patients after combining ICA with SPECT.

“This [variation] shows one of the major challenges of modern cardiac and coronary imaging, whereby, depending on the reference standard, different results may occur and higher stenosis degrees do not necessarily cause more corresponding perfusion defects,” Rief said.

So far, the results suggest that the complementary and necessary information to identify culprit coronary artery lesions is available through both CT perfusion and MR perfusion imaging, he said. But whether or not performing coronary CT angiography and CT perfusion in an all-inclusive exam is sufficiently effective to warrant its elevated radiation dose will require further investigation.

To what extent clinicians can use CT perfusion to guide patient care decisions – like with FFR – is a pivotal clinical question emerging from this study, according to the authors. They recommend continuing research in the form of randomized trials to establish a more definitive answer.

  • This article was originally published on AuntMinnieEurope.com.
    © 2018 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

Cancer ‘vaccine’ eliminates tumours in mice

Researchers at Stanford University have demonstrated that injecting minute amounts of two immune-stimulating agents directly into solid tumours in mice can eliminate all traces of cancer in the animals, including distant, untreated metastases (Sci. Transl. Med. 10 eaan4488).

PET images of vehicle-treated (control) and CpG-treated mice

Immunotherapy aims to harness the immune system to combat cancer. Immune cells such as T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumour. However, as the tumour grows, it often devises ways to suppress T cell activity. Lead author Idit Sagiv-Barfi and colleagues have developed a method that reactivates cancer-specific T cells.

The approach involves injecting microgram amounts of two agents directly into a tumour site. One, a short stretch of DNA called CpG, works with other nearby immune cells to amplify expression of an activating receptor called OX40 on the surface of the T cells. The other, an antibody called anti-OX40, activates the T cells to attack the cancer cells. The researchers determined that some of these tumour-specific, activated T cells then leave the original site to find and destroy identical tumours elsewhere in the body.

The researchers believe that local application of very small amounts of these two agents could provide a rapid and relatively inexpensive cancer therapy that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.

The Stanford team tested this in situ vaccination approach on laboratory mice with tumours transplanted in two sites on their bodies. Injecting one tumour with CpG and anti-OX40 not only shrank this tumour, but caused regression of the second, untreated tumour. The treatment cured 87 of 90 mice of the cancer. Although secondary tumours recurred in three of the mice, these again regressed after a second treatment.

The researchers also examined mice genetically engineered to spontaneously develop highly invasive breast cancers in all 10 of their mammary pads. Treating the first tumour that arose often prevented the occurrence of future tumours and significantly increased the animals’ life span.

“Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumour itself. In the mice, we saw amazing, bodywide effects, including the elimination of tumours all over the animal,” said senior author Ronald Levy. “This approach bypasses the need to identify tumour-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”

One agent is already approved for use in humans; the other has been tested for human use in unrelated trials. A clinical trial of this treatment was launched in January to test its effect in patients with lymphoma. The trial is expected to recruit about 15 patients and, if successful, the treatment may prove useful for many tumour types.

Levy envisions a future in which clinicians inject the two agents into solid tumours in humans prior to surgical removal of the cancer as a way to prevent recurrence due to unidentified metastases or lingering cancer cells, or even to head off the development of future tumours that arise due to genetic mutations. “I don’t think there’s a limit to the type of tumour we could potentially treat, as long as it has been infiltrated by the immune system,” he said.

Enzyme leaps explain puzzling phenomenon

Enzymes are biomolecules that perform specialized functions, such as the assembly or degradation of small molecules. The small size of enzymes (tens of nanometres) previously limited their study to analysis of the collective properties of large numbers (thousands or more). However, recent advances in nanotechnology have enabled researchers to look at individual enzymes one-at-a-time on the nanosecond time scale.

The team, led by Steven Granick, noted that this behaviour – the puzzling tendency of enzymes to migrate away from regions rich in their substrate, known as antichemotaxis – could be explained by a fundamental property of diffusion theory, which states that diffusive substances concentrate in areas in which they move more slowly. As such, they hypothesized that interactions with substrate molecules somehow caused enzymes to diffuse through solution at greater speeds.

The researchers then went further in their reasoning: they theorized that when an enzyme catalyses a chemical reaction (the cleavage of a covalent bond, for example), some of the energy released in that chemical reaction could be converted into kinetic energy – giving the enzyme some linear momentum. By contrast, enzymes that don’t have substrate bound to them would travel through solution according to the laws of Brownian diffusion, taking smaller steps in random directions. This effect, which they called enzyme-leaping, would create a link between substrate concentration and average enzyme velocity, thus explaining the phenomenon of antichemotaxis.

To validate this theory, Granick and colleagues investigated whether they could observe individual enzymes leaping through solution in the presence of substrate. To track individual enzymes, they used fluorescence correlation spectroscopy (FCS).

In FCS, each enzyme is linked to a small fluorescent molecule and a tiny laser beam (hundreds of nanometres wide) is used to visualize fluorescence in a very small volume of space. When an individual enzyme wanders into the laser beam, its linked fluorescent molecule emits light, which is captured using a photodetector. The fluorophore doesn’t stop fluorescing until the enzyme exits the space illuminated by the laser beam. As such, the enzyme transit time is defined by the length of time that the photodetector records brightness.

Super-resolution imaging and observation of enzyme leaps

Using a state-of-the-art super-resolution technique, the team tuned the width of the laser beam down to 70 nm. Using this minimum beam width, the researchers observed two populations of transit times – one fast and one slow. They attributed the slow population to simple Brownian diffusion – just enzymes moving randomly through solution looking for substrate. However, the fast population exhibited statistical properties that were more consistent with ballistic motion.

This observation suggested that the fast population actually consisted of enzymes that were “leaping” linearly through the space illuminated by the laser beam. The team also found that decreasing substrate concentration decreased the size of the fast population, providing further support for their hypothesis by showing that enzyme leaps were linked to the presence of substrate.
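As a rough illustration of why leaping enzymes would show up as a distinct, faster population of transit times, here is a back-of-envelope sketch (not from the paper: the 70 nm beam width is the study’s reported minimum, while the diffusion coefficient and leap speed are purely illustrative assumptions):

    # Compare the expected time for an enzyme to cross the FCS laser focus by
    # ordinary Brownian diffusion with the crossing time of a ballistic "leap".
    w = 70e-9    # beam width in metres (reported minimum in the study)
    D = 3e-11    # assumed diffusion coefficient, m^2/s (~30 um^2/s, a typical order for a small enzyme)
    v = 1e-2     # assumed leap speed, m/s (illustrative only)

    t_diffusive = w**2 / (4 * D)   # characteristic diffusive crossing time
    t_ballistic = w / v            # straight-line crossing time for a leap

    print(f"diffusive transit ~ {t_diffusive * 1e6:.0f} microseconds")   # ~40 us
    print(f"ballistic transit ~ {t_ballistic * 1e6:.0f} microseconds")   # ~7 us

With these assumed numbers a leaping enzyme crosses the spot several times faster than a diffusing one, which is the kind of separation seen between the two measured populations.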

When the researchers increased the width of the laser beam to above 100 nm, the fast population disappeared. This finding indicates that enzyme leaps have a length scale on the order of 50-100 nm, above which the enzyme’s momentum gets dissipated by viscous drag forces. Performing simple calculations, the team proposed that each enzyme kick is initiated by a spontaneous force roughly 1 piconewton in magnitude, transferring kinetic energy 10-20 times greater than thermal energy to the enzyme.
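Those figures hang together in a simple back-of-envelope check, using only the numbers quoted above and the standard value of the thermal energy at room temperature:

    work per kick:   W ≈ F × d ≈ 1 pN × (50–100) nm = 50–100 pN·nm
    thermal energy:  k_B·T ≈ 4.1 pN·nm at room temperature
    ratio:           W / (k_B·T) ≈ 12–24

which sits comfortably in the 10–20-times-thermal-energy range quoted by the team.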

The exact cause of enzyme kicks is not understood. However, antichemotaxis, the property that appears to emerge directly from the existence of enzyme kicks, could have an important regulatory role in the spatial distribution of cellular biomolecules. “This can be biologically useful because it homogenizes the spatial distribution of the enzymatic production, which is essential in the crowded milieu of the cell,” the authors conclude.

Mussels modify sinking of small-sized ocean plastics

SEM images of biofouling on polyethylene

Over time, large pieces of floating plastic waste may sink to the sea floor as the attachment of macro-organisms such as mussels and barnacles weighs down the debris. But what about smaller fragments, particles and fibres – does biofouling remove these so-called microplastics from surface waters too? Scientists are keen to find out so that they can improve transport models and better understand the full impact of all sizes of plastic waste on marine environments.

In a recent study, researchers in Germany incubated millimetre-sized polystyrene and polyethylene particles in estuarine and coastal waters and monitored the sinking behaviour of the pellets back in the lab. The sinking velocities of the polystyrene material, which has a density greater than seawater, increased 16% in estuarine water and 81% in marine water after just a few weeks of incubation, the team found.

On the other hand, the pellets of polyethylene – a lower density polymer – remained buoyant following 14 weeks of exposure to estuarine water. However, the material did start to sink after six weeks of incubation in coastal water due to colonization by blue mussels, reported the Leibniz Institute for Baltic Sea Research (IOW) group.

“These results show that biofouling reduces the effect of density differences between particle and fluid over time, and thus exerts a major control over microplastic sinking,” wrote the scientists in Environmental Research Letters (ERL).

Scanning electron microscope (SEM) images of the surface of the plastic particles revealed differences in the composition of the biofilm between samples incubated at estuarine and coastal stations.

The team highlights that the attachment of fouling macro-organisms appears to be a key factor, as the development of a microscopic biofilm alone didn’t increase the specific density of buoyant microplastic sufficiently to cause samples to sink.
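The underlying criterion is a generic buoyancy argument (not a calculation from the paper): a fouled particle starts to sink once its overall density – plastic plus everything attached to it – exceeds that of the surrounding seawater:

    ρ_eff = (ρ_plastic·V_plastic + ρ_fouling·V_fouling) / (V_plastic + V_fouling) > ρ_seawater ≈ 1.02–1.03 g/cm³

For polyethylene (roughly 0.91–0.97 g/cm³) a thin microbial film barely shifts ρ_eff, whereas a few attached mussels add enough dense shell and tissue mass to tip the balance.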

During the experiments, the scientists observed that up to six mussels were connected to individual polyethylene pellets at any one time. Furthermore, the SEM images showed that mussel byssus threads attached to the microplastic particles over an area measuring just 20 microns across.

Nanomesh sensors for home healthcare

A new breathable and flexible on-skin sensor containing an array of light-emitting diodes embedded in a thin rubber sheet can display the moving waveform of an electrocardiogram. This information can then be wirelessly communicated. The device, which is easy to use, might be used in health monitoring applications for self-care in the home.

Wearable and flexible electronics have come along in leaps and bounds in the last few years with devices like biomedical electronic patches and electronic skin that can now monitor vital signs or take an electrocardiogram. These measurements can then be wirelessly sent to a smartphone or computer and be displayed in real time. They can also be sent to the cloud or to a memory device and be stored.

The new device was made by Takao Someya and colleagues at the University of Tokyo’s Graduate School of Engineering and researchers at Dai Nippon Printing. It consists of a 16 × 24 array of commercially available 630 nm wavelength (red) LEDs and stretchable wiring embedded in a rubber sheet, and is just 1 mm thick in total. It is far more robust to stretching than previous such wearable displays thanks to a novel structure that minimizes the stress caused by stretching as it is worn.

Buckling helps

“We began by printing stretchable silver pastes on ultrathin plastic foils, which we then laminated with pre-stretched rubber substrates,” explains Someya. “When the substrates relax after the pre-stretching step, buckling structures can form. This buckling helps minimize stress resulting from stretching on the junction between the hard material (the LEDs) and the soft material (the elastic wiring). This stress was one of the reasons for device damage and failure in the past.”

The device can be continuously worn on the skin for a week without causing any irritation, he adds, and is stable in air.

“Although we previously used this sensor to measure temperature, pressure and myoelectricity (the electrical properties of muscles), we were able to take an electrocardiogram for the first time with it.

“And while the current display is connected to a small rigid box that contains batteries, memory and a driving circuit, we will be miniaturizing the box in the near future and connecting it to Wi-Fi,” he tells nanotechweb.org.

Towards commercialization

Indeed, Dai Nippon Printing says that it would like to commercialize the device within the next three years by optimizing its structure and improving the production process. “The fact that we employed techniques widely used in the electronics industry – namely, screen printing the silver wiring and mounting the LEDs on the rubber sheet with a chip mounter and solder paste (routinely used in manufacturing printed circuit boards) – will likely help overcome technical challenges such as large-area coverage,” says Someya.

“We believe that the device might come in particularly useful for the elderly or infirm, because it is easy to use,” he adds. “It could be used as a wearable sensor to monitor patient vital signs and so reduce the burden on hospitals and nursing homes by providing at-home care. This should ultimately improve the quality of life for many.”

The research was presented at a news briefing and talk at the AAAS Annual Meeting in Austin, Texas.

Got your physics degree… now what?

Last year, we launched our first ever Physics World Careers guide, which brought together all of our best content from the careers section of our monthly magazine, together with an extensive directory of employers. This year, we’re back with the bigger Physics World Careers guide 2018, packed with case studies and analyses, to help you choose the right path for your future. As James McKenzie, vice-president for business at the Institute of Physics, points out in his foreword for the guide: “As physicists, you will have learnt to be logical, analytical and articulate. These are skills that are highly prized by employers and open up many career paths for you.”

More and more, physics graduates take up jobs outside of academia, in a host of different industries, and the 2018 Physics World Careers guide has a special focus on careers in everything from making designer lasers to accelerator physics, and from developing safe nuclear-waste disposal techniques to building a quantum computer for Google. To help you get these exciting roles, we’ve even compiled articles on “Your pathway to industry” and “How to write a CV for industry”.

Of course, for those of you who foresee a future in academic research, we have you covered with our case studies, as well as advice on how to pick a strong first research topic or project, and much more. Also take a look at how working in a lab can help you become an excellent educator or even headteacher of a school; or go into the fulfilling field of science communication.

And don’t forget to look at our “Beyond physics” section to learn more about the many exciting things you can do after your physics degree that are outside of the field – from classical dance to data science, meet the physicists who have gone beyond the fold, but still put their hard-earned physics knowledge to good use.

You can read Physics World Careers free via the Physics World app, available for web browsers, iOS and Android.

Multiple terminal memtransistors step up to mimic brains

Memristors bring the learning functionality of connections in biological neural systems to the connections in electronic circuits. As such they have attracted a lot of interest for their potential use in neuromorphic electronics. However, even state-of-the-art memristor devices have been limited to two or three terminals, whereas in the brain, synapses outnumber the neurons they connect by a factor of a thousand. Now, researchers at Northwestern University in the US led by Mark Hersam have demonstrated the first ever multiterminal memristor based on polycrystalline 2D MoS2.

In conventional electronics the resistance in the connections is set, but in biological neural systems the connectivity changes depending on what signals have previously travelled along that synaptic pathway. These synaptic connections between neurons allow a level of learning and functionality that scientists would like to emulate in neuromorphic electronics.

Memristors are circuit elements whose resistance switches between high and low states depending on the voltage inputs that have passed through previously. Leon Chua predicted the existence of these “missing circuit elements” in the 1970s, and they have attracted a lot of interest since R Stanley Williams and colleagues at HP Labs “found” them, because, among other things, they show great promise for emulating synaptic connections.

Attempts to produce a memristor with more than two terminals date back as far as the 1960s, with Bernard Widrow and Ted Hoff’s adaptive linear (Adaline) chemical memristor. However, neither the three-terminal Widrow–Hoff memristor nor subsequent setups of field-effect transistors with nanoionic or floating gates have demonstrated memristive switching. Hersam and his team have now demonstrated the viability of multiterminal memtransistors thanks to the memristive switching mechanism that operates in their MoS2 device.

Moving to multiterminals

Hersam and his colleagues modelled the behaviour of the MoS2 memtransistor in terms of Schottky barriers – potential energy barriers that charge carriers must overcome or tunnel through to flow. These Schottky barriers change under a high bias voltage in response to image-charge barrier lowering and tunnelling, and it is this gate-voltage dependence that the researchers describe as the “most important feature of the MoS2 memtransistor, distinguishing it from a two-terminal memristor”.
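For context, the standard textbook picture of transport across a Schottky barrier – which the article does not spell out, and which is not necessarily the authors’ exact model – already shows why small, bias-driven barrier changes produce large conductance changes:

    J = A*·T²·exp(−q·φ_B / k_B·T)·[exp(q·V / k_B·T) − 1]    (thermionic emission over a barrier of height φ_B)
    Δφ_B = √( q·E / (4π·ε_s) )                              (image-force lowering of the barrier in a field E; ε_s is the semiconductor permittivity)

Because the current depends exponentially on the effective barrier height φ_B − Δφ_B, even modest, history-dependent shifts in the barrier show up as large, memristor-like changes in conductance.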

The gate-dependent Schottky barrier switching allows control of conductance in circuits that emulate pre- and post-synaptic modulation by means of additional terminals. While other setups such as Ag2S memristors allow terminal-controlled conductance modulation, switching in these devices is based on filament formation, which limits the device to three terminals.

The researchers fabricated a six-terminal synaptic memtransistor from polycrystalline MoS2 and showed they could modulate the switching ratio between any pair of side electrodes by a factor of between 2 and 10 using the gate voltage. They also demonstrated long-term potentiation and depression – learning functions of biological synapses. Since they used polycrystalline material, they suggest that scaling the technology to large areas should be straightforward.

Full details are reported in Nature doi:10.1038/nature25747.

Physicists beat Lorentz reciprocity for microwave transmission

Most devices for the transmission of electromagnetic signals obey Lorentz reciprocity, meaning that signals propagate freely in both directions through circuits. A microwave pulse, for example, can travel in either direction along a waveguide and a light signal can move both ways along an optical fibre. This two-way traffic can cause problems and current technologies for avoiding reciprocity tend to be large and unwieldy. But now physicists in the US have come up with a more practical solution.
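In two-port scattering-matrix terms – a standard way of stating the constraint, though the article does not spell it out – Lorentz reciprocity requires the forward and backward transmission coefficients to be equal:

    S_21 = S_12      (reciprocal two-port: identical transmission in both directions)

An ideal isolator instead needs |S_21| ≈ 1 with |S_12| ≈ 0, something no linear, time-invariant, magnet-free device can deliver – hence the resort to magnetic bias, active modulation or, as in the work described here, nonlinearity.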

Lorentz reciprocity creates challenges for circuit designers because backward-propagating reflections can inject noise into circuits and even damage devices such as lasers. Isolators currently used in radar microwave transmitters, for example, get around this problem using a large external magnetic field. Waves propagating in the opposite direction see the opposite field and are therefore affected differently. However, this requires large, heavy magnets and adds considerably to the circuit’s power consumption. While nonmagnetic isolators have been developed, their performance has been lacking.

Now, Andrea Alù and colleagues at the University of Texas at Austin and City University of New York (CUNY) have shown that two nonmagnetic isolators can be combined to produce a device that transmits a signal almost perfectly in one direction, but has near-zero transmission in the opposite direction.

Fully passive device

In 2014, Alù and colleagues created a nonmagnetic isolator using externally modulated resonators in a loop design called a circulator. “You remove the magnet, but there is an extra complexity and power consumption that comes from the energy you need to effectively move your circuit or change it in time,” says Alù. “That’s why in the past months we’ve started to look at whether we can break reciprocity in a fully passive device that doesn’t need any external input.”

Their solution is a non-linear isolator with a peak transmission frequency that changes in response to the signal itself. This idea is not entirely new: several theoretical papers have been published exploring the properties of various nonlinear resonators as passive isolators in the past few years, but the designs have achieved limited success.

In the new research, Alù’s team shows that time-reversal symmetry places a fundamental constraint on all isolators comprising a single nonlinear resonator. The problem is that to work over a broad bandwidth they must sacrifice transmission efficiency. All is not lost, however, because the team goes on to show that two different types of common resonator connected together by a delay line – of length carefully chosen so that the wave’s phase evolves by the required amount between the resonators – can overcome this constraint. “You can realize a device that, excited from one side, concentrates most of the field in one element and reflects it back but, excited from the other side, has the field in both elements and can transmit,” explains Alù, who is based at the Advanced Science Research Center at the Graduate Center at CUNY.

Broad bandwidth

The combined isolator can offer large transmission and complete isolation over a relatively broad bandwidth. In an experimental demonstration, the researchers achieved much better performance than the best isolation achievable – even in theory – with a single nonlinear resonator.

Their device works at microwave frequencies, which are crucial to telecommunications including Wi-Fi and mobile phones. Alù says that the technology should also transfer relatively easily to other spectral regions such as optics – where isolators protect lasers from detuning or damage by their own reflected beams.

“We are working on an experiment in optics now where we stack two patterned surfaces of silicon – which is a nonlinear material – with a certain separation,” he says. “That structure would work as a mirror from one side and a transparent wall from the other, and it’s fully passive.” One caveat, he adds, is that, as these nonlinear isolators rely on the asymmetry of the signal incidence to break reciprocity, they do not work properly if waves hit both sides of the isolator at the same time: “These isolators are good for pulsed operation,” says Alù.

Fabio Biancalana of Heriot-Watt University in Scotland says, “This is very similar to what in electronics we call a diode, which lets current pass one way but not the other”. He adds, “But whereas electrical diodes are very efficient because electrons interact a lot through the Coulomb interaction, electromagnetic waves don’t see each other in normal conditions, so it’s very difficult to obtain an isolator…The next step would be to try and build a transistor for microwaves – which is basically a diode with two junctions instead of one.”

The isolator is described in Nature Electronics.
