
Chiral surface excitons spotted on topological insulator

The first ever observations of chiral surface excitons have been made by Girsh Blumberg and colleagues at Rutgers University in the US. The team made their discovery after detecting circularly-polarized light emerging from the surface of a topological insulator. Their work could have a range of applications including lighting, electronic displays, and solar panels.

An exciton is a particle-like excitation that occurs in semiconductors. It comprises a bound electron-hole pair, which can be created by firing light at the surface of a semiconductor. After a short time, the electron and hole recombine and emit a photon of photoluminescent light. Photoluminescence spectroscopy is an established technique for studying the electronic properties of conventional semiconductors and is now being used to study topological insulators – a family of crystalline materials with highly conductive surfaces and insulating interiors.

The surfaces of topological insulators contain 2D gases of Dirac electrons, which behave much like photons with no mass. Theoretical calculations suggest that when such electrons are excited, the resulting holes have mass. Until now, however, physicists have had scant experimental information about excitons comprising massless Dirac electrons and massive holes.

Spiralling electrons

In their study, Blumberg’s team looked at the surface of bismuth selenide, which is a well-known topological insulator. Using a broad range of energies to excite surface excitons, the physicists found that circularly-polarized photoluminescent light is emitted when the excitons decay. This, they say, indicates that the electrons are spiralling towards the holes as they recombine. Since spirals cannot be superimposed onto their mirror images, Blumberg’s team describe the quasiparticles as chiral excitons.

The researchers believe that this chirality is preserved as a result of strong spin-orbit coupling, which affects both electrons and holes. This coupling locks the spins and angular momenta of the electrons and holes together. This preserves the chirality of the excitons by preventing them from interacting with thermal vibrations on the surface – even at room temperature. This is unlike conventional excitons, which lose their chirality rapidly through thermal interactions and therefore do not emit circularly polarized light.

The precise dynamics of chiral excitons are still not entirely clear. In future research, Blumberg’s team hopes to study the quasiparticles using ultrafast imaging techniques. The researchers also believe that chiral excitons may be found in materials other than bismuth selenide.

From a technological point of view, Blumberg and colleagues say that circularly-polarized photoluminescence could make topological insulators ideal for a wide variety of photonic and optoelectronic technologies. They say that bismuth selenide optical coatings would be easy to mass-produce, making them ideal for applications ranging from ultra-clear television screens to highly efficient solar cells.

The research is described in Proceedings of the National Academy of Sciences.

Wireless neurostimulator modulates rats’ behaviour

The e-Particle neurostimulator

Deep brain stimulation (DBS) using implanted neurostimulators is a promising treatment for neurological disorders such as Parkinson’s disease, essential tremor and epilepsy. But wired neurostimulators often come with adverse side effects such as infection, discomfort and the need for surgeries to repair fragile components and replace batteries. For pre-clinical studies, meanwhile, the wires restrict animals’ movement and limit potential applications.

As such, there’s a need for a wireless technology that enables precise, minimally invasive modulation of deep brain structures. To meet this requirement, a team at Massachusetts General Hospital and Draper is investigating the potential of the e-Particle, a novel wireless neurostimulator (J. Neural Eng. 10.1088/1741-2552/aafc72).

“If we understand how changing brain activity changes behaviour, we can understand the brain better and can design better treatments,” explains senior author Alik Widge. “One of the best ways to change the brain is by stimulating it. The challenge is that, in animal models, we often do that through big head-mounted tethers, which limit what you can do. You can’t do social behaviour experiments because the tethers will tangle; you can’t let the animals explore burrows and tunnels the way they do in nature. A good wireless technology could change all that.”


Behaviour modulation

The sub-millimetre-sized e-Particle is inductively powered and does not require a battery or head-mounted tethers. To test the device, the researchers performed a conditioned place preference (CPP) task in eight adult rats. They surgically implanted the e-Particle on one side of each animal’s brain, targeting the medial forebrain bundle (MFB), a common target for behaviour experiments. For comparison, they also implanted Plastics One electrodes (a wired stimulator) on the other side.

After recovery from surgery, animals were placed in the CPP field without stimulation for five 15-minute habituation sessions. On the next two consecutive days, they underwent 15-minute stimulation sessions, during which a stimulation pulse was triggered every time they entered a chosen stimulation quadrant (initially, each animal’s least preferred quadrant).

In each round of CPP testing, the rats were randomly assigned to receive either wireless (tens of microamps, 50 Hz) or wired (350 μA, 170 Hz) stimulation. The authors note that the animals remained tethered during e-Particle stimulation, to match conditions. Finally, the researchers performed a test session where animals were placed in the CPP field for 15 minutes with no stimulation.

This analysis revealed that, for the wired group, time spent in the stimulation quadrant significantly increased during the first and second stimulation sessions (indicating strong place preference conditioning) and the test session, compared with the baseline session. Animals in the wireless group also spent more time in the stimulation quadrant during the second stimulation session and the test session, but not during the first stimulation session.

Behavioural results

These results confirmed that the e-Particle could modulate rats’ behaviour by MFB stimulation. Wireless stimulation took slightly longer to achieve CPP than wired stimulation, but ultimately did not differ in the degree of place preference. This longer training time may be related to the e-Particle’s lower pulse amplitude compared with the wired device.

“One of the challenges of this technology design, and of these battery-less, energy-harvesting wireless systems more generally, is that they are limited in their current delivery by the physics of the energy-harvesting system,” explains Widge. “Getting more efficient field-to-current conversion in the same or smaller form factor would be important.”

Brain activity

After completing the CPP tests, each rat received 15 minutes of e-Particle and wired stimulation. Sixty minutes later, the animals were sacrificed and the researchers performed immunohistochemistry to measure c-fos, a marker of recent brain activity.

On the side of the e-Particle implant, there was significantly greater c-fos expression in the nucleus accumbens (which receives MFB projections) than in the motor cortex (which does not). This suggests that the e-Particle successfully activated the MFB and its projections. Widge notes that, although they deliver different amounts of current to the target area, wired and wireless stimulation affected brain activity in a very similar way.

The researchers concluded that the e-Particle can stimulate a specific target and that this stimulation can effectively modulate behaviour. They are now focusing on increasing its ability to activate neural structures while reducing tissue damage.

“To that end, we have a grant with Polina Anikeeva of MIT, who has developed nanoscale particles that similarly harvest energy from frequency-tuned magnetic fields,” Widge tells Physics World. “We’re looking at using her technology to modify reward behaviours very similar to those seen in this study.”

Why artificial intelligence has brought scientists and philosophers together

When the concept of artificial intelligence (AI) was developed in the 1950s, its founders thought they were on the verge of fully modelling human thought processes and intelligence. Indeed, the US economist Herbert Simon – a future Nobel laureate – was so confident of AI’s prospects that in 1965 he predicted machines would, by 1985, “be capable of doing any work a man can do”. As for Marvin Minsky, the Massachusetts Institute of Technology (MIT) cognitive scientist who co-founded its Artificial Intelligence Laboratory, he boldly announced that “within a generation…the problem of creating artificial intelligence will substantially be solved”.

Not everyone was so sure. Hubert Dreyfus, an MIT philosopher, argued that these ambitions were conceived in such a way as to be unachievable in practice and impossible in principle. AI research was doomed to fail because it was based on an incoherent rationalist philosophy. Intelligent human behaviour, he argued, is much richer than information processing. It requires responding to situations with “common-sense knowledge”, which was not amenable to programming.

Dreyfus outlined his thinking in a 1964 article, commissioned by the Rand Corporation (a policy think-tank), entitled “Alchemy and artificial intelligence”. He elaborated the arguments in his 1972 book What Computers Can’t Do. Many of his arguments concerned what had already been dubbed the “frame problem”.

Get in the frame

Robots, Dreyfus thought, can be programmed to execute complex human tasks like walking over rough surfaces, smiling or placing reasonable bets. But to do so in a human way requires taking into account the specific situation in which these actions occur. To place a judicious bet on a racehorse, for instance, a human normally looks at the horse’s age, past history of wins, the jockey’s training and history, and so forth, all of which change with each race.

No problem, argued Minsky. A robot can be programmed to do that, turning these factors into information for the robot to process with a complete set of rules or “frame”. If you give the robot a frame, it’ll be able to bet on racehorses like a human – maybe better.

But there’s more to it, Dreyfus argued in What Computers Can’t Do. Many other factors are present in racehorse betting, such as the horse’s allergies, the racetrack’s condition, and the jockey’s mood, all of which change from race to race. However, when computer programmers analyse the relevant factors, Dreyfus noted, they end up behaving more like novices and amateurs (who often get things wrong) and less like experts (who get things wrong much less often yet appear to rely on no set “programming” method at all). In this regard, a true expert more closely resembles someone exercising common sense than someone consciously analysing factors.

Some AI enthusiasts accused Dreyfus of using the frame problem to try to drive a stake through the heart of AI. To counter Dreyfus’s objection, they felt, one simply programmed the robot to identify these other factors if and when they are relevant, thus putting another frame around the first. Dreyfus responded that the computer would need still another frame to recognize when to apply this new one. “Any program using frames,” he wrote in his 1972 book, “was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts.” The common-sense knowledge storage and retrieval problem, in other words, “wasn’t just a problem; it was a sign that something was seriously wrong with the whole approach”.

Two approaches

The frame debate turned on a fundamental philosophical difference between a “Cartesian” and a “Heideggerian” approach. In a Cartesian approach, the world in which humans live and cope is assumed to exist entirely apart from the minds that consciously try to “know” it. In this view, to be human is to cope with life using knowledge and beliefs to assess the meaningful factors that they confront. Programming a computer to respond to a situation humanly therefore means supplying it with knowledge and information-processing capacity. If those prove inadequate, the solution is to add in more of the right kind of knowledge or processing ability. This Cartesian-inspired strategy is known as “Good Old Fashioned AI”, or GOFAI.

The Heideggerian approach, in contrast, begins with a very different conception of the human–world relation. It says that the fundamental human experience of the world is not of an external realm of objects that we theorize about. Rather, humans are immersed in the world in a web of connections and practices that make the world familiar. Coping with the world is more like enacting common sense than employing theories and analysing them. Theorizing comes later, as a guide to some forms of deliberate action, where spontaneous employment of common sense is ineffective or otherwise insufficient.

Putting a frame around a situation transforms it from a situation in the world into an artificial world. That’s valuable for executing many tasks, such as playing chess or evaluating complex systems. But if the aim is to make a robot respond to all situations, adding more knowledge or computing capacity won’t do. There is no “frame of all frames” that would allow this. Heideggerian AI, Dreyfus writes, “doesn’t just ignore the frame problem nor solve it, but shows why it doesn’t occur”.

The critical point

AI research has come a long way in the last half-century, and no longer depends on the crude conceptions of intelligence taken for granted in GOFAI. In fact, the more sophisticated recent conceptions were developed in part by incorporating elements of Dreyfus’s initial critique. Mostly absent now, for instance, is the claim that AI will fully model human thinking and intelligence. Arguments about the frame problem have become quite technical, but it remains a core issue. It is one of the few areas where philosophers and scientists have managed to engage each other productively.

Extreme extratropical cyclones could triple in number by century-end

Unmitigated climate change will cause substantial increases in large-scale rainfall events in Europe and North America, according to researchers from the UK.

The analysis suggests that policy makers could need to develop new management strategies “to take into account the changing frequency and intensity of these events”, says Matt Hawcroft of the University of Exeter, UK.

Warmer climates are expected to deliver precipitation extremes of greater frequency and intensity. Precipitation intensity is governed by the Clausius–Clapeyron relation, which describes how much more water vapour the atmosphere can hold as its temperature increases. But the relation is a simple scaling that gives little indication of where new intense precipitation will occur.
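The scaling behind this claim is easy to sketch numerically. The widely quoted rule of thumb is roughly 7% more water vapour per degree of warming; the August–Roche–Magnus approximation used below is a standard textbook formula for saturation vapour pressure, not something taken from the paper itself:

```python
import math

def saturation_vapour_pressure(temp_c):
    """Approximate saturation vapour pressure (hPa) over water,
    using the August-Roche-Magnus formula."""
    return 6.112 * math.exp(17.625 * temp_c / (temp_c + 243.04))

# Clausius-Clapeyron scaling: each degree of warming lets the
# atmosphere hold roughly 7% more water vapour.
e_15 = saturation_vapour_pressure(15.0)
e_16 = saturation_vapour_pressure(16.0)
print(f"increase per 1 degC of warming at 15 degC: {100 * (e_16 / e_15 - 1):.1f}%")
```

Running this gives an increase of about 6-7% per degree, consistent with the usual Clausius–Clapeyron estimate.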

Unfortunately for climate modellers, there are many competing processes that determine the paths storms travel, including equator-to-pole temperature gradients, sea-ice loss, sea-surface temperature patterns and land-sea temperature contrasts, not to mention feedbacks from the storms themselves. The result at present is a large uncertainty in where storms will move under climate change.

Yet Hawcroft and his colleagues at Exeter and the University of Reading, UK, believe it is still possible to gain an insight into the nature and frequency of extreme precipitation changes at the regional level — by analysing the behaviour of the storms themselves in Europe and North America in simulations of both present-day climate and climate change under unmitigated greenhouse-gas emissions.

The team used an algorithm that seeks out vortices to identify storms in their models so that they could assign precipitation to individual events. Then they compared the number and intensity of the cyclones in the present-day and future climate-change simulations.

Hawcroft and colleagues found a big increase in the frequency of extreme extratropical cyclones – those with precipitation intensity above today’s 99th percentile – by the end of the century under unmitigated climate change; the number of extratropical cyclones delivering such intense rainfall more than tripled.

“Even with this uncertainty [in the underlying storm paths], there is quite a lot of consistency in the increase in extreme-storm-associated precipitation,” says Hawcroft.

Hawcroft is now exploring the dynamics that drive inter-annual and sub-seasonal variability in storms producing extreme rainfall, and how those dynamics could change in a warmer climate. The team reported the findings in Environmental Research Letters (ERL).

Mapping the network of international tourism

For people whose travel plans got shredded by this morning’s snowfall across the north-eastern US, it won’t be a surprise to hear that the world’s aviation network is vulnerable to disruption. Some travellers might, however, raise a weary eyebrow at the news that losing highly-trafficked nodes in that network – Logan Airport in Boston, say – is less damaging to the network’s integrity than losing less-busy “feeder” nodes.

This counterintuitive result formed part of a talk by Nuno Araujo on the opening day of the American Physical Society’s March Meeting, which is taking place this week in the snowbound city of Boston. Araujo and his colleagues at the University of Lisbon, Portugal, have been studying the world aviation network, or WAN, for several years. Their model is based on data from openflights.org, and it takes in such information as the locations of the world’s 3237 airports (nodes) and the structure of the 18,125 connections between them (links).

On average, each airport in this network is connected to 19.21 others, while the average number of connecting flights required to get from point A to point B is 4.05. The maximum number of connections, meanwhile, is 12 – and if a 12-connection journey doesn’t sound like your idea of a relaxing holiday, you aren’t alone. The Lisbon group’s latest research combines the WAN information with data on where travellers actually go, and one of their findings is that people who fly for leisure – tourists – strongly prefer destinations that are either nearby (defined as less than 1000km distant), or connected by a single, direct flight.
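The network metrics quoted above – average degree, mean shortest-path length and diameter – can all be computed with a plain breadth-first search. The toy route map below is entirely invented to illustrate the calculation (the airport codes are made up and the numbers are not the real WAN values):

```python
from collections import deque

# Toy route map standing in for the WAN: airports are nodes,
# direct flights are links. All codes are invented for illustration.
routes = {
    "HUB": {"AAA", "BBB", "CCC"},
    "AAA": {"HUB", "BBB"},
    "BBB": {"HUB", "AAA"},
    "CCC": {"HUB", "DDD"},
    "DDD": {"CCC", "EEE"},
    "EEE": {"DDD"},
}

def hops(graph, start):
    """Breadth-first search: number of flight legs from start to every airport."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

avg_degree = sum(len(links) for links in routes.values()) / len(routes)
all_dists = [d for a in routes for d in hops(routes, a).values() if d > 0]
print(f"average connections per airport: {avg_degree:.2f}")
print(f"mean shortest path: {sum(all_dists) / len(all_dists):.2f} legs")
print(f"diameter (worst-case journey): {max(all_dists)} legs")
```

On the real 3237-node network the same procedure yields the figures in the article: an average degree of 19.21, a mean path of 4.05 legs and a diameter of 12.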

Speaking as someone who once flew from north-east England to my hometown of Kansas City via London, Reykjavik, New York and Atlanta, I can’t say I’m shocked by this result. The group’s other findings, however, include some head-scratchers. It’s become a commonplace to say that we live in a highly interconnected world. Indeed, the fact that it is theoretically possible to fly from the tiniest, most remote airport in country A, to an equally off-the-beaten-track destination in country B, in no more than 12 “hops”, is in some ways proof of this connectedness. In practice, however, the Lisbon researchers found that only 15% of the world’s countries experience bi-directional flows of tourists. Instead, the dominant pattern is of imbalance: some countries send out lots of tourists but receive few, and many others experience mass influxes of visitors while their own citizens stay at home (presumably, in some cases, to work in the tourism industry).

The group’s map of the world’s tourism “communities” also turns up a few surprises. In tourism terms, Madagascar is part of Europe’s community, not Africa’s. Colombia and Venezuela share more tourists with North and Central America than they do with fellow South American countries such as Argentina and Brazil. And for some reason, the west African nation of Mauritania belongs to a tourism community that includes China, south Asia, Australia and Oceania, but not any of its neighbours, which are instead part of communities centred in sub-Saharan Africa and North America.

Araujo’s latest research is full of such fascinating facts, and it also has a serious purpose. At the end of their paper, the authors state that it is “imperative to further explore how wealth is transferred through tourism” in order to optimize the WAN to meet the demands of this $1340bn-a-year industry. In a highly connected world, it’s not just the major nodes that matter.

How a gamma camera works in cancer treatment

In this short video, Heather Williams from the Christie Hospital explains the principles of how gamma cameras are used within oncology. Williams, a senior medical physicist for nuclear medicine, describes how the equipment is used for functional imaging, by tracking radioactive tracers injected into patients. Such gamma cameras are typically used as part of cancer diagnosis and for the monitoring of treatment.

This video was part of a series of films recorded at the Christie NHS Foundation Trust in Manchester, UK. They included a look inside the centre’s proton-therapy system, its brachytherapy options and MRI equipment.

Janus droplets reflect an explosion of colour


A new technique for creating iridescence in droplets has been developed by Lauren Zarzar and colleagues at Pennsylvania State University and the Massachusetts Institute of Technology. The team discovered the technique accidentally and believes that it could have a wide range of applications, from paints to sensors.

The familiar iridescent colours in films of oil or bubbles of soap are caused by interference between reflections from the front and back of the film. A similar effect occurs as a result of diffraction from the ridges on a CD or the scales on a butterfly’s wing. All these effects require features on the scale of the wavelength of visible light, which is approximately 400-700 nm.
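The thin-film condition can be made concrete with a quick calculation. For a film in air, the half-wave phase flip at the top surface means bright reflection occurs when 2nt = (m + ½)λ. The film values below are illustrative textbook numbers, not taken from the article:

```python
# Constructive-reflection wavelengths for a thin film in air:
# 2*n*t = (m + 1/2)*lambda, where the extra half accounts for the
# phase flip at the air-film surface. Illustrative parameters:
n_film = 1.33  # refractive index of a soap solution
t_nm = 300.0   # film thickness in nm

for m in range(4):
    wavelength = 2 * n_film * t_nm / (m + 0.5)
    if 400 <= wavelength <= 700:
        print(f"m={m}: strong reflection at {wavelength:.0f} nm")
```

For this thickness only the m = 1 order falls in the visible, at 532 nm – which is why a soap film of a given thickness shows one dominant colour at a given viewing angle.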

Zarzar’s team, however, observed iridescent colours in hybrid droplets that were 100 μm in diameter and made from the hydrocarbon heptane and the fluorocarbon perfluorohexane. The droplets had “Janus” morphologies – each was essentially half a heptane droplet stuck to half a perfluorohexane droplet (see figure).

Seeking tuneable lenses

The team was interested in these droplets because heptane and perfluorohexane is a combination that breaks the usual correlation between refractive index and density. Denser materials normally have higher indices of refraction, but as Zarzar explains: “We were intentionally trying to flip it the other way. The fluorocarbons tend to be very dense but also very low refractive index.” The researchers’ intention was that the droplets would behave as miniature, tuneable lenses.

They were amazed, however, to find that the droplets reflected brilliant colours that varied with viewing angle when they were illuminated with far-field white light. “When we see this iridescence, I’m immediately thinking we have something that’s giving us periodicity on the wavelength of light,” says Zarzar, “because that’s what would lead to interference and structural colour.”

However, they could find no evidence for such periodicity. Moreover, when they looked at the droplets under a microscope, many features they observed made no sense from the point of view of traditional diffractive optics. For example, each droplet seemed to reflect a specific colour of light when viewed from a particular angle, irrespective of the position of other droplets.

“It didn’t matter whether the droplets were close packed or randomly oriented, so it didn’t come from some kind of diffraction grating happening between the droplets or something like that,” says Zarzar.

Counterintuitive effect

An emulsion containing droplets of one size reflected pure colours that changed with the viewing angle. Droplets of different sizes reflected multiple colours at any given angle, however, with such an emulsion producing white light. “There was something happening in a single, 100 μm-scale droplet that was somehow causing this effect, and it wasn’t intuitive that you could get interference from something that large that didn’t have any periodicity within it,” says Zarzar.

After much head scratching, the researchers realized the phenomenon could arise from total internal reflection. Because it has a lower density, the heptane nestles within the perfluorohexane, with a concave interface between the two liquids (see figure). When light hits this interface at an angle, classical ray optics dictates that, as the perfluorohexane has a lower refractive index, it should refract away from the normal. If the angle of incidence is too great, however, the light cannot escape and undergoes total internal reflection.
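The onset of total internal reflection is set by the critical angle θc = arcsin(n₂/n₁). A quick estimate, using indicative literature values for the two liquids that are not quoted in the article, shows the effect kicks in well before grazing incidence:

```python
import math

# Critical angle for total internal reflection at the liquid-liquid
# interface. Indicative refractive indices (assumed, not from the paper):
n_heptane = 1.39          # optically denser, but physically lighter
n_perfluorohexane = 1.25  # physically denser, but optically rarer

theta_c = math.degrees(math.asin(n_perfluorohexane / n_heptane))
print(f"critical angle: about {theta_c:.0f} degrees")
# Rays hitting the concave interface beyond this angle are trapped
# and skip along it, accumulating different path lengths.
```

With these values the critical angle comes out at roughly 64°, so a substantial range of incident rays is trapped at the concave interface.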

“The light may bounce several times around that concave interface depending on the initial angle of incidence, and rays that have bounced different numbers of times have different path lengths and end up out of phase,” explains Zarzar. “That’s what gives rise to the interference. Our collaborators at MIT have a model that can use the refractive index contrast, the geometry of the interface and the incoming angle of the light to predict the pattern of iridescence that you generate.”

The researchers also showed that, by incorporating an ultraviolet-responsive surfactant into the droplets, they could use UV light to change the interfacial properties of the liquids and thereby control the colours of visible light reflected by the droplets. “We have several publications where we sensitize these biphasic droplets to things like temperature and pH,” says Zarzar. “You could use the colour as a readout of the morphology of the emulsion, which could be used for sensing.” In the nearer term, they hope to produce iridescent paints that do not require nanoscale structures.

Commenting on the team’s discovery, David Weitz of Harvard University says, “I give them a lot of credit for recognizing it and figuring it out. I don’t see that structural colour has ever really had a huge impact technologically, but maybe this will have more of a chance because here you don’t have to worry about precisely controlled thicknesses – I don’t know.”

The effect is described in Nature.

Upconverting nanoparticles allow mice to see in infrared

Mammals can detect light in the visible wavelength range of the electromagnetic spectrum, that is, between 400 and 700 nm. A team of researchers have now extended this capability to the near-infrared in mice by injecting photoreceptor-binding upconverting nanoparticles (pbUCNPs) into the back of their eyes. The technique could be used to develop improved near-infrared and night vision technologies for military and civilian applications and help treat certain ocular defects.

The photoreceptors (rods and cones) in mammalian eyes contain light-absorbing pigments consisting of opsins and their covalently-linked retinals. Wavelengths greater than 700 nm are too long to be absorbed by these photoreceptors, so when such light hits the retina no corresponding electrical signal is sent to the brain.

Extending the normal wavelength range

In recent years, researchers have been looking to integrate nanoparticles with photoreceptors in the eye so that it can detect light outside of the normal wavelength range. In the new work, a team led by Tian Xue of the University of Science and Technology of China says that it has used pbUCNPs to extend the mammalian visual spectrum to the near-infrared (NIR) range. The particles tightly bind to photoreceptor cells and act as NIR light transducers – capturing longer NIR wavelengths and emitting shorter ones in the visible light range. The rods or cones then send a normal signal to the brain, as if it were dealing with visible light.

The human eye is most sensitive to light around 550 nm. To convert NIR light to this wavelength, Xue and colleagues generated core-shell-structured upconversion nanoparticles (UCNPs) made of β-NaYF4:20%Yb, 2%Er@β-NaYF4. When irradiated with NIR light around 980 nm, these particles convert it into light with an emission peak at 535 nm. To design photoreceptor binding UCNPs, the researchers conjugated concanavalin A protein (ConA) with polyacrylic acid-coated UCNPs (paaUCNPs).
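A quick energy check shows why this counts as “upconversion”: a single 980 nm photon carries less energy than the 535 nm photon that is emitted, so the Yb/Er system must pool the energy of at least two absorbed photons per emitted one. The constants below are standard physics, not figures from the paper:

```python
import math

HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc / lambda, in electronvolts."""
    return HC_EV_NM / wavelength_nm

e_in = photon_energy_ev(980.0)   # NIR excitation photon
e_out = photon_energy_ev(535.0)  # green emission photon
print(f"980 nm photon: {e_in:.2f} eV; 535 nm photon: {e_out:.2f} eV")

# The emitted photon carries more energy than any single absorbed one,
# so at least two NIR photons must be pooled per green photon:
print(f"minimum photons pooled: {math.ceil(e_out / e_in)}")
```

The 980 nm photon comes out at about 1.27 eV against 2.32 eV for the green emission, hence the two-photon (or more) requirement.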

Retina and visual cortex both activated by NIR light

They injected the nanoparticles into the sub-retinal region of the eyes of mice. Using in vivo electroretinograms (ERGs) and visually evoked potential (VEP) recordings in the animals’ visual cortex, they found that the retina and visual cortex of the pbUCNP-injected mice were both activated by NIR light.

Animal behaviour tests (in a water maze) also revealed that the mice were sensitive to NIR light and could sense it even in daylight conditions, meaning that NIR and visible-light vision are possible at the same time. The animals were also able to distinguish NIR-light shape patterns, such as triangles and circles. The researchers say they were surprised to discover that even exceptionally low-power-density LED light was sufficient to activate the ConA-conjugated nanoparticles, allowing the animals to see the patterns.

The pbUCNPs are long-acting and NIR pattern vision can last for over 10 weeks without interfering with normal vision, they report.

The team, which includes scientists from the University of Massachusetts Medical School, says it now wants to try out the technique in dogs and primates.

Self-powered

“Current NIR vision technology, such as night vision goggles, makes use of detectors and cameras that do not work at all during the day,” says team member Jin Bao. “These devices also require external power sources. In contrast, our injectable solution is self-powered and works with both visible and NIR light at the same time.”

“The nanoparticles employed in our work could also allow us to explore a variety of vision-related behaviours in animals,” she adds. “What is more, they could serve as an integrated and light-controlled system in medicine and might be used to improve human red colour vision deficits, as well as in drug delivery for ocular disease.”

Full details of the research are reported in Cell.

Laser pioneer and Nobel laureate Zhores Alferov dies at 88

The Russian physicist Zhores Alferov, who shared the 2000 Nobel Prize for Physics with the US scientists Herbert Kroemer and Jack Kilby, died on 1 March aged 88. Alferov pioneered the creation of semiconductor lasers, which led to a vast number of applications that are now ubiquitous in modern life such as DVD players and mobile phones.


Alferov was born on 15 March 1930 in Belarus, which was then part of the Soviet Union. After graduating from the Electrotechnical Institute in Leningrad in 1952, Alferov moved to the Ioffe Institute in St Petersburg, where he spent the remainder of his career. In 1970 he was awarded a doctorate in physics and mathematics from the institute and became its director in 1987 — a position he held until 2003.

An optics revolution

It was at the Ioffe Institute where Alferov carried out his Nobel-prize-winning research. In the early 1960s, Alferov began working on semiconductor heterostructures – devices that contain thin layers of different semiconductors, usually based on gallium arsenide, stacked on top of each other. In 1963 he proposed building semiconductor lasers from such heterostructure devices – a proposal also made independently by Kroemer. In 1969, Alferov and his team built the first semiconductor laser from gallium arsenide and aluminium gallium arsenide, and the following year they managed to get it to work continuously at room temperature.

The finding revolutionized the control of light signals in electronics in much the same way that the transistor had earlier revolutionized the technology of electric currents. The heterotransistor, or heterojunction, allowed the development of affordable miniaturized semiconductor appliances that have transformed daily life, underpinning a whole range of gadgets, including CD players, fibre-optic-cable networks and more efficient solar cells.

For this work, Alferov shared half the 2000 Nobel Prize for Physics with Kroemer “for developing semiconductor heterostructures used in high-speed- and opto-electronics”, while the other half was given to Kilby for the invention of integrated circuits.

Alferov was awarded many other honours, including the Lenin Prize in 1972 and the USSR State Prize in 1984, and in 1986 received the Order of Lenin – the highest civilian distinction bestowed by the Soviet Union. Alferov was also a fellow of the Institute of Physics, which publishes Physics World. In his later life, Alferov moved into politics. In 1995, he was elected to the Russian Parliament, the State Duma, and was re-elected in 1999, 2003 and 2007.

Portable fluorometer detects breast cancer cells

A portable fluorometer designed to detect fluorescence emitted from labelled cancer cells has been successfully validated by researchers at the University of Saskatchewan. The device — constructed of inexpensive, off-the-shelf products — is targeted at researchers in hospitals and laboratories for use as an alternative to expensive fluorescence imaging microscopes (Biomed. Opt. Express 10.1364/BOE.10.000399).

Principal developer Mohammad Wajih Alam and colleagues developed the device to promote the use and availability of fluorometers to medical and research labs throughout the world, especially in regions where resources are limited. They believe their fluorometer will overcome the prohibitive purchase and maintenance costs of conventional fluorescence detection instruments and the need for trained staff to operate the sophisticated imaging equipment. They hope that their design will foster clinical and biomedical research into fluorescence-based detection of multiple types of cancer.

The team designed the system so that it can be easily assembled using readily available, economical equipment. The fluorometer consists of a flashlight, a photodiode that responds in the range 400–1100 nm, an emission filter, a microcontroller and an LCD screen. The photodiode, which is immediately below the emission filter, is placed inside a custom-built 3D-printed sample chamber that houses the detection circuitry.

Visible light from the flashlight excites the sample – in this study, a breast cancer cell line engineered to express green fluorescent protein (GFP). As soon as a fluorescent signal is generated, the emitted light is detected by the photodiode. The emission filter, chosen to match the emission wavelength of the GFP, ensures that only the emitted fluorescence from the sample reaches the photodiode. The microcontroller processes the detected signal, converting the analogue signal generated by the photodiode to digital and communicating it to the display.

To validate the fluorometer, the researchers tested the system using cultured cells seeded on coverslips and mounted on glass slides. They first measured the fluorescent signal from a control cell sample (breast cancer cells without GFP), repeating the process 10 times to obtain an average base measurement. They then measured breast cancer cells expressing GFP in the same way. The system compares the average values from the samples, and if the difference is greater than 700 mV, the LCD screen displays a “Cancercell found” message.
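The decision logic described above – average 10 readings per sample, then compare the GFP sample against the control with a 700 mV threshold – can be sketched as follows. This is an illustrative reconstruction, not the team's firmware; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the fluorometer's decision logic: average
# 10 photodiode readings (in millivolts) per sample, then flag cancer
# cells if the GFP sample exceeds the control baseline by > 700 mV.

def average_reading_mv(readings):
    """Average a list of photodiode readings, in millivolts."""
    return sum(readings) / len(readings)

def detect_cancer_cells(control_readings, sample_readings, threshold_mv=700):
    """Return True if the averaged sample signal exceeds the averaged
    control signal by more than the threshold (700 mV in the study)."""
    baseline = average_reading_mv(control_readings)
    signal = average_reading_mv(sample_readings)
    return (signal - baseline) > threshold_mv

# Made-up example voltages: a strong GFP signal versus the baseline
control = [120, 118, 121, 119, 120, 122, 117, 121, 120, 119]
gfp_sample = [950, 940, 960, 955, 945, 950, 948, 952, 949, 951]
if detect_cancer_cells(control, gfp_sample):
    print("Cancercell found")  # the message shown on the device's LCD
```

In the actual device this comparison runs on the microcontroller, which also drives the LCD message.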

As the confluency of cultured cells can affect readings, the researchers evaluated cell samples with confluency of between 30% and 95%. They determined that their fluorometer required a minimum of 60% confluency to differentiate between control and cancer cells. Using cultured samples with confluency greater than 60%, the device correctly identified all control samples and nine out of 10 cancer cell samples.


“Our fluorometer detected fluorescence emitted from human breast cancer cells genetically engineered to express the green fluorescence protein. These cells served as a biologically appropriate and technically convenient clinical proxy of patient tissue for the fluorescence-based selective-detection of breast cancer cells,” wrote the authors.

“We are trying to expand the use of this device, which we designed to detect fluorescence emitted from cancer cells cultured in vitro, to see if our already compact prototype system can be further miniaturized,” Alam tells Physics World. “We are also currently working towards investigating other types of cancer with our fluorometer.”

“This device can work as an alternative to expensive commercial microscopes where the resources are limited. It can be used in remote places where diagnosis is not possible due to resource constraints,” Alam adds. “Our fluorometer is not a substitute for an MRI scanner, nor is it intended to be. But it does exhibit immense potential for future applicability in the selective detection of fluorescently-labelled breast cancer cells.”

Copyright © 2025 by IOP Publishing Ltd and individual contributors