Willard Boyle: 1924–2011

Willard Boyle, who shared the 2009 Nobel Prize for Physics, has died at the age of 86. Boyle shared one half of the prize with George Smith for inventing the charge-coupled device (CCD). Boyle and Smith were both working at Bell Laboratories in New Jersey when they made their discovery in 1969 – Boyle was director of device development at the lab and was Smith’s boss; Smith was a department head. The other half of the 2009 prize went to Charles Kao for his work on optical fibres.

Boyle was born in Amherst, Nova Scotia, on 19 August 1924. His family moved to a remote logging community in Quebec, where Boyle was home-schooled by his mother until the age of 14. After serving in the Royal Canadian Navy in the Second World War, he attended McGill University, receiving a PhD in physics in 1950. Boyle joined Bell Labs in 1953, where he spent the rest of his career before retiring in 1979 and returning to his native Nova Scotia.

Like many Nobel laureates, Boyle received the prize late in life, at the age of 85. As his long-time friend and Nova Scotia local councillor Ron MacNutt told the Canadian Broadcasting Corporation yesterday, Boyle “had some regret that that recognition came a little bit late for him to get out and do more of that, to talk to younger children in school”. An earlier award, MacNutt added, might have let Boyle influence even more people in his life.

Revolutionary pioneer

The invention of the CCD revolutionized photography because the devices allow images to be converted directly into digital data rather than using film. CCDs once formed the basis of all digital cameras, but have since been replaced by CMOS sensors in most low-cost applications such as mobile phones and some digital cameras. CCDs are also widely used in astronomy, with the Hubble Space Telescope, for example, having several CCD cameras on board, including in the Wide Field Camera, which was recently upgraded.

A CCD camera contains millions of light-sensitive cells that are arranged in rows and columns to form a matrix. Incoming light is converted via the photoelectric effect into an electron, which is stored in a capacitor, with the amount of stored charge in each cell being proportional to the intensity of light. The charges are then transported to the edge of the CCD matrix to be read out, allowing the image to be reconstructed from the contents of each pixel.
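The readout sequence described above can be sketched as a toy model in a few lines of Python. This is purely illustrative, not a real device design: charge packets are represented as numbers, shifted row by row into a serial register and then read out pixel by pixel.

```python
# Toy model of CCD readout: stored charges are shifted row by row into a
# serial register, then read out one pixel at a time.

def read_out(ccd):
    """Shift the rows of stored charge into a serial register and read
    each pixel in turn, reconstructing the image."""
    image = []
    frame = [row[:] for row in ccd]  # copy; a real CCD moves charge destructively
    while frame:
        serial_register = frame.pop()      # bottom row shifts into the register
        row = []
        while serial_register:
            row.append(serial_register.pop())  # one charge packet shifts out;
            # in hardware it would be converted to a voltage here
        image.append(row[::-1])
    return image[::-1]

charges = [[3, 1, 4],
           [1, 5, 9],
           [2, 6, 5]]
print(read_out(charges))  # the stored charge map, recovered in order
```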

Boyle was made a Companion of the Order of Canada in 2010 and received several other awards for his CCD work – including the IEEE’s Morris N Liebmann Memorial Award, which he shared with Smith. He is survived by his wife Betty and three children.

Nanotech industry comes under fire

A UK researcher is calling into question the capability of the nanotech industry to turn fundamental research into robust technologies that can be produced on a large scale. Mike Kelly at the University of Cambridge argues that manufacturing constraints will prevent structures smaller than 3 nm in size from being mass-produced using “top-down” processes.

“There are many billions of dollars being spent on one-offs while the bull in the china shop is the ultimate challenge of manufacturability,” Kelly told physicsworld.com.

Kelly is making these claims after carrying out a case study of the production of vertical nanopillars, which have been touted for uses in sensors and displays. These components can be produced at present by two different top-down approaches: using a metal particle catalyst to grow the pillars, or infilling holes in a resist layer that can be defined using lithography.

Intolerable variations

Kelly considered the variability in the properties of nanopillars created using these two production methods and found that a large standard deviation in size and shape started to creep in once the scale reached below 3 nm. This, he says, would lead to intolerable variations in their electronic, optical and other properties if the nanopillars were to be used in applications.
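The statistics-of-small-numbers argument can be illustrated with a back-of-the-envelope Poisson estimate: a feature built from N atoms suffers relative fluctuations of order 1/√N, so the spread balloons as the feature shrinks. The atom density below is a round-number assumption, not a figure from Kelly's study.

```python
import math

# Poisson estimate of size variation for a shrinking feature:
# relative spread ~ 1 / sqrt(number of atoms in the feature).

ATOM_DENSITY = 50.0  # atoms per nm^3, an order-of-magnitude assumption

def relative_fluctuation(feature_size_nm):
    """Estimated relative spread for a cube-shaped feature of this size."""
    n_atoms = ATOM_DENSITY * feature_size_nm ** 3
    return 1.0 / math.sqrt(n_atoms)

for size in (10, 3, 1):
    print(f"{size} nm feature: ~{100 * relative_fluctuation(size):.1f}% spread")
```

On these illustrative numbers the spread grows from well under 1% at 10 nm to over 10% at 1 nm, which is the flavour of Kelly's concern about sub-3 nm features.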

On the back of his findings, Kelly believes that, given specific tools for fabrication, some nanostructures are intrinsically unmanufacturable. “The statistics of small numbers means that currently used methods will simply produce artefacts with too big a variation between adjacent features to be useful for anything,” he said. “If I am wrong, and a counter-example to my theorem is provided, many scientists would be more secure in their continued working, and that is good for science.”

‘Perfect’ structures

Rod Ruoff, a materials scientist at the University of Texas at Austin, agrees that some nanostructures below certain sizes will not be sufficiently stable in their structures and chemistries for all applications. “It is thus valuable to raise questions about what length scales are realistic, for each technology, for each application,” he said.

But Ruoff disagrees that bottom-up processes cannot be used to produce consistent specifications in bulk, and he highlights carbon as a promising material. “One of the attractive features of carbon nanotubes and graphene is their relative chemical stability, and the fact that such nanostructures can in fact be ‘perfect’ in structure even at the Angstrom length scales.”

Order from disorder

This is a view shared by Philip Moriarty, a nanomaterials researcher at the University of Nottingham in the UK, who says that the limitations of top-down fabrication are already well-recognized by academics and industrial scientists. “This has prompted an intense research effort focussed on exploiting so-called bottom-up techniques such as self-assembly and, in particular, directed self-assembly.”

By this, Moriarty is referring to processes in which disordered systems of components form organized structures as a consequence of specific, local interactions among the components. This directed self-assembly is described by some as a hybrid of top-down and bottom-up fabrication techniques.

Moriarty believes that Kelly has presented the “effective error bars” associated with nanofabrication, rather than a general theory. He also said that he disagrees fundamentally with the assertion that scientific research should be focused on what can be manufactured now. “Fundamental nano scientific research should not be constrained by our current understanding of the limits of nanomanufacturability.”

This research is described in the journal Nanotechnology.

Nano-antenna fashions charge from light

A new device that collects and focuses light before converting it into a current of electrons has been developed by researchers at Rice University in the US. The nano-optical antenna and photodiode – the first device of its kind – could potentially be used in a variety of applications such as photosensing, energy harvesting and imaging.

Conventional antennas, which are widely used to transmit radio or TV signals, can be used at optical frequencies as long as the device is shrunk to the nanoscale. Such optical nano-antennas work by exploiting “plasmonic modes”, which increase the coupling between light emitted by neighbouring molecules and the antenna.

Naomi Halas and colleagues have now taken advantage of these plasmonic modes to make the first optical nano-antenna that also works as a photodiode – a type of photodetector capable of converting light into either current or voltage. Halas’s team made its device by growing rod-like arrays of gold nano-antennas directly onto a silicon surface – so creating a metal–semiconductor (or Schottky) barrier formed at the antenna–semiconductor interface.

When light hits the antenna, it excites oscillating waves of electrons, known as surface plasmons (so-called because they travel near the surface of the metal). These energetic or “hot” electrons are then injected into the semiconductor over the Schottky barrier, thus creating a detectable photocurrent without the need for an applied voltage.

The resonators made by the researchers, who report their work in Science, have heights and widths of 30 nm and 50 nm, respectively, and are between 110 nm and 158 nm long. Each 15 × 20 array consists of 300 devices with a spacing of 250 nm between the antennas. The structure is surrounded by an insulating later of silicon dioxide and the ensemble is then electrically connected through an electrode made of indium tin oxide.

Applications galore

One advantage of the device is that the photocurrent generated is no longer limited to photons with energies above the band gap of the semiconductor; instead, the threshold is set by the lower height of the Schottky barrier. The device can thus detect light below the band gap of the semiconductor, and at room temperature to boot. “The result is important because it enables a new way to capture and detect infrared photons using cost-effective, sustainable semiconductor materials such as silicon,” Halas told physicsworld.com.
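The threshold argument can be made concrete with a simple photon-energy check. The barrier height below is an illustrative assumption (real metal–silicon barriers depend on the interface), chosen only to sit well below silicon's band gap; the point is that photons too red for bulk silicon can still clear the Schottky barrier.

```python
# Photon energy in eV is E = hc / wavelength, with hc ~ 1239.84 eV nm.
# A photon is detected if its energy exceeds the relevant threshold.

HC_EV_NM = 1239.84    # h*c in eV nm
E_GAP_SI = 1.12       # eV, band gap of silicon
PHI_BARRIER = 0.50    # eV, assumed (illustrative) Schottky barrier height

def detectable(wavelength_nm, threshold_ev):
    """True if a photon of this wavelength clears the energy threshold."""
    return HC_EV_NM / wavelength_nm > threshold_ev

for wl in (800, 1300, 1550):  # near-infrared and telecom wavelengths
    print(f"{wl} nm: bulk Si {detectable(wl, E_GAP_SI)}, "
          f"antenna diode {detectable(wl, PHI_BARRIER)}")
```

At 1300 nm and 1550 nm the photon energy falls below silicon's band gap but above the assumed barrier, which is why the antenna diode can respond where a bare silicon photodiode cannot.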

As the plasmon resonance wavelengths in the device are in the near-infrared part of the electromagnetic spectrum, with shorter nanorods giving shorter resonance wavelengths, applications for the device could include silicon-based solar cells that would work in the infrared as well as in the visible parts of the spectrum. The fact that the devices work across a broad infrared range also means that they could be used to make low-cost silicon infrared-imaging detectors that might replace costly indium-gallium-arsenide detectors that work in the same spectral range.

“The range of potential applications for this device is extremely diverse,” says Halas. “For example, as it is capable of detecting sub-band-gap photons, it could find widespread use in on-chip silicon photonics that would no longer need to integrate additional semiconductor materials as detectors into chip designs – something that would also lower fabrication costs.” Halas adds that such nano-antennas could also be used in “unforeseen applications”, such as photosensing, energy harvesting, imaging and light-detection technologies.

Multiple valleys boost thermoelectric performance

Physicists in the US and China have boosted the performance of a common thermoelectric material by modifying its electronic band structure. The improvement was made by carefully adjusting the relative abundances of tellurium and selenium in a lead alloy. The result is a material with a thermoelectric figure of merit of 1.8, which could lead to new types of thermoelectric devices that convert waste heat into useful electricity.

A thermoelectric generator converts heat directly into electricity and comprises two thermoelectric semiconductors – one n-type and the other p-type. Devices based on lead telluride (PbTe) have been used to generate electricity since the 1960s. These have been used on space missions – where radioisotopes provide the heat – and in commercial systems here on Earth where the heat is generated by burning gas or another fuel. In principle, thermoelectric systems could also capture waste heat from anything from solar panels to car exhausts to nuclear power stations, thereby improving the energy efficiency of these processes. But before this can happen, scientists must boost the performance of thermoelectric materials.

To be of any use, a thermoelectric material must be good at conducting electricity but poor at conducting heat. It must also have a large thermopower, which is the ratio of the voltage to temperature difference across a material. These requirements are expressed in the thermoelectric figure of merit ZT, which should be greater than about 1.5.
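These requirements combine in the standard definition ZT = S²σT/κ, where S is the thermopower (Seebeck coefficient), σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. A quick sketch with round, illustrative numbers (not values taken from the paper):

```python
# ZT = S^2 * sigma * T / kappa: a good thermoelectric wants a large
# thermopower and electrical conductivity but a small thermal conductivity.

def figure_of_merit(seebeck, conductivity, thermal_conductivity, temperature):
    """ZT from the Seebeck coefficient S (V/K), electrical conductivity
    sigma (S/m), thermal conductivity kappa (W/(m K)) and T (K)."""
    return seebeck ** 2 * conductivity * temperature / thermal_conductivity

# Round illustrative values chosen to land near the ZT of 1.8 reported at 850 K:
# S = 250 uV/K, sigma = 5e4 S/m, kappa = 1.5 W/(m K)
zt = figure_of_merit(250e-6, 5e4, 1.5, 850)
print(f"ZT = {zt:.2f}")  # prints ZT = 1.77
```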

Promising materials

PbTe-based materials have long been seen as good candidates because they have relatively high figures of merit, long lifetimes and can operate at the high temperatures that occur in car exhausts and similar environments. However, the figures of merit for these materials have remained stubbornly below one.

In this latest work, Jeffrey Snyder and colleagues at the California Institute of Technology and the Chinese Academy of Sciences have modified the electronic band structure of PbTe-based materials to achieve a figure of merit of 1.8 at 850 K – a value that they describe as “extraordinary”.

The team achieved this by taking advantage of the fact that doping PbTe with selenium causes the convergence of many “degenerate valleys” within the electronic band structure of the material. According to Snyder, increasing the number of degenerate valleys increases the speed at which the charge-carrying holes can pass through the material. The result is a boost in the electrical conductivity and therefore the thermoelectric figure of merit.

Many valleys

The commercial thermoelectric material (Bi,Sb)2Te3, for example, is known to have a valley degeneracy of six. However, by substituting about 20% of the tellurium atoms in PbTe with selenium, Snyder’s team achieved a valley degeneracy of 16. The result is a figure of merit of 1.8 for this p-type material.

Higher ZT values have been reported before, but Snyder told physicsworld.com that he believes that this is the highest ZT to be reproduced in independent laboratories. “While I would not be surprised if I am informed of other examples I may have missed, this is the highest ZT discussed in the thermoelectric community in the past five years,” he says.

Akram Boukai of the University of Michigan described the work as “compelling”, since, he suggests, it is the first time someone has succeeded in utilizing band degeneracy for simultaneously enhancing the electrical conductivity while not significantly affecting the thermopower. “I believe this may lead to practical devices since this was done using a bulk process amenable to large-scale manufacturing,” he says. However, he points out that a suitable n-type material must also be developed.

To make a practical thermoelectric generator – with a high overall figure of merit – both p- and n-type materials must be found with high average ZT values over a wide temperature range of about 50–500 °C. The ZT of samples made by Snyder and colleagues drops off rapidly with temperature, and the team is now working on improving this by further engineering the electron energy levels. Snyder says that the team has also made progress in creating a promising n-type material.

The research is reported in Nature 473 66.

ALICE in wonderland

A major part of the Large Hadron Collider’s appeal, which has brought it recognition far beyond the particle physics community, is the sheer “bigness” – of both the experiments and the questions they are designed to address.

This is undoubtedly true of the ALICE experiment, which is seeking to recreate the conditions that existed just a few millionths of a second after the Big Bang. In doing so, the ALICE collaboration has recorded the highest temperatures and densities ever produced in an experiment on Earth.

In this interview with physicsworld.com, David Evans, the leader of the UK team working at ALICE, describes the huge engineering effort that went into constructing the detector. He goes on to explain how ALICE is designed to shed light on some of the biggest mysteries in physics, such as the nature of the strong interaction that binds quarks into protons and neutrons. Press “play” for the full story.

The first American in space

Celebrating 50 years of US manned spaceflight (Courtesy: USPS)

By Michael Banks

If you haven’t noted it in your diary yet, today marks the 50th anniversary of the first American in space.

On 5 May 1961 NASA astronaut Alan Shepard blasted off on a Redstone rocket from Cape Canaveral as part of the US Mercury manned space programme, which had the goal of putting a human in orbit around the Earth.

Shepard, one of seven astronauts chosen for the Mercury programme, successfully completed the 15-minute suborbital flight, which carried him to an altitude of 187 km. He became the second person in space, after Yuri Gagarin’s successful orbit of the Earth on 12 April 1961.

To mark the anniversary, LIFE magazine has published 30 images taken on the day by LIFE photographer Ralph Morse, including 13 previously unseen photographs.

Indeed, Morse was dubbed “the 8th Mercury astronaut” by NASA astronaut John Glenn (who in 1962 went on to become the first American to orbit the Earth) because he spent many years with the astronauts as they trained. You can view the slideshow of images on the LIFE website.

The United States Postal Service has also commemorated the anniversary by unveiling a pair of stamps: one features a grinning Shepard (see image above), while the other features an image of the MESSENGER spacecraft, which successfully entered orbit around Mercury in March.

Talking about gravitation

By Tushna Commissariat

As I was looking through all that is new and exciting in the world of physics this morning, I came across this interesting paper titled “Persistence of black holes through a cosmological bounce”, recently published on the arXiv preprint server. The paper looks at the possibility of certain black holes persisting when the universe collapses in a “big crunch”, only to stick around for the universe to re-expand with a “big bounce”. The paper was written specifically for the 2011 Awards for Essays on Gravitation held by the Gravity Research Foundation. Upon investigation, I found another two submissions published on arXiv, entitled “Birkhoff’s theorem in higher derivative theories of gravity” and “Quantum gravity and the correspondence principle”.

The Gravity Research Foundation was founded by Roger W Babson, a graduate of the Massachusetts Institute of Technology who had an interesting relationship with gravity. In his youth, his older sister drowned in a river near their home, prompting him to write an essay titled “Gravity – our enemy no. 1”, in which he claimed that it was gravity that killed her. “She was unable to fight gravity, which came up and seized her like a dragon and brought her to the bottom,” he wrote.

Later he owed a debt of sorts to the theory of gravity, as it helped him to predict the 1929 stock market crash based on the principle that a strong upward action would be followed by a severe downward reaction. “What goes up will come down,” he said. “The stock market will fall by its own weight.”

Gravity was a neglected area of physics in the 1940s. To energize the field, Babson set up the Gravity Research Foundation at the encouragement of his colleague George Rideout; the foundation handed out its first awards for the best essays submitted on gravity in December 1949. Previous prizewinners include Stephen Hawking (who has won it six times) and British science writer and astronomer John Gribbin (who was co-author of the winning paper, with Paul Feldman, when he was only 24). An archive of all winning essays can be found on the foundation’s website.

This year marks the 62nd year of the essay award, and the foundation will announce the top five prizewinners on 15 May, so all the best to the participants. And do look out for a follow-up blog!

Antimatter trap tightens its grip

Last year physicists working on the ALPHA experiment at the CERN particle-physics lab became the first to capture and store atoms of antimatter for long enough to examine them in detail. They trapped 38 antihydrogen atoms for about one fifth of a second. Now, the same team has posted a paper on the arXiv preprint server describing how it trapped 309 antihydrogen atoms for 1000 s. This boost in both number and trapping time should lead to important insights into the nature of antimatter.

Antihydrogen – the antimatter version of the hydrogen atom – is an atomic bound state of a positron and antiproton that was first produced at CERN towards the end of 1995. The study of antimatter is important in developing our understanding of the universe and in finding out why it contains so much more matter than antimatter.

With members from seven nations, the ALPHA team shared the Physics World 2010 Breakthrough of the Year award for its capture of antihydrogen. As well as extending the previous capture time by almost four orders of magnitude, the team has gained some interesting insights into the energy distribution of the captured anti-atoms.

Ground state first

The ALPHA team produced the antihydrogen by merging two clouds of cold plasmas: one containing positrons and the other antiprotons. By improving their trapping techniques, the researchers managed to hold the antihydrogen for more than 1000 s. These advances also meant that five times as many atoms were trapped per attempt. Calculations based on data from the experiment suggest that after about 0.5 s, most of the trapped antihydrogen atoms reach their lowest energy or ground state. As a result, the team says that its trapped sample is the first antihydrogen obtained in the ground state.

The researchers have also managed to make the first measurements of the energy distribution of the trapped anti-atoms. These data, along with computer simulations, should pave the way to a better understanding of trapping dynamics. The team carried out 40,000 simulated trapped antihydrogen events and compared them with the 309 experimental ones, to study the trapping and release processes.

Studying CPT violation

The ability to trap antihydrogen for long periods of time could lead to precision tests of charge–parity–time (CPT) violation, which could help explain why the universe contains so little antimatter. Other possible experiments include microwave spectroscopy of the antimatter and even laser and adiabatic cooling of antihydrogen to temperatures where gravitational effects are observable, according to the researchers.

The paper is currently under review for journal publication and therefore the ALPHA researchers were unable to comment further.

The research is described in arXiv:1104.4982.

For a detailed explanation about how the ALPHA experiment creates antimatter see “Antihydrogen trapped at CERN”.

Vision of beauty

My childhood hero from the 1970s was the Six Million Dollar Man. Equipped with his superior bionic eye, he easily outwitted the villains as they stumbled through their evil plots using only their limited, natural vision. I reminisce about this TV character every year when I show a lecture theatre full of undergraduates the first slide in my “Physics of Light and Vision” course, which shows a face with cameras staring out of the eye sockets. I then invite my students to debate the pros and cons of artificial vision.

Technological advances over the past few decades have transformed this debate from the wild speculations of science fiction into the practicalities of science fact. For one thing, the number of photodiodes that capture light in digital cameras has escalated, driven by an exponential growth of “pixels per dollar”. Furthermore, surgeons can now insert electronic chips into the retina. The grand hope is to restore vision by replacing damaged rods and cones with artificial photoreceptors – and clinical trials to show this are already under way.

The striking similarities between the eye and the digital camera aid this endeavour. The front end of both systems consists of an adjustable aperture within a compound lens, and advances bring these similarities closer each year. It works both ways. On the one hand, for example, some camera lenses now feature graded refractive indices similar to the eye’s lens. On the other, a laser-surgery technique called Lasik removes aberrations from the surface of the eye’s cornea (which acts as the first lens in the eye’s compound-lens system) in order to resemble the shape found in camera lenses. Meanwhile, a quick glance at the back of a camera shows that it is getting closer to the eye’s retina in terms of both the number of light-sensitive detectors and the space that they occupy. A human retina typically contains 127 million photoreceptors spread over an area of 1100 mm². In comparison, today’s state-of-the-art CMOS sensors feature 16.6 million photoreceptors over an area of 1600 mm².
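A quick check using only the figures quoted above shows that the retina still packs its detectors roughly an order of magnitude more densely than the sensor:

```python
# Areal density comparison using the figures quoted in the text.
retina = 127e6 / 1100    # photoreceptors per mm^2 of retina
sensor = 16.6e6 / 1600   # photosites per mm^2 of CMOS sensor
print(f"retina: {retina:,.0f} per mm^2, sensor: {sensor:,.0f} per mm^2")
print(f"density ratio: {retina / sensor:.1f}x")
```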

However, there are crucial differences between how the human visual system and the camera “see”, both in the physical structure of the detectors and the motions they follow. For example, the neurons in the eye responsible for transporting electrical information from the photoreceptors to the optic nerve have a branched, fractal structure, whereas cameras use wires that follow smooth, straight lines. And while a camera captures its entire field of view in uniform detail – recording the same level of information at the centre of the image as at the edges – the eye sees best what is directly in front and not so well at the periphery. To allow for this, the eye constantly moves around, exploring one feature at a time for a few moments before glancing elsewhere at another, in a gaze pattern that is fractal; whereas the camera’s gaze is static with a pattern that is described by a simple dot.

The differences between camera technology and the human eye arise because, while the camera uses the Euclidean shapes favoured by engineers, the eye exploits the fractal geometry that is ubiquitous throughout nature. Euclidean geometry consists of smooth shapes described by familiar integer dimensions, such as dots, lines and squares. The patterns traced out by the camera’s wiring and motion are based on the simplicity of such shapes – in particular, one-dimensional lines and zero-dimensional dots, respectively. But the eye’s equivalent patterns instead exhibit the rich complexity of fractal geometry, which is quantified, as we will see, by fractional dimensions. It is important that we bear in mind these subtleties of the human eye when developing retinal implants, and understand why we cannot simply incorporate camera technology directly into the eye. Remarkably, implants based purely on camera designs might allow blind people to see, but they might only see a world devoid of stress-reducing beauty.

More than meets the eye

Early theories of human senses highlighted some of the unique qualities of the eye. The “detectors” associated with hearing, smelling, tasting and touching are all passive. They gather information that arrives at the body. For example, the ear and nose wait for sound waves and airborne particles to arrive before they respond. Consequently, the early Greek philosophers of the atomist school proposed an equally passive theory of human vision in which the eye collected and detected “eidola” – a mysterious substance that all objects shed continually.

Unfortunately, although the concept of eidola provided an appealing theory for human vision, it triggered an avalanche of scientific problems in terms of the world being viewed. For example, let us say I receive eidola from the Cascade Mountains seen from my office window in Oregon. Would the mountains not wear down, given that they must emit enough eidola for the other million people who also view the Cascades on a daily basis?

Luckily, optical theories emerged to save the atomists from their increasingly contrived models of the material world and gradually progressed towards the geometric optics that we enjoy today. But even the best optical theories suffered from a weakness: given that light rays bounce off a friend’s face, why can we not spot it immediately in a crowd – even though it is directly before our eyes? We are forced to conclude that the visual system is not passive but that it has to hunt for the information we need.

Hunting is a necessary strategy for the eye because the world relentlessly bombards us with visual stimuli. Our basic behaviour is composed of strategies aimed at coping with this visual deluge. For example, we walk round a corner at a distance that ensures that the scene emerges at a rate that we can process. However, our biggest strategy for coping lies in the way the photoreceptors are distributed across the retina and in the associated motion of our eyes.

If the eye employed the Euclidean design of cameras and distributed its photoreceptors in a uniform, 2D array across the retina, there would simply be too many pixels of visual information for the brain to process in real time. Instead, most of the eye’s seven million cones are piled into the central region of the retina. The cone density reaches 50 cones per 100 µm at the centre of the fovea, which is a pin-sized region positioned directly behind the eye lens. Unlike the camera’s passive collection of information, the eye instead has to move to ensure that the image of interest falls mainly on the fovea. Consequently, although the fovea comprises less than 1% of the retinal size, it uses more than 50% of the visual cortex of the brain.

We know a lot about how the eye moves in certain situations. If viewing a face, for example, we look first at the eyes and then the mouth. But little research has focused on how we search for information in a more complex scene. On reflection this seems to be an oversight, as the evolution of our visual system has been fuelled by natural scenery. Typical objects in these scenes each consist of structures that repeat at different magnifications. In other words, the complexity of what we have evolved to see is built up of self-similar, fractal objects such as plants, clouds and trees. How would we therefore search for something like a tiger hiding in a fractal forest?

Gazing patterns

To address the question of how we pick out the important bits of knowledge from the vast scene before our eyes, my collaborators Paul Van Donkelaar and Matt Fairbanks (also at the University of Oregon) and I used the remote-eye-tracking system shown in figure 1a, which uses an ordinary optical camera to track the position of a participant’s pupil. To detect where the participant is actually looking a beam of infrared light is shone onto the cornea and the position of the reflected ray is measured with a separate infrared camera. The participants spent time viewing a series of computer-generated fractal patterns (figure 1b). A computer algorithm then uses information from the cameras to calculate the participant’s gaze as a function of time and generates eye trajectories similar to that shown in figure 1c.

One of the intriguing properties of a fractal pattern is that its repeating structure causes it to occupy more space than a smooth 1D line, but not to the extent of completely filling the 2D plane. As a consequence, a fractal’s dimension, D, has a value lying between 1 and 2. By increasing the amount of fine structure in the fractal, it fills more of a 2D plane and its D value moves closer towards 2. We tweaked this parameter to generate various series of computer-generated fractal patterns, for which the dimension ranged from 1.1 to 1.9 in 0.1 intervals.

Our results showed that, when searching through the visual complexity of a fractal pattern, the eye searches one area with short steps before jumping a larger distance to another area, which it again searches with small steps, and so on, gradually covering a large area. This behaviour was observed throughout the D-value range from 1.1 to 1.9.

To quantify the gaze of the eye we again turned to fractals, as its trajectory is also like a fractal – a line that starts to occupy a 2D space because of its repeating structure. Simulated eye trajectories (figure 1d) demonstrate how gaze patterns with different dimensions would look. We employed the well-established “box counting” method to work out our values of D exactly. This involved covering each trajectory with a computer-generated mesh of identical squares (or “boxes”), and counting the number of squares, N(L), that contain part of the trajectory. This count is repeated as the size, L, of the squares is reduced. For fractal behaviour, N(L) scales according to the power-law relationship N(L) ~ L^(–D), where D lies between 1 and 2. Our results showed that, in every instance, the eye trajectories traced out fractal patterns with D = 1.5, which is what is simulated in the middle panel of figure 1d. The insensitivity of the eye’s observed pattern to the wide range of D values shown to subjects is striking. It suggests that the eye’s search mechanism follows an intrinsic mid-range D value when in search mode.
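The box-counting procedure is straightforward to sketch in code. This is a minimal illustration of the method just described, not the authors' analysis pipeline; as a sanity check it recovers D close to 1 for a smooth straight line.

```python
import math

# Box counting: cover the trajectory with square boxes of side L, count
# the occupied boxes N(L), shrink L and repeat, then estimate D from the
# slope of log N(L) against log(1/L).

def box_count_dimension(points, box_sizes):
    """Estimate the fractal dimension D of a set of (x, y) points."""
    logs = []
    for L in box_sizes:
        occupied = {(int(x / L), int(y / L)) for x, y in points}
        logs.append((math.log(1.0 / L), math.log(len(occupied))))
    # least-squares slope of log N against log(1/L)
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# Sanity check: a smooth straight line should give D close to 1
line = [(t / 10000.0, 0.5) for t in range(10000)]
print(round(box_count_dimension(line, [0.1, 0.05, 0.02, 0.01]), 2))  # → 1.0
```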

A possible explanation for this insensitivity lies in previous studies of the foraging behaviour of animals. These studies proposed that animals adopt fractal motions when searching for food. Within this foraging model, the shorter trajectories allow the animal to look for food in a local region and then increasingly long trajectories allow it to travel to unexplored neighbouring regions and then on to regions even further away. The interpretation of this behaviour is that, through evolution, animals have found it to be the most efficient way to search an area for food. Significantly, fractal motion (figure 1d, middle) has “enhanced diffusion” compared with Brownian motion (figure 1d, right), where the path mapped out is, instead, a series of short steps in random directions. This might explain why a fractal trajectory is adopted for both an animal’s searches for food and the eye’s search for visual information. The amount of space covered by fractal trajectories is larger than for random trajectories, and a mid-range D value appears to be optimal for covering terrain efficiently.
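
The difference in covering efficiency can be seen in a toy simulation. The sketch below is a hypothetical comparison (not taken from the foraging studies themselves): the “fractal” walk draws its step lengths from a power law, as in Lévy-flight foraging models, while the Brownian walk takes fixed-length steps, and we simply count how many distinct unit boxes each walk visits.

```python
import math, random

def coverage(steps, levy=False, mu=2.0, seed=1):
    """Count the distinct unit boxes visited by a 2D walk. The Brownian
    walk takes fixed-length steps in random directions; the Levy-style
    walk draws step lengths from a power law P(l) ~ l^-mu, mixing many
    short local steps with occasional long jumps to new regions."""
    rng = random.Random(seed)
    x = y = 0.0
    visited = {(0, 0)}
    for _ in range(steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        if levy:
            # inverse-transform sample: l = (1 - u)^(-1/(mu - 1)), l >= 1
            step = (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
        else:
            step = 1.0
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        visited.add((int(x // 1.0), int(y // 1.0)))  # box at each endpoint
    return len(visited)

# With the same number of steps, the power-law walk reaches
# many more distinct boxes than the purely random walk
print(coverage(5000, levy=False), coverage(5000, levy=True))
```

The Brownian walker repeatedly revisits the same neighbourhood, whereas the occasional long jumps carry the Lévy-style walker to unexplored terrain before the local searching resumes.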

Fractal therapy

Our finding that the eye adopts an innate searching pattern raises an intriguing question: what happens when the eye views a fractal pattern of D = 1.5, one that matches its own inherent gaze characteristics? Will this trigger a “resonance”? My collaborations with psychologists and neuroscientists support this hypothesis. Perception experiments performed on hundreds of participants over the past decade show that mid-D fractals are judged to be the most aesthetically appealing, and physiological measures of stress (including skin-conductance measurements and electroencephalography (EEG)) reveal that exposure to these fractals can reduce our physiological response to stress by as much as 60%. Furthermore, preliminary functional-magnetic-resonance-imaging (fMRI) experiments indicate that mid-D fractals preferentially activate distinct regions of the brain, including the parahippocampal area, which is associated with the regulation of emotions such as happiness.

Each year, the UK and the US each spend an average of $1000 per capita on stress-related illnesses, so increased exposure to computer-generated mid-D fractals could offer a novel, non-pharmaceutical approach to reducing society’s stress levels by harnessing these positive physiological responses. The current strategy is to use computer-generated images both for viewing on computer monitors and for printing out and hanging on walls. The advantage of using large flat-screen monitors is that we can generate time-varying fractals, which we believe will be important for maintaining people’s attention. We are also starting a project in which we will work with artists to incorporate stress-reducing fractals into their work. To “train” the artists, we hope to develop software that can measure the fractal dimension of any piece of art, so that they can check whether their work is hitting the optimal D values.

Crucial to our stress levels, though, is our daily exposure to nature’s mid-D fractals such as clouds, trees and river patterns, which prevent our stress levels from soaring out of control. According to our model, the physiological origin of this stress reduction lies in the commensurability between the fractal eye motion and the fractal scene, which in turn results from the non-uniform distribution of cones across the retina.

Adapting technology, not adopting

So what are the implications of the eye’s natural stress-reducing mechanism for retinal implants? Retinal diseases such as macular degeneration cause the rods and cones in the retina to deteriorate and lose functionality. Implants are inserted into the damaged region of the retina to replace the damaged photoreceptors (figure 2). Referred to as “subretinal” implants, these state-of-the-art devices typically consist of a 3 mm semiconductor chip incorporating up to 5000 photodiodes. If we want to retain the stress-reduction mechanism, the distribution of photodiodes across the implant should mimic that of the retina. The point is that if the distribution were even, the eye would no longer need to move, so it would learn not to, and this lack of motion would prevent the stress reduction from kicking in. Unfortunately, current implant designs simply feature the uniform distribution of photodiodes found in a passive camera. This discrepancy will have a growing impact as future chips replace increasingly large regions of the retina.
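
As a toy illustration of what a retina-like layout might look like, the sketch below places photodiode centres with a density that falls off with distance from the implant centre. The 1/(1 + falloff·r) acceptance rule is a hypothetical stand-in for the retina’s real cone-density profile, chosen only to make the non-uniformity concrete:

```python
import math, random

def retina_like_layout(n, radius=1.5, falloff=2.0, seed=0):
    """Place n photodiode centres inside a disc with a density that
    falls off with distance r from the centre, mimicking (crudely) the
    retina's crowding of cones towards the fovea. Rejection sampling:
    accept a uniform candidate with probability 1 / (1 + falloff * r)."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        r = math.hypot(x, y)
        if r <= radius and rng.random() < 1.0 / (1.0 + falloff * r):
            pts.append((x, y))
    return pts

layout = retina_like_layout(2000)
```

A uniform camera-style grid, by contrast, would give every region of the implant the same photodiode density and remove the incentive for the eye to move.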

This flaw emphasizes the subtleties of the human visual system and the potential downfalls of adopting, rather than adapting, camera technology for eye implants. A similar downfall would result from assuming that the implant’s photodiodes should be connected to the retina using the Euclidean-shaped electrodes found in cameras. Figure 3 shows different patterns of “wiring” and how these interface with a retinal neuron. Although macular degeneration damages rods and cones, it leaves the retinal neurons intact and so these can be used to connect an implant’s photodiode electrodes to the optic nerve. Part a shows healthy photoreceptors, while parts b–d show a series of different shapes of electrodes that could be used in implants.

Retinal neurons are fractal in structure, and the simulated neuron in figure 3, characterized by D = 1.7, closely resembles the image of a real retinal neuron shown in figure 4a. If yield is defined as the percentage of electrodes that overlap with, and therefore establish electrical contact with, a neuron, then current retinal implants (figure 3b) have a yield of 81%.

Although this yield is greater than the 46% for the configuration in figure 3a, retinal implants do not match the performance of healthy human eyes. The artificial retina would still underperform compared with a healthy retina, as healthy retinas have a higher density of photoreceptors. The number of photoreceptors connected in figure 3a is 1050, compared with only 13 photodiodes in figure 3b. Artificial retinas must therefore be somehow brought up to speed.

One way to achieve this is to increase the yield beyond 81% for artificial retinas. Figure 3c shows the 94% yield achieved by replacing the square-shaped Euclidean electrodes with fractal electrodes. This yield increase could also be achieved by using larger square electrodes, as shown in figure 3d. However, that strategy would fail to take into account another striking difference between the camera and the eye. A camera manufacturer would never route the wiring in front of the photoreceptors, because it would hinder the passage of light to them. Yet this is exactly what happens in the eye: the layer of retinal “wiring” sits in front of the rods and cones, which means that light has to pass through this layer to reach the photoreceptors. As a consequence, the implant electrodes also have to sit in front of the photodiodes if they are to connect to the retinal neurons. The increased area of electrodes in figure 3d would therefore prevent the light from reaching the implant’s photodiodes.

In contrast, the fractal electrodes of figure 3c allow both a high connection yield and high transmission of light to the photodiodes. This property results from the branching that recurs at increasingly fine scales: the fractal branches spread across the retinal plane while allowing light to pass through the gaps between them, and a high D value maximizes this effect. As noted earlier, retinal neurons have a D value of 1.7.
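
This coverage-versus-transmission trade-off can be illustrated with any branching fractal. The sketch below uses an H-tree (a simple stand-in for the actual electrode geometry, not the nanoflower design itself) drawn on a grid: its branches span almost the whole region while blocking only a small fraction of the cells, and hence of the incoming light.

```python
def h_tree(grid, x, y, length, depth):
    """Draw an H-tree (a simple branching fractal) on a boolean grid:
    each level draws a horizontal bar with a vertical bar at each end,
    then recurses from the four tips at half the length."""
    if depth == 0:
        return
    half = length // 2
    for d in range(-half, half + 1):
        grid[y][x + d] = True          # horizontal bar
        grid[y + d][x - half] = True   # left vertical bar
        grid[y + d][x + half] = True   # right vertical bar
    for cx, cy in ((x - half, y - half), (x - half, y + half),
                   (x + half, y - half), (x + half, y + half)):
        h_tree(grid, cx, cy, half, depth - 1)

N = 257
grid = [[False] * N for _ in range(N)]
h_tree(grid, N // 2, N // 2, N // 2 - 1, 5)
blocked = sum(row.count(True) for row in grid) / float(N * N)
print("fraction of cells blocked:", round(blocked, 2))
```

Although the branches reach from the centre to the outskirts of the grid, most cells remain open, letting light pass between them; a solid electrode with the same footprint would block nearly everything.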

Artificial neurons

Rick Montgomery (also at the University of Oregon) and I, in collaboration with Simon Brown at the University of Canterbury, New Zealand, employ a technique called nanocluster deposition to construct fractal electrodes with the aim of establishing an enhanced connection between retinal implants and healthy retinal neurons. In the technique, nanoclusters of material are carried by a flow of inert gas until they strike a substrate, where they self-assemble into fractal structures using diffusion-limited aggregation. These so-called nanoflowers (figure 4b) are characterized by the same D value as the retinal neurons that they will attach to.

During the deposition process, the nanoflowers nucleate at points of roughness on the substrate. Therefore, when nanoflowers are grown on top of the implant’s photodiodes, the surface roughness will be exploited to “automatically” grow the nanoflowers, making this a highly practical process for future implants. One challenge of the growth process lies in reducing nanocluster migration along nanoflower edges, which smears out the fine branches. This can be achieved by tuning the cluster sizes (which range from several nanometres up to hundreds of nanometres) and adjusting their deposition rate.
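
Diffusion-limited aggregation itself is easy to sketch. The toy Python model below (an on-lattice idealization, not a simulation of the actual deposition physics) releases random walkers from the boundary and lets them stick on first contact with the cluster, producing the characteristic branched aggregates:

```python
import random

def grow_dla(n_particles, size=41, seed=2):
    """Minimal diffusion-limited aggregation on a square lattice.
    Walkers launched from the boundary wander at random (wrapping at the
    edges) and stick as soon as they touch the cluster, which grows
    branched, nanoflower-like aggregates from the nucleation site."""
    rng = random.Random(seed)
    c = size // 2
    stuck = {(c, c)}                              # nucleation site
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(stuck) < n_particles:
        if rng.random() < 0.5:                    # start on a random edge
            x, y = rng.randrange(size), rng.choice((0, size - 1))
        else:
            x, y = rng.choice((0, size - 1)), rng.randrange(size)
        while True:
            if any((x + dx, y + dy) in stuck for dx, dy in moves):
                stuck.add((x, y))                 # touched the cluster: stick
                break
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size
    return stuck

cluster = grow_dla(120)
```

Because a wandering particle is far more likely to meet a protruding branch tip than to penetrate a fjord between branches, growth is concentrated at the tips and the cluster stays sparse and branched rather than filling in.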

The nanoflowers can be grown to match the size of the photodiodes (20 µm), and will feature branch sizes down to 100 nm. Many of the gaps between the fractal branches will therefore be smaller than the wavelength of visible light, opening up the possibility of using the physics of fractal plasmonics to “super lens” the electromagnetic radiation into the photodiodes.

Significantly, the inherent advantages of the nanoflower electrodes lie in adopting the fractal geometry of the human eye rather than the Euclidean geometry of today’s cameras. Although the superior performance of the Six Million Dollar Man’s bionic eye is still in the realm of science fiction, the road to its invention will inevitably feature many lessons from nature.

At a glance: Artificial vision

  • Surgeons restore human vision by replacing diseased photoreceptors in the retina with semiconductor implants based on digital cameras
  • The physical structure and motion of the retina are based on nature’s fractal geometry, in contrast to the Euclidean geometry used by photosensitive chips in digital cameras
  • Nanocluster growth technology will be used to self-assemble artificial neurons on the surface of future retinal implants that mimic the fractal structure of the eye’s natural neurons
  • Pattern analysis reveals that the eye searches for visual information using a fractal motion, similar to that of foraging animals, that covers an area more efficiently than random motion
  • The spatial distribution of photoreceptors across an implant has to match that found in the eye in order to trigger a physiological stress-reducing mechanism associated with the eye moving its gaze to observe fractal scenes

More about: Artificial vision

E Cartlidge 2007 Vision on a chip Physics World March pp35–38
M S Fairbanks and R P Taylor 2010 Scaling analysis of spatial and temporal patterns: from the human eye to the foraging albatross Nonlinear Dynamical Analysis for the Behavioral Sciences Using Real Data ed S J Guastello and R A M Gregson (Boca Raton, FL: CRC Press) pp341–366
S A Scott and S A Brown 2006 Three-dimensional growth characteristics of antimony aggregates on graphite Eur. Phys. J. D 39 433–438
R P Taylor et al. 2005 Perceptual and physiological responses to the visual complexity of fractals Nonlinear Dynamics, Psychology, and Life Sciences 9 89–114
G M Viswanathan et al. 1996 Lévy flight search patterns of wandering albatrosses Nature 381 413–415

Showcasing European science

By Matin Durrani, Munich, Germany


I’m sitting three rows from the back inside the gently lit conference room at the Bavarian Academy of Sciences and Humanities in Munich. The academy is housed in a grand, honey-coloured stone building that forms one wing of the huge Residenz complex, which was almost entirely rebuilt after the Second World War following the Allied bombing that left it and most of the city in ruins.

The Residenz, which looks glorious in the spring sunshine, is an appropriate and symbolic venue for the conference I’m attending, which has been organized to mark 25 years of the journal EPL.

Originally known as Europhysics Letters (it was rebranded in 2007), the journal was set up to promote and showcase the very best of European physics research. It may not yet match its great American rival – Physical Review Letters – as a journal containing short “letter” articles exploring the very frontiers of physics, but just as the Residenz was restored to its former glory, so EPL is playing a small part in rebuilding European physics.

Europe’s long realized that collaboration is the name of the game when it comes to science, with the CERN particle-physics lab being the shining example of what happens when nations work together. And so it is with EPL, which was begun in 1986 as a joint venture between the French and Italian physical societies, the UK’s Institute of Physics, which publishes physicsworld.com, and the European Physical Society.

The organizers have invited a string of top speakers – the full list is here – and bused and flown in over 100 students and postdocs from across Europe to create a good, international feel.

As for me, apart from consuming an extremely large number of fabulous mini chocolate croissants on offer in the coffee breaks, I’ve been filming some video interviews with Michael Schreiber, EPL’s current editor-in-chief, particle physicist Luisa Cifarelli, who is current EPS president, and David Delpy, chief executive of the UK’s Engineering and Physical Sciences Research Council. They will appear on this website in a few weeks’ time.

The conference dinner was held last night at one of Munich’s best known restaurants – the atmospheric Hofbraukeller – with a fabulous four-course buffet (it may have been five; I lost count).

Right, it’s coffee-break time – off for a few more of those croissants. I just hope my colleagues Fiona Walker, Claire Webber and Jo Pittam, who are also at the meeting, haven’t polished them off yet….

Copyright © 2025 by IOP Publishing Ltd and individual contributors