
Physicists watch entropy in action

Physicists in the US have gained important insights into the process of crystallization by studying how tiny plastic balls spontaneously form clusters. They found that highly symmetric clusters are created much less often than those with lower symmetry, which could shed light on how clusters of atoms or molecules form just before a liquid solidifies into a crystalline solid.

The conventional view of crystallization is that a material solidifies when one or more of these tiny clusters grows past a point of no return. But while the energies of possible cluster shapes can be calculated and confirmed experimentally, understanding the role that entropy plays is much harder. In an isolated system, for example, thermodynamics favours the formation of disordered clusters (i.e. those with a high entropy) – provided that the energy of these clusters is low enough.

The problem is that the role of entropy during crystallization is difficult to observe directly, because the clusters are too small and appear and vanish much too quickly to be seen. But by using clusters of much larger particles, which can be watched in real time with an optical microscope, Vinothan Manoharan and colleagues at Harvard University in the US have been able to gain new insight into the role of entropy in the “nucleation” process.

Maximizing entropy

The team began with an array of thousands of tiny wells on a silicon chip – each well having a depth and diameter of 30 µm. The wells are filled with a mixture of water and two types of plastic balls – one type with a diameter of 1 µm and the other 80 nm. Both types of ball are jostled about by random thermal fluctuations in the water (Brownian motion).

However, when two large balls come to within about 80 nm of each other, the small balls can no longer fit between them. With no small balls left in the gap to push the pair apart, collisions from the surrounding small balls nudge the large balls together. This imbalance looks like a short-range attractive force that causes the large balls to stick together in clusters – the so-called “depletion attraction”.
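The strength of this attraction can be estimated with the standard Asakura–Oosawa argument (an illustrative sketch on our part, not a calculation from the paper). When the surface gap $h$ between two large balls of radius $R$ is smaller than the small-ball diameter, the small balls (radius $r_s$, number density $n_s$) are excluded from the gap, and the resulting imbalance in osmotic pressure $n_s k_B T$ gives an attractive energy

$$U_{\mathrm{dep}}(h) \approx -\,n_s k_B T\, V_{\mathrm{ov}}(h), \qquad U_{\mathrm{dep}}(0) \approx -\tfrac{3}{2}\,\phi_s\,\frac{R}{r_s}\,k_B T,$$

where $V_{\mathrm{ov}}$ is the volume of the excluded region shared by the two balls and $\phi_s$ is the small-ball volume fraction. For the sizes used here ($R/r_s \approx 12$), a small-ball volume fraction of, say, 20–30% would make the well several $k_B T$ deep – strong enough to hold clusters together against Brownian motion, yet weak enough that they can still rearrange.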

The team began by looking at wells containing six large balls. These tend to form clusters with either a symmetrical octahedron shape or a less symmetric complex of three tetrahedrons. Although both shapes have 12 bonds between balls, which means that they have exactly the same energy, the high-entropy tri-tetrahedron was found to be about 20 times more common than its more symmetric counterpart.

‘Ball-and-stick’ analysis

To understand why the tri-tetrahedron was more common, the physicists used a magnetic “ball-and-stick” construction toy to work out all the possible ways that the two different clusters could be oriented. From rotational-entropy considerations alone, the team expected the low-symmetry tri-tetrahedron clusters to be 12 times more numerous than the octahedrons.
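Where this factor of 12 comes from can be seen with a standard symmetry-number argument (our sketch, not the team's detailed counting). The number of distinguishable orientations of a rigid cluster – and hence its rotational entropy – falls as its rotational symmetry number $\sigma$ rises. The octahedron has $\sigma_{\mathrm{oct}} = 24$ proper rotations that map it onto itself, whereas the tri-tetrahedron has only $\sigma_{\mathrm{tri}} = 2$, so for two structures of equal energy

$$\frac{N_{\mathrm{tri}}}{N_{\mathrm{oct}}} \sim \frac{\sigma_{\mathrm{oct}}}{\sigma_{\mathrm{tri}}} = \frac{24}{2} = 12.$$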

Although this is not the factor of 20 observed experimentally, the team believes that the remaining factor of about two can be explained in terms of vibrational entropy. The tri-tetrahedron is less rigid than the octahedron, which means that the tri-tetrahedron can flop about between different configurations – further boosting its entropy.

The researchers then turned their attention to larger clusters of up to 12 balls and found that the clusters also favoured the least symmetric and least rigid configurations. Indeed, at 12 balls the most symmetric (and least energetic) structure was never observed. As a result, most of the larger clusters looked like familiar bulk-crystal structures such as hexagonal-close-packed.

Although the study illustrates the importance of symmetry in the formation of tiny clusters, Manoharan points out that crystallization involves long-range interactions between atoms that are not reproduced by the depletion attraction. When such interactions are considered in the team’s model, the effect of entropy is not as strong. Also, the studies were done in complete isolation where the clusters could come to equilibrium without being disturbed by their surroundings, which does not apply to most real systems.

The work is reported in Science.

Graphene transistor breaks new record

Physicists in the US have made the fastest graphene transistor ever, with a cut-off frequency of 100 GHz. The device can be further miniaturized and optimized so that it could soon outperform conventional devices made from silicon, says the team. The transistor could find application in microwave communications and imaging systems.

Graphene – a sheet of carbon just one atom thick – shows great promise for use in electronic devices because electrons can move through it at extremely high speeds. This is because they behave like relativistic particles with no rest mass. This, and other unusual physical and mechanical properties, means that the “wonder material” could replace silicon as the electronic material of choice and might be used to make faster transistors than any that exist today.

Phaedon Avouris, Yu-Ming Lin and colleagues at IBM’s TJ Watson Research Center in New York began making their field-effect transistor (FET) by heating a wafer of silicon carbide (SiC) to create a surface layer of carbon atoms in the form of graphene. Parallel source and drain electrodes were then deposited on the graphene, leaving channels of exposed graphene between them.

Protecting the graphene

The next step is the trickiest – depositing a thin insulating layer onto the exposed graphene without adversely affecting its electronic properties. To do this, the team first laid down a 10 nm layer of poly-hydroxystyrene – a polymer used in commercial semiconductor processing – to protect the graphene. Then a conventional oxide layer was deposited, followed by a metallic gate electrode.

The gate length is relatively large at 240 nm, but it could be scaled down in the future to further improve device performance, say the physicists.

The graphene transistor already has a higher cut-off frequency than the best silicon MOSFETs with the same gate length (these have a cut-off frequency of around 40 GHz). The cut-off frequency – the frequency at which a transistor's current gain drops to unity – marks the upper limit of useful high-frequency operation. The new device breaks IBM's previous record of 26 GHz, reported in January 2009.
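For a field-effect transistor, the cut-off frequency is usually estimated from the small-signal transconductance $g_m$ and the gate capacitance $C_G$ (a textbook relation, not a figure from the IBM paper):

$$f_T \approx \frac{g_m}{2\pi C_G},$$

which is why shrinking the gate length – and with it the gate capacitance – is expected to push $f_T$ higher still.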

‘Technologically relevant’

Unlike most other graphene FETs, which have been made from flakes of graphene, this device is made with techniques used by the semiconductor industry. “Our work is the first demonstration that high-performance graphene-based devices can be fabricated on a technologically relevant wafer scale,” Avouris said.

One shortcoming of such graphene devices, however, is that they cannot be used in digital circuits such as those found in computers. This is because graphene has zero energy gap between its valence and conduction bands – and it is this “band gap” that allows conventional semiconductors to switch currents from off to on.

Instead, such high-frequency transistors could be used to amplify analogue microwave signals in communications and imaging applications – including high-resolution radar, medical and security imaging.

The IBM researchers now plan to scale down their transistor, improve graphene purity and optimize device architecture. “Such transistors could then far outperform conventional devices,” said Avouris.

The team is also looking at ways of creating a bandgap in a graphene transistor so that it could be used in digital applications.

The result was published in Science.

Exoplanet hunting brought down to Earth

Researchers in the US, UK and Germany have used a ground-based telescope to detect organic compounds in the atmosphere of an exoplanet – that is, a planet orbiting a star other than our Sun. The result, the researchers claim, will open up the hunt for Earth-like planets to anyone with access to a decent telescope. “We expect a massive explosion of exoplanet research because it is not limited anymore to the lucky few who have access to space telescopes,” says Pieter Deroo at the California Institute of Technology.

Since astronomers made the first discovery of a planet orbiting another star in 1992, they have gone on to catalogue more than 400 of these exoplanets. The favoured hunting technique is known as the transit method, whereby astronomers monitor the light from a star and look for dips in its intensity caused by a planet sweeping in front of its parent star as it crosses our line of sight from Earth.

Homing in on exoplanets

The next stage in exoplanet research is to start looking a bit more closely at the nature of these planets with the ultimate goal of discovering a planet with habitable conditions like Earth’s. The first step is to decipher the chemical composition of exoplanetary atmospheres as this could provide information about a planet’s formation and evolution; it might also reveal the signatures of life.

To date, the most popular approach has been to adapt the transit detection method to study how starlight, observed from Earth, is affected during eclipses. The idea is a relatively simple one: compare the spectrum of the system's light when an exoplanet is in front of its parent star with the spectrum taken when it passes behind the star, along our line of sight.
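For the emission measurements relevant here, the arithmetic is simplest in the “secondary eclipse” case (a schematic sketch on our part, not the team's exact data pipeline): just before the planet passes behind the star, the telescope collects light from both the star and the planet's dayside, while during the eclipse it sees the star alone, so the planet's spectrum can be recovered from

$$\frac{F_p(\lambda)}{F_*(\lambda)} \approx \frac{F_{\mathrm{out}}(\lambda) - F_{\mathrm{in}}(\lambda)}{F_{\mathrm{in}}(\lambda)},$$

where $F_{\mathrm{out}}$ and $F_{\mathrm{in}}$ are the fluxes measured outside and inside the eclipse. Molecular features in $F_p(\lambda)$ then reveal what the planet's atmosphere is made of.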

This exoplanetary eclipse technique is proving successful with the detection of water vapour, carbon dioxide and carbon monoxide in the atmospheres of the hot-Jupiter type exoplanets HD 189733b and HD 209458b. So far, however, these discoveries have only been possible using data from telescopes located beyond the Earth’s swirling atmosphere, which tends to distort our view.

View from Hawaii

Now Deroo and his colleagues have brought the eclipse technique down to Earth. They used NASA's 3 m Infrared Telescope Facility (IRTF), located atop Mauna Kea in Hawaii, to study the light emitted by the well-observed star system HD 189733. Working in the infrared – a part of the spectrum not currently monitored by space-based planet hunters – the researchers applied a new iterative technique for removing atmospheric distortions from the data.

The resulting spectrum is in strong agreement with those acquired using space-based telescopes, confirming the presence of water vapour, carbon dioxide and carbon monoxide. What is more, the researchers detected methane by observing a fluorescent emission caused, they suggest, by the planet's close proximity to its parent star – about one-tenth of the distance between Mercury and the Sun.

Deroo says that the real beauty of his technique is that it is not limited to detecting large Jupiter-like planets such as this one. “The NASA space mission Kepler will find true Earth-analogues – the new technique will give us a great tool to characterize these planets.”

This research is published in Nature.

Entanglement pioneers bag Wolf Prize

The 2010 Wolf Prize in Physics has been awarded to Alain Aspect, John Clauser and Anton Zeilinger “for their fundamental conceptual and experimental contributions to the foundations of quantum physics, specifically an increasingly sophisticated series of tests of Bell’s inequalities, or extensions thereof, using entangled quantum states”.

The trio will share the $100,000 prize, which will be presented by the President of Israel at the Israeli parliament (Knesset) on 13 May 2010. Zeilinger, 64, is at the University of Vienna, Austria; Aspect, 62, is at the Institut d’Optique in Palaiseau, France; and Clauser, 67, is at J F Clauser and Associates in Walnut Creek, California.

The winners were involved in three pioneering experiments that established the quantum property of entanglement – whereby two or more particles display much stronger correlations than are possible in classical physics. Entanglement plays an important role in quantum computers, which in principle could outperform conventional computers at some tasks.

Violating Bell’s inequality

All three experiments measured violations of Bell’s inequality, which places a limit on the correlations that can be observed in a classical system. The first was done in 1972 at the University of California at Berkeley by Clauser and Stuart Freedman, who measured the correlations between the polarizations of pairs of photons that are created in an atomic transition. They showed that Bell’s inequality was violated – which meant that the photon pairs were entangled.
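The inequality most often quoted for such photon-polarization experiments is the CHSH form, which Clauser helped to derive (we state the general form here; the 1972 experiment in fact tested a simplified variant derived from it). For polarizer settings $a, a'$ on one side and $b, b'$ on the other, with $E$ the measured polarization correlation, any local “classical” description must satisfy

$$|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2,$$

whereas quantum mechanics predicts values of $|S|$ up to $2\sqrt{2}$ for suitably entangled photon pairs – and it is this excess that the experiments measured.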

There were, however, several “loopholes” in this experiment, making it inconclusive. It is possible, for example, that the photons detected were not a fair sample of all photons emitted by the source (the detection loophole) or that elements of the experiment thought to be independent were somehow causally connected (the locality loophole).

In 1982 Aspect and colleagues at the Université Paris-Sud in Orsay, France, improved on Clauser and Freedman's experiment by using a two-channel detection scheme, which removed the need for assumptions about photons that went undetected. They also varied the orientation of the polarizing filters during their measurements – and in both cases Bell's inequality was violated.

Closing the locality loophole

The locality loophole was closed in 1998 by Zeilinger and colleagues at the University of Innsbruck, who used two fully independent quantum random-number generators to set the directions of the photon measurements. This meant that the direction along which the polarization of each photon was measured was decided at the last instant, so that no signal – which cannot travel faster than the speed of light – could carry information to the other side before that photon was registered.

The Wolf Prize is awarded by the Wolf Foundation in Israel and is often thought to be the most prestigious prize in physics after the Nobel prize. The foundation was created in 1975 by Ricardo Wolf, a German-born inventor and diplomat.

Quantum mechanics boosts photosynthesis

Physicists in Canada and Australia have shown that nature exploits quantum mechanics to make photosynthesis more efficient. By probing light-harvesting proteins within algae using laser beams, the researchers found that quantum coherence links molecules within these proteins. They say that these links improve the transfer of energy in the production of life-supporting sugars.

Photosynthesis uses sunlight to convert carbon dioxide into energy-rich sugars. However, the protein complexes that carry out the necessary reactions do not absorb sunlight themselves. Instead, they rely on electrons being excited within pigment molecules housed in other proteins, with typically hundreds of pigment molecules supplying energy to an individual reaction centre. One reason for supplying energy indirectly in this way is that some photosynthetic reactions need the energy from several electron excitations in quick succession – something that would not be possible at low light levels if each reaction centre had to collect the photons on its own.

The transfer of energy from light-harvesting proteins to reaction centres is a highly efficient process. Scientists already know, for example, that the various pigments inside each protein are just the right distance from one another – close enough to enable fast energy transfer, but not so close that the molecular orbitals of the pigments overlap and quench their excited states. It is also known that the arrangement of the proteins allows the energy to be sent via many different routes.

Now, however, Elisabetta Collini of the University of Toronto and colleagues are proposing that the energy-transfer process is made even more efficient via quantum coherence. They suggest that the pigment molecules do not act entirely on their own, but interact so that when one molecule is excited by a photon from the Sun, it can to some extent share that excitation quantum mechanically with other pigment molecules. This superposition of excited states will then oscillate, shifting the excitations from one set of molecules to another and then back again on a very short timescale, allowing energy to be transferred to the reaction centres before it is released as light or heat.

In tune

Collini’s team studied this phenomenon in two kinds of light-harvesting protein found in cryptophyte algae, taking advantage of the fact that the main pigment in these proteins, known as bilin, can be tuned to absorb light across a wide range of frequencies. The researchers first exposed the proteins to a pair of short-duration laser pulses, exciting the constituent pigment molecules, before stimulating emission from these excited states by sending in a third pulse shortly afterwards.

The team was looking for emission frequencies that did not match the excitation frequency, because these would indicate the existence of a superposition of different states. The researchers did this by detecting the quantum-mechanical equivalent of beats – the cyclical peak in volume produced when two sound waves of slightly different frequency interfere with one another. The fact that they did indeed detect such beats is evidence, they say, that the algae take advantage of quantum coherence.

The researchers also found that the oscillations of this coherent superposition lasted for over 400 femtoseconds (4 × 10⁻¹³ s), which was much longer than expected. They had thought the oscillations would last no more than 100 fs, because this was the timescale over which they expected interference from the surrounding protein and water molecules to swamp, or “decohere”, the delicate quantum superposition state. “[We] never anticipated such remarkable effects,” says Collini's colleague Gregory Scholes of the University of Toronto – not least because bilin molecules interact more weakly with one another than other photosynthetic pigments do.
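For a sense of the energy scale involved (an order-of-magnitude illustration on our part, not a number quoted by the team): the beat period of a coherent superposition of two excited states is set by their energy splitting $\Delta E$ through $T = h/\Delta E$, so a period of roughly 400 fs corresponds to

$$\Delta E = \frac{h}{T} \approx \frac{4.1\times10^{-15}\ \mathrm{eV\,s}}{4\times10^{-13}\ \mathrm{s}} \approx 10\ \mathrm{meV},$$

a splitting far smaller than the electron-volt-scale optical excitations of the pigments themselves.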

Playing a role

Such oscillating coherence has been observed before: Graham Fleming of the University of California, Berkeley, and colleagues saw it in the light-harvesting proteins of green sulphur bacteria in 2007. That experiment, however, was carried out at a mere 77 K, significantly reducing environmental interference and therefore minimizing the problem of decoherence. The latest work, in contrast, was carried out at room temperature, suggesting that quantum coherence really does play a role in photosynthesis.

Indeed, Paul Davies, director of the BEYOND Center for Fundamental Concepts in Science at Arizona State University in the US, believes that quantum mechanics might be deployed more widely in the natural world. “My feeling is that nature has had billions of years to evolve to the ‘quantum edge’ and will exploit quantum efficiencies where they exist, even if the payoff is relatively small,” he says. “I suspect that many biological nanostructures can be understood fully only by reference to quantum coherence, tunnelling, entanglement and other non-trivial processes. The challenge is to identify such quantum goings-on amid the complex and noisy environment of the cell.”

Jacko spotted in droplet, claims physicist

Polymer surgery: is the King of Pop in this mound?

By James Dacey

Just before Christmas, I caused a bit of a splash in the blogosphere when I spotted the face of Ringo Starr in a bouncing water droplet – an image captured by physicists at Duke University in the US.

Here is another physics experiment that bears a spooky resemblance to a human face, sent to us by David Fairhurst, a physicist at Nottingham Trent University in the UK.

The ugly-looking globular mound is a droplet of polymer solution, the kind of substance you might find in the ink cartridges of your printer. As the solution began to dry, Fairhurst noticed a number of small “spherulites” begin to crystallize on the droplet surface, revealing what appears to be a tiny human face.

“I noticed it immediately and showed it to the other guys – we had a really good laugh about it,” Fairhurst told physicsworld.com.

The physicist and his group of PhD students reckon the face looks like a small girl, or possibly even the King of Pop, Michael Jackson.

I ran the image through an online face-recognition program and the names that came out included: Rachel Carson, the American environmentalist; Marlene Dietrich, the German-born actress; and (tenuously) Iggy Pop.

Oops, I think I’ve started something here!

Higgs hunters face long haul

Apart from being so huge, complex and expensive, CERN’s Large Hadron Collider (LHC) is perhaps most famous for having broken down just nine days after it switched on in September 2008. Fourteen months and some CHF40m of repairs later, the LHC came spectacularly back to life late last year as jubilant physicists collided particles at record-breaking energies.

But to reduce the chances of the LHC being derailed again by a similar accident, physicists at the Geneva lab have decided to run the collider at just half its design energy for the next 18-24 months. The decision will potentially increase the time it will take the CHF6bn machine to unearth new fundamental particles, particularly the Higgs boson.

Under the latest schedule announced this week, the 27 km circumference collider will begin to smash beams of protons into one another at an energy of 7 TeV (3.5 TeV per beam) in early March. Experiments will then continue until its detectors have accumulated one “inverse femtobarn” of data – roughly 10 trillion proton–proton collisions – with the run ending after two years at the latest. By the time it was shut down on 16 December last year after just four weeks of operation, the LHC had delivered more than 50,000 collision “events” at a record energy of 2.36 TeV to its two largest particle detectors, ATLAS and CMS.


The previous plan to step up the collision energy to 10 TeV this year was shelved following lab tests carried out late last year that simulated the accident of 19 September 2008. That accident occurred when a connection between two of the LHC’s superconducting magnets evaporated while carrying a current of 8.7 kA, puncturing the machine’s liquid-helium cooling system and causing significant collateral damage.

By opting to run at just 7 TeV, CERN is playing it safe. “5 TeV per beam now looks very risky,” LHC operations leader Roger Bailey told physicsworld.com.

Risky business

Once the 7 TeV run is over, CERN will shut the LHC down in 2012 for a year or more to prepare it to go straight to maximum-energy 14 TeV collisions in 2013. This will be a complex job that will involve replacing some 10,000 superconducting magnet connections with more robust ones.

However, Chiara Mariotti, who co-convenes the Higgs working group on the CMS experiment, says that choosing to stay at lower energies is a big price to pay in terms of the Higgs search. “We will need more than twice the data at 7 TeV compared to that needed at 10 TeV to reach the same discovery potential,” she says. “At this energy we can at best expect to exclude a Higgs with a mass between 155 and 175 GeV.”

Her CMS colleague Tommaso Dorigo, who has written extensively about the Higgs search on his blog, reckons the hope of discovering the Higgs boson at the LHC before 2012 is “faint”.

However, the decision to run at lower energies still offers plenty of opportunity for CERN researchers, who could make major discoveries such as supersymmetric particles – or even something totally unexpected – relatively early. Indeed, the run energy of 7 TeV is still 3.5 times greater than that of the Tevatron collider at Fermilab in the US, which until December was the world’s most powerful collider. What will be discovered – if anything – depends largely on how heavy such new particles are and on how easy they are to spot among “background” processes taking place in the proton–proton collisions.

Friendly rivalry

The Higgs boson is the last missing piece of the Standard Model of particle physics, and its discovery would confirm the most compelling explanation physicists have for how elementary particles acquire mass. Although the theory does not predict the mass of the Higgs boson, precision measurements of known Standard Model particles mean that its mass is unlikely to be more than 186 GeV. Meanwhile, direct searches made at CERN’s Large Electron–Positron collider – the forerunner to the LHC – rule out a Higgs that is lighter than 114 GeV.


Efforts are therefore being focused on the region in between, and not only by physicists who work at the LHC. Keen to spot evidence for the Higgs first, researchers at the Tevatron’s two experiments – CDF and D0 – have spent the past few years feverishly gathering data from proton–antiproton collisions at an energy of 1.96 TeV. These experiments suggest that physicists could be in for a long slog: a joint paper accepted for publication this week in Physical Review Letters rules out a Higgs with a mass of around 165 GeV, while disfavouring (at lower statistical significance) the region 160–180 GeV. “There is less and less room for the Higgs to hide,” says D0 co-spokesperson Stefan Söldner-Rembold.

A lighter Higgs?

Although such exclusion limits allow physicists on both sides of the Atlantic to focus more sharply on the region where the Higgs might exist, the data – when taken with indirect limits from measurements at previous colliders – tentatively point to a light Higgs, which would be harder to discover. For example, a Higgs weighing less than about 140 GeV would be less likely to decay into pairs of W or Z bosons, which would leave clear, quick-to-find signatures in the LHC’s detectors, and more likely to decay into pairs of b-quarks, which are much harder to distinguish from background. The LHC experiments would therefore need to collect more data to build a strong enough statistical case to identify a Higgs “signal”.

Although the Tevatron does not have the capability to discover the Higgs outright – that task will only be possible with the LHC – it could produce the first strong hints of the particle’s existence if the Higgs is lighter than about 160 GeV. “There is very high level of excitement at Fermilab and in other places including the US Department of Energy [which funds the laboratory],” says D0 co-spokesperson Dmitri Denisov. “But in order to claim evidence for Higgs we need to see the signal, not just exclude other areas. And keep in mind that the Higgs might not exist at all.”


The Tevatron is now expected to run in tandem with the LHC’s 7 TeV run until the end of 2011 following President Obama’s budget request, which was made earlier this week. “Anything beyond that is a guess,” says CDF co-spokesperson Jacobo Konigsberg.

As the high-energy baton passes from Fermilab to CERN, the race for the Higgs and perhaps other ground-breaking discoveries is on. “The Tevatron result certainly is adding more pressure for the LHC to join this race without delay,” says ATLAS physicist Pedro Teixeira-Dias. “Compared with the Tevatron the LHC will have a much higher Higgs cross-section and a better signal-to-background ratio, even at ‘just’ 7 TeV. But the Tevatron is now at the top of its game and is clearly not to be discounted. We live in exciting times.”

Spider web inspires fibres for industry

Spiders may not be everybody’s idea of natural beauty, but nobody can deny the artistry in the webs that they spin, especially when decorated with water baubles in the morning dew. Inspired by this spectacle, a group of researchers in China has mimicked the structural properties of spider webs in creating a fibre for industry that can manipulate water with the same skill and efficiency.

Lei Jiang of the Chinese Academy of Sciences set out with his colleagues to look at the fine detail of spider webs and the way that the silks interact with moisture in the atmosphere. They found that the water-collecting ability of Uloborus walckenaerius – a common, non-venomous spider – is the result of a network of knots that form in the web when it gets wet.

Dotted periodically throughout the web, these structural features – known as “spindle knots” – create gradients of energy and pressure between knots. The result is a sort of cascade whereby moisture condenses from the atmosphere and is then channelled towards the knots. As a result, drops of water as big as 100 µm in diameter can form.
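A simple way to picture the pressure part of this (our sketch of the generic physics of droplets on fibres of varying thickness, not the team's full analysis) is through the Laplace pressure. A small droplet sitting on a thin section of silk is more sharply curved than one sitting on a fatter spindle knot, so there is a pressure difference of order

$$\Delta P \sim 2\gamma\left(\frac{1}{r_1} - \frac{1}{r_2}\right),$$

where $\gamma$ is the surface tension of water and $r_1 < r_2$ are the local radii of curvature. This drives water from the thin joints towards the knots, where droplets can merge and grow.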

Web knots

Individual knots begin to form when tiny water droplets condense at certain sites, or “puffs”, in the spider silk. Using scanning electron microscopy (SEM), the researchers found that at these puffs the nanofibrils that make up the silk are no longer aligned but point in random directions.

Armed with this knowledge, Jiang’s team then replicated the spider fibres using polymethyl methacrylate, a synthetic polymer chosen because it bonds well with water molecules. The major technical challenge was to fine-tune these fibres to function in realistic industrial conditions, in which temperature and humidity are constantly changing. The team succeeded in creating individual fibres that could trap and transport water droplets in the same way as the spider silk.

The researchers are unsure of why the spider has evolved to possess this ability. “It could be for its drinking activities, or it could be to refresh the web structure to make it stronger and stickier for prey,” Jiang told physicsworld.com.

Smart catalysis

Mato Knez at the Max Planck Institute of Microstructure Physics, who is also interested in industrial applications inspired by spider webs, believes that it could be a tactic to protect the web. “If the water is distributed along the silk as film, this might lead to destruction. However, by allowing the droplets to grow until reaching a critical size they will presumably fall from the silk,” he says.

Jiang and his team intend to develop their research by preparing a series of materials that control water in different ways. One application could be “smart catalysis”, which can speed up a chemical reaction without needing a catalyst.

Andrew Martin, a bioengineer at Bremen University in Germany, is doubtful that this technology could be useful on a large industrial scale, but he envisages smaller-scale application. “The directionality of water collection might be useful in any rheological or microfluidic process.”

This research is published in Nature.

Bell Labs launch Ireland expansion

Bell Labs, the research arm of the telecommunications giant Alcatel-Lucent, has today announced that it will double the number of researchers at its Irish research centre in Dublin. The lab, once a powerhouse of basic physics research with seven Nobel prizes to its name, announced that it will create 70 new jobs over the next five years to carry out research into novel telecommunications devices. Alcatel-Lucent also has research centres in the US, China, India, Germany, France and Belgium.

Speaking at the launch of the expansion, Mary Coughlan, Ireland’s deputy prime minister, said that it was “a significant investment in high-calibre jobs” that would cause “Ireland’s reputation [to] grow”. She was joined by Bell Labs president Jeong Kim, who dubbed Bell Labs Ireland, which was founded in 2005, “a success story” that would benefit the local knowledge economy. The expansion of Bell Labs Ireland has been supported by the Irish government through its Industrial Development Agency.

Past glory

Founded in 1925, Bell Labs was once considered to be one of the world’s leading industrial laboratories for fundamental physics research. Bell researchers were responsible for inventing the transistor and the laser, as well as the UNIX operating system and the C programming language. Indeed, only last year former Bell Labs researchers Willard Boyle and George Smith shared the Nobel Prize for Physics for inventing the charge-coupled device – a key component of most digital cameras – in 1969. They shared the prize with Charles Kao, who was honoured for his work on optical-fibre technology.

However, when Bell Labs’ parent company AT&T split up in 1996, the once-famous lab ended up inside the newly formed equipment division, Lucent Technologies. Lucent struggled to fund Bell Labs and the number of Lucent employees fell from a peak of 160,000 to just 30,000 in 2006, before the company merged with the French telecoms firm Alcatel in December of that year. The new firm, Alcatel-Lucent, announced in August 2008 that Bell Labs would no longer carry out basic physics research but would instead focus entirely on research directly relevant to its telecoms business.

Irish home

Founded by Lucent in 2005, Bell Labs Ireland is involved in designing low-cost, high-power antennas, studying network optimization, as well as investigating novel methods to cool communications equipment. Staff at the centre have also been building devices to boost 3G mobile-phone signals in the home or in areas with low coverage. They have, for example, designed algorithms that allow a device to send a signal only to the inside of a home and not outside, preventing neighbours from hitching a ride on your signal booster.

Researchers are also investigating how they can program a network of devices to communicate with each other to manage power more efficiently – an approach dubbed “genetic programming” by Lester Ho, a computer scientist at Bell Labs. The expansion of Bell Labs Ireland will lead to new work in areas such as allowing networks to configure and optimize themselves, as well as testing new systems that can recover dissipated heat in telecommunications devices.

“What I like is that there is still an academic feel here,” says Peter Cogan, one of a handful of physicists at the Dublin centre. “There are around 11 nationalities, with 50% being international and 50% Irish.” Cogan, who did a PhD and a postdoc in gamma-ray astronomy before joining the lab last March, also points out that his day-to-day tasks are similar to those he performed as a researcher. “Then I was writing programs and doing data analysis. That was to understand basic physics, but now I am doing similar things to understand network optimization.”

London, the ‘polycentric’ city

How do commuters move around in big cities? Most people would assume that they all do pretty much the same thing: travel from the outskirts to the centre, and then back again. Yet according to a group of physicists in the UK and France, this is not the case.

“The popular conception of a city – that people work in the centre and live around the edge – is, to a certain extent, a gross simplification of what actually happens,” says Michael Batty, director of the Centre for Advanced Spatial Analysis at University College London (UCL). “The notion that one could simplify the sort of complexity that is evident is probably a non-starter.”

For decades town planners have analysed how people move in cities to figure out how to reduce congestion. However, data typically come from samples, such as household surveys, performed every five to ten years, and these only give a sketchy overview.

Popular journeys in London

Batty, together with colleagues at UCL and the School for Advanced Studies in Social Sciences in Paris, has investigated how people move around in London using data derived from subway travel cards – or “Oyster cards”, as they’re commonly called. Because such cards record where most journeys begin and end, the researchers were able to build up an accurate hierarchy of the most popular journeys taken in London.
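In essence, the analysis starts from an origin–destination count. The sketch below is ours, with made-up station names and a hypothetical `journeys` list standing in for real Oyster-card records; it simply shows the basic bookkeeping of ranking station pairs by how often they are travelled:

```python
from collections import Counter

# Hypothetical journey records: (entry_station, exit_station) tuples
# extracted from anonymized travel-card taps. Real Oyster data would
# contain millions of these.
journeys = [
    ("Notting Hill Gate", "Bank"),
    ("South Kensington", "Westminster"),
    ("Notting Hill Gate", "Bank"),
    ("Stratford", "Canary Wharf"),
    ("South Kensington", "Westminster"),
    ("Notting Hill Gate", "Bank"),
]

# Count how many times each origin-destination pair occurs.
od_counts = Counter(journeys)

# Rank the most popular journeys - the "hierarchy" described in the article.
for (origin, destination), n in od_counts.most_common(3):
    print(f"{origin} -> {destination}: {n} journeys")
```

On real data, the same counting step would feed the clustering that reveals which parts of the city are strongly linked to which.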

Some of the results might surprise commuters and town planners alike. Commuters starting in different locations will often travel roughly the same distance, yet there is a huge variation – a “heterogeneity” – in where they travel to. For example, while Batty’s group found a strong travel link between the main financial district (the City) and Notting Hill, they found no similar link between the City and South Kensington, just a few miles south. Instead, South Kensington linked strongly with Westminster. The implication, therefore, is that those living in Notting Hill tend to work in the City, while those living in South Kensington are more likely to work in Westminster.

Overall, the researchers found that London contained no single centre, but instead has around 10 “polycentres” that interlink in complex patterns. “One of the conclusions is that city centres in big global cities like London really have to be unpacked and looked at in detail,” explains Batty. “But having said that, there’s an implicit conclusion that if you looked at any city centre, on whatever scale, you would find it to be considerably more heterogeneous than you had assumed it to be in the past.”

Extended to mobile phones

Indeed, the researchers’ study is a good example of how data derived from everything from GPS trackers to banknotes can be used to analyse human mobility (see “The flu fighters”). Marta González, a physicist who studies human mobility at the Massachusetts Institute of Technology in the US, says that it could help develop strategies for reducing congestion. “I really benefited [from examining Batty and colleagues’ techniques], which could be extended to analyze flow from other data sources, such as mobile phones,” she adds.

Batty isn’t certain how the study will be used, but suggests that it might throw light on future travel projects in London, such as the underground “Crossrail” plan. He also believes that the analysis could be repeated in other places that have automated subway ticketing, including New York City, Singapore, Hong Kong and Tokyo.

The research can be found as a preprint at arXiv:1001.4915.
