
A cool $50m for theoretical physics


By Hamish Johnston

In my line of work I don’t usually get to talk to multi-millionaires — but a few weeks ago I had the pleasure of speaking with high-tech magnate Mike Lazaridis, who made his fortune developing the Blackberry handheld email/mobile phone device.

Lazaridis and I were in a conference call with Neil Turok, one of the world’s leading cosmologists who had just been enticed by Lazaridis to leave Cambridge and become executive director of Canada’s Perimeter Institute for Theoretical Physics in Waterloo, Ontario.

The institute was founded in 2000 by Lazaridis, who put up about CDN$100m of his own money. Now, Lazaridis has donated a further $50m to Perimeter.

If you count the millions that he and his wife have given to the Institute for Quantum Computing at the nearby University of Waterloo, Lazaridis (who is not a physicist) has spent an amazing $200m on physics research!

When I asked Lazaridis why Turok was the right person to lead the institute he said: “We share deep convictions in the importance of basic science, the importance of funding basic science, and the importance of philanthropy in promoting basic science for the advancement of mankind”.

Lazaridis is one of a small but growing number of benefactors with deep convictions and deep pockets when it comes to the more esoteric disciplines of physics such as cosmology, astrophysics and particle physics.

Just two weeks ago an anonymous benefactor donated $5m to Fermilab, which has been particularly hard hit by US government cuts in physics spending.

And staying with the topic of funding cuts, during our conversation Turok told me that recent cutbacks in the UK made Perimeter’s offer all the more attractive — something that he has discussed in great detail in a recent interview with the Times.

Extreme UV light made easy

A new system to generate coherent extreme-ultraviolet (EUV) light has been developed by researchers in Korea. The device, based on a nanostructure made of bow-tie shaped gold “antennas” on a sapphire substrate, is smaller and cheaper than existing systems and might allow an EUV source the size of a laptop computer to be made. Potential applications for the source include high-resolution biological imaging, advanced lithography of nanoscale patterns and perhaps even “X-ray clocks”.

EUV light has a wavelength of between around 5 and 50 nm (100–10 times shorter than that of visible light). It can thus be used to etch patterns at tiny length scales and is ideal for spectroscopic applications because the wavelength is the same as that of many atomic transitions.

However, EUV radiation is currently produced in a very complicated process in which amplified light pulses from an oscillator (a source of laser light) are used to ionize noble-gas atoms. The electrons freed during this process are accelerated in the light field and their surplus energy is released as attosecond (10⁻¹⁸ s) pulses of light of different wavelengths. The shortest wavelengths of light can then be “filtered out” to produce a single EUV pulse.

Scientists would ideally like to produce EUV light directly from the oscillator without the need for expensive and bulky amplifiers. In this way, EUV-light generation could be simplified and the size of the source significantly reduced to tabletop dimensions. In contrast, current devices usually measure around 2–3 m across. Now, Seung-Woo Kim of KAIST in Daejeon and colleagues have shown that this might be possible.

The researchers report that a bow-tie nanostructure of gold — measuring around 20 nm across — can enhance the intensity of femtosecond laser light pulses by two orders of magnitude. This is high enough to generate EUV light with a wavelength of less than 50 nm directly from a weak pulse with an intensity of 10¹¹ W/cm² injected into argon gas (Nature 453 757). The input intensity needed is about 100 times lower than in traditional approaches.
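As a rough numerical sanity check, the sketch below (Python, illustrative values only) multiplies the quoted injected intensity by the reported two-orders-of-magnitude enhancement; the resulting local intensity of about 10¹³ W/cm² is of the order commonly cited as necessary for high-harmonic generation in argon, a figure that is an assumption here rather than something stated in the paper.

injected_intensity = 1e11   # W/cm^2, intensity of the femtosecond pulse quoted above
enhancement = 100           # two orders of magnitude field-intensity enhancement in the bow-tie gap
local_intensity = injected_intensity * enhancement
print(f"Local intensity in the gap: {local_intensity:.1e} W/cm^2")
# ~1e13 W/cm^2, roughly 100 times more than the injected pulse, consistent with
# the claim that the input needed is about 100 times lower than in amplifier-based set-ups.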

Surface plasmons

The technique works thanks to “surface plasmons” (surface excitations that involve billions of electrons) in the “gap” of the bow-tie gold nanostructures (see figure). When illuminated with the correct frequency of laser light, the surface plasmons can begin to resonate in unison, greatly increasing the local light field intensity. This phenomenon, known as resonant plasmon field enhancement, is already exploited in imaging techniques, such as surface-enhanced Raman scattering, which is sensitive enough to detect individual molecules on a metal surface.

Immediate applications include high-resolution imaging of biological objects, advanced lithography of nanoscale patterns and making X-ray clocks. X-ray clocks exploit a frequency-stabilized femtosecond laser and are being investigated worldwide as a more precise replacement for today’s caesium atomic clocks.

“This new method of short-wavelength light generation will open doors in imaging, lithography and spectroscopy on the nanoscale,” commented Mark Stockman of Georgia State University in a related article (Nature 453 731). The spatially coherent, laser-like light could have applications in many areas: spectroscopy; screening for defects in materials; and, if extended to X-ray or gamma-ray wavelengths, detecting minute amounts of fissile materials for public security and defence.

The team now plans to improve the conversion efficiency of the generated light by modifying the design of their nanostructure — for example, by making 3D cones with sharper tips. These will not only enable higher local field enhancement but also better interaction of the femtosecond light pulses with injected gas atoms. The team will also test the spatial and temporal coherence of the generated EUV light.

Superconductivity mystery deepens

By Michael Banks

I have been closely following events concerning a new class of iron-based superconductors ever since Physics World broke the story about their discovery in March. The new materials, containing planes of iron and arsenic separated by planes of lanthanum and oxygen, offer high-temperature superconductivity without the need for the copper-oxide planes found in the cuprates.

The challenge now is to understand how these superconductors work, i.e. what the responsible pairing mechanism is. Early calculations showed that the superconductivity cannot be described by electron-phonon coupling. The mechanism could therefore be similar to cuprate-based superconductors, which currently hold the record for the highest superconducting transition temperature (although the mechanism in the cuprates is still not understood).

Now, however, a paper published in Nature suggests that SmFeAsOF, which is the same as the material in the story we reported in March but with the lanthanum replaced by samarium, may behave quite differently to the cuprates. The paper’s authors, who are based in the US and China, show that SmFeAsOF has a ‘single gap’ in the density of states of the Cooper-pair fluid (an energy gap arises because a finite amount of energy is needed to break apart the two electrons bound in a Cooper pair). The temperature dependence of the gap was found to obey conventional BCS predictions — the theory, named after Bardeen, Cooper and Schrieffer, which proposes that electrons attract one another via phonons to form Cooper pairs.
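For readers unfamiliar with what a “conventional BCS” temperature dependence of the gap looks like, here is a minimal numerical sketch. It uses two textbook results rather than anything from the paper itself: the weak-coupling value Δ(0) ≈ 1.76 kBTc and the common interpolation Δ(T) ≈ Δ(0) tanh(1.74 √(Tc/T − 1)); the transition temperature below is purely illustrative.

import math

k_B = 1.380649e-23  # J/K

def bcs_gap(T, Tc):
    # Weak-coupling BCS gap: Delta(0) = 1.764*k_B*Tc, with the standard
    # interpolation Delta(T) = Delta(0)*tanh(1.74*sqrt(Tc/T - 1)) below Tc.
    if T >= Tc:
        return 0.0
    delta0 = 1.764 * k_B * Tc
    return delta0 * math.tanh(1.74 * math.sqrt(Tc / T - 1.0))

Tc = 50.0  # K, illustrative transition temperature, not the measured value for SmFeAsOF
for T in (5, 25, 40, 49):
    print(f"T = {T:2d} K  ->  gap = {bcs_gap(T, Tc) / 1.602176634e-22:.2f} meV")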

This is all different from the cuprates, which don’t follow BCS predictions and also have a so-called ‘pseudo-gap’, which, as I understand it, means that only certain electrons ‘see’ a gap, depending on how they travel with respect to the crystal lattice. The authors found no evidence of a ‘pseudo-gap’ in the new materials. So it seems that the materials follow BCS predictions, but with a superconducting transition temperature that is too high to be explained by electron-phonon coupling. The mystery deepens.

In another recent development, researchers in Switzerland have managed to grow single crystals of the Sm-based iron superconductor. All previous research was performed on polycrystalline samples, so the ability to study single crystals means that pinning down the elusive pairing mechanism may be a step closer.

Electrical noise measures Boltzmann constant

Physicists in the US have developed a technique that may help to make a more accurate measurement of the Boltzmann constant, a fundamental value that relates the kinetic energy of a group of particles to their overall temperature.

The technique could mark another step on the road towards redefining the kelvin unit of temperature. Currently, the International Committee for Weights and Measures (CIPM) in Paris — the hub of the metrology community — defines the kelvin as 1/273.16 of the temperature difference between absolute zero and the triple point of pure water (roughly 0 °C) at a certain pressure. However, the CIPM would prefer to define the kelvin, along with other SI units, in terms of fundamental constants. The kelvin could be obtained from the second, which is already known to about one part in 10¹⁶, and the Boltzmann constant, kB.

The best current technique for determining kB involves measuring the speed of sound in argon gas, which gives a measurement to within two parts per million. Other techniques include measuring the dielectric constant of a gas; the radiation from a black body; and the absorption of laser light by molecules. The CIPM would like to combine several such techniques to rule out systematic errors in the final value of kB.
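The physics behind the acoustic approach can be sketched in a few lines: for a monatomic ideal gas the speed of sound obeys c² = γkBT/m, so measuring c in argon at a known temperature pins down kB. The numbers below are illustrative only and ignore the many corrections a real experiment must make.

gamma = 5.0 / 3.0                  # heat-capacity ratio of a monatomic gas
m_Ar = 39.948 * 1.66053906660e-27  # mass of an argon atom in kg
T_tp = 273.16                      # K, triple point of water
c = 307.8                          # m/s, approximate speed of sound in argon near 0 degC
k_B = c**2 * m_Ar / (gamma * T_tp)
print(f"Inferred k_B ~ {k_B:.3e} J/K (accepted value ~1.381e-23 J/K)")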

Samuel Benz and colleagues at the National Institute of Standards and Technology (NIST) have developed another technique, known as Johnson-noise thermometry (JNT). “There is a lot of research worldwide with many different approaches, all trying to improve measurements of the Boltzmann constant,” says Benz. “Our approach is the only ‘electrical’ approach.”

Johnson noise is white noise generated by the random motion of electrons in all electrical components that have resistance, and has a magnitude that can be predicted directly from the component’s resistance and temperature, and kB. To get kB — or, more precisely, a ratio of kB to Planck’s constant, h, which is known with much less uncertainty — the Johnson noise must be compared with another, reliable noise source at the same temperature. The development made by Benz’s team is the realization of such a reliable noise source — a quantum voltage noise source (QVNS) that comprises thousands of superconducting “Josephson junctions”.
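The relation being exploited is the Johnson–Nyquist formula, which gives the mean-square noise voltage across a resistor as ⟨V²⟩ = 4kBTRΔf. The short sketch below uses illustrative component values (assumptions, not the NIST set-up) to show the size of the signal involved.

from math import sqrt

k_B = 1.380649e-23   # J/K
T = 300.0            # K, illustrative temperature
R = 100.0            # ohm, illustrative resistance
df = 1.0             # Hz, measurement bandwidth
v_rms = sqrt(4 * k_B * T * R * df)   # Johnson-Nyquist noise voltage
print(f"Johnson noise ~ {v_rms * 1e9:.2f} nV in a 1 Hz bandwidth across 100 ohm at 300 K")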

Using their JNT-QVNS technique, the NIST researchers have measured the ratio of kB to h with an uncertainty of 25 × 10⁻⁶ (IEEE Trans. Inst. Meas. to be submitted). Although this is less accurate than other techniques, the researchers think that they should be able to reduce the uncertainty to 6 × 10⁻⁶ in the future, which would make it an attractive method for determining the Boltzmann constant for the CIPM.

The CIPM is currently planning to redefine four of the SI units, including the kelvin, by 2011.

Technique probes nanoscale magnetism

Researchers in Japan have used a new technique to measure the magnetic and electronic structure of subsurface atomic layers in a material for the first time. The technique, dubbed diffraction spectroscopy, will be crucial for understanding nanoscale magnetism and developing high-density “perpendicular” magnetic recording materials.

Future data storage densities will soon need to exceed one terabyte (10¹² bytes) per square inch, requiring bits just 10 nm or less across. But this is the scale at which surface magnetism appears, so it is critical to understand whether there are any unusual magnetic effects.
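A quick back-of-the-envelope calculation shows why terabyte-per-square-inch densities translate into roughly 10 nm bits (illustrative arithmetic only, assuming square bits and no formatting overheads).

bits_per_sq_inch = 1e12 * 8          # 10^12 bytes expressed in bits
inch_in_nm = 2.54e7                  # 1 inch = 2.54e7 nm
area_per_bit_nm2 = inch_in_nm**2 / bits_per_sq_inch
print(f"Area per bit : {area_per_bit_nm2:.0f} nm^2")
print(f"Bit pitch    : {area_per_bit_nm2 ** 0.5:.1f} nm")   # roughly 9 nm on a side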

Fumihiko Matsui and colleagues at the Nara Institute of Science and Technology and other Japanese institutions combined two existing techniques, Auger electron diffraction and X-ray absorption spectroscopy, to create their method. They analysed “forward focusing” peaks that appear in the spectra along the directions of atoms on the surface of a sample. By examining the intensity of the peaks, they could distinguish the magnetic and electronic structures of individual layers (Phys. Rev. Lett. 100 207201).

Layer by layer

The researchers used their technique to analyse the magnetic structure of a thin film of nickel on a copper surface, an important material for magnetic data storage. Until now, the atomic magnetic structure of nickel thin films has been unclear, although scientists know that the magnetization axis in the films goes from being parallel at the material surface to being perpendicular at 10 atomic layers deep. Matsui and colleagues analysed this transition region and measured the magnetic moments in each individual layer.

Knowing exactly how these magnetic moments change throughout the structure could be useful for making perpendicular magnetic recording devices. Perpendicular magnetism should be capable of delivering more than triple the storage density of traditional longitudinal recording materials, because the magnetic particles can be packed more closely together, which means more data per square inch.

Several diffraction techniques that image atomic structure already exist, but they have their drawbacks. Scanning tunnelling spectroscopy, for example, can only analyse the surface of a sample. The Japanese team’s diffraction spectroscopy technique can be used to visualize both magnetic and electronic properties of subsurface layers at the atomic scale in a non-destructive way for the first time. “Our technique makes it possible to focus on the subsurface region, which connects surface and bulk worlds,” Matsui told physicsworld.com.

The researchers are now extending their technique to analyse the surface of superconducting materials. “We are especially interested in correlating electronic properties and geometric structure at the superconducting phase transition,” says Matsui.

Blog life: Entropy Bound

Blogger: Peter Steinberg
URL: entropybound.blogspot.com
First post: April 2004

Who is the blog written by?

Peter Steinberg is a nuclear physicist at the Brookhaven National Laboratory in New York, US. He is acting project manager of the PHOBOS experiment, which used Brookhaven’s Relativistic Heavy Ion Collider (RHIC) to search for unusual events produced during collisions between gold nuclei. He is also involved with the PHENIX experiment, which seeks to discover a new state of matter known as the quark–gluon plasma. In addition to his own blog Entropy Bound, Steinberg is currently blogging on a website that was set up last year to publicize the involvement of US scientists with the Large Hadron Collider (LHC) at CERN.

What topics does the blog cover?

Mostly events from Steinberg’s working life and physics-related news. Recently, several posts have been devoted to Walter Wagner and Luis Sancho’s lawsuit against the LHC (Physics World May p3; print edition only). Steinberg questioned Wagner’s claim to be a nuclear physicist, and this prompted a long response from Wagner in which he accuses Steinberg of libel and threatens legal action. Steinberg subsequently withdrew the offending post.

Who is it aimed at?

Entropy Bound grew out of the “Quantum Diaries” project, in which 33 physicists, including Steinberg, wrote about their life and work for a year to celebrate the International Year of Physics in 2005. This was essentially an outreach project, and the material in Entropy Bound has remained at a level appropriate for non-physicists while also maintaining an audience within the physics community itself.

Why should I read it?

One of the main things that this blog has going for it is that it looks great. Steinberg illustrates most of his posts with a well-chosen image, which often adds significantly to their appeal. The standard of the writing is also pretty good.

How often is it updated?

Not as regularly as most blogs — usually every few days to a week, but sometimes a couple of weeks can pass between new posts. This is partly as a result of Steinberg having to divide his time between this blog and his blog on the LHC, and posts on Entropy Bound sometimes consist just of a link to a new entry on the other blog. The sparseness of the posts, however, also reflects the fact that Steinberg obviously chooses his topics a bit more carefully than many scientist bloggers — when he does write about something, it is generally interesting and worth the wait.

Can you give me a sample quote?

This may seem strange in a world where scientific information is increasingly disseminated electronically (as it should be, and for free, when taxpayer dollars are paying for it!), but I just got the greatest thing in my office mailbox today: a book. A real book. While I’ve been published in my share of conference and workshop proceedings, and various papers have ended up published on real paper and stashed away in real libraries, this one feels a bit different. The book in question is the latest edition of Annual Review of Nuclear and Particle Science. I’ve always seen these on various library bookshelves over the years, and been given photocopies of various articles throughout the years. And somehow the design feels very book-ish, sober red cover, with all of the contributors’ names embossed in gold. Feels snazzy. Classy, even.

Once a physicist: Eddie Morland

How did you originally get into physics?

I did maths, physics and chemistry A-levels, and I found physics the most interesting of the three. I chose not to go to university after finishing school because I wanted to get a job and earn some money. Instead, I did a part-time applied-physics degree at Manchester Polytechnic while working for the UK Atomic Energy Authority (UKAEA) as a junior researcher. It took a lot longer than a full-time degree, but it was great to be able to apply the work from the course back in the laboratory.

What did your job at the UKAEA involve?

I was working on fracture mechanics, which studies why things break. I was looking at the steels involved in the construction of a new pressure vessel for the Sizewell B nuclear reactor in Suffolk. For the 1980s it was leading-edge stuff — we had some of the largest machines in the world. We were rotating steel cylinders 10 inches thick at 500 mph inside an enclosed oven at 353 °C while hitting the inside surface with cold water at a rate of 40 gallons per minute, and measuring the cracks that grew. That was the only way you could simulate a full-scale pressure vessel without pressurizing it. My training was slightly different to that of most other people in the field, who were mainly metallurgists. I had a slightly more mathematical and theoretical approach, which I found quite useful.

What did you do next?

After I had been with the UKAEA for about 12 years, I heard that the organization was going to move towards privatization, so I did a part-time MSc in management science to equip me for the commercial world. I then ended up at the corporate headquarters in Harwell as the key account manager, which meant that I was basically the point of contact for all our dealings with British Energy. In 1996 the company was privatized and bought British Rail Research in Derby. I became the deputy managing director of British Rail Research, which involved looking after about 300 staff. Shortly afterwards the company also bought the equivalent organization in the Netherlands, as well as other parts of British Rail and transport companies all over the world. I ended up as managing director there and eventually had about 1100 researchers to look after.

What does railway research involve?

It is a complicated interface between track and train — the train is basically balanced on a sixpence as it charges along at 120 mph, and you need to understand the reactions between the two. There’s also the mathematical problem of designing timetables and signalling systems.

How did you become involved with the Health and Safety Laboratory?

After the Hatfield disaster in 2000 [a train crash in which four people were killed], the rail industry went through a period of tremendous change and the company I worked for ultimately decided to leave the market, so I became available. A month or two later the job of chief executive of the Health and Safety Laboratory (HSL) came up.

What does the HSL do?

We provide scientific services to the Health and Safety Executive, and we also do work for private-sector companies and for other government departments. We look at how humans get sick or injured and how to prevent that happening; we look at what makes industrial plant — such as railways, aeroplanes or chemical factories — unsafe; and then we look at the human factors, which includes things like management processes, the management of stress and so on. So we’ve got everything from medical doctors looking at how people are exposed to chemicals, fire and explosion experts, right through to psychologists and social scientists. It’s a tremendously diverse scientific base.

What is your role there?

I have to manage the place and make sure we do the business. We try to make sure that we’re delivering what customers want, pouring resources into the right areas and paying our way.

Does your physics background help you?

Physics is a very broad-based subject — you get into the habit of creating a model and then testing it, and you can do that across all sorts of disciplines. Understanding scientific methodology is also very useful when crossing disciplines, for example from the nuclear world into railways. Physics really gives you the backbone to query the quality of the science.

Do you still keep up with any physics?

I’m surrounded by a lot of physicists here, and I get to see a lot of the stuff they do on a day-to-day basis.

Physicists without borders

The International Atomic Energy Agency (IAEA), which has its headquarters in Vienna, Austria, is a specialized agency of the United Nations (UN) that seeks to promote the safe, secure and peaceful use of nuclear technology. It has three main areas of expertise. It is the world’s nuclear inspectorate, sending inspectors to more than 140 UN member states, from Brazil to Japan, to verify that nuclear technology is not being used for military purposes. The IAEA also helps countries to improve their nuclear safety procedures and to prepare for emergencies. Finally, it serves as a focal point for the world’s development of nuclear science and technology across all fields.

The science and technology arm of the IAEA consists of a diverse team of several hundred scientists experienced in doing research in all areas that use atomic or nuclear technology, including medical physics, isotope hydrology, plant breeding (radiation is used to induce mutations) and nuclear fission and fusion. People with physics backgrounds can be found working on specific projects in most of these areas — and especially those related to nuclear energy.

However, the IAEA also has a dedicated “Physics Section”, which comes under the umbrella of the Department of Nuclear Sciences and Applications. This currently consists of six professional physicists, as well as a team of clerical staff, who are all based in Vienna.

“Whereas physicists in other departments are working on just one project or sub-programme, the Physics Section is in a position to support the member states with their more general physics needs,” explains section head Günter Mank. “Say a member state wants to know how neutrons can be used to detect explosives. It can come to us and the Physics Section will explain about the possibilities and the restrictions, and provide guidance on how to initiate such an activity.” For example, the section is currently providing assistance to the SESAME light-source project, which is a synchrotron facility being built in Jordan that, it is hoped, will foster scientific co-operation throughout the Middle East (see Physics World April pp16–17; print version only).

The support given by the Physics Section usually involves providing education and training in the operation of accelerators and research reactors, and helping member states to select, run and maintain the instrumentation that they need for their nuclear activities. “Together with the IAEA’s Technical Cooperation Department, for instance, we are supporting a new laboratory in Afghanistan, ensuring that they get the basic physics equipment they need for teaching students about nuclear physics and radiation physics,” Mank explains. This facility, which will be part of Kabul University, will improve the university’s existing nuclear-physics programmes and will include provision for several new medical-physics experiments.

Fostering fusion

One of the most important duties that the Physics Section has to fulfil is looking after the IAEA’s sub-programme on nuclear fusion. As part of this role, it is the facilitator of the ITER project, a co-operative venture, first proposed in the mid-1980s, to create an experimental fusion reactor. The project, which currently involves China, the European Union (EU), India, Japan, Russia, South Korea and the US, aims to construct and run a large tokamak designed to produce approximately 500 MW of fusion power sustained for up to 400 s. Last year, Mank and his colleagues were able to celebrate as ITER was formally established as an international organization and building work got under way at the project site in Cadarache, France.

Another important way in which the Physics Section supports fusion research is by organizing scientific meetings that bring together the world’s fusion experts to discuss their work. “Every two years we organize the so-called Fusion Energy Conference,” Mank explains. “This year’s conference, which will take place in Geneva in October, celebrates the 50th anniversary of the UN starting work on fusion in 1958.” (The IAEA itself turned 50 last year, see Physics World July 2007 pp8–9; print version only.)

The nature of this work means that physicists working in the Physics Section do not actually do physics research themselves. Instead, a typical working day might involve attending a scientific or administrative meeting with representatives from member states, as well as preparing for and organizing other such meetings. “I myself prepare two to three technical meetings worldwide each year, which involves interacting with the member states and liaising with the speakers or the local organizer,” says Mank.

The IAEA also provides grants or fellowships to physicists from developing countries who would like to attend these events, and administering these is the responsibility of the Physics Section. “We are also involved in about 30–40 technical co-operation projects worldwide — such as the new physics laboratory at Kabul — for which there has to be some paperwork done and items have to be procured,” Mank continues. “Overall we see ourselves as a knowledgeable facilitator for the member states.”

Cosmopolitan physics

People who come to work in the Physics Section are usually experienced physicists who already have a successful international research career. Mank, for example, is originally from Germany and has a background in fusion and plasma physics. Before joining the IAEA in 2003, he worked on international projects at the National Superconducting Cyclotron Laboratory in Michigan, US, at the Jülich Research Centre in Germany, as well as on preparations for the European Spallation Source. “The driving force for me was a desire to initiate new things in an international environment,” he explains. “Indeed, I get the impression that a lot of scientists who come to the IAEA would like to make a difference.”

Joining the Physics Section provides physicists with the opportunity to become more deeply involved in international science administration — in fact, officers in the section are often individually responsible for projects such as SESAME. “One of my officers, for example, who was an expert in using low- and medium-energy accelerators, was working with member states in Africa to build up a sub-Saharan network of accelerators,” says Mank. “The UN allows us, within restrictions, to encourage new ideas.”

To succeed in this type of role, therefore, you need to have a lot of international experience. Having worked in a range of different countries is a must, and being good at languages is beneficial. “Being a physicist in the IAEA is not only about physics, it is also about a lot of cultural interactions,” concludes Mank. “It is very important that physicists who come to the IAEA recognize the multicultural environment and the different possibilities and restrictions that other cultures present for doing science.”

• For young physicists interested in a career in scientific administration, the Physics Section takes on several interns for 3–6 months each year, see www.iaea.org/About/Jobs/internships.html

Shedding light on life

Since the discovery of X-rays in 1895, curiosity as well as clinical need has produced huge advances in our ability to visualize structures inside the human body. X-ray radiographic imaging led the way, followed in 1957 by the use of gamma-ray cameras, and then ultrasound imaging, magnetic resonance imaging (MRI), and positron emission tomography (PET). Together, these tools, in which physics plays a key part, underpin modern clinical-imaging practice.

The length scales that these techniques interrogate, however, leave much to be desired if one is interested in the very small. Clinical MRI, for example, can only resolve structures down to 100 µm. While some living cells are more than 80 µm across, interesting and important cellular processes — such as signalling between cells — may take place over length scales of much less than 1 µm.

Living cells are essentially defined by their complex spatial structures — for example, the doughnut shape of red blood cells and the elongated projections (known as axons) of neuronal cells are key to their functions. Underlying these broad morphological characteristics, however, are much finer-scale molecular assemblies, such as the cytoskeleton (a protein “scaffolding” that stabilizes the larger-scale intracellular structures) and microdomains within cell membranes, which are the loci of many molecular signalling events.

Any technique to study the properties of biological molecules and their many interactions should ideally provide spatial information, because researchers increasingly need to integrate information about the interactions that underlie a biological effect with data on where in cells these interactions take place.

Short-wavelength X-rays can, of course, be used to provide information on very small length scales and can even produce images of individual molecules — as in X-ray crystallography. The downside is that such radiation can severely damage biological materials. X-rays with slightly longer wavelengths and lower energy — which can be produced by synchrotrons — are, however, much less destructive to tissue. Indeed, these “soft” X-rays can be used to generate 3D images of living cells with a resolution as fine as 10 nm, as has been shown by research at the Advanced Light Source synchrotron at the Lawrence Berkeley National Laboratory in the US. Unfortunately, even soft X-rays are destructive and difficult to handle, and they require access to a synchrotron.

Most techniques used for cellular imaging, therefore, tend to be optical, exploiting mainly the ultraviolet and visible parts of the electromagnetic spectrum. Of these techniques, fluorescence microscopy has become the most important because it is very sensitive, enormously versatile and relatively easy to implement. Fluorescence is the phenomenon by which certain molecular structures, known as fluorophores, emit photons when excited via irradiation with light of a specific wavelength. This emission typically occurs over a timescale of 1–10 ns, which is suitable for measuring the movements and re-orientations of molecules within cells, thus allowing many biological processes to be followed.

In rare cases, the biological molecule of interest is inherently fluorescent. But usually fluorescence has to be “built into” the molecules that researchers wish to study by tagging them with a fluorophore. The tag can even be a whole protein in its own right, such as the green fluorescent protein (GFP), which is attached to and expressed at the same time as the protein of interest and only fluoresces when both are actually manufactured by the cell.

The energy, momentum, polarization state and emission time of photons emitted by fluorophores can all provide vital information about biological processes at the microscopic and nanoscopic scales. The polarization of fluorescence photons, for instance, is affected by the orientation of the fluorophore — and hence of any protein to which it is attached — and can therefore provide information about the molecular dynamics of the molecule of interest.

Much work is being done by physicists and biologists that takes advantage of each of the attributes of fluorescence — developments that are symptomatic of far-reaching changes in the way that we do science across discipline frontiers (see “Life changing physics”).

Biology meets telecoms

Whatever technique is used for imaging intracellular structures and processes, the essence of the imaging problem is the same: to extract as much information as possible from a biological sample. Some of the challenges that need to be overcome in achieving this goal, therefore, are akin to those encountered within the field of information transmission.

Communication involves sending signals — often in the form of optical pulses — down a channel to a receiver, where they are decoded and translated into “useful” information. Similarly, molecules of interest that are “lit up” in some way convey information through a microscope (the channel) to a detector or the eyes of the human investigator (the receiver), where this information reveals features of cellular structure.

In the late 1940s Claude Shannon, an engineering physicist working at Bell Labs in the US, developed some of the fundamental rules that define the capacity of an information channel. He quantified, for the first time using well-defined physical properties, the amount of information that could be transmitted down a channel, and his rules also allowed researchers to calculate the precise degree to which the data would be corrupted by imperfections and noise. These same rules apply to all communication channels, whether they convey eBay bids and Web applets across the Atlantic Ocean or information about molecular dynamics within a living cell.

Studying living cells requires collecting enough photons to form an image. This is often far from easy — partly because cells are very small, and researchers are often interested in only a few fluorophore-labelled molecules within them. But what makes the job harder is that any beam of light of finite size undergoes diffraction and therefore spreads out. This limits the minimum diameter of the spot of light formed at the focus of a lens and so limits the resolution of any traditional microscope system. As was shown (separately) by Lord Rayleigh and Ernst Abbe over 100 years ago, this minimum diameter is approximately half the wavelength of the illuminating light (see “Criteria for success”). For conventional fluorescence microscopy that diameter is about 200 nm, meaning that features smaller than this cannot be resolved using this technique.
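As a concrete illustration, the sketch below evaluates the Abbe-type limit d ≈ λ/(2NA) for green fluorescence collected by a high-numerical-aperture objective; the wavelength and NA are assumed values chosen to reproduce the roughly 200 nm figure quoted above.

wavelength_nm = 520   # green, GFP-like emission (assumed)
NA = 1.3              # assumed numerical aperture of an immersion objective
d = wavelength_nm / (2 * NA)   # Abbe-type diffraction limit
print(f"Smallest resolvable feature ~ {d:.0f} nm")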

Microscopy therefore faces two related challenges: extracting information from as small a region as possible; and extracting as much information as possible from that small region. Doing the latter is not easy because each fluorescent molecule will only yield a finite number of photons before it stops fluorescing. It is important, therefore, to extract the maximum amount of information from each photon.

So how many bits of information can realistically be extracted from a single photon? We can think of the fluorescent molecule as a transmitter sending signals to a receiver such as a photomultiplier tube. Under ideal conditions, transmitting one bit of information requires an energy approximately equal to the temperature of the environment (T) multiplied by the Boltzmann constant (k). A photon of green light (such as those emitted by GFPs) has an energy of 95 kT at normal biological temperatures, and so could, in principle, yield about 95 bits of information.

This is a very generous limit, however, and is only valid if the information can be perfectly decoded and if the number of photons emitted is very small. It would also require us to exploit the quantum properties of the photon far better than is normally possible. A typical fluorescent molecule will emit about 10⁴ photons before it photobleaches, so taking these limitations into account we could hope, at best, for a megabit of information from each fluorophore. This suggests two ways of obtaining more information from biological systems: designing more efficient microscope systems that make better use of the available photons; and increasing the number of photons available by, for instance, designing new types of fluorescent tags.
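The information budget above can be restated numerically as follows; the wavelength, temperature and kT-per-bit limit are the illustrative values used in the argument, not measured quantities.

h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI values
wavelength = 510e-9     # m, GFP-like green emission (assumed)
T = 300.0               # K, near-biological temperature
E_photon = h * c / wavelength
bits_per_photon = E_photon / (k_B * T)        # idealized ~kT-per-bit limit
photons_before_bleaching = 1e4                # typical photon yield quoted above
print(f"Photon energy  ~ {bits_per_photon:.0f} kT, so ~{bits_per_photon:.0f} bits per photon at best")
print(f"Budget per tag ~ {bits_per_photon * photons_before_bleaching:.1e} bits (of order a megabit)")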

Breaking the limit

In recent years, researchers have made spectacular advances in the amount of information that they have been able to obtain from samples, thanks to the advent of techniques that can break through the diffraction limit. These techniques can dramatically improve resolution, but at the price of using more light on the sample, which could damage it.

One of the first practical suggestions for moving beyond the diffraction limit was made in 1928 by the Irish physicist Edward Synge, following discussions he had with Albert Einstein. Synge considered what would happen if an aperture much smaller in size than the wavelength of the light passing through it is placed so close to the surface of a sample that the gap between it and the surface is smaller than this wavelength. He concluded that the light passing through the aperture would not have sufficient distance to diffract before hitting the sample and passing back through the aperture: very fine structures could, therefore, be resolved.

Testing Synge’s idea experimentally, however, requires the ability to very precisely position the aperture above the surface and to maintain its position while scanning takes place, which was not possible with the technology available in the 1920s. Indeed, it was not until 1972 that Eric Ash and colleagues at University College London demonstrated the feasibility of this concept. Ash and Nichols used 3 cm microwaves, which, thanks to their relatively long wavelengths, relaxed the mechanical requirements considerably.

It was then another decade before Dieter Pohl at the University of Basel in Switzerland successfully applied the method at optical wavelengths, spawning the technique now known as near-field scanning optical microscopy (NSOM). NSOM has regularly achieved resolutions of about 25 nm, but maintaining the probe very close to the sample still presents a considerable technical challenge.

NSOM is only really suitable for imaging structures on the surface of cells, but it nevertheless holds much promise for biological imaging. In particular, the technique’s superb resolution makes it great for imaging microdomains in cell membranes, and it has been used to good effect for this application by Michael Edidin and co-workers at Johns Hopkins University in the US. Studying these microdomains, which are also known as membrane rafts, has been a spectacularly productive area of cell biology in recent years because many cellular signalling processes appear to be directed through them. Bacteria and certain viruses, including HIV, also gain entry into cells via membrane rafts, which are usually about 40–100 nm in diameter, and it appears that Alzheimer’s disease and some of the prion-related diseases (like nvCJD) are also associated with the properties of microdomains.

NSOM is a derivative of scanning probe techniques such as atomic force microscopy or scanning tunnelling microscopy rather than optical microscopy. A super-resolving technique based on far-field optical microscopy, on the other hand, would have several advantages over NSOM, such as allowing 3D imaging, reducing the imaging time and making the sample easier to manipulate. Several such techniques have emerged in recent years, all of which exploit some sort of nonlinear relationship between the excitation, or input signal, and the fluorescence, or output signal.

One of these techniques is stimulated emission depletion (STED) microscopy, which has been developed over the last decade by Stefan Hell and co-workers at the Max Planck Institute for Biophysical Chemistry in Göttingen, Germany. STED is based on the idea that the resolution achievable with fluorescence microscopy can be improved by narrowing the effective width of the irradiation spot to below the diffraction limit so that the fluorescence used to build up the image emerges only from a small region. This is achieved by exciting fluorophores as normal with a diffraction-limited beam while a second beam — which has the same outer radius but is doughnut-shaped — de-excites the fluorophores in the outer part of the diffraction-limited spot via stimulated emission, thus preventing them from fluorescing (see “In good STED”).

Since the resulting fluorescence comes from an area much smaller than the diffraction limit, this technique can be used to achieve lateral resolutions down to about 30 nm, which is close to what can be achieved using NSOM. A STED module that can be used with a conventional scanning confocal microscope, which offers a lateral resolution of about 70 nm, is now available from the microscope manufacturer Leica, and Hell and co-workers reported last year in Science that this technique offers a valuable way of interrogating the microdomains in cell membranes.
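The improvement is often summarized by a modified diffraction formula, d ≈ λ/(2NA√(1 + I/Isat)), where I is the peak intensity of the depletion beam and Isat the saturation intensity of the fluorophore. The sketch below uses assumed values (not the parameters of the Leica module) to show how harder depletion squeezes the effective spot.

from math import sqrt

wavelength_nm = 600   # assumed wavelength in the emission/depletion region
NA = 1.4              # assumed numerical aperture
for ratio in (0, 10, 100):   # depletion intensity in units of the saturation intensity
    d = wavelength_nm / (2 * NA * sqrt(1 + ratio))
    print(f"I/Isat = {ratio:3d}  ->  effective resolution ~ {d:3.0f} nm")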

A gathering STORM

Another fluorescence-microscopy method capable of excellent resolution is known as either photo-activated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). The first term is used by Eric Betzig, based at the Janelia Farm Research Campus in Virginia, US, who initiated the idea in 2006, while the latter is used by Xiaowei Zhuang and colleagues at Harvard University, who have been actively developing the technique over the last couple of years. The method exploits the fact that a single object can be located far more precisely than two nearby objects can be resolved from one another.

The task of locating the position of a fluorophore is essentially the same as locating the centre of the detected light distribution. If we detect a single photon from a fluorescent molecule, the position of the molecule can typically be located to within 200 nm or so, but this improves drastically if more photons can be detected. In one dimension, the spread of the light distribution shrinks by a factor of n as the number of detected photons increases by n². In two dimensions, therefore, shrinking both dimensions by a factor of n requires n⁴ more photons. In other words, detecting 10⁴ photons can reduce the radius of the patch to 20 nm.
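The one-dimensional statement above can be checked with a minimal Monte Carlo sketch: each detected photon gives a noisy estimate of the molecule's position, and averaging n² of them shrinks the spread of the centre estimate by a factor of n. The 200 nm single-photon spread is the figure quoted above; everything else is illustrative.

import numpy as np

rng = np.random.default_rng(0)
single_photon_spread_nm = 200.0   # roughly how well a single photon locates the molecule

for n_photons in (1, 100):
    # Each trial localizes the molecule by averaging the positions of n_photons photons.
    estimates = [rng.normal(0.0, single_photon_spread_nm, n_photons).mean()
                 for _ in range(5000)]
    print(f"{n_photons:3d} photons -> centre located to within ~{np.std(estimates):4.0f} nm")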

STORM uses switchable fluorophores that can be rendered dark with one beam (usually from a red laser) and switched on with a second beam (which is usually green). During imaging, the fluorophores are first switched off by the red laser and then illuminated with the green laser so briefly that only a small proportion of the fluorophores in the field of view are switched on again. The result is that the distance between the active (fluorescing) molecules is greater than the diffraction-limited resolution and they can therefore be located with great accuracy (see “A STORM cycle”).

A single cycle produces a sparse image made up of a few spots positioned very precisely. Each time the process is repeated, however, a different, random selection of molecules is switched on and so a similar sparse picture of points is recovered. By adding these sparse images together, a properly populated image is eventually built up. This technique can be applied to obtain a resolution of the order of 20 nm and can be used to generate 3D images. But since many imaging cycles are required, obtaining an image takes a long time and the sample is subject to a very high photon dose, which can harm live cells. Remarkable images of DNA molecules have been obtained with this method, but in its present incarnation it is not a prime candidate for live-cell imaging.

One problem with STED and STORM is that the equipment required for these techniques is much more complex than a conventional microscope. It is also possible, however, to achieve super-resolution with relatively small modifications to a standard full-field microscope, thanks to a technique suggested by Mats Gustafsson of the University of California at San Francisco in 2000 that is usually referred to as structured illumination microscopy. It exploits the fact that if a pattern (say, produced by fluorophores in a sample) that is too fine to be imaged by a standard benchtop microscope is illuminated by light in a different pattern, then low-resolution Moiré fringes are produced that are visible with the microscope. This pattern of fringes contains information about the original fine pattern. Once several Moiré images have been obtained, each with the illumination pattern at a different orientation to the sample, mathematical techniques can be used to reconstruct an image of the sample at enhanced resolution (see “Structured illumination microscopy”).
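The frequency-mixing idea can be demonstrated in a few lines: multiplying a fine fluorophore pattern by a coarser sinusoidal illumination pattern produces a beat (Moiré) component at the difference frequency, which is low enough for a diffraction-limited microscope to pass. The spatial frequencies below are arbitrary illustrative choices.

import numpy as np

x = np.linspace(0.0, 10.0, 4096, endpoint=False)     # position in microns
f_sample, f_illum = 6.0, 5.0                          # cycles per micron (illustrative)
sample = 1 + np.cos(2 * np.pi * f_sample * x)         # fine fluorophore pattern, too fine to image directly
illum  = 1 + np.cos(2 * np.pi * f_illum * x)          # structured illumination pattern
emission = sample * illum                             # detected fluorescence is the product of the two

spectrum = np.abs(np.fft.rfft(emission))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
strongest = np.sort(np.round(freqs[np.argsort(spectrum)[-5:]], 1))
print("Strongest spatial frequencies (cycles/micron):", strongest)
# Output includes the beat at |6 - 5| = 1 cycle/micron, which carries information about the fine pattern.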

The degree to which the resolution is enhanced depends on the pattern used to illuminate the sample — using a sinusoidal grating pattern gives approximately twice the resolution of a conventional fluorescence microscope. If we increase the illumination intensity to the point where the fluorophores are driven into saturation (in other words, all of the illuminated fluorophores are raised to an excited state), however, we can obtain a lateral resolution of about 50 nm.

One advantage of structured illumination microscopy is that no scanning is required, which simplifies the optics. Another is that it utilizes the photons very efficiently, meaning that it is rather gentle on the sample compared with most other techniques for achieving super-resolution and so can be used for imaging live cells. In particular, the speed and convenience of structured illumination microscopy makes it ideal for high-resolution studies of dynamic processes on the cell membrane.

Lighting the way

These are just a few of the recent noteworthy advances in breaching the physical diffraction limits that have hindered measurements in cellular biology. What is fascinating is that the experimental needs of biology are driving developments in imaging technology, while advances in imaging technology are in turn inspiring new biological questions. Many of these developments are also going hand in hand with a revolution that is taking place in biological thinking, which intimately involves physicists. We are seeing a change in the nature of biological investigation as it takes on a sounder theoretical basis coupled to experimental analysis — the hallmarks of modern physics. These are exciting and interesting times to be working in biological research — and not just for biologists!

Box: Life changing physics

Developing imaging technologies for applications in cell biology has traditionally been hampered by formal academic divisions. For many years, mutual exchange between the essentially separate disciplines of physics and biology seemed impossible within the compartmentalized framework of professional 20th-century science. Of late, however, a new interdisciplinary zeal has gripped these two fields, drawing them together and blurring their traditional barriers. The reason for this is that biologists are now faced with challenges — such as imaging cellular function — that can only be addressed with theoretical and experimental tools that have in the past solely been used by physicists and engineers. The importance of a genuine interdisciplinary culture to deal with questions arising from the life sciences (as well as to generate new ones) has prompted the non-biological learned societies to promote such contact with their members. The Institute of Physics, for example, has recently established a Biological Physics Group with just such a purpose in mind. This group will have its inaugural showcase meeting, themed “Physics meets biology”, in Oxford this July.

Sceptical physicists might perhaps wonder how such contact benefits them — but biology in fact raises plenty of interesting physical issues. Single living cells grow, divide and respond to stimuli in a predictable manner, so their components (such as the cytoplasm and membrane rafts) must operate by defined sets of rules. These rules are out there waiting to be discovered. Similarly, when collections of cells work together (as in brain function) their collective or “emergent” behaviour can also, in principle, be predicted. Understanding these rules presents formidable challenges that will require biologists, physicists, mathematicians and engineers all to work together.

As well as these essentially theoretical challenges, cellular biology also presents physics with major measurement challenges. Biology is often said to be data rich, with large amounts of data to be processed, but given the awesome complexity of the problems being addressed, the subject is actually rather data poor. There are pressing needs, therefore, to develop better measurement techniques for studying both single living cells and dense cellular clusters. Cellular imaging offers one route that will contribute to these data-collection regimes and at the same time unites physicists, engineers and biologists on the path to common goals.

At a Glance: Super-resolution fluorescence microscopy

  • Fluorescence microscopy, which uses optical microscopes to observe biological structures that have been tagged with fluorescent molecules, is a key tool used by biologists for studying cells
  • Conventional fluorescence-microscopy systems are limited by the effects of diffraction, so the best resolution they can achieve is approximately 200 nm
  • Many interesting and important cellular structures, however, are on length scales smaller than this, and in recent years biologists have turned to physicists for help in breaking through this diffraction limit
  • The result is several novel techniques, including stimulated emission depletion microscopy (STED), stochastic optical reconstruction microscopy (STORM) and structured illumination microscopy, all of which are capable of resolving structures as small as 50 nm across

More about: Super-resolution fluorescence microscopy

M Gustafsson 2005 Nonlinear structured illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution Proc. Natl Acad. Sci. USA 102 13081
B Huang et al. 2008 Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy Science 319 810
T Klar et al. 2000 Fluorescence microscopy with diffraction resolution broken by stimulated emission Proc. Natl Acad. Sci. USA 97 8206
J J Sieber et al. 2007 Anatomy and dynamics of a supramolecular membrane protein cluster Science 317 1072
M Somekh et al. 2008 Resolution in structured illumination microscopy, a probabilistic approach J. Opt. Soc. Am. A in press
www.iop.org/activity/groups/subject/bp

Spot the physicist

“The science wars” is the colourful but rather hyperbolic name given to a period, in the 1990s, of public disagreement between scientists and sociologists of science. Harry Collins, a sociologist at Cardiff University, was one of those making the argument that much scientific knowledge is socially constructed, to the dismay of some scientists, who saw this as an attack on the objectivity and authority of science. Rethinking Expertise could be seen as a recantation of the more extreme claims of the social constructionists. It recognizes that, for all that social context is important, science does deal in a certain type of reliable knowledge, and therefore that scientists are, after all, the best qualified to comment on a restricted class of technical matters close to their own specialisms.

The starting point of the book is the obvious realization that, in science or any other specialized field, some people know more than others. To develop this truism, the authors present a “periodic table of expertise” — a classification that will make it clear who we should listen to when there is a decision to be made that includes a technical component. At one end of the scale is what Collins and Evans (who is also a Cardiff sociologist) engagingly call “beer-mat expertise” — that level of knowledge that is needed to answer questions in pub quizzes. Slightly above this lies the knowledge that one might gain from reading serious journalism and popular books about a subject. Further up the scale is the expertise that only comes when one knows the original research papers in a field. Collins and Evans argue that to achieve the highest level of expertise — at which one can make original contributions to a field — one needs to go beyond the written word to the tacit knowledge that is contained in a research community. This is the technical know-how and received wisdom that seep into aspirant scientists during their graduate-student apprenticeship to give them what Collins and Evans call “contributory expertise”.

What Collins and Evans claim as original is their identification of a new type of expertise, which they call “interactional expertise”. People who have this kind of expertise share some of the tacit knowledge of the communities of practitioners while still not having the full set of skills that would allow them to make original contributions to the field. In other words, people with interactional expertise are fluent in the language of the specialism, but not with its practice.

The origin of this view lies in an extensive period of time that Collins spent among physicists attempting to detect gravitational waves (see “Shadowed by a sociologist”). It was during this time that Collins realized that he had become so immersed in the culture and language of the gravitational-wave physicists that he could essentially pass as one of them. He had acquired interactional expertise.

To Collins and Evans, possessing interactional expertise in gravitational-wave physics is to be equated with being fluent in the language of those physicists (see “Experts”). But what does it mean to learn a language associated with a form of life in which you cannot fully take part? Their practical resolution of the issue is to propose something like a Turing test — a kind of imitation game in which a real expert questions a group of subjects that includes a sociologist among several gravitational physicists. If the tester cannot tell the difference between the physicist and the sociologist from the answers to the questions, then we can conclude that the latter is truly fluent in the language of the physicists.

But surely we could tell the difference between a sociologist and a gravitational-wave physicist simply by posing a mathematical problem? Collins and Evans get round this by imposing the rule that mathematical questions are not allowed in the imitation game. They argue that, just as physicists are not actually doing experiments when they are interacting in meetings or refereeing papers or judging grant proposals, the researchers are not using mathematics either. In fact, the authors say, many physicists do not need to use maths at all.

This seemed so unlikely to me that I asked an experimental gravitational-wave physicist for his reaction. Of course, he assured me, mathematics was central to his work. How could Collins and Evans have got this so wrong? I suspect it is because they misunderstand the nature of theory and its relationship with mathematical work in general. Experimental physicists may leave detailed theoretical calculations to professional theorists, but this does not mean that they do not use a lot of mathematics.

The very name “interactional expertise” warns us of a second issue. Collins and Evans are sociologists, so what they are interested in is interactions. The importance of such interactions — meetings, formal contacts, e-mails, telephone conversations, panel reviews — has clearly not been appreciated by academics studying science in the past, and rectifying this neglect has been an important contribution of scholars like Collins and Evans. But there is a corresponding danger of overstating the importance of interactions. A sociologist may not find much of interest in the other activities of a scientist — reading, thinking, analysing data, doing calculations, trying to get equipment to work — but it is hard to argue that these are not central to the activity of science.

Collins and Evans suggest that it is interactional expertise that is important for processes such as peer review. I disagree; I would argue that a professional physicist from a different field would be in a better position to referee a technical paper in gravitational-wave physics than a sociologist with enough interactional expertise in the subject to pass a Turing test. The experience of actually doing physics, together with basic physics knowledge and generic skills in mathematics, instrumentation and handling data, would surely count for more than a merely qualitative understanding of what the specialists in the field saw as the salient issues.

Collins and Evans have a word for this type of expertise, too — “referred expertise”. The concept is left undeveloped, but it is crucial to one of the pair’s most controversial conclusions, namely the idea that it is only the possession of contributory expertise in a subject that gives one special authority. In their words, “scientists cannot speak with much authority at all outside their narrow fields of specialization”. This, of course, would only be true if referred expertise — the general lessons one learns about science in general from studying one aspect of it in detail — had no value, which is a conclusion that most scientists would strenuously contest.

This book raises interesting issues about the nature of expertise and tacit knowledge, and a better understanding of these will be important, for example, in appreciating the role of scientists in policy making, and in overcoming the difficulties of interdisciplinary research. Collins and Evans have bigger ambitions, though, and they aim in this slim volume to define a “new wave of science studies”. To me, however, it seems to signal a certain intellectual overreach in an attempt to redefine a whole field on the basis of generalizations from a single case study, albeit a very thorough one.
