
Quantum-dot device detects DNA

A device that can detect DNA using quantum dots has been built by physicists in Spain. The researchers say that the device is highly sensitive, portable and cheap to build, and could one day be used by doctors to diagnose and detect disease.

Bioscientists have in recent years made huge progress in determining the human genetic code and that of many other organisms. One application of this data is to test patients for hereditary conditions — such as cystic fibrosis — by taking a DNA sample from them and comparing it with the known genetic code related to that condition.

Comparisons are usually made by mixing single strands from separate sources, which join to form the iconic double-helix structure. The speed and efficiency with which double helices form indicates how closely related the two samples are.

Until now, detecting double-stranded DNA has involved labelling the strands using fluorescent dyes, enzymes or radiolabels. However, the sensitivity of these techniques has been limited. Arben Merkoci and his colleague at the Autonomous University of Barcelona have overcome this problem by using quantum dots as labels; this also removes the need to chemically dissolve samples before testing (Nanotechnology 20 055101).

A quantum of semiconductors

Quantum dots are nano-scale crystals that were first developed in the mid-1980s for optoelectronic applications. They comprise hundreds to thousands of atoms of an inorganic semiconductor material.

Merkoci and his colleague have created a “sandwich assay” that can be filled with DNA for testing. “Samples are inserted inside the sensor in the same way that a glucose biosensor tests for glucose levels in blood,” Merkoci told physicsworld.com. In their proof-of-principle test, single strands of DNA linked to cystic fibrosis are mixed with cadmium sulfide quantum dots and inserted between two screen-printed electrodes.

When two strands form a pair they pick up a quantum dot; this affects the electrical properties of the dot, which leads to a detectable change in the current across the two electrodes.

Advance for DNA testing?

Nanoparticle-based detection systems for DNA have been developed in the past few years but this is the first one to incorporate screen-printed electrodes; this enables direct detection of the DNA pairs without the need for chemical analysis. However, this test used an isolated, prepared sample of DNA so the next step is to test the device using “real-world” DNA samples. “We need to conduct further study of possible interferences that could come during testing of real patient samples,” said Merkoci.

A longer term goal is to develop an array composed of several electrodes where the same quantum dot can be used to test for a range of different DNA strands, each affecting the quantum dot electrical properties by different extents. A further aim of the researchers is to create a “lab-on-a-chip device”. “It could see applications in fields where fast, low cost and efficient detection of small volumes is required,” said Merkoci.

Merkoci and his team have not yet applied for a patent but are currently looking for companies to collaborate with to develop the technology further.

The Sun and Mars

The Sun’s so-called scoop

By Matin Durrani

Scientists have a habit of complaining that there’s not enough science in the mainstream press. So I suppose they should be glad that Britain’s best-selling newspaper, The Sun, had a story on their front page last Thursday (15 January) emblazoned with the headline “Life on Mars”.

The story was referring to a paper in Science by a team of NASA scientists that reported the finding of methane in the Martian atmosphere. And as the Sun (the real one, that is) destroys methane, could it be that living organisms are constantly regenerating the gas?

Turns out that the story is not the scoop it seems: scientists already had evidence for methane on Mars, so this latest research only confirms those findings.

Moreover, according to Paul Sutherland – the journalist who wrote the story – Science was not happy that The Sun had broken the embargo on the story, which was set at 7 p.m. UK time on Thursday 15 January. Indeed, he says that Science staff rang The Sun at 3 a.m. local time, demanding the story be removed from the paper’s website.

But Sutherland denies that he ever broke an embargo. As he explains on his blog, he simply put two and two together based on NASA’s original press release, along with a couple of Google searches and a chat with an astronomer friend.

Now when a newspaper or website reports on a story before an embargo deadline, what normally happens is that the organisation that imposed the deadline lifts the embargo so that other media outlets can report the story too. But Science maintained the embargo because, it said, this “unfortunate tabloid teaser” contained nothing from the research paper and was “a purely speculative narrative”.

Which says it all about The Sun’s coverage of science, I guess. Still, fair play to them: they got planetary science on the front page and it seems churlish to complain.

But the final twist in the tale is that Nature, which each week sets embargoes of its own, reported the story back in October last year.

What goes around comes around.

And the winner is…

A Higgs boson, as envisaged by The Particle Zoo (www.particlezoo.net)

By Margaret Harris

Congratulations to Alexandra Gade of the National Superconducting Cyclotron Laboratory at Michigan State University for winning Physics World’s 2008 Quiz of the year, which took a lighthearted look at physics events ranging from an Indian moon mission to the discovery that some granite countertops “might heat your cheerios a little” due to their low-level radioactivity.

In addition to the everlasting glory of victory, Dr Gade will also receive a cheque for £50, which works out at around $75 at today’s exchange rate. It’s a pity about the declining pound, but sadly there’s nothing we can do about it.

If your entry didn’t win this year, better luck in 2009 – and here are the answers in case you’d like to check your memory skills.


The Renaissance of Astronomy

Herschel space telescope (Courtesy: ESA)

By João Medeiros

According to Jonathan Gardner of NASA, we are going through an unparalleled renaissance of astronomy, perhaps comparable only with Galileo’s pioneering efforts. In fact, most astronomy talks today seem to start with the words “We now know…”

Speaking at the IYA opening in Paris, he noted that over the past few decades we’ve discovered inflation and the universe’s flat geometry, and learned that 95% of the mass–energy of the universe is not on the periodic table.

Part of the reason for this renaissance has been the dream team of space telescopes: Hubble, Chandra and Spitzer. As this generation of telescopes reaches the end of its days, a new one is getting ready to launch. The Herschel space telescope will be launched in April, Hubble will be granted a new lease of life with a servicing mission in May this year, and 2013 will see the launch of the James Webb telescope.

There’s also Planck, Herschel’s sister mission (I like the fact that it’s a “she”, helping to balance the gender inequality in science), planned to launch this year.

Hubble’s revamp will take place in May. Astronauts are going to replace Hubble’s batteries, install new gyroscopes, repair some of the instruments and put two new pieces of kit on the satellite: the Cosmic Origins Spectrograph (which is going to measure the cosmic web of gas between the galaxies) and WFC3 (Wide Field Camera 3, which will look for high-redshift supernovae).

The new generation of space telescopes is going to prioritize the study of star formation, exoplanets (undoubtedly THE topic of astronomy at the moment) and the end of the dark ages, when the first galaxies formed and reionized the intergalactic medium. All in the spirit of Carl Sagan’s philosophy: “Somewhere, something incredible is waiting to be known.”

Chasing Robert Wilson

By João Medeiros

Bob Wilson, co-discoverer of the cosmic microwave background (with Arno Penzias) and Nobel laureate, is one of the big celebs here at the IYA opening. Students chase him like paparazzi. Good to know there are such things as science fanclubs.

I managed to scare away the students and get ten minutes with Wilson. I had to thank him for having given me a PhD topic, after all. We spoke about scientific method and the importance of science journalism.

A curious thing about the discovery of the CMB is that Wilson only truly realized its importance when he read about it in the New York Times. Being a typical postgraduate at the time (he was 29), back in 1965 he woke up at lunchtime the day after the discovery was published, and it was his father, visiting from Texas, who brought the newspaper with the news. “I didn’t really have a clue of the importance of what we had done until then, thanks to that journalist,” he said.

Wilson didn’t actually take cosmology seriously at the time, given all the speculation in the field back then (not that much has changed). In fact, he was philosophically more inclined to believe in the steady-state theory than in a dynamic universe, partly because Hoyle had been his cosmology lecturer.

According to Wilson, his discovery made cosmology the big industry that it is today, something he would never have imagined happening.

Given the serendipity of his discovery, Wilson says that if it hadn’t been for him and Penzias, someone else would certainly have discovered the CMB sooner or later (in fact, at the time of Wilson and Penzias’s discovery, David Wilkinson was building an antenna specifically to detect the cosmic radiation). Wilson believes in Robert Merton’s theory of multiples: that discoveries are the product not of individuals, but of the times.

The New York Times episode explains why Wilson thinks science journalism plays a fundamental role in science. He still reads the newspaper and various science magazines to keep up to date with what is going on in science, saying he much prefers them to scientific papers, which take a lot of time and effort.

Nanowires are guided around chips

Researchers in the US are the first to use electric fields to guide DNA-coated nanowires to specific locations on a chip. The technique, which allows the nanowires to be attached directly to chip circuitry, could come in handy for making a variety of nanoscale devices, such as medical biosensors that detect cancer or harmful bacteria and viruses. It also offers a way to incorporate new types of components within conventional silicon electronics.

Theresa Mayer and Christine Keating of Penn State University and colleagues were able to control exactly where nanowires were placed on different, predetermined locations on a chip with sub-micrometre accuracy, and then make individual contacts to each wire without losing or inactivating the DNA that was on the wires (Science 323 352). Placing the nanowires accurately in this way is crucial for making individual contacts between nanowires and other devices on a chip such as a transistor.

Mayer and Keating’s technique involves generating electric fields at desired locations on the chip by applying alternating voltages between pairs of guiding electrodes patterned on the chip surface. The nanowires are drawn into the regions of highest electric-field strength, which are found in the wells between the guiding electrodes. The researchers place a second set of wires into the next set of microwells by applying voltages to a new pair of guiding electrodes, and so on. Combining electrodes and wells in this way results in sub-micrometre positioning control, they say.
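The well-by-well addressing this describes can be captured in a few lines of code. The sketch below is a deliberately simplified model of the protocol; the well names, wire labels and three-step schedule are hypothetical illustrations, not taken from the paper:

```python
# Toy model of sequential electric-field assembly: wires from the current
# suspension deposit only into wells whose guiding electrodes are
# energized with the alternating voltage at that step.

wells = {f"well_{i}": None for i in range(6)}   # empty microwells on the chip

# Each assembly step: the wire suspension present, and which wells are energized
steps = [
    ("DNA-A wires", ["well_0", "well_1"]),
    ("DNA-B wires", ["well_2", "well_3"]),
    ("DNA-C wires", ["well_4", "well_5"]),
]

for wire_type, energized in steps:
    for well in energized:
        if wells[well] is None:
            # Wires are drawn to the high-field region of an energized,
            # still-empty well and stay put once trapped
            wells[well] = wire_type

print(wells)
```

Each pass leaves previously filled wells untouched, which is why earlier batches keep their DNA identity as later suspensions are introduced.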

Variety of configurations

In the experiment, the different tagged nanowires were placed in rows but they could be positioned in a variety of configurations.

The nanowires are coated with DNA in separate batches before they are assembled onto the chip. Different wires, that is, those carrying different sequences of DNA, are assembled into the selected wells. The sample is then exposed to a suspension of the next type of wires, which carry a different DNA sequence.

“We control where the different types of wires go by where we apply the alternating voltage,” Mayer told physicsworld.com.

The scientists say that the technique is fairly simple and could be scaled up for industrial manufacture. According to the team, the most obvious application is in on-chip electronic biosensors, which could ultimately lead to portable, low-power detectors for pathogens and diseases such as cancer.

Avoiding false negatives

“Having many copies of each type of nanowire in the array would help avoid false negatives and positives, and enable screening for multiple target sequences at once,” explained Keating.

Biosensors would not be the only beneficiaries: being able to control the location of device components prepared off-chip also offers a route to incorporating non-traditional device components with conventional silicon electronics, say the researchers.

Keating adds that she ultimately envisions multi-analyte biosensors in which binding events are detected as electrical responses by thousands or even millions of individual nanowires and are processed by electronic circuits directly on the chip. In the present work, the team was able to show assembly of many copies of each of three different DNA-coated wire populations. So far, they have used fluorescence to follow DNA binding but plan to perform on-chip electrical detection soon.

“We are also very interested in exploring what other types of materials and coatings can be assembled and integrated with the on-chip electronics using our technique,” said Keating. “We believe this could be a very general approach to incorporating new materials that are difficult or impossible to prepare in place on a chip.”

Astronomy comes to Paris

Catherine Cesarsky (Courtesy: ESO)

By João Medeiros

It’s the year of deflation, the year of Obama, and the International Year of Astronomy. It started here in Paris, the City of Light, by the Eiffel Tower, at the UNESCO HQ. The Opening Ceremony attracted more than 100 countries represented by astronomers, industrialists, diplomats, artists, a Kepler impersonator and the odd journo. We are celebrating 400 years since Galileo, the father of modern science, turned the telescope up and saw something amazing.

Catherine Cesarsky, president of the International Astronomical Union, said the vision for the International Year of Astronomy is geared outwards, towards the public.
“After years of preparation, the time has come to launch this year, during which the citizens of the world will rediscover their place in the Universe, and hear of the wondrous discoveries in the making.”

Indeed, the lofty goal of the IYA 2009 is not to launch a super science programme but to return astronomy to the public through a series of initiatives, including Dark Skies Awareness and a reclassification of archaeoastronomy sites as UNESCO World Heritage Sites.

Speaking to Physics World, Catherine Cesarsky expressed how much she wishes astronomy to reach the public again, as an entry point to science and to a scientific world view, so necessary today. She said that, in her experience, the younger generations, 10 to 14 year olds, are usually enthralled by popular astronomy lectures. From then on, however, adolescence kicks in and it becomes harder to excite audiences: it’s difficult for teenagers to appreciate the mysteries of the universe when their hormones are playing no-rules football. But that doesn’t matter much, Cesarsky believes, since the most important part of science communication is to plant the seed of excitement for the wonders of science when children are really young.

Cesarsky also worries about the gender asymmetry in the physical sciences and about the “leaky pipeline” effect: you get a fair proportion of women at undergraduate level who have somehow all but vanished further up the academic ladder. The discrimination, fortunately, is not as palpable as it was back in the day when Cesarsky did her studies in Buenos Aires. Obviously a bright student, she was once complimented by her head of department in the following nuanced manner: “It’s funny, I always thought physics wasn’t for women.”

The conference opened with a series of talks on Mayan and Islamic astronomy. I’ve always been fascinated by the role played by ancient astronomy in society, bridging primordial religious experience and a fundamental relevance to agriculture and the economy. According to Martin Rees, who was also present at the ceremony, “astronomy is, if not the first, the second-oldest science, after medicine”.

Throughout history, astronomy has inexorably lost that direct relevance to society. However, it still engages with the big questions and the meta-questions. To Cesarsky, that is where the relevance of astronomy now lies: “We have one sky, and that’s what should truly unite people. Astronomy, like other sciences, but astronomy in particular, is a peaceful, soul-searching activity that encourages a truly global culture.”

Of course, 2009 is also the year of Darwin. In the words of Martin Rees, “Both astronomy and Darwinism provide a beautiful narrative for humanity, that starts right from the beginning until the intelligent species that we are today”.

How massive stars form

Astronomers have struggled to understand how the largest stars — up to 120 times as massive as the Sun — can form by sucking in nearby matter. The problem is that, once a star reaches about 20 solar masses, the outward force of its intense radiation exceeds the gravitational force that pulls in matter. But researchers in California have used a computer simulation to show that structures in the bubbles of gas that protrude from a massive star can channel material to larger stars, allowing them to continue growing.

Stars form within dense clouds of dust and gas located in the interstellar medium of a galaxy. Gravity causes these clouds to collapse in on themselves, with small disturbances within a cloud causing denser clumps of matter to form. These clumps will themselves continue to collapse, until the pressure created inside the ever hotter gas increases enough to counteract gravity. At this point each clump will become a spherical protostar.

Astronomers know that a protostar can grow up to around 20 solar masses by accreting material from the surrounding gas cloud. In doing so, it creates an “accretion disk” around itself — the conservation of angular momentum dictating that the rotational speed of the shrinking cloud will increase to the point where the gas enters into orbit around the young star. The temperature inside the growing star, if it is large enough, will then increase to the point where nuclear fusion reactions occur.
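As a rough illustration of the angular-momentum argument above: a collapsing gas parcel that conserves its specific angular momentum j ends up orbiting at the centrifugal radius r_c = j²/(GM), where gravity balances rotation. The sketch below uses entirely hypothetical cloud numbers, chosen only to show that this radius comes out at disk-like scales rather than at the stellar surface:

```python
import math

# Back-of-envelope estimate of the centrifugal radius r_c = j^2 / (G M)
# for a gas parcel falling onto a protostar (all input values hypothetical).

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
M_star = 10 * M_sun      # a growing 10-solar-mass protostar

# Parcel starting 0.05 light-years out, in a cloud rotating once per Myr
r0 = 0.05 * 9.46e15                      # initial radius, m
omega = 2 * math.pi / (1e6 * 3.156e7)    # cloud angular speed, rad/s
j = omega * r0**2                        # specific angular momentum, conserved

r_c = j**2 / (G * M_star)                # radius where rotation balances gravity
au = 1.496e11
print(f"centrifugal radius ~ {r_c / au:.0f} au")
```

For these made-up numbers the parcel settles into orbit roughly 10 au from the star, i.e. well out in an accretion disk, rather than falling straight onto the stellar surface.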

Enormous outward pressure

However, applying the same logic to larger stars (formed from larger gas clouds) has been problematic, because the enormous outward pressure of the thermal radiation created as the star heats up should blow away a dusty cloud of gas trying to accrete onto it. Researchers have come up with a number of alternatives to accretion in order to explain the existence of the most massive stars, but each has had its own problems. For example, the idea that massive stars might form through the collision of smaller stars now seems implausible because there doesn’t appear to be a high-enough density of low mass stars to generate such collisions.

Now, however, Mark Krumholz of the University of California, Santa Cruz, and colleagues, have shown that radiation pressure should not halt accretion. To do this they carried out a three-dimensional computer simulation of massive star formation using their ORION programme (Sciencexpress.org). This involved modelling a gas cloud of 100 solar masses and then watching it evolve over the equivalent of several tens of thousands of years. They found that a central protostar formed after 3600 years and that the star continued to accrete material unimpeded by radiation pressure for 20,000 years after that.

It was at this point that the simulated star became massive enough for its outward radiation pressure force to exceed its gravitational pull. Because the radiation is most intense closest to the star, gas nearer the star feels a greater push than gas further away. Gas from further out therefore falls towards the star but piles up where the pressure is greatest, forming two bubbles of gas (on opposite sides of the star) containing only radiation.

Krumholz’s team found that these bubbles become clumpy, meaning that in some places radiation blows out sections of the bubble wall, whereas in other places dense filaments of gas form through the bubble, allowing gas to be transported to the star.

Smaller stars also formed

The team also discovered that gravitational instabilities in the disk meant that not all of the gas accreted onto the original star. They found that a number of smaller stars formed within the disk, most of which were consumed by the main star, but that at the 35,000-year mark a few of these smaller objects clumped together to form a second star that resisted assimilation. Another 20,000 model years later, after little change in the system, the researchers halted the simulation and found that the two stars had masses of 41.5 and 29.2 solar masses. This compares with an upper limit of 22.9 solar masses formed in previous two-dimensional simulations.

Krumholz says that had he and his colleagues continued to run the simulation, it is likely that much of the remaining gas in the disk would have increased the masses of the stars still further. Indeed, he says that his group’s simulation provides no evidence for a stellar mass limit. But he points out that observations suggest an upper limit of between 120 and 150 solar masses, speculating that beyond this point either the stars or the disks that feed them become violently unstable, preventing any further growth. “I don’t know if this could explain the observed limits,” he adds, “but I can say with some confidence that radiation pressure is not the mechanism responsible for setting these limits.”

Broadband invisibility cloak unveiled

A new kind of “invisibility cloak” that conceals an object lying on a flat, reflective surface has been built by researchers in the US. The device is an improvement over earlier microwave cloaks as it operates over a wide, rather than a narrow, range of frequencies.

Built by a team led by David Smith at Duke University, the device is a metamaterial made from thousands of tiny H-shaped metallic elements. It was designed using a novel computer algorithm, which the team believes could be used to create other invisibility cloaks that work for infrared or even visible light.

Smith and his colleagues unveiled the world’s first invisibility cloak in 2006, which consisted of a cylindrical arrangement of split-ring resonators. By varying the shape, size and arrangement of the resonators, the electrical permittivity and the magnetic permeability of the cloak could be varied at any point within it. He and his team then used the theory of transformation optics to determine which permittivity and permeability values would steer the microwave radiation smoothly around the object.

Although the cloak was a success — it was named by Science magazine and physicsworld.com as one of the biggest breakthroughs of 2006 — it had several major drawbacks. In particular, it worked only over a tiny frequency range and was therefore not practical for concealing something from microwave-based radar. In addition, the cloak tended to absorb much of the microwave radiation that passed through it, which could reveal the presence of a concealed object.

Drop the magnetic response

According to Smith, these problems are related to the fact that split-ring resonators have both a magnetic and an electrical response to microwaves. This concern was addressed by Jensen Li and John Pendry of Imperial College London, who last year calculated that a broadband, low-loss cloak could be made from a metamaterial with only an electrical response. That is exactly the kind of cloak Smith and colleagues have now built.

Their metamaterial is a matrix of 10,000 elements (H-shaped pieces of metal), which are arranged in a flat sheet that is five elements thick. A thin edge of the sheet is placed on a reflective surface such that there is an empty pocket between the surface and the sheet. The object to be concealed is placed in this pocket (see figures).

The team tested the cloak by firing microwaves along the sheet; normally these would pass through the pocket and reflect back to a detector, revealing the presence of the object. Instead of reaching the object, however, the microwaves are reflected by the cloak as if the pocket and object were not there. This was tested using microwaves between 13 and 17 GHz, but Smith believes that the cloak’s bandwidth could extend down below 1 GHz.

Designing thousands of elements

According to Smith, the big challenge in creating the device was sorting through the many different ways of designing and arranging individual elements to achieve the required transformation. In 2006, the problem was solved by designing one element at a time, creating a library of about 10 different elements that could do the job. This time, however, thousands of elements were needed, so Smith and colleagues developed a computer algorithm to automate the process.

They began by choosing a specific structure — the H-shape in this case — and then calculated the microwave response of a relatively small number of H-shapes that differed in width. The results were used to create a mathematical model relating microwave response to element width, and a search algorithm then worked out how best to arrange elements of different widths to create the desired cloak.
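The fit-then-search loop described here can be sketched in a few lines. Everything below is a toy stand-in: the widths, the simple quadratic used in place of a full electromagnetic simulation, and the target values are all hypothetical. It only illustrates the idea of fitting a surrogate model to a handful of simulated elements and then inverting it to pick a width for each cell:

```python
import numpy as np

# Sample a few element widths and their (here, faked) simulated responses,
# fit a surrogate model once, then invert it per cell of the cloak.

widths = np.linspace(0.2, 1.0, 9)        # sampled element widths (arb. units)
response = 1.5 + 2.0 * widths**2         # stand-in for a full EM simulation

# Surrogate model: fitted once, then reused for every element in the matrix
model = np.poly1d(np.polyfit(widths, response, deg=2))

def width_for(target, lo=0.2, hi=1.0, tol=1e-6):
    """Invert the (monotonic) fitted model by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if model(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Target responses for three cells, e.g. from a transformation-optics map
targets = [1.8, 2.4, 3.1]
chosen = [width_for(t) for t in targets]
```

In the real design the search must also handle fabrication constraints and coupling between neighbouring elements, which is what makes an automated algorithm necessary for a matrix of roughly 10,000 elements.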

The result was a matrix of about 10,000 elements — 6000 of which were unique. The elements were then created on a printed circuit board using standard computer-automated design and milling techniques.

According to Smith, the algorithm could be used to design a variety of different devices including cloaks and superlenses, which could boost the performance of imaging systems. As for the cloak itself, Smith says that in principle it could be used to conceal objects from microwaves and therefore could be used to minimize interference between co-located communications antennas by making the antennas invisible to each other.

Is interactive physics the way forward?

By Margaret Harris

I came to physics very late by UK standards: I had already started my freshman year of college. For scheduling reasons, I therefore had to take introductory mechanics with engineers rather than physics majors. Supposedly, this meant I had roughly 300 classmates, but in practice, attendance at any given lecture hovered around 50 students – half of whom sat slumped in the back of the room, muttering “God, I hate physics”.

It seems that my experience was far from unique, and according to an article in yesterday’s New York Times, the physics department at MIT has decided to do something about it. Their new mechanics and E&M courses for undergrads employ something called Technology Enhanced Active Learning (TEAL), which does away with the traditional professor-in-front-of-blackboard lecture format in favour of students working on physics concepts in small groups at round tables. Various high-tech gizmos let the students answer questions posed by the professor, who wanders around the room with a few teaching assistants, giving short presentations and answering questions.

The result? Attendance at these non-lectures has shot up from less than 50% under the old format to over 80%, and the failure rate has dropped from 12% to 4%. The NY Times article quotes a number of experts who think the new system is just great – including atomic physicist Carl Wieman, who’s become deeply involved in changing physics education since winning the Nobel Prize in 2001.

There’s just one fly in this ointment: the students seem to hate it.


Copyright © 2025 by IOP Publishing Ltd and individual contributors