
What do strange metals and black holes have in common?

By Hamish Johnston in Waterloo, Canada

Harvard’s Subir Sachdev has just taken the audience here at the Convergence conference on a delightful romp through the phase diagram of the cuprate high-temperature superconductors. What I found most interesting was not the superconducting phase, but rather Sachdev’s description of the “strange metal” phase.

This phase occurs when the copper-oxide layers of the cuprate are highly doped with holes, and it has perplexed physicists for some time – hence the “strange” moniker. It has no quasiparticles and lots of low-energy excitations, so there is no easy way to describe the collective behaviour of its electrons.


Kirigami patterns make composite materials more stretchy, while staying strong

The Japanese art of “kirigami”, or paper cutting, has been used by scientists in the US to make electrically conductive composite sheets more elastic, increasing their strain from 4% to 370%, without significantly affecting their conductivity. The team has so far demonstrated its new technique by making stretchable plasma electrodes, but adds that its work could have a variety of applications, from reconfigurable structures to optoelectronic devices. The principles could also be used to design other composite materials that retain a specific property under mechanical strain.

Composite materials allow engineers to combine multiple materials with different properties to achieve a combination of properties not found in nature. One common tactic is to pair a strong, elastic material with another that has a desired property, such as high electrical conductivity, but which is brittle. Unfortunately, microcracks can form in the brittle regions, and stress then concentrates around their edges, allowing the material to fail. Using only a small proportion of brittle material can let a composite stretch to many times its original length, but its functional properties are often drastically altered as it does so. “There is always a trade-off there,” explains Nicholas Kotov of the University of Michigan, Ann Arbor. “We want to have the cake and we want to eat it too.”

Cuts and notches

Kotov, together with Sharon Glotzer and colleagues at the University of Michigan, stressed electrically conductive carbon-nanotube/polymer composites and found that they deformed primarily by the stretching of their internal fibres before rupturing at around 5% strain. The researchers then used photolithography to make a series of strategically placed cuts in the materials, following the rules of kirigami. When stressed, the cut materials initially deformed in the same way. As the stress rose further, however, they began to absorb the extra strain energy by opening up the network of cuts and deforming out of the plane of the sheet, forming a “secondary elastic plateau” as the cuts gradually rotated to align themselves with the applied stress.

As the stress increased still further, the opened regions were gradually pulled back towards the centre, concentrating the strain at the corners of the cuts. The materials finally ruptured when the strain on these corner regions grew too large – but not before stretching by up to 370%. Crucially, the materials’ electrical conductivity remained virtually unchanged as they stretched. The team also found that it could tune the strength and elasticity of the materials by altering the length and spacing of the cuts.
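
The geometry behind those numbers can be captured with a toy model: once the cuts open, the ligaments between neighbouring cut tips rotate like rigid struts, and the ultimate stretch is set by the cut length and row spacing. The sketch below is a purely kinematic illustration with made-up dimensions – it ignores hinge bending and is not the model used in the paper.

```python
import numpy as np

# Toy kinematic picture of a kirigami sheet: ligaments between cut tips
# act as rigid struts of length ~L_c/2 that rotate by an angle theta
# towards the loading direction. All dimensions are hypothetical
# illustration values, not the geometry of the Michigan devices.

L_c = 1.0e-3    # cut length (m) - hypothetical
d = 0.125e-3    # spacing between rows of cuts (m) - hypothetical

theta = np.linspace(0.0, np.pi / 2, 7)                   # strut rotation angle
height = d * np.cos(theta) + (L_c / 2) * np.sin(theta)   # projected height of a unit cell
strain = height / d - 1.0                                # engineering strain of the cell

for t, s in zip(np.degrees(theta), strain):
    print(f"rotation {t:5.1f} deg  ->  strain {100 * s:6.1f}%")

# At theta = 90 deg the cell height approaches L_c/2, so the ultimate
# strain is roughly L_c/(2*d) - 1: longer cuts and tighter row spacing
# give a stretchier sheet, at the cost of strength.
```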

Strained electrodes

The researchers used their kirigami system to produce stretchable plasma electrodes that could generate electric fields strong enough to ionize argon gas, even while withstanding strains of more than 200%. Such strains would typically extinguish the plasma, either by physically destroying the electrode or by reducing its conductivity. “In our case it was opposite,” says Kotov. “We actually saw an increase in the intensity of the plasma spots when we strained the electrode.” This has a direct application to plasma displays, he says, which use a similar process to generate spots of light. “There are no flexible or stretchable plasma devices right now,” he says. Further applications might be found in solar cells, prosthetics or the electrodes of lithium-ion batteries, which need to expand and contract repeatedly, without damage or loss of conductivity, during the charge/discharge cycle.

Pop-up feature

“It’s very interesting, although it’s not the only example of this kind of thing – I’ve also seen something like this in graphene,” says Christian Santangelo of the University of Massachusetts, Amherst, in the US. Santangelo is particularly interested in the “pop-up book” aspect in which, when pulled, the material buckles out of the plane. “I can imagine using this as a way to make 3D electronic devices – taking advantage of the third dimension to pack more stuff into an electronic device.” But the more immediate task, he says, is to look in detail at how the materials respond to different, more complex cuts – something that Kotov’s group is already working on.

The research is published in Nature Materials.

Maria Spiropulu talks about multiple Higgs beyond the Standard Model

[Video: Maria Spiropulu on the future of experimental particle physics]

By Hamish Johnston in Waterloo, Canada

Caltech’s Maria Spiropulu has a great party trick. She can demonstrate the bizarre rotational property of a spin-½ particle using a full glass of water and a contortion of her arm, without spilling a drop. This was just one of the many highlights of the talk about the future of experimental particle physics that she gave yesterday at the Convergence meeting here at the Perimeter Institute.
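
The glass-of-water trick works because a spin-½ state picks up a minus sign under a full 360° rotation and only returns to itself after 720° – the property the arm contortion acts out. A few lines of Python with Pauli matrices make the same point (my own illustration, nothing from the talk):

```python
import numpy as np

# A spin-1/2 state rotated about the z-axis by an angle theta transforms
# with U(theta) = exp(-i * theta * sigma_z / 2).
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotate(theta):
    # matrix exponential of a diagonal matrix, written out explicitly
    return np.diag(np.exp(-1j * theta * np.diag(sigma_z) / 2))

spin_up = np.array([1, 0], dtype=complex)

print(rotate(2 * np.pi) @ spin_up)   # ~[-1, 0]: a 360-degree turn flips the sign
print(rotate(4 * np.pi) @ spin_up)   # ~[+1, 0]: only a 720-degree turn is the identity
```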

While Spiropulu doesn’t talk about spin in the above video, she does explain why she is looking forward to analysing data from the 13 TeV run of the Large Hadron Collider, where she is part of the CMS collaboration. So what could Spiropulu and colleagues find when they dig into the vast amounts of data that CMS is currently producing? It could be as many as four more types of Higgs particle. To find out more, watch the video.

Upgraded LIGO will begin hunt for gravitational waves soon

 

A $200m upgrade to the Laser Interferometer Gravitational-wave Observatory (LIGO) has been completed, with the facility set to begin observations in the coming months as it aims to be the first to detect a gravitational wave. Dubbed Advanced LIGO, it consists of two separate detectors in the US – the Livingston observatory in Louisiana and the Hanford observatory in Washington state – each of which uses laser interferometry to search for gravitational waves.

According to Einstein’s general theory of relativity, gravitational waves are ripples in space–time that propagate outwards from accelerating masses. While none have ever been directly detected, scientists have observed two neutron stars – the dense cores of once-massive stars – losing energy as they spiral towards each other, and that energy loss is precisely what Einstein’s equations predict would be emitted as gravitational radiation.

Twice as good

The Advanced LIGO observatories each have two 4 km-long arms set perpendicular to each other. At the vertex lie a laser and a beam splitter, which sends light down each arm. The light bounces off mirrors at the end of each identical-length arm before returning to the vertex and recombining. A nearby detector usually receives no light, but if a gravitational wave passes through the observatory, the two arm lengths change slightly and by different amounts, so light reaches the detector and a signal is seen.
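
The precision involved is staggering. As a rough sketch for a bare Michelson geometry – using a representative strain amplitude of 10⁻²¹ rather than any quoted Advanced LIGO target, and ignoring the arm cavities and recycling mirrors that boost the real instrument’s response – the numbers work out as follows.

```python
import math

# Back-of-the-envelope response of a simple Michelson interferometer to a
# gravitational wave. The strain value is a representative order of
# magnitude (assumed), not an Advanced LIGO sensitivity figure.

h = 1e-21        # gravitational-wave strain amplitude (assumed)
L = 4e3          # arm length (m)
lam = 1064e-9    # Nd:YAG laser wavelength (m), standard for LIGO-class detectors

dL = h * L / 2                   # differential change in arm length
dphi = 4 * math.pi * dL / lam    # corresponding round-trip phase shift (rad)

print(f"arm-length change: {dL:.1e} m (roughly a thousandth of a proton diameter)")
print(f"phase shift:       {dphi:.1e} rad")
```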

LIGO began operations in 2001, and closed down in 2010 to start the upgrade programme. “Advanced LIGO is a complete rebuild of the interferometers of the detector,” says David Reitze of the California Institute of Technology, who is executive director of the LIGO Laboratory. “If they were cars, we traded in our 2001 LIGO sports car for the 2015 version.”

First data from Advanced LIGO are expected in September, with the facility running at about one-third of its final sensitivity. It will take a few years for Advanced LIGO to reach full sensitivity, at which point it will be 10 times more sensitive than the original observatory. Gabriela González, the LIGO Scientific Collaboration spokesperson, says the team predicts that Advanced LIGO will detect gravitational waves even before that full sensitivity is reached.

  • Watch the video below, from the American Museum of Natural History, to find out more about gravitational waves and how LIGO plans to measure them

[Video: American Museum of Natural History explainer on gravitational waves]

Why converge?

Neil Turok at the Perimeter Institute for Theoretical Physics (Courtesy: Gabriela Secara)

By Louise Mayor in Waterloo, Canada

Right now, top physicists from around the world are arriving in Waterloo, Canada, to attend a unique conference. Christened Convergence, the meeting is the brainchild of Neil Turok, director of the Perimeter Institute for Theoretical Physics (PI) in Waterloo, where the event will be based. I spoke to Turok to find out what motivated him to set up this conference, what makes it so special, and what he hopes it will achieve.


Converging streams, secret science and more

[Video: Perimeter Institute’s “Slice of PI” on the converging streams of art and science]

By Tushna Commissariat

Regular readers will know that Physics World’s Hamish Johnston and Louise Mayor will be attending the “Convergence” conference at the Perimeter Institute in Canada from tomorrow onwards. While the conference will undoubtedly prove exciting – just look at this list of speakers – it seems the institute already has convergence on its mind, as this month’s Slice of PI contemplates the “converging streams” of art and science. The video above features Perimeter researcher and artist Alioscia Hamma, who finds solace and symmetry in both his art and his physics. Watch the video and read more about his work on the Perimeter blog.


Does time dilation destroy quantum superposition?

Why do we not see everyday objects in quantum superpositions? The answer to that long-standing question may partly lie with gravity. So says a group of physicists in Austria, which has shown theoretically that a feature of Einstein’s general relativity, known as time dilation, can render quantum states classical. The researchers say that even the Earth’s puny gravitational field may be strong enough for the effect to be measurable in a laboratory within a few years.

Our daily experience suggests that there exists a fundamental boundary between the quantum and classical worlds. One way that physicists explain the transition between the two is to say that quantum superposition states simply break down once a system exceeds a certain size or level of complexity: the wavefunction is said to “collapse”, leaving the system in a definite classical state.

Complex wavefunction

An alternative explanation, in which quantum mechanics holds sway at all scales, posits that interactions with the environment bring different elements of an object’s wavefunction out of phase, such that they no longer interfere with one another. Larger objects are subject to this decoherence more quickly than smaller ones because they have more constituent particles and, therefore, more complex wavefunctions.

There are already several known mechanisms for decoherence, including a particle emitting or absorbing electromagnetic radiation, or being buffeted by surrounding air molecules. In the latest work, Časlav Brukner at the University of Vienna and colleagues have put forward a new model involving gravitational time dilation – the relativistic effect by which the flow of time depends on the local gravitational potential, so that a clock in outer space ticks at a faster rate than an identical one near the surface of the Earth.

Dilating states

In their work, Brukner and colleagues consider a macroscopic body – whose constituent particles can vibrate at many different frequencies – in a superposition of two states at very slightly different distances from the surface of a massive object. Time dilation then dictates that the state closer to the object vibrates at a slightly lower frequency than the other. The researchers calculate how long it takes for the two states to drift so far out of step with one another that they can no longer interfere.

With this premise, the team worked out that even the Earth’s gravitational field is strong enough to cause decoherence in quite small objects on measurable timescales. An object weighing a gram, in a superposition of two quantum states separated vertically by a thousandth of a millimetre, should decohere in around a millisecond.
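
That millisecond figure can be checked with an order-of-magnitude estimate. In the team’s analysis the decoherence time shrinks with the square root of the number of constituent particles; the sketch below plugs round numbers for a gram-scale object into that scaling. This is my reconstruction of the published result, so treat the prefactor as indicative only.

```python
import math

# Order-of-magnitude check of the time-dilation decoherence time, using
# the scaling tau ~ sqrt(2/N) * hbar * c^2 / (kB * T * g * dx).
# The form of this expression is my reading of the team's result, and the
# inputs are round illustrative numbers rather than values from the paper.

hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 2.99792458e8         # speed of light (m/s)
kB = 1.380649e-23        # Boltzmann constant (J/K)
g = 9.81                 # gravitational acceleration at Earth's surface (m/s^2)

T = 300.0    # internal temperature of the object (K), assumed
N = 1e23     # number of constituent particles in roughly a gram of matter
dx = 1e-6    # vertical separation of the two branches (m): a thousandth of a millimetre

tau = math.sqrt(2.0 / N) * hbar * c**2 / (kB * T * g * dx)
print(f"decoherence time ~ {tau:.1e} s")   # ~1e-3 s, matching the millisecond quoted above
```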

Beyond any relevance to quantum computing – where sources of unwanted decoherence must be understood and suppressed – the work challenges physicists’ assumption that only the gravitational fields of neutron stars and other massive astrophysical objects can exert a noticeable influence on quantum phenomena. “The interesting thing about this phenomenon is that both quantum mechanics and general relativity would be needed to explain it,” says Brukner.

Quantum clocks

One way to test the effect experimentally would be to send a “clock” (such as a beam of caesium atoms) through the two arms of an interferometer. The interferometer would initially be positioned horizontally and the interference pattern recorded. It would then be rotated to the vertical, so that one arm experiences a higher gravitational potential than the other, and its output observed again. In the vertical case, the two states tick at different rates because of time dilation. That difference would, in principle, reveal which arm each part of the superposition travelled down – and once this which-path information exists, the interference pattern disappears.
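
The which-path argument can be made quantitative. For a clock ticking at angular frequency ω whose two paths accumulate proper times differing by Δτ, the fringe visibility falls as |cos(ωΔτ/2)| – a standard result in clock interferometry. Plugging in generic numbers for a caesium clock (my illustration, not the parameters of any planned experiment) shows just how small the effect is.

```python
import math

# Fringe visibility for a "clock" whose two interferometer paths age by
# slightly different proper times. All parameters are generic
# illustrations, not a proposed experimental design.

c = 2.99792458e8                       # speed of light (m/s)
g = 9.81                               # gravitational acceleration (m/s^2)
omega = 2 * math.pi * 9.192631770e9    # caesium hyperfine clock frequency (rad/s)

dh = 1.0    # height difference between the two arms (m), assumed
t = 1.0     # time the clock spends in the arms (s), assumed

dtau = (g * dh / c**2) * t             # proper-time difference from time dilation
V = abs(math.cos(omega * dtau / 2))    # interference-fringe visibility

print(f"proper-time difference: {dtau:.2e} s")
print(f"visibility deficit 1 - V: {1 - V:.1e}")   # tiny, which is why the test is so hard
```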

“People have already measured time dilation due to Earth’s gravity,” says Brukner, “but they usually use two clocks in two different positions. We are saying, why not use one clock in a superposition?” Carrying out such a test, however, will not be easy. Because the effect is far smaller than other potential sources of decoherence, the interferometer would need to be cooled to just a few kelvin and enclosed in a vacuum, says Brukner.

The measurements would still be extremely tricky, according to Markus Arndt of the University of Vienna, who was not involved in the current work. He says they could require superpositions around a million times bigger, and 1000 times longer-lived, than is possible with the best equipment today. Nevertheless, Arndt praises the proposal for “directing attention” towards the interface between quantum mechanics and gravity. He also points out that any improvements to interferometers made for this work could have practical benefits, such as allowing better tests of relativity or enhancing tools for geodesy.

The research is published in Nature Physics.

Graphene light bulb shines bright

 

The first on-chip visible-light source to use the “wonder material” graphene as a filament has been created by an international team of researchers. The team found that small strips of freely suspended graphene, attached to metal electrodes, can reach temperatures of up to 2800 K – hot enough to emit visible light. The research, while preliminary, opens up intriguing scientific questions and possible applications, from atomically thin, transparent and flexible displays to optical interconnects for electronic circuits.

Generating light on the surface of a chip is key to developing fully integrated photonic circuits that would, in theory, use light to carry information. In many ways, the new light source functions much like an incandescent light bulb, which glows as its wire filament is heated by an electrical current. Among its other remarkable properties, graphene is known for its very high thermal conductivity, but this makes it hard to heat to extreme temperatures: if the material retained that conductivity, it could never be heated until it glowed in the visible spectrum, because the heat would continuously drain away.

Free-falling

Luckily, graphene’s thermal conductivity drops dramatically at high temperatures because of a process called “Umklapp scattering”, in which phonons scatter off one another. Graphene is normally mounted on a substrate, which acts as a heat sink and stops it from reaching such high temperatures, but research done in 2006 by Andrea Ferrari of the University of Cambridge and colleagues produced the first freely suspended graphene. In the new work, the researchers attached reliable electrical contacts to suspended graphene – something Ferrari, who was not involved in the study, describes as “much more challenging”.

The researchers were originally investigating suspended graphene’s potential for making nano-mechanical oscillators. Led by Young Duck Kim, then at Seoul National University and now at Columbia University in New York City, the team was studying how the oscillation frequency of graphene changes with temperature, applying a variable voltage bias to heat it, when Kim noticed something unusual. “During the measurement, I could see something blinking,” he says. “I realized that the blinking was light emission from the graphene.”

The heat is on

The team fabricated multiple devices from both monolayer and multilayer graphene, and investigated their potential scalability. It also tested devices made from graphene produced both by chemical vapour deposition and by the Scotch-tape method: all of them emitted visible light effectively. Based on theoretical models and analysis of the emission spectra, the researchers estimate that the temperature at the centre of the suspended sheets reached up to 2800 K.
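
Wien’s displacement law gives a quick feel for why that temperature matters: it locates the peak wavelength of thermal emission. Treating the graphene as an approximate grey body is a simplification (the measured spectra are strongly modified by interference), and only the 2800 K value below comes from the work – the cooler temperatures are my comparison points.

```python
# Wien's displacement law: peak wavelength of thermal emission.
# The grey-body treatment is a simplification; only 2800 K is the paper's
# estimate, and the cooler temperatures are illustrative comparisons.

b = 2.898e-3   # Wien displacement constant (m K)

for T in (1200, 1800, 2800):   # filament temperatures (K)
    peak_nm = b / T * 1e9
    print(f"T = {T} K  ->  thermal emission peaks near {peak_nm:.0f} nm")

# At 2800 K the peak lies near 1000 nm, just beyond the red end of the
# visible band (400-700 nm), so the short-wavelength tail of the spectrum
# glows visibly; cooler, substrate-bound devices radiate only infrared.
```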

By placing a layer of silicon behind the graphene, the researchers set up interference between the light emitted directly from the front of the sheet and the light emitted towards the back and reflected off the silicon. This produced constructive interference at some wavelengths and destructive interference at others, tuning the colour of the light produced. The devices are, at present, between 0.3% and 0.45% efficient – low for light sources in general, but 1000 times higher than previous graphene-based emitters, which used graphene mounted on a substrate. Those earlier devices could not emit at frequencies above the infrared because the substrate limited the temperature the graphene could reach. The researchers are currently working to increase the amount of light emitted at desired wavelengths, for example by using photonic crystals and optical cavities.
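
The colour tuning follows standard thin-film interference: light emitted towards the silicon travels an extra round trip of roughly twice the graphene–silicon gap before rejoining the directly emitted light, so some wavelengths reinforce and others cancel. Below is a minimal sketch with a made-up gap, ignoring the reflection phase shift at the silicon.

```python
# Which visible wavelengths interfere constructively for a given
# graphene-to-silicon gap d? Simplified condition: 2*d = m*lambda,
# ignoring the reflection phase shift and any spacer-layer index.
# The gap is a hypothetical value, not the actual device geometry.

d = 900e-9   # graphene-silicon gap (m) - hypothetical

for m in range(1, 10):            # interference order
    lam_nm = 2 * d / m * 1e9
    if 400 <= lam_nm <= 700:      # visible band
        print(f"order m = {m}: constructive peak near {lam_nm:.0f} nm")

# Changing the gap d shifts these peaks, which is how the colour of the
# emitted light can be tuned.
```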

Futuristic displays

The team suggests its device could have multiple applications, including in transparent and flexible lighting displays. “Graphene is well known as a transparent and flexible electrode material,” says Kim. “Now, we can also use graphene as a light source.” In addition, the researchers are investigating its potential to interface electronic circuits with optical ones – crucial to optical logic applications. “The heating and cooling are very fast,” Kim explains. “We expect to be able to turn on and off within 10 picoseconds, which means we can maybe achieve data transfer at 100 gigabits per second.”
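
The data-rate claim is simple arithmetic: if one on–off cycle takes about 10 ps, the modulation rate is its reciprocal. A quick check, assuming the idealized case of one bit per switching cycle:

```python
# Relating switching time to a rough data rate, assuming one bit per
# on-off switching period (an idealization - real links lose margin to
# encoding and signal integrity).

t_switch = 10e-12                 # switching time (s), as quoted by Kim
rate_gbps = 1 / t_switch / 1e9    # bits per second -> gigabits per second

print(f"~{rate_gbps:.0f} Gbit/s")   # ~100 Gbit/s, matching Kim's estimate
```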

“It’s a nice result. It’s of course a new result, but it’s not totally surprising,” says Ferrari, adding that the next step would be “to conduct similar experiments with other 2D materials such as molybdenum disulphide that may also become light emitters in this particular condition”. He is more sceptical about the potential for short-term technological applications. “The technique [to produce suspended graphene] is very laborious,” he says. “So the question is: can this be easily mass-produced? At the moment, the answer is probably no, but in the future, one can find a way of making useful devices.” Ferrari notes, however, that once backed with silicon, graphene ceases to be transparent, which might make transparent displays difficult.

The research is published in Nature Nanotechnology.

Busting dust deep underground in SNOLAB

By Hamish Johnston at the CAP Congress in Edmonton, Alberta

I’m a bit of a DIY enthusiast and one thing that I know about drilling into a masonry wall is that you should hold a vacuum-cleaner hose to the hole or you will end up with dust all over the wall and the floor below. Believe it or not, that is exactly what workers at SNOLAB in Canada do in order to keep background levels of radiation from affecting their dark-matter and neutrino detectors.


The cradle of modern science

Why did Newton’s laws take the form they did? We are so familiar with the three laws of motion, the inverse square law of gravity and even “toys” such as Newton’s cradle that we can easily overlook how such achievements were actually reached and how they fit in with our present understanding of the world.

In To Explain the World: the Discovery of Modern Science, the particle physicist and Nobel laureate Steven Weinberg addresses this gap by giving us a historical tour of the development of the scientific method as we know it today. On the way, he gives us plenty of his own comments on what he, as a physicist, thinks about the way we got there.

This story is fascinating, with plenty of ups and downs throughout history, yet a big question remains: why does Weinberg avoid placing this narrative squarely in the context of the philosophy of science? After all, he is dealing with some big questions, such as the degree to which laws really “explain” the world and the sense in which modern science was there to be “discovered”. Many philosophers have already pored over these topics in much greater depth, so why not make use of their expertise?

I will return to this question later, for Weinberg’s tour is worth taking in itself. He guides us through four main “landscapes” of scientific thought, starting with the early years of ancient Greece and some of the first major efforts to speculate about nature. Once these foundations are laid, he moves on to the great developments in Greek astronomy that culminated in the competing Aristotelian and Ptolemaic theories of planetary motions.

The next landscape is the Middle Ages, more multicultural in character, mixing Greek thought with contributions from Arab science and the European universities – all of which prepared the ground for the final landscape: the scientific revolution of the 17th century.

Here, Weinberg plunges us into the writings and thinking of the familiar characters of Copernicus, Galileo and Kepler, ending with Newton and the beginning of what we would call “modern science” (the final landscape in this journey).

On the way, he introduces us to less familiar but important contributors, such as Hipparchus, who is generally regarded as the greatest astronomical observer of the ancient world, and al-Khwarizmi, the 9th-century scholar working in Baghdad who brought us our familiar “Arabic” numerals (which are actually of Indian origin).

One aspect of the book that I particularly enjoyed was the inclusion of many quotes from works that I knew of, but had never had the time to look up myself, such as Aristotle’s On the Heavens (containing his empirical evidence for the spherical shape of the Earth) or the annotated quotes of the three laws of motion from Newton’s Principia. Some of Weinberg’s comments on the way we do science also serve as a good reminder for us today. For example, Newton achieved what he did by realizing the power of being able to predict simple phenomena (the law of gravity) without necessarily being able to explain the more complicated aspects (what gravity is). Very good advice; start with the simple.

The book is not overlong (the main text runs to just 267 pages) and is fairly easy to follow. It is technically thorough from the physics point of view, and the generous endnotes, bibliographic sources and index make it a useful resource for further exploration. Even so, I rather suspect that philosophers of science will have a field day with it.

While Weinberg offers some useful historical commentary, and makes some attempts to acknowledge certain aspects of the philosophy of science (such as the uncertainty of theories or the possibility that the Standard Model could change significantly), the book’s lack of connections to important previous works on the subject considerably weakens Weinberg’s attempts to shore up the position of realism – the idea that an objective reality exists that is independent of human beliefs, philosophical systems and so on.

Indeed, there is a stark contrast between Weinberg’s evident suspicion of philosophers (he does not appear to appreciate the philosophy of Bacon, Descartes or Kuhn much at all) and his obvious desire to contribute to the philosophy of science by putting forward his own arguments for realism and reductionism.

An example of this reluctance to engage may be found in the way in which Weinberg describes the separation between science and religion during the scientific revolution. He calls this a “divorce” brought about by the need to “outgrow a holistic approach to nature”. The classical view, though, is that the shift had more to do with the desire of Bacon, Galileo, Newton and others to remove personal bias from scientific endeavour (despite their having religious convictions). This fundamental change in thinking – one that is crystallized in the writings of Descartes as being a separation between matter and spirit, res extensa and res cogitans – is what gave modern science its objectivity. Why does Weinberg celebrate the pleasure scientists experience when a law is “discovered” and not want philosophers to experience the same pleasure in discovering patterns within the history of the scientific method?

In some ways, Weinberg’s desire to keep philosophy at a respectable distance is understandable, for two major reasons. The first is the difficulty scientists have in getting a handle on the philosophy of science as a subject. To remedy this difficulty, I would recommend having Mel Thompson’s very accessible Introduction to the Philosophy of Science (Hodder Headline 2001) at hand when reading To Explain the World, to help judge what Weinberg is saying. The second, more fundamental, reason for keeping philosophy at arm’s length is the attacks on science made by adherents of social constructivism. The Sokal–Bricmont affair of the mid-1990s, in which Weinberg was involved, illustrates well the disturbing gulf that still exists today between science and some strands of philosophy.

This incident was part of what was dubbed the “Science Wars”, although this was in fact less a meeting of armies (or minds) than the lobbing of boulders from entrenched positions (see for example Knowledge and the World, M Carrier et al., Springer 2004). If Weinberg wants to reinforce the position of realism – a worthy cause – I think he would do better to reconcile himself with classical philosophy of science. By admitting its usefulness and achievements, he would be better equipped to fight against the real enemy of science – the relativism at the base of post-modernism – without falling into the extremism of scientism. He could also take courage from a more recent philosophical movement, known as speculative realism, that appears in, for example, the work of Quentin Meillassoux (After Finitude) and the physicist-philosopher Bernard d’Espagnat (Veiled Reality).

In this way, instead of skirting round the battlefield, Weinberg could take us into it, equipped with the sort of weapons that philosophers understand and respect, to defend science against relativism in hand-to-hand combat. And, on the way, by raising again questions about the basis of science, radical progress might be made in such fields as reconciling quantum mechanics with general relativity – just as Kuhn observed often happens before a paradigm change.

  • 2015 Allen Lane/HarperCollins £20.00/$28.99 hb 432pp