UK pledges £65m towards US neutrino facility

The UK government will invest £65m towards a new US neutrino facility that is being built at the Sanford Underground Research Facility in Lead, South Dakota. The $1.5bn Long-Baseline Neutrino Facility (LBNF) will study the properties of neutrinos in unprecedented detail, as well as the differences in behaviour between neutrinos and antineutrinos. The UK–US agreement was signed yesterday by Jo Johnson, the UK’s science minister, and Judith Garber, who is US acting assistant secretary of state for oceans and international environmental and scientific affairs.

More than 1000 scientists from 31 countries are involved with the LBNF. It will take about a decade to build and once complete will be the world’s highest-intensity neutrino beam. The centrepiece of the LBNF is a four-storey-high neutrino detector – dubbed the Deep Underground Neutrino Experiment (DUNE) – that will be built almost 1500 m underground in South Dakota. DUNE will measure the neutrinos that are generated by Fermilab, which lies around 1300 km away just outside Chicago. The detector is made up of four tanks that are each filled with 17,000 tonnes of liquid argon. While the civil-construction costs for DUNE will be met by the US, international partners are expected to contribute about $500m towards the design and construction of the accelerator and detector.

“Significant and exciting”

“This investment is a significant and exciting step for the UK that builds on UK expertise,” says Brian Bowsher, chief executive of the Science and Technology Facilities Council (STFC), which is managing the UK’s investment in the facility. “International partnerships are the key to building these world-leading experiments, and the UK’s continued collaboration with the US, through STFC, demonstrates that we are the science partner of choice in such agreements.”

Fermilab director Nigel Lockyer added that the agreement “ensures that LBNF/DUNE will have great scientific and technical strength on the team”.

The funding for the LBNF is part of the first-ever UK–US science and technology agreement, also announced yesterday, which will pave the way for closer future research collaborations.

Ultra-high-energy cosmic rays have extra-galactic origins

Proof that ultra-high-energy cosmic rays come from outside the Milky Way has been published by astrophysicists working on the Pierre Auger Observatory in Argentina. Their measurement has a statistical significance of 5.2σ and appears to settle a decades-old debate about the origins of cosmic rays with energies greater than about 1 EeV (10¹⁸ eV).

Cosmic rays are mostly atomic nuclei that bombard Earth from outer space and have energies that range from about 10⁹ eV up to 10²⁰ eV. Because they have electrical charge, cosmic rays are deflected by the magnetic fields that permeate the Milky Way. This process can be likened to the random scattering of light by a thick fog and it tends to destroy all information about where the cosmic rays came from.

As a result, cosmic rays detected on Earth appear to arrive in equal numbers from all directions. This has left astronomers wondering whether the particles are accelerated within the Milky Way, or if they have extragalactic origins.

Small deflections

However, this directional scrambling is not expected to be perfect. It is possible that some directional information could be extracted from measurements of extremely high-energy cosmic rays, because these are not deflected as much by magnetic fields as their lower-energy counterparts.
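To put rough numbers on that, a cosmic ray’s gyroradius in the galactic magnetic field grows linearly with its energy: for an ultra-relativistic nucleus of charge Ze, the Larmor radius is r = E/(ZeBc). The sketch below assumes a generic ~3 microgauss field, a textbook value rather than a figure from the Auger analysis, and shows why only the most energetic particles keep a memory of their source direction.

```python
# Back-of-envelope Larmor radius r = E / (Z e B c) for an
# ultra-relativistic cosmic-ray nucleus. The ~3 microgauss galactic
# field is a generic textbook value, assumed here for illustration.
e = 1.602e-19    # elementary charge (C)
c = 2.998e8      # speed of light (m/s)
kpc = 3.086e19   # kiloparsec in metres
B = 3e-10        # ~3 microgauss, in tesla

def larmor_radius_kpc(energy_eV, Z):
    """Gyroradius in kpc for a nucleus of charge Z and energy in eV."""
    return (energy_eV * e) / (Z * e * B * c) / kpc

for E in (1e18, 8e18, 1e20):   # 1 EeV, 8 EeV, 100 EeV
    print(f"proton at {E:.0e} eV: r ~ {larmor_radius_kpc(E, Z=1):.1f} kpc")
```

At 8 EeV a proton’s gyroradius is a few kiloparsecs, a sizeable fraction of the Milky Way’s radius, so its path is only mildly bent; at 1 EeV the radius shrinks nearly tenfold and directional information is scrambled.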

This has now been confirmed by an international team of researchers that has studied the arrival of more than 30,000 ultra-high-energy cosmic rays with energies greater than 8 EeV.

When a cosmic-ray particle collides with a nucleus in the atmosphere, it creates a shower of billions of particles that rain down on Earth. The Pierre Auger Observatory comprises 1600 Cherenkov particle detectors that are spread over 3000 km² in Argentina. Multiple detectors see the shower and a careful measurement of the arrival times at each detector gives the direction of the cosmic ray. The energy of the cosmic ray is determined by the intensity of the signals in the Cherenkov detectors.
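The timing reconstruction can be illustrated with a simple plane-wave fit: if the shower front is treated as a flat disc travelling at the speed of light, the relative arrival times at the ground detectors depend linearly on the shower direction. The minimal sketch below uses an invented four-tank layout, not Auger data, and the observatory’s real reconstruction is considerably more sophisticated.

```python
import numpy as np

c = 2.998e8  # speed of light (m/s)

# Invented layout: four tanks on a 1.5 km grid (positions in metres)
tanks = np.array([[0, 0], [1500, 0], [0, 1500], [1500, 1500]], float)

def plane_wave_fit(xy, t):
    """Fit t = t0 + (a*x + b*y)/c, where (a, b) are the horizontal
    components of the unit vector along the shower's travel direction."""
    A = np.column_stack([np.ones(len(t)), xy[:, 0] / c, xy[:, 1] / c])
    t0, a, b = np.linalg.lstsq(A, t, rcond=None)[0]
    zenith = np.degrees(np.arcsin(np.hypot(a, b)))  # 0 deg = straight down
    azimuth = np.degrees(np.arctan2(b, a))
    return zenith, azimuth

# Synthesize arrival times for a shower tilted 30 deg from vertical
true_ab = np.array([np.sin(np.radians(30)), 0.0])
times = tanks @ true_ab / c              # seconds, relative to tank 0
print(plane_wave_fit(tanks, times))      # recovers ~ (30.0, 0.0)
```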

Fluorescent light

There are also 27 fluorescence telescopes located at four separate sites in the observatory region. These detect fluorescent light that is emitted when shower particles interact with nitrogen in the atmosphere. This information is used to refine the energy and direction measurements made by the Cherenkov detectors.

The measurements revealed that the arrival rate of ultra-high-energy cosmic rays is about 6% greater in one half of the sky. What is more, the excess lies about 120° away from the centre of the Milky Way – suggesting extra-galactic origins. After correcting its data for the expected bending of these cosmic rays by the magnetic fields of the Milky Way, the team says that the particles appear to be coming from directions in space that have a high density of nearby galaxies.
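The quoted significance is easy to sanity-check with a counting argument. If arrivals were isotropic, the split of roughly 30,000 events between the two halves of the sky would be binomial, so a 6% excess on one side sits several standard deviations from an even split. The rough check below is, of course, a simplification of the collaboration’s full harmonic analysis in right ascension.

```python
from math import sqrt

N = 30_000      # events above 8 EeV (approximate)
excess = 0.06   # one half of the sky sees ~6% more events than the other

n_hot = N * (1 + excess) / (2 + excess)  # counts in the "hot" hemisphere
diff = n_hot - (N - n_hot)               # excess over the "cold" hemisphere

# Under isotropy, the count difference fluctuates with st. dev. sqrt(N)
print(f"excess of {diff:.0f} events -> ~{diff / sqrt(N):.1f} sigma")  # ~5
```

This crude two-hemisphere estimate lands at about 5σ, consistent with the 5.2σ obtained from the full analysis.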

Exciting result

“I consider this to be one of the most exciting results that we have obtained and one which solves a problem targeted when the observatory was conceived by Jim Cronin and myself over 25 years ago,” says Alan Watson of the University of Leeds, who is emeritus spokesperson for the Pierre Auger Observatory.

Because ultra-high-energy cosmic rays are not produced in our galaxy, it is likely that their origins are in galaxies that do not resemble the Milky Way. Watson points to galaxies such as Centaurus A, which appears to contain a supermassive black hole that powers relativistic jets of particles. Watson told Physics World that shock waves in such jets could accelerate nuclei so that they become ultra-high-energy cosmic rays.

The next step for the Pierre Auger Observatory is to get a better understanding of what types of nuclei make up ultra-high-energy cosmic rays. This is the task of the next phase of the observatory, called AugerPrime, which will run until 2025. It involves covering each Cherenkov detector with a plastic scintillator that is used to detect muons in the cosmic-ray showers. Knowing the muon content will allow scientists to work out whether a shower was created by a hydrogen or an iron nucleus, for example. Different nuclei have different masses and charges, which determine how a cosmic ray is bent by magnetic fields. Having this information could lead to a better understanding of the magnetic fields in the Milky Way – and ultimately pinpoint the sources of ultra-high-energy cosmic rays.

The research is described in Science.

Drop-on-demand bioinks foster angiogenesis

The biofabrication of 3D-printed tissues has great potential for both transplantation and disease modelling. A major limitation of 3D-printed tissue constructs, however, is their lack of vascularization, which restricts the supply of oxygen and nutrients to all regions of the fabricated tissue. Researchers from RWTH Aachen University Hospital in Germany have used blended biomaterials to enable vascularization in 3D-printed tissue (Biofabrication 9 045002).

Biomaterial blend
By blending two hydrogel-based biomaterials with differing physical and functional properties, the researchers created a bioprintable biomaterial, also known as a bioink. Hydrogels are biopolymers or peptides that retain a high proportion of water (around 99%) whilst providing the structural properties of a solid, delivering important nutrients and 3D support to cells.

The research group mixed gelatin methacrylate (GelMA), chosen for its structural properties and its ability to be crosslinked with UV light, with collagen, chosen for functional properties that allow cells to bind, migrate and grow. This combination produced a bioink with strong shear-thinning properties and an appropriate stiffness. The correct mix of these parameters resulted in both increased printing accuracy and improved cellular performance.

Angiogenesis
The German group also observed amplified cell spreading and the formation of vasculature, indicating angiogenesis — the formation of new blood vessels from already formed blood vessels. The surrounding environment of cells — the extracellular matrix (ECM) — is crucial to angiogenesis, with remodelling of the ECM necessary for blood vessel growth. In 3D tissue constructs, the biomaterial used to encapsulate cells takes on the role of mimicking the ECM. Therefore, the bioink blend developed by the Aachen researchers needed to provide specific physical and biochemical cues to cells, whilst enabling the same remodelling allowed by the ECM of in vivo tissue.

Printing effects on cells
A concern for 3D bioprinting is the effect of the printing process on cell viability, since multiple potentially damaging physical forces are applied to cells. The process used in this study forces cells through a microvalve, after which UV light is applied to crosslink and solidify the structure. Using a mixed culture of two different vascular cell types — human umbilical vein endothelial cells (HUVECs) and human mesenchymal stem cells (hMSCs) — to develop the 3D models, the researchers found that neither the printing nor the UV-crosslinking procedure affected cell viability.

The research group has successfully developed a bioink that allows for angiogenesis while providing a high level of structural and spatial 3D-printing accuracy. The bioink can be used to biofabricate a variety of vascularized models and tissues for transplantation, and presents an ideal platform for studying vascular formation under a range of conditions, for example modelling the post-stroke neurovascular unit or researching tumour angiogenesis.

How science gets women wrong

“Writing the book has made me question my own feelings about the world,” says Angela Saini, author of Inferior: How Science Got Women Wrong. The engineer-turned-journalist admits that she fully expected to discover more clear-cut differences between men and women, and was surprised by the inconclusive science behind many claims. One of Saini’s key points is that scientific studies of gender always need to be viewed within their historical and cultural contexts. Journalists and science communicators also play a role in translating research findings, which often include subjective interpretations.

Also in the podcast, presenter Andrew Glester travels to Birmingham for the International Conference on Women in Physics (ICWIP), which took place earlier this year. Together with Physics World journalist Sarah Tesh, he meets delegates who share their experiences of working in physics. Among them are Jess Wade from Imperial College London, who reviewed Saini’s book for the July issue of Physics World, and Helga Danga, who says she does not know of any other female physicists in Zimbabwe.

For more information about the ICWIP event, check out Sarah’s account on our blog.

Seeking causes of Mexico City’s earthquake

By James Dacey

At the time of writing, the official death toll stands at more than 200 people following the magnitude 7.1 earthquake that struck near Mexico City on Tuesday. According to the secretary of education, 200 schools have been affected, including the Enrique Rébsamen elementary school in Mexico City’s southern Coapa district, where 37 people died, as reported by the BBC. Meanwhile, buildings have collapsed at a campus of the Monterrey Institute of Technology, also in the south of the city, killing five people and injuring 40.

In a cruel twist of fate, the quake struck on the 32nd anniversary of the 1985 Mexico City earthquake, which led to the deaths of up to 10,000 people. Even though yesterday’s event is likely to claim fewer victims than the 1985 disaster, it is still a shocking reminder of how vulnerable Mexico City is to earthquake damage.

Yesterday’s quake struck at 13:14 local time, with its epicentre around 115 km southwest of Mexico City’s urban centre. As with the 1985 event, the earthquake occurred along the fault zone where the oceanic Cocos plate subducts beneath the less dense continental section of the significantly larger North American plate. Energy is released in regular seismic events along this subduction zone – indeed, a magnitude 8.1 earthquake struck on 7 September just off Mexico’s southern coast, killing almost 100 people, mostly in the state of Oaxaca.
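For a sense of scale, the magnitude scale is logarithmic: radiated seismic energy grows by a factor of about 32 per unit of magnitude, so the September quake released far more energy than Tuesday’s despite striking much further from the capital. A quick comparison follows; note that the 1985 event’s magnitude, commonly given as 8.0, is not stated above.

```python
# Radiated seismic energy scales roughly as E ~ 10^(1.5 * M), so the
# ratio between two quakes depends only on their magnitude difference.
def energy_ratio(m_big, m_small):
    return 10 ** (1.5 * (m_big - m_small))

print(f"M8.1 vs M7.1: ~{energy_ratio(8.1, 7.1):.0f}x the energy")  # ~32x
print(f"M8.1 vs M8.0 (1985): ~{energy_ratio(8.1, 8.0):.1f}x")      # ~1.4x
```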

Mexico seismology map

So why is Mexico City so prone to earthquake disasters?

The image at the top of this article – produced using data from NASA’s Landsat Thematic Mapper – is a view across Mexico City from the north-northwest, with the stratovolcanoes Popocatépetl and Iztaccíhuatl visible in the background. It reveals that the city lies in a basin that used to contain Lake Texcoco, which was drained by the Spanish from the 17th century onwards. What was left behind were the sediments of the lake bed, which form a so-called “lacustrine sequence”. The resulting geology is a key factor in why modern-day Mexico City is so vulnerable to earthquake damage, according to Jaime Urrutia Fucugauchi, a geophysicist at the National Autonomous University of Mexico (UNAM).

Urrutia says these sedimentary beds have a characteristic frequency response, and ground motions can be amplified locally, depending on factors such as the size and orientation of buildings. There may also be a secondary effect from seismic waves becoming “trapped” inside the basin, which is underlain by volcanic rocks. Urrutia compares the resulting amplification to the ringing of a wine glass exposed to certain frequencies. He says that intermediate-sized buildings in the central and southern sectors of Mexico City are particularly at risk from this effect, as was observed in the 1985 event.

“Infrastructure is more adequate now [than in 1985], though problems persist as shown by the collapsed buildings,” says Urrutia. He adds that only minor damage has been reported at the sprawling UNAM campus, a major hub for Mexican physics.

Arturo Menchaca Rocha, a UNAM physicist, says Mexico City’s geology behaves like a gel, producing chaotic resonant patterns on its surface. “When the characteristic wavelength of a local resonance coincides with a building height (or is a sub-multiple of it), the structure resonates too, magnifying the effect,” he explains.
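A generic engineering rule of thumb makes the resonance matching concrete: a building’s fundamental period is roughly 0.1 s per storey, while soft lakebed basins like Mexico City’s tend to ring at periods of about one to two seconds. Both numbers in the sketch below are textbook approximations rather than figures from UNAM, but they show why buildings of intermediate height suffer most.

```python
# Rule of thumb: an N-storey building has fundamental period ~ 0.1*N s.
# Soft lakebed basins typically resonate at ~1-2 s (generic textbook
# values, used only to illustrate the matching effect described above).
def building_period_s(storeys):
    return 0.1 * storeys

band = (1.0, 2.0)  # assumed basin resonance band, in seconds
at_risk = [n for n in range(1, 41) if band[0] <= building_period_s(n) <= band[1]]
print(f"storeys in the resonance band: {at_risk[0]}-{at_risk[-1]}")  # 10-20
```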

Menchaca is not aware of any damage to Mexico’s major physics facilities such as the HAWC Gamma-Ray Observatory or the Large Millimeter Telescope, both located on the extinct Sierra Negra volcano in Puebla. But some of his colleagues in Mexico City have reported damage caused by measuring devices falling from their desks.

Find out more about why it is so difficult to predict when and where the next major earthquake will strike by watching the short film On Shaky Ground.

You can also read this Physics World special report on physics in Mexico, including a plan to use muons to help predict eruptions of the Popocatépetl volcano.

Binary supermassive black hole system spotted in Seyfert galaxy

A pair of supermassive black holes (SMBHs) could be orbiting each other in a galaxy that is 400 million light-years from Earth – according to astronomers in India and the US. The binary system appears to have a combined mass of about 40 million Suns and the black holes are separated by just over one light-year. The observation appears to back up a theoretical prediction linking galactic radio emissions to the presence of a binary SMBH.

Today there is only one confirmed sighting of a binary SMBH, in a radio galaxy called 0402+379. This object is 750 million light-years away; its two SMBHs have a combined mass of 15 billion Suns and are separated by about 24 light-years.

Compact and closer

Now, Preeti Kharb and Dharam Lal at the National Centre for Radio Astrophysics in Pune and David Merritt at the Rochester Institute of Technology have identified a possible second binary SMBH that is more compact and closer to Earth. It is in the Seyfert spiral galaxy NGC 7674 and was studied using the Very Long Baseline Array (VLBA) of radio telescopes in the US.

The exceptionally good angular resolution of the VLBA allowed the trio to identify two compact sources of intense radio waves at the centre of NGC 7674. “The two radio sources have properties that are known to be associated with massive black holes that are accreting gas,” explains Kharb.

Gravitational waves

Calculations suggest that the binary has an orbital period of about 100,000 years. While it will be broadcasting gravitational waves, they are far too low in frequency to be seen by existing or planned detectors. Writing in Nature Astronomy, the trio point out that such objects would contribute to the gravitational-wave background signal that is expected to permeate the cosmos.
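That claim is simple to check: a circular binary emits gravitational waves at twice its orbital frequency, so a 100,000-year period corresponds to a frequency far below the nanohertz band probed by pulsar-timing arrays, let alone the bands of LISA or LIGO.

```python
SECONDS_PER_YEAR = 3.156e7

P_orbit = 1e5 * SECONDS_PER_YEAR   # ~100,000-year orbital period
f_gw = 2 / P_orbit                 # dominant emission at twice the orbital frequency

print(f"GW frequency ~ {f_gw:.1e} Hz")  # ~6e-13 Hz
# Pulsar-timing arrays are sensitive around 1e-9 to 1e-7 Hz and LISA
# around 1e-4 to 1e-1 Hz, so this binary sits well below both bands.
```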

Merritt points out that NGC 7674 is a “Z-shaped” radio source, which refers to the twisted shape of the galaxy’s radio emissions. This peculiar structure is thought to form in the aftermath of the merger of two galaxies, each containing a SMBH. “Detection of a binary supermassive black hole in this galaxy also confirms a theoretical prediction that such binaries should be present in so-called Z-shaped radio sources,” says Merritt.

Binary SMBHs are thought to exist at the heart of some elliptical galaxies that formed by the merger of two large spiral galaxies – each galaxy bringing its own SMBH. In contrast, Seyfert galaxy formation is not believed to involve the merger of large galaxies. Seyferts are therefore not expected to harbour binary SMBHs, making this latest observation a surprising one.

Grabbing a slice of the pie in the sky

By Margaret Harris at the European Planetary Science Congress in Riga, Latvia

If you wanted to mine an asteroid, what would you need? Right now, it’s a hypothetical question: only a handful of spacecraft have ever visited an asteroid, and fewer still have studied one in detail. As commercial ventures go, it’s not exactly a sure thing. But put that aside for a moment. If you wanted to create an asteroid-mining industry from scratch, how would you do it?

Well, for starters, you’d need to know which asteroids to target. “Not every mountain is a gold mine, and that’s true for asteroids too,” astrophysicist Martin Elvis told audience members at the European Planetary Science Congress (EPSC) yesterday. For every platinum-rich asteroid sending dollar signs into investors’ eyes, Elvis explained, there are perhaps 100 commercially useless chunks of carbon whizzing around out there, and the odds for water-rich asteroids aren’t much better. Moreover, some of those valuable asteroids will be impractical to mine, either because of their speed and location or because they’re too small to give a good return on investment. “Smaller asteroids aren’t even worth a billion dollars,” Elvis scoffed. “Who’d get out of bed for that?”

Identifying the “right” asteroids is only the start. Last year, participants in the first ever joint conference between asteroid scientists and mining engineers identified several other key challenges. One is the need to understand how the surface and subsurface of an asteroid might behave when disturbed by a landing, sampling or digging robot. For the nascent asteroid-mining community, ESA’s Philae probe – which landed on the comet 67P/Churyumov–Gerasimenko in November 2014 – isn’t so much a success story as a cautionary tale. Tomas Kohout, a planetary scientist at the University of Helsinki, Finland, and one of the asteroid-mining session’s other speakers, reminded me during the evening poster session that all three of Philae’s tools for latching onto the comet had failed, and its shadowy landing site had rendered its solar panels useless, severely limiting its lifespan. That didn’t keep Philae from doing good science; indeed, its parent Rosetta mission was hailed as Physics World’s 2014 “Breakthrough of the Year”. But for a mining lander, it would have been disastrous.

Given the technical and knowledge barriers that must be overcome before anyone can turn a profit from asteroid mining, I was surprised to see that one of the session’s speakers, J L Galache, comes from a start-up firm, Aten Engineering. After the session, I asked how he’d managed to get funding to mine asteroids. “I’m not mining, I’m prospecting,” he replied immediately. “These are different things.” Galache also referred to a government initiative in Luxembourg, a country with a well-developed commercial space sector, that aims to boost R&D in the field. “That’s been a big deal for us,” he said, though he acknowledged that it was still difficult to find investors interested in such a long-term project.

Aside from the Luxembourg initiative, another factor tugging asteroid mining away from the realm of science fiction is a growing realization that the main goal is not to bring giant chunks of precious metals back to Earth (which would in any case cause the prices of these commodities to crash). The real prize is using material from asteroids to develop what the session’s first speaker, Luxembourg space policy officer Mathias Link, called “Earth-independent architecture.” Taking material from Earth and putting it into space is incredibly expensive; during his talk, Elvis joked that the water bottle on the lectern would be worth $5000 on the International Space Station. At those prices, it’s worth trying to exploit material that’s already in space, even if doing so requires techniques that would be economically unfeasible on Earth, such as extracting water from metal hydrides and turning it into hydrogen and oxygen fuel.
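Elvis’s quip encodes a real number. Dividing his $5000 bottle by the roughly half-kilogram of water it holds implies a delivery cost of order $10,000 per kilogram, in the ballpark of published figures for cargo to the ISS; the bottle’s mass is an assumption here, not a figure from the talk.

```python
bottle_value_usd = 5000   # Elvis's figure for a lectern water bottle on the ISS
bottle_mass_kg = 0.5      # assumed: a typical 500 ml bottle of water

cost_per_kg = bottle_value_usd / bottle_mass_kg
print(f"implied delivery cost: ${cost_per_kg:,.0f}/kg")        # $10,000/kg

# At that price, every tonne of water sourced in space rather than
# launched from Earth avoids ~$10m in launch costs.
print(f"1 tonne avoided launch: ${cost_per_kg * 1000:,.0f}")
```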

Although asteroids were the focus of the EPSC talks, the session’s organizer, Amara Graps, told me afterwards that they aren’t the only potential target for space mining. Many near-Earth orbits are currently occupied by defunct spacecraft, and although scavenging for parts in 1970s telecoms satellites might not get investors’ hearts pounding, it’s an avenue worth pursuing if we want to build structures in space – not least because we know where old satellites are, whereas near-Earth asteroids generally “disappear” within a week of being discovered. And wouldn’t it be great if space junk, one of space exploration’s ugliest legacies, could become part of a solution rather than a problem?

Scale is key to predicting climate-change responses

While small habitats may respond to climate change quickly, those on larger scales may delay their responses until they reach a tipping point. That’s one of the findings of a team studying a subarctic lake in northern Sweden.

Carsten Meyer-Jacob of Sweden’s Umeå University and his collaborators from the UK and Switzerland showed that the large Lake Torneträsk reacted differently to the small lakes in its catchment area.

Although Torneträsk responded to climate warming over the past 9500 years or so in a similar way to its smaller cousins, its response diverged when the climate cooled about 3400 years ago. The small lakes reacted to this cooling very quickly, but initially Torneträsk showed almost no change.

Only when the Little Ice Age occurred in the last millennium did Torneträsk react. The Little Ice Age acted as a thermal tipping point, causing Torneträsk to exhibit an abrupt ecosystem response more than 2000 years later than the smaller lakes nearby.

The team also found that Torneträsk had an immediate and pronounced response to recent climate change, highlighting both the variability of potential responses to climate change and the magnitude of the ongoing climate upheaval in the Arctic.

To come up with their results, Meyer-Jacob and colleagues took cores of sediment from the bottom of Lake Torneträsk and the upstream Lake Abiskojaure. Different periods of deposition, characterized by different types of sediment, were visible in the cores. Geochemical analysis provided information such as the degree of weathering of the incoming sediment, which depends on precipitation and vegetation cover. Looking at the amount of silica produced by aquatic algae, meanwhile, showed the productivity of the lake.

The results revealed the full history of the lakes since the last ice age. Initially a series of small lakes dammed by ice flows filled the basin. By roughly 6600 years ago glaciers had completely receded and the local area began to recover; lake sediments record the gradual build-up of soils and more complex plant communities.

The landscape continued to develop until around 750 years ago, when the onset of the Little Ice Age brought temperatures roughly 1°C below today’s. This cold period enhanced soil erosion in the area, radically changing the composition of the sediment entering the lake – even though more than 2000 years of steady cooling had had little measurable effect. This suggests either that the Little Ice Age caused extreme climatic changes, or that its onset pushed temperatures across a critical threshold.

Another major ecosystem change in Lake Torneträsk took place in the last century, with the return of the sediment composition to pre-Little Ice Age levels, indicating reduced soil erosion. This immediate response is due, the scientists believe, to ongoing climate warming that has seen local temperatures rise 2.5°C in the last 100 years. Their results demonstrate how ecosystems can respond to stimuli very differently at different scales, as well as how a relatively small increase in a stimulus can produce a large change.

Meyer-Jacob and his colleagues published their work in Palaeogeography, Palaeoclimatology, Palaeoecology.

How to make half-metals that contain no metals

A new type of material called a “spin-valley half-metal” has been predicted by calculations done by physicists in Russia, Japan and the US. While the material has not yet been characterized in the laboratory, the team says it could find use in new types of biocompatible and carbon-based electronics.

Half-metals are materials in which only electrons with a specific spin polarization (spin-up, for example) participate in electrical conduction. These materials can therefore create currents with 100% spin polarization. This means that half-metals could be very useful for making spintronic devices – components that use the spin of the electron to store and process information.
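The defining property can be stated compactly: the spin polarization of a current is P = (σ↑ − σ↓)/(σ↑ + σ↓), where σ↑ and σ↓ are the conductivities of the two spin channels. A half-metal is the limiting case in which one channel does not conduct at all, as the small illustration below shows.

```python
def spin_polarization(sigma_up, sigma_down):
    """Current spin polarization P = (s_up - s_down) / (s_up + s_down)."""
    return (sigma_up - sigma_down) / (sigma_up + sigma_down)

print(spin_polarization(1.0, 1.0))  # ordinary metal: equal channels, P = 0
print(spin_polarization(3.0, 1.0))  # ferromagnetic metal: partial, P = 0.5
print(spin_polarization(1.0, 0.0))  # half-metal: one channel insulating, P = 1
```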

Strong interactions

Materials that are known to be half-metals are compounds containing transition metals such as nickel and manganese. These elements ensure that there are strong interactions between conduction electrons, which result in the spin polarization. However, these metallic compounds are not suitable for several future applications of spintronics such as biocompatible electronics and devices based on carbon and organic molecules.

This latest work is by Alexander Rozhkov, Artem Sboychakov, Kliment Kugel and colleagues at the Institute for Theoretical and Applied Electrodynamics in Moscow, RIKEN in Japan and the University of Michigan in the US. The team has done calculations that suggest that half-metals can be made from compounds that do not contain transition metals.

The researchers focused on materials called spin-density wave insulators, which contain a microscopic periodic arrangement of regions with non-zero spin polarization. These materials have four energy bands that are characterized by the carrier charge (electron or hole) and the carrier spin (up or down). Such bands are often referred to as valleys, and the field of devices that seek to exploit them for practical purposes is known as “valleytronics”.

Two valleys

Calculations done by Rozhkov and colleagues show that when certain spin-density wave insulators are doped by adding impurity atoms, two of the valleys will acquire charge carriers and therefore support electrical conduction. These two valleys can form the basis of a half-metal. Depending on the composition of the doped material, it can either be a conventional semi-metal or a new type of material that the researchers have dubbed a “spin-valley half-metal”.

Sboychakov says that it is now up to experimental physicists to try to create the doped compounds. “There are plenty of materials adequately described by the model we dealt with,” he says. “I am therefore convinced that the phase we predicted will eventually be discovered, either in a material that is available today or in one that is yet to be synthesized.”

The research is described in Physical Review Letters.

Making space

In a column last year I wrote about a book by the historian of science Jimena Canales (January 2016 p19). Entitled The Physicist and the Philosopher: Einstein, Bergson, and the Debate That Changed Our Understanding of Time, her book describes an encounter that took place in 1922 between Albert Einstein and the French philosopher Henri Bergson. The book was notable for dramatizing the gap between Einstein’s approach to “objective time” as quantitative and measurable, and Bergson’s notion of “experienced time” as a flux in which past, present and future are knitted together.

A century later, the gap in our thinking about time persists. But a similar gap also exists in notions of space, even if that divergence has never crystallized into a specific encounter as it did between Einstein and Bergson over time. Physicists are apt to dismiss the everyday experience of space, a smooth 3D arena in which things always have definite positions, as an illusion: a by-product of the limited sensory faculties of humans. As the German mathematician Hermann Minkowski proclaimed in 1908, “space by itself, and time by itself, are doomed to fade away into mere shadows”, for only 4D space–time can preserve “an independent reality”. In quantum mechanics, moreover, the uncertainty principle forbids things from having definite locations.

In more speculative theories, space is more complicated still. Some versions of quantum-field theory picture a fluctuating space–time foam. In loop quantum gravity, meanwhile, space is quantized, with its patches unable to become infinitely small. As for string theorists, they insist on 10, 11 or 26 dimensions, with the extra ones closed or “compactified” so that they are unseen even in current scientific experiments. The Columbia University theorist Brian Greene’s 2011 book The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos discusses no fewer than nine types of alternate universes.

Philosophers, by contrast, tend not to consider the ordinary experience of space illusory. They are more interested in features that make such experience possible – features that are not incidental or subjective but belong to the full reality of spatial experience. As the French philosopher Maurice Merleau-Ponty wrote in his 1945 book Phenomenology of Perception, these aspects are like “the darkness needed in the theatre to show up the performance”.

Philosophers, for instance, distinguish between “allocentric”, “perceptual” and “bodily” space. Allocentric space is an objective space in which locations and orientations can be defined – with a GPS, say – without reference to an observer’s location. Perceptual or “egocentric” space was identified in the 18th century by the German philosopher Immanuel Kant. It is based on the spatial orientation provided by an observer’s body – up and down, right and left, front and back – without which it would be impossible to locate something in allocentric space even with a compass or GPS device.

As for bodily or “proprioceptive” space, it was described by 20th-century philosophers (including Merleau-Ponty) as an awareness of the presence of your own body and its ability to move. It is the sense you have of your head and hands as you hold this magazine – or as you operate the phone or tablet you’re reading it on. Bodily space is the non-mathematical sense called upon in walking, playing and operating objects such as a GPS or a compass.

Ambitious disciplines

Given the gap between the physical and philosophical approaches to space, you might wonder why we don’t just assume that physicists and philosophers investigate different things: space and place, say, or Space and space.

That won’t work, because physics and philosophy are both ambitious disciplines; each aims to describe the world, not just a particular slice of it. As the physicist John Bell wrote: “To restrict quantum mechanics to be exclusively about piddling laboratory operations is to betray the great enterprise. A serious formulation will not exclude the big world outside the laboratory.”

That big world includes human experiences. The German philosopher Edmund Husserl, for example, denounced the “scientific fanaticism” of those who think they are studying the world when they rely only on what shows up in laboratories. Philosophy’s task, as Husserl saw it, is to investigate the basic features of all human activities, including science, and the experiences that make them possible. Both physicists and philosophers, then, insist that they are the ones talking about “Space” rather than “space”, and that it is they who grasp the relation between the two.

What most divides physicists and philosophers on the issue of space seems to boil down to their answers to the question of whether consciousness is a fundamental feature of the world. Physicists begin with the assumption that what they study precedes and is independent of interactions with observers. Philosophers – or at least those who follow Husserl’s general starting point – begin with and never fully leave behind experiences of connections between humans and the world. That makes it hopeless either to try to reduce one starting point to the other, or to develop some larger conception of space to encompass both in an artificial compromise.

The critical point

Let’s adapt Bell’s image of the lab and the “big world”. Space that’s investigated in the lab – where there are trained staff, special assumptions, advanced equipment and controlled conditions – looks the way scientists find it to be, and it can also be used to describe much of what’s on the outside. Yet humans are born into and inhabit that “big world” first, and rely on the practical experience of space that philosophers address. That space is also found inside the lab, where scientists practically handle equipment and make measurements using perceptual and bodily space.

What’s in the lab is also in the big world, and what’s in the big world is also in the lab. Recognizing this multidimensionality makes space capacious enough for both physicists and philosophers alike.
