
Fancy a flutter on the Higgs?


By Hamish Johnston

Alexander Unzicker wants you to place a bet on the Higgs boson — or more precisely whether it will be discovered at the LHC.

He has set up a site called www.bet-on-the-higgs.com, where he explains how you can bet on the discovery of the Higgs at intrade.com — which is a ‘prediction market’ based in Ireland.

If you are tempted to place a bet, please make sure that doing so is legal where you live.

Here in Dirac House I can’t even look at intrade’s Higgs pages — they are ‘forbidden’ — but Unzicker has posted a screen shot of his intrade account, which I have reproduced below.

Buy low, sell high

Sorry about the fuzziness, but if you squint you can see that he is wagering on the ‘Observation of the Higgs Boson Particle’.

What’s not clear from the screen shot is whether Unzicker — who teaches maths and physics in Munich — is betting for or against the discovery…you’ll have to check his website to find out.

Fusion challenges and solutions

Fusion challenges and solutions (PDF, 8MB): http://download.iop.org/pw/PWFusionDec09_web.pdf

By Hamish Johnston

“The governments of the world have made a substantial investment in fusion research; the time has come to begin to capitalize on this.” This is the battle-cry of Stephen O Dean, president of the research and education foundation Fusion Power Associates.

Writing in our latest supplement Fusion challenges and solutions, Dean argues that now is the time for the fusion community to pull together to make fusion power a reality.

You can download a PDF of the 16-page supplement here and read the following articles:

JET set to break own fusion record
The completion of a €60m upgrade means the Joint European Torus can better mimic the technology needed for ITER, as Andy Extance reports.

Laser fusion shifts into HiPER drive
As an alternative to using magnets, laser-driven fusion power is coming to the fore. Margaret Harris describes the current state of play.

Building career prospects in fusion
Greg Tallents and Howard Wilson showcase two new UK postgraduate training programmes educating the fusion scientists of the future.

Fusion supercomputer starts up
The High Performance Computer for Fusion, with a peak performance of 100 teraflops, could help get the best out of ITER’s plasmas, as Sibylle Günter explains.

FPA president predicts bright future
Stephen O Dean discusses his vision for fusion power, and how the research and education foundation Fusion Power Associates can help.

Measuring (almost) zero

 

When most of us think about searching for physics beyond the Standard Model – the dominant paradigm of particle physics – the first thing that springs to mind is probably a gigantic particle accelerator like CERN’s Large Hadron Collider (LHC). Within the collider’s 27-km loop, protons slam together at 99.9999991% of the speed of light. Office-building-sized detectors generate terabytes of data for physicists to sift through, seeking elusive traces of new kinds of particles.

But there is another type of search for new physics under way as well, this time in atomic-physics labs. Using apparatus no more than a few metres in size, and energies a trillion times lower than those at the LHC, these experimentalists are trying to detect new particles, too – by measuring the electric dipole moment (EDM) of the electron.

The logic behind their search is that under the basic Standard Model, a detectable electron EDM is forbidden. Hence, finding a tiny-but-finite EDM would indicate that the Standard Model needs revision, thereby opening the door to a new class of “virtual particles”. From an experimental standpoint, the task is not easy: how do you measure something that is almost, but not quite, zero? Yet these EDM searches may nevertheless be our best chance of discovering new physics until the LHC reaches its full potential – and perhaps even beyond then.

Rules and exceptions

The most familiar example of a dipole is a magnet. If you place a common bar magnet, like a compass needle, in a magnetic field, its north and south poles will align with the field. Similarly, an electric dipole can be created by placing two oppositely charged objects close together. The electric dipole moment, de, of this simple system is equal to the magnitude of the charge, q, multiplied by the charge separation distance, r, i.e. de = qr. Like a magnetic dipole, an electric dipole has a direction (both r and de are vectors), and it will tend to align itself with an applied electric field. For fundamental particles, de is measured in e cm, where e is the charge on the electron (1.6 × 10⁻¹⁹ C).
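As a sanity check on the units, here is a minimal Python sketch of de = qr. The charge separation used below is an assumed, illustrative value (about one angstrom), not a figure from the article:

```python
# Dipole moment de = q * r for two opposite charges, expressed in e*cm.
# The separation r_cm is an assumed, illustrative value (~1 angstrom).
e = 1.6e-19            # electron charge in coulombs
r_cm = 1e-8            # assumed charge separation, in cm
de_C_cm = e * r_cm     # dipole moment in C*cm
de_e_cm = de_C_cm / e  # the same moment in units of e*cm
print(de_e_cm)         # 1e-08 e*cm -- a typical molecular scale
```

Dividing by e recovers the separation itself, which is why quoting de in e cm makes the numbers easy to read: an EDM of 10⁻²⁷ e cm corresponds to a charge displacement of only 10⁻²⁷ cm.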

Physicists have known since the 1920s that the electron behaves like a magnetic dipole, thanks to its “spin”, or intrinsic angular momentum. (The electron is not literally spinning, but the analogy with a spinning ball of charge is a useful one.) However, as a fundamental particle, the electron should not have a permanent electric dipole moment – at least not according to the simplest version of the Standard Model. The absence of an electron EDM is a consequence of “time-reversal symmetry”, a fundamental principle of physics that holds that physical interactions should look the same if the direction of the flow of time is reversed.

To understand why a permanent EDM for the electron would violate time-reversal symmetry, consider our picture of the electron as a tiny, spinning ball of charge. The spinning charge acts like a small loop of current and produces a magnetic dipole moment along the spin axis (the blue arrow in figure 1a). To produce an EDM, we must distort the charge distribution of the electron slightly, creating an electric dipole along the spin axis (red arrow in figure 1b). When we reverse time, we reverse the ball’s spin, and thus the direction of the magnetic dipole. But the charge distribution does not move, so the electric dipole does not change direction. Hence, with time flowing forward, both dipoles point in the same direction, but with time reversed, the two point in opposite directions (figure 1c). This clearly violates time-reversal symmetry, and thus rules out a permanent EDM for the electron (or, indeed, any other stable particle) within the basic Standard Model.

However, as with any rule, there are exceptions to time-reversal symmetry. More sophisticated versions of the Standard Model do allow time-reversal violation, provided that there is also asymmetry in the behaviour of particles when their charges are reversed and “left” and “right” are exchanged. This is known as charge–parity (CP) symmetry violation. We know that CP violation must occur, because we have observed a pronounced asymmetry between matter and antimatter in the visible universe: for example, we observe far more electrons than positrons, so a world in which their charges were flipped would look quite different. Logically, therefore, time-reversal violation must also occur, and thus allow the electron to have a tiny permanent electric dipole moment.

But how tiny is tiny? The size of the electron EDM can be calculated by considering the fleeting interaction between the electron and the “virtual particles” that appear from the vacuum energy of empty space and disappear before they can be measured directly (see “Feynman diagrams for an electron interacting with an electromagnetic field”). Some of these virtual-particle interactions violate time-reversal symmetry, and so the Standard Model predicts an EDM of at most 10⁻³⁹ e cm. This is far too small to be measured. However, most theories going beyond the Standard Model introduce new types of particles that violate time-reversal symmetry more easily. Such theories predict an electron EDM many orders of magnitude larger – 10⁻²⁵–10⁻³⁰ e cm. This is big enough that we can hope to detect it in precision measurements, and thereby start to rule out some classes of Standard Model extensions.

Searching for an EDM

If a non-zero electron EDM exists, how can we measure it? Again, it is useful to draw comparisons with the electron’s behaviour as a magnetic dipole. In quantum mechanics, the spin of the electron has two discrete states, “up” and “down”. As the energy of a dipole depends on its orientation with respect to the field, these two spin states have slightly different energies in a magnetic field: an electron with its spin aligned with the field has a slightly lower energy than one with its spin aligned opposite to the field. This energy difference leads to small changes in the energy levels of an electron in an atom, shifting the energy of some atomic states, and causing some single-energy states to split into two different states.

Similarly, when placed in an electric field, an electron with a small EDM will have a slightly lower energy when the dipole is aligned with the field than when the dipole is opposite to the field. For any plausible value of the electron EDM, however, the effect of the dipole interaction will be dwarfed by the interaction between the electron’s charge and the field. If we apply an electric field to a free electron, for example, the electron will simply rush off towards the positive pole.

The key to avoiding this problem is to look at electrons inside atoms or molecules. Such systems are electrically neutral and thus do not move in response to the electric field, but their energy levels still shift due to the EDM. In order to observe such a shift in the laboratory, the electron inside the atom must experience an electric field of more than 10⁶ V cm⁻¹. Such fields are not easily obtained, because when an electric field is applied to an atom, the electrons inside it respond by shifting their position relative to the nucleus, thereby cancelling out most of the field inside the atom. Relativistic effects keep the cancellation from being perfect, though, and in very heavy atoms, where the electrons move at speeds close to the speed of light, there can even be an enhancement of the applied field.

For this reason, the most accurate measurements of the electron EDM performed to date have used thallium atoms. Thallium, with an atomic mass of 205, has a field-enhancement factor of 585 – that is, the field experienced by the electrons inside thallium atoms is 585 times greater than the field applied in the lab. In a series of experiments in the early 2000s, researchers in Eugene Commins’ group at the University of California, Berkeley, first sent a beam of thallium atoms into a modest magnetic field. The presence of the field established a preferred axis for the electron spin, and thus the electron EDM (which must point along the same axis as the spin). They then applied an electric field of up to 1.23 × 10⁵ V cm⁻¹ along the same axis and looked for signs of a shift in the atomic energy levels that might be caused by an EDM.
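The enhancement arithmetic is straightforward; this short sketch just multiplies out the two figures quoted above:

```python
# Effective field inside the thallium atoms in the Berkeley experiment,
# using the enhancement factor and applied field quoted in the text.
enhancement = 585      # thallium field-enhancement factor
applied = 1.23e5       # largest applied field, in V/cm
effective = enhancement * applied
print(effective)       # about 7.2e7 V/cm
```

This is where the "about 72 × 10⁶ V cm⁻¹" effective field for the thallium experiment comes from.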

To maximize their sensitivity to an EDM-generated shift, the Berkeley group used an interferometric technique similar to that used in atomic clocks, involving quantum-mechanical interference between two states of the same atom (see “Interferometric detection of an EDM shift”). The researchers looked for a shift in the interference pattern that depended on the applied electric field, and repeated the experiment with numerous combinations of electric and magnetic fields. Putting together 44 datasets, each consisting of measurements under 128 different conditions, they found that the electron EDM, if it exists, must be smaller than 1.6 × 10⁻²⁷ e cm.
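To get a feel for how precise this is, one can convert the quoted limit into an energy shift in the effective field the thallium electrons experience. The input figures are from the article; the back-of-the-envelope conversion itself is my own sketch:

```python
# Energy (and frequency) shift corresponding to an EDM at the Berkeley
# limit, in the effective field quoted for the thallium experiment.
h = 6.626e-34               # Planck's constant, J*s
e = 1.6e-19                 # electron charge, C
de = 1.6e-27 * e * 1e-2     # EDM limit of 1.6e-27 e*cm, converted to C*m
E_eff = 585 * 1.23e5 * 1e2  # effective field in V/m (585 x 1.23e5 V/cm)
shift_J = de * E_eff        # energy shift, in joules
shift_Hz = shift_J / h      # the same shift expressed as a frequency
print(shift_Hz)             # a few tens of microhertz
```

A level shift of tens of microhertz is why the experiment needed an atomic-clock-style interferometric technique and such extreme control of magnetic noise.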

From atoms to molecules

This result was enough to rule out one Standard Model extension (termed “naive supersymmetry” by a former member of the Berkeley group) that had predicted a larger EDM. But the Berkeley experiment pushed the limits of EDM searches in atoms. Three factors limited the measurement’s sensitivity: noise due to stray magnetic or electric fields; the interaction time of the atoms in the electric field; and the size of the energy shift due to the EDM.

On the first count, the Berkeley group went to extraordinary lengths to limit the effects of noise. The experiments were sensitive enough to pick up electrical noise caused by trains at a subway station almost a mile away. As a result, most of the researchers’ data had to be collected between 1 a.m. and 5 a.m., when the trains were not running. They also used parallel beams of sodium atoms as a “co-magnetometer” to rule out other sources of interference; the much lighter sodium atoms are not sensitive to an EDM, but would be sensitive to stray magnetic fields.

As for the other two limitations, the interaction time for the experiments was determined by the speed of the atomic beam, which was set by the 920 K operating temperature of their thallium source, and not easily changed. This left increasing the size of the EDM-related energy shift as the remaining hope for improving sensitivity. Doing so would require either significantly larger electric fields or a system with a larger field-enhancement factor. Unfortunately, applying a larger field presents significant technical challenges, and thallium is already close to the maximum enhancement factor – for atoms, at least.

A dramatic improvement in field-enhancement is possible, however, in molecular systems – in particular polar molecules consisting of one heavy element and one light one. The effective applied field in the Berkeley experiment, including the thallium enhancement factor, was about 72 × 10⁶ V cm⁻¹, but polar molecules can have fields of up to 20 × 10⁹ V cm⁻¹ – almost 300 times larger. Using molecules instead of atoms could, therefore, increase the sensitivity of EDM experiments by the same factor, to 10⁻³⁰ e cm or better. As a result, there has been a recent explosion of interest in polar molecules, with research groups beginning electron EDM searches at Imperial College London, and at Yale, Michigan, Oklahoma and Colorado universities in the US.
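The “almost 300 times” figure follows directly from the two field values quoted; a trivial check:

```python
# Sensitivity gain of polar molecules over thallium atoms, from the
# two effective-field values quoted in the text.
atom_field = 72e6      # effective field in the thallium experiment, V/cm
molecule_field = 20e9  # internal field of a heavy polar molecule, V/cm
gain = molecule_field / atom_field
print(round(gain))     # 278, i.e. "almost 300"
```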

One of the current leaders in this effort is Edward Hinds’ group at Imperial, which uses ytterbium-fluoride molecules and a beam apparatus similar to that used in the Berkeley experiments. A beam of these molecules passes through a region containing a large electric field (13 kV cm⁻¹), and quantum state interference is used to search for the tiny shifts induced by an electron EDM. Their first measurement was published in 2002 and had a sensitivity of 0.2 × 10⁻²⁶ e cm, slightly worse than the Berkeley experiment. However, after a recent system upgrade, they expect to achieve a sensitivity of at least 5 × 10⁻²⁸ e cm by the end of 2009, and are hoping for another factor of three improvement by late 2010.

Another promising contender is David DeMille’s experiment at Yale University. The Yale group employs a different technique of holding lead-oxide molecules in a glass cell rather than sending them through the experimental apparatus in a beam. Although the cell needs to be maintained at about 973 K (a significant technical challenge), this technique means the researchers can keep the molecules interacting with the electric field for longer than is possible with beam experiments. The team expects to match or exceed the sensitivity of the Berkeley experiment in early 2010.

Meanwhile, other types of searches are also under way. A team led by Steve Lamoreaux at Yale and Larry Hunter at Amherst College is trying to detect an EDM in magnetic solids rather than diffuse molecular gases. Their approach is to use magnetic fields to align all the electron spins (and thus the EDMs) in a sample of solid gadolinium iron garnet and then measure the cumulative dipole moment of the whole sample. One advantage of this approach is that the density of electrons in a solid is much higher than in a gas, so the sensitivity to an EDM should also be higher than it is in the atomic and molecular experiments.

Assuming none of these experiments find a non-zero EDM, what happens after that? On the molecule front, the next major improvement will come from further increasing the time that the molecules spend interacting with the electric field. This can be done by performing the experiments using cold molecules, which move more slowly than room-temperature ones. David Weiss’ group at Pennsylvania State University is already using this approach in a new atom-based experiment where laser-cooled caesium atoms are trapped for up to a few minutes at a time, which leads to a projected sensitivity 200 times better than the Berkeley thallium experiment.

The Imperial and Yale groups are hoping for similar boosts as they begin work on cold-molecule experiments. The Imperial group plans to pre-cool its ytterbium–fluoride molecules by placing them in contact with helium vapour at 4 K before they enter the beam. The Yale group is also investigating a buffer-gas-cooled source, but the researchers and their collaborators at Harvard plan to switch to a new molecule, thorium monoxide, which offers a higher field-enhancement factor than lead oxide as well as the prospect of longer interaction times.

Tough times for theorists ahead?

From a theoretical standpoint, there are a number of ways to reduce the predicted size of an electron EDM. Certain symmetry-violating effects can partially cancel it out, for example. Nevertheless, the vast majority of extensions to the Standard Model predict an EDM within a few orders of magnitude of the current experimental limit (figure 2). Hence, if any one of these models is correct, and if the cold-molecule experiments now in development reach their full potential, a non-zero EDM should be measured in the near future. Such a measurement would provide crucial information about symmetry violation in the universe, which would help explain why everything we see is made of matter rather than antimatter. On the other hand, if the proposed experiments do not find a non-zero electron EDM down to the 10⁻³² e cm level, life could become very difficult indeed for particle theorists. A null result would rule out nearly all existing theoretical approaches, and make it hard to explain the contents of the visible universe within our current theoretical framework.

For most theoretical scenarios, collider-based experiments and EDM searches are complementary. Colliders, for example, can create and detect new types of particles, which EDM searches cannot; on the other hand, the LHC cannot measure the symmetry-violating properties of particles. The two methods’ sensitivity is also similar: if there are new particles within reach of the LHC, there should also be an electron EDM within reach of the next generation of EDM searches. We will probably need a combination of both measurements to fully explain the universe we live in.

However, there is a chance that the symmetry violation due to new particles will be strong and not cancelled by other effects. If this is the case, and the EDM is smaller than about 10⁻²⁹ e cm, then any new particles would have masses too large for the LHC – or any other collider – to detect. In this case, EDM experiments will be our only hope of learning about physics beyond the Standard Model – all without even dissociating a molecule, let alone colliding protons.

At a Glance: The electron’s electric dipole moment

  • The electric dipole moment de of two oppositely charged objects is equal to the magnitude of the charge, q, multiplied by the charge separation distance r, i.e. de = qr
  • Under the basic Standard Model of particle physics, electrons cannot have a permanent electric dipole moment (EDM) because this would violate time-reversal symmetry, which states that physical interactions should look the same if time were to flow in reverse
  • More sophisticated versions of the Standard Model theory do allow an electron EDM to exist, but they predict it would be far too small to measure in the lab. Hence, if experimentalists can find a non-zero EDM, this would indicate the existence of new physics beyond the Standard Model
  • Several experimental groups are now searching for an EDM in atomic or molecular systems. Results from these precision measurements have already ruled out one proposed Standard Model extension, and a new generation of experiments with cold atoms or molecules should put others to the test

More about: The electron’s electric dipole moment

S Bickman et al. 2009 Preparation and detection of states with simultaneous spin alignment and selectable molecular orientation in PbO Phys. Rev. A 80 023418
J J Hudson et al. 2002 Measurement of the electron electric dipole moment using YbF molecules Phys. Rev. Lett. 89 023003
B C Regan et al. 2002 New limit on the electron electric dipole moment Phys. Rev. Lett. 88 071805
A C Vutha et al. 2009 Search for the electric dipole moment of the electron with thorium monoxide arXiv:0908.2412v1

Cargo-cult training

Richard Feynman, in one of his famous rants, evoked as a metaphor what he called “cargo-cult science”. During the Second World War, the indigenous people of the South Pacific became accustomed to US Air Force planes landing on their islands, invariably bringing a profusion of desirable goods and tasty foods. When the war ended, they were distressed by the discontinuation of this popular service. So, they decided to take action. They cleared elongated patches of land to make them look like runways. They lit wood fires where they had seen electric floodlights guiding in the planes. They built a wooden shack and made a man sit inside with half a coconut over each ear and bamboo bars sticking out like antennas: he was the “air controller”. And they waited for the planes to return.

Even though Feynman meant his figure of speech to apply to fake science, it fits a number of the current UK government’s education policies like a glove, such as its obsession with being seen to be training people – as opposed to actually helping staff develop. Such policies are all for the benefit of politicians and bureaucrats: pure gimmicks that would be funny were they not alienating staff and actually lowering the standards they are supposed to improve. Just ask any high-school teacher wading through reams of useless paperwork. Higher education has not escaped the blitz either.

New lecturers at universities have long been short-changed when they are forced to take qualification courses to supposedly enable them to teach and supervise students. Over the past five years, these “qualifications” have expanded into certificates obtained over two years on a part-time basis. More than 80 hours of contact time and the submission of a dissertation are now the minimum requirement. Failing the course often means remaining under probation, or being blocked from promotion. (There are even plans to convert the certificates into postgraduate courses.)

These courses are sanctioned by the Higher Education Academy (HEA), which was founded in 2004 “for students in UK higher education to enjoy the highest quality learning experience in the world”. Most universities have complied with its requirements, setting up a central training facility dishing out the same course for everyone. And as these courses have become more burdensome, they have, I regret to report, also become more useless. I have collected numerous comments by e-mail, in person and from Web forums from people taking such courses in a wide range of universities and subjects (see “From the horse’s mouth”, below). Almost all were negative, with the all-time winner being from a colleague of mine who was asked to cut out cardboard “buzzwords” to form phrases on the theory of education. “It was the lowest point in my whole academic career,” he told me. “I felt that I was being treated like a moron.” What is going on here?

A little bit of history

Education is a controversial business that has long excited passionate debate. At universities the task is often performed by academics who are much more interested in research and therefore regard teaching as a chore. It is also done in the least onerous format for the staff, namely in the form of traditional lecture courses that compress the material as much as possible. Paradise for most scientists is to work at a place like the Institute for Advanced Study in Princeton or the Perimeter Institute for Theoretical Physics in Waterloo, Canada, where teaching duties are almost completely absent. Feynman stood apart in this respect, stressing in his writings the mental block that a research-only framework tends to induce. But the fact is that only a few of us enjoy teaching, and even then we are mostly not as good at it as Feynman blatantly was.

So it is not as if there is no scope for improving university teaching, both at a personal level and as a system. Regarding the latter, “to lecture or not to lecture” is a poignant issue. Even the verb “lecturing” sounds negative, and has patronizing overtones. Other avenues should be explored, such as project work, discussion classes and “problem-based learning”, where students work in small groups tackling open-ended problems helped by lecturing staff. Alas, none of these noble considerations permeate what is actually being done to improve higher education in response to government initiatives, such as the training courses being promoted by the HEA.

The task, instead, has fallen prey to educationalists, or education theorists: people who specialize in the theory of how teaching and learning works. Do not get me wrong, at their best, these people are outstanding. They are experienced teachers, often with a maverick streak, who faced difficult teaching environments and found out the hard way a method that worked. They then collected their experiences into “systems”. Maria Montessori’s work with “defective” and poor children is a bright example of a pedagogy that works.

But the real problems arise from the disciples that follow. Many run-of-the-mill educationalists have no classroom experience and merely parrot what the master said, typically out of context. They hide behind vacuous jargon and force-feed their charges fashionable theories, regardless of the suitability of these ideas. Educationalists like to say that each student is different; yet, despite the fact that their theories were typically developed in the context of the humanities, they want them to be valid for physicists, medics, everyone. The one-size-fits-all nature of these courses is regarded as their worst drawback.

Still, there are worse irritations to those who take these courses. They are told to avoid technicalities – to the point where mathematicians are advised not to write formulas on the blackboard. Yet educationalists are the worst offenders when it comes to verbosity and an inability to state concisely what they mean. Worse, they counterpoint lecturing with ridiculous “teaching games”. Surely I do not need to cut out cardboard buzzwords before my brain is graced with a thought.

Something has gone horribly wrong in the business of real, practical educationalism. Certainly, educationalism could be good, but in reality it is a case of “do as I say, not as I do”.

To be positive and constructive

It is difficult not to sound exasperated when discussing this matter, but the fact is that no-one is listening to the complaints. Educationalists emphasize the importance of student feedback, yet they close their ears and entrench themselves when their own students complain vociferously about the inadequacy of their courses. To copy, verbatim, one of the comments I have collected: “If academics treated their students like educationalists treat their student academics, they’d be appalling teachers.”

But the educationalists are right about one thing: there is nothing more embarrassing than giving a bad lecture – something virtually every lecturer has experienced. And with most UK universities now charging students to attend, there is, quite rightly, a growing emphasis on ensuring that lecturers actually know how to teach. So what could be done to make these courses more useful?

A variety of techniques have been tried in other countries, in the context of lecturing or seeking alternatives to it. But the fact is that students – particularly those in the UK – prefer lectures to more interactive methods (such as group discussions), no matter how much they complain about boring lecturers. This is certainly the case in physics and maths. So let’s do as we preach and first look at our students rather than try to impose our theories upon them. In practice, the issue of how to improve teaching may have to boil down to that of improving lecturing techniques. And a lot can be done in this respect.

For example, the Nobel-prize-winning physicist Julian Schwinger is famous for taking acting lessons to help him face a crowd of undergraduates at Harvard University. And in due course he became an outstanding teacher (to the point of being nicknamed “the snake-oil salesman”). It makes sense: good lecturing is a bit like acting. A lot depends on body posture, voice placement and general performance techniques (areas in which all of us could improve, I am sure). Bring in an actor and it could do wonders.

More prosaically, we could simply watch colleagues in our field who are particularly good lecturers and learn from them. Or conversely, be observed regularly by an experienced lecturer known to be popular with undergraduates and told in concrete terms what worked and what did not in our lectures. As one colleague suggested, “A simple course allowing us to learn from a video of our own lectures would be immensely useful.” Undergraduates themselves can be a copious source of very vocal feedback, often constructive in spite of the language in which it is cast. Sadly, this takes place only rarely, if at all, on current lecturing courses.

And yes, some theory of education might be useful, too. Knowing some background about how people learn can only help. But the onus is on educationalists to prove that this is indeed relevant on a case-by-case basis. How can they gauge whether they are really hitting the target? Make the courses non-compulsory. The attendance record will show whether this does indeed improve teaching or not.

The psychology of fear and loathing

When I raised this issue in an article in the Times Higher Education last summer (20 August), I was surprised by the number and venom of those who came to my support, by e-mail and in the Internet discussions that followed. But one glaring feature shocked me: the vast majority of these comments were anonymous and several correspondents confessed to being “too oppressed” to complain openly. You cannot blame them. When one of my colleagues prepared a portfolio explaining how his teaching could be improved and why the teaching course had not filled that role, he was declared “not yet commended” and kept on probation (in fairness, a similar stunt at Nottingham University got full marks). This speaks volumes about the atmosphere of intimidation towards younger staff surrounding these initiatives.

Another problem is that the feedback techniques employed to monitor the usefulness of these courses certainly feel, at least to me, as if they are rigged. I vividly recall that when I had to take these courses, virtually every one of my fellow attendees complained about it in unprintable terms. Yet the course website paraded a profusion of positive comments from seemingly satisfied customers.

The fact is that if only all trainee lecturers decided collectively to boycott the initiative, it would not last another day. But that is unlikely to happen and any attempt to improve things will have to come from answering the question of how this situation came about. I have gathered a number of possible theories.

One potential explanation is that universities do not have a choice. These courses are government imposed and universities simply have to comply. With so many useless gimmicks thrown at hospitals, schools and universities by the government, this is an answer that pacifies some critics, as if bad government initiatives are an unavoidable evil, like swine flu. But reality does not corroborate this theory. Indeed, the interpretation of what is legally mandatory is loose and some universities, notably Oxford and Cambridge, have made a point of not following the government directives. The minutes of a Cambridge meeting where this sensible decision was taken concluded that “current staff development provision was felt to be good and appropriate. There was little enthusiasm for expanding staff development provision and none for a compulsory teaching qualification.”

An alternative explanation – that this is all a matter of money – can also be ruled out. Universities receive tens of thousands of pounds a year from the government to run these courses, but this is actually not much in the broader scheme of things. Indirectly, however, this possibility remains open. Who knows what blackmail universities are subjected to behind the scenes.

Perhaps the most likely explanation for the current situation is that a rogue group of educationalists and bureaucrats has emerged inside universities and the government. Rogue groups are not about doing a task. They are about self-perpetuation, surviving in a ruthless way by killing off all opposition, reproducing and expanding. This seems to be the opinion of the majority of my colleagues. As Alexander Schekochihin, a former Imperial physicist who is now at Oxford, put it, “[These courses are] a classic exercise in self-perpetuation by a parasitic structure on the college’s body – self-perpetuation achieved by wasting large amounts of other people’s time.”

Cardboard monuments

We are obviously facing a case of “cargo-cult” training, to use Feynman’s metaphor. These courses are more about ticking boxes and satisfying bureaucrats than about teaching skills. This is particularly noxious since we live in a time of thrift, when we are being asked to save and cut costs as much as possible. Cuts are being imposed in training that actually works – for example the popular foreign-language courses at Imperial. Here is a demonstrably useful training initiative: learning a new language. The courses are non-compulsory and are not free, yet people flock to them. They work! Sadly they are being cut, while money continues to be wasted on training initiatives that no-one likes and that have zero track record of demonstrable usefulness, just because they please the bureaucrats.

Regrettably “cargo-cult” training is not only an “anti-saving” at a time of financial austerity, it is also “anti-training”. It can be argued that it actually lowers standards. It certainly alienates valuable staff: I know of top candidates steering clear of certain institutions in order to avoid these courses. This suggests that if educationalists were paid their full salary but required to stay at home and do nothing, then it would already be a major improvement.

From the horse’s mouth

“I have learned nothing of any use and been reduced to tears of rage.”
“[I have felt] utterly patronized and frustrated by the waste of time.”
“A colossal waste of time and resources, and an insult to the lecturer’s intelligence.”
“I’ve learnt more about how not to teach from the education lecturers than how to teach.”
“[Full of] daft warm-up sessions and ‘games’.”
“Laden with ideologies and ideas that were trendy about a decade ago.”

How not to salt popcorn, and other mad experiments

For experimentally minded readers, the warning “don’t try this at home” is a real killjoy. However, coming from Theo Gray, it is also a seriously good piece of advice. Mad Science: Experiments You Can Do At Home – But Probably Shouldn’t is a catalogue of inventive and terrifying things that Gray, a columnist with the Popular Science website and co-founder of the company behind Mathematica software, has done in the name of science. The book features step-by-step instructions, safety guidance and background information for 55 different experiments that range from a liquid-mercury motor to a dry-ice cloud chamber. As these two examples show, some of the experiments are far safer than others, and many have no place whatsoever in your garden shed – at least not if you want to see the shed again afterwards.

Figure 1, which shows what happened after Gray blew pure chlorine into liquid sodium, is a case in point. The result looks impressive. It will certainly salt your popcorn, but it could also kill you, Gray observes, adding that his safety preparations included “a clear path to run like hell” in the event of an uncontrolled chlorine leak. A separate experiment, which uses a bank of 12,000 V capacitors to crush a penny, gets a stark warning: brush against these capacitors while they are charged “and you are stone-cold dead, instantly”.

Other experiments are safer, but not necessarily easier to reproduce. Gray is a past master at sourcing rare materials: he won an Ig Nobel Prize in 2002 for building a table containing samples of nearly every element in the periodic table, including oddities like dysprosium and lutetium. Many university labs, let alone secondary schools or shed owners, will struggle to replicate the ingenuity on display in his book. For example, the “gravity cell” battery (figure 2) looks straightforward at first, as it requires nothing more dangerous than copper sulphate (which Gray helpfully notes is available at some garden centres). But it also needs zinc crow’s-foot electrodes, which are no longer available commercially; Gray had to make his own using molten zinc and a machined graphite mould. Similarly, the Lichtenberg figure (figure 3) was prepared using the Dynamitron particle accelerator at Ohio’s Kent State University – a Van de Graaff generator makes a poor (though far more accessible) substitute.

But although “experiments you can’t do at home” might have been a more accurate subtitle, Gray’s heart is clearly in the right place. This lavishly illustrated book allows us to see the results of his antics and appreciate the beautiful science behind them, without having to expose ourselves to the danger. What’s so mad about that?

Not crazy enough

After a period in power, revolutionaries rather depressingly turn into conservatives, eager to preserve the status quo. This is as true in the academic enterprise as it is in the world of politics. Academic institutes are often portrayed as stuffy, inhibiting innovation and neglecting the adventurous or unorthodox – especially by those who feel stifled or ignored by the establishment. Such vocal critics of the establishment rarely have the chance to try something new with the resources needed to make a real impact.

Howard Burton was an exception. Just over a decade ago, as a newly graduated physics PhD searching for a job, any job, Burton was given a chance by Mike Lazaridis – co-founder of RIM, makers of the Blackberry electronic device – to create a new project to nurture fundamental physics. Suddenly, Burton found himself developing a vision for what would become Canada’s Perimeter Institute for Theoretical Physics: a major new institute that would focus on basic science, untrammelled by the pressures of conventional university life, and with a multimillion-dollar endowment.

Building such an institute has, of course, been tried many times before – by Abraham Flexner at the Institute for Advanced Study in Princeton, and by Eamon de Valera at the Dublin Institute for Advanced Studies, to name just two. But the Perimeter Institute was supposed to be different from anything previously attempted. Ambitious plans were therefore formulated to build this new institute in the fairly obscure town of Waterloo, Ontario (the home of RIM), and to make it a world centre for work on cosmology, quantum foundations and quantum gravity.

Howard Burton’s book on this process, First Principles: The Crazy Business of Doing Serious Science, sets out a very personal view of how the institute started. It titillates those who know a little about the initiative by including Burton’s own personal journey from run-of-the-mill and rather directionless PhD student, to becoming director and recruiting theorists from around the world, to building a very glamorous centre, and finally to his sudden and inexplicable departure in May 2007.

The book is written in a rather breathless style, full of touchingly naive views of the world of science. Burton writes tellingly of his worldwide tour to get advice from the mandarins, the movers and the shakers of modern theoretical physics. Like a child with his face pressed up to a sweetshop window, he was eager to find out the secrets of success from gurus such as Freeman Dyson in Princeton, Chris Isham in London and Roger Penrose in Oxford. As a consequence, quantum gravity became a “must have”, as did foundational studies of quantum mechanics. An encounter with Artur Ekert (wrongly described here as David Deutsch’s student rather than mine) brought in quantum computing and quantum information science – both of which Burton seems to have thought of, oddly, as a practical balance to the other, more esoteric subjects.

A decade later, how do all these acquisitions measure up? It is hard to judge from Burton’s prose, but to me, at least, the hiring of the quantum-computing scientists was inspired, and did produce world-leading insights. I am much less convinced that the past decade of substantial investment by Lazaridis (matched by very generous funding from Canadian provincial and federal sources) in the other areas has produced much of comparable substance. Despite all Burton’s talk of the need “to create an environment where theorists and experimentalists could naturally and productively interact on a regular basis”, there is not much to garner from his book on how to do it, or how the end result is any different from a well-funded research institute to be found in any world-class university.

But perhaps this failure is revealing. For, after 10 years of operations, has the Perimeter Institute ended up as anything more than a well-funded facility attached to a university, operating in pretty much the standard model we see everywhere, with tenure track, graduate students and the like? Well, no. The razzmatazz and the PR have been great. So has the outreach activity. The building looks terrific, although so do many others on campuses around the world, and it has a bistro that puts to shame most departmental cafes. But has the funding model allowed the inhabitants of Perimeter to break the mould and do something unique, beyond anything you would find, for example, in one of Germany’s Max Planck institutes? Not really.

Do not get me wrong. Good science has been done there, and careers have been nurtured. Indeed, many of my own former students and postdocs have enjoyed themselves greatly in Waterloo. But they have also done so in Munich, Brisbane and in my own Institute for Mathematical Sciences at Imperial College London. So has the Perimeter Institute been more than a set of good research groups, puffed up by a highly effective PR exercise?

We all expected so much to come from this imaginative and bold venture, and I am afraid the reality has not fully measured up to our expectations. The institute now has new leadership in the former Cambridge University cosmologist Neil Turok, and will doubtless move in fresh and exciting directions. But Burton’s book, while concerned with a fascinating endeavour, sheds little light on how to build a new institute that breaks free of the constraints of the past. Instead, it portrays a rather troubled individual who was, on occasion, out of his depth and too prone to hero-worship established figures, and who after all that promise and hype set up what inevitably became part of the scientific establishment.

Seeing through the ‘two cultures’

This year marks the 50th anniversary of C P Snow’s famous “two cultures” speech highlighting the gulf between scientists and other intellectuals. In her book The Shadow of the Enlightenment, science historian Theresa Levitt of the University of Mississippi reminds us that such a gap did not always exist and takes us back to a time and a place where science, the arts and even politics were inseparable.

At the beginning of the 19th century, Paris was a place where anything seemed possible. The new revolutionary government had stuttered to a halt, and war with Britain would soon set the stage for Napoleon’s military coup. Writers and artists such as Stendhal and Delacroix would soon shock the establishment with their unflinching portrayals of contemporary society. And in the salons of Paris, writers and politicians were rubbing shoulders with a new breed of intellectuals: the scientists. Spurred on by an arms race with Britain and the industrial revolution, the pace of scientific discovery had quickened. Now, places such as the Academie des Sciences on one side of the Channel and the Royal Institution on the other provided a forum for ambitious young scientists to make their mark and build a career.

All of this helps to explain why the winter of 1806 found two young French scientists, Jean-Baptiste Biot and François Arago, halfway up a Spanish mountain, where they were trying to measure the length of the meridian. Their efforts were aimed at helping define the new Revolutionary unit of length. Unlike the old toise, which was based on the length of the King’s foot, the metre was to be a rational and universal unit, defined as one ten-millionth of the distance along the Paris meridian from the North Pole to the equator.
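The arithmetic behind that definition is easy to check against modern geodesy. Here is a minimal sketch; the quadrant figure of roughly 10,001,966 m is an assumed modern value, not a number from the survey described here:

```python
# The Revolutionary metre: one ten-millionth of the quadrant of the
# Paris meridian (North Pole to equator).
# Modern geodesy puts that quadrant at roughly 10,001,966 m, so the
# metre eventually adopted from the survey differs from the "ideal"
# geometric definition by about 0.2 mm.
quadrant_m = 10_001_966                  # assumed modern estimate, metres
ideal_metre = quadrant_m / 10_000_000    # one ten-millionth of the quadrant
offset_mm = (ideal_metre - 1.0) * 1000   # gap from today's metre, in mm
print(f"ideal metre = {ideal_metre:.7f} m (offset ~ {offset_mm:.2f} mm)")
```

A fifth of a millimetre over a metre is a tribute to Biot and Arago’s surveying, not an indictment of it.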

Elegant though this geometric definition was, the real world was not so accommodating. Shortly after Biot returned to Paris with the measurements, relations between France and Spain soured, and Arago was imprisoned as a French spy. After several months without contact, the Bureau des Longitudes voted to suspend his salary. But Arago managed to escape to Algeria and, after an extraordinary series of adventures, he returned to Paris a year later. Reunited in France, Biot and Arago again became firm friends and close colleagues. This was not to last, however, and Levitt’s book follows the very different trajectories of Biot and Arago as they are driven apart by their competing work in optics, and later by their politics.

Their scientific disagreements focused in particular on the question of colour. Working independently, Biot and Arago had both developed versions of a polarimeter that used a birefringent crystal and a polarizer to generate complex coloured patterns. The question of colour was at the heart of intellectual debate in the 19th century. The Newtonian theory of colour, which held that it was a purely physical property that could be objectively measured, was being challenged by scientists such as Arago, and by artists and dye-makers who sought to define a colour standard. The debate often turned on a question more philosophical than physical: do objects have an intrinsic colour that can be measured and defined, or does colour depend on how and by whom it is observed?

This question of perception is the main theme of the book. Levitt links the work of Biot and Arago to the ideological divide in French society between those like Biot, who believed in a natural hierarchy of society with a strong central government, and those like Arago, who believed in the equality of man and government by rational consensus.

Both men continued to carry out groundbreaking work in magnetism (every physics student learns the Biot–Savart law), optics and astronomy, clashing frequently at the Academie des Sciences. Arago eventually became head of the Academie, and to Biot’s horror threw open its doors to the public. In the process, Arago became immensely popular, and entered politics as a champion of the Republican cause. Biot, in contrast, was troubled by the political torment of Paris. With his scientific views out of favour, he retired to the country as a wealthy landowner and mayor. There, he became fascinated with ancient astronomy, which he believed showed that true understanding of the hidden nature of the world could only be obtained by the initiated few.

The style of the book is unusual. In one sense it is a scholarly work on the history of science (complete with comprehensive references), laying out the author’s ideas on the importance of links between the new science of optics and changing concepts of perception in art and politics. But it is also a fascinating biographical tale that sparkles with insights into these turbulent times. As we follow the trials and tribulations of Biot and Arago, Levitt expertly covers a vast range of subjects, from ancient systems of astronomy to the origins of photography and the end of the slave trade. The description of the near-riot as people packed into the Academie des Sciences to see Daguerre present his first photographs is just one example of the many wonderful historical asides.

The Shadow of the Enlightenment is, overall, an enjoyable if dense read. Only a few times in the introduction does the book lapse into academic jargon. Reading it as a scientist rather than a historian, I found the lack of a clear explanation of the science irritating at first; the polarimeter, for example, is never clearly described, and there is no diagram. However, as I read more, I realized that this forced me to see these 19th-century discoveries through the eyes of Biot, Arago and others, peering into the obscurity and trying to make sense of the torrent of new observations and discoveries. Like its subject matter, this fascinating book makes a mockery of the “two cultures” debate, and should appeal to anyone with an interest in the history of science and the origins of the way we see our world.

Worth the time

The most interesting problems in mathematics are those that any intelligent person, be they poet or physicist, can immediately understand. Examples abound, with the four-colour planar map problem and Fermat’s last theorem being perhaps the two most famous. Both are easy to state, and even those without any mathematical training beyond a little algebra can “play around” with them. They are seriously difficult on a technical level, however, with the first being supposedly “solved” by a computer program, the details of which nobody but the programmers would claim to fully understand, and the second by an extraordinarily deep analysis that only a few mathematicians have actually gone through from start to finish.

The subject of Dan Falk’s In Search of Time is actually one step beyond these deeply perplexing mathematical questions. While we might think we understand time, unlike the “solidity” (and, dare I say it, the timelessness) of the above mathematical problems, there is absolutely nobody on Earth who knows what time is. Or, for that matter, if it even exists. Falk is frank about this right from the start, beginning with St Augustine’s famous lament in his Confessions: “What, then, is time? If no-one asks of me, I know. But if I wish to explain to him who asks, I know not.”

That has not stopped Falk from writing over 300 pages on the subject of time, and for the most part he has done an impressive job. Falk is a science journalist, not a scientist; however, it is clear that he has spent a lot of time reading about time, and understanding what he has read.

The book does get off to a slow start, with the first 100 pages devoted to a somewhat overly familiar historical tour of topics like the ancients’ fascination with the motion of celestial bodies and the development of the modern calendar. Although it is all well written, I suspect that most readers will flip through it quickly.

By chapter five, however, we are introduced to what modern physicists, psychologists and philosophers think about time, and Falk starts to hit his stride. This section begins with what Falk calls “mental time travel”, the ability of humans to think about both the past and the future, and the distinguishing of the past from the present and the future through the use of tenses in spoken and written languages. This is interesting stuff, but it is also fairly “squishy” when judged by the standards of argument that would appeal to a mathematical physicist. And so we have to wait a little longer, until the next chapter (“Isaac’s time”), before we really get to the nitty-gritty of time.

Here, we read of Newton’s conception of “mathematical time” as flowing uniformly, and of the statistical explanation for the apparent contradiction between the time reversibility of the laws of physics and the observed fact that the world inexorably moves only from past to future and never backwards (the so-called entropic or thermodynamic “arrow of time”). The trajectory of an object in space and time is called the world-line of that object, and we learn about the “block universe” in which all the world-lines are completely determined from beginning to end – which seems to toss the concept of free will under a bus. This last idea, which some have interpreted as having a sort of “seal of approval” from Einstein himself, brings philosophers and theologians into play with the physicists – an explosive mixture if there ever was one!

Einstein’s special theory of relativity introduced the idea of a 4D amalgamation of space and time – “space–time” – thus doing away with Newton’s uniform time and even providing a theoretical basis for time travel into the future. Einstein’s later general theory of relativity, with its “warped” space–time, goes even further, allowing the return trip through time along world-lines that bend back on themselves. In other words, the general theory provides support for the possibility of a time machine. Falk devotes an entire chapter to this topic, which fascinates just about everybody – even Stephen Hawking, who adamantly rejects the possibility of time travel to the past. Hawking, whose tongue-in-cheek “chronology protection conjecture” aims to “make the past safe for historians”, nevertheless studies time travel to the past because he wants to discover the (yet unknown) physics that he believes will forbid it.

The classical paradoxes of time travel to the past, such as the grandfather paradox (the attempt to go back in time and murder your grandfather before any of his children – one of your parents – is born) and causal loops, get some discussion, as does the consistency principle of the Russian physicist Igor Novikov. A world-line that bends backwards through space–time and connects with itself to form a closed world-line is called a “time loop”. If a time loop contains an event caused by a later event, which is itself caused by the first event, then we have a “causal loop”. For example, suppose a hitman-for-hire wishes to kill his victim in a “foolproof” way, and so takes a copy of tomorrow’s newspaper back to yesterday. The paper contains the victim’s obituary, which says he died while reading a newspaper; the obituary so shocks the victim when he reads it that he drops dead – thus explaining the obituary! Such a time loop, odd as it is, is not self-contradictory (unlike the classical grandfather paradox) and so satisfies the definition of Novikov’s principle. All such paradoxes are rendered moot if one accepts the controversial idea of parallel universes, where if you travel into the past and do something to change it, then that altered past becomes the past of a different universe. That is the belief of the Oxford University physicist David Deutsch, but it does ask a lot from nature simply to “solve” the paradox problem.

Throughout the book, Falk often presents his personal opinion on what he is discussing. However, he tries hard to give many alternative views as well, having interviewed numerous well-known scientists (including Roger Penrose and Lee Smolin) about their different takes on time and what it might actually be. And yet despite all this collective modern wisdom, when you finish In Search of Time, you may still feel like St Augustine did in the late 4th century AD. Indeed, Falk could have quoted the saint’s Confessions at even greater length, as the following passage demonstrates: “I confess to you, Lord, that I still do not know what time is. Yet I confess too that I do know that I am saying this in time, that I have been talking about time for a long time, and that this long time would not be a long time if it were not for the fact that time has been passing all the while. How can I know this, when I do not know what time is? Is it that I do know what time is, but do not know how to put what I know into words? I am in a sorry state, for I do not even know what I do not know!”

Amen to that.

The trials of Galileo

There are so many books about Galileo, author Dan Hofstadter remarks, so why another? Given that 2009 marks the 400th anniversary of the first astronomical use of the telescope, where Galileo’s role was paramount, the answer may seem obvious. But that is not where the strength of Hofstadter’s book lies. In The Earth Moves: Galileo and the Roman Inquisition, he instead advances the clock to 1633, towards the end of the Italian scientist’s career and the year of the infamous trial that resulted after Galileo’s Dialogue on the Two Great World Systems was published in 1632.

Here, in one of the most lucid treatments available in English, Hofstadter carefully identifies all the leading participants, and dissects the three successive hearings that Galileo was subjected to. One particular hero to emerge from this account is Francesco Niccolini, the Tuscan ambassador to the Vatican and a savvy advisor to the recalcitrant and argumentative Galileo. Galileo was keen to debate the merits of heliocentric cosmology and the way that Scripture should be interpreted; at one point, he declared (quoting an eminent cardinal) that “The Bible does not teach how the heavens go, but how to go to heaven.” But this was not a wise tactic: while Galileo may have sought intellectual engagement, Hofstadter writes, the Church sought discipline. Niccolini recognized this, and the strategist in him “grasped that the inquisitors saw Galileo as a kind of prodigal son, a famous and gifted Catholic who must at all costs be reconciled to Church doctrine”.

As a result, Hofstadter convincingly argues, never in the entire trial was the status of the heliocentric system discussed. Though obviously heliocentrism remained the elephant in the parlour, the issue for the inquisitors, clearly, was whether Galileo had disobeyed orders by teaching Copernican cosmology in his Dialogue. At his first hearing, Galileo made the preposterous claim that his book actually refuted the Copernican system. In his second deposition he backed off, saying it had now dawned on him that the Aristotelian side of the argument had not been presented fairly. “I resorted”, he said, “…to the natural gratification everyone feels for his own subtleties and for showing himself to be cleverer than the average man, by finding ingenious considerations even in favour of false propositions.” In other words, as Hofstadter declares, “Galileo was being so bold as to tell the tribunal to convict him of vanity, ignorance, and carelessness, none of which approached heresy.”

By the time of his third interrogation, Galileo’s hopes for a plea bargain arranged by Niccolini had evaporated and he was a broken man. He was even willing to state that “I have not held this opinion of Copernicus since I was notified by the injunction that I was to abandon it [in 1616]”. Hofstadter concludes that every one of Galileo’s answers in this final session reveals that he embraced the doctrine of the Church only because he had been told to, and for no other reason. In other words, Galileo was prevaricating. The ecclesiastical authorities must have suspected as much, but it ultimately suited their strategy to force him to recant in a public confession.

Some of the material in The Earth Moves is as fresh as it is unexpected. The pre-trial section, in particular, places Galileo and his telescope in the artistic context of his favourite poet, Ariosto, and favourite contemporary artist, Cigoli. Perhaps, when Galileo turned his instrument to the Moon, he might even have remembered the line from Ariosto’s Furioso: “On the Moon there are rivers and lakes and hills and dales, like those we have but different.” But with respect to the hard-core science, the book loses a bit of its virtue, and cannot always be taken as an authoritative guide. Describing Galileo’s method for finding the height of a mountain on the Moon, Hofstadter gives a patently absurd height of 8,704,941 miles; one suspects that his publisher ought to have found a better-qualified copy editor. In a similar vein, Galileo never observed sunspots from Venice (p90), and if he had been eager to observe Venus earlier than the fall of 1610, then he failed not because Venus was too close to the Sun but because he resisted getting up before dawn.
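Galileo’s terminator method is in fact easy to reproduce, which makes the misprinted figure all the more jarring. A sunlit summit at distance d beyond the terminator must have height h satisfying (R + h)² = R² + d². The numbers below – the modern lunar radius, and Galileo’s estimate that d was about 1/20 of the Moon’s diameter – are assumptions for illustration, not figures from the book:

```python
import math

# Galileo's terminator method: a summit still catching sunlight at
# distance d beyond the terminator satisfies (R + h)^2 = R^2 + d^2.
R = 1737.4                       # lunar radius in km (modern value)
d = (2 * R) / 20                 # Galileo: d ~ 1/20 of the lunar diameter
h = math.sqrt(R**2 + d**2) - R   # exact; approximately d**2 / (2 * R)
print(f"peak height ~ {h:.1f} km, i.e. about {h / 1.609:.1f} miles")
```

The answer comes out at a handful of miles – the right order of magnitude for lunar mountains, and roughly what Galileo himself claimed – so a figure in the millions of miles can only be a misprint or a unit slip.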

On the other hand, Hofstadter is entirely correct on one matter where many popular writers and journalists have failed in this anniversary year. Over and over, it seems, we have heard it asserted that with his telescope, Galileo proved the Copernican hypothesis. Although Galileo’s observations made the motion of the Earth more intellectually respectable, he was unable to rule out motionless alternatives. As Hofstadter has it, while his intuition “came closer to confirmation with each passing year, he never found a definitive proof for it”. Eventually, Newtonian physics provided such a coherent understanding that only the outliers could deny the Earth’s daily rotation and its annual revolution round the Sun – but at the time of Galileo’s trial, Newton’s Principia was still some 50 years away from publication.

In summarizing the trial, Hofstadter remarks that the issue of the Earth travelling round the Sun had little, if any, bearing on the Catholic faith. However, in the context of the Counter-Reformation, the notion that persons without theological training (such as Galileo) could decide for themselves to read a biblical passage in a non-literal sense constituted a mortal danger for Catholicism. Hence the entire trial focused on the issue of insubordination. Galileo was found guilty not of heresy but of “vehement suspicion of heresy” for his insubordination, and he was placed under house arrest for the remainder of his life. Although this interpretation is well documented elsewhere, it is worth the price of the book to get it explained with such clarity.

Putting a value on fundamental research

Drayson etc.jpg
Blue-sky thinkers Lord Drayson (centre) promotes the economic impact of science

By James Dacey

It is well documented that the World Wide Web emerged as an incredibly successful “by-product” of the blue-skies research carried out at CERN. But should all scientists be required to justify their funding by the “impact” of their work on the rest of society?

A new proposal by the UK government would mean that every application for a research grant must detail the direct benefits of the work for the economy, public policy and a number of other realms.

At a public event in London last night Lord Drayson, the UK Science Minister, was met with a number of concerns from UK researchers who have taken issue with the new scheme.

“We need to change the way we do science in this country. It is perfectly reasonable to expect scientists to do more to demonstrate the importance of what they do,” Drayson told the audience.

“Science should be accountable where science is funded by the tax-payer.

“[The new scheme] is asking people within their grant applications to think about the impact they give and to describe it.”

Perhaps offended by the tone of Drayson’s comments, several people argued that researchers already do this sort of thing as an intrinsic part of their work. One member of the audience dismissed the new scheme as nothing more than “box-ticking” by the Government.

A major sticking point is whether researchers should be assessing the impact of the work that they have already done or the impact of the research they hope to carry out, as this is apparently unclear under the new proposals.

“It would be lovely if we did have a crystal ball which could predict the future applications of our research, but the reality is that most research simply does not work this way as many benefits result from serendipity,” said Colin Stuart, an astronomer working at the Royal Observatory, Greenwich, who was also on the panel.

The event, Blue Skies ahead? The prospects for UK science, was hosted by the Wellcome Collection in central London and chaired by Brian Cox, the CERN physicist and presenter of a number of popular science programmes in the UK. You can watch a recording of the debate online.

Earlier in the evening, Cox had tackled Drayson on the issue of science and economic impact as part of a new BBC radio series, The Infinite Monkey Cage, which you can listen to here.

If you want to have your say regarding the proposed reform then the consultation will run until 16 December. Whatever comes out of this, however, I’m sure many UK physicists will still be left wondering how the Government will put a price on some of the true blue-sky research such as the search for the Higgs.

higgs2.jpg
Higgs Boson Will UK money continue to fund the search for him/her indefinitely?

Copyright © 2026 by IOP Publishing Ltd and individual contributors