
Semiconductors and electronics

Room at the bottom

23 Jul 2015
Taken from the July 2015 issue of Physics World

Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary
Arnold Thackray, David C Brock and Rachel Jones
2015 Basic Books $35.00hb 560pp

A date with density

Everyone has heard of Silicon Valley, but few really understand how it became the home of the global computing industry. Before the 1980s, the area’s technological economy was dominated by the development and manufacture of magnetic recording storage, and it has been said, somewhat tongue-in-cheek, that it was then more of a “Rust Valley” due to the prevalence of various ferrous-ferric oxide mixtures employed in this industry. The label “Silicon Valley” did not gain currency until later, when Fairchild Semiconductor and its descendants Intel and Advanced Micro Devices began to commercialize their metal-oxide-semiconductor field-effect transistor (MOSFET) technology, in which the semiconductor is usually silicon – thus transforming both the valley and its worldwide image.

Moore’s Law celebrates the life and career of a scientist who played a major role in these developments. In the geek world, Gordon Moore is best known as the progenitor of “Moore’s Law”, the empirical observation (made in 1965) that the density of MOSFETs on an integrated circuit would double every 18–24 months. This doubling has indeed occurred more or less on schedule. The book’s subtitle describes Moore as a “quiet revolutionary” and the first word is certainly accurate; Moore is definitely not a superstar who attracts the kind of press promotion received by the likes of Bill Gates, Steve Jobs and, most recently, Elon Musk. But I prefer the description “quiet hero”. In his own industry, Moore has been to his colleagues what Steve Wozniak was to Steve Jobs at Apple Computer – the real font of technical (not sales) innovation behind their respective enterprises.
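
To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not material from the book) of what a steady doubling period implies; the 18- and 24-month periods come from the observation quoted above, while the 50-year span is simply an assumed horizon:

```python
# Moore's law as bare arithmetic: density grows by a factor of
# 2**(years / doubling_period). Illustrative only; not from the book.
def moore_factor(years, doubling_period_years):
    """Growth factor in device density after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

for period in (1.5, 2.0):  # the 18- and 24-month doubling periods
    print(f"Doubling every {period:g} yr -> x{moore_factor(50, period):.1e} in 50 years")
```

A 24-month doubling compounds to a factor of a few times 10^7 over 50 years, and an 18-month doubling to roughly 10^10 – which is why even a modest change in the assumed period matters so much over decades.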

The hardback book I now hold in my hands is some 4 cm thick and contains much more material than can be absorbed in one or two evenings of reading. To summarize, it describes how Moore was born and raised in the San Francisco Bay Area; attended local universities in San José and Berkeley; graduated from the latter in 1950 with a degree in chemistry; and obtained a PhD in that discipline from the California Institute of Technology. Following postdoctoral studies at Johns Hopkins, Moore joined William Shockley at Beckman Instruments in California, but in 1957 he and seven other young researchers broke with the notoriously difficult Shockley and accepted financial support from an entrepreneur, Sherman Fairchild. Over the next 10 years, their new company, Fairchild Semiconductor, pioneered the development of MOSFET devices, but not their successful commercialization. That began in 1968, when Moore and Robert Noyce founded the company that became Intel – arguably one of the most successful American enterprises of the later 20th century.

Narrating this tale takes up most of the book, which is replete with moving family memorabilia and corporate intrigue. An example of the latter was Intel’s uneasy alliance with IBM, which Moore engineered in the early 1980s. With demand for IBM’s line of personal computers and mainframes exceeding its in-house manufacturing and development resources, it purchased, temporarily, a 15% interest in Intel to assure continuity of supply. Ordinarily, such a purchase could have fallen foul of US antitrust legislation, but at the time, IBM mainframes underpinned a large number of US defence and intelligence resources. This led to concerns that if the company had to source parts for its machines off-shore (particularly in Japan), it could engender a security risk. Hence, IBM was assured that its temporary funding of Intel would not be subject to antitrust action.

So much for biography. Now let’s put on our physicist hats. Just how did Moore’s law come to be, and when will it be repealed? The basic concept behind MOSFETs was revealed in patents filed by two physicists, Julius Edgar Lilienfeld in the US and Oskar Heil in the UK, in 1926 and 1935 respectively. (Perhaps these dates should be the real “t = 0” for Moore’s law.) So why did it take almost four decades for the device to be realized in practice? Developing ancillary tools for fabrication took time, of course, but lack of demand was also a factor. Simply put, it took a while for the window of “conventional” technology (the vacuum tubes, junction transistors and bulk diodes that underpinned the devices that emerged after the Second World War) to slam shut, and for demand for faster and smaller “1” and “0” switches to take off. The micro- and nano-scale “wrenches”, “hammers” and “pliers” (actually vacuum deposition chambers, X-ray and electron diffraction, lithography and an alphabet soup of other technologies) required for manufacture had actually existed in the tool sheds of academic research institutions, US national laboratories, and a few hi-tech companies (notably IBM and Bell Labs), but it took the opening window of economic promise to get these tools off the shelf.

So, was the inevitability of Moore’s law foreseen in the basic physics of MOSFETs and of the tools needed for its commercialization? I would argue that it was, and here Richard Feynman deserves a lot of credit. In 1959, well before Moore’s 1965 speculation, Feynman gave his now-famous lecture “There’s Plenty of Room at the Bottom” (a play on the title of the 1959 film Room at the Top). In the lecture, Feynman pointed out that our known laws of materials physics more than allowed the evolution of micro- and nanofabrication that gave rise to Moore’s law. And the rest is history.

Well, almost. On current trends, MOSFET feature sizes will approach atomic dimensions within a decade, and the last section of Moore’s Law (entitled “All Good Exponentials End”) discusses this problem. Keep in mind we’re talking physics here, not economics. Today, all computers, whether in the cloud or in your pocket, are based on the Turing–von Neumann stored-program concept using “irreversible” binary logic and switching devices. By “irreversible”, I mean that the storage technology is incapable of “remembering” whether it contained a 1 or a 0 before its current state. In 1961 – barely a year after Feynman and four before Moore – Rolf Landauer of IBM postulated a thermodynamic limit on irreversible binary logic: each irreversible bit operation must dissipate at least kT ln 2 of energy as heat, so the minimum dissipation scales as the number of switches per unit volume times kT ln 2. This single-bit Landauer limit was experimentally verified in a 2012 article in Nature.
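
For a sense of scale, here is a short Python calculation (my own illustration; the room temperature and the bit-erasure rate are assumptions, not figures from the book or the Nature paper) of the kT ln 2 energy cost per erased bit:

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2) as heat.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

E_bit = k_B * T * math.log(2)
print(f"Minimum heat per erased bit at {T:g} K: {E_bit:.2e} J")

# Assumed erasure rate of 1e18 bits per second, purely for illustration:
rate = 1e18
print(f"Landauer floor at {rate:.0e} erasures/s: {E_bit * rate * 1e3:.1f} mW")
```

At room temperature the bound is about 3 × 10⁻²¹ J per bit – tiny per operation, but it sets a hard floor on heat dissipation once you pack enough irreversible switches into a given volume and run them fast enough.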

So when will Moore collide with Landauer? This has been a point of debate for at least a decade, but unfortunately it is not clearly addressed in Moore’s Law. Some have suggested that Landauer’s limit could be overcome by storing and manipulating our 1s and 0s in a black hole – a sort of Feynman cellar, if you will. If we could somehow convey this to Feynman’s spirit today, his response might be, “Of course. There’s still plenty of room at the bottom…and the top of the universe as well!”
