by Rachel Jones | March 22, 2020
In the last half-century, and especially in the last decade, computers have given us the ability to act and interact in progressively faster and more frictionless ways. Consider the now-ubiquitous smartphone, whose internal processor takes just a millisecond to convert a movement of your finger or thumb to a visual change on your screen. This speed has benefits (in 2020, there's a virtual library of information online) as well as disadvantages (your gaffe can go viral in seconds).
What made the smartphone, and the rest of our unfolding digital transformation, possible? Many point to a prediction published in April 1965, in a then-little-read article toward the back of the trade paper Electronics. The piece, written by a young chemist named Gordon Moore, outlined in technical terms how quickly the technology behind computer chips might develop and, by implication, make its way into our lives. It's been 55 years since the article's publication, and it's worth revisiting its original prediction, now known as Moore's Law.
If you ask people today what Moore's Law is, they'll often say it predicts that every 18 months, engineers will come up with ways to double the number of transistors they can squeeze onto a tiny computer chip, thus doubling its processing power. Curiously, that is not quite what Moore actually said, though he did predict consistent improvement in processing technology. Moreover, the world he anticipated did take shape, with his own work as co-founder of the chipmaker Intel creating much of the momentum necessary to turn his law into a self-fulfilling prophecy.
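To put that folk version into numbers, here is a minimal sketch of the arithmetic it implies. The 18-month doubling period is the popular reading described above, and the starting count of roughly 2,300 transistors (about that of the early Intel 4004) is an illustrative assumption, not a figure from Moore's article:

```python
# A hedged illustration of the *popular* reading of Moore's Law:
# "transistor counts double every 18 months." The starting count and
# doubling period are assumptions for illustration, not figures from
# Moore's 1965 article.
def projected_transistors(start_count: int, months: int, doubling_period: float = 18.0) -> float:
    """Exponential projection: start_count * 2 ** (months / doubling_period)."""
    return start_count * 2 ** (months / doubling_period)

# Example: a chip with ~2,300 transistors (roughly the Intel 4004 of 1971),
# projected forward one decade under the 18-month folk rule.
print(round(projected_transistors(2_300, months=120)))  # roughly 234,000 transistors
```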
Initially, Moore had few notions of changing the world. Early in life, he discovered a love for chemistry, and though he was kept back at school for his inarticulate style, he excelled at practical activities, making bombs and rockets in a home-based laboratory. He went on to study chemistry at UC Berkeley under two Nobel laureates, and earned a Ph.D. at the California Institute of Technology in 1954.
Moore's career trajectory coincided with the rise of the transistor, a device made of semiconductor material that can regulate the flow of electrical current and act as a switch or gate for electronic signals. As far back as the 1920s, physicists had proposed transistors as a way to improve on the unreliable, power-hungry vacuum tubes that amplified signals on telephone lines, and that would be used in the thousands in computers such as ENIAC and Colossus. In 1939, William Shockley, a young Bell Labs researcher, revived the idea and tried to fabricate a device; despite several failures, he persisted, and in 1947 he and two colleagues succeeded in making the world's first working transistor (for which they shared a Nobel Prize in Physics). In 1953, British scientists used transistors to build a computer, and Fortune declared it "The Year of the Transistor."
In 1955, Shockley moved to Mountain View, California, to be near his mother. He opened a semiconductor laboratory and picked a handful of young scientists to join him, including Moore and his future Intel co-founder, Bob Noyce. The launch of the Sputnik satellite in 1957 and the escalation of the Cold War created a boom within a boom: Moore and seven colleagues, including Noyce, broke away from Shockley in a group quickly branded "the Traitorous Eight," forming the seminal start-up Fairchild Semiconductor. They planned to make silicon transistors, which promised greater robustness, miniaturization, and lower power usage, qualities essential for the computers guiding missiles and satellites.
"Our curiosity was similar, but not our approach. Noyce liked things that flew. I liked things that blew up," said Gordon Moore (left), pictured with Robert Noyce. Courtesy of Intel Free Press.
Developing the core manufacturing technology was a seat-of-the-pants adventure in which Moore played a central role. In March 1958, Fairchild received an order from IBM for 100 mesa transistors priced at $150 each. Mesas, made on 1-inch silicon wafers, were so named because their profiles resembled the flat-topped mesa formations of the American Southwest. Moore's responsibility was figuring out how to fabricate them reliably, which involved a complex chemical ballet and a considerable amount of thrift and improvisation. Unable to buy appropriate furnaces, Moore relied on glass-blowing skills to create gas-handling systems, assembled on cobbled-together aqua blue kitchen cabinets and Formica countertops. ("Real lab furniture was as expensive as heck," he remarked.) Delivery solutions were similarly no-frills: Fairchild sent mesa transistors to IBM in a Brillo box from a local grocery store.
The mesa transistor was successful, but the company's new planar transistor (named for its flat topography) was a game-changer, bringing more stability and better performance. Another key development was the step of connecting transistors by making all components of a complete circuit within a single piece of silicon, paving the way for the first commercial integrated circuits, or microchips. Everyone wanted miniaturized circuitry; the obstacle to greater computing power was its need for more components and interconnections, which increased the possibilities for failure. Noyce grasped a solution: why not leave transistors together in a wafer and interconnect them there, then detach the set as a single unit? Such microchips could be smaller, faster, and cheaper than transistors manufactured individually and connected to each other afterward. As early as 1959, Moore proposed that sets of these components "will be able to replace 90 percent of all circuitry in digital computers."
Six years later, in 1965, when he wrote his now-famous article in Electronics, "Cramming More Components onto Integrated Circuits," personal computers were still a decade away. Moore, who had seen the number of elements on a chip go from one, to eight, to 60, hinted at how integrated functions would "broaden [electronics'] scope beyond [his] imagination" and at the major impact the changes would bring, but saw his analysis as distilling merely a trend in technology that would make everything cheaper. Nevertheless, his analysis was rigorous. Doubling the number of components on an integrated circuit each year would steadily increase performance and decrease cost, which would, as Moore put it 10 years later, extend the utility of digital electronics more broadly in society.
As chemical printing continued to evolve, the economics of microchips would continue to improve, and these more complex chips would provide the cheapest electronics. Thus, an electronics-based revolution could depend on existing silicon technology, rather than some new invention. By 1970, Moore asserted, the transistor that could be made most cheaply would be on a microchip 30 times more complex than one of 1965.
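A quick back-of-the-envelope check of that 1970 figure, assuming the annual doubling described in Moore's original article, shows where the rough "30 times" comes from:

```python
# Sanity check of the 1965 projection described above, assuming one
# doubling of chip complexity per year (the cadence of Moore's original article).
doublings_1965_to_1970 = 1970 - 1965          # five annual doublings
complexity_factor = 2 ** doublings_1965_to_1970
print(complexity_factor)                       # 32, in line with the rough "30 times" figure
```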
In 1968, Moore left Fairchild and joined Noyce to found Intel, with the aim of putting cleverness back into processing silicon. In 1975, he reviewed his original extrapolation. Chips introduced up to that point had followed the trend he predicted, but engineers were reaching the limits of circuit and device cleverness. Moore now proposed a doubling about every two years.
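To see how much that 1975 revision slows the projection, here is a small illustrative comparison of the two cadences over a single decade (the numbers are purely for illustration, not drawn from Moore's papers):

```python
# Growth over one decade under annual doubling (the 1965 cadence)
# versus doubling every two years (the 1975 revision). Illustrative only.
years = 10
annual = 2 ** (years / 1)    # ~1,024x growth if complexity doubles every year
biennial = 2 ** (years / 2)  # ~32x growth if it doubles every two years
print(annual, biennial)      # 1024.0 32.0
```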
The analysis in Electronics was becoming known as Moore's Law. Having correctly observed the potential for exponential growth, Moore overcame his personal dislike of the spotlight by traveling widely to talk about his idea, taking every opportunity to persuade others. After all, the fulfillment of Moore's Law would be as much social as technical, relying on widespread acceptance: industry needed to invest to develop the technology, manufacturers needed to put microchips into their products, consumers needed to buy and use electronic devices and functions, and researchers and engineers needed to invent advances to extend Moore's Law.
In the 1970s, seeing progress continue, Moore grew bolder, telling audiences that silicon electronics would constitute a major revolution in the history of mankind, as important as the Industrial Revolution. He was so confident in his vision that he told a journalist that students who'd made headlines getting kicked off campuses ("kids with the long hair and beards") were not the ones to watch: instead, he pronounced, "we are really the revolutionaries in the world today." In front of a crowd, he pointed out that if the auto industry made progress at the same rate as silicon microelectronics, it would be more expensive to park your car downtown for the night than to buy a new Rolls-Royce. And, he recalled years later, one of the members of the audience pointed out, "yeah, but it'd only be 2 inches long and a half-inch high; it wouldn't be much good for your commute."
The rest is history. "For more than three decades," the New York Times pointed out in 2003, "Moore's Law has accurately predicted the accelerating power and plummeting cost of computing." Because of the exponential nature of Moore's prediction, each change has arrived faster and more furiously. Its curve, shallow at first (though spawning the microprocessor, digital calculator, personal computer, and internet along the way), has, since 2005, gone almost straight up in hockey-stick style.
Despite the changes we've all witnessed, Moore's Law is still widely misunderstood, even in tech circles. "[It's] only 11 words long but most people manage to mangle it," said one report. Moore's 1965 article is a sophisticated piece of analysis, but many prefer to interpret it more vaguely. "The definition of Moore's Law has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line," its originator noted, dryly.
Until April 2002, Intel's website stated that Moore had predicted that the number of transistors per integrated circuit would double every 18 months, even though Moore himself pointed out that he never said 18 months.
Why did 18 months stick? Perhaps because a projection by an Intel colleague in 1975 led to the doubling of transistor count being conflated with a doubling of performance; perhaps because this timescale appeared in an influential technology column in 1992, as the modern configuration of Silicon Valley was forming; perhaps because that pace simply felt more accurate to the semiconductor industry.
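For illustration only, here is one arithmetic route by which an 18-month figure can emerge from a two-year transistor-doubling cadence. The per-generation speed gain below is an assumption chosen to make the numbers work, not a historical claim about what that 1975 projection actually assumed:

```python
import math

# Illustration only: if transistor count doubles every 24 months AND each
# generation's transistors are individually faster, then treating
# "performance" as count x speed shortens the performance-doubling time.
# A ~26% per-generation speed gain (an assumed figure) lands near 18 months.
count_doubling_months = 24
speed_gain_per_generation = 1.26                       # assumed, for illustration
growth_per_generation = 2 * speed_gain_per_generation  # performance factor per 24 months
doubling_months = count_doubling_months * math.log(2) / math.log(growth_per_generation)
print(round(doubling_months, 1))                       # ~18.0 months
```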
During the technology bust of the early 2000s, people began to speculate about the death of Moore's Law. Others suggested it would peter out because people would drop their computer fixations to spend less time at work and more with their families, or because Silicon Valley's obsession with it was unhealthy for business strategy. In 2007, the year the iPhone launched, Moore pointed out that "we make more transistors per year than the number of printed characters in all the newspapers, magazines, books, photocopies, and computer printouts." But he recognized that exponential growth could not continue forever; he knew the physical and financial constraints on shrinking the size of chip components.
When people in industry circles describe Moore's Law as a dictate, the law by which the industry lives or dies, it is more evidence of the law's power within Silicon Valley culture than of its actual predictive accuracy. As the essayist Ilkka Tuomi observed in "The Lives and Death of Moore's Law," Moore's Law became an increasingly misleading predictor of future developments, one that people understood to be something more like a rule of thumb than a deterministic natural law. In fact, Tuomi speculated, the very slipperiness of Moore's Law might have accounted for its popularity. To an extent, tech people could pick and choose how they interpreted the dictum to suit their business needs.
Today, Moore's Law continues to thrive in the smartphone space, having put some 8.5 billion transistors into a single phone that can fit in our pockets. The law may now be, in the words of one commentator, more a challenge to the industry than an axiom for how chipmaking works, but for what began as a 10-year forecast, it has had an astonishing run. "Once you've made a successful prediction, avoid making another one," Moore quipped in 2015.
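As a rough consistency check on that 8.5-billion figure, one can count the doublings it implies since the roughly 60-element chip mentioned earlier (both numbers are taken from the text above; the exercise is a sketch, not a precise industry measurement):

```python
import math

# Rough consistency check: how many doublings separate the ~60-element chip
# Moore described in 1965 from a modern ~8.5-billion-transistor phone chip,
# and what doubling period that implies over the intervening 55 years.
start_components = 60                # from the 1965 article, per the text above
modern_transistors = 8_500_000_000   # the smartphone figure cited above
doublings = math.log2(modern_transistors / start_components)
years = 2020 - 1965
print(round(doublings, 1))           # ~27.1 doublings
print(round(years / doublings, 1))   # ~2.0 years per doubling
```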
Even as technology continues to pervade our lives, with the advent of more specialized chips and materials, better software, cloud computing, and the promise of quantum computing, his law remains the benchmark and overarching narrative, both forecasting and describing our digital evolution.
Originally posted here:
What Is Moore's Law, and Did It Inspire the Computer Age? - zocalopublicsquare.org