21 September 2007
Moore's Law No More?
By Rusty Rockets
During Intel Corporation's biannual technical bash, co-founder Gordon Moore recently conceded that his law predicting that the number of transistors on a computer chip doubles every two years will soon no longer hold. Addressing his audience of tech-heads, Moore stated that in roughly 15 years the ability to cram ever-shrinking semiconductors onto a sliver of silicon would likely be halted by "something fairly fundamental." But what will the demise of Moore's exponential transistor paradigm mean for the future of computing? Is this the end of the golden age of computing?
Moore first posited his exponentially increasing transistor idea in a 1965 article for the 35th-anniversary issue of Electronics magazine, when the integrated circuit was still in its teething phase, with only a handful of components per chip. For the article, Moore was asked to make some predictions about the future of silicon components over the following 10 years. His intention was to demonstrate that electronics was going to become cheap, which, according to Moore, was not at all obvious at the time. "It wasn't true of the early integrated circuits; they cost more than the bits and pieces that you could assemble [yourself] cost," explains Moore in an in-house Intel interview. "But from where I was in the laboratory, you could see the changes that were coming, make the yields go up, and get the cost per transistor down dramatically." So technological innovation is, ostensibly, a battle between Moore's Law and the Law of Diminishing Returns.
Moore's first prediction was based on the progress of the integrated circuit up to that point, which showed that since the introduction of the first planar transistor in 1959, the number of components on a single chip had doubled every year. It wasn't a particularly rigorous line of scientific enquiry, but then history is full of brilliant ideas derived from the intuitive reasoning of geniuses. "I took those first few points, up to 60 components on a chip in 1965, and blindly extrapolated for about 10 years and said okay, in 1975 we'll have about 60 thousand components on a chip," recalls Moore. "I had no idea this was going to be an accurate prediction. And one of my friends, Dr. Carver Mead, a professor at Caltech, dubbed this Moore's Law." So, originally, Moore's prediction was a doubling of complexity annually. But in 1975, having missed a vital factor contributing to the chip's rate of "remarkable" progress, Moore revised this prediction to the one we are more familiar with: a doubling every two years.
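Moore's back-of-the-envelope extrapolation is easy to reproduce. A minimal Python sketch (the function and its parameters are our own illustration, not anything from the article) shows that doubling 60 components every year for ten years lands at 61,440 - Moore's "about 60 thousand":

```python
def components(start, base_year, year, doubling_period=1):
    """Project chip component count by repeated doubling.

    start: components at base_year; doubling_period: years per doubling.
    """
    return start * 2 ** ((year - base_year) // doubling_period)

# Moore's original 1965 extrapolation: yearly doubling for a decade.
print(components(60, 1965, 1975))                      # 61,440, i.e. "about 60 thousand"

# His 1975 revision slows the pace to a doubling every two years.
print(components(60, 1965, 1975, doubling_period=2))
```

The same one-line function captures both versions of the law; only the doubling period changes.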
The looming retirement of Moore's Law doesn't come as much of a surprise, least of all to Moore himself, who has predicted its demise on many occasions - it's not so much a matter of if, but when. "Materials are made of atoms," says Moore, "and we're getting suspiciously close to some of the atomic dimensions with these new structures." In fact, some commentators argue that there has already been a noticeable slowdown in the ability to jam ever more transistors onto a chip. But such suggestions are no deterrent to those who have taken up the challenge of continuing Moore's Law well into the future.
A new generation of technological pioneers has been able to shrink transistors down to just one atom thick and 50 atoms wide. Overcoming some early problems, Professor Andre Geim and Dr Kostya Novoselov, of the School of Physics and Astronomy at The University of Manchester, have shown how a material called graphene could allow electronic miniaturization to continue as silicon-based technology becomes obsolete. "We have made ribbons only a few nanometres wide and cannot rule out the possibility of confining graphene even further - down to maybe a single ring of carbon atoms," says Professor Geim. But it seems that other technologies will have to play catch-up before functional graphene chips can be produced at these minuscule sizes.
Geim's team entertains the grand plan of cutting entire electronic circuits from a single sheet of graphene; circuits that would incorporate a semitransparent central element, or "quantum dot" barrier, which controls the movement of electrons, along with various logic gates. Unfortunately, the precision needed to produce such circuits is not yet available, and current progress seems to be in the lap of the technological gods. "At the present time no technology can cut individual elements with nanometre precision," says lead researcher Dr Leonid Ponomarenko. "We have to rely on chance by narrowing our ribbons to a few nanometres in width. Some of them were too wide and did not work properly whereas others were over-cut and broken."
Ponomarenko doesn't seem rattled, however, and considers the team's current position no different from the challenges faced by silicon-based technologies. The big difference between the two technologies is that graphene is far more stable at minuscule sizes. "The next logical step is true nanometre-sized circuits and this is where graphene can come into play because it remains stable - unlike silicon or other materials - even at these dimensions," explains Ponomarenko, adding that graphene probably won't come into its own before 2025. But even if researchers like Ponomarenko manage to extend the warranty on Moore's Law by a few more decades, the inevitable can only be staved off for so long. What's next?
Perhaps the obvious thing to do is to try to take a broader view of exponential computing power, and of technology generally. Much as futurist Ray Kurzweil has, in fact. Kurzweil, author of The Singularity Is Near, is very certain about what new paradigm - the sixth, by his count, but more on that later - will emerge where Moore's Law leaves off. "Chips today are flat [although it does require up to 20 layers of material to produce one layer of circuitry]. Our brain, in contrast, is organized in three dimensions. We live in a three dimensional world, why not use the third dimension?" asks Kurzweil on his site, KurzweilAI.net.
According to Kurzweil's big-picture view, the exponential growth of computing power didn't begin in 1959 at all, but rather - going by his survey of 49 computing machines - with the electromechanical calculators used in the 1890 and 1900 U.S. Censuses and Turing's relay-based "Robinson" machine. "It is important to note that Moore's Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance," writes Kurzweil. From his lofty futurist vantage point, Kurzweil can see the first, second, third and fourth paradigms that eventually fatigued and passed the baton on to Moore. What's more, Kurzweil goes the extra mile and asks what Moore's Law represents within the context of exponential computing power and technology in its entirety.
"In my view, [Moore's Law] is one manifestation (among many) of the exponential growth of the evolutionary process that is technology. The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can also express the exponential growth of computing in terms of an accelerating pace: it took ninety years to achieve the first MIPS (million instructions per second) per thousand dollars, now we add one MIPS per thousand dollars every day."
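Kurzweil's before-and-after comparison can be put in rough numbers. This quick sketch (the figures come straight from his quote above; the arithmetic and variable names are our own) compares ninety years for the first MIPS per thousand dollars with one additional MIPS per thousand dollars per day:

```python
# Rough comparison of Kurzweil's two rates of price-performance growth.
years_to_first_mips = 90        # ~90 years to the first MIPS per $1,000
days_per_extra_mips = 1         # now: one more MIPS per $1,000 every day

# How many times faster MIPS-per-$1,000 accumulates today than it
# did on average over that first ninety-year stretch.
speedup = (years_to_first_mips * 365) / days_per_extra_mips
print(f"roughly {speedup:,.0f}x faster accumulation of MIPS per $1,000")
```

By this crude measure, the pace has accelerated by a factor of tens of thousands, which is the quantitative point behind Kurzweil's "accelerating pace" remark.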
Even more perplexing - surprising even to Kurzweil himself - is his discovery (perhaps through exponential navel-gazing) that "there's even exponential growth in the rate of exponential growth." This means that the rate of technological progress, as we build upon prior progress, occurs at an ever-increasing pace. Kurzweil's arguments lead him to conclusions involving artificial intelligence, human-machine integration, immortality and so on, all of which, according to Kurzweil, follow from the exponential growth of human-created technologies since their inception.
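The claim of "exponential growth in the rate of exponential growth" can be made concrete with a toy calculation: capacity doubles each step, but the time each step takes shrinks, so the doublings arrive ever faster. All the numbers here are illustrative, not Kurzweil's:

```python
# Toy double-exponential growth: capacity doubles every step, while
# the interval between doublings shrinks by 10% each time.
capacity = 1.0   # arbitrary starting capacity
elapsed = 0.0    # total time passed
interval = 1.0   # time the first doubling takes

for _ in range(10):
    elapsed += interval
    capacity *= 2
    interval *= 0.9   # each doubling arrives sooner than the last

print(f"10 doublings in {elapsed:.2f} time units (capacity x{capacity:,.0f})")
```

With a fixed interval the ten doublings would take 10 time units; with the shrinking interval they take about 6.5, and the gap widens the longer the process runs.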
In this instance it's sufficient to say that Moore's Law was not the first, nor will it be the last, technological paradigm to sustain the advancement of computing. Both Kurzweil and Moore seem to agree that human innovation will stand us in good stead for years to come. "I'm periodically amazed at how we're able to make progress," says Moore. "Several times along the way, I thought we reached the end of the line, things tapered off, and our creative engineers come up with ways around them." Perhaps, as Kurzweil believes, this trend will only end when "we saturate the Universe with the intelligence of our human-machine civilization."