Beginning of the End of Moore’s Law

Many techies are familiar with Moore’s Law, commonly stated as “computing power doubles every 18 months.” It held more or less true through the latter half of the 20th century and into the 21st, but the “law” is now at the beginning of its end.

Moore’s Law is that old saw about processors

Ever hear someone say “processing power doubles every two years,” or something to that effect? It isn’t BS. It’s called “Moore’s Law,” named for Gordon Moore. Moore, an electronics engineer, did a lot of influential work in semiconductors and integrated circuits, co-founding Fairchild Semiconductor and later NM Electronics, which became Intel. Moore, according to Seeking Alpha, observed in a 1965 paper that the number of transistors in integrated circuits was doubling every year (thus boosting processing power), and predicted the trend would continue, a pace he later revised to roughly every two years. The paper is available here (PDF) in case anyone’s interested.
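Moore’s observation boils down to simple exponential growth. Here’s a minimal sketch of the math; the Intel 4004 baseline (1971, roughly 2,300 transistors) is a standard illustrative starting point, not a figure from Moore’s paper:

```python
# Moore's Law as exponential growth: the count doubles every `period` years.
# Baseline: Intel 4004 (1971), ~2,300 transistors -- illustrative numbers.

def transistor_count(year, base_year=1971, base_count=2300, period=2.0):
    """Projected transistor count, assuming a doubling every `period` years."""
    return base_count * 2 ** ((year - base_year) / period)

# Forty years is twenty doublings:
print(round(transistor_count(2011)))  # 2411724800 -- roughly 2.4 billion
```

That ~2.4 billion projection lands in the same ballpark as actual high-end chips circa 2011, which is why the “law” held its reputation for so long.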

It’s more or less held up over time, and a lot of companies that make computer processors for every conceivable application use it as a guideline. However, as Seeking Alpha and Wired point out, the pace is slowing. The top ten of the “Top 500” – a list of the biggest and most powerful supercomputers worldwide, compiled by academics who know such things – hasn’t changed in two years. Granted, those are all cutting-edge institutional machines, but growth has slowed there all the same.

The race towards zero tolerance

The issue is how many transistors can be crammed into integrated circuits, also called ICs or microprocessors.

Yes, those.

Processing power is partially a product of transistor density, the number of transistors in a given area of an integrated circuit. One measure is the half-pitch: half the distance between identical features on the chip. Those tolerances are getting rapidly smaller.

In 1995 – coinciding with a version of Windows that made millions of people contemplate tossing their PCs out the window – the industry-standard half-pitch was 350 nanometers, narrowing to 130 nm in 2002. In 2013, according to PC World, AMD’s chips were at 28 nm and Intel’s latest Haswell chips were at 22 nm. Intel is working on 14 nm and 10 nm chip technology at the moment.
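Why do these shrinking numbers matter so much? Because density scales with the square of the feature size: shrink the linear dimensions and the area each transistor occupies shrinks twice as fast. A quick idealized calculation (real density gains depend on far more than half-pitch, so treat this as back-of-the-envelope):

```python
# Idealized density gain from a linear feature-size shrink:
# area per transistor scales with the square of the half-pitch.

def density_gain(old_nm, new_nm):
    """How many more transistors fit in the same area, all else being equal."""
    return (old_nm / new_nm) ** 2

print(density_gain(350, 130))  # 1995 -> 2002: roughly 7x
print(density_gain(28, 22))    # 28 nm vs. Haswell's 22 nm: roughly 1.6x
```

Note how the payoff shrinks along with the features: going from 350 nm to 130 nm bought a ~7x density gain, while a 28-to-22 nm step buys only ~1.6x.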

Each gain costs a lot of money in R&D to realize. AMD has had a lot of difficulty moving past 28 nm. Some hardware engineers, according to ElectroIQ, have calculated that 28 nm is the point where the costs and difficulty of development make each incremental shrink more expensive than the last, culminating in a point where the next gain is too costly to justify.

Peak Processing

Robert Colwell, former chief architect at Intel (ever use a Pentium? Those were Bob’s), is convinced the 12- to 24-month doubling of transistor density will hit the wall between 2020 and 2022, according to CNET, when the hardware industry hits 7 nm. Physicist Michio Kaku (you’ve seen him on TV), according to PC World, likewise gave Moore’s Law about a decade.
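Colwell’s window is roughly what you get by extrapolating the historical cadence. A back-of-the-envelope projection from Intel’s 22 nm node in 2013, using the conventional ~0.7x linear shrink per two-year generation (the 0.7 factor is the industry rule of thumb, not a figure from Colwell):

```python
# Extrapolate node sizes assuming the classic ~0.7x linear shrink
# every two years, starting from Intel's 22 nm node in 2013.

node_nm, year = 22.0, 2013
while node_nm > 7:
    node_nm *= 0.7
    year += 2
    print(f"{year}: ~{node_nm:.1f} nm")
```

The projection crosses 7 nm around 2019–2021, squarely inside the window Colwell predicts for the wall.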

So who gives a crap? What does this mean? Well, the modern computer – be it a desktop, laptop, tablet or smartphone – uses technology that hasn’t fundamentally changed in decades. Granted, its capabilities have DRASTICALLY increased, but the underlying technology might hit the limit of its potential within a decade. That means a whole new breed of computing technology will need to be invented.