Pat Gelsinger officially became the new Intel CEO on February 15, 2021. He arrives at a time when Intel faces monumental challenges from competitors across several markets. The world is watching to see how Gelsinger plans to guide Intel, and he’s been very direct about his plans, saying that he intends to…
A question of scale

Is Moore’s Law dead, dying, or in rude health? It depends on who you ask. In turn, it depends on how they apply the ruler when measuring the scaling factors as semiconductor processes move down the nanometre curve.
Together with colleagues from MIT, TSMC, UC Berkeley and his own institution, Philip Wong, professor of electrical engineering at Stanford University, wrote a paper for April’s Proceedings of the IEEE on the progress made by silicon scaling and used it as the basis for his keynote at July’s Design Automation Conference. In their view, Moore’s Law is still in operation, but the assumptions that underpin it have changed. As a result, technologists should look less at simple areal scaling of transistor footprints and spacing, and instead take a view on the effective density of each successive node.
by Kevin Morris
According to tech folklore, Carver Mead actually coined the term “Moore’s Law” – some ten years after the publication of Gordon Moore’s landmark 1965 Electronics Magazine article “Cramming More Components Onto Integrated Circuits.” For the next five and a half decades, the world was reshaped by the self-fulfilling prophecy outlined in that article: namely, that every two years or so, semiconductor companies would be able to double the number of transistors that could be fabricated on a single semiconductor chip.
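It's easy to underestimate how quickly that biennial doubling compounds. A rough sketch, assuming an idealized clean two-year doubling period (real process cadences have only ever approximated this) and starting from the roughly 2,300 transistors of 1971's Intel 4004:

```python
# Illustrative only: idealized Moore's Law growth with a strict
# two-year doubling period, starting from a 2,300-transistor chip
# (approximately the Intel 4004 of 1971).
def transistors(years_elapsed, start=2300, doubling_period=2):
    """Transistor count after `years_elapsed` years of ideal doubling."""
    return start * 2 ** (years_elapsed / doubling_period)

print(f"{transistors(0):,.0f}")   # → 2,300
print(f"{transistors(50):,.0f}")  # → 77,175,193,600
```

Fifty years of ideal doubling turns a few thousand transistors into tens of billions, which is in the same ballpark as today's largest chips.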
That biennial doubling of transistors led most noticeably to an even faster exponential increase in computation power. Besides getting more transistors from Moore’s Law, we got faster, cheaper, and more power-efficient ones. All of those factors together enabled us to build faster, more complex, higher-performance computing devices. In 1974, Robert Dennard observed that power efficiency in computing would increase