Sunday, June 7, 2009
The End of Moore's Law
I noticed a question from Harsh about the end of Moore's Law, a personal pet peeve of mine, so here I am writing this post.
What do I make of it? Honestly, I know little beyond what is published in the news. I don't think many people understand the challenges at the device level unless they actually work in the area, and even those who do rarely understand the challenges in areas other than their own. Shekhar Borkar, director of Intel Research, is someone I've heard speak on the topic. Intel is a couple of years ahead of everyone else technology-wise, and who knows better than the person leading all the Intel researchers?
Shekhar does warn that we are headed into unreliable territory beyond 10 nm or so. He claims that device reliability will be terrible: devices will fail in the field all the time and will suffer frequent transient errors. Reconfigurability is one alternative; there will be ways to route around failed parts, or enough redundancy to make things work anyway. But if this does happen, I imagine it will lead to a huge shake-up in the overall design flow.
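To make the redundancy idea concrete, here is a minimal sketch of triple modular redundancy (TMR) in Python. It is my own illustrative example of the general technique, not something Borkar described: run three copies of a unit and take a majority vote, so a transient error in any one copy is masked.

    import random
    from collections import Counter

    def tmr(unit, *args):
        # Run three redundant copies of `unit` and majority-vote the results.
        # A single transient fault in one copy is masked; if all three
        # disagree, we cannot recover and must signal an error.
        results = [unit(*args) for _ in range(3)]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority: more than one copy faulted")
        return value

    def flaky_adder(a, b):
        # A stand-in for an unreliable functional unit: occasionally
        # flips one low-order bit of the correct sum.
        result = a + b
        if random.random() < 0.01:               # assumed 1% transient fault rate
            result ^= 1 << random.randrange(8)   # flip a random bit
        return result

    print(tmr(flaky_adder, 40, 2))  # almost always prints 42

Real hardware does this with three physical copies and a voting circuit; the cost is roughly 3x area and power, which is why such redundancy is used sparingly.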
Overall, the cost of making a chip now is incredibly high. The barrier to entry makes it very hard for chip startups to survive, so making chips will end up a big-player market, and/or we will see a lot of consolidation. Intel is already able to put much of the functionality of a computer into one package: Intel Moorestown will put the CPU, GPU, and motherboard functions inside a single package.
Another challenge, according to Borkar, is lithography. Apparently we have been using essentially the same light source for the last couple of decades. We can produce features at 40 nm or 32 nm using a lot of optical tricks, but unless a light source with a smaller wavelength becomes practical, we are in dire straits. EUV light has been promised for many years, but so far no one has made a breakthrough on it.
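For a rough sense of why the wavelength matters, lithography resolution is usually estimated with the Rayleigh criterion: minimum feature size = k1 x wavelength / NA (numerical aperture). Here is a quick back-of-the-envelope calculation in Python; the k1 and NA values are my own illustrative assumptions, not numbers from Borkar's talk.

    def min_feature_nm(wavelength_nm, numerical_aperture, k1):
        # Rayleigh criterion: smallest printable feature (critical dimension).
        return k1 * wavelength_nm / numerical_aperture

    # 193 nm deep-UV light with water-immersion optics and aggressive
    # resolution-enhancement tricks (assumed NA ~ 1.35, k1 ~ 0.30):
    print(min_feature_nm(193.0, 1.35, 0.30))   # ~43 nm -- about today's nodes

    # A 13.5 nm EUV source, even with the lower NA (~0.33 assumed here)
    # of reflective EUV optics, would reach much smaller features:
    print(min_feature_nm(13.5, 0.33, 0.30))    # ~12 nm

This is why people keep chasing EUV: the tricks that push k1 down have limits, so eventually the wavelength itself has to shrink.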
On the positive side, someone told me the other day that people have always thought Moore's Law would end within five years; then some breakthrough takes place and buys us more time. So historically, doom has always been five years away. What's different now is that we have already hit some physical limits. For instance, clock speeds have saturated, and Intel no longer tells you to buy faster CPUs but instead tells consumers they need many cores (which many people say is balderdash). Another positive is that there are a few promising technologies that could buy us more benefits: 3D CMOS, optical interconnects, graphene, carbon nanotubes. But only some of these are close to being production-worthy.
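To see why the many-core pitch puts the burden on software, here is a minimal sketch in Python of splitting a CPU-bound job across processes. Everything in it (the prime-counting workload, the chunk sizes, the core count) is an illustrative assumption, not anything Intel prescribes.

    from multiprocessing import Pool

    def count_primes(bounds):
        # CPU-bound stand-in for real work: count primes in [lo, hi).
        lo, hi = bounds
        count = 0
        for n in range(max(lo, 2), hi):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    break
                d += 1
            else:
                count += 1
        return count

    if __name__ == "__main__":
        # A serial loop would run these four chunks one after another on a
        # single core; a process pool runs them on four cores at once.
        chunks = [(i * 25000, (i + 1) * 25000) for i in range(4)]
        pool = Pool(processes=4)
        total = sum(pool.map(count_primes, chunks))
        pool.close()
        print(total)  # same answer as the serial version, in roughly 1/4 the time

The catch, of course, is that most existing software is written like the serial version, which is why extra cores do nothing for it.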
So what's my advice at this point in time? If you want a research-y job, go investigate these new technologies. If you just want a development job, software or something higher up the chain is a better bet. Image recognition, voice recognition, and character recognition are all great fields. CAD/EDA might become a hot area again, given all these pesky reliability problems in chips. Solar looks like a very promising field too. Prediction is an inexact science, so only believe what you think makes sense, even if I said it. You are going to have to make a bet on something.
1 comment:
I think that we need not worry till 2014. According to the articles mentioned here
http://public.itrs.net/
silicon feature sizes will continue to shrink, the transistor count in chips will continue to increase, and performance requirements will become increasingly stringent. Under such a scenario, with new problems that crop up and new technologies that constantly evolve, CAD techniques will remain essential for the design of high-performance circuits.
So I guess we have time till 2014, until someone makes a revolutionary breakthrough in semiconductor technology.
Thanks and cheers
Harsh