Accelerators have become ubiquitous: the world’s bitcoins are mined by chips designed to speed up the cryptocurrency’s key algorithms, hardwired audio decoders are built into nearly every digital product that makes sound, and dozens of startups are chasing fast silicon to make deep-learning AI pervasive. This specialization moves common algorithms off general-purpose CPUs and onto customized hardware that runs them faster, and it has been viewed as a way to keep computing power growing as Moore’s Law slows.
But this won’t work. At least, it won’t work for long. That’s the conclusion of research to be presented at this month’s IEEE International Symposium on High-Performance Computer Architecture by David Wentzlaff, associate professor of electrical engineering at Princeton University, and his doctoral student Adi Fuchs. They calculated that chip specialization cannot deliver the kind of gains Moore’s Law has. In other words, accelerator development will hit a wall, just as transistor shrinkage has, and it will happen sooner than expected.
To prove their point, Fuchs and Wentzlaff had to work out how much of recent performance gains came from chip-specific tweaks and how much came from Moore’s Law itself. That meant combing through more than 1,000 chip datasheets to determine how much of the improvement from one chip generation to the next was due to better algorithms and cleverer circuit implementations rather than to denser transistors. In other words, they were trying to quantify human ingenuity.
To do this, they did what engineers do best: they reduced it to a dimensionless quantity. They call it the chip specialization return, and it answers the question: How much more computing power does a chip deliver for a fixed budget of transistors?
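The idea behind such a metric can be sketched in a few lines: divide the raw generation-to-generation speedup by the growth in transistor density, and whatever remains is the gain attributable to specialization at a fixed transistor budget. The function name and all numbers below are hypothetical; the authors’ actual metric is more involved.

```python
def specialization_return(perf_old, perf_new, density_old, density_new):
    """Speedup left over after normalizing out transistor-density growth.

    A value near 1.0 means the new chip's gains came almost entirely
    from denser transistors (Moore's Law), not from specialization.
    """
    total_gain = perf_new / perf_old          # raw speedup between generations
    density_gain = density_new / density_old  # Moore's-Law contribution
    return total_gain / density_gain          # gain per fixed transistor budget

# Made-up example: a 4x overall speedup while density tripled leaves
# only about 1.33x attributable to better specialization.
print(specialization_return(100, 400, 10e6, 30e6))
```

The point of the normalization is that it makes chips from different process nodes comparable on equal footing.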
Using the metric, they evaluated video decoding on application-specific integrated circuits (ASICs), game frame rates on GPUs, convolutional neural networks on FPGAs, and Bitcoin mining on ASICs. The results were not encouraging: the gains of a specialized chip were largely determined by the growing number of transistors available per square millimeter of silicon. In other words, set Moore’s Law aside and the power of specialization itself is limited.
So if specialization isn’t the answer, where is the way forward? Wentzlaff advises the semiconductor industry to learn to compute with technologies that keep scaling even when logic transistors stop. For example, the number of bits per square centimeter of flash memory continues to grow despite the slowing of Moore’s Law, because the industry has turned to 3-D techniques that stack 256 or more layers of cells. Fuchs and Wentzlaff have already begun working along these lines, developing a computer architecture that speeds up computation by having the processor look up previously computed results stored in memory instead of recomputing them.
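The look-up-instead-of-recompute idea has a familiar software analogue: memoization, where results of an expensive computation are cached so that repeated inputs become cheap memory lookups. The authors do this in hardware; the sketch below only illustrates the principle, with a made-up stand-in workload.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(x):
    """Stand-in for a costly computation: sum of squares from 1 to x."""
    total = 0
    for i in range(1, x + 1):
        total += i * i
    return total

expensive(10_000)   # computed the slow way, result stored in the cache
expensive(10_000)   # identical input: served from the cache, no recomputation
```

The trade is the same one the hardware makes: spend cheap, plentiful memory to avoid repeating work on scarce, power-hungry logic.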
The end of Moore’s Law “is not the end of the world,” Wentzlaff said. “But we need to prepare for it.”