Fall 2016

Moore No More: A Paradigm Shift in Computer Architecture

By Stephen Eick

Computers drive the world, it seems, and with the rise of applications like self-driving vehicles, that is truer than ever before. The engineering behind them is staggeringly intricate: a modern processor can pack more transistors (the basic building blocks of computer chips) than there are blades of grass on ten football fields, all squeezed into an area half the size of a penny. Pushed by economic and technological forces for more than 40 years, the continued miniaturization of the transistor has finally hit the unyielding wall of physics. These limits are forcing computer architects to rethink their approach to modern design problems. By exploring how computer architecture has changed over the years, we can glimpse what the future holds for computing.

For decades, one of the most important and well-known principles in computer architecture has been Moore’s Law. “Moore’s Law isn’t really a law of physics. It’s more of a law of business, of economics,” says Karu Sankaralingam, an associate professor of computer science at UW-Madison. Moore’s Law, named for Intel co-founder Gordon Moore, who first made the observation in 1965, holds that the number of transistors on a computer chip will double approximately every two years. It goes hand in hand with Dennard scaling, which states that the amount of power a transistor consumes is proportional to its size. Thanks to its ability to switch electricity on and off, the transistor has been the fundamental component of computers for the last fifty years.

A transistor can be visualized as a light switch but with two key differences: (1) the switching is controlled by electricity rather than a mechanical arm, and (2) no part of the transistor moves when switching. As the size of transistors decreases, the energy needed to switch them decreases as well. This means the performance of a transistor per watt of energy used increases as the transistor shrinks, allowing for more transistors to be used on a chip without increasing the amount of energy needed. Also, the less energy transistors use, the faster they can flip on and off. Because smaller transistors can switch faster, computers can perform more work in a given amount of time. In the case of Moore’s Law and Dennard scaling, the combination of an economic observation and a technological principle became a self-fulfilling prophecy — and has driven competition between computer manufacturers at Moore’s projected levels for over 40 years.
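To make that scaling argument concrete, here is a rough back-of-the-envelope sketch in Python. It adopts the simplification used above, that the power a transistor draws is proportional to its area, and assumes each generation shrinks length and width by about 30 percent; it illustrates the arithmetic, not the underlying physics.

    # A simplified sketch of Dennard-style scaling as described above,
    # not a physical model. Assumptions: each generation shrinks a
    # transistor's length and width by ~30%, and the power a transistor
    # draws scales with its area.
    shrink = 0.7                      # linear dimensions scale to 70%
    area_factor = shrink ** 2         # area (and, roughly, power) per transistor
    density_factor = 1 / area_factor  # transistors fitting in the same space

    print(f"Power per transistor: {area_factor:.2f}x")                   # ~0.49x
    print(f"Transistors at the same chip power: {density_factor:.2f}x")  # ~2.04x
    # Roughly a doubling per generation at constant power -- the pace
    # Moore's Law projected.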

In the past, this trajectory was sustainable only because transistors were still relatively large. Ten years ago, the smallest transistors were 45 nanometers (nm) in length; today’s leading edge is approaching 10 nm. Each new generation shrinks a transistor’s length and width by roughly thirty percent, cutting its area about in half; after several such generations, a 10 nm transistor occupies only about five percent of the area of a 45 nm transistor. Lately, though, the trend of miniaturizing transistors has hit an impasse. “Moore’s Law has been hit by the sheer economics of how much more expensive it is to make a transistor smaller than what it is today,” says Sankaralingam. “Almost all technology projections suggest it will be harder to get the next generation of transistors to be cheaper than the current generation.”
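Those figures are easy to check with a few lines of Python, treating the quoted sizes as literal transistor lengths (itself a simplification of how chip generations are named):

    # Quick check of the numbers above: area of a 10 nm transistor
    # relative to a 45 nm one, and how many ~30% linear shrinks that
    # reduction represents. Node names are treated as literal lengths.
    import math

    old_nm, new_nm = 45.0, 10.0
    area_ratio = (new_nm / old_nm) ** 2
    generations = math.log(new_nm / old_nm) / math.log(0.7)

    print(f"Relative area: {area_ratio:.1%}")              # about 4.9%
    print(f"Generations of shrinking: {generations:.1f}")  # about 4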

Since the advent of Moore’s Law, the chip manufacturing process known as photolithography has allowed computer chips to scale cheaply, largely because transistors were still comparatively large. But a silicon atom has a radius of roughly 0.1 nanometers, meaning only about 50 atoms span a 10-nanometer transistor. Creating the tools to carve such minuscule features has become preposterously complex, driving the manufacturing cost of any transistor smaller than 10 nm by 10 nm through the roof. With Moore’s Law and Dennard scaling both giving out, new ideas and techniques are needed to keep the field of computing moving forward.

This is where Professor Sankaralingam steps in. “My research work is essentially this: how do we think about transistors as precious resources even when there are a billion of them on a chip, and build devices that are really energy efficient?” says Sankaralingam. “There’s a lot of stagnation in hardware innovation because it’s really, really expensive.” One of his main efforts is to push open-source computer hardware into the mainstream. If something is “open-source,” anyone can see what makes it tick and can use it for free. The open-source software movement took off in the late ’90s in response to a slowdown in innovation from the major software players of the time. One of its biggest success stories is Linux, an operating system that, according to a March 2016 global survey by W3Cook, runs on over 95% of the web servers we interact with daily. Leveraging free software tools lets companies large and small develop new products faster and more cheaply while scaling their technologies further than ever before.

Open-source hardware development, however, is an entirely different beast. Efforts such as the Open Compute Project produce open-source server designs for large-scale datacenters, but the chips running inside those servers are often still closed off. Sankaralingam is looking to help change that with the Many-core Integrated Accelerator of Waterdeep/Wisconsin, or MIAOW (pronounced me-ow). MIAOW is an open-source general-purpose graphics processing unit (GPGPU) that implements a publicly available specification from one of the two companies currently producing mass-market GPUs. “A benefit of open-source hardware is establishing that you have a minimum-viable product,” says Sankaralingam. A small startup with a big new architecture idea would otherwise need to raise enormous amounts of money, because developing a processor from scratch is incredibly time-consuming and expensive. A tool such as MIAOW can drastically reduce those upfront development costs, encouraging innovation by lowering the barrier to entry.

Turning conceptual circuitry into a physical device is an extremely costly endeavor. Fortunately, a tool exists to quickly bring concepts like MIAOW into the real world: the field-programmable gate array, or FPGA, a special chip onto which a designed circuit can be programmed. Unlike software running on an ordinary programmable chip, the uploaded design reconfigures the FPGA’s hardware so that the chip effectively “becomes” the circuitry. Thanks to the savings in computing time and energy they offer, along with falling manufacturing costs, FPGAs are beginning to hit the mainstream. “I think it’s the exact right thing for industry to do,” says Sankaralingam. Intel plans to integrate an FPGA onto some of its server processors in coming generations, which would allow the processor to offload certain tasks normally handled in software to a hardware unit that can perform them faster and more efficiently. FPGAs will almost certainly play a big role in the coming years.
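One way to picture that reconfiguration, as a loose software analogy rather than a description of how real FPGA toolchains work, is a small programmable lookup table whose stored bits determine which logic function it computes. The toy Python sketch below shows the same structure being configured to “become” different circuits:

    # Toy illustration of reconfigurable hardware: a 2-input lookup
    # table (LUT), the kind of programmable element FPGAs are built
    # from. Loading different configuration bits makes the same
    # structure compute a different logic function. Purely illustrative;
    # real FPGAs are programmed with hardware-description languages.
    class LUT2:
        def __init__(self, truth_table):
            self.table = truth_table          # outputs for inputs 00, 01, 10, 11

        def __call__(self, a, b):
            return self.table[(a << 1) | b]

    and_gate = LUT2([0, 0, 0, 1])   # configure the LUT as an AND gate
    xor_gate = LUT2([0, 1, 1, 0])   # same structure, reconfigured as XOR

    print(and_gate(1, 1), xor_gate(1, 1))   # -> 1 0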

Transistors are what make an FPGA work, but new technologies coming down the pipeline promise capabilities transistors alone cannot offer. One such technology is the memristor. First theorized in 1971 by UC-Berkeley professor Leon Chua, the memristor is a single circuit element made of an advanced material that remembers how much electricity has passed through it in each direction. Based on that history, the memristor alters its electrical resistance. Designating high and low resistive states allows a memristor to store binary information without holding any electric charge, because the information is recorded in the physically altered properties of the material itself. Since no charge needs to be held captive, the potential energy savings are immense. This type of memory is generally referred to as resistive random-access memory (ReRAM or RRAM), and it is working its way toward widespread adoption. In July 2015, Intel and memory manufacturer Micron announced the pending release of a new storage device called 3D XPoint. While the underlying storage technology had not been publicly confirmed as of April 2016, the publication EE Times received a statement from Intel that “the switching mechanism is via changes in resistance of the bulk material,” strongly suggesting ReRAM or a close relative. The technology promises to significantly cut power consumption and to speed up memory access by an order of magnitude, laying the groundwork for a huge shake-up of the computer memory market.
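As a very rough mental model of that behavior, the toy Python sketch below imagines a component whose resistance depends on the net current that has flowed through it; it illustrates the concept described above, not the physics or interface of any real memristive device:

    # Toy model of the memristor idea: resistance depends on the net
    # charge that has passed through the device, so a bit persists with
    # no stored charge and no power applied. A conceptual sketch only;
    # real ReRAM cells are far more complicated.
    class ToyMemristor:
        R_LOW, R_HIGH = 100.0, 10_000.0     # two resistive states, in ohms

        def __init__(self):
            self.net_charge = 0.0           # "memory" of past current

        def apply_current(self, amps, seconds):
            self.net_charge += amps * seconds   # sign encodes direction

        def read_bit(self):
            # The state is recorded in the device itself (here, net_charge),
            # not as electric charge held on a capacitor.
            resistance = self.R_LOW if self.net_charge > 0 else self.R_HIGH
            return 1 if resistance == self.R_LOW else 0

    cell = ToyMemristor()
    cell.apply_current(+0.001, 0.01)   # drive current one way: write a 1
    print(cell.read_bit())             # -> 1
    cell.apply_current(-0.002, 0.01)   # reverse the current: write a 0
    print(cell.read_bit())             # -> 0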

Digital technology is a field in constant flux, and computers evolve at a pace unlike almost anything else in the world. Until now, it has been relatively easy to keep making computers faster, but the death of Moore’s Law ended that party. The new technologies being developed and the ideas being tested by Karu Sankaralingam and others, however, will help ensure a steady beat for future generations of computing to march to.
