What Happens After Moore's Law?

June 5, 2017

[Photo: Gordon Moore, Intel]

For the past decade, technologists and journalists have been claiming that the end of Moore’s law is just around the corner. They typically cite slowing clock speed improvements, the reliance on multi-core architectures for performance gains, and, most recently, Intel’s discontinuation of its Tick-Tock processor iteration cycle. This post will not discuss when Moore’s law will end, but rather what the implications for the electronics industry will be when it inevitably does.

The semiconductor industry looks strange from the outside. Semiconductor companies spend billions of dollars every few years to build new fabs (semiconductor factories) to achieve only marginal improvements over their competitors, then, a few years later, write off the entire factory as obsolete. As Moore’s law ends, these fabs will no longer need to be constantly replaced, only maintained, meaning that consumers will no longer have to bear the cost of building new fabs. Chip prices will decrease substantially across the board, and chips will no longer be differentiated by process node but by die size. Chips with larger dies will be more powerful, and will likely be priced in proportion to the amount of silicon they require to manufacture. As the end of Moore’s law approaches, manufacturers will likely begin to shift research from improving process node technology to improving yield and die size.

The end of Moore’s law does not mean the end of performance improvements; rather, it may lead to an explosion in higher efficiency chips. Under Moore’s law, semiconductor startups had extreme difficulty competing with incumbents, simply because they did not have the resources to keep up with the latest process node technology; by the time a startup brought a chip to market, the chip giants were already two process node generations ahead, wiping out any efficiency advantage the startup may have had. As Moore’s law ends, these startups will be able to experiment with new architectures and chip designs without fear of immediate obsolescence. A startup with a substantially better CPU design may even be able to disrupt the dominance of x86 in modern computing if it gains enough developer support.

However, I believe that the end of Moore’s law will be most notably marked by a huge increase in the variety of application-specific integrated circuits (ASICs). The CPU today is a jack of all trades but a master of none; as CPU performance flattens out with the end of Moore’s law, designers may delegate specific CPU functions to ASICs to improve total system performance. This is already true of graphics cards, which run the graphical and physics elements of games on specially designed chips. CPUs have also begun to incorporate ASICs: Intel chips include circuitry dedicated to encoding and decoding H.264 video and to compressing and decompressing documents, and Snapdragon SoCs include circuits for vision processing and ambient keyword listening. As Moore’s law comes to an end, I expect CPUs to increasingly use ASICs for basic functions, and I also expect modular ASICs to become common in PC and server design. Just as PC gamers added GPUs to their systems to increase game performance, Google has added TPUs to its servers to accelerate deep learning workloads, and has even designed specialized servers dedicated to training its networks. I anticipate that processor differentiation will continue to increase (stoked in part by a growing number of semiconductor startups), and that PCs will no longer be as homogeneous as they are today. I expect that in the future, laptops designed for artists, gamers, programmers, writers, and businessmen will have greatly different architectures and designs.

An end to Moore’s law may also lead to longer product lifecycles, which could change major aspects of product design. Today, for example, most smartphones are replaced every 2-3 years, so many of their components, including the battery, are not designed to last more than 1,000 charge cycles. If processing power no longer makes devices quickly obsolete, then designers may need to build more rugged and modular electronics that last many years, perhaps even decades. For example, despite the recent shift toward integrated rechargeable batteries in smartphones and laptops, users in a post-Moore’s law era may demand the ability to replace their batteries as they age. With such a shift in the weakest link in a device’s lifetime, many components will need to be re-engineered, especially those with moving parts. Aesthetic changes may also be needed: users may expect their devices to look more appealing if they plan on sticking with them for decades.

The pressure on hardware innovation should not be consumers’ biggest concern in a post-Moore’s law age; rather, consumers should be concerned about Moore’s law’s software corollary: Wirth’s law (sometimes known as Gates’ law, May’s law, or Page’s law), which states that the speed of software halves every 18 months. This trend is driven by a combination of feature bloat and developer laziness, which is why many tasks (such as browsing basic websites and word processing) are just as slow as, and sometimes slower than, they were decades ago. If this trend continues, it is quite possible that our electronics will effectively be killed by bad software, and without faster hardware, new electronics will fare no better as replacements. My only hope is that the programmers of the future somehow change their habits and focus on the efficiency of their code rather than the speed at which they can produce it.
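The arithmetic behind that worry is easy to sketch. As a rough, illustrative model (my own, not anything formal), treat Wirth’s law as software speed halving every 18 months and Moore’s law as hardware speed doubling on the same schedule; while both hold they roughly cancel, but once hardware improvement stops, the bloat compounds unchecked:

```python
def wirth_factor(months, halving_period=18):
    """Fraction of original software speed remaining after `months`,
    assuming software slows by half every `halving_period` months."""
    return 0.5 ** (months / halving_period)

def moore_factor(months, doubling_period=18):
    """Hardware speedup over the same span, assuming performance
    doubles every `doubling_period` months."""
    return 2.0 ** (months / doubling_period)

# While Moore's law holds, the two trends roughly cancel out,
# so perceived performance stays flat:
for years in (3, 6, 9):
    months = years * 12
    net = wirth_factor(months) * moore_factor(months)
    print(f"{years} years: net speed factor {net:.2f}")  # ~1.00 each time

# Once hardware stops improving, software bloat compounds unchecked:
# after 6 years (four halvings), only 1/16 of the speed remains.
print(round(wirth_factor(72), 4))  # → 0.0625
```

The specific 18-month halving period is the folklore version of the law; the point of the sketch is only that exponential bloat, no longer masked by exponential hardware gains, becomes visible within a handful of years.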
