No Moore’s Law for Networks

February 22, 2016

by Martin Geddes
Founder and Principal, Geddes Consulting

A common misconception in telecoms is that there is an equivalent of Moore’s law for networks. Whilst it is true that we have seen exponential growth in data transmission bitrates – driven by past rapid improvements in opto-electronics – no such property holds for networks as complete systems.

What is Moore’s law?

Just over half a century ago, in 1965, the then director of R&D at Fairchild Semiconductor, Gordon Moore, speculated1 that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-inch semiconductor. A decade later, in 1975, Moore revised the forecast rate: semiconductor complexity would continue to double annually until about 1980, after which it would slow to a doubling approximately every two years. Shortly afterwards, Caltech professor Carver Mead popularized the term “Moore’s law”.

This forecast of an exponential increase in the density (and hence performance) of integrated circuits drove the technology plans for semiconductor manufacturers. Each aimed for the presumed increase in processing power that their competitors would soon attain. It therefore became in many ways a self-fulfilling prophecy.

This is of significance for society as a whole. Whilst the impact of IT on labour productivity is a matter of some controversy2, Moore’s law factors directly into product and service innovation that has unquestionably benefitted us all.

The nature of Moore’s law

“Moore’s law” should be considered an observation or projection, not a physical or natural law. Moore himself predicted that Moore’s law, as applied to integrated circuits (ICs), would no longer be applicable after about 2020, when IC geometries approach the scale of a single atom. On the other hand, many believe that advances in 3-D silicon, single-atom and spin transistors will give us another twenty years of conventional doublings before the electronics limit is reached. New technologies, such as biochips and nanotechnology3, may mean that Moore’s law continues inexorably forward4.

More recently, there have been increasing indications that Moore’s law is nearing the limit of its relevance. Intel confirmed in 2015 that the pace of advancement had slowed, starting at the 22 nm feature width around 2012 and continuing at 14 nm. Brian Krzanich, CEO of Intel, announced that “our cadence today is closer to two and a half years than two.” This slower cadence is scheduled to hold through the 10 nm feature width in late 2017. He cited Moore’s 1975 revision as a precedent for the current deceleration, which results from technical challenges and is “a natural part of the history of Moore’s law”5.

The value bottleneck shifts to networks

It is hardly a secret that the dominant trend in IT over the past two decades has been the shift from stand-alone mainframe and desktop computers to networked services. Every smartphone and tablet is a client for a multitude of cloud applications.

The overall ability of this infrastructure to deliver value is thus limited by the performance of distributed computing applications. This performance limit is not the product of single data links, but of complete networks (or, in the case of the Internet, a “network of networks”).

There is a widespread belief that there ought to be a Moore’s law for networks. It is a pernicious one, since it perpetuates the idea that broadband networks are somehow like ‘pipes’. In this belief system, all we need to do is to keep increasing the rate of flow – i.e. supply ever more ‘bandwidth’. This false metaphor leads us into irrational design, marketing and operational decisions that are damaging both the telecoms industry and its customers.

So, why is this common belief wrong?

Reason #1: Performance is driven by latency, not bandwidth

With Moore’s Law, we are creating ICs of ever more complexity, to maximize the computational capabilities of a device. To the best of our knowledge, there is no intrinsic upper bound to this process, bar those ultimately imposed by the physical resources of the universe.

Conversely, with networks the opposite is the case: we are aiming to minimize the latency6 of communications, and the lower bound is set by the speed of light. One of my colleagues heard a telco CTO instruct his staff that they were to reduce latency on their network by 10% every year. A moment’s thought tells you that can’t happen!
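To make that arithmetic concrete, here is a minimal Python sketch. All figures are illustrative assumptions (a 5,500 km transatlantic route, light travelling at roughly 68% of c in glass, and a 40 ms starting latency), not measurements from any real network:

```python
# A minimal sketch: a fixed percentage latency reduction target versus
# the propagation floor set by the speed of light. Figures are assumed.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBRE_FACTOR = 0.68          # light in glass travels at roughly 2/3 of c

path_km = 5_500              # assumed one-way route length
floor_ms = path_km / (C_VACUUM_KM_S * FIBRE_FACTOR) * 1_000
print(f"Propagation floor (one way): {floor_ms:.1f} ms")

latency_ms = 40.0            # assumed current one-way latency
for year in range(1, 11):
    latency_ms *= 0.9        # the CTO's 10% annual reduction
    if latency_ms < floor_ms:
        print(f"Year {year}: target {latency_ms:.1f} ms is below the "
              f"{floor_ms:.1f} ms physical floor. Impossible.")
        break
```

A 10% annual reduction crosses the physical floor within a handful of years; after that, no further “improvement” is available at any price.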

Reason #2: It’s not just about link speeds, but also about contention for backhaul

Technology improvements decrease the time it takes to ‘squirt’ a packet over a transmission link. (The technical term is to ‘serialize’ a packet.) However, when packets contend for that link, there is a delay whilst they wait in queues. This delay can easily offset any improvements in transmission technology. Indeed, networks are changing structurally, making them more sensitive to contention delay. One reason is that the ratio between the capacity of the edge and core is changing.
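As a rough illustration, consider the textbook M/M/1 queueing model, in which the mean queueing wait is Wq = ρ/(μ(1 − ρ)). The link speeds and the 95% utilisation figure below are illustrative assumptions:

```python
# Serialization delay versus M/M/1 queueing delay under contention.
# Link speeds and the 95% load figure are illustrative assumptions.

PACKET_BITS = 1500 * 8               # a 1500-byte packet

def serialization_ms(link_bps):
    """Time to clock one packet onto the wire."""
    return PACKET_BITS / link_bps * 1_000

def mm1_wait_ms(link_bps, rho):
    """Mean queueing wait for M/M/1: Wq = rho / (mu * (1 - rho))."""
    mu = link_bps / PACKET_BITS      # service rate, packets/second
    return rho / (mu * (1 - rho)) * 1_000

for link_bps in (10e6, 100e6, 1e9):
    s = serialization_ms(link_bps)
    q = mm1_wait_ms(link_bps, rho=0.95)
    print(f"{link_bps / 1e6:6.0f} Mb/s: serialize {s:.4f} ms, "
          f"queue {q:.4f} ms ({q / s:.0f}x)")
```

At heavy load the queueing delay is roughly twenty times the serialization delay, whatever the link speed: faster links shrink both numbers, but contention, not transmission, remains the dominant term.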

For example, in the past it might take a thousand users of dial-up modems offering a typical load to saturate their backhaul. Today, a one gigabit home fibre run may have a shared one gigabit backhaul, which means a single user can easily saturate it with a single device running a single application. Wireless technologies like beam-forming also work to increase contention on mobile networks, by allowing more users to operate concurrently on a single piece of backhaul. We are moving from a world where it took multiple handsets to saturate the backhaul of one cell, to one where a single handset may be able to saturate the backhaul for multiple cells – simultaneously!
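The arithmetic behind this shift is simple enough to sketch. The backhaul sizes and the 10% duty cycle below are illustrative assumptions, not operator data:

```python
# Back-of-envelope arithmetic for the shifting edge-to-core capacity
# ratio. Backhaul sizes and duty cycle are illustrative assumptions.

def users_to_saturate(access_bps, backhaul_bps, duty_cycle=1.0):
    """Number of users at a given activity level that fill the backhaul."""
    return backhaul_bps / (access_bps * duty_cycle)

# Dial-up era: 56 kb/s modems, ~10% active at once, 5.6 Mb/s backhaul
print(users_to_saturate(56e3, 5.6e6, duty_cycle=0.1))  # 1000.0 users

# Today: 1 Gb/s fibre access sharing a 1 Gb/s backhaul
print(users_to_saturate(1e9, 1e9))                     # 1.0 user
```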

There is no technological ‘get out of jail free’ card for contention effects, and no exponential technology curve to ride.

Reason #3: As demand grows, QoS declines

When we increase supply in a broadband network, demand automatically increases to fill it. That’s the nature of protocols like TCP/IP and modern (adaptive) applications: they aggressively seize whatever resources are available. Improvements in technology don’t automatically result in corresponding improvements in application performance. If the network is “best efforts”, the customer experience may well decline accordingly.
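A minimal simulation makes the point. The sketch below models a single greedy AIMD sender (additive-increase, multiplicative-decrease, the control loop underlying TCP); the step sizes are illustrative assumptions:

```python
# A greedy AIMD sender probing an assumed link capacity. Whatever the
# capacity, it settles at roughly the same utilisation. Steps assumed.

def average_load(capacity_mbps, steps=50_000):
    rate, carried = 1.0, 0.0
    for _ in range(steps):
        if rate > capacity_mbps:
            rate *= 0.5              # loss signal: multiplicative decrease
        else:
            rate += 1.0              # no loss: additive increase, probe up
        carried += min(rate, capacity_mbps)
    return carried / steps

for cap in (10, 100, 1000):
    print(f"{cap:5d} Mb/s link -> ~{average_load(cap) / cap:.0%} utilised")
```

Whatever the capacity, the protocol probes until it saturates the link; extra supply is simply absorbed.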

Indeed, in some cases adding more supply can make things worse – either by over-saturating the contention point, or moving it around. This isn’t a new phenomenon: data centre architects have long known that adding more CPUs to a server constrained by storage performance can in fact make performance regress, rather than improve.
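The data-centre analogy can be captured in a toy model: throughput is capped by the narrowest resource (here, an assumed storage IOPS ceiling), and each extra CPU adds a hypothetical coordination penalty. All figures are illustrative:

```python
# Throughput capped by the narrowest resource, with a hypothetical 2%
# coordination penalty per extra CPU. All figures are illustrative.

def throughput_iops(cpus, storage_cap=10_000, per_cpu=4_000, overhead=0.02):
    offered = cpus * per_cpu                       # what the CPUs could drive
    coordination = max(1 - overhead * (cpus - 1), 0)
    return min(offered, storage_cap) * coordination

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} CPUs -> {throughput_iops(n):7.0f} IOPS")
```

Past the bottleneck, adding CPUs buys nothing, and the coordination overhead makes things strictly worse.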

Reason #4: Applications need ‘stationarity’, or steadiness

Computation can be measured by the number of logical operations performed, which is a simple scalar. Data networking requires low enough latency and packet loss, and those have to stay sufficiently steady for applications to work. This ‘steadiness’ is called stationarity, and is a statistical property that all applications rely on. When you lose stationarity, performance falters, and eventually applications fail.

Hence the resource we are trying to create isn’t some simple scalar with a hyper-growth curve. We also need the absence of excess variance, for which there is no technology-driven improvement like Moore’s law. Indeed, growing demand acts to destroy the stationarity of statistically-multiplexed networks. Furthermore, this breakdown happens ever earlier in the life cycle of each new generation of access technology!
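To see what the loss of stationarity does, consider a sketch in which the mean latency is held constant and only the variance grows. The 150 ms deadline (roughly a VoIP mouth-to-ear budget) and the Gaussian latency model are illustrative assumptions:

```python
# Same mean latency, growing variance: more packets miss the deadline.
# The 150 ms budget and the Gaussian model are illustrative assumptions.

import random

random.seed(1)
DEADLINE_MS = 150        # roughly a VoIP mouth-to-ear budget
MEAN_MS = 100

for stdev in (5, 25, 50):
    samples = [random.gauss(MEAN_MS, stdev) for _ in range(100_000)]
    late = sum(s > DEADLINE_MS for s in samples) / len(samples)
    print(f"mean {MEAN_MS} ms, stdev {stdev:2d} ms -> "
          f"{late:.2%} miss the {DEADLINE_MS} ms deadline")
```

Same average, radically different application outcome: it is the variance, not the mean, that kills the experience.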

Reason #5: Physics is not on our side

Even increasing link speed isn’t an endless process. As the head of Bell Labs Research says in Scientific American7:

We know there are certain limits that Mother Nature gives us—only so much information you can transmit over certain communications channels. That phenomenon is called the nonlinear Shannon limit. … That tells us there’s a fundamental roadblock here. There is no way we can stretch this limit, just as we cannot increase the speed of light.

Both fixed and mobile networks are getting (very) close to this limit. We can still improve other bottlenecks in the system, such as switching speed or routing table lookup efficiency, but there are severely diminishing returns ahead.
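A worked example of the underlying mathematics helps here. The sketch below uses the classical (linear) Shannon formula C = B·log2(1 + SNR); the nonlinear fibre limit is stricter still, since raising launch power also raises nonlinear noise. The 50 GHz channel bandwidth is an illustrative assumption:

```python
# Classical Shannon capacity C = B * log2(1 + SNR) for an assumed
# 50 GHz channel. The nonlinear fibre limit is stricter than this.

import math

B_HZ = 50e9                          # assumed channel bandwidth

for snr_db in (10, 20, 30, 40):
    snr = 10 ** (snr_db / 10)
    capacity_gbps = B_HZ * math.log2(1 + snr) / 1e9
    print(f"SNR {snr_db:2d} dB -> {capacity_gbps:6.1f} Gb/s")
```

Each tenfold increase in signal power buys only a fixed additive gain in capacity: logarithmic returns, not exponential ones.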

The bottom line

Moore’s law is driving hyper-growth in volumetric computational demand, but network supply has no corresponding hyper-growth decline in cost. That is because volumetric capacity is not the only concern: latency matters too, and it is constrained both by the speed of light and by the schedulability limits of the network.

There is no magic technology fix through increasing link speeds. Application performance is increasingly dominated by latency, not bandwidth. That is why Google has a “Round trip time Reduction Ranger”, whose job is not to reduce the speed of light, or to conjure technology miracles, but to chop up and rearrange data flows, trading (self-contention) delay around in order to get better overall application outcomes.

Similarly, the future of telecoms in general is firmly centred on managing latency due to the contention between flows created by competing applications and users. This means scheduling resources appropriately to match supply and demand. That in turn allocates the contention delay to the flows that can best withstand its effects. To believe otherwise is just a big fat pipe dream.
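As a closing illustration, here is a toy strict-priority scheduler in that spirit: two queues share one link, and contention delay is deliberately pushed onto the traffic that can withstand it. The flow names and queue contents are hypothetical:

```python
# A toy strict-priority scheduler: voice pre-empts bulk, so contention
# delay lands on the flow that can withstand it. Names are hypothetical.

from collections import deque

voice = deque(f"voice-{i}" for i in range(3))   # latency-sensitive
bulk = deque(f"bulk-{i}" for i in range(3))     # latency-tolerant

def next_packet():
    """Serve voice first; bulk only when the voice queue is empty."""
    if voice:
        return voice.popleft()
    if bulk:
        return bulk.popleft()
    return None

sent = []
while (pkt := next_packet()) is not None:
    sent.append(pkt)
print(sent)   # all voice first; bulk absorbs the queueing delay
```

Real schedulers are far more sophisticated, but the principle is the same: the scarce resource being managed is not bandwidth, it is delay.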

1In a brief article entitled “Cramming more components onto integrated circuits” for the thirty-fifth anniversary issue of Electronics magazine, published on April 19, 1965.

2https://en.wikipedia.org/wiki/Productivity_paradox

3In 2011, researchers at the University of Pittsburgh announced the development of a single-electron transistor, 1.5 nanometers in diameter, made out of oxide-based materials. Three “wires” converge on a central “island” that can house one or two electrons. Electrons tunnel from one wire to another through the island. Conditions on the third wire result in distinct conductive properties, including the ability of the transistor to act as a solid-state memory.

4In 2015, Intel and Micron announced 3D XPoint, a non-volatile memory claimed to be up to 1,000 times faster than NAND, with up to 1,000 times the endurance and similar density. Production is scheduled for 2016.

5Bradshaw, Tim (July 16, 2015). “Intel chief raises doubts over Moore’s law”. Financial Times.

6The term latency refers to any of several kinds of delays typically incurred in processing of network data.

7 http://www.scientificamerican.com/article/when-will-the-internet-reach-its-limit/

Martin Geddes is an authority on the future of the telecoms industry, ranging from emerging business models to new network technologies. He is a futurologist, writer, speaker, consultant, and technologist. Martin is currently writing a book, The Internet is Just a Prototype, on the future of distributed computing. He is a former Strategy Director at BT’s network division, and Chief Analyst and co-founder at Telco 2.0. Martin previously worked on a pioneering mobile web project at Sprint, where he was a named inventor on nine granted patents, and at Oracle as a specialist in high-scalability databases.

Contact: @martingeddes