Issue #18: Obsolescence

A Farewell To The Von Neumann Architecture

Unbeknownst to most if not all full stack developers and Scrum masters out there, the computer industry has been fighting a raging war against the Von Neumann architecture for the past 70 years.

John Von Neumann was probably one of the greatest scientists of all time. His contributions to science and technology range from computers, weather prediction, and economics, to fluid dynamics, quantum logic, and set theory. He sat at a desk at the Institute for Advanced Study in Princeton, next to Gödel, Ulam, and Einstein.

One of his papers is the famous “First Draft of a Report on the EDVAC,” an unfinished paper in which he describes the mechanisms behind one of the first computers. This is the origin of the term “Von Neumann Architecture,” although it is now known that other scientists had come up with similar architectures before Von Neumann. But that is beside the point. The important thing to know is that whichever computer you are using to read this text is based on that eponymous architecture.

The Von Neumann architecture is the reason why most software developers argue that learning a second programming language requires substantially less investment than learning the first. All languages respond to the same underlying logic, because they ultimately all talk to the same kind of computers, regardless of their obvious syntactic differences. John Backus said it clearly in his ACM Turing Award lecture in 1977:

The differences between Fortran and Algol 68, although considerable, are less significant than the fact that both are based on the programming style of the von Neumann computer.

This is not the place for a lengthy discussion of the major characteristics of this architecture; suffice it to say that they include the use of a bus to connect the CPU with memory, and the use of that same memory space for both data and instructions. The important bits lie in the ways this architecture, however useful, has reached its limits time and again over the decades.

Patches

Like all things created by humans, the Von Neumann architecture is imperfect. Its best-known problem is the “Von Neumann Bottleneck.” To solve this issue, computer scientists came up with the concept of caches, which begat cache invalidation as one of the hardest things to do in computer science. Modern CPUs have several levels of caches, of varying capacities, allowing them to avoid expensive roundtrips to fetch information from memory.
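To make those roundtrips tangible, here is a minimal C++ sketch (an illustration of the general idea, not a benchmark): it sums the same buffer twice, once sequentially and once with a stride that touches a new cache line on almost every access. The exact numbers depend entirely on your CPU’s cache hierarchy.

```cpp
// Minimal sketch: the same amount of work, but very different cache behavior.
// Sequential access streams nicely through the cache hierarchy; strided access
// forces the CPU to fetch a new cache line for almost every element.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const std::size_t size = 1 << 24;   // 16M integers (~64 MB), larger than typical caches
    const std::size_t stride = 16;      // 16 * 4 bytes = one 64-byte cache line per access
    std::vector<std::int32_t> data(size, 1);

    auto time = [](auto&& body) {
        auto start = std::chrono::steady_clock::now();
        body();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    };

    long long sum = 0;

    double sequential = time([&] {
        for (std::size_t i = 0; i < size; ++i) sum += data[i];
    });

    double strided = time([&] {
        for (std::size_t s = 0; s < stride; ++s)
            for (std::size_t i = s; i < size; i += stride) sum += data[i];
    });

    std::cout << "sequential: " << sequential << " ms\n"
              << "strided:    " << strided << " ms\n"
              << "(sum = " << sum << ", printed so the compiler keeps the loops)\n";
}
```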

The fact that programs are stored in the same medium as data is not without its problems. While it makes virtual machines possible, it also enables buffer overruns, particularly if your chosen programming language does not perform the required (and nowadays taken for granted) checks. Viruses, another side effect of this architectural choice, forced operating system makers to place protections between “kernel” and “user” spaces so that harm can be contained.
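As an illustration of what those checks buy you (a sketch, nothing more), compare an unchecked array access in C++ with a checked one:

```cpp
// Sketch of an unchecked versus a checked buffer access.
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 4> buffer{1, 2, 3, 4};

    // Unchecked: operator[] performs no bounds check. Uncommenting the next
    // line compiles fine, but writing past the end is undefined behavior --
    // the classic ingredient of a buffer overrun.
    // buffer[10] = 42;

    // Checked: at() verifies the index at runtime and throws instead of
    // silently corrupting whatever happens to live next to the buffer.
    try {
        buffer.at(10) = 42;
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}
```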

Speaking of compiler checks, they have reached such a level of sophistication that some of them can perform both memory management (a sort of compile-time garbage collection) and buffer overrun checks during compilation.
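One way to make this concrete (a C++-specific illustration, not necessarily what the author has in mind): forcing an evaluation to happen at compile time turns an out-of-bounds access into a compilation error rather than a runtime surprise.

```cpp
// Sketch: in a constant-expression context, an out-of-bounds access is
// rejected by the compiler instead of corrupting memory at runtime.
#include <array>

constexpr int element(std::size_t index) {
    constexpr std::array<int, 4> values{10, 20, 30, 40};
    return values[index];  // out of bounds here is not a constant expression
}

static_assert(element(3) == 40, "valid index, checked by the compiler");
// static_assert(element(10) == 0, "uncommenting this fails to compile");

int main() {}
```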

At some point between the 80s and the 90s, many scientists and industry pundits sold the RISC architecture as another solution to the bottleneck: let us pipeline instructions issued from a different instruction set, a “reduced” one instead of a “complex” one. One of the great things about RISC architectures is their better performance-per-watt ratio, which makes them perfect for mobile devices; the ARM architecture is an example of this factor at play.

In the real world, however, no CPU is actually “100% RISC” or “100% CISC”; most are what Gordon Bell called “code museums” in his 1991 book High Tech Ventures. Code museums are CPUs built “to hold programs created on earlier machines and to serve their present customers.” (There’s an interesting discussion about this on page 130 of the February 1990 edition of Dr. Dobb’s Journal, by Hal Hardenbergh.)

Another technique aimed at accelerating the execution of code is branch prediction, a concept developed by IBM in the 1950s. It consists of the CPU trying to “guess” which branch of the code will be executed next. This sounds like science fiction, and to a certain extent it is, but unfortunately it was also at the root of the Spectre security vulnerability, which affected virtually all CPUs.
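The canonical way to feel the predictor at work (a sketch whose numbers vary wildly between CPUs and compiler optimization levels): run the same conditional sum over shuffled data, and then over sorted data, where the branch becomes trivially predictable.

```cpp
// Sketch: the same conditional sum, over unpredictable and over predictable data.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::vector<std::uint8_t> data(1 << 24);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    for (auto& value : data) value = static_cast<std::uint8_t>(dist(rng));

    auto conditional_sum = [&] {
        long long sum = 0;
        for (auto value : data)
            if (value >= 128) sum += value;   // the branch the predictor must guess
        return sum;
    };

    auto time = [](auto&& body) {
        auto start = std::chrono::steady_clock::now();
        auto result = body();
        auto stop = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration<double, std::milli>(stop - start).count()
                  << " ms (sum = " << result << ")\n";
    };

    std::cout << "random data: "; time(conditional_sum);   // mispredictions galore
    std::sort(data.begin(), data.end());
    std::cout << "sorted data: "; time(conditional_sum);   // trivially predictable
}
```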

Then there are the laws of physics. Herb Sutter’s seminal “The Free Lunch Is Over” article marks the end of the almost 40-year reign of Moore’s Law, thanks to the fact that our CPUs have shrunk down to the realm of atoms and molecules. This is why the advertised CPU speed of your computer did not increase after 2005, hovering around 3 GHz instead.

The solution found by the industry was to use multi-core CPUs, but it turned out that a program built for a single CPU would not necessarily run faster on many CPUs. Gene Amdahl correctly explained in 1967 that the theoretical execution speedup is limited by the fraction of the code that cannot be parallelized.
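That limit is Amdahl’s law: if a fraction p of a program can be parallelized, n cores can never speed it up by more than 1 / ((1 − p) + p / n). A tiny sketch makes the ceiling concrete:

```cpp
// Sketch of Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallelizable fraction and n the number of cores.
#include <iostream>

double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    for (double p : {0.50, 0.90, 0.99}) {
        std::cout << "parallel fraction " << p << ":";
        for (int cores : {2, 8, 64, 1024})
            std::cout << "  " << cores << " cores -> x" << amdahl_speedup(p, cores);
        std::cout << '\n';   // even with 1024 cores, 50% parallel code caps just below x2
    }
}
```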

Somebody remembered that functional languages force developers to work with immutable data structures, and that those are perfect choices for parallel programs. This brought functional programming languages back to the forefront, right at a time when Philip Wadler was complaining that nobody used them. As a result, nowadays all languages, including object-oriented ones, have lambdas: C++, PHP, C#, Java, pick yours. Operating systems started bundling runtime libraries that make it easier to execute code in parallel.
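A hedged C++ sketch of that combination (the chunking scheme is illustrative, not taken from any particular library): an immutable vector shared by several tasks, each one a lambda summing its own slice, with no locks needed because nothing is ever mutated.

```cpp
// Sketch: immutable input data shared across threads, each running a lambda
// over its own slice -- no locks needed because nobody mutates anything.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    const std::vector<int> numbers(1'000'000, 1);   // immutable once built
    const std::size_t chunks = 4;
    const std::size_t chunk_size = numbers.size() / chunks;

    std::vector<std::future<long long>> partial_sums;
    for (std::size_t i = 0; i < chunks; ++i) {
        auto begin = numbers.begin() + i * chunk_size;
        auto end = (i == chunks - 1) ? numbers.end() : begin + chunk_size;
        // Each task only reads the shared vector; the lambda captures iterators by value.
        partial_sums.push_back(std::async(std::launch::async, [begin, end] {
            return std::accumulate(begin, end, 0LL);
        }));
    }

    long long total = 0;
    for (auto& partial : partial_sums) total += partial.get();
    std::cout << "total = " << total << '\n';   // 1000000
}
```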

Ironically enough, two notorious side effects (no pun intended) of this renewed interest in functional languages were Electron apps and npm packages. After all, JavaScript was described by Crockford as “Lisp in C’s clothing,” and the rest is history. Of course, since performance is hard (let us not forget that JavaScript has no integer types), somebody came up with WebAssembly, and somehow the cycle is done.

Or is it?

Future

The future is not another patch around the venerable Von Neumann architecture. Arguably, the computer you are using to read this article is already obsolete, and IBM and Google are competing to see who is going to make it obsolete first. It all revolves around Quantum Computing.

Quantum entanglement and Bell’s theorem suggest that quantum computers will require a whole new architecture. There are a few proposals already, and the meager knowledge of quantum mechanics of the author of these lines represents a major obstacle to reading any of those papers.

Yet the press is starting to warm up to the idea of quantum computers solving the halting problem, apparently with “God-like powers.” One can feel the excitement and the hype.

What we need, instead of hype, is a new generation of Hilberts, Russells, Gödels, Turings and Von Neumanns, to bring us to the next level, and let the 21st century of computing actually start. One day we will look back at the Von Neumann architecture and find it as peculiar as Babbage’s Difference Engine.

But since Von Neumann also developed a form of quantum logic, we may well still be calling his name in the years to come.

Cover photo by John Towner on Unsplash.


Adrian Kosmaczewski is a published writer, a trainer, and a conference speaker, with more than 25 years of experience in the software industry. He holds a Master's degree in Information Technology from the University of Liverpool.