Breaking The 3 GHz Barrier
My first serious attempt at understanding computer hardware happened during college, in 1994. One of the labs consisted of wiring a 4-bit processor to a series of switches and an LED display. The objective was to perform a very simple operation: starting from zero, increment the displayed digit every time the switch was activated. Or, as it is commonly known, to build a bare-bones half adder.
That CPU was intended for teaching purposes: a square wafer of silicon about 30 cm on each side, with big capacitors and a whole circuitry system easily viewable with the naked eye. We could plug and unplug components of different resistance, capacitance, and voltage. The teacher explained to us that this contraption was a much simplified component, loosely based on the architecture of the Intel 4004, the first commercially available microprocessor, released in annus mirabilis 1971.
To be honest, I could not really wire everything properly without some help from my classmates. I was puzzled, stuck in the mistaken perception that computers could not be that simple. Yet, with a bit of wiring and patience, the LED indeed displayed the required bits (no pun intended) of a very simple calculator in front of my eyes.
Seeing a set of wires and lights perform the simplest of all mathematical operations had quite an impact on me. Maths was no longer an abstract concept; with the proper hardware, maths could literally come to life. I felt the excitement and the wonder of those scientists who, in the 1940s and 1950s, discovered this fact and built a new science around it.
This CPU, as simple as it was, showed me what the soul of this new machine was made of.
TANSTAAFL
These days we read in the press about supercomputers built with CPUs featuring names seemingly borrowed from a science fiction novel. Take the POWER9, for example. According to Wikipedia, it is a “superscalar, multithreading, symmetric multiprocessor.” Not even HAL 9000 received such attractive marketing buzzwords. The POWER9 is a real beast, used as the brains of Summit, one of the fastest supercomputers ever made at the time of this writing.
In 2004, Herb Sutter wrote “The Free Lunch Is Over”, a seminal article announcing the end of the era in which Moore’s Law translated into ever-higher clock speeds. This quote stands out:
That willingness is simply a clear indicator of the extreme pressure the chip designers face to deliver ever-faster CPUs; they’re under so much pressure that they’ll risk changing the meaning of your program, and possibly break it, in order to make it run faster.
In the eyes of this author, three major changes followed this article.
The first was quite visible in computer magazines. Computer marketing had no choice but to drop CPU speed as a differentiating factor among competitors. After all, most CPUs have been capped at around 3 GHz for almost 20 years now. As Graham observed during the discussions around this edition of the magazine, Steve Jobs never got his 3 GHz PowerPC G5 CPU from IBM, so he jumped ship to Intel. Even then, he did not immediately get a 3 GHz CPU, but rather something called a “Core Duo”.
The second visible effect, and quite a dramatic one this time, was Meltdown and Spectre: security vulnerabilities using precisely those advanced CPU features, speculative execution chief among them, as attack vectors of unprecedented risk and virulence.
But from the perspective of the programmer, the third visible effect was the spread of multicore CPUs, now commonplace even in smartphones and small boards like the Raspberry Pi, abundant in the drawers and behind the TV sets of most software developers reading these lines.
Multicore
The PC revolution happened on single-core CPUs running first single-threaded, then multithreaded applications. Is the cloud and smartphone revolution currently based on multicore CPUs running single-threaded applications?
The availability of multicore CPUs at the end of the first decade of this millennium unleashed a new era in the programming of multithreaded apps.
First, technologies such as OpenMP and libdispatch simplified the conceptual work of designing and writing applications for multicore CPUs, removing the need to write threading code by hand and to deal with race conditions, semaphores, shared data, and other low-level concerns.
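As an illustration, here is a minimal sketch of that style in Swift, using libdispatch’s DispatchQueue.concurrentPerform; the workload and the chunk count are arbitrary choices for this example. The library spreads the iterations across the available cores, and no thread, lock, or semaphore ever appears in the code:

```swift
import Dispatch

// Sum the squares of a million integers, one chunk per iteration.
// libdispatch decides how many threads to use and how to schedule them.
let numbers = Array(1...1_000_000)
let chunkCount = 8                          // arbitrary for this sketch
let chunkSize = numbers.count / chunkCount
var partialSums = [Int](repeating: 0, count: chunkCount)

partialSums.withUnsafeMutableBufferPointer { results in
    DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
        let range = chunk * chunkSize ..< (chunk + 1) * chunkSize
        // Each iteration writes only to its own slot: no shared mutable
        // state, hence no race conditions to reason about.
        results[chunk] = numbers[range].reduce(0) { $0 + $1 * $1 }
    }
}

print(partialSums.reduce(0, +))   // total sum of squares
```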
Second, event loop libraries and runtimes such as Node.js, libuv, and libevent provided a different solution to the problem, avoiding multithreading altogether, at least from the point of view of the developer. Single threading and event loops for everyone.
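To make the contrast concrete, here is a toy single-threaded event loop in Swift; the EventLoop type and its methods are invented for this sketch, and real runtimes such as libuv add I/O polling and timers on top of the same idea. Callbacks run strictly one at a time, so no two of them can ever race:

```swift
// A toy event loop: one thread, one queue of callbacks.
final class EventLoop {
    private var queue: [() -> Void] = []

    func schedule(_ task: @escaping () -> Void) {
        queue.append(task)
    }

    func run() {
        // Each callback runs to completion before the next one starts;
        // concurrency is an illusion created by interleaving.
        while !queue.isEmpty {
            let task = queue.removeFirst()
            task()
        }
    }
}

let loop = EventLoop()
loop.schedule { print("first") }
loop.schedule {
    print("second")
    loop.schedule { print("third, queued from inside a callback") }
}
loop.run()
```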
Third, there was a functional programming renaissance. This led in turn to two interesting trends: on one side, the rise of new and old languages such as Scala, Haskell, and F#; on the other, “classical” programming languages adopting functional programming concepts. To name a few: C++, C#, Java, PHP, and Objective-C all got lambdas and functional idioms retrofitted into them in one way or another.
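What those retrofitted lambdas enable is a pipeline style of programming; here is a small sketch in Swift, whose closures play the same role, with chained higher-order functions replacing explicit loops and mutable accumulators:

```swift
let languages = ["Scala", "Haskell", "F#", "C++", "Java", "PHP"]

// The imperative version would declare an accumulator and loop;
// the functional version chains pure transformations instead.
let shortNames = languages
    .filter { $0.count <= 4 }        // keep the short names
    .map { $0.uppercased() }         // transform each element
    .sorted()                        // returns a new, sorted array

let totalLetters = languages.reduce(0) { $0 + $1.count }
print(shortNames, totalLetters)
```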
The rising popularity of Python, Ruby, and JavaScript during the Web 2.0 era is arguably also to blame for this evolution; not to mention the spread of the MapReduce processing model (2004) and the subsequent availability of Hadoop and other similar technologies.
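Stripped of its distribution machinery, the MapReduce model itself is small enough to sketch in a few lines of Swift; a word-count example, where the map phase emits (word, 1) pairs, a shuffle groups them by key, and the reduce phase folds each group. A real deployment runs these phases across many machines:

```swift
let documents = [
    "the free lunch is over",
    "the lunch was never free",
]

// Map: every document becomes a list of (word, 1) pairs.
let pairs = documents.flatMap { doc in
    doc.split(separator: " ").map { (String($0), 1) }
}

// Shuffle: group the pairs by word.
let grouped = Dictionary(grouping: pairs) { $0.0 }

// Reduce: fold each group down to a single count.
let counts = grouped.mapValues { group in
    group.reduce(0) { $0 + $1.1 }
}

print(counts) // ["the": 2, "lunch": 2, "free": 2, ...]
```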
Virtualization
Developers are quite detached from hardware considerations these days. I have made a career in a software world increasingly disconnected from that of hardware. Most of the time, I would say 90% of the time, the computers running my code were completely, absolutely, and hopelessly virtual.
Let us consider, for example, a canonical “full stack” web developer, with some knowledge of cloud native apps; quite a common résumé these days. This person will surely use a rather high-level programming language, most probably JavaScript, Java, PHP, or C#. They will create a Docker container out of that code, and this container will run inside some Kubernetes cluster, itself running inside a virtual machine at some cloud provider. In the middle, maybe, a CI/CD pipeline running in yet another virtual machine; most probably GitLab running the “Docker-in-Docker” image, building other images.
The actual hardware is, of course, nowhere to be seen. It is virtual machines all the way down, and legend has it that at some point there is an actual CPU executing those instructions. A CPU, you know, labeled with the 64-bit moniker, built with silicon, plugged into a socket, and hopelessly capped at 3 GHz.