A magazine about programmers, code, and society. Written by humans since 2018.

The Network Is The Computer

Back in the mists of time, an early Sun Microsystems employee named John Gage coined the phrase “the network is the computer” to describe the centrality of networking in the design of Sun’s workstations.

As we saw last month in the hardware issue, Sun and other companies were interested in taking existing time-sharing computer designs and selling customers one such system each. Because the computer was designed for multiple people to share, you need some way to justify this over-provisioning of resources. Move attention to the network, then: if everybody can log in to access their files, printers, and email from any of the workstations you deploy, then IT administration becomes part of the facilities department rather than the, well, IT department. The culmination of this design was the SunRay “ultra-thin” client, though by then Sun had gone back to selling you a single time-sharing computer and multiple access terminals.

Note that Sun were careful to say the network, not our network. Silicon Valley has an egregious history of taking publicly-funded work (i.e. work paid for by US taxes) and enclosing it to create multiple billions of dollars in profit for a few individuals. The most obvious modern example is Google, who commercialised work that was in part funded by NASA and the NSF (as credited in the paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine”).

Sun were no different. In a display of hubris, the founders named the company after the Stanford University Network (SUN) project, an effort to connect computers across the campus to the ARPAnet via TCP/IP over Ethernet. Andy Bechtolsheim’s work in designing the SUN workstation was paid for by DARPA and the university’s CS department.

Sun were not the only people to take publicly-funded work from the Stanford campus network and exploit it for private gain. Sandy Lerner, the director of computing facilities in Stanford’s business school, would be forced to resign her position at the university for commercially selling a university-designed network appliance that she had no license to exploit. She subsequently founded a private company to do the same: the Stanford “blue box” became IOS and the company’s name was Cisco.

Anyway, Sun shortened the rather dull phrase “somebody else’s network is the computer we want to profit from” and a marketing catchphrase was born. Oracle (finally a company bootstrapped on the work of its founder) evidently did not agree, or at least could not use the phrase to sue Google, and did not renew the trademark registration on acquiring Sun. Enter Cloudflare, who swooped in with much fanfare to snipe the phrase and use it for something something internet of things.

There is actually a lot packed into that slogan, “the network is the computer”, and lots of information for the software designer. To understand why, let us explore the truth of the statement: why is the network the computer?

It has all to do with Conway’s Law. As frequently stated, Conway’s Law says that organisations design systems that mirror their communication structure. This is usually interpreted very narrowly, meaning that software is designed along the communication structure of the software development team. In fact, Melvin Conway expressed that the word “systems” should be “defined broadly”.

It is common to think of a software product as being a technical system that mediates between two social systems: the system of producers (i.e. the software team) and the system of consumers (i.e. the clients, customers, “users”). In fact, this view comes from the Taylorist image of software creation that was prevalent in the twentieth century. The client’s roles are to know and express what software they need, and to accept our software once we have implemented all their requirements. Our role is to ask the questions that will uncover the requirements correctly and efficiently, and then to mechanistically and efficiently convert those requirements into the expected software.

A holistic and more accurate view of software development sees all three of these “systems” as part of a single social-technical system, in which the interactions between producers and consumers are too important, too frequent, and too nuanced to be externalised as outside factors influencing distinct systems. The software (both its existence and the act of its production) mediates a dialogue between developers and users over the software’s behaviour, in addition to producing a dialectic in which the appearance of the software both challenges existing beliefs about the system in which it is embedded, and shapes a new reality.

An early attempt to move away from the “three systems” model toward the “one system” model can be seen in the Manifesto for Agile Software Development. The manifesto maintains two distinct “classes” of actor—the “developer” who has the skill to create software and sells their labour into the software-creation exercise, and the “business” defined as sponsors who control the means by which software is paid for (gold owners) and users who supply the reason for the software’s existence (goal donors). The developers are still the knowledge-work equivalent of hired muscle:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Agile processes harness change for the customer’s competitive advantage.
  • Working software is the primary measure of progress.

Nonetheless, the Agile manifesto describes the act of software creation as an ongoing dialogue between these actors, and requests a fuzzier and more fluid boundary between their domains. Thus, it includes principles describing their interaction:

  • Business people and developers must work together daily throughout the project.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

Notice the tension inherent in the manifesto’s principles, between the boundless (more working software, faster, is better) and the bounded (we should be able to sustain this working situation forever). Developers working in the Agile context are invited to be Animal Farm’s Boxer, working as hard as they can for as long as they can in the hope of avoiding the glue factory.

A more progressive paradigm for software development would remove the classifications of “programmer” and “business people”, regarding all as individuals with particular expertise who come together to pool that expertise in pursuit of a common goal. And thus could the network as the computer be truly recognised: the network as a distributed source of, and means of access to, the computing and communication that support the social system in which it is embedded, to the greatest possible extent. Let us explore what that would look like.

The Internet Protocol was designed in the United States during the 1970s at the military’s Advanced Research Projects Agency (ARPA). As its designers were American cyberneticians, the system embodies design principles that would be desirable in an ideal system in contemporary America. Each node is a “free actor”, choosing to form peer relationships with other nodes and routing packets based on local information about those nodes and the packets’ destinations. The cyber-utopia, evinced by fanzines, the Whole Earth Catalog, and numerous cyberpunk novels, is one of total anarchy and individual liberty. The internet—as designed, anyway—perceives censorship as damage and routes around it.

In fact, the period in which IP became the de facto standard for internetworking (in the UK’s Joint Academic Network, Janet, the “killer app” that saw IP win out was the X remote desktop protocol) also saw the rise and entrenchment of global capitalism, and the proliferation of projects designed to transfer wealth en masse to a small cohort of billionaires. Ideas of interactions in “the marketplace” changed, and so did ideas about the network best configured to support them. Out was the Kevin Flynn who ran an independent arcade business and deserved restitution from Encom in the movie Tron. In was the Encom that was still dominant in Tron: Legacy, releasing Encom OS 12 with the killer feature: “this year, we put a 12 on the box”.

As the ideology behind the “free market” changed, so did the “marketplace of connectivity”: the internet perceived censorship as an opportunity and asked who was to pay for it. Thus the modern internet: a small number of global players controlling the core, and a ragtag assortment of startups wondering whether to give their seed money to Microsoft, Google, or Amazon in exchange for a seat at the table. The network is the computer, and the computer has five access terminals.

To understand how the internet’s adaptability to prevailing ideas about social systems enabled its survival, consider a similar system that did not achieve the same. The Soviet vision of the internet, designed by cyberneticians in research institutes around the Union, never became a reality. Its story is told in the book How Not to Network a Nation and the article “InterNyet: why the Soviet Union did not build a nationwide computer network”. Where Vint Cerf and Bob Kahn were supported in their plans to introduce a nationwide military computer network, their counterpart Viktor Glushkov (a Russian-born mathematician who founded the Institute of Cybernetics in Kyiv, Ukraine) met political obstacles and funding cuts. Why?

Principally because the network was not the computer: the proposed system did not support the system that was in place. The ARPA vision of the internet promised a decentralised system (what Americans think their system is) and was delivered by the heavily centralised, bureaucratic, planned economy of the US military apparatus (what America really is). Glushkov’s OGAS project promised exactly the same thing in a very different context: a decentralised economic planning system, at a time when Soviet political leaders believed that they had, or should have, total control over a centralised economy, delivered by a heavily centralised, bureaucratic, planned economy which did not really exist. Soviet spending was in practice far more regional and decentralised, relying on local knowledge and mutual assistance between lower-level functionaries who gave the appearance of implementing a central plan.

So for a software system to succeed, the network must be the computer: the physical availability and distribution of the platform that provides access to the software (i.e. the computer) must reflect, support, and reproduce the interpersonal connections and data exchanges that will achieve the goals of the people involved (the network). The computer can change the network, and the network can change the computer, but this will happen with mutual involvement. Software designers must expand their understanding of Conway’s Law.

Photo by JJ Ying on Unsplash

Continue reading "Sniffing Packets" or go back to Issue 027: Networking. Did you like this article? Consider subscribing to our newsletter or contributing to the sustainability of this magazine. Thanks!