Issue #15: Writing, Library

Brad Cox

Ever since Aristotle, philosophers have struggled with the concept of existence. Why do we observe the things that we see in the world? How did they get there? Is there a purpose? Will they exist forever? Are we made of the same things as everything else, or do we have an animus or soul that other objects lack?

In 1714, Gottfried Leibniz published La Monadologie, in which he proposed that at the fundamental level, objects are made of indivisible entities called monads. You can’t see inside a monad; it has an internal state that controls its actions and passions. Monads can be created or destroyed atomically, but cannot be partially built or partially annihilated, because each is a “simple substance without parts”.

This doesn’t represent the pinnacle of metaphysical exploration. In fact, in the twentieth century Martin Heidegger completely upended the whole field. In the introduction to his 1927 Being and Time, he puts forward that you can’t ask the question “what is Being?” because the question itself presupposes an understanding of the word “is”; in other words, it relies on an understanding of Being in which our understanding of the being of Being can be situated.

Thus Heidegger initiated what is now called Modern Hermeneutics, in which “facts” are understood not to represent a fundamentally true world outside our experience, but a working interpretation informed by our experiences, our perspectives, and our histories.

Computing, of course, spent a long time without questioning its relationship to reality. Computers were seen as domains of pure logic, in which either routines of mechanical calculation instructions were carried out (the Turing school) or functions mathematically transformed data (the Church school). Both paradigms were silent on the meaning or interpretation of the data being acted upon, which was left as an exercise for the reader.

The concept of being, in a form that logicians would recognise as ontology but Heidegger would call merely ontics, entered computing at first slowly, and then all at once. (Ontology, Heidegger says, is the investigation of Being, while Ontics is merely the investigation of things that are in a system in which Being is accepted.) Simula broke down the barrier between the things outside the computer and the procedures written in the computer by introducing software classes. Now programmers could say “this piece of the computer represents a star”, or an animal cell, or an employee, and they could say “all [simulated] stars have this information about themselves” and “all [simulated] stars can perform these procedures”.

Researchers at Xerox PARC developed these ideas in the context of Alan Kay’s Dynabook concept and the Smalltalk programming environment, finally publishing information about their “object-oriented” programming system in the August 1981 issue of Byte magazine. The objects in Smalltalk clearly reflect a Leibnizian, monadic simulation of the world. Objects (monads) can be created or destroyed, but either exist or they do not. They encapsulate private data that is inaccessible from outside, and they can be willed into action, including changing their internal state, by the receipt of messages. Crucially, an object itself decides what to do when it receives a message: a level of indirection and isolation not found when invoking a named procedure.
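To make that last point concrete, here is a minimal sketch in Objective-C, which borrowed Smalltalk’s messaging model; the Star and Employee classes are invented purely for illustration. The sender never names a procedure to run: it sends the describeYourself message, and each receiver decides for itself, at run time, which code answers it.

    #import <Foundation/Foundation.h>

    // Two unrelated "monads": each encapsulates its own state and decides
    // for itself how to respond to the describeYourself message.
    @interface Star : NSObject
    - (NSString *)describeYourself;
    @end

    @implementation Star
    - (NSString *)describeYourself { return @"a ball of plasma"; }
    @end

    @interface Employee : NSObject
    - (NSString *)describeYourself;
    @end

    @implementation Employee
    - (NSString *)describeYourself { return @"a person with a payroll number"; }
    @end

    int main(void) {
        @autoreleasepool {
            // The same message goes to different receivers; which method body
            // runs is chosen by the receiving object, not by the caller.
            NSArray *objects = @[[Star new], [Employee new]];
            for (id each in objects) {
                NSLog(@"%@", [each describeYourself]);
            }
        }
        return 0;
    }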

Taking full advantage of this way of structuring software meant thinking entirely differently about how software is designed and built. David Robson, one of the contributors to the Byte issue on Smalltalk, acknowledged that it would be easier to come to objects fresh than to shift paradigm from a procedural view of software:

…the basic idea about how to create a software system in an object-oriented fashion comes more naturally to those without a preconception about the nature of software systems.

Object-oriented programming removes a conceptual barrier between the things we’re building software for and the software being built. All of the people, processes, organisations, artefacts and natural resources in the real world can have analogous software avatars in the form of objects, interacting with each other by sending messages within the software system. A small collection of programmers, authors and educators in the 1980s saw that this meant changing not only how you write software, but also how you think about the software and about the context in which you are creating it.

Brad Cox was one of those people. On the surface, Objective-C looks like a tool for designing objects that internally execute C procedures, and indeed that is what it does. But Objective-C was born of a desire to do for software what Intel had done for hardware, and commoditise algorithms and data structures in a component model analogous to the integrated circuit.
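As a rough illustration of that layering, here is a minimal Objective-C sketch; the Circle class and its single method are hypothetical, not drawn from Cox’s own examples. The outside is an object that receives messages; the body that answers them is ordinary C.

    #import <Foundation/Foundation.h>
    #include <math.h>

    // A hypothetical component: the interface is the object's public face,
    // the implementation is plain C hidden behind it.
    @interface Circle : NSObject
    - (instancetype)initWithRadius:(double)radius;
    - (double)area;
    @end

    @implementation Circle {
        double _radius;   // private state, invisible to senders of messages
    }

    - (instancetype)initWithRadius:(double)radius {
        if ((self = [super init])) {
            _radius = radius;
        }
        return self;
    }

    // The message [circle area] is dispatched at run time; the code that
    // answers it is an ordinary C calculation.
    - (double)area {
        return M_PI * _radius * _radius;
    }
    @end

Everything a client needs to know is in the interface; the C that does the work stays out of sight behind it.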

In Object-Oriented Programming: An Evolutionary Approach, Cox laid out this vision for “Software-ICs”. It is not fair to make the usual comparison that this book is to Objective-C as K&R is to C, or as Stroustrup’s book is to C++. K&R is a user’s manual for a tool, and the C++ book is a guide to making effective use of a complex design system. OOP:aEA is nothing short of a manifesto for a completely different way of funding, staffing and delivering software products.

The Kernighan and Ritchie book opens with the “Hello, World” example. The C++ Programming Language follows an annotated table of contents with “The purpose of a programming language is to help express ideas in code.” Cox, on the other hand, opens with the story of Eli Whitney and the industrial revolution.

For Cox, the industrial revolution is not primarily about machinery and the harnessing of steam and coal power. It’s about the replacement of artisanal, cottage manufacture with scaled-up industrial processes that depend on well-specified interfaces between standardised, interchangeable parts. Where previously gunsmiths made one-off rifles with an end-to-end process, Whitney argued for rifles to be assembled from standard components. Then, if a bolt fails in the field, you just need to replace the bolt, not the rifle.

Cox takes an object-oriented look at software production, in much the same way that Ivar Jacobson took an object-oriented look at business processes. He wanted programmers to design objects, and publish data sheets that described how the objects worked. Integrators would browse these data sheets, and buy objects that they could connect together to build higher-level assemblies that solved their problems.

Ten years later, Cox had to accept that while the company he had co-founded to promote Objective-C had found some financial success, it was far from the Intel of software. To find out why, we pick up the story of another of his books, Superdistribution: Objects as Property on the Electronic Frontier.

Again, the story of Eli Whitney and the discussion of a “software industrial revolution” comes up. But now we get to the difficulty with scaling component sales in software: we don’t sell software by the unit, for the most part. Most of the cost is borne in the initial construction, and subsequent copies are free. We can’t just buy a couple of Window objects to kick the tyres before scaling up our production of graphical applications: we either license infinite Windows (and probably the rest of the GUI toolkit) or none.

Cox advocated for an object-oriented economy. When I use my application, my computer registers that use and makes a micropayment to the application author. However, the application also makes use of the GUI toolkit objects, and a portion of the payment is transferred from the app author to the toolkit author. If the toolkit uses some foundational data structures, then another payment is made.

Now development is cheap, because a developer pays only for their own limited use of a few objects on their own computer. But when they distribute the application and more users start using it, their revenue and the share they pass “upstream” both increase with the scale of the app’s use.
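No such payment infrastructure ever shipped with Objective-C, so the following is a purely hypothetical sketch of the cascade Cox describes, with invented component names and fees: each use of the application settles a chain of micropayments, every component keeping its own fee and forwarding the rest upstream to its suppliers.

    #import <Foundation/Foundation.h>

    // Purely illustrative: a component charges a fee per use, keeps it,
    // and forwards its suppliers' fees up the dependency chain.
    @interface Component : NSObject
    - (instancetype)initWithName:(NSString *)name
                       feePerUse:(double)fee
                       suppliers:(NSArray<Component *> *)suppliers;
    - (double)collectForUse;   // returns the total charged for one use
    @end

    @implementation Component {
        NSString *_name;
        double _fee;
        NSArray<Component *> *_suppliers;
    }

    - (instancetype)initWithName:(NSString *)name
                       feePerUse:(double)fee
                       suppliers:(NSArray<Component *> *)suppliers {
        if ((self = [super init])) {
            _name = [name copy];
            _fee = fee;
            _suppliers = [suppliers copy];
        }
        return self;
    }

    - (double)collectForUse {
        double upstream = 0.0;
        for (Component *supplier in _suppliers) {
            upstream += [supplier collectForUse];   // a share flows up the chain
        }
        NSLog(@"%@ keeps %.4f, passes %.4f upstream", _name, _fee, upstream);
        return _fee + upstream;
    }
    @end

    int main(void) {
        @autoreleasepool {
            Component *collections = [[Component alloc] initWithName:@"FoundationCollections"
                                                            feePerUse:0.0005
                                                            suppliers:@[]];
            Component *toolkit = [[Component alloc] initWithName:@"GUIToolkit"
                                                        feePerUse:0.002
                                                        suppliers:@[collections]];
            Component *app = [[Component alloc] initWithName:@"DrawingApp"
                                                   feePerUse:0.01
                                                   suppliers:@[toolkit]];
            // One use of the app settles the whole chain of micropayments.
            NSLog(@"total charged: %.4f", [app collectForUse]);
        }
        return 0;
    }

In this sketch one use of DrawingApp charges 0.0125: the app author keeps 0.01 and passes 0.0025 upstream to the toolkit author, who in turn passes 0.0005 on to the author of the collections library.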

As described in Superdistribution, object-oriented economics required custom hardware to securely track object usage and make the correct micropayments. A small trial in Japan was not followed up. But Cox was writing in pre-blockchain times, before dApps written in Solidity made use of micropayments in the form of smart-contract evaluations on the Ethereum network. We now have the technical capability to implement the ideas of Superdistribution, yet we still have the problem of charging a fair amount for creating software components.

The only innovation to succeed at promoting the distribution of software components at scale is the free software license, which makes no provision for paying the creators of those components. Superdistribution deserves a modern reading, and the ideas behind a software-industrial revolution still have much to offer our field.

Cover photo by Adrian Kosmaczewski.

Graham is a senior Research Software Engineer at Oxford University. He got hooked on making quality software in front of a NeXTstation Turbo Color, and still has a lot to learn.