In his rare 1994 book "Object-Oriented Programming with ANSI-C", Axel-Tobias Schreiner explains how to do inheritance, class methods, class hierarchies, and even how to raise exceptions using nothing but pure, simple, pointer-arithmetic-filled ANSI C.
You do not believe me? You can read it online for yourself, and I suggest you do. You can even find it in your preferred online bookstore, like Lulu, if you look closely (hint: ISBN 978-3446174269).
The aforementioned book contains a terrible statement right at the very beginning:
Only object-orientation permits code reuse between projects — although the idea of subroutines is as old as computers and good programmers always carried their toolkits and libraries with them.
(I can picture those developers in the first half of the 1990s, carrying diskettes filled with useful code snippets from employer to employer, and watching new seasons of “The X Files” every evening at home.)
Let us give Herr Schreiner some slack here: his book was written at the same time as events he could not have known about. Roughly then, somewhere in Silicon Valley, a team led by a certain James Gosling at a company named Sun was about to release the “Oak” project. Ultimately named Java, and decidedly marketed as an object-oriented environment, it borrowed some ideas from NeXT’s Objective-C, like the interface keyword; from C++, like class; and from a myriad of other technologies.
In the following decades, however, the Java Virtual Machine has been shown to seamlessly support functional, procedural, and even logic languages on top of it (yes, there are Prolog implementations for the JVM).
History proved the quote above to be painfully wrong. Code reuse had more to do with component architectures, design patterns, levels of caffeine, decomposition into libraries or frameworks, and the actual and often underrated wish to make things reusable, than with any particular programming paradigm.
And this happened because a Java Virtual Machine is a Turing-complete environment; as is the .NET Framework, or Erlang’s BEAM, or Smalltalk-80, or the IBM z/VM installations still running those trusty mainframe COBOL programs written in 1965. They can run, by definition, anything that can be described by a Turing machine.
Just like the CPU in the computer you are using in this very moment to read these lines.
Reusability As A Paradigm Versus Paradigm Reusability
In JavaScript, functions are objects, complete with methods such as .call(). Haskell comonads are actually objects. Swift 1.0 implemented instance methods as curried functions.
But none of this is new. Smalltalk, arguably the precursor of object orientation, had select methods which were the grandparents of our more common filter functional friends.
Right here, a lambda is an object. Over there, an object is a lambda. Further away, a method is a procedure. Another procedure is a function. And more often than not, a function is a closure.
In the Kingdom of Software, the classical taxonomy of programming languages into families, as promoted by Jean Sammet half a century ago, no longer makes sense today. Much to the chagrin of right-wing programming language extremists, languages have bred with one another and have given birth to mutants and bastard children, some of which have lost all purity in exchange for a much-needed applicability. Some of which have survived, some of which have faded away.
Convenience And Egos
While it can certainly be a fun exercise to implement an Object-Relational Mapper or yet another React-style web framework in ANSI C, it raises the obvious question: why?
Of course we will not implement such a beast. Why would we? We can choose among thousands, even tens of thousands, of languages today, most of them open source, and all bundled with libraries that make them suitable for a particular endeavour. They are all free, most of them hosted on GitHub, many of them actively maintained and curated by long lists of luminaries in their AUTHORS files. These teams are continuously deciding which features to include or not. Most of the popular languages created in the past decade (Go, Rust, Kotlin, Swift, to name a few) support functional, procedural, and object-oriented programming out of the box.
Does it make sense to continue to talk about programming paradigms? It seems to the author of these lines, in any case, that the discussion has shifted to more actionable items, such as the strength of the type system, or the number of supported platforms. Maybe that is the more rational question to ask, the more open-minded discussion to have.
Or maybe, in the age of microservices and serverless, is it simply more important to be able to stick your preferred paradigm inside a container, inside a node, inside a Kubernetes deployment, inside a cloud somewhere else? Maybe, if your language can speak HTTP, we can follow Graham’s advice and reconsider microservices not as a new paradigm, but rather as yet another implementation of Object-Oriented programming, one finally done right?
There are many more programming paradigms beyond the ones mentioned in this article. All of them, without exception, can be used to solve your problem, as long as you use a Turing-complete language. The only valid statement that can help you choose one of them is the following: cogito, ergo sum.