Alan Jay Perlis knew a thing or two about programming languages, both as an early pioneer of our industry and as one of the designers of ALGOL, the language that inspired the one you, dear reader of this magazine, probably use every day to earn a living.
In his first-ever ACM Turing Award Lecture, “The Synthesis of Algorithmic Systems,” in 1966, Dr. Perlis enumerated three ways programming languages evolve from one to the next.
Successor languages come into being from a variety of causes:
(a) The correction of an error or omission or superfluity in a given language exposes a natural redesign which yields a superior language.
(b) The correction of an error or omission or superfluity in a given language requires a redesign to produce a useful language.
(c) From any two existing languages a third can usually be created which (i) contains the facilities of both in an integrated form, and (ii) requires a grammar and evaluation rules less complicated than the collective grammar and evaluation rules of both.
Perlis did not stop there. In 1982 he published his famous epigrams, which you have most probably read on Twitter. Those observations were natural memes, even if that word did not yet exist.
41. Some programming languages manage to absorb change, but withstand progress.
Let us look in detail at how each of these (three plus one) principles stated by Perlis can be applied to the evolution of programming languages over the last 56 years. Maybe much of this evolution was instead inspired by Peter Landin and his 1966 (what a year, huh?) ISWIM proposal for "The Next 700 Programming Languages"; but let us not digress, and instead identify some patterns of programming language evolution by looking backward in time.
Natural Redesign Towards A Superior Language
What do we understand by “superior”? As privileged observers from 2022, we can quickly identify some “modern” traits that make languages undoubtedly (well, at least from our point of view) superior. Let us mention a few: support for types where the predecessor had none, or a more robust type system than the predecessor's (although abusing this characteristic invariably leads to longer compilation times); essential IDE support; the generation of safer and (or) faster code; type inference facilities; a stronger community and ecosystem.
Languages featuring such changes represent small evolutionary steps, welcome but not revolutionary changes. They are praised on Hacker News and adopted by the community as welcome improvements over existing languages. Some examples, roughly in chronological order:
- C (for B)
- Pascal (proposed by Wirth as a better ALGOL 60, even though Brian Kernighan did not like it)
- D (for C++)
- Raku (for Perl)
- Scala (for Java)
- Crystal (for Ruby)
- Hack (as an attempt by Facebook to bring PHP to another level)
- Carbon (for C++, maybe?)
Some of these redesigns involve syntax changes (D, Pascal, Hack); others, a recreation of their compilers or runtime models (Crystal, Hack's HipHop virtual machine); in other cases, the introduction of a completely different paradigm (Scala) to an existing platform (the Java virtual machine), or even a new syntax with backward compatibility (Carbon).
Redesign Towards A Useful Language
Paraphrasing Bjarne Stroustrup, this is the realm of languages everyone complains about. What do we understand by “useful”? In the opinion of this author, pragmatic languages. The ones that get shit done. The languages we use every day. They generally carry the “industry” moniker, are usually easier to understand by masses of engineers, and are usually (sadly) adopted by academia at some point. They come bundled with everything smart people need to get stuff done: a package manager, a “batteries included” library of pre-built functions, a (not very) strict type system (which predictably yields a healthy equilibrium between correctness and productivity), and advanced IDE support.
And marketing. Lots of marketing. Usually, a big organization is behind, financing the evolution of the language through a foundation, committee, GitHub project, or some other similar mechanism.
They also feature less complicated licensing conditions and might include some functional programming facilities over their predecessors; it is undoubtedly fancier to use a map() function than a while loop. Even though, according to Turing, both will get the job done.
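The equivalence is easy to demonstrate. Here is a minimal sketch in Python (chosen purely for illustration; any Turing-complete language would do), computing the same result both ways:

```python
numbers = [1, 2, 3, 4]

# The "fancy" way: map() applies a function to each element.
squares_map = list(map(lambda n: n * n, numbers))

# The pedestrian way: an explicit while loop with an index.
squares_while = []
i = 0
while i < len(numbers):
    squares_while.append(numbers[i] * numbers[i])
    i += 1

# Both produce [1, 4, 9, 16]; the difference is style, not power.
```

Style aside, the map() form says *what* is computed, while the loop spells out *how*; that, and not expressive power, is what the syntactic sugar buys.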
Some examples? Classics:
- Delphi and Turbo Pascal (as useful Pascal compilers)
- C# (as a useful Java)
- Pharo (as a useful Smalltalk)
- F# (as a useful OCaml)
- Kotlin (as another useful Java, but this time compatible with it)
- Elixir (as a useful Erlang)
- Scheme and Clojure (as useful Lisps)
The astute reader will have realized by now that at least three languages designed by Anders Hejlsberg appear in this list. As a software engineer concerned with solving practical problems for my customers, I have always favored useful languages in my work. But developers should beware of Dr. Perlis’ third epigram, because it is in this category where it hurts the most:
3. Syntactic sugar causes cancer of the semi-colons.
There are more “market driven” evolutions in this category; languages that mostly evolve their associated libraries or ecosystem to fulfill some novel role:
- R and Python have effectively replaced Fortran as the de facto language for scientific calculus.
- Python also reinvented itself as the language of AI.
- C# chose to evolve alone and gradually replace itself. The current iteration of the language is quite different from when it debuted in 2000: arguably, it was a Microsoft-owned clone of Java.
- Go became the language of choice for Cloud Native systems and cross-platform CLI tools; Ballerina, although initially intended to compete in this space, does not have nearly the same traction.
- Dart, a boring but highly effective language, is slowly positioning itself in the mobile app development market. With the current (in the opinion of this author, deserved) backlash against React Native, Dart has a real chance to grow beyond its current status.
From Many Existing To An Integrated And Less Complicated One
These languages usually mark historical milestones, representing significant shifts in the programming industry. Usually, there is one of these every 18 years, give or take, roughly following Proebsting’s Law, named after University of Arizona computer science professor Todd A. Proebsting. Perhaps contradicting Dr. Perlis, history showed that sometimes there were more than just two languages involved in the evolutionary process. Normal, since we have many more languages today than in 1966.
- C++ (at least in its early forms 40 years ago) evolved from Simula and C;
- Objective-C, taking cues from Smalltalk and building upon C;
- Java, from Objective-C, Simula, and Smalltalk;
- Rust, from C, C++, OCaml, Haskell, and many others;
- Swift, from Objective-C, Haskell, Rust, and so many others.
Rust is arguably targeting a higher goal today than any other language in the past 60 years, aiming to unify computer science with software engineering. Let us meet again in this magazine in one or two decades and see if it has kept its promise.
The problem in this category is that, although Dr. Perlis sees the final result as “less complicated,” these languages become extremely complicated as they grow.
Absorbing Change But Withstanding Progress
Dr. Perlis’ fourth category holds some venerable languages that have stood the test of time, staying almost unscathed since version 1.0, almost begrudgingly adopting some fads (object-oriented programming, multiple CPU support, etc.) while remaining (stubbornly?) faithful to their origins: COBOL; Lisp; APL; Forth; PHP; BASIC; C; and Go.
These languages are withstanding progress, yes, but only to a certain degree. COBOL has had a new standard roughly every 20 years, and C got a new one every decade. Go has recently adopted generics, its most significant addition since version 1.0. BASIC has yielded many offspring, yet its core has seldom changed. PHP features type annotations. Lisp will always be Paul Graham’s preferred programming language. Nearly all of these languages have been safe bets for a long-sustained career in software development during (at least) the past 25 years.
Greenspun’s Tenth Rule comes to mind:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
The overall life expectancy of a programming language has dwindled in the past 56 years. A COBOL developer in the 1960s most probably retired in the 2000s, still writing COBOL. As a former professional VBScript, then C#, then Objective-C, later Swift, and finally Go developer, I can only see this trend accelerating. We should expect our favorite programming language to be replaced and removed from the market ever more quickly. To add insult to injury, new versions of the same programming language are sometimes incompatible with their predecessors.
The above is one of the reasons why this author believes that hyper-specialization, as demanded and supported by industry pundits, is a risk, a bet.
Another of Perlis’ epigrams tells us that the evolution of programming languages is an unsolved problem:
73. It is not a language’s weaknesses but its strengths that control the gradient of its change: Alas, a language never escapes its embryonic sac.
And if we believe Geoffrey James,
Each language has its purpose, however humble. Each language expresses the Yin and Yang of software. Each language has its place within the Tao.
But do not program in COBOL if you can avoid it.