Type inference is not a new idea. The most common approach to providing such a service in a programming language, the Hindley-Milner type system and algorithm, bears the names of two scientists who discovered it independently, about a decade apart. It is even rumored that Haskell Curry used it as early as the 1930s, or even that Alfred Tarski used it in the 1920s.
Type inference is a must-have feature these days. Recent, popular, “modern” languages have it: Scala, F#, Go, Rust, Swift, TypeScript, Dart. Older languages have been adapted to have it too: C++, Java, C#. It is such a ubiquitous feature nowadays that Kotlin does not even mention it explicitly in its documentation. Even Visual Basic has it. As these lines are published, JetBrains is working on type inference mechanisms for Ruby while others aim to do the same for Python, PHP, and Lua. And needless to say, ML, OCaml, and Haskell have all had it for decades; one more bragging right for their advocates to be vocal about.
Type inference became ubiquitous in the 2010s. It has become one of the most, if not the most, expected features developers look for when welcoming a new language into their arsenal. There are other items on the wish list, though: optionals (or “nullable types”), generics, and, of course, speed of compilation (and execution) are the most common ones.
Most importantly, from a practical point of view, type inference has brought a truce to the infamous dynamic vs. static typing wars, mercilessly waged over the previous four decades (at least). This breakthrough peace came about thanks to a simple side effect of type inference mechanisms: making statically typed languages look like dynamically typed ones.
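To make that side effect concrete, here is a minimal TypeScript sketch (the variable names are purely illustrative): not a single type annotation appears, yet the compiler infers a static type for every binding and rejects type errors at build time rather than at runtime.

```typescript
// No annotations anywhere, yet everything is statically typed by inference.
const greeting = "hello";                // inferred: string
const numbers = [1, 2, 3];               // inferred: number[]
const doubled = numbers.map(n => n * 2); // inferred: number[]

console.log(doubled); // [2, 4, 6]

// Uncommenting the next line fails at compile time, not at runtime:
// const oops = greeting * 2; // error TS2362: arithmetic on a non-number
```

On the surface, this reads exactly like a dynamically typed script; that invisibility is the truce described above.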
Software developers over 40 or 50 years old will surely remember those old Usenet forums, filled with long diatribes about the relative merits of static and dynamic type systems. It can be argued that there were fashion trends that made one system or the other more popular, in cycles roughly a decade long.
It is not the point of this article to add anything to that discussion, because type inference has rendered it moot. If a type system is invisible enough to provide rapid development cycles, while at the same time providing useful information to catch bugs early in the process, then we have all won.
In the late 70s, the rise of personal computing made BASIC a household name. Dynamically typed and freely bundled with most computers, those first BASIC implementations taught a generation of new developers, copying code from the pages of BYTE magazine, to assume that runtime type errors were just an unavoidable fact of life.
During the 80s, as computers started featuring visual, interactive GUIs, Turbo Pascal and C++ appeared as the strongly typed languages that conquered that space. Object orientation was king, and static typing was its army: the staple of professional developers, the factor differentiating them from amateurs and hobbyists. On the other side of the fence, however, Perl and Objective-C were trying to convince a skeptical world that one did not need a strongly typed language to write useful software.
In the 90s, Java rose to be the most widely used programming language of its era, ushering in a time of reinvention for every single wheel in a relatively strict static language. In a rather interesting peacebuilding process, Microsoft released COM+, making dynamically typed languages such as VBScript work on top of a statically typed, C++-based infrastructure, thanks to the long-forgotten IDispatch and IUnknown interfaces.
Well, scratch that. Statically typed languages, this time, did not disappear. They stayed, but shed all type annotations from their code, thanks to type inference. As Steve Ballmer handed the driving seat of Microsoft to Satya Nadella, COM+ gave way to the .NET DLR. But nobody cared anymore about that, because the same Anders Hejlsberg who brought us Turbo Pascal gave us TypeScript, and TypeScript begat Visual Studio Code, where our C# code is written without any type annotations, and we let the compiler infrastructure figure out the types for us.
But since peace is never fully achievable, the wars moved on to other fronts: say, whether to build apps with Electron or not, instead of whether to create a respectful industry without burnout or the exclusion of women and other groups, or one with an actual ethical purpose.
Those are much more relevant war stories to write about, whose developments will be published sometime around February 2030, hopefully.