In a scene of the vastly underrated 2003 sequel film “The Matrix Reloaded”, Neo, played by Keanu Reeves, meets “The Oracle”, a sentient program portrayed by the late Gloria Foster. In that scene, The Oracle offers Neo a piece of candy, and Neo asks whether she knows if he is going to accept it or not. The unfazed Oracle responds “Wouldn’t be much of an Oracle if I didn’t!”
We human beings are obsessed with the future, to the detriment of our present. Even though our current scientific observations confirm the second law of thermodynamics, and its famous corollary, namely the direction of the arrow of time as explained by Stephen Hawking, we have dreamed of predicting the future since the most ancient of times, and we spend inordinate amounts of energy in order to model and ultimately control it, albeit partially.
In the software industry, this obsession begat Gartner hype cycles; Agile poker planning sessions; Waterfall specification documents; financial forecasting on Excel spreadsheets; Steve McConnell’s book “Software Estimation: Demystifying the Black Art”; freely downloadable analysis papers in PDF format; Barry Boehm’s COCOMO models; and countless predictions on specialized press outlets, including many hilariously ridiculous or downright failed ones.
Of course, it is too easy to laugh at those failed predictions from the vantage point of 2025 (just like Bret Victor once did), but we know that Schadenfreude is another key component of our psyche. Let us review some famous ones.
Ken Olsen, founder of Digital Equipment Corporation, allegedly said in 1977 that he “couldn’t see any need or any use for a computer in someone’s home”. He was right insofar as very few people have a PDP-11 at home these days.
Of all people, it was Robert Metcalfe who predicted in 1995 that the Internet would go “spectacularly supernova and collapse” one year later. To his credit, he ate his words in public in 1997 (and we mean this in the most literal sense of the verb “eating”, with a blender and everything).
Nobel Memorial Prize in Economics winner Paul Krugman stated in 1998 that by “2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”. Instead, that year we got Ruby on Rails.
Microsoft CEO Steve Ballmer was quoted in Wired Magazine in 2007 claiming “there’s no chance that the iPhone is going to get any significant market share”. Seven years later, at a psychotherapy session with Charlie Rose, he finally acknowledged the blunder.
Even highly respected computer scientists can fail in this area. We have quoted in a previous article the 2006 paper “A View of 20th and 21st Century Software Engineering” by none other than Barry Boehm himself:
Assuming that Moore’s Law holds, another 20 years of doubling computing element performance every 18 months will lead to a performance improvement factor of 2^(20/1.5) = 2^13.33 = 10,000 by 2025. Similar factors will apply to the size and power consumption of the competing elements.
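Boehm’s arithmetic holds up: 20 years at one doubling every 18 months gives 20/1.5 ≈ 13.33 doublings, hence a factor of 2^13.33, or roughly 10,000. A throwaway sanity check in Python (ours, not anything from Boehm’s paper):

```python
# Moore's Law as quoted: performance doubles every 18 months.
years = 20
doubling_period_years = 1.5

doublings = years / doubling_period_years  # about 13.33 doublings
factor = 2 ** doublings                    # about 10,000x improvement

print(f"{doublings:.2f} doublings -> roughly a {factor:,.0f}x factor")
```

Whether the hardware of 2025 actually delivered that factor is, of course, exactly the kind of prediction this article is about.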
Oh, and did I mention that it has been “the year of Linux on the desktop” for the past 25 years already?
You get the idea. But none of this is new; historically, most disruptive technologies have been attacked as soon as they appeared, as the examples of television, the automobile, and packet-switching networks show. This phenomenon was very well explained by E. T. Jaynes in a brilliant (and funny) paper titled “Notes on Present Status and Future Prospects” published in 1991:
The Establishment and the lunatic fringe have the common feature that they do not understand the new idea, and attack it on philosophical grounds without making any attempt to learn its technical features so they might try it and see for themselves how it works. Many will not even deign to examine the results which others have found using it; they know that it is wrong, whatever results it gives.
Le sigh.
We also have those colorful predictions that have stuck in the collective psyche and are abundantly cited in pretty much every blog post or conference talk, yet have been proven to be fabulously inaccurate. Like the one attributed to IBM founder and president Thomas Watson Sr., stating that “there is a world market for maybe five computers”, or the one attributed to Bill Gates (and since debunked) that “640K ought to be enough for anybody”.
Many of these predictions have been stored in an aptly-named “Predictions Database” available on Elon University’s website. (Do not worry, nothing to do with the other Elon, just an unfortunate coincidence. Unfortunate for the University, that is.)
Of course, not everyone gets predictions wrong; in particular, Popular Mechanics got at least one right in March 1949:
Where a calculator like ENIAC today is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1000 vacuum tubes and perhaps weigh only 1½ tons.
Tru dat. The Lenovo laptop I am using to write these words definitely has fewer than 1000 vacuum tubes, and does not even weigh 1½… kilograms.
Despite the obvious shortcomings of human prediction capabilities, the field of technology is filled with futurism. Programmers need to plan which programming language (or LLM) to learn next. Project managers need estimations to keep their stakeholders happy (and their jobs secure). Businesses need forecasting to plan their course of action and their strategy.
Where there is demand, there is a market. Let us review some famous examples.
In 1952, Ida Rhodes gave a talk in Los Angeles (followed by its eponymous article) titled “The Human Computer’s Dreams of the Future”.
J. C. R. Licklider published at least three ground-breaking documents describing the computing of the future: the “Man-Computer Symbiosis” paper of 1960, the “Intergalactic Computer Network” 1963 memo, and the book “Libraries of the Future”, published in 1965. (And that is without counting “The Computer as a Communication Device” co-authored with Robert W. Taylor in 1968.)
Martin Greenberger published in 1964 “Computers and the World of the Future”, including papers by C. P. Snow, Vannevar Bush, J. C. R. Licklider, Grace Hopper, Alan Perlis, Claude Shannon, John Kemeny, and many more luminaries.
That same year, Herbert Marshall McLuhan published “Understanding Media: The Extensions of Man”, a book that activated more than a few synapses among technologists, coining or popularizing words and concepts such as “media”, “information age”, and “global village”.
The World Future Society was founded in 1966, an organization of which Carl Sagan and Peter Drucker were members, and which was the publisher of a magazine named “The Futurist” from 1967 to 2015.
Jean Sammet published in 1972 a paper titled “Programming Languages: History and Future”. Spoiler alert: she did not mention Java nor Rust. Instead,
The major broad concepts that we should expect to see in the future are: (1) use of natural language (e.g. English), (2) user defined languages, (3) nonprocedural (sic) and problem defining languages, (4) an improvement in the user’s computing environment, and (5) new theoretical developments.
The ultimate ease of communication with the computer allows the user to specify his instructions–or wishes–in a natural language such as English.
Or, as the cool kids call it nowadays, vibe coding.
In September 1991, Scientific American published a seminal article by Mark Weiser called “The computer for the 21st century”. Two of the article’s illustrations show people using tablets, styluses, and interactive whiteboards.
From 1978 to 1995 you could read articles about the future in the now extinct Omni Magazine, largely replaced by Wired Magazine since then.
In 1999 John Naughton published “A Brief History of the Future”, telling the story of the Internet and its possible future impact.
Lawrence Lessig published in 2002 “The Future of Ideas”, explaining the potential of societal change at the beginning of the 21st century, largely thanks to the aforementioned Internet.
In 2021, Ion Stoica and Scott Shenker argued that we were moving “From cloud computing to sky computing”. I suppose Kubernetes will still be around.
Last but not least, a few months before the publication of the article you are reading, a team of researchers released “AI 2027”, an overly optimistic and quite biased report featuring a suite of scenarios, supposedly showcasing the disruptive potential of LLMs in our near future. Around the same subject, we could not leave out “Thousands of AI Authors on the Future of AI”, a paper published in 2024.
Instead of such toxic positivity around the hype of AI, I would rather yield the floor to Luc Julia, a former designer of Apple’s Siri and the author of a book beautifully titled “Artificial Intelligence Does Not Exist”. In a recent hearing at the French Senate, Mr. Julia said (translated from the original French):
I don’t claim to predict the future for the next thousand years, and it’s possible that quantum physics will open up new possibilities. The fundamental difference is that quantum physics is a branch of physics, while mathematics is only an approximation of physics; it attempts to describe the world, but it is not the world. Therefore, in statistics, 100% does not exist, and perfection is unattainable.
But talking about AGI means talking about perfection, about an entity that would do everything, always better than us. This is mathematically impossible. A concrete example of AGI, in a specific field, would be the level 5 autonomous car that Mr. Musk has been promising us since 2014. As a reminder, there are five levels of autonomy; level 5 represents complete autonomy, a vehicle capable of getting from point A to point B without ever hitting anyone along the way. However, this car does not exist and, as I will demonstrate mathematically, it never will.
(Which means my open letter to a future AGI will remain unread. Maybe it is better like that.)
How can anyone spot plausible futures among a sea of predictions? Who can you believe? Well, as is often the thesis in this magazine, we argue that the study of the past might provide some clues as to the ever-changing direction of the arrow of time.
So here is a suggestion, since we happen to live in the world of the future as dreamt by our ancestors of 1950 (let us be honest: bar its lack of subspace band support, the iPhone looks much better than the clunky communicator used in the original Star Trek TV series of the 1960s). We can analyze the problem of the viability of predictions from two perspectives: the scientific one, and the fantastic one.
On the scientific side, we could start with a book that MIT Press published in 2021: “Ideas That Created the Future: Classic Papers of Computer Science”, including material from antiquity to the late 1970s, edited by Harry R. Lewis. This might be a good starting point; these papers, each in its own way, and unbeknownst to their authors, described a possible future, in some cases with uncanny precision. Of course, caveat lector: survivorship bias applies here. The ideas enumerated are those that stood the test of time; nothing else.
On the fantastic side of the spectrum, the 2011 book “Science Fiction and the Prediction of the Future: Essays on Foresight and Fallacy”, edited by Gary Westfahl, Wong Kin Yuen, and Amy Kit-Sze Chan, would serve to understand the other side of the coin. In this case we will see a list of outlandish ideas that did not stand said tests of time or technological feasibility, yet opened up serious questions about our future.
The precision of our analysis notwithstanding, the sad truth is that the future looks bleak and insurrectional, particularly for those in the USA. As these words hit the web, the SCOTUS has effectively abolished universal injunctions to presidential power, thereby kicking off the democracy overrule process we predicted exactly three years ago in this magazine–needless to say, a prediction we would have loved to get wrong. And there is more to come. What will happen now? As Yoda famously said in front of the (then) Senator Palpatine,
The dark side clouds everything. Impossible to see the future is.
Hear you we do, Master Yoda.
Paraphrasing Alan Kay and William Gibson, instead of predicting those countless techbro utopias and committing to what is probably the very worst of them, as technologists we should have worked harder to invent precisely one future: without fascism, without human rights violations, without hunger, and without war. A future that would be evenly distributed for everyone on Earth to enjoy.
But just as the Oracle said to Neo in the scene quoted at the beginning of this article, as a society we have already made quite a few questionable choices. It is now our responsibility to understand that those choices were terrible, and then, maybe, we could choose better the next time–well, unless we are really plugged into a highly skeuomorphic Matrix that has been pulled over our eyes to blind us from the truth.
Cover photo by Deepak Gupta on Unsplash.