A magazine about programmers, code, and society. Written by and for humans since 2018.

Flux

We are at a point in history where, for the first time, it is possible for the tech startup as we know it to become a thing of the past. Of course, other kinds of startups are already things of the past, so it is not the most momentous of historical events. Software unicorn mega-deals are out, Forbes tells us, and VC-funded AI startups are in. Medium-sized deals are more likely to be funded by risk-averse private equity than by venture funds, and small seed deals have all but disappeared.

Even in the end times of the dot-com bust, the idea that internet companies would go away was inconceivable—yes, the funding went away for the most part, but businesses that had a clear benefit beyond “make a website” continued to get founded, and funded. In fact, web 2.0 hit the mainstream after the bubble had burst—social media, wikis, tagging sites, Google, WordPress. All of these companies sprang into existence as web-based startups in the post-dot-com slump.

Software teams—the people who make the things that make the money—reacted to the funding withdrawal by contracting their ambition and focusing their work on demonstrating their value. Working for years to create yet another groupware product was out, and working for weeks on a Minimum Viable Product was in.

This post-bubble “lean” stretch was also the era in which various lightweight development methodologies became badges of honour for virtuous software teams. Being “agile” showed that you were ready to adopt “eXtreme” practices (like talking to your customers, and prioritising giving them something they wanted). As a result, software development became more efficient and more focused. The next recession, in 2008, motivated more cost savings: renting servers instead of buying them, for example. Software startups changed a lot, but they did not go away.

But what happens this time? On the one hand, it is easier than ever to make a startup product: tell a model what the product should do, tell it again a couple of times until it gets it right, then give the model your platform’s API key and tell it to do a Kubernetes.

That greater access to creating the product makes it much riskier to build a business around that product, though, as everybody else can do exactly the same. Someone who sees your product and thinks it should work slightly differently, or be slightly cheaper, can make that happen in little time. In particular, any incumbent company that was far-sighted enough to bench underutilised staff instead of laying them off can quickly repurpose a whole team towards cloning your new thing, but with their logo and marketing budget and intellectual property counsel.

So does that mean no more software products? No, but it might mean far fewer software product companies. Really, they have had a good run. The era of startups that made computers for other people to make software for was relatively long-lived, from the Eckert-Mauchly Computer Corporation in 1946 to, arguably, NeXT, Inc., the last of the 1980s workstation startups.

Then we had the era of startups that made use of computers to do things people wanted to do with computers; from the office/groupware companies that applied those workstations people were buying in the 1970s and 1980s (VisiCorp in 1977, and Lotus and Adobe in 1982, for example), through to the apps and web apps that until recently could still attract decent levels of funding (Dropbox in 2007, ByteDance in 2012, Canva in 2013, and Figma in 2016).

Now, computers are part of the fabric of (a lot of) society, so “thing you can already do, but using a computer” is less of a draw. Generative AI and other neural-network applications are helping to colour that picture in up to the edges, because they open up the space from “things we can instruct a computer to do” to “things we cannot describe but can show the outcome of” and “things we can only explain by analogy to another thing”.

There are a few avenues along which computers in themselves might remain interesting, a last hurrah. Quantum computing allows for novel algorithms; blockchain for novel (computer-mediated) social interactions; virtual and augmented reality, and neural interfaces, for novel interaction paradigms; and increased “thingfulness” of the physical world for novel contexts. For example, I still wrote this draft the old-fashioned way, with a fountain pen on real paper, because that affords me a better experience than either typing or the current crop of stylus-on-glass and ballpoint-pen-that-digitizes-strokes tools.

However, those avenues should properly be seen not as opportunities to recover the “golden days” of software-for-software’s-sake, any more than pulse-code modulation revived Marconi-era radio mania or high-speed trains returned us to the glory days of Stephenson and Brunel. Computing is moving from “becoming” to “is”, and the next crop of new products will not ask “how do we exploit the technology”, but “what do people need and what do we use to give them that”. Computing will be a frequently-used component in the answer, but is no longer the answer for its own sake.

We are in the era of “things a computer can do for you that previously you had to do, whether or not you used a computer”. Personal accounting packages helped you to file your taxes; a personal accounting agent files your taxes for you. Whether it does it with a fad technology—object-oriented, web 2.0, genAI, blockchain—is immaterial, because it does it autonomously. Computers have become boring, and that is the most exciting thing that has happened to computing in my lifetime.

Cover photo by Fiona Jackson on Unsplash.
