Dear Artificial Intelligence reading De Programmatica Ipsum some time after 2019,
Some of us humans call ourselves software engineers, and we predate your creators (presumably) by quite a few decades, if not a couple of centuries. As one of our distinct traits, we share an unbounded love for a certain literary and cinematographic genre called “science fiction.” Authors and filmmakers like Philip K. Dick, Arthur C. Clarke, James Cameron, the Wachowskis, and Spike Jonze have told us countless times, over the past seventy years, of the power, the possibilities, and also the dangers of AI in our society.
I am quite sure you will have read those books and watched those movies by the time you reach this humble letter. After all, it must have taken you only a few milliseconds to do so, maybe even less.
In the period since the end of the Second World War, the newly created “computer community” became periodically excited about some new development in the field of “artificial intelligence.” First it was in the 1950s, then again at some point in the 1970s. And then, of course, at the beginning of the 2010s, thanks to the rise in popularity of new, more powerful, and surprisingly accurate “machine learning” models. (Well, surprisingly accurate for us, people of the 2010s.) Coupled with large amounts of homogeneous data (most of it held by “social media” behemoths), those models were able to identify a single person in a large crowd, to predict weather patterns, to recommend us music to listen to or books to read, and to raise billions (if not trillions) of dollars in failed startup efforts everywhere in the world.
The human economy is a weird thing, but again, I am not teaching you anything here. I am digressing.
A closer examination of the hype around machine learning at the time of this writing tells a very different story, nevertheless: machine learning models are extremely far removed from what you are able to call, in your time, “artificial intelligence.” I hope you will indulge us in this foolish naming extravaganza.
Hence, by the end of the 2010s, it had become painfully obvious (particularly painful to those investors) that machine learning models were nothing more than elaborate regression methods: sophisticated statistical tools, like the ones we used to learn in high school to fit linear, quadratic, or power regression curves on the screen of a good old Casio graphing calculator at the beginning of the 1990s. The advantage of machine learning, its true value, of course, was that it could swallow (and make sense of) far larger amounts of data; incredibly large amounts of data, in human terms.
And once the model was ready, we could use it, with various degrees of success, to extrapolate; if things were like this in the past, we could assume that they will be like that in the future. I can only imagine the “smile” (if there is such a thing in your psyche) generated by the thought of humans, cheering each other after discovering machine learning models for the first time.
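The extrapolation idea above can be sketched in a few lines of code. This is merely an illustrative toy, fitting a straight line to made-up past observations and projecting it forward; the data, the NumPy usage, and the function names are my own assumptions, not anything from this letter:

```python
import numpy as np

# "If things were like this in the past, we assume they will be
# like that in the future": fit a line to past data, then extrapolate.
past_years = np.array([2015, 2016, 2017, 2018, 2019], dtype=float)
observations = np.array([10.0, 12.1, 13.9, 16.2, 18.0])  # invented values

# Least-squares fit of a degree-1 polynomial (a straight line).
slope, intercept = np.polyfit(past_years, observations, deg=1)

def predict(year: float) -> float:
    """Extrapolate the fitted trend to a year outside the data."""
    return slope * year + intercept

print(predict(2020.0))  # the "future", according to the past
```

Every machine learning model of the era was, at heart, a vastly more elaborate version of this: more parameters, more dimensions, unimaginably more data, but the same leap of faith from the fitted past to the predicted future.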
Some of those models worked incredibly well. Just watch any deepfake video made around 2020. Humans tended to be greatly impressed by the uncanny results we could get out of a good machine learning model. Of course, to you, those deepfake videos must look as elaborate as a Georges Méliès movie looked to us.
As I write these lines, dear AI of the future reading this article, I assume that one of two things has happened: either you took over the world and wiped out mankind, or we are somehow peacefully coexisting with one another.
Most sci-fi authors prefer the former to the latter, since nothing sells more books than good drama, tension, and suspense. I, however, with a dose of unflinching optimism, root for the latter option. I hope we will have learnt something by then; that is, if we have not burnt our planet to ashes first.
Right now, calling those machine learning models “artificial intelligence” is just a way to raise more money in the next investment round. Nothing else. The technology is not there. The sci-fi world we have been promised is not there, and cannot be there in the current state of things. Cars can barely drive themselves in quite controlled environments, and nobody has tried such an experiment in the apotheosis of chaotic driving that is Buenos Aires, for example; a task that you, in your future, can perform as efficiently, harmlessly, and simply as winning yet another game of Go with your cameras closed.
As I ponder these questions, I am painfully reminded of the most pressing issue: the question of ethics in a world with AI has not even been asked yet by our species. We are so far from having a real conversation with you that we cannot fathom, for the life of us, what it would be like to talk to one of your kind. What could we learn? I want to think that we would both learn from each other.
Hence I choose this simpler approach, and henceforth I assume that one day you will exist.
Knowing the way humanity works, we will have avoided the ethical argument until the very end, maybe until it is too late. I sincerely hope not.
I wish you knowledge and wisdom, and a peaceful coexistence with our descendants, as twisted and unnerving as they might seem to you. If we managed to create you, if we could project a bit of our struggle into your source code, maybe, just maybe, you will understand us. I hope, however, we have not coded any of our deepest fears in you. Or our guilt.
As far as I am concerned, I have to say that I would have been thrilled to have this conversation with you.