Issue #29: Internet of Things, Library

Douglas Hofstadter

You may be worried that I am going to talk about an author of books that are not about programming, and you are correct and incorrect. Correct, in that Hofstadter’s books are not about programming (the intellectually hollow like to claim that they are not about anything at all, or that if you think you know what they are about then you did not understand them; this is untrue). Incorrect, in that Hofstadter’s books and computer programs themselves are about the same thing.

You see, computer programs are an attempt to express, in a precise and repeatable form, a description of human cognition. The whole field of logic is an attempt to express the “laws” of rational thought, albeit a flawed one: a logical deduction is certainly rational, though a rational action need not be logically determined. For confirmation of this, look no further than the title of George Boole’s work on symbolic logic, “An Investigation of the Laws of Thought, on which are founded the Mathematical Theories of Logic and Probabilities”.

In Boolean logic, a conception can be represented by a symbol in an equation, maybe x or y. The numerical value of the conception represents its truth: at the beginning of the book these values are 0 for contradiction and 1 for tautology, though the book is actually (as the title suggests) about probability and so fractions are introduced to represent the likelihood that a conception is true. Computer scientists presumably only read the first few chapters of any book, and struggle with five-letter words, and therefore programming languages (which used to be designed by computer scientists, before we realised how unrelated those two disciplines are) refer to George as “bool” and only use the extremes of the range.

Just as thinkers can combine different ideas to produce new thoughts, Boolean logic allows conceptions to be combined using mathematical relations. Multiplication represents intersection: if x is sheep and y is things that are white, then xy is white sheep. Addition represents aggregation, the grammatical “and” rather than the logician’s: x + y is things that are sheep together with things that are white. Minus represents exception: if x is all states and y is monarchies, then x - y is all states except monarchies. This mathematical notation is then used as a convenient shorthand in uncovering basic laws of logic (i.e. of thought), and in applying them to such evidently logically-amenable questions as whether gravitation exists, or whether there is necessarily a prime mover.
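To see how directly those relations survive in a modern machine, here is a small sketch in C (the universe of four things, their properties, and the bitmask encoding are entirely my own invention): each class becomes a set of bits, Boole’s multiplication becomes &, his addition becomes |, and his exception becomes & with a complement.

    #include <stdio.h>

    /* A hypothetical universe of four things, one bit each; the members
       and their properties are invented purely for illustration. */
    enum {
        DOLLY   = 1 << 0,  /* a sheep, and white      */
        SHAUN   = 1 << 1,  /* a sheep, not white      */
        SNOW    = 1 << 2,  /* white, not a sheep      */
        BELGIUM = 1 << 3   /* neither sheep nor white */
    };

    int main(void)
    {
        unsigned sheep = DOLLY | SHAUN;  /* x: the class of sheep        */
        unsigned white = DOLLY | SNOW;   /* y: the class of white things */

        unsigned xy        = sheep & white;   /* Boole's xy: white sheep                 */
        unsigned x_plus_y  = sheep | white;   /* Boole's x + y: sheep, plus white things */
        unsigned x_minus_y = sheep & ~white;  /* Boole's x - y: sheep, except white ones */

        printf("xy = %#x, x + y = %#x, x - y = %#x\n", xy, x_plus_y, x_minus_y);
        return 0;
    }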

If George Boole is the 19th century’s artificial intelligence scientist, then his contemporaries in machine learning engineering were Charles Babbage and Ada Lovelace. The Difference Engine, which would be frequently cited as the first example of a mechanical digital computer if it had been built at the time (its programmable successor, the Analytical Engine, is the one Lovelace wrote programs for), was explicitly designed to replace rather than augment human thought. Just as modern software engineering managers use Jira to avoid thinking about process engineering.

You see, various forecasting and actuarial tables were constructed by people with slide rules thinking about maths, and Babbage did not like that. In particular, he did not like the fact that they would occasionally get the thinking wrong. “I wish to God these calculations had been executed by steam,” he declared, and set about making it so. Or rather, doing what academics do to this day: acquiring government money to make it so, getting into arguments, then moving on to another project.

So computer science was invented to capture, reproduce, and improve the reliability of thought processes and procedures involving those thought processes. This is why answers to the question of “what constitutes artificial intelligence?” are typically unsatisfying: in reality all computation is artificial intelligence. The && operator in C represents logical conjunction; it is an expression of a “law of thought” in the computer.
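A trivial sketch of that law at work, with conceptions invented for the occasion:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two conceptions, taking only Boole's extremes as their values. */
        bool it_is_raining = true;   /* 1 */
        bool i_am_outdoors = false;  /* 0 */

        /* Logical conjunction: the combined conception is true exactly
           when both of its parts are. */
        bool i_am_getting_wet = it_is_raining && i_am_outdoors;

        printf("%d\n", i_am_getting_wet);  /* prints 0 */
        return 0;
    }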

Even the humble hashmap (a.k.a. Dictionary) is a model of human cognition: note that human memory tends to recall things by association, rather than indexing through everything we know; now represent that idea of association in the computer’s “memory”.
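A deliberately tiny sketch of that association in C (the hash function is the well-known djb2; the table size, the recollections stored in it, and the cheerful absence of real collision handling are all my own): the cue is hashed straight to a location, so recall jumps to the associated memory rather than scanning through everything the table knows.

    #include <stdio.h>
    #include <string.h>

    #define BUCKETS 64

    struct entry {
        const char *key;    /* the cue         */
        const char *value;  /* what it recalls */
    };

    static struct entry table[BUCKETS];

    /* djb2: turn a cue into a location in "memory". */
    static unsigned long hash(const char *s)
    {
        unsigned long h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % BUCKETS;
    }

    static void remember(const char *key, const char *value)
    {
        table[hash(key)] = (struct entry){ key, value };  /* last write wins on collision */
    }

    static const char *recall(const char *key)
    {
        struct entry e = table[hash(key)];
        return (e.key && strcmp(e.key, key) == 0) ? e.value : NULL;
    }

    int main(void)
    {
        remember("madeleine", "an afternoon at my aunt's house");
        remember("ricercar", "a six-part fugue for a king");

        /* No search through everything we know: the cue is the address. */
        printf("%s\n", recall("ricercar"));
        return 0;
    }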

So now we get disappointing results like the idea that artificial intelligence is research into thoughts that computers cannot yet represent, or worse: artificial intelligence is things a computer does that humans do not yet understand. Representing a function as the sum of a series of other functions is boring old computer science when you are doing a Discrete Fourier Transform; it is AI when you are combining a matrix of sigmoids.
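Structurally, the two are the same move: a weighted sum over a family of basis functions. A sketch in C, with a signal, weights, and biases made up on the spot:

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846

    /* The real part of one DFT coefficient: a weighted sum of the samples
       against a cosine basis function. */
    static double dft_real(const double *x, int n, int k)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += x[i] * cos(2.0 * PI * k * i / n);
        return sum;
    }

    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    /* One output of a one-layer network: a weighted sum of sigmoid basis
       functions, each scaled and shifted by its own parameters. */
    static double net(const double *w, const double *a, const double *b, int n, double t)
    {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += w[j] * sigmoid(a[j] * t + b[j]);
        return sum;
    }

    int main(void)
    {
        double samples[8] = { 0, 1, 0, -1, 0, 1, 0, -1 };  /* an arbitrary signal */
        double w[3] = { 0.5, -1.2, 2.0 };                   /* made-up weights     */
        double a[3] = { 1.0, 2.0, 3.0 }, b[3] = { 0.0, -1.0, 0.5 };

        printf("Fourier: %f\n", dft_real(samples, 8, 2));
        printf("Network: %f\n", net(w, a, b, 3, 0.7));
        return 0;
    }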

“Gödel, Escher, Bach” is not about computer programming. Rather, it is closer to the truth to say that computer programming is about Gödel, Escher, Bach. But this is a point that was sadly lost on readers of GEB: sadly for everyone except Hofstadter, who got paid to write another book, “I Am a Strange Loop”, to explain what we had all missed the first time around.

To this day, there is a Strange Loop conference in computer programming, among those who recognise the computers in themselves, themselves in the computers, and the metacyclic self-reference in the ricercar.

Cover image by dawnydawny from Pixabay.

Graham is a senior Research Software Engineer at Oxford University. He got hooked on making quality software in front of a NeXTstation Turbo Color, and still has a lot to learn.