Miracle Marathon - London - 2016 -- video
There is an anecdote that goes around MIT involving computers and magic. In the early days of time-sharing for mainframe computers, graduate students, researchers and faculty all had to use the same, massive computer -in this case, a PDP-1. One day, one of the lab members found a switch on the side of the machine, which had two positions: one was “Magic”, the other was “More Magic”. The switch was set to “More Magic”. Curious, the researcher flipped it to “Magic”, and the whole computer crashed immediately. After setting it back to “More Magic”, he and other colleagues investigated the situation. A single wire connected one end of the switch to the innards of the machine; the other end of the switch was inexplicably left open. There was no rational, immediate explanation for the phenomenon. Content with the use of “More Magic” over regular “Magic”, they resumed their work, and the computer was left untouched until it was disposed of. This was but the first in a long series of anecdotes in which humans acknowledged the mystical nature of the computer.
What I’m going to talk about this morning is how we came to frame the computing machine, the latest miracle in human history, in terms of human words, and how, through those words, we allowed numerical systems to make unequivocal, unquestioned decisions about our lives.
The original scripture for the computer miracle can be traced back to the Turing paper which, like most papers, only bothered itself with ideals. It offered a theoretical solution to a theoretical problem, and the description of vague guidelines to implement this theory seemed to be but an afterthought. What Alan Turing had in mind was the possible elaboration of an all-encompassing solution to the Entscheidungsproblem -the decision problem. Can we make an unequivocal, an unambivalent, an unquestionable decision? We, humans, can’t, because we forget. But if you could devise a system to circumvent those human limitations, then you could know all there is to know. In Turing’s case, all there is to know about computable numbers.
The Turing paper, then, is but the newest iteration in our long list of claims to universality. It was written in English, but it was about numbers.
Turing’s purpose wasn’t to have a physical proof of universal decision-making; the purpose was to provide mental reassurance, to KNOW that it was possible to KNOW. So that even if we weren’t able to hold and manifest that knowledge, we could rely on the fact that there was a system that could. To rely on something better than humans at storing and processing information.
Belief systems have always relied on names. As scientists were following Turing’s word and bringing his machine into our world, they described the range of binary operations with the poetic name of a truth table.
A truth table is obtained by cross-referencing truth functions with binary inputs. At the same time, a piece of shorthand with far-reaching consequences emerged from the correlation of truth tables with Boolean logic. The equality, or inequality, of a given mathematical expression, of an ideological wording, could be assessed as being TRUE or FALSE. That system of binary encoding took on a life of its own, and became a Manichean system.
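The cross-referencing can be sketched in a few lines; here is a minimal illustration in Python (my choice of the Boolean AND function as the example is arbitrary -any truth function would do):

```python
from itertools import product

def truth_table(fn):
    # Cross-reference a truth function with every combination
    # of binary inputs: the table maps (input, input) -> output.
    return {(a, b): fn(a, b) for a, b in product([False, True], repeat=2)}

# The AND function: TRUE only when both inputs are TRUE.
AND = truth_table(lambda a, b: a and b)
```

Every statement fed to the table comes back as one of exactly two verdicts, which is the Manichean quality described above.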
The Turing machine was only the consecration of thought into matter, of rational thought (science) into rational matter (machine). And that consecration had believers from the very start. They had a computer at MIT, and the computer was running on magic, unquestioned by those few clergymen as they scattered between universities in the United States. Because to know how to work the machine is to hold in one’s hands the ability to decide for certain. To find an answer and a truth as the output of successive steps of read, process, write.
But in order to have the computer provide us with those answers, we had to set standards. It was decided that variations in voltage would be represented by two numbers, 0 and 1: a binary digit, a bit. So plus, the mathematical operator which adds two numbers, became 111001111101011100010.
But binary was far from making the programmers’ lives easier. So we started looking for better and better ways to TALK to it, and for ways to overcome the undesirable side-effects of a machine -that is, its inability to understand us. Research assistants slowly abandoned machine code, the faithful mirror of electronic circuits, and built assemblers for a language called Assembly. An assembler, like a compiler, is something of a translator, if you will: a piece of code that takes one language and transforms it into another, until it reaches its desired state of electrical signals. Because, if we were to solve our problems, if we were to ask the machine what decisions to make, we had to renegotiate the terms of understanding, and we needed to talk to it with English words.
The first step in that process was to agree that the word ADD would become an abstraction of numbers. It was agreed that this word, this idea, this relation, could be linked to a set of binary digits. In that case, ADD became: 111001111101011100010. From thought, to numbers, to words.
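The agreement is nothing more than a lookup table: a word on one side, bits on the other. A minimal sketch, using the talk’s own illustrative bit string (it is not a real machine opcode):

```python
# A hypothetical assembler table: each mnemonic stands in for a
# fixed pattern of bits. The bit string is illustrative only.
OPCODES = {"ADD": "111001111101011100010"}

def assemble(mnemonic):
    # From thought, to words, to numbers: look the word up,
    # hand the machine its binary form.
    return OPCODES[mnemonic]
```

The entire negotiation between human and machine rests on tables of this kind, stacked on top of each other.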
Object-Oriented Programming
Still, those words were highly specific. One typo and the computer would throw an exception, ignoring the fact that the English language is full of exceptions. So up to that point, the languages designed to communicate with the computer followed a very strict paradigm, what we call the procedural programming paradigm: input > process > output. Reading and writing in the style of procedural programming is based on certain words, and some of these words are called data types.
A data type is, well, a type of data that is known by the computer and with which you can do a certain set of operations. For example, if you have an integer, you can add it to another integer. If you have a string, that is, a word, you can comb through the characters that make it up and look for particular subsets of characters. Ask the computer whether the word “life” contains the word “if” and the computer will decide that such a statement is true.
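That last question is a one-liner in most languages; here it is in Python:

```python
# Ask the computer whether the word "life" contains the word "if".
statement = "if" in "life"   # the computer decides this is true

# And whether it contains something that simply isn't there.
other = "love" in "life"     # the computer decides this is false
```

Note the shape of the answer: not an explanation, not a shade of meaning, just TRUE or FALSE.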
The second moment in the computing miracle was the shift from procedural programming to object-oriented programming, in which we replaced data types with data structures. This is when computer science became computer fiction.
Object Oriented Programming allows us to write instructions which relate to our world. It allows us to use these primary data types in order to create our own. Two integers together become a two-dimensional vector. Three integers in the range of 0 to 255 become a color. A mix of strings and integers can represent a user profile in a corporate database.
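The three examples above can be written down almost verbatim; a minimal sketch in Python, where the class and field names are my own illustrative choices:

```python
from dataclasses import dataclass

@dataclass
class Vector2D:
    x: int          # two integers together...
    y: int          # ...become a two-dimensional vector

@dataclass
class Color:
    r: int          # three integers in the range 0 to 255
    g: int
    b: int

@dataclass
class UserProfile:
    name: str       # a mix of strings and integers can represent
    user_id: int    # a row in a corporate database
```

Each of these is still only integers and strings underneath; the name is what turns them into a vector, a color, a person.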
This allowed us to recombine the building blocks of the Turing paper. It allowed us to create, describe and act upon clusters of data. Naming that data -those chunks of digital clay that were to form the golem of our world- came from an endeavor to relieve programmers from the mental somersaults of describing what was around us. This act of naming resulted in a manipulable, processable representation of our world. From making decisions with, and about, numbers, we ended up making decisions with words-as-numbers about our physical, non-discrete environment.
A machine doesn't define red as being the color of blood, it defines red as being a value of 255, 0, 0, passed from the central processing unit to the graphical display device at one particular pixel coordinate. But then we can place those values within a greater data structure, naming it “red”, tagging it with “blood, love, happiness, danger”, all weighted against each other.
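Such a greater data structure is easy to sketch; the weights below are invented for illustration, not taken from any real system:

```python
# Raw channel values placed inside a named structure, tagged with
# weighted associations. The weights are illustrative assumptions.
red = {
    "rgb": (255, 0, 0),
    "tags": {"blood": 0.9, "love": 0.8, "danger": 0.7, "happiness": 0.4},
}

# The machine can now "decide" what red means most strongly,
# by doing nothing more than comparing numbers.
strongest = max(red["tags"], key=red["tags"].get)
```

The decision looks semantic from the outside, but it is only an arithmetic comparison over weights that a human chose.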
This is how we made the machine, if not understand, at least seem relevant to the mental representation of the world that computer programmers lived in. If we could reorganize a word into bits, and bits into bytes, and if those bytes could mean anything, then it follows from the Cartesian method that we could abstract everything. As Georg Nees tells it, during the first exhibition of Computer Art, a computer could indeed draw like a human draws, if and only if one could explain to the computer how to do it. It is then that we started the task of explaining, one after the other, all the things we needed: from mathematical operands to letters, then words, then colors, then brushstrokes, and humans.
The Death Of Miracles
This miraculous promise of solving the decision problem was too beautiful not to try to fulfill. At this point -at our point- everything we can think of, everything we can effectively represent in semantic terms, has become computable. We could give the computer instructions -such as the best way to do this, or to do that- provide the “this” or the “that”, and make sure that “this” or “that” was computable. Because everything is a word, and because all words are understood by the computer, everything became computable. We can represent a problem, ask the computer for a solution, and modify the instructions until we reach a conclusion which fits the worldview of the person who wrote the program, or that of the person who pays the person who wrote the program.
In 1970, John Conway came up with the Game of Life, with 0s and 1s performing as life and death. An arbitrary number of neighbors was the sole arbiter of the existence of each cell. And that toy, a mathematical toy which decided, on its own, better and faster than any of us, who would live and who would die, that toy was taken seriously by those who did not know, or those who replaced knowledge with belief.
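The entire arbiter fits in a dozen lines. A minimal sketch of Conway's rules (a cell is born with exactly 3 live neighbors, survives with 2 or 3; I wrap the grid at the edges for simplicity):

```python
def step(grid):
    # One generation of Conway's Game of Life on a wrapping grid of 0s and 1s.
    h, w = len(grid), len(grid[0])

    def neighbors(y, x):
        # Count the live cells in the 8 surrounding positions.
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))

    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = neighbors(y, x)
            # The neighbor count alone decides life and death.
            new[y][x] = 1 if n == 3 or (grid[y][x] == 1 and n == 2) else 0
    return new

# A horizontal "blinker" of three live cells flips to a vertical one.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Nothing in the code knows what "life" is; it only compares a count against 2 and 3.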
Data input, process and output weren’t about whole numbers anymore; they were about our life expectancies, career paths, happiness levels, language literacies and voting intentions, all represented on a scale. And because any scale can be normalized -represented between 0 and 1- it can be decided upon. Conway’s Game of Life paved the way for other games of life, less mathematical and more deadly. Tenure tracks are justified by articles published on computer models of civil violence. Promoters and urban planners play around with simulations to fix low-income housing problems.
Private companies wrote instructions for the computer -a decision-making machine- to inform a judge -a decision-making human- whether another human, described through a limited set of words, entered in arbitrary databases, transformed into code and compiled into dualistic electrical signals, was more likely to commit a crime against his peers. The computer decided that some humans were more dangerous than others, and the humans it singled out correlated with lower-class citizens of color.
It was describing the world in the words of those who had made it.
Around the same time, in 1966, ELIZA was born. Its creator, Joseph Weizenbaum, was also a professor at MIT. He started an experiment, an experiment in modeling human conversation. His set of instructions was a back-and-forth between machine language and human language, representing decisions made by carefully crafted instructions and vaguely worded sentences. Upon completion, he referred to it as a parody, but still gave it a name. His assistant, however, referred to it as an interlocutor, and called the program by her name -ELIZA. It was something to talk to, for lack of a better someone. The instructions, as became clear to anyone, quickly showed cracks, repetitions and a lack of understanding. It is possible to see it as a machine, and yet we choose to see it as more than the sum of its parts. We choose to reflect only on what we are presented, on the computed output that has been curated for our expectations, a polished mirror for an optimized user experience. Give it a name, and it’s alive.
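The back-and-forth rests on a handful of pattern-and-reflect rules. A toy sketch of the technique, not Weizenbaum's actual script (the patterns and replies are my own):

```python
import re

# Carefully crafted instructions: a pattern on the left,
# a reflecting reply template on the right.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     "Tell me more about feeling {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Reflect the user's own words back at them.
            return template.format(match.group(1))
    # The cracks show quickly: anything unmatched gets a stock reply.
    return "Please go on."
```

There is no understanding anywhere in this loop, only matching and substitution; and yet, given a name, it becomes an interlocutor.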
The Last Miracle
The question now is what happens when we translate our limited expression of all that surrounds us into the subjective objectivity of our mechanical decision-maker.
What happens is what happened with all other miracles. They invalidate the possibility of anything else. Whatever happens, might happen, or might have happened, is explainable by the computer. Its supremacy happens not through the automated calculation of numbers, but through the automated calculation of love, hope, health and happiness. The supremacy of the computer happens in our unwillingness to understand how it works. It happens when we abandon our own human complexity -down to the very theoretical models of our brains and behaviors- and force that complexity into the machine’s comfort zone.
The most obvious response to that dynamic of quantizing the world, then, is a matter of recognizing how absurd such an endeavor is, and acknowledging that our modern miracle is only made of numbers.
Because our Turing machines are nothing more than a very elaborate assortment of minerals, most likely assembled in the People's Republic of China by underpaid and suicidal laborers.
There was no magic in the authority of the Catholic Church, there was no magic in the neo-liberal free-market, and there is no magic in the computer. This does not prevent us from being excessively attached to those belief systems, but our emotional binding to numbers is precisely what prevents us from moving past a paradigm of accumulation -no matter how unsustainable it is.
Framing it as a lack of understanding is still blaming the system for being over-complicated -I don't see it as a lack of understanding, I see it as a lack of effort. The word of the Catholic Church was written by men, men who set up that system to serve their vision and impose it on the world, benefitting from the fact that most of their believers were, literally, illiterate. They set up beautiful pieces of colored glass to convey their message. The only thing we got better at is that now, our colored glass responds to touch.
Today, every single piece of software is still written by men. And those men seem to aim towards god-like omnipotence, an ambition unchecked by the rest of us. Specifically, those words are written by a very limited, yet very potent part of our world’s population. Those writers are mainly male engineers working in the Bay Area, who KNOW what they are doing, if not why.
So, to conclude, let’s just remember that computers were built by humans, and there is no reason why other humans couldn’t understand what they do, how they do it, and why they do it. This is all the more necessary as those machines are ever expanding their range of operation. Gutenberg invented the printing press to spread the word of the Church, and Luther seized it to question it. Turing invented the computer, and it is up to us to take a close look at it, to see how we turned it into an unquestionable means of understanding, explaining and deciding.
Because, if we are to allow for the continued existence of other miracles in life, then perhaps we should start by understanding how we objectified that life.