DLD Conference 2017 -- video
The broader question here is "What's the plan?". In our tech world, such a question is usually followed by "How do we build it?". Today, when turnout amongst the youngest voters at the British referendum was strikingly low, when European elections consistently rank last in participation amongst all other national elections across the continent, and when barely half of U.S. citizens vote in their presidential elections, it seems that democracy does need some "fixing".
We can imagine a long-term version of that 'fix', in which Google -or any other private tech company, for that matter- has figured out a voting system. There is already a patent, number 9262882, which describes a Voting User Interface, primarily dedicated to polling users whenever a search result comes up, for real-time, reliable opinion assessment. For now it is intended only for commercial purposes, but it isn't impossible that, at some point in the future, some private or public entity would ask such a company to "fix" democracy. I'm not saying that such a private/public endeavor is good or bad, but rather that it is becoming more popular, and that, in this context, coming up with fully digital solutions to public problems is becoming more and more plausible, if not encouraged.
Meanwhile, governments across the Western world have published several studies arguing why electronic voting is a bad idea. The vast majority of their points concern technical liabilities. Now that we are slowly understanding the extent to which war by means of computers can be as serious and as consequential as war by means of bombs and missiles, we cannot risk such an obvious leverage point in any system that aims at being democratic.
But that is just the technical standpoint; what I'm interested in here is the cultural standpoint. I agree with these conclusions, but I disagree that security should be the main requirement for deciding whether an electronic voting interface is needed, built and distributed. Thinking strictly about technicalities hides the greater picture: it allows us to ask **what computers can do** ("offer a secure means of voting") and not **what they should do** ("offer a means of voting"). What some proponents of electronic voting are offering is an additional interface to an existing, functioning human process. And electronic voting is an example of why we might not always need more interfaces.
Voting is an interface
Recently, the Federal Constitutional Court in Germany decided that electronic voting was unconstitutional. It wasn't declared unconstitutional based on how secure it might or might not be, but on the principle that voting should remain a process which all citizens are able to audit and understand. Any German citizen, with pen, paper and patience, could recount the votes cast during any election. Voting is not just a mechanical process, it's also a cultural practice, and you can't replace a mechanical process without changing the cultural habits that go with it.
So what is our current voting interface? After having registered in a specific area, you go out to a voting station, you stand in line with the other citizens who are registered in the same area as you, you check in at the reception desk, and you complete a paper ballot in a voting booth, unseen and unmonitored: alone with a pen, you tick a box, hide that paper by putting it in an envelope, and cast it into the ballot box. That is the interface we are used to now. The casting of a paper ballot is empowering in itself. You do something. You reveal your opinion. And no one can say or correct it otherwise, at least not during the process itself. As long as you can see your ballot, you are the only person who knows what it contains. Voting as a physical interface is also a reminder that there is a physicality, a communality to political power, that you make the effort to go somewhere with other people. Voting stations are not screens; they are places where people live, learn, play and work. They are residential buildings, primary schools, city halls and churches. There is a community effort in making a voting process go smoothly. And I think that replacing one interface (paper) with another (screen) has an interesting consequence in that it reverses the power dynamic.
There is a shift in who is the focus of the process. On the one hand, with the paper interface, the focus is the community, at several levels. First, the community of people who allow the voting to happen by setting up the polling stations. Second, the people whom you meet at the voting booth. Third, the people whose names stand alongside yours on the register. **Last**, the ballot which you cast into the ballot box, falling amongst other, identical ballots. The physical interface of voting highlights the part of the individual within a community, and sacralizes it by forcing the voter out of her daily routine to bring her to the voting booth.
Alexander Galloway defines digital interfaces as "control allegories", since what an interface really does, on a very essential level, is crystallize how a specific set of relationships is enacted at a given moment. It **defines** precisely what is going on. And yet no definition is entirely accepted by everyone -this is why we still have semioticians and lawyers. So whose definition are we following? How obvious is that definition? Can we design and develop interfaces so seamless, as one food-delivery company seems to believe so strongly, that they replicate precisely the interactions they help communicate? Can interfaces really ever innocently disappear?
Voting as a digital interface is highly individualistic. Digital interfaces do a wonderful job of making the world's complexity disappear under them -you just click on something and disregard the whole human and technical apparatus which built the object on which you clicked. Digital, screen-based interfaces gave birth to the user experience, and "user" is singular, not plural. The screen itself, today, belongs exclusively to one individual, and is rarely shared. The ritual of physical voting highlights the balance between group behavior -neighbors and friends going to and meeting at the voting station- and the moment when you end up alone in the booth. It's a reminder of the relationship between one and one's community; it's community building. Community building online is so artificial that a whole range of professions was born to deal with it. Needless to say, you tend to lose that sense of ritual, that sense of community that is essential to voting. Voting is a social act, and digital interfaces are changing our very notion of social.
Software, just like any other technology, involves an actor and an acted-upon. An observer, and an observed. Software carries the inherent possibility of being spied on. The important word here is "possibility". It doesn't matter if you're not actually spied on -that's a technical issue. It matters if you believe you are, and how easily that belief allows you to distrust the whole system -"maybe they're spying on me, maybe the whole system is rigged". And that's a cultural issue, one which was raised in the Citizens United Supreme Court dissent. What the dissenting judges argued was that even if significant financial contributions to political campaigns do not produce actual cases of corruption, nothing prevents instances in which citizens believe that such corruption happens, consequently casting their votes for candidates who claim to stand against corrupt, greedy and insular elites.
Everybody knows how to use a piece of paper, but not everybody can verify whether they're sending a request to a secure and trusted website. By implementing a computer-based interface, you're also enforcing an inequality between those who know and those who don't. Which isn't the best thing to do in a practice that is supposed to be based on the sacred equality of all members of a community. This effect, by which a switch that seems sensible in technical terms ends up changing cultural behaviors, is a typical case of the difference between what computers can do and what they should do. Between whether we can build digital interfaces, and whether we should.
Interfaces for conversation
Here is a more concrete, real-life example: the "seen" indicator on Facebook's chat. It's that little thing that shows up at the bottom of a message box once the recipient has, according to the company, "seen" the message. Following this design decision, most instant messaging platforms have implemented a version of that feedback. What was the goal of that implementation in the context of that interface? The goal was to try to improve an interface to real-time conversation. If you want to simulate face-to-face conversation, you need to simulate the constant reassurance that there is someone in front of you, that you are not talking into the void. So you want users to be aware that there is someone they are talking to, to reduce the technical complexities of global telecommunications into the faking of someone being there.
But what, concretely, was the impact of that change of interface, of a computer being used to stand in for a face-to-face conversation? The most obvious one is a change in trust. As Georg Simmel puts it, trust is a balance between ignorance and knowledge. With an interface for real-time communication, you switch the trust that someone is there from your own senses to Facebook's decision to tell you that someone is there when, sometimes, no one is actually paying attention. Facebook reveals whether someone has seen your message or not, a disclosure that was never originally part of long-distance communications -look at the outcry whenever someone automatically requires you to acknowledge your opening of their email or regular mail.
Before, you had to call or go out to have a face-to-face conversation. Now, someone is pretending that you can have a face-to-face conversation on a screen -but can we really? Since Facebook later solved that problem by providing a telephone-like service, it seems that the "seen" was simply used to hide the fact that real-time wasn't actually real-time. Did this additional interface change our behaviors as we talk to someone? Do we now avoid acknowledging that we've received each other's messages? Does that make us more social or less social? There's a definite feeling of maneuvering around that hurdle -not opening the message, not engaging with that interface of instant messaging- a feeling of defensiveness which you can't ignore, because it shouldn't exist in face-to-face conversations.
As interfaces try to become more and more seamless, we often forget to ask what happens to those interactions in which the seams are essential. Face-to-face conversation is not actually a seamless process; it is a highly complex interaction on which countless books have been written. Trying to simplify it often makes the simplification stand out. Someone doesn't just "see" a message; there's a lot more to it: you can read it, acknowledge it, inadvertently click it, dismiss it, forget it, glance at it, expect it, fear it, be surprised by it, and so on.
Interfaces through time
Interfaces do not only simplify and conceal interactions; they also define the framework in which these interactions happen. Let's take a last example: Venmo. What interface does Venmo offer an alternative to? It offers an alternative to shared human memory -to having to remember who owes what to whom. It harmonizes, it smoothes out that relationship, because it allows you to resolve it immediately, and therefore to get rid of it at the same time. I pay something for you, and you pay me back immediately. This collapses the interaction between two human memories ("I owe you" and "you owe me") into an instantaneous resolution. The prior interface of owing things to people relied more on a social practice than on a technical system -indeed, the most elaborate technique until then was writing it down on paper. It relied on the fact that we had to remember our previous interaction, our previous relationship with someone, in order to honour it. It was a social commitment.
As Marcel Mauss showed in his work on the gift and the counter-gift, societies are partly built and maintained around the concept of giving and of reciprocating that gift. By giving, whether as an offer or as a social constraint, you are re-affirming the need for a relationship to last. It also reaffirms the necessity of trust between people. Not only does the immediate resolution offered by Venmo's interface take away that need for trust, it can actually make the refusal to use Venmo seem suspicious. That interface no longer relies on the possibility or the necessity of future interactions, of people linked not only through geographic location, but also through time.
The interface offered by Venmo takes the action of "paying back" away from its greater cultural context in which cultural practices are not as neatly delineated and separated as we'd like.
So what we can see from these examples is that digital interfaces have specific affordances, which they draw directly from the properties of the code they are built in.
By definition, the only system that a user is presented with when interacting with a digital interface is a computer screen. Every system, then, is simplified into a single piece of software, into a single window, whether it is a webpage, an app, or a communication protocol. The push towards one-to-one interaction tries to abstract away and remove all superfluous elements from the relationship between the individual, the screen and the desired process embodied by that screen.
The corollary, then, is that they deliberately conceal the complexity of the previous system that they are meant to replace. Instead of explicitly bringing the individual to deal with all the cogs that make up the incredibly complex machines that are human organizations, they remove any reference to things that are not directly relevant to the task at hand. An online voting registration removes the need to know which government organs are needed to make the voting process happen. Real-time indicators on a communication platform conceal the uncertainty that someone might not be on the other side, and that this communication method might not be as good as face-to-face, or voice-to-voice.
Finally, they compartmentalize. They decide how, when and where any given interaction should and can take place. They make sure that they are not dependent on other, non-crucial interactions. Each interface tries to minimize conflicting interactions with other interfaces. Voting interfaces remove voting from the physical and social world. Social payment interfaces remove paying from the larger personal relationships that are built on a symbolic debt and its repayment. Isolated by default, interfaces need to actively seek openness in order to break that compartmentalization -"integrating with third-party services", as the phrase goes.
But if they fail at any of the above, interfaces stand out even more: they make it obvious that they are a middle layer, and not the real thing. E-voting is not the real thing. Digital social networks are not real social networks. Apps designed to streamline all economic transactions treat them as financial transactions, disregarding the impact they might have in other realms of human life.
All of these terms -conceal, compartmentalize, simplify- have very different connotations depending on the context in which they are used. In software development, they are mottos that one should closely follow in order to write the most efficient, modular, re-usable code. A specific piece of code doesn't need to know anything about its surroundings or the specificities of what it is interfacing with. Only specific endpoints are exposed, and a limited, pre-defined set of function calls is the only way to interact with it.
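In code, this ideal is called encapsulation: expose a few endpoints, conceal everything else. A minimal sketch in Python (all names here are hypothetical, chosen only for illustration):

```python
class Counter:
    """Exposes two endpoints; everything else is concealed."""

    def __init__(self):
        # The leading underscore marks this as internal state:
        # callers are not supposed to touch it directly.
        self._count = 0

    # The limited, pre-defined set of ways to interact with the object:
    def increment(self) -> None:
        self._count += 1

    def value(self) -> int:
        return self._count


c = Counter()
c.increment()
c.increment()
print(c.value())  # → 2
```

Callers never see how the tally is kept; they only know the two calls they are allowed to make. That is exactly the concealment and compartmentalization praised above -virtuous in code, more ambiguous, as the next paragraph argues, in social life.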
On the other hand, when you use these same terms -concealing, compartmentalizing, simplifying- in the context of social relations, it doesn't seem to me that they are worth striving for. From concealing comes mistrust; from compartmentalizing come communitarianism and the fear of the unknown; from simplifying comes the dumbing-down of public discourse, in which our uninformed, and increasingly disinformed, citizens are asked to make informed decisions with consequences they don't always fully realize. Still, this, if successful, could indeed sound like a very efficient, optimized social system -but one where the complexities, irregularities and misbehaviors that make up the richness of human life no longer have a place. A docile, well-oiled, seamless society might be a dream for some engineers, but it might be worth asking whether it is a dream for everyone.
So, really, our use of interfaces too often consists in finding fixes without focusing on the plan. The beauty and the problem in that realm of engineering is that everything, since Turing, is virtually achievable. This completely disregards whether or not it is desirable. Again, interfaces are a means of simplifying, of concealing, and of compartmentalizing. These are not just technical features, they are cultural practices which, in some contexts, might actually run counter to the desired effect. There is *"how do we fix it?"* and there is *"why do we fix it?"*.
When proposing and building interfaces, then, we should be aware that some of those simplifications have a proactive, creative effect on our human relationships and social connections. And maybe we shouldn't always try to optimize those relationships, because that implies that there might be a point at which nothing will be cumbersome or complicated anymore.