Decoding America’s Operating System
Author and technology journalist Clive Thompson on the relationship between code, politics, government, and power
The United States is in existential crisis. What it stands for and who gets to call themselves “American” seem up for grabs. Crackdowns on immigration and refugee resettlement, increases in xenophobia and hate crimes, and assaults on institutions and the rule of law have left both sides of an increasingly divided nation reeling. But unlike the early 19th century, arguably the last time the country was so fractured, the public square isn’t newspapers or places where people physically gather—it’s Twitter and Facebook. And unlike 200 years ago, conversations about what America is and who represents it are vulnerable to influence and interference from agents (hackers, trolls, and bots) of foreign governments.
This overlap between the political and digital makes civics and technological literacy more important than ever. “When you look at the careers of the Founding Fathers, the majority were trained lawyers ... and those who weren’t ... were nonetheless masterful legalists,” writes Clive Thompson in his book Coders: The Making of a New Tribe and the Remaking of the World. “They were the ones who wrote the rule sets that made America America. They wrote the operating system of its democracy. And the tiniest of their design decisions have had massive, long-standing effects on how the republic evolves.”
We’re not used to talking about government in the language of technology: “rule set,” “operating system,” and “design decision” feel more appropriate to a discussion of TikTok than of the Bill of Rights. But as the 2016 election showed—with Facebook disseminating political ads, news, and disinformation generated by Russian assets—Americans need to rethink the role of code and tech in their civic lives.
It’s a narrative thread Thompson tugs throughout Coders, which is out in paperback on March 24. The Elective spoke with the author and Wired journalist about how social media has fundamentally altered the political landscape, the impact code has had on democracy, and why the road ahead could be more dangerous than we think.
Penguin Press (cover) / Liz Maney (Thompson)
Early in the book you write that the Founding Fathers “wrote the operating system of [America’s] democracy.” Before you started Coders, had you thought of viewing the Constitution through that lens?
A little bit. I was familiar enough with coding to know that it’s really just writing rules for computers to follow. All this mystery around an algorithm is not so complicated. It’s just a list of rules the computer has to follow. I studied political science when I was in university, and I’ve been a lifelong consumer of political news. I follow it as a layperson, and in fact I became an American citizen last fall, so I spent some time reading up on the U.S. Constitution. And I think in a weird way I might have been influenced by seeing the musical Hamilton and becoming swept up in it. What you really see there is [the Founding Fathers] talking explicitly about, “We are going to write rules down that will govern the way this country works. And we’ve got to get it right.”

I think I’d also been influenced a little bit by Larry Lessig, who wrote a book about 20 years ago, Code and Other Laws of Cyberspace, in which he very neatly argued that code is law: We have laws that govern what we do in our daily lives, and we have these new laws that are kind of invisible, that govern our everyday behavior, which is software. I was already a technology journalist back then, and I’d already been writing a lot of code, and it was such a productive framing that it just kept me thinking more and more about it.

But it really came to a head with the 2016 election. There was a real burst into the mainstream of people thinking about the ways these social networks shaped the American conversation around politics. I had already begun work on my book, and the fact that that became such a mainstream concern made me double down on wanting to tease out the civic implications of code.
You interpret Silicon Valley’s contempt for anything that fails to become massive as “smallness seems like weakness.” I can imagine Trump tweeting that at something or someone.
Silicon Valley is very swept up in this idea of scale and size and power. That’s economic, but it’s inherently political, too, particularly when you consider that the companies we’re talking about, which have become so big so rapidly, tend to want to become massive in spheres that really affect the fabric of everyday life: news, our everyday conversation, the economy, how we get jobs. Facebook, Twitter, YouTube—they want to be huge. And by being huge, they really want to exert an enormous amount of influence over what we see and what we talk about. Companies like Uber, Lyft, Amazon, or even the new slate of firms scrapping to be the go-to platforms for on-demand labor, very explicitly want to have an enormous say over how the economy works. So yeah, this stuff is about code. But it’s also inherently about power.
The thing that became clear, as I wrote the book, is that when it comes to the big social network companies, part of the reason they’re having trouble managing what’s going on is that they’ve become so large they’ve slipped the bounds of any serious human control. Everything that gets seen on those sites, everything that’s filtered out, is done by algorithms whose behavior even their creators can’t fully account for. [Facebook is] a company that affects everything from race riots abroad to the intervention of foreign powers in discourse in the U.S. and other countries. That scale is so big, and that’s what they wanted. But with size comes weird problems. There’s a famous phrase in technology: more isn’t just more, more is different. Above a certain size, you have a whole new problem.
That’s also very similar to what we often talk about in politics: the difficulty of transferring ideas that work really well in small-scale groupings into large-scale ones. When you look at the political spectrum right now in the U.S., anywhere you’re seeing real progress or agreement across ideological divides, it’s at the local level. It’s all cities, maybe states. There’s something kind of broken about this on the federal scale. We’ve lost the ability to manage these competing interests at that scale. And that’s a weird parallel to the challenge these large social networks face in even understanding what their problems are. Over and over again, when I would look at the challenges technology has created, they resemble things I’ve read about in the world of politics, from Jane Jacobs to the writers of the Constitution and the Federalist Papers to Plato and Socrates.
This stuff is about code. But it’s also inherently about power.
Facebook likes to tout the stat that its user base is larger than any nation’s population, that if it were an actual country it would be the largest in the world. And yet all these people are coming into this environment, and many are nationalists and many believe in really hard borders. There are racists and advocates of genocide. So you have a company that prides itself on this ...
…idea of community. The whole idea at the beginning was that [Mark Zuckerberg] was going to be creating community. And the point you’re making here is that it’s inviting a bunch of people who care nothing about community.
Or care so much about a specific kind of community that they’re trying to recreate it in the digital space, which fundamentally sets them at odds with this platform that says it wants to create community but is unable to manage that many people. How do you say, if you are Facebook, “Like these baby pics, but don’t advocate genocide”?
Exactly. When you talk to technologists, people who have been involved in building social software, ranging from little forums to early blogging and commenting software to social networks, they’ll all tell you about these early phases in a new communications tool when it seems like it’s functioning pretty well. People are doing something new and cool; they’re tweeting or they’re social networking across different interest groups or bringing in people from Texas and Russia and Indonesia and Canada. And everything works fine until a certain scale is reached and things start to fall apart. I’ve come to think of this as an iron law of social networking. Some of that may simply be that we need better design, in the same way that the framers of the Constitution were thinking about how to manage competing interests and how to set up productive conflict—these three equal branches of government that would all be a check on each other. They were trying to set up conflict and friction.
I would argue that some of the problems we’ve gotten ourselves into with modern social software arose because the pioneers who created it really weren’t thinking as carefully as the framers of the Constitution were. They had one or two interesting ideas that uncorked a lot of conversation in new ways, but they totally failed to think in any serious way [about what comes from that]. Whereas when you read the Federalist Papers and all the letters and the arguments they were having back then, they weren’t necessarily right all the time. In fact, they got certain things catastrophically wrong, and we’re now stuck with some really bad design decisions. But put that aside. You can see them struggling to talk and argue and think about the implications of these rules in a way that I don’t think [social network creators] are doing at all.
Architect of the Capitol
Howard Chandler Christy's "Scene at the Signing of the Constitution of the United States" depicts, in the middle and from the left, Alexander Hamilton, Benjamin Franklin, and James Madison. The painting hangs in the U.S. Capitol.
Another interesting overlap in the comparison between coding and the Constitution is the growing conversation about algorithmic bias.
Yeah, in artificial intelligence, absolutely. When they train these systems on really dumb, racist data, the systems make really dumb, racist decisions.
Right. And there’s a sense that there’s an analogue in the Constitution, that racism—among other biases—is algorithmically woven into it.
Absolutely. The Constitution was written by a bunch of guys with land. They were a mix of people, but they were a lot more elitist than most people today would imagine them to be. So their decisions were very much designed to reward guys in their station. And we’re stuck with the implications of that. It was also designed to manage the competing problems of a bunch of states that owned slaves. Entire economies were based on slavery, so that’s written right into the Constitution. They had to come up with rules slave owners would abide by. So, of course, you’re going to get rule sets that have problematic knock-on effects in the modern world. The one that is probably most common in the conversation right now is the Electoral College. It gives sparsely populated rural areas far more power than densely populated urban areas. And that’s a huge problem for a country that is still urbanizing.
It was also a Constitutional patch for those Southern states that didn’t want slaves counted as people but still wanted full representation.
Exactly. The percentage-humanity of [enslaved people] is written right into the rule set. They were wrestling with stuff and trying to perfect things, but they had their own blinders on. We see these problems over and over again in coding. When code was written line by line—if this, then do that—the coders’ bad decisions would redound upon the user. I think about this. I’ll write a little piece of code to use in my work, but if someone else wants to use it, they’ll discover that it doesn’t work very well for them, because I wrote it for me. I’m a hobbyist writing it just to solve my problem, and I’ve written it so specifically to my needs that it doesn’t even remotely work when anyone else tries it. “Why did you make it this way?!” It makes sense for me. But it’s very easy when writing code to use yourself as the most logical test case. That’s a problem.
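For readers who don’t code, here is a hypothetical sketch of the kind of hobbyist script Thompson describes. The path, username, and date format are invented for illustration, not drawn from his actual code; the point is what happens when the author’s own setup is the only test case.

```python
# A hypothetical example (invented names, paths, and habits): a script
# that runs fine for its author and breaks for everyone else.
from pathlib import Path
from datetime import datetime

def load_notes():
    # Assumes the author's machine: macOS, this username, this folder layout.
    text = Path("/Users/clive/Documents/notes.txt").read_text()
    return text.splitlines()

def parse_entry_date(line: str) -> datetime:
    # Assumes the author's habit of starting lines with US-style dates,
    # e.g. "03/24/2020 book idea". A user who writes 24/03/2020 gets a
    # crash or, worse, a silently wrong date.
    return datetime.strptime(line.split()[0], "%m/%d/%Y")

# A version written with other users in mind would ask instead of assume:
# e.g. Path.home() / "Documents" / "notes.txt", or a configurable date format.
```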
AI is creating new and gnarly and complicated forms of bias. With a modern deep-learning neural net, you’re not sitting there as a coder writing a bunch of if-then statements. You’re creating a little mathematical model and training it. If you want it to recognize dogs, you show it pictures of dogs. In the beginning, it’s just guessing blindly. If it gets one wrong, you feed that back into the model, saying, “No, that was wrong,” and it adjusts its many, many, many mathematical weights; if it gets one right, you feed that back in, too, and the weights adjust again. Show it a million dogs, correcting it every time, and it gets really good. It’s seen a million dogs, and now there’s a 99.99% chance it will recognize the next one. Now the problem becomes: What sort of pictures did you show it? What sort of data did you feed it and train it on?
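That feedback loop is easy to see in miniature. The sketch below uses a toy logistic classifier on synthetic numbers (a stand-in for the real image-recognition networks Thompson describes, not his example); it makes the same moves he outlines: guess, get corrected, nudge the weights, repeat.

```python
# A toy version of the training loop described above: the model guesses,
# is told whether it was right, and adjusts its weights. Synthetic numeric
# features stand in for real dog photos.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each "photo" has been reduced to 5 numeric features:
# dog images cluster around one point, non-dog images around another.
dogs = rng.normal(loc=1.0, scale=1.0, size=(1000, 5))
not_dogs = rng.normal(loc=-1.0, scale=1.0, size=(1000, 5))
X = np.vstack([dogs, not_dogs])
y = np.array([1] * 1000 + [0] * 1000)  # 1 = dog, 0 = not dog

weights = np.zeros(5)  # the "many, many mathematical weights"
bias = 0.0
lr = 0.1               # how hard each correction nudges the weights

for _ in range(20):                        # many passes over the data
    for i in rng.permutation(len(X)):
        guess = 1 / (1 + np.exp(-(X[i] @ weights + bias)))
        error = y[i] - guess               # "no, that was wrong" (or "right")
        weights += lr * error * X[i]       # feed the correction back in
        bias += lr * error

preds = 1 / (1 + np.exp(-(X @ weights + bias))) > 0.5
print(f"accuracy after training: {(preds == y).mean():.3f}")  # ~0.99
```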
Henry Gan, an engineer at Giphy, wanted to use deep learning to help auto-recognize faces in GIFs. A huge number of Giphy’s users are big fans of Korean pop, K-pop, so he got free open-source software that had been trained on millions of faces. He tried using it, and it was catastrophically bad at recognizing Asian faces, because it had been trained mostly on white faces in a British data set. The reverse happens in China, where there are models trained almost entirely on Asian faces that can’t recognize white faces at all. So suddenly all these biases are no longer about individual lines of code you’ve written, but about how you train the system and what you feed into it. The problems are getting very weird.
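The mechanism behind those failures can be shown with the same toy setup. In the sketch below, groups A and B are abstract stand-ins invented for illustration (nothing here models real faces): a classifier trained on a sample that is 95% group A ends up accurate for A and close to chance for B.

```python
# A toy demonstration of training-data bias: one classifier, trained on a
# lopsided sample, works well for the overrepresented group and badly for
# the underrepresented one. Groups A and B are abstract stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, signal_dims):
    """Synthetic 'face features': the two classes differ only on signal_dims."""
    X = rng.normal(size=(n, 6))
    y = rng.integers(0, 2, size=n)
    X[:, signal_dims] += np.where(y == 1, 1.5, -1.5)[:, None]
    return X, y

# Group A's classes differ on dims (0, 1); group B's on dims (4, 5).
Xa, ya = make_group(950, [0, 1])   # heavily overrepresented in training
Xb, yb = make_group(50, [4, 5])    # barely present
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

w, b = np.zeros(6), 0.0
for _ in range(200):               # ordinary logistic-regression training
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.01 * X.T @ (p - y) / len(y)
    b -= 0.01 * (p - y).mean()

def accuracy(Xt, yt):
    return ((1 / (1 + np.exp(-(Xt @ w + b))) > 0.5) == yt).mean()

Xa_test, ya_test = make_group(5000, [0, 1])
Xb_test, yb_test = make_group(5000, [4, 5])
print(f"group A accuracy: {accuracy(Xa_test, ya_test):.2f}")  # high
print(f"group B accuracy: {accuracy(Xb_test, yb_test):.2f}")  # near chance
```

The classifier never gets enough group B examples for B's distinguishing features to pull much weight, so it effectively guesses on B, which is the skewed-data-set problem Gan ran into in a stripped-down form.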
If you are Facebook, how do you say, “Like these baby pics, but don’t advocate genocide”?
You write that the civic role of code is only going to grow. In what ways do you see that happening?
There are three obvious ones that come to mind. One is that we’re conducting more and more of our civic conversation online. Take a look at how political candidates communicate now, shifting further away from traditional media and TV and more into the online world. That means all those algorithms of the major social networks that determine what you’re likely to see are becoming increasingly important. And anyone who makes a new tool that lets you communicate in a new way can have an impact. So, with more and more of our important conversations moving online, we need to be super vigilant about the impact of the big social networks.
The second thing is economic. More and more activity is being governed by software, ranging from shopping to the way we find work to the type of work we do to the way that work is automated or not. And thus, unfortunately, given the way the venture capital markets work, it’s being governed by a small number of very large companies. This is why you’re seeing antitrust conversations about breaking up some of these companies, like Google or Facebook.
And the third thing is the increasing use of AI, highly trained neural networks, in all sorts of devices. This is a very big civic issue. The easiest place to see it is the rise of face recognition. It’s getting tucked into everything: doorbells, the way you unlock your phone. Local police departments are using cameras and trying to build databases of all the faces in their towns. Companies are using it to read your emotions and expressions during job interviews. A lot of important decisions about your life are being made not by humans but by these highly trained systems. And it’s very much in our interest to figure out whether they are being trained well, or whether they can ever be trained well enough for us to really trust them. There’s definitely a pushback right now. I’m very happy about that—towns are banning face recognition and states are looking at banning it. I strongly believe that it is ... I was going to say “an extraordinarily dangerous technology,” but what I really mean is that it is incompatible with certain types of human freedom. It’s less that it’s dangerous than that what it’s really good at drifts toward authoritarian command and control, which may be fine in China, but it’s not at all compatible with the society we here in America have long tried to build for ourselves.
David Becker/Getty Images
Attendees at CES 2020 have their images captured with CyberLink's facial recognition software.
In the book you quote a blog post from crypto hacker Moxie Marlinspike arguing that it’s only by breaking the law that citizens can discover some laws are nonsensical. Do you think that works in the opposite direction? That technology, code, all this digital life is showing that, no, here’s why these laws matter?
Yeah. Code interacts with our self-governance in a bunch of ways, some of which are contradictory, or pull in different directions. Sometimes the code is there to reify the way society works, to strengthen the power of the powerful. Sometimes it’s there to do the exact opposite, to make it harder for centralized powers to do what they do. The big example of that I talk about in my book is encryption code, which is designed to help you communicate privately. But whatever helps you communicate privately also helps criminals and terrorists communicate privately. There’s no simple way code interacts with our legal norms and desires. Sometimes it tears them apart, and sometimes it lets the powers that be strengthen things dramatically. I could take almost any type of invention or concept people have made in software, and you can see: OK, there are things I think are going to be great out of this and things I find terrifying. Sometimes they’re woven together. To take a trivial example: There’s a Like button on Facebook because they wanted to make it easier for people to be nice to one another, to show, “Hey, that was great. I like what you did there.” And it worked, right? It uncorked trillions of likes and tons of positive affirmation. But it also made us into neurotic freaks about how many likes we’re getting. That’s a small example, but it’s pretty hard to look at any individual piece of technology and say it is inherently good or inherently bad. It’s very contextual.
In that way, it’s sort of indicative of what makes America America: the First Amendment protects all this good speech but also the bad speech.
Exactly. And hackers think a lot about this, because code is a form of speech that you speak to a computer. So they tend to be very, very strong free-speech absolutists. I think that absolutism is baked into the original naivete of the social networks, where they were like, “Well, we’ll just let everyone say whatever they want. What harm could possibly come?” They were coming at it from the naivete of being young white dudes. They were coming at it from the general predisposition Americans have for unfettered speech. But they were also coming at it in that subconscious way coders have, with the idea that just letting it all happen is a good thing. It’s quite a culture.
This conversation has been edited for length and clarity.