A stock photo/digital illustration of the USA stars and stripes flag made out of binary computer code.

Interview

Patching the Technology of Democracy

In their book System Error, three Stanford professors representing the fields of ethics, computer science, and political science diagnose the nation’s digital ailments—and share prescriptions for healing them

An ethicist, computer scientist, and political scientist walk into a classroom—sounds like the start of some terrible teacher’s lounge joke. But in the case of Stanford University professors Rob Reich, Mehran Sahami, and Jeremy M. Weinstein, what they’re here for is no laughing matter. What’s at stake in their lecture hall is the state of technology, the health of our society, and maybe even the future of democracy.

Chronicling the myriad ways Big Tech has facilitated civilization’s collapse makes for clicky headlines, but wade into that content stream and two things become readily apparent. It’s always the same story, repackaged and retitled to match whatever company is behaving badly. And it’s incredibly shallow, focused on what will enrage without pushing the conversation forward. Larger scandals and revelations, like those offered by Facebook whistleblower Frances Haugen, can engender some constructive reaction, but even that often gets buried by the next scandal.

Reich, Sahami, and Weinstein aren’t so quick to move on. And in their new book System Error: Where Big Tech Went Wrong and How We Can Reboot, they confront head-on the questions that have no autofill answers: What can be done to rein in the worst impulses of the most consequential industry of the 21st century? What are the implications for American democracy of an unchecked tech sector? What role do everyday citizens—along with tech execs, coders, and computer scientists—have in making meaningful change? And how can education shape the future of the industry and its place in our society?

Creating an environment of open, honest conversation and idea sharing around these queries is the source code of System Error, a compelling and energizing book that grew out of an ethics and politics of technological change class the three have jointly taught since 2016. And they’re uniquely qualified for the task. Reich, a political science professor and director of Stanford’s McCoy Family Center for Ethics in Society, handles the ethical side of Big Tech’s impact and future. Sahami, a computer science professor and a Google veteran who was with the company in its start-up days and is one of the inventors of email spam-filtering technology, speaks the students’ technical language. And Weinstein, a political science professor and veteran of the Obama administration who served as director for development and democracy on the National Security Council staff and later as deputy to the U.S. ambassador to the United Nations, tackles the policy implications.

“One of the great things about teaching together, as I think Mehran will agree, is having these conversations across fields,” Weinstein recently told The Elective. That, he adds, is crucial to the goal of the class and the book: creating a “common language” that allows everyone to regain some agency in deciding what kind of role technology has in our everyday lives and our democracy.

Weinstein and Sahami spoke with The Elective about what’s at stake if we opt out of active engagement with the issues Big Tech poses, the importance of working as a community to find solutions, and the ways democracy is itself a technology.

Headshot of Mehran Sahami on the left and Jeremy M. Weinstein in the middle with the cover of the book System Error on the right

Stanford (Sahami), Christine Baker (Weinstein), HarperCollins (cover)

The book feels very timely and forward looking, but the tech world moves really fast. How did current events impact writing the book?

Jeremy M. Weinstein: The book has its origins in a multiyear effort by the three of us to develop a common language for engaging students about issues at the intersection of the technical code that underwrites the technological moment we’re in, the normative values that are at stake, and the policy considerations that need to be weighed. We wanted that common language to be accessible to those who build technology, but also to those who have no idea how technology is built. The lack of agency people feel over our technological future has put us in a position where only those who understand the technology are given undue power to make decisions that need to be made democratically.

In 2016, we began trying to find a way to mount a cultural intervention on campus to address what we felt was a tech utopianism that didn't adequately account for the incredible power being concentrated in technology companies to referee value trade-offs and the social or societal costs of technology. We went through an exercise for three years of developing this common language by engaging 300 students a year in grappling with these questions and engaging professional technologists in an evening version of this course. We decided maybe three months before the pandemic that we needed to bring this material to a much broader audience because, ultimately, the refereeing of value trade-offs is a collective enterprise, not an expert or elite one. If we're going to have a political system that reflects people's preferences about how to value privacy and national security, and how to think about the value we place on free speech versus the health of our information ecosystem, lots of people need to be involved in that conversation. And to be in that conversation, they need to understand why we are where we are, what's at stake, and what kinds of solutions are on the table. So that's what led us to write System Error.

Mehran Sahami: Jeremy noted that part of the point of the book is an understanding of what technology is capable of doing and what it’s not; how oftentimes, in building out that technology, we have to make explicit choices about what to optimize. Those choices are made by a small group of people who have the expertise in that technology. Those choices are influenced by the executives at companies who want to see certain things, like revenue generation, time spent on the platform, click-through rates. Those are explicit choices made within the company, the value trade-offs Jeremy talked about.
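
As a purely illustrative aside (not drawn from the book or from any real company’s systems), the “explicit choice about what to optimize” Sahami describes can be sketched in a few lines of Python. Every class, metric name, and weight below is a hypothetical assumption made up for this example.

```python
# Hypothetical sketch: the "explicit choice" is which objective a team ranks content by.
# All metric names and weights are illustrative assumptions, not any real platform's code.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_click_rate: float     # estimated chance the user clicks (0 to 1)
    predicted_watch_minutes: float  # estimated time the user will spend
    predicted_ad_revenue: float     # estimated ad revenue in dollars

def engagement_score(post: Post) -> float:
    # One possible objective: reward time on platform and clicks.
    return 0.7 * post.predicted_watch_minutes + 0.3 * post.predicted_click_rate

def revenue_score(post: Post) -> float:
    # A different objective: reward expected ad revenue directly.
    return post.predicted_ad_revenue

def rank_feed(posts: list[Post], objective) -> list[Post]:
    # The feed users see depends entirely on which objective function is passed in.
    return sorted(posts, key=objective, reverse=True)
```

Swapping `engagement_score` for `revenue_score`, or for any other function, is the kind of value-laden choice being described here; nothing in the code itself says which objective is the right one.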

I think we’ve seen a very timely example of this, since the book was published, when Frances Haugen, the Facebook whistleblower, came forward and said, “Look, here’s a bunch of internal documents that talk about internal research that we did.” That research was surfaced to certain executives, and she makes the point very clearly that they then have to go through a decision-making process to decide what they want to optimize and how they want to respond to those findings. And her ultimate point in coming forward was to say that the choices being made inside these companies are not the choices we would want to see societally, and as a result regulation is necessary.

That provides a great instantiation of the themes in the book. Those are exactly the things we lay out: a bunch of things are being optimized, they’re driven by the pursuit of profit, and certain measures are optimizing for things that are probably not what we want societally. We see the impact of that play out in terms of political polarization, people’s mental health, and decision-making processes that are opaque to them. At the end of the day, it’s regulation that’s necessary to put guardrails on these systems.

Were you worried about future-proofing the book, making sure it didn’t have a half-life of a few news cycles?

JMW: We try to offer people a diagnosis of problems that’s independent of any particular company. It’s not a takedown of Facebook. Rather, we’ve got an ecosystem that’s generating technology with enormous benefits and evident social harms because of its component parts and the way it aligns people’s incentives. Any frontier technologies, the ones we can’t yet anticipate and that are not in the book, are going to involve some sort of value trade-off, and we need to be aware of those, make them explicit, and make judgments about where they should be refereed. Our strong view is that, by and large, they shouldn’t be refereed primarily inside companies.

MS: I think the book is future-proofed in the sense that, until we get to a place as a society where we are happy with where Big Tech is and where it’s headed, the themes in the book will continue to ring true. And to be honest, I hope we actually get to a place where the book becomes obsolete, because that would mean we got the outcomes we want. But until that point, I think it’s still a good read.

You raise this idea in the book of democracy being a kind of technology. Can you expand on that concept? And given everything you cover in the book and what we’re experiencing in the world, is democracy an obsolete technology?

JMW: We talked about democracy as a technology for solving a particular problem, which is refereeing disagreements among people about the social outcomes that we want to achieve. Anytime you live in a collective as opposed to living individually, people have different preferences about what they want, where they want roads built, what they want to be taught in school, how much they want to invest in national defense, how much they want social media platforms that are a home to toxic hate and misinformation. These are things on which people have preferences. The challenge of technologists’ optimization mindset is it suggests that we can choose an outcome that we want to optimize for. We can ignore what’s traded off, we can build the most efficient system for producing it, and all will be well. But the problem is you can’t optimize for everything at the same time. So you’re implicitly and then explicitly making choices and imposing how you referee those value trade-offs in the design of technology.

Why do we call democracy a technology? Well, democracy is a process. Democracy isn't designed to optimize for the best outcomes. It's designed to referee value trade-offs and disagreements among people in a political community about what they want. It has some extraordinary advantages. It has notions of equal participation and voice. It has processes for aggregating people's preferences. It has accountability of the political leadership to citizens. So if we're dissatisfied with the social impacts of technology, it's not just Mark Zuckerberg who's to blame. It's also our politicians who have created a 25-year regulatory oasis around technology. So that's what democracy is a technology for.

When you ask a question like, “Is that obsolete?,” part of what we do in Chapter 3 is make sure it’s 100% clear that you can’t answer that question without reference to the alternatives, without reference to the counterfactual. So, obsolete as compared to what? The pathway we’re on now is one in which we, as citizens, and our elected leaders have abdicated responsibility for governing tech. We’ve left the refereeing of value trade-offs to those who inhabit companies, those who build technologies, those who invest in companies, those who are CEOs. I think what’s 100% clear now is how that abdication is producing social outcomes, and externalities, that large numbers of people find unacceptable. That’s one of the principal justifications for government intervention in the market, right? Think about pollution as the paradigmatic example. A company can do extraordinary things to build products, but if it’s polluting our water or our air, the government gets involved to ensure incentives are aligned so the private company can keep producing what it produces while mitigating the harms it causes to the quality of our water and air. We’re at that moment with Big Tech. It requires the refereeing of value trade-offs, and that requires a process of democratic deliberation. That is a technology. It’s a deeply imperfect technology. But it’s clear that the alternatives have their own limits.

Woman speaking at desk with a television behind her showing her speaking

Alex Wong/Getty Images

Former Facebook employee Frances Haugen testifies during a hearing before the Communications and Technology Subcommittee of the House Energy and Commerce Committee on December 1, 2021, on Capitol Hill in Washington, D.C.

Is it possible in this environment, where it can seem like some people value the tech more than the democracy, to claw back that abdication you mention?

MS: Currently, in the tech sector, the choice you’re given as an individual is basically: do you use a piece of technology, or do you not? Do you use Facebook, or do you delete Facebook? It’s the same way with all the apps, and it’s this notion that this is your freedom and your choice to use what you want. But given that choice, the tech companies, with their terms of service, can basically do whatever they want, assuming you agree to it when you choose to use their app. There are a few things you can do: you can try to adjust your privacy settings in an application, or you can choose which web browser to use based on what information it tracks about you. But, again, those are choices about which apps you’re using and not using.

It’s kind of like someone telling you that, with cars, your choice is to either drive or not drive, but we’re not going to put any regulation or systems around that. There are no roads, there are no traffic lanes, there’s nothing like that—if you choose to drive, be safe; otherwise don’t drive. And you can see how ridiculous it would be to have cars without any kind of system around them. That’s the same sort of situation we’re in with tech right now. There are lots of ways we could introduce the equivalent of lanes and stoplights and speed bumps into the Big Tech system. It’s things like sensible privacy regulation. It’s bringing levels of transparency and auditability to the kinds of algorithms being used, so people get a better sense of how decisions are being made for them, how information is being pushed at them, and what kind of agency they have over that. That requires regulation.

The idea isn't that regulation is some evil term; the real point is that if we build a system around technology, we create a system of safety in line with people's values for everyone. And then, within that system, you still have a choice as to what apps you want to use, what web browser you want to use. It's not taking away choice but creating guardrails so that things don't completely go off the deep end.

That leads to this discussion in the book about expertise, which feels related to this abdication of responsibility you talk about. Often this takes the form of, “Well, these tech people are the experts, so we’re going to listen to them. It’s not on us to figure it out.” At the same time, there’s growing skepticism of experts among everyday people. How do citizens navigate—how does democracy navigate—this contradiction?

JMW: You’re tapping into some really challenging aspects of the American political moment that we’re in. I’d say two things. The first is that, on the critical issues of the day with respect to how to regulate big technology, the most important decisions we need to make are decisions about competing values. In the case of social media platforms, we talk about the tension between free speech, the protection of individual dignity that’s threatened by toxic and harmful speech, and the health of our information ecosystem that’s critical to democracy. You don’t need to be a technologist to weigh those value trade-offs. Where technology is relevant is that it helps us understand what’s causing us to feel those trade-offs in the way we are. That is, what decisions are being made in the design of technology that put weight on one value at the expense of others? And what can technology plausibly do to help us achieve a better balance of those values, if that’s what our social goal is? In this moment of considering how to regulate Big Tech, in various ways and for lots of different and distinct problems, everyone needs to be involved in the conversation around value trade-offs, and right now too few people are.

The second thing is that we need to make a set of changes around how we structure our public institutions to make them far more open to engaging scientific expertise and knowledge. That way, those who have the delegated responsibility to regulate Big Tech are not dependent primarily on the scientific expertise that's offered up to them by the companies themselves. That requires the transformation we talk about in Chapter 8, around our orientation toward the federal civil service, our orientation toward who staffs elected officials, and how we provide scientific expertise in the executive branch. We have made choices over the last 25 years to put our elected officials at a scientific and technical disadvantage. We've broken down the sources of scientific expertise in the effort to scale back government. We are accountable for having a government that isn't up to the game, and that's something that we can change.

Group of people smiling and taking a selfie

Drew Angerer/Getty Images

Apple CEO Tim Cook greets customers and takes photos with them as they enter Apple's flagship 5th Avenue store to purchase the new iPhone 11 on September 20, 2019, in New York City.

MS: I will just add that it’s important to keep in mind there’s a difference between technological and business expertise on the one hand and what’s right for society on the other. Those things definitely come into conflict when you consider that there’s a profit motive in the companies and among the executives who are making these kinds of decisions. And when there’s a profit motive, there’s a different set of interests at play than the interests of society writ large, where individuals are thinking about what’s best for them and aren’t the ones getting revenue when someone clicks on an ad. So there is a mismatch between those different types of expertise.

I can give you a simple example. Consider YouTube. The vast majority of parents who see their kids grow old enough to watch videos on YouTube will likely say they’re spending way too much time on the platform, watching videos and engaging with content that becomes hard to monitor. And that leads to things like kids making a career choice to be a YouTube star. But if you look at what YouTube is trying to optimize on its platform, it’s exactly the amount of time people spend watching videos, because it has a business interest in that: the more videos you watch, the more ads you watch, and the more revenue is generated for the platform. So to abdicate our responsibility and say, “Well, they’re the experts in online video, they know what’s really good for us,” ignores the decisions they’re making as a for-profit business.

And one final thing, if I can channel our third author and partner, Rob Reich. One of the things he says in talking about Mark Zuckerberg specifically is that Mark Zuckerberg basically controls the speech environment for 3 billion people. We give him credit as an expert who built technology and a platform to connect people. But what makes him an expert in what free speech rights should be and who should have them? And even if he were an expert on free speech rights, we don’t currently, in the United States, give all the decisions about who has free speech and who doesn’t to one person, either. So that’s the importance of maintaining this distinction between that expertise and what we want the collective decision-making to be with respect to value trade-offs and what that means for our society writ large.

Young people are very active in this space, especially when it comes to the issues you raise in the book. But there is also this massive incentive structure, as you write about, to get students out of education, out of the classroom, and into start-ups and the venture capital world. What needs to happen in the education space to allow for more engagement in these conversations? I’m thinking especially about the high school level, where computer science classes are spreading and there’s this moment of deciding what computer science in the classroom should be.

MS: That’s a great question. And I think the first step is understanding the unintended, and sometimes intended, consequences of technology, so that at the same time students are learning about all the power of the tools they get when they learn programming, and the things they can do with the technology itself, they’re also learning about the harms. One of the powers technology provides is the opportunity to scale something to billions of people. It’s a tremendous lever for having a huge impact on the world. But you need to understand that if you’re going to have a huge impact on the world, there are lots of different people who will be impacted in different ways. And some of those people will be impacted negatively. When you try to connect lots of people, you see not only the greatness of humanity but also the ugly side. And if you take the view that everything is just good and that your technology will only work as you intended, without a critical examination of the ways it can go wrong, then you get a very skewed sense of what a technological education actually is.

One of the projects we’ve been working on, in partnership with a few other universities like Harvard and MIT, is a project called Embedded EthiCS. What we do is, when teaching people about various technologies, we talk about ethical dilemmas, places where there are unintended consequences, and value trade-offs, so that the good and the bad become very clear as part of the educational process. So people can come out more informed, and when they go build technology themselves they’re more aware of the consequences in a broader context.

How do you see the relationship between computer science education and civics education? A lot of classes can be siloed: you take computer science, you take civics, you take history. Do the walls need to be kicked in so these things all bleed into each other?

JMW: We need to stamp on those walls and break them down. That’s what we’ve been doing in the context of this class. Obviously, there are core ingredients in history that people need to understand, and there are core ingredients that put you in a position to code. But it’s the siloing of these issues that’s getting us into trouble. One thing that has struck me in teaching computer science majors is how little reading they do in their trajectory, how little writing they do, but most importantly, how hard it is for them to grapple with questions that don’t have a right or wrong answer. If you’re a social scientist or a philosopher, you’re constantly dealing with questions that don’t have a right or wrong answer. You’re developing the skills of critical thinking and deliberative inquiry that enable you to develop arguments, to test those arguments against other people, and to figure out when you’re making stronger arguments and when you’re making weaker ones. These are often questions for which there’s no optimal solution, no single right answer.

High School Teacher Talking To Pupils Using Digital Devices In Technology Class

monkeybusinessimages/Getty Images

The classroom is one of the most important places to alter our relationship with technology. The more students understand that computer science, civics, and ethical responsibility are intertwined, the better equipped they'll be to change the Big Tech paradigm.

What’s been great about teaching together with Mehran and Rob is that, within a single class, we can help our students understand something like the Facebook News Feed and how different ways of tuning a recommendation system can generate different outcomes in terms of revenue, but also different outcomes in terms of how polarized a social network is. Once you understand that that technology exists, you can frame the questions: “What should we be optimizing for in this news feed? And how do we think about those trade-offs? And who should make the decision?” They have no uniquely right answer, and it's challenging for engineers to engage those questions. But if they had never engaged them before they got into a company, they might never ask, “Are we optimizing for the right thing? And are we even the right people to decide what we should be optimizing for?”
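To make the News Feed example concrete, here is a minimal, hypothetical sketch of what “tuning a recommendation system” can mean. It is not Facebook’s actual system or anything from the book; the `Item` class, the `viewpoint_novelty` signal, the `diversity_weight` parameter, and the sample numbers are all illustrative assumptions.

```python
# Hypothetical sketch of tuning a recommendation system: one weight, two very
# different feeds. Names and values are illustrative, not any real platform's code.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # how likely the user is to click or react (0 to 1)
    viewpoint_novelty: float     # how unlike the user's usual sources this one is (0 to 1)

def score(item: Item, diversity_weight: float) -> float:
    # diversity_weight = 0.0 optimizes purely for engagement (and, plausibly, revenue);
    # larger values trade some engagement for a less insular feed.
    return (1 - diversity_weight) * item.predicted_engagement + diversity_weight * item.viewpoint_novelty

def build_feed(items: list[Item], diversity_weight: float) -> list[Item]:
    return sorted(items, key=lambda it: score(it, diversity_weight), reverse=True)

items = [
    Item("Outrage post from a familiar source", predicted_engagement=0.9, viewpoint_novelty=0.1),
    Item("Measured article from an unfamiliar outlet", predicted_engagement=0.4, viewpoint_novelty=0.9),
]

print([it.title for it in build_feed(items, diversity_weight=0.0)])  # engagement-first ordering
print([it.title for it in build_feed(items, diversity_weight=0.6)])  # diversity-weighted ordering
```

In this toy example, the same two posts arrive in opposite orders depending on a single number; choosing that number is exactly the kind of value trade-off the authors argue should not be refereed only by the handful of engineers and executives who set it.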

This is why we need to break down those walls, and you can’t break them down unless you teach at the intersections. If I just teach on my own, the students might not even show up at my civics class or my social science class, because they won’t think they need it. But when I meet them where they are, which Mehran enables us to do by thinking about the power that’s embedded in code, then I can bring the social science knowledge to the table and say, “These aren’t just abstract problems or technical problems. We can learn about them in the world through social science. And we can mitigate these harms through policy or through institutional change.” That’s what’s missing when these courses are taught in a siloed way.

In the book you identify lots of possible solutions and paths forward, and you got into some of them here. Are you hopeful about the future of our collective experience with tech, and about our ability to address these challenges and move ahead in a positive way?

MS: I think the tone of the book is very hopeful and optimistic, but not naively so. The idea is to lay out what the issues really are in a realistic way and lay out the fact that what we need is policy. In some cases, that policy is actually pretty low-hanging fruit. Despite the divisions in Washington, there are lots of sensible policies we can get to around privacy. There are lots of things we can think about with respect to artificial intelligence creating job displacement in the future and what that means around putting more resources into education and re-skilling the workforce. Those are things that we would probably see bipartisan support for. And when we begin to see some of that legislation move through to grapple with those issues, it also shows the power of democracy to be able to deal well with the bigger issues. The real message in the book is we can get there—it's just going to take some time.

JMW: The other thing I’d add is that we’re blessed to have a platform for engaging young technologists, and young people more generally, at Stanford. Stanford is one of the institutions that is a major pipeline into Silicon Valley, and part of what I see now, compared to five years ago, is just far greater awareness of this power, the concerns about the legitimacy of this power, and the concerns about the harms that have been generated by this power. That gives me a lot of hope. We see the activation of agency around these questions, not only in our students but also in young tech workers who are challenging norms and approaches that have become dominant in these companies. So while I feel good that Washington is waking up and starting to move, I also feel that this serious engagement with these issues is going to create a productive dynamic that changes these companies from below. They’re in a war for talent, and the extent to which young technologists take these issues seriously is really going to change the game.

This conversation has been edited for length and clarity.