As Colin Garvey pointed out last week, AI firm CEOs are essentially rulers. They have the authority to deploy consequential technologies like ChatGPT without any due process, limitations, or accountability to the public. OpenAI handed educators a raw deal by releasing its large language model in the middle of the school year. And there are other potential disruptions, such as filling the Internet with misinformation and ever more realistic deepfakes, with uncertain consequences. Our lives will be reshaped by technologists who neither ask our permission nor seem to understand what consequences their technologies will bring.
There is a widespread sense that something needs to be done. Back in March, tech leaders signed an open letter proposing a global “pause” on AI deployment. Sam Altman, CEO of OpenAI, went to Washington a couple of months ago to lobby Congress to regulate his own budding industry, a move that would have seemed unreal even a few years ago.
It is far less clear how to actually regulate AI. The open letter offered few practical suggestions. Nor is it clear how the federal agency Altman proposed would actually do its regulating. We feel the need for a critical debate about the merits of generative AI and the collective future we want. But, at the same time, these deliberations can feel almost worthless.
Writing in The New Atlantis, Louise Liebeskind takes us through a thought experiment. Imagine it is 1712, and some prescient observer calls for a summit on a novel, potentially epoch-making technology: the steam engine. Would they have been able to come to some agreement on the coming steam age and how to enjoy its benefits in a wise and just manner?
As Liebeskind notes, it seems unlikely that Newcomen, deliberating alongside European royalty, would have even been able to anticipate the upheavals of the coming centuries, much less devise a way to better navigate them. The trouble is that we usually can’t effectively think about the problems we face until they start affecting us.
But public deliberation tends to be even less productive when it happens too late. If you follow Urbanist Twitter, you might have seen a provocative tweet by Alan Fisher arguing that “literally nothing good comes” from holding public hearings. If you’ve ever witnessed a city council meeting, it’s hard not to relate.
Fisher’s complaint is mostly partisan grumbling about “conservative” residents who oppose planning changes because they threaten their property values or their neighborhood’s character. But he doesn’t give much thought to how he might feel if it were his home that was affected by some new city policy. More importantly, he fails to recognize the source of the bitterness on display at public meetings.
When public hearings are called, whether for constructing a new highway, building homeless shelters, or adding bike lanes, most of the important decisions have usually already been made. Residents are presented with what is being done to them, rather than invited to help solve a collective problem. Their resentment is little different from that of educators now faced with an AI-driven plagiarism problem in the middle of the school year. Consequential decisions were made, but their interests were never considered.
Put simply, public deliberation is hard. When it happens too early, we don’t know enough to make wise decisions. If it comes too late, everyone is too pissed off to actually deliberate.
Part of the issue is that we misunderstand the point of public deliberation. It often seems necessary to insert more democracy into public problems, whether AI or building a homeless shelter in a residential neighborhood. But token exercises in public debate generally fail to take advantage of democracy’s strengths as a set of strategies to aid collective problem solving. We mistakenly expect public deliberation to deliver consensus on action, when really it is best used to bring disagreements to the surface.
This mistake in thinking was recognized long ago by political scientist Charles Lindblom. In Inquiry and Change, he associated “government by discussion” with visions of a “scientific society,” the idea that rational minds could discover the “correct” solution to public problems and ultimately “find harmony in the universe.” But decades of contention over nearly all of America’s public problems (abortion, gun control, COVID-19, climate change, and so on) show, at the very least, that neither durable solutions nor harmony is readily forthcoming.
Lindblom’s alternative was the “self-guiding society.” In place of “correct” solutions, we seek answers that can be implemented. Rather than believe that “science” or public deliberation can tell us what we must do, we strive to learn from our actions, especially our errors. We seek not consensus, but sufficient agreement to legitimate imposing a temporary solution.
What does that actually look like? How would we go about building a homeless shelter, or even governing AI? Step one is to give the people affected some role in the policymaking process. For urban planning that is dead easy: consult local homeowners early on. Perhaps the original idea for a homeless shelter in Fisher’s tweet was unworkable: too big, too close to homes, and so on. Maybe the proposal would need to be scaled down, resulting in a trial homeless shelter that could demonstrate to homeowners that their worries were either exaggerated or easily addressed.
For AI, the question seems harder only because there are no established venues. Had Sam Altman met with teachers’ representatives (and had the incentive to do so), strategies like watermarking could have been implemented to discourage plagiarism. Or they might have come up with an arrangement to more quickly discover how AI is actually used by students. What matters most is that they could have kicked off a process for turning the awesome complexity and uncertainty surrounding AI into something more manageable—before people got pissed off about it.
All this is basically to say that most of us have got democracy all wrong. Romantic notions about publicly spirited debates happening in 17th-century Vienna coffeehouses or the “marketplace of ideas” lead us to think that democracy is much more about knowing and arguing than about doing and learning. If we are to solve any of the problems facing 21st-century humans, it will be by leaving behind the idea that discussion and debate can give us “the right answer.” The way forward lies in diving into the unknown, learning together, and hashing out compromises.