Politically active billionaires, like the nationalist/populist everyman/bogeyman characters haunting our news for the better part of the last decade, have been observed to increasingly hold fringe views. Although they were primarily focused on the economy in decades past – the unpopularity of David Koch’s brand of libertarian fundamentalism in the 1980 US Presidential election is a classic example – the scope has grown to the extent that, as Paul Krugman pointed out, even QAnon-type radicalism is no longer beyond their ken.
This is especially prevalent in the tech sector, where mostly white, often highly educated, and predominantly male figures, focused on futuristic technologies such as AI, blockchain, and IoT, appear to be coalescing around extreme ideologies such as “Longtermism.” In other words, tech-billionaires are, despite their immense wealth, ultimately “just a bunch of dudes,” as one of their members admits. As such, they’re built with the same evolved psychology as the rest of us, and are therefore just as susceptible to the current information ecosystem of predatory attention-economics.
No one should be surprised that, behind the curtain, the “tech-titans” are just people too. What is surprising is how little ink is spilled by the many observers of these billion-dollar quirks about why these strange—and by some accounts, dangerous—beliefs are a problem for society, and what exactly that means. Besides pat accounts about holding too much power, of course.
Yes. Of course: Billionaires hold too much power. (We’ve heard it a million times.)
But what are the dynamics and why is that a problem?
Let’s take the example of AI Governance.
The Biden Administration recently announced it had received “a set of commitments” from seven of the largest AI firms “designed to enhance safety, security and trust.”
With details vague—whether intentionally so or not, Kevin Roose’s sympathetic assessment is comical—and no mention of any legislation for enforcement, monitoring, or accountability, the question arises: What does this dog-and-pony show even mean, if anything? How should we understand the announcement of these “nonbinding and voluntary guidelines”? With Roose, as a good “first step”? That is, as a bit of well-intentioned, if ultimately harmless, public-private collaboration? Or should we take the more cynical view: mere posturing by all-too-powerful billionaires and their political cronies? After all, as one public-interest group representative noted of the announcement, “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”
While leaving open these familiar-if-thought-terminating lines of explanation, I propose we can find our answer by taking a step back to the foundations of political economy. I argue that this meeting is evidence of a stunningly obvious state of affairs that nevertheless goes mostly unnoticed.
I am referring of course to the fact that our society has two sets of rulers.
Two Sets of Rulers
The First Set of Rulers
One set is government functionaries: elected officials, bureaucrats, and so on. As political theorists have pointed out at least since Schumpeter, democracy is not “self-rule,” per se. In contrast to monarchy and oligarchy, in what we can more precisely call “polyarchy,” a cybernetic feedback loop allows citizens some degree of control over their rulers: through elections, recalls, oversight committees, and the like. That is, we get to pick our rulers, but they’re still a tiny elite with disproportionate power over us, the mass. They’re still rulers.
Now, despite how we love to hate our democratic rulers, the system of polyarchy America pioneered and continues to practice is quite an accomplishment. Let us quickly review five ways our system significantly constrains the behavior of these government functionaries, even as it grants them vast powers:
1. No grant of authority without responsibility to act. Kanye West said “George Bush doesn’t care about Black people” in 2005, accusing the US President of shirking his responsibilities in the wake of the Hurricane Katrina disaster.
2. Authority is granted to a role—president, secretary, prime minister, cabinet chair, etc.—not to specific individuals. The system of lifetime appointments to the US Supreme Court is increasingly drawing scrutiny for potentially violating this stipulation on power.
3. Granted authority cannot be used for private ends, only for assigned public purposes, and in a limited way. Former Illinois Governor Rod Blagojevich was sentenced to 14 years in prison for blurring this line when he attempted to sell the appointment to Barack Obama’s vacated Senate seat. Likewise, Chicago politicians have long been famous for stretching their authority to create political “machines”—remember your “ward boss”?—that entrenched their power for decades.
4. Authority can be delegated in part to subordinates, but cannot be transferred wholesale to another individual or passed to offspring. The presence of President Trump’s children in the White House raised concern about the erosion of this requirement for democratic rule.
5. Grants of authority are subject to procedural rules that, when applied, can result in revocation. Democratic “due process” of this kind was invoked in the two impeachments of President Trump.
We’re as familiar with this list of requirements on our governmental rulers as we are with its violations. What we’re not as familiar with, apparently, is the second set of rulers.
The Second Set of Rulers
To help adjust our eyes, consider with me an inversion of these five restrictions:
1. Grants of authority with no responsibility to act.
2. Authority granted to specific individuals rather than role players.
3. No limitations on use of authority for private ends, or on scope of use.
4. Authority passed, inherited, or sold as its holders see fit, when they see fit.
5. Authority largely free from “due process.”
Consider that this describes the relationship of ownership over property. If it’s my stereo, I can smash it up and leave it broken, and there is nothing you can do about it. It’s specifically mine, and in the case of this stereo, the role of “owner” is not an office to be held. I play whatever songs I want on it, wherever I want, and if I get tired of that, I might sell the whole thing to someone else at whatever price I like.
Scholars describe this form of authority as propertied authority. It becomes easier to see why an excess of propertied authority is a problem for society when we expand the scope of our example beyond a simple personal product.
Let’s consider instead the propertied authority held by OpenAI over ChatGPT. The company released this AI system in the middle of the school year, and it has no responsibility to take action on the vast plagiarism problem schools worldwide are now coping with. The company is not large, with a few key players in decision-making roles over “their” system. While claiming to work on behalf of all humanity, the company uses ChatGPT and its other AI systems to pursue private ends that parallel those of corrupt politicians or Chicago gangsters: use authority to build a machine that will entrench your power for a very long time.
For Al Capone and Donald Trump, the “machine” is made mostly of human parts, woven together through communication media used to exert influence, whether telephones or tommy guns. Today, however, the power-entrenching machine these private actors seek to construct is literally a “machine”: first artificial general intelligence (AGI), smart as us (and smart enough to reprogram itself to become smarter, and so on, ad infinitum), then artificial superintelligence (ASI), an entity so powerful that it could pose an “existential risk” to humankind. "The bad case — and I think this is important to say — is, like, lights out for all of us," as the 38-year-old CEO of OpenAI opines. And yet, if they are successful in their gambit – an ASI that can actually be controlled by its human creators – they will have used their propertied authority over their AI systems to gain authority over the future of humanity.
Now, let us ask, what kind of authority would this be? Would it be constrained by the humble-yet-pragmatic stipulations of polyarchy? Or would it be of the second kind?
If OpenAI released ChatASI in the middle of the school year, would it suddenly have any responsibility to address the social problems caused by the mass unemployment of teachers (every smartphone offers ASIocrates, etc.)? Would ChatASI no longer be possessed by its owners, its authority vested instead in democratic officeholders (and which offices)? What limits could be placed on this superintelligent system, if only to prevent it from irreversibly modifying the world so as to further entrench the power of its owners? Could its owners be stopped from selling its use as a service, or from passing the ownership rights to their very wealthy children? Or even to ChatASI itself? With little in the way of “due process” operating on AI technologies in America or elsewhere, would it be wise to expect the gears of government to kick into action once things are “superintelligent”?
Remember when Elon Musk warned of AI becoming an “immortal dictator”?
*
We need not engage in such flights of fancy, however, to see how propertied authority over a more familiar “machine” is a problem for society. OpenAI itself is just such a machine: a corporation (despite its protestations to the contrary). As Ted Chiang has pointed out, AI and corporations are alike in certain significant regards. But here let us part ways with the fashionable analysts who think the important parallel is to be found in the head – tech-elites’ fears of ASI taking over the world may be something like the “Freudian return” of their anxiety about the impact of their own actions on our society – and look instead at the functioning of this more basic machine, the business.
Businesses are machines held under the propertied authority of “executives,” who are granted sweeping authority by market-oriented societies to make decisions of significant public consequence: who gets what jobs, at what pay; where plants are sited, to make which products (and which wastes); what technologies will be produced and released, and at what cost. That is, whether ChatGPT should be developed, how, by whom, and when it should be released, school year or no. And as consumers, our options are limited to a Hobson’s choice: we can take it or leave it. Because as citizens, we have no polyarchal control over the second set of rulers. None of us can vote Sam Altman out of his gig at the helm of humanity.
In short, there is little of social life that is not touched by the economy – even the choice not to participate is an economic one – and there is little of the economy that is not shaped to a significant extent by the decision-making of business executives. This is the first sense in which they enjoy a privileged position within our society.
So, the second set of rulers? Business executives. They sat as equal-yet-separate partners at the table with President Biden. Why is that a problem? Because without intervention by the first set of rulers to constrain the propertied authority of their counterparts, decision making on important challenges for the future of humanity–such as AI governance–will be left to this second set.
Conclusion
When it comes to AI governance and the “big challenges” facing humanity, instead of engaging in semantic squabbles over whether “capitalism is still capitalism” or whether it has morphed into “techno-feudalism,” let’s open our eyes and take a hard look at the propertied authority business executives wield over nominally democratic, market-oriented societies like our own. Then get out the scalpels, chains, and blowtorches to re-jigger this busted machine.
Next week I will dig into the feedback loops at work between these two sets of rulers, and what this form of “circularity” portends for AI governance and the future of humanity.