The Coup

That was when they suspended the Constitution. They said it would be temporary. There wasn't even any rioting in the streets. People stayed home at night, watching television, looking for some direction. There wasn't even an enemy you could put your finger on. – Margaret Atwood, The Handmaid's Tale

In some ways, I liked it better when people could roll their eyes at my doomsaying, when they could still insist that technological innovation meant "progress" – not just scientific progress, but political and social progress as well. "Maybe tech entrepreneurs don't know much about, say, teaching and learning – about education research or about education history – but gosh darn-it, Audrey, they mean well." It gives me no joy to have been right about the designs that men like Elon Musk and Marc Andreessen and Mark Zuckerberg have for our future – for education or for life itself.

But here we are.

"The mask is off," people keep repeating. Indeed it is, although I do marvel that people were willing and able to ignore the mask – it was so obvious and so ugly all along.

Computers: bad for democracy. Bad for the planet. But at least we all got some good conference swag out of it, I guess.

I joined the brilliant Helen Beetham recently to record several episodes for her Imperfect Offerings podcast. The first of these was released this week, and in it, we discuss some of the history and the politics of AI. We ask, in part, if this techno-fascist turn was inevitable.

The roots of computing are, after all, if you trace them through Alan Turing, deeply militaristic – "intelligence" as surveillance and deception; if you trace them through Charles Babbage, they're incredibly managerial – the automation of calculations for The Nautical Almanac, which in the nineteenth century, was instrumental to British imperialism. Somehow, personal computing magically wrested the machinery away from this "command and control" model so as to liberate us all in turn; and the Internet did the same so as to connect us all together. And yet… And yet Silicon Valley has always been clear that its allegiances are to the military and to management; its ideological underpinnings were never truly politically progressive but rather bound up in a fervent embrace of neoliberalism, libertarianism, and radical individualism.

Computing technology has never not been the tool of the powerful, the oppressor, the imperialist, the cop.

It's worth pointing out too that education policy in the US has always, at least to a certain extent, been dictated by wealthy industrialists – since well before we turned to taxation as a school-funding mechanism, something that still irritates plenty of people, but particularly the rich, who prefer the control that philanthropy affords them over any sort of democratic decision-making that includes the rest of us riffraff. Certainly for the past few decades, one oligarch in particular has had an outsized influence on the shape and direction of teaching and testing and technology in schools. His endeavors have all been wildly undemocratic and, let's be honest, pretty unsuccessful too. But plenty of people have ignored both of these things — austerity has, no doubt, made it hard not to. So eager for that sweet, sweet Gates Foundation money, we have allowed education to be engineered to meet the tech industry's demands.

Side-note: Did you know that, decades ago, I wrote my Master's Thesis in Folklore on political pranks, with a whole chapter on political pie-throwing? In the late 1990s, if you'll recall, Microsoft was under investigation for multiple anti-trust violations, and Bill Gates was reviled. Deeply and utterly reviled. When he was struck in the face by a pie thrown by the activist Noël Godin, lots of people laughed. (OK, I did.) The richest man in the world had been taken down a notch by one of the oldest gags in slapstick comedy; he had got his just desserts. But in the intervening decades, thanks no doubt to his philanthropy, Gates has laundered his reputation. Microsoft has too, despite still making shitty software.

As one of the major funders of OpenAI, Microsoft stands to profit from any AI windfall while (it hopes) distancing its brand from any AI collapse.

On Monday, the California State University System issued a press release announcing a "first-of-its-kind public-private initiative" to "leverage the power of artificial intelligence to create an AI-empowered higher education system that could surpass any existing model in both scale and impact." Some 460,000 students and 63,000 faculty and staff will "have equitable access to cutting-edge tools that will prepare them to meet the rapidly changing education and workforce needs of California." That means lots and lots of ChatGPT licenses — which, I'm sure the Microsoft, Nvidia, and OpenAI executives involved all hope, will make DeepSeek seem like less of a body blow to American AI efforts.

Everything in the announcement was vague enough but shiny enough to prompt a flurry of breathless stories – a reminder of the excitement a decade ago when that same university system and then-Lieutenant Governor Gavin Newsom announced a pilot program with the MOOC-provider Udacity to offer $150 online versions of core curriculum courses. This changes everything. This will "end college as we know it," TechCrunch insisted with such vitriolic glee, even offering a timeline of how – good riddance – the demise of expensive, in-person public higher education was soon to follow. (Just a few months later, if you'll recall, Udacity's Sebastian Thrun admitted that his company had a "lousy product" and the project was scrapped. Unfortunately, TechCrunch was not.)

As Rob Nelson points out, it's not at all clear what "AI-empowered higher education" even means. But now, with a full-throated endorsement of this dumb and dangerous, anti-democratic technology, a public university system has made all of OpenAI's and all of Silicon Valley's problems its problems too. Good work, everyone.

Sadly, AI and Silicon Valley are everyone's problem now. Brian Merchant argues that the Musk coup – "Government by Grok" – is an AI coup, one that is clearly "about control. Automation necessitates the narrowing of scope, of information input into a system, of possibility — so that a job or a task or work can be more predictably and repetitively performed on behalf of an administrator. In DOGE we see the logic of automation — of enterprise AI — being imposed by a nascent oligarchic state. We thus see less information available to the world, fewer options available to the humans working to provide it, fewer humans, period, to contest those in power, as that power concentrates in their hands."

Whoever could have seen this coming?

Well, David Golumbia did, for one. Fellow tech critic and scholar Chris Gilliard appeared on Paris Marx's podcast this week to talk about David's book Cyberlibertarianism: The Right Wing Politics of Digital Technology, which was published posthumously late last year. Chris and David co-authored an essay several years ago about "luxury surveillance" (and Chris has a book coming out next year on this topic too) – "surveillance that people pay for and whose tracking, monitoring, and quantification features are understood by the user as benefits they are likely to celebrate."

This is precisely the kind of surveillance that Reid Hoffman argued we should all gladly surrender to in his recent op-ed in The New York Times: the handing over of all of our personal data to artificial intelligence systems that will, he promised, optimize our life's decisions. But you have to really believe that you are safe from "discipline and punishment" to acquiesce to this. That's the "luxury" of it — you have to feel quite assured about your position, your power, your privilege. (Who are you if, in this particular moment, you do?)

In 2015, an essay made the rounds (in my household at least) that argued that jobs could be classified as above or below the "API line" – above the API, you wield control, programmatically; below, however, your job is under threat of automation, your livelihood increasingly precarious. Today, a decade later, I think we'd frame this line as an "AI" not an "API line" (much to Kin's chagrin). We're all told – and not just programmers – that we have to cede control to AI (to "agents" and "chatbots") in order to have any chance to stay above it. The promise isn't that our work will be less precarious, of course; there's been no substantive, structural shift in power, and if anything, precarity has gotten worse. AI usage is merely a psychological cushion – we'll feel better if we can feel faster and more efficient; we'll feel better if we can think less.

We're all below the AI line except for a very very very small group of wealthy white men. And they truly fucking hate us.

It's a line, it's always a line with them: those above, and those below. "AI is an excuse that allows those with power to operate at a distance from those whom their power touches," writes Eryk Salvaggio in "A Fork in the Road." Intelligence, artificial or otherwise, has always been a technology of ranking and sorting and discriminating. It has always been a technology of eugenics.

Literally. Literally. Do you see it yet? The genocide in Gaza, facilitated by Microsoft and AI. Andreessen Horowitz's hiring of Daniel Penny, who was just acquitted of the murder of Jordan Neely, a homeless Black man. Executive order after executive order taking aim at trans people in this country – barring them access to healthcare. The closure of USAID (and the impact this will have on, for example, HIV treatment globally). The weaponization of data, the destruction of data, the blurring of data, flooding the zone with shit. This isn’t just shit-posting or the disappearance of websites into some ethereal nothingness. This is real material suffering. Real actual death. Dead bodies. Dead bodies. Dead bodies. That’s our AI future.

As Timothy Burke passionately argues, these people are poised to take everything away from us. Everything. "One thing that I do think is worth saying right now, however, to everyone who might hear it, is that whatever you think of universities, professors, scientists, college students, you need to understand that both the Maga and Musk wings of Trumpism are aiming to take all of that away from you. You, your children and your children’s children. They are not hoping to land commandoes on the roof to take it away from us. They do not have replacement scientists, professors, libraries, labs, ready to substitute in after they fire or imprison everybody who works at universities now. They have nothing."

They can say – and they do, don't they – that their AI is going to run the world smoothly and efficiently, don't worry. But that's so obviously a lie. Their AI is little more than bullshit-at-scale, crudely strung together with lots of computing power but very little reasoning and not a scrap of ethics or care. Perhaps it's heartening that none of these people on Team Trump or Team Musk actually have the capability to build anything. Their only substantive skill is burning through other people's money; their careers, their fortunes have been made on taking credit for other people's work (e.g., Mosaic, Tesla, PayPal).

(It’s not heartening.)

Billions of dollars and so much environmental degradation later, their generative AI is still rife with errors. So when you hear anyone tell you that AI is the "future of education" or the "future of research," they are likely selling you a product that is incredibly shallow. Maybe this is what passes as "research" in a school of business (sorry, not sorry), but it sure ain't gonna cure cancer. Generative AI feeds and feeds on essay mills; it feeds and feeds on predatory research journals, and it's already filling libraries with AI slop. That is not "deep research" – it is, once again, the enactment of Steve Bannon's "flooding the zone with shit."

"AI Boosters Think You're Dumb," John Warner contends, and it appears their plan is that we stay that way. Indeed, OpenAI's Sam Altman came out last week and said that his unborn child will never be as smart as AI – which is one of the more heartbreaking things I've heard recently, in a news cycle that's been filled with utterly gutting news.

I received a PR pitch this week for AI-based gun detection in schools – another reminder that, despite the pretty picture that AI promoters like to paint (or, more likely, burn a chunk o' fossil fuel to generate), the future of education under or even adjacent to Silicon Valley autocracy is fucking grim. It is a future of surveillance and predictive policing.

You might think you and your kids are above that "AI line," but you're not.


"You Can't Post Your Way Out of Fascism," 404Media reminds us. Goddammit. And here I've typed up 2000 words, really hoping that I could. So how can we (re-) evaluate technology? Erin Kissane suggests we start with "Ursula's list." We gotta start somewhere, and I reckon that involves strengthening human capacities for humanity — for love and expression and community.

Look out for each other. Take care of yourself.

Thanks for being a subscriber to Second Breakfast. I wish I had something comforting to say to send you off into Superb Owl weekend. Our nation turns its lonely eyes to you, Kendrick Lamar...