Un-Listed

Troupial (Image credits)

I've started and paused and restarted this email several times. I thought I'd send it Monday, then Wednesday, and now look: it's Friday afternoon (my time). We're already a week into the new year.

Conventionally, it’s a bit late to say “happy new year.” Realistically, I think we knew it was going to be anything but.

And I mean, what can I say?! What do I say?! Things are bad, so very bad. You know that already, I'm certain (even if you're trying to ignore it all, whether you're opting for ignorance or denial); and I'm not sure my repeating that things are bad is terribly helpful.

Then again, I’ve always tried to convince myself (at least) that my writing does help somehow: that it helps people make sense of "what's happening," broadly speaking, in and around education technology. (And things in ed-tech, at least as I frame them, have rarely been stellar.)

But I'm really struggling to form thoughts out of such utter thoughtlessness all around me, to shape a story for you that can make sense of all this senselessness, to find narrative coherence in the deliberate chaos and destruction.

"People will die." Many of us said this when Trump was first elected and then elected again. And goddammit, we have been proven right again and again. The atrocities continue; the body count accrues. “Happy new year,” indeed. At some point, we do have to wonder why so many of us keep trudging forward, adamantly insisting everything is okay, that things will ever be “normal” again. When you say, when you think "what else can I do?!", in a way you've already surrendered.

The tragedies are occurring everywhere; they're happening at an unfathomable scale. And they're happening on the smallest of scales too and simultaneously: broad sweeping politics -- nationally, globally, and all the little broken pieces of each individual life upended, each future shattered. The personal is political, the political is personal, and it's all so very very fucked up.

Silicon Valley tries to soothe us into submission: here, have an app, have a hack, use this agent. And pretend that you’ll be spared the violence built right into the software.


I tend to keep a lot of lists: lists of things to do, lists of things I've done. Task lists. Shopping lists. Wish lists.

Making a list supposedly keeps you on task – focused, productive. Lists order and organize. Lists make meaning; they rate, and they rank. "He's making a list; he's checking it twice": lists discern; they judge. They're seasonal – or at least Santa's is. Santa's, along with all the end-of-the-year, beginning-of-the-year content that's endlessly churned out, posted, shared, repeated, obligated, serialized, listed. Lists are supposed to help you remember, and I often tell myself I won't remember if I don't write it down. But lists can get so long, and they're so ubiquitous (particularly this time of year), you can't help but glance over them.

Maybe we want to forget. Maybe we need to forget.

A list is "a catalogue or roll consisting of a row or series of names, figures, words, or the like," according to the OED, which dates this particular usage of the word (there are other, older meanings) to 1604, to Shakespeare's Hamlet: "Young Fortinbras... hath in the skirts of Norway here and there sharked up a list of lawless resolutes."

"A row or a series" – comma-separated, perhaps, or in some sort of tabulated format, like, say, a spreadsheet. More likely, what with our lives organized by computing, your list is "bulleted," a typographical mark popularized by PowerPoint.

It's all some sort of "software way of knowing," the way we've come to organize not just "data" and "information" and numbers and money but our very existence; the way we've already shaped our thinking – well before the onslaught of this "AI" bullshit – so as to fit into the computational machinery, into the sub-routines of productivity, into some sort of twisted version of "optimization" where we click and consume more and more rapidly so as to hollow out our lives, our souls more and more quickly as well.


I didn't make a list of goals or resolutions for 2026, something I have done for as long as I can remember. It's not that I don't have goals; it's not that I don't want to get better, be better, do better.

I just... I don't know. Maybe I didn't feel like writing things down, laying out plans when everything seems so far outside my control.

Or maybe, as I didn't complete so many of last year’s goals, I remain deeply deeply unresolved.

Or maybe I started to type things up and the software felt like strangulation.


Make a list. Or rather, instruct the software to make a list. It’ll start you off with a round black dot: a bulleted list.

"Bulleted."

C. S. Lewis once spoke of a world "shot through with meaning." Now we have all the software prompts and all the bullets, and none of the care or the sense.


No one is dancing any more. Everyone's afraid that someone will take their picture – that other inescapable kind of "shooting." Everyone's afraid of being seen. Everyone's afraid of being laughed at – and sure, that's not entirely new. But now, everyone's afraid of everyone laughing. Everyone's afraid of becoming a meme.

We live under "memetic fascism," as Ryan Broderick put it in a recent newsletter, arguing that the Trump Administration is more interested in creating content – viral content – than in offering or enacting any coherent policy. "Memetic fascism" – a world where information and technology are granted the agency that regular folks (particularly children, particularly students) are increasingly denied. "Memetic fascism," aided no doubt by the proliferation of surveillance cameras and AI slop.

Memetic fascism and memetic ways of consuming and producing data points – these seek to undermine and to replace democratic ideas of education and democratic ways of knowing.

It’s a landslide, Erin Kissane writes.

The knowledge substrate of our society has become increasingly loose and disorganized. It’s now composed of an extraordinary variety of highly disparate things that claim to be news, from vivid and skillful investigative reporting to openly partisan propaganda through the distributed op-ed cultures of podcasters, Substackers, and streamers. The substrate is also enmeshed with “news” organizations devoted to discrediting reporting and with social platforms that deliberately shape the subset of information we see based on our own dumb desires and susceptibility to casino tricks—and on the megaplatforms’ drives toward profit and power. It’s an unstable mess, and we all know it.

Everything is also just happening too much, and I think we know all that, too.

I’m almost entirely “off” of social media now. I’ve been unsubscribing from newsletters too – perhaps a bad thing, karmically, since writing one is largely how I earn a living. But I just can’t, anymore, with all the “AI” slop; I can’t with the moral and intellectual contortions folks are churning out to assure readers (more likely, to reassure themselves) that “AI” will be good, that they've managed to wrangle the technology into some weird academic utility.

How will I “stay informed”? Ha. As Erin argues in her essay, one might feel like one is very informed when one keeps one's face to the screen, when one is inundated with streams of social media data. But this overabundance of "information" is actually untethered from meaning-making – thanks, Claude Shannon! – and I can assure you I will probably know more by being on the Internet less.

Maybe this is the point where the ed-tech enthusiasts balk. Maybe they balked earlier.

Education technologies have almost universally signed up for the "digital" and then the "social" and now the “AI” show, always eager to demonstrate they're on the cutting edge but not always willing to address who that edge might cut. Ed-tech has long been "cop shit," of course, mostly engineered to work in the service of compliance and control, despite the insistence from some very well-meaning progressive educators that they know how to use computing in ways that are liberatory.

(Sidenote: When we talk about the problems with school, I think we understand we’re speaking of issues that are deeply structural, historical, sociological; but when we talk about the problems with technology, it’s all “user-error” and individual consumers’ responsibility to figure out how to make the shit not stink. Actually, more and more, that's the framework that's seeping into our discussion of school too: Americans are "giving up on democratic equality and [are] trying, instead, to maximize the social mobility of their children however they can," as Dan Meyer argues.)

(Maybe not a sidenote. Maybe that's the central issue: ed-tech and the tech industry more broadly is literally invested in our giving up on democracy.)

Here we are – “happy new year” – and it really does boggle my mind there are still those who insist that they can wrest "AI" into "doing good," as if technofascism can readily be reshaped for any sort of truly "generative" purpose, as if one hundred years of teaching machines has brought us anywhere other than, to borrow from B. F. Skinner, a world now truly spiraling "beyond freedom and dignity."

I mean, I get it: people (some people, at least) want to do the right thing; and apologies for quoting Upton Sinclair in back-to-back emails, but "it is difficult to get a man to understand something, when his salary depends on his not understanding it." And what the hell, let me invoke Neil Postman again too, from a passage in Technopoly where he channels Adolf Eichmann: "I have no responsibility for the human consequences of my decisions. I am only responsible for the efficiency of my part of the bureaucracy, which must be maintained at all costs."


John Herrman recently argued that "Elon Musk Owns the AI Conversation”: X (the site formerly known as Twitter) is where the discourse about "AI" is being played out.

The reality of the AI industry is far larger than a subculture on X with hundreds of billions of dollars flowing into physical infrastructure, reshaping jobs, and deploying into products used by billions of people. But that makes X’s capture of the AI conversation all the more notable. Much as Twitter long ago captured, amplified, and distorted elite conversation in media, in sports, in parts of finance, in left politics, and more recently in right politics, the world’s understanding of what’s going on in Silicon Valley right now — the hype and the doom and the bubble and the progress — is first processed through the strange culture and incentives of X, which is now owned by estranged OpenAI co-founder, Google antagonist, and xAI founder Elon Musk. It has been merged into xAI, where the in-house chatbot occasionally confuses itself with Hitler. When the history of this period in tech is eventually written, it will be composed of, among other things, a whole lot of threads and replies from blue checks on X.

This seems incredibly important for education, if, as Silicon Valley’s investors and entrepreneurs hope, “AI” is going to finally “revolutionize” school. (Meaning: end teachers’ unions; control information; predict and monetize futures; eliminate “woke”; eliminate trans people; make sure only “the right people” go to college – men, that is. White, American men.)

Herrman published his piece a few days before the explosion of deep-fake child porn on X – child porn generated by and facilitated by Musk's "AI," Grok. Generative "AI" has finally found its "product-market fit," Max Read noted: the production of CSAM at scale.

The production of CSAM at scale, and according to WIRED at least, the pushing of CSAM mainstream. But what counts as "mainstream"? There has been quite a bit of research and reporting in the past few years that indicates non-consensual CSAM has already become widespread in schools.

Everyone wants to chit-chat about "AI" for cheating and "AI" for lesson plans – it's more familiar, more comfortable ground for educators (and other adults) to cover – and sure, we should talk about the automation of academic thoughtlessness. But the proliferation of “undressing” technology should remind us that the lack of consent is a fundamental element of "AI" – data and content taken without our permission, text and images "generated" without our permission, algorithmic decision-making without our permission, that little sparkly "AI" icon forced into our everyday software and thus everyday lives without our permission.

Musk can shut down Grok's ability to create these images, but what remains is a technology trained and prompted and designed to do this. It's not some accidental "innovation"; this violence is by design.


I've long been a fan of Emily Bender and Alex Hanna's use of the phrase "synthetic text extruding machine" to talk about generative "AI." It conjures an image of the pink slime that's used in ultra-processed meat products – it's not not meat, but yeah, ewww. But maybe I'm going to start saying "child porn machine" instead: "Oh, did you use the child porn machine to summarize that article? Cooooool." "Oh, you used the child porn machine to grade your students' essays? Niiiice." "Ah, you had the child porn machine write letters to parents, praising their child's behavior in class? Excellent." "It's faster to use the child porn machine to write letters-of-recommendation, you say? I see." Nice school system, nice future you're building.


To list. To rank. To rate. To catalogue. To categorize. To profile. To label. To name. To judge. To prioritize. And presto, somewhere along there, once you've shoved the unwanted ideas and people down far enough, you can say you have attained AGI.


Image credits

Today’s bird is the turpial (or troupial), the national bird of Venezuela. It is the largest of the orioles, but unlike other orioles, the troupial does not construct its own nest. It steals others’ – "nest piracy" – sometimes violently removing other birds from their homes.

It’s not really a good metaphor for, you know, everything; but it's a striking bird nonetheless.


Thanks for reading Second Breakfast. Please consider becoming a paid subscriber as this whole reading, writing, thinking about the implications of ed-tech is my full-time job.

I reckon this newsletter is going to be a little more unscheduled, unscripted, malformed, messier this year. Maybe at some point I’m just going to start sending you a zine instead of an email. We'll see...