Into the Breach

Red-legged Partridge (Image credits)

There's been plenty of ink spilled in the last week in response to The New York Times's story on NYC mayoral candidate Zohran Mamdani and his college application to Columbia University, in which he ticked the boxes for both "Asian" and "Black or African American" when asked to identify his racial background (he further clarified by writing in "Ugandan"). Indeed, Mamdani, whose parents are Indian, was born in Uganda and moved to the US at age seven.

Plenty of ink spilled, but I'm writing about it anyway.

That NYT article notes, in passing, that it "could not find any speeches or interviews in which Mr. Mamdani referred to himself as Black or African American." Nonetheless, the "paper of record" insisted that, by publishing this story, it was doing important public service journalism. So this is really a story about "that one time" when he was a teenager. And although his father is a professor at Columbia, Mamdani was not actually accepted to the school. The URL slug reveals what the paper wanted readers to believe: mamdani-columbia-black-application.html.

Margaret Sullivan, who once served as the paper's public editor, asked in The Guardian what seems like a pretty fair (and obvious) question: is the paper trying to wreck Mamdani's bid for mayor? But it's doing more than that too.

Lest you think that this incident is far afield from the questions of education and technology that I write about here, it actually underscores almost everything that's happening right now with regard to the politics of "AI." This is a story of racism, eugenics, privacy violations, and the weaponization of personal data, not to mention the willingness of powerful institutions to perpetuate rather than investigate, let alone ameliorate, harms, particularly to immigrants and people of color. And it's literally a story about education and technology.

This data about Mamdani came from a hack of Columbia University's systems earlier this summer. (It doesn't appear that Mamdani was specifically the target, but as his own application – from 2009 – suggests, the information that was stolen went back decades). The alleged hacker told Bloomberg that they had "sought to acquire information about university applications that would suggest a continuation of affirmative action policies in Columbia’s admissions, following a 2023 Supreme Court decision that effectively barred the practice." The school's newspaper also reported that, following the cyberattack, campus computer screens were left showing a photo of Donald Trump. This was a political act.

For its part, The New York Times saw no problem with the provenance of this data – this stolen data. To publish its story, the paper worked with the hacker's intermediary, a racist blogger named Cremieux, best known, according to The Atlantic, for compiling charts on the "Black-White IQ gap." Chris Rufo, a self-described "anti-Woke" activist who's made his career by crusading against public universities, applauded the paper for publishing the story before he could, saying it had simply beaten him to the scoop.

The laundering of stolen data to the detriment of democracy. Sounds familiar.


Schools must do everything in their power to protect their faculty, staff, and students. (OK, yes. School's out here in the US. But this is how administrators need to be spending their summer vacation.) One could stop the sentence there, of course: schools must do everything in their power to protect their faculty, staff, and students, period. But I will append a prepositional phrase: schools must do everything in their power to protect their faculty, staff, and students from the eugenicists and the fascists and the "anti-Woke" mobs. (And to do that, schools must not simply acquiesce to the technical wing of these movements: the "AI" industry.)

If we are to prevent the tentacles of techno-fascism from further expanding their reach into all aspects of campus life (and quite literally from sending secret police to round people up), schools must seriously consider not just cybersecurity but data minimization. What tech companies want and what "AI" companies desperately need is data maximization.

As Faith Greenwood writes in her "Brief Ode to Data Minimization," "data is inherently dangerous, and you should capture and store no more of it than the bare minimum that is required to accomplish your goal." And even that much data may well be, in some cases (and under fascism), too much.
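To make that principle concrete, here's a minimal sketch of the difference between data maximization and data minimization – in Python, with entirely hypothetical field names and an invented "in-district" eligibility question, not any real school's schema or policy:

```python
# A sketch only: hypothetical fields and an invented eligibility question.
from dataclasses import dataclass

# Data maximization: keep everything the applicant ever submitted, forever.
raw_application = {
    "name": "J. Doe",
    "date_of_birth": "1991-10-18",
    "race": ["Asian", "Black or African American"],
    "home_address": "123 Example St",
    "zip_code": "10027",
    "essay": "...",
}

@dataclass(frozen=True)
class EligibilityRecord:
    """The bare minimum needed to answer the one question we actually have."""
    applicant_id: str
    is_in_district: bool

IN_DISTRICT_ZIPS = {"10025", "10026", "10027"}

def minimize(applicant_id: str, application: dict) -> EligibilityRecord:
    """Derive the answer, store only that, and let the rest be discarded."""
    return EligibilityRecord(
        applicant_id=applicant_id,
        is_in_district=application["zip_code"] in IN_DISTRICT_ZIPS,
    )

record = minimize("a-001", raw_application)
print(record)  # EligibilityRecord(applicant_id='a-001', is_in_district=True)
# If only `record` is ever written to disk, a breach years later has far less to leak.
```

The point isn't the code; it's the habit: decide what question you actually need answered, keep only that answer, and there is far less for a hacker – or a subpoena – to take.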

We can see this again and again and again, and more and more, particularly as DOGE and ICE become more brazen with (and certainly much better funded for) their searches of our medical records, our social media, our border crossings, our taxes, our student loan data, and yes, our school records.

(I'll be talking about this idea of "small" versus "scale" in more detail in my upcoming keynote at the 2025 Civics of Technology conference. You're invited.)

And yeah. I have some things to say about "abundance."

Related: "Resisting the Techno-Fascist Takeover: Are We Ready for Decomputing?" asks Dan McQuillan.


"Artificial intelligence is here to stay," the Washington State Office of Public Instruction pronounces on its website. But is it? I mean, is it really? How do they know? Who says so? What are their interests in furthering this narrative – this narrative where we all just roll over and give up on truth and accountability and democracy?

I'll write more in Monday's newsletter about the announcement this week that the American Federation of Teachers is launching a new "National Academy for AI" here in NYC – an effort bankrolled by OpenAI, Microsoft, and Anthropic.

But I will say this today: this is pretty dispiriting news – "a gigantic public experiment that no one has asked for" – particularly as unions should be one of the ways in which workers resist, rather than acquiesce to (be trained for), the tech industry's vision of the future.

"Reading and writing and arithmetic and learning how to use A.I." are the core tasks of schools, according to Chris Lehane, OpenAI’s chief global affairs officer. Lehane is, for those keeping track at home, one of the lobbyists who spearheaded the recent push to include language in Trump's "Big Beautiful Bill" that would have prevented states from passing their own AI regulations. These are profoundly anti-democratic forces peddling profoundly anti-education initiatives, and no invocation of some empty "seat at the table" cliché makes up for this. At all.

Teaching teachers how to use a suite of Microsoft tools does not help students as much as it helps Microsoft. Teaching teachers how to use a suite of Microsoft tools is not so much an "academy" as a storefront. All ed-tech is "sponsored content."

Quite possibly one of the most awful and ominous pieces of "AI" I've seen on LinkedIn, and that site is teeming with bullshit and bullying.

It's almost tomato season here in the northern hemisphere, which is very exciting. And it's also open letter season, which is also exciting because – hey, if we don't have unions, we do still have each other:

And apparently it's open season on those signing these letters and refusing "AI" too, as those who say "no" are increasingly being threatened with exposure and exclusion. Those who resist are being painted (I should try to tie in "tomato" here maybe) – by some of the wealthiest and most powerful people on the planet, mind you, as well as by those most obsequious to them – as the real threat to the future of humanity. Not the environmental havoc this bullshit machinery is causing. Not techno-oligarchy. Not rising economic inequality and precarity. Not racist surveillance technology baked into the LMS. But Brenda, the sixth-grade social studies teacher who refuses to use Claude to vomit out lesson plans. Laura, the adjunct writing instructor who refuses to outsource her grading to ChatGPT.

Whenever someone "tut-tuts" that those resisting oppression are being too divisive, making us all too divided, what they're really saying is "shut up and deal with it."

As the brilliant Helen Beetham observes (in a post full of brilliant observations),

...every time the mask slips on the ugliness, greed and rapacity of the AI industry, whenever the lies and misdirections and fabrications become unsustainable, come the grown-ups to tell us to calm down and carry on. Because polarisation is unhelpful. No matter that on one side are the four or five largest corporations that have ever existed, the biggest bubble of financial over-investment, the most powerful military and surveillance states and all the combined forces of tech hype and mainstream media, while on the other side are thoughtful people with arguments. The good academic must always plot a middle course between naked power and poor thought.
But what if there isn’t a middle of this road? What if the project of ‘artificial intelligence’ is not a road to new kinds of education – not even a slow and bumpy one – but the reversal of everything education stands for? What if, at least in its current, (de)generative, hyper-capitalistic guise, the project of AI is actively inimical to the values of learning, teaching and scholarship, as well as to human flourishing on a finite planet?

"Is it the phones?"

"What would Socrates do?"

"What Would You Do?"

"Why Do Fascists Dream of Alligators?"

"Are Humans Destined to Evolve into Crabs?"

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Writing this newsletter is my full-time job, and I can't do this work without your support. I swear, when I started composing this email, I had more links and more news to share, but it's all exhausting and we should put our phones down and eat some breakfast and breathe and think – the latter an act that, I promise you, cannot be reduced to predictive tokens. More on that on Monday too...