AI as School Monitor and Measurement

On Saturday night, ICE agents arrested Mahmoud Khalil, a recent Columbia University graduate and prominent pro-Palestine campus activist still living in campus housing with his 8-month-pregnant wife, a US citizen. Initially, Khalil, who was born in Syria, was told his student visa was being revoked; when agents were told he was a permanent resident, they said his green card had been revoked as well. Khalil was "disappeared," with federal authorities refusing to tell Khalil's wife or attorney where he was being detained. (They have since learned he is being held in a detention facility in Louisiana – one that a 2024 report by the ACLU criticized for "systemic human rights abuses against immigrants detained and disappeared.”)
Khalil had appealed to Columbia for help the day before his arrest — he said he was “in fear for his life and well being.” His email went unanswered.
Khalil has been convicted of no crime; he has not been allowed to speak to his lawyers; and the Trump Administration has been gloating not only about his arrest and forthcoming deportation, but about the government’s intention to round up other pro-Palestinian activists.
And AI, we're told, will play a key role in this. Axios reports that the State Department plans to use AI to identify students "who appear to support Hamas or other designated terror groups." And the government and AI are not just coming for students: “Yale Scholar Banned After AI News Site Accuses Her of Terrorist Link.” 404 Media lists "The 200+ Sites an ICE Surveillance Contractor is Monitoring" as part of an effort to link on- and offline data, to "map out a person’s activity, movements, and relationships."
“The AI State is a Surveillance State,” Eryk Salvaggio asserts, with a nod to the history of government surveillance – concerns that came to a head in the 1960s and 1970s, not just because of the rapid expansion of computing and data collection but because of, you know, the privacy violations ordered by Richard M. Nixon and COINTELPRO – all background to the passage of the 1974 Privacy Act.
This history of surveillance is worth pointing to within education too — and not just because FERPA was passed that same year. Social media monitoring is not something new on campuses. While it hasn't explicitly been branded with that ubiquitous "AI" label until quite recently, software that sifts through students' and teachers' data to identify "risks" – whether these be risks of depression, self-harm, "radicalization," school shootings, cheating, or teacher unionization – has been sold to schools for well over a decade now.
We are living in a reality – a political reality, an economic reality – built by and for digital technology – and that absolutely includes ed-tech.
The Factory, Remodeled:
When you watch teachers working within the dominant edtech paradigm, you see their relationship with students transformed from nurturing to supervisory. What more can those teachers do but patrol the rows of desks, making sure kids are staying on the right tab and staying off of Retro Bowl?
One of the things that struck me about Dan Meyer's recent talk to Amplify software developers (cited above) is how the constant invocation of the "factory model of schooling" by various ed-tech entrepreneurs (their investors, their political backers) actually belies their recreation of this very thing: their obsession with efficiency and productivity, with data and measurement. They are the heirs of scientific management, not its opponents.

Don't Know Much about History:
Much of the coverage of the Department of Education in recent weeks has noted that it's a relatively new federal agency – its creation signed into law by President Jimmy Carter in 1979. This is true, but only partially so. Its origins extend back over a century earlier, when, in 1870, the federal government first began to collect statistics about the American education system: how many students were enrolled in primary and secondary schools, for example, and how many actually attended; how much teachers earned; how much money municipalities spent in order to run these institutions.
This week, the Education Department announced it would lay off 1300 staff (its workforce now half of what it was at the beginning of the year), as Trump continues to insist he will shut down the agency entirely.
That early if not original function of the department – data collection – has been pretty well obliterated. The Institute of Education Sciences, the department’s research wing, has already seen its budget slashed by hundreds of millions of dollars, and the majority of its staff were ousted in this week's mass firings. The administration of standardized tests like NAEP ("the nation's report card") and PISA (which compares US students to students globally) is in question. "Are Schools Succeeding?" asks The New York Times. "Trump Education Department Cuts Could Make It Hard to Know." Educational statistics have been used, at least since Brown v Board of Education, to identify and underscore educational inequality – so no surprise, really, that the administration seeks to dismantle a tool that has been wielded by civil rights activists to demand educational justice.
But, let's be very clear: measuring students will continue. As Daniel Castro from the Center for Data Innovation argues, "AI Is Key to Trump’s Education Overhaul." Testing won't simply be something that happens once or twice (or sadly more) a year; assessment will be ongoing and ubiquitous. All aspects of students' lives – in and out of the classroom – will be monitored and measured.
This task will, of course, be privatized. (It is already administered by private companies – it's just underwritten with public funding.) In true "shock doctrine" form, EdWeek's Market Brief advises ed-tech companies to provide "important counsel" to schools in this time of turmoil.
This task will also be "personalized" – and again, let's please recognize the role that education technology has been playing in facilitating this very moment. The federal government is signaling its adherence to the Silicon Valley narrative about schools: a rejection of any collective effort in education – how it's to be practiced or appraised – and the privileging of individual attainment.
"Covid’s Deadliest Effect Took Five Years to Appear," writes Siddhartha Mukherjee. "Covid was a privatized pandemic. It is this technocratic, privatized model that is its lasting legacy and that will define our approach to the next pandemic. It solves some problems, but on balance it’s a recipe for disaster. There are some public goods that should never be sold."
This, but for education.
AI is/as Cheating:
From the press release: "Turnitin launches Turnitin Clarity, bringing transparency and integrity insights to education."
There's a lot to be said about the history of that OG plagiarism detection tool Turnitin – the whole market for products that help students cheat and that help students avoid getting caught cheating and that help teachers identify cheaters. A Venn diagram that's just one big circle.
Soon after the launch of ChatGPT, Turnitin released a tool that it said would identify students who'd used AI to write their essays. Surprise, surprise: it didn't work — lots of false positives, frequently for the writing of non-native English-speaking students. So now, rather than simply auto-flagging essays as auto-generated, this new product will compel students to write their essays on the Turnitin platform, where they can be watched the entire time — oh, and get instant AI feedback on their writing.
Now not only the final "product" but the entire writing process becomes part of the Turnitin corpus.
We've got this song on repeat: students are all using AI, and they're using it to cheat. And teachers – often depicted, in this story, as technologically inept and unsophisticated, a vestige of a bygone era – are struggling to keep up. It's not a good song (is it even a real one?); but it's an earworm. And it sure sells product: to students, who are being told they'll be left behind if they don't use AI; and to schools, who are being promised that AI can stop students from cheating, can help students write, and can make everyone everywhere AI-ready in the process.
On Standardized Testing, Deception, and Eugenics:
"Chatbots Are Academically Dishonest," Matteo Wong in The Atlantic declares, suggesting that AI models might be cheating on the benchmarks that have been designed to assess their "intelligence."
And well, duh? I mean, when the origin myth, the foundational document of a field – in this case, "Computing Machinery and Intelligence" – proposes that one might answer the question "can a machine think?" with a game based on deception, yeah no shit, your field is going to privilege deception. Stories make the world.
AI benchmarks are standardized tests, and artificial intelligence is very much bound up in psychometrics – in measuring, ranking, and rating "intelligence" based on test performance.
Arguably the most important mental test in US history was – just like artificial intelligence – a military endeavor: the administration of Army Alpha by Robert Yerkes et al. in order to evaluate military recruits for the First World War. This test was not, to be fair, "standardized"; but it was nonetheless administered in all its inconsistencies to some 1.75 million men. Stephen Jay Gould admits in The Mismeasure of Man that "I do not think that the army ever made much use of the tests." But its dubious findings, he argues – that Black men and immigrants from southern and eastern Europe were far less intelligent than men of German, Scandinavian and British descent – offered "objective data" to justify passage of the Immigration Restriction Act in 1924, which imposed "harsh quotas against nations of inferior stocks."
Intelligence testing has always been the key tool of eugenics propaganda.
An anecdote, offered in passing, from this NYT story on Trump's call to scrap the CHIPS Act (which was designed to make the US less reliant on Asian chip manufacturing). Apparently Michael Grimes, a new senior official at the Department of Commerce, has been conducting interviews with employees of the CHIPS program office:
"In interactions some described as 'demeaning,' Mr. Grimes asked employees to justify their intellect by providing test results from the SAT or an IQ test, said four people familiar with the evaluations. Some were asked to do math problems, like calculate the value of four to the fourth power or long division."
There's been much talk this past week about AGI (artificial general intelligence) and much insistence (again) that it's here or it's coming — that the government knows, as Ezra Klein breathlessly reported. Cool that the government is now headed officially by a guy who likes to insult women and people of color by calling them “low IQ”; and by another guy, unofficially, who’s openly embraced eugenics. Again, might I remind you: the g in that acronym AGI was the contribution of eugenicist Charles Spearman.
"Mens Sine Manus" – engineering professor Josh Brake writes quite beautifully about AGI, asking in part what might be our moral responsibility with regard to this technology and its fantasies.
I will say – I will shout over and over and over – that our moral responsibility with regard to a technology and a fantasy of eugenics must be full-throated rejection.

Creative Necropolitics:
Sam Altman announced this week that OpenAI is working on a product to write fiction. He offered a sample of the prose, a short story on – of all the Second Breakfast-related things you could prompt a prediction-machine to generate – AI and grief. "It's beautiful and moving," novelist Jeanette Winterson wrote in The Guardian. And yeah, I was moved for sure: absolutely fucking disgusted.
Grief has been an obsession of sorts with many in AI – "grief tech," we’ve been told, is a thing. Perhaps, after COVID (or my god, worse: what’s coming), folks see money to be made in the death market. Or perhaps the tech industry’s interest in grief is simply connected to its obsession with longevity – these rich men's quest for immortality (the singularity), their fear of aging, let alone dying. Perhaps it's a "fuck you" to the Vatican, which recently cautioned against AI idolatry, observing that "as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment—longings that can only be truly satisfied in communion with God."
Whatever the reason: to autogenerate text and call it a short story about grief and artificial intelligence is to twist what it means to feel and to think, to cheapen what it means to be human – to know death comes for all of us, to face, throughout the course of our lives, the deaths of those we love the most. Each one of us is utterly unique. Nothing about our being and our end-of-being can be replicated by a series of tokens, churned out, in the case of OpenAI’s short story, with an ironic sneer; our lives and our losses cannot be generated then regenerated, the traces of us in data simply something for others to extract and play with, infinitely remixable, reducible, repeatable. Grief — one of the most powerful and harrowing experiences of humanity — mere artifice, embraced as artifice, auto-performed for admiration.
It’s been five years since the pandemic. We have not, as a society, come to terms with all the loss, even though, I think, we have all very much spent the last few years entirely subsumed in the grief. It’s been almost five years since Isaiah’s passing. I carry the grief with me still — not just in my heart, like the clichés that the LLM corpus feeds upon would have it, but in my body.
AI will never ever know.
Too many people have chosen, rather than to struggle with both the personal and structural implications of death and suffering and impermanence, to embrace AI, to believe its promise of easing all burdens, life-tokens and text-chains everlasting.
Too many people are ready to thrust aside what it means to be human – quite convenient, I'd say, since those in power right now clearly have no plans to let us survive.
Thanks for subscribing to Second Breakfast. I don't intend to send a newsletter on Monday, as on Sunday I'm running the NYC Half Marathon. I mean, I could probably write something on Saturday, sure sure. But I'm likely to spend the day eating carbs and pacing back and forth nervously instead – not much different than usual, now that I think of it.