Spot the Difference
"School hasn't changed in hundreds of years." So goes the story invoked by politicians, entrepreneurs, and journalists -- a cliché often followed with an urgent call for school administrators to buy and teachers to adopt the latest technological gadgetry, gadgetry that's poised, so these storytellers insist, to "revolutionize education," to utterly transform how teaching and learning will happen.
Of course, school has changed over the last century, in ways both big and small. (This is, as many of you know, some of the Introduction to Teaching Machines, which opens by arguing that Sal Khan’s “history of education,” just one of these popular “schools haven’t changed” stories, is wrong.) There have been changes in demographics, laws, expectations, pedagogies, and science, just for starters. But I’d say that we can no longer pretend that technological changes, particularly those brought about by digitization, are somehow yet to happen in education. Computers are always marketed to schools as "the future." But they are also very much now the past.
Even “AI,” which is [barf emoji] heralded as the latest and greatest revolution humankind has ever seen, is old. “AI” has been a part of education technology now for over fifty years.
2026 marks the 100th anniversary of Sidney Pressey's landmark article that launched the whole teaching machine industry: "A simple apparatus which gives tests and scores-- and teaches." Pressey, like many early educational psychologists, had worked on early efforts to develop standardized testing -- at first a way to rank and rate soldiers in World War I and then a way to rank and rate students. Pressey and others believed that an educational machinery could automate both testing and, importantly, teaching. And while his device predated the computer by decades, the digital tools that followed have never really broken from this legacy, one bound up in eugenics, behaviorism, control. Indeed, these are the values that underpin education technology to this day.
And these ideas, these technologies have changed education. They have reshaped how we think about thinking (the pervasiveness of the mind-as-machine metaphor); they have altered pedagogical practices; they have shifted the kinds of work that students and teachers do, along with the ways in which they do them. They have shaped the expectations of what students and teachers believe they can do -- not just the “everyone should learn to code” stuff and the twisting of the purpose of education to be solely about job training and “career and future readiness,” but about how students understand their own abilities, how they see (or don’t see) their own agency, how they control (or don’t control) their own inquiry, curiosity, attention.
The algorithms tell you who you are, who you can be, what you should do; you cannot be trusted, students have been told by the machinery for decades now, to know yourself ...as anything other than a consumer, that is.
“Formatting as many minds as possible, shaping people’s desires, recrafting their symbolic world, blurring the distinction between reality and fiction, and, eventually, colonizing their unconscious have become key operations in the dissemination of microfascism in the interstices of the real.” – Achille Mbembe, Necropolitics
Because individualism is one of the core ideologies of computing, education technologies have served to undermine a democratic vision of schooling -- often quite explicitly through the funding of various educational initiatives by the tech industry’s wealthy investors. Powerful forces have convinced us to invest in computers, but not in one another, not in people; and we’ve dismantled democracy with a shrug -- but hey, at least the kids have Internet.
The damage to education is even more awful, even more insidious than this: the kinds of pedagogical practices that these technologies encourage -- students working alone, “at their own pace,” for starters -- have helped to undermine our shared understanding of, our shared respect for one another. The answer, these technologies insist, is in the machine, not in one another.
The machine, so the “AI” supporters now insist, is vastly superior to the human. Why learn when you can never be as fast or as shiny as a computer? Why even try? Why even practice? Why even bother?
Epistemic nihilism is a fundamental element of this surrender to an “AI”-assisted technofascism; but perhaps we should look at how this has been building -- has been built into -- education technologies for a much, much longer time.
Mostly, I’m offline. And yet somehow the bad news still gets through...
- “One Week Without Smartphones on a College Campus” by Callie Holtermann in The New York Times. In some ways, this is a fairly formulaic article: students try to eschew technology, but in the end can’t live without it. Yawn. But you do catch glimpses here of what I suggested above: digital technologies have circumscribed knowing, not just of “course materials,” but of the world and people around you. Also it’s just so grim that one administrator’s response to students organizing this “tech fast” is “what if there’s a school shooter and we have to text everyone?!”
- “On Evaluating Cognitive Capabilities in Machines (and Other "Alien" Intelligences)” by Melanie Mitchell -- wherein “other ‘alien’ intelligences” here means babies and animals. I couldn’t help but think of B. F. Skinner who, in explaining the use of operant conditioning, added that “comparable results have been obtained with pigeons, rats, dogs, monkeys, human children… and psychotic subjects.” Same same.
- “AI’s Memorization Crisis” by Alex Reisner in The Atlantic. “AI does not absorb information like a human mind does. Instead, it stores information and accesses it.” Although “AI” companies have insisted that their LLMs do not contain copies of the data they’re trained on, oooops, it looks like they do. And that’s super neat-o considering the amount of stolen copyrighted materials and child porn that these have been trained on.
- "Microsoft is closing its employee library and cutting back on subscriptions" by Tom Warren for The Verge. Workers are supposed to learn from "AI." Translation: they want you de-skilled and replaceable. Good kicker: “Most publishers won’t be keen to discuss losing corporate contracts or subscriptions, but SNS didn’t hold back on its thoughts about Microsoft’s AI-powered learning future. ‘Technology’s future is shaped by flows of power, money, innovation, and people — none of which are predictable based on LLMs’ probabilistic regurgitation of old information,’ says Berit Anderson, chief operating officer of Strategic News Service. ‘We look forward to welcoming Microsoft back into the SNS community whenever they decide they would like to return.’”
- “The risks of AI in schools outweigh the benefits, report says” by Cory Turner for NPR, in an article that just exemplifies the “balance” -- one pro, one con, one pro, one con -- that drives me up the wall. Here’s a direct link to the Brookings report: “AI’s future for students is in our hands” by Mary Burns and Rebecca Winthrop. “These risks are neither inevitable nor immutable: We can bend the arc of AI implementation toward supporting student learning and development,” the report says. Suuuuuuuuuure, Jan.
- “‘ELITE’: The Palantir App ICE Uses to Find Neighborhoods to Raid” by Joseph Cox from 404 Media. Well, thank goodness the think tanks and consultants are out there bending the arc of AI implementation...

Today’s bird is the common loon. The Wikipedia entry for the loon contains this trivia gem: “With improved gene-sequencing technology, a draft genome of the common loon has assembled and identified at least 14,169 genes. 80.7% of chicken genes are found in the common loon genome.” My dad used to joke that the loon was our family bird (and my brother and I got matching loon tattoos in memoriam). It’s the state bird of Minnesota too (and yes yes the Canadians lay claim to it too symbolically, I know I know) so here’s to all the good people of Minneapolis who are fighting for us all.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as your financial support enables me to do this work. My plan for 2026, as it stands at least, is to write more about the history of ed-tech and to remind people that this "AI" stuff isn't necessarily new or revolutionary as much as it is the fantasy of Cold War-era scientists and military officials: information, order, control, obedience. But on Mondays (or thereabouts) I write about breakfast, actual breakfast – you can add the "personal day" newsletter to your subscription by clicking on your account profile above.