AI Foreclosure

"AI Will Empower Humanity," says Reid Hoffman, venture capitalist, OpenAI funder, co-founder of Linkedin, and member (along with Peter Thiel, David O. Sacks, and Elon Musk) of the "PayPal Mafia." (Related, from The Guardian's Chris McGreal: "How the roots of the ‘PayPal mafia’ extend to apartheid South Africa.")

Hoffman imagines a future in which "individual empowerment" stems from the extraction of all our personal data by technology companies – data which is then used to build AI that will help us in turn to optimize and automate our decision-making (and, of course, keep Hoffman a billionaire). No doubt, this total surrender of our data, our privacy, our autonomy is already well underway.

"Imagine A.I. models that are trained on comprehensive collections of your own digital activities and behaviors," Hoffman writes. "This kind of A.I. could possess total recall of your Venmo transactions and Instagram likes and Google Calendar appointments. The more you choose to share, the more this A.I. would be able to identify patterns in your life and surface insights that you may find useful. ... [I]magine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6."

Imagine a world in which AI dictates your decision-making, limits your options about what you can and should learn, and thus forecloses your future. This is the disempowering and dehumanizing future of education and AI, one in which students' futures are constrained by the past – by their own past decisions and by the data trail of other students, those that the algorithms decree to have similar profiles.

"This will go down on your permanent record" – long a largely empty threat that, under a regime of data extraction and surveillance, now means that students' futures are permanently recorded, predicted, and policed. And thanks to the opacity of AI's algorithms, there will be no redress – no ability for the student or their parents or a teacher or counselor to demand an explanation or appeal.

In this AI future, there is no accountability. There is no privacy. There is no public education. There is no democracy. AI is the antithesis of all of this.

Education is a liminal space – one of becoming. (Arguably, every day of our life is that very thing; that is, every day we choose who we want to be. We choose – a machine should not.) In school, we have carved out a specific time and a specific place for this emergence, one that is – ideally at least – a time and place to discover, to practice, to take risks even, to learn to love a world beyond one’s own. It is not merely a place for personal self-fulfillment, but one in which students engage (and yes, disengage) with others – ideas built and shared in community, not just as elements for individual refinement but, we always hope, for social progress, for all our benefit.

Liminal spaces are, as the anthropologist Victor Turner argued, "betwixt and between." Education similarly finds itself in that awkward middle ground between its obligation to the past – the pedagogy, "the curriculum" – and its commitment to the future – a radical belief that, in every student and in every lesson, there is potential for something utterly new and transformative to emerge on the other side.

Education, too, is where we decide whether we love our children enough not to expel them from our world and leave them to their own devices, nor to strike from their hands their chance of undertaking something new, something unforeseen by us, but to prepare them in advance for the task of renewing a common world. — Hannah Arendt

But AI – a large language model or predictive algorithm or otherwise – is built on a corpus that is, quite literally, bound to the past. Education's AI has been trained on outmoded curriculum, exclusionary practices, and racist data; it is trained on YouTube videos and YouTube comments and Wikipedia entries; it is trained (mostly) on the English-language Internet – trained on a very small slice of knowledge and culture because not all knowledge and culture have been recorded, let alone digitized; and yet simultaneously trained on a disproportionately large slice of discrimination and violence, because that has been the experience of Black students, poor students, students with disabilities, non-English-speaking students, undocumented students, and queer, nonbinary, and trans students. AI bends students to fit that old bell curve (yes, that bell curve), and there it breaks them. It does not, it cannot, liberate them.

To insist, as Hoffman does, that AI offers something other than compliance and control is to admit to existing beyond the reach of these discriminatory data regimes and practices, to being beyond reproach politically and financially and intellectually.

AI cannot hurt you or harm you or stop you from becoming because it already reflects your beliefs, your reasoning, your values. It sings to you in your voice, with your words and inflection, assuring you how very reasonable, how very intelligent you already are.


It’s all too much. Already. Thank goodness for Rusty Foster, who reads a lot of the Tabs so I don’t have to.

Who Will Pay: "The Chaos in Higher Ed Is Only Getting Started," Ian Bogost wrote last week, after Trump announced new restrictions – on funding and on communications – at various federal departments and agencies, including the NIH. That presaged this week's move to shut down funding for almost everything. “Oops,” but not really. “They are digging a shithole so deep that there will be literally no way to climb back out again. They want us all to be down there in the dark as a punishment for the temerity of having been who we have been,” as Timothy Burke puts it.

Via ProPublica: “Vera Rubin Was a Pioneering Female Astronomer. Her Federal Bio Now Doesn’t Mention Efforts to Diversify Science.”

Agency Will-to-Power: Casey Newton on OpenAI's new agent, Operator. "What are 'AI Agents' For?" asks John Herrman. Ben Betts on "The fall of click-next e-learning: What Operator means for training." — David Wiley wrote something similar several months ago. These echo what students at Sidney Pressey’s university foresaw a century ago: schools automate teaching and testing; students automate their schoolwork and test-taking in turn. We've long decided that education can be reduced to machines chattering at each other, that somehow humans and our humanity are extraneous to the process.

Microsoft Copilot stops working on gender-related subjects. But sure, sure, remind me again, Reid Hoffman, how AI is going to unleash everyone's "humanity."

Men on DeepSeek: Casey Newton with "Four big reasons to worry about DeepSeek (and four reasons to calm down)." Brian Merchant on "The great undermining of the American AI industry." "DeepSeek: The Greatest Growth Hack of All Times meets its David in a Chinese Quant," by Georg Zoeller. "I don’t believe DeepSeek crashed Nvidia’s stock," says Timothy B. Lee. Ryan Broderick posits that "The AI guys were lying the whole time." Benjamin Riley calls the whole thing "DeepIrony." "OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us," 404 Media reports. Gary Marcus with the last word: "Karma's a bitch."

I'll have more to say on how all this — AI, federal funding, national security, standardized testing — might feed into the latest "Sputnik moment" [eyeroll] on Monday. A hint of where some education reformers might be headed with this is in this wildly shitty headline in The 74: "Across All Ages & Demographics, Test Results Show Americans Are Getting Dumber."


Teaching and Testing Machines: "Why the Term ‘Artificial Intelligence’ Is Misleading" by Pablo Sanguinetti. The NYT on "Humanity's Last Exam" – the standardized test of dooooooom.

"In Edtech, You Either Bet On Teachers Or You Have To Build One," Dan Meyer argues. I’d say, even more broadly, that you have to trust students too – trust in their capacity to learn and grow and reflect and become. You have to bet on people. (The tech oligarchs and the Trump Administration are not betting on people.)


AI Literacy: "Google pushes global agenda to educate workers, lawmakers on AI," Reuters reports. Can there be a less trustworthy teacher on this topic? (Sadly, yes!)

In The Conversation, we find some serious handwringing that the more you know about AI, the less you want it in your life. So researchers are eager to explore how to get folks to become "AI literate" while also keeping them in the dark about how AI actually works, about the unethical, exploitative foundations of the technology. (Damn, I sure called this one.)


AI is a Bullshit Generator, and so is the White House: “Ending Radical Indoctrination in K-12 Schooling.” Again (am I repeating myself enough yet?), as AI obfuscates its sources and removes accountability, it is going to be the perfect technology for erasing anything and everything “critical” in the curriculum. Read: written by someone who isn’t a conservative white man.

This is the future of ed-tech:

in coordination with the White House Office of Public Liaison, coordinate bi-weekly lectures regarding the 250th anniversary of American Independence that are grounded in patriotic education principles, which shall be broadcast to the Nation throughout calendar year 2026.

Technology is a Loss: "Are we losing the ability to write by hand?" asks Christine Rosen. "What happens to our culture when websites start to vanish at random?" – s. e. smith on link rot.


Panic Buttons: "Are Cell Phones Really Destroying Kids’ Mental Health?" asks Siva Vaidhyanathan. "Is Social Media More Like Cigarettes or Junk Food?" asks Cal Newport.


Going Going Gone: In the latest missive from the excellent The Sword and the Sandwich, Talia Lavin asks "Who Goes MAGA?" – a play on Dorothy Thompson's classic 1941 article in Harper's, "Who Goes Nazi?"

Who we were born to, who we choose to be on emerging from that chrysalis, what we love and who, these shape us. Nevertheless, who we are is always a choice: every indrawn breath is a choice, too. Nice people do not go MAGA, although people who are respectable and who are good at seeming nice go MAGA all the time. 
That’s what makes the game so fascinating, the game of who goes MAGA: who would choose to drink the poisoned chalice when pushed up against the wall—and who reaches for it with both hands. And why. 
An upbringing or a code, innate instinct, rough experience, empathy or politesse can draw us away from vulgarity and cruelty. Pride and fear, venal self-absorption, a desire for vengeance, cowardice, conformity, jealousy and loneliness can draw us into hate.

It's a “fun little parlor game,” Lavin argues – no algorithmic predictions needed here, just your own sense, when you look around at friends and family and neighbors and professional colleagues, of who is wildly receptive to fascism. So. fucking. fun.

Thanks for reading Second Breakfast. As I've hinted above, I'll be back on Monday with some thoughts on Sputnik, which played an instrumental role in the development and adoption of ed-tech in the US (along with efforts to discredit progressive education).