Secret Agent Man
Some weeks, the education technology news is incredibly grim, and sorry to say this was one of those weeks. (Warning: this is a long email.) Indeed, anytime education- and child-related tech stories fill a Garbage Day newsletter as they did on Monday -- Garbage Day describes itself as a publication that “doomscrolls so you don’t have to” -- it’s just not a good sign.
Several of those stories fell under the subheader “Roblox, OpenAI, The New Web, And Radicalization,” including The Wall Street Journal’s coverage of the internal debate at OpenAI on whether or not to alert Canadian authorities about what eventual school shooter Jesse Van Rootselaar had been typing to ChatGPT. (OpenAI banned Van Rootselaar from the platform but did not alert police.) And 404 Media wrote about how Van Rootselaar had created a shooting simulator inside Roblox, a video game very popular with young people.
Garbage Day’s Ryan Broderick argues that
...both ChatGPT and Roblox are not traditional social platforms. We are very used to the social media wild goose chases that happen after mass shootings, where users scour public platforms for content that might provide some kind of insight into why the attack happened. The unspoken hope being that if we had just caught it in time, things may have been different. To say nothing of all the would-be attackers that are reported to law enforcement in time because of their Facebook or X posts. But apps like ChatGPT and Roblox are not simple feed-based platforms. They are far more reactive and personalized and we are quickly discovering how hard they will be to moderate.
Later in the week, Vulture published a Q&A with the head of Parental Advocacy at Roblox, who (no surprise) says “we’re all responsible” for kids’ online safety.
No need to worry. No need to regulate. No need to hold the company -- this company, that company -- responsible. We just need better “digital literacy.”
“Literacy.” Honestly, that word is getting to be some real bullshit. Often, what’s framed as a “literacy” problem is actually the technology working by design, urging us to be compliant clickers.
But damn, “literacy” is such a friendly way to frame training and branding exercises. It sounds so progressive, so eminently philanthropically fundable: Digital literacy. Web literacy. Coding literacy. AI literacy. Gambling literacy.
[Record scratch.] Wait what? You haven’t heard of the latter?
Yeah, apparently some folks [cough] are trying to make it “a thing” -- and what with the rise of sports gambling and prediction markets, I think we can see what the next ed-tech trend will be. At least EdSurge, bless their hearts, tried to make the case for gambling literacy this week: “The Math Skill Schools Should Teach — Gambling.”
When I texted a friend with a link to the article and my very savvy commentary “what. the. fuck,” I learned that the Alliance for Decision Education exists, its founder a former professional poker player. So that's something to look forward to once education inevitably pivots away from "AI" (as it did with MOOCs and adaptive learning and every other ed-tech trend ever).
Speaking of literacy-laundering, on Monday morning The 74 came out with “the exclusive”: “New Google Partnership a ‘Sizable Investment’ in AI for Teachers” -- that is, a three-year deal between ISTE+ASCD and Google to “offer AI training to ‘all six million K-12 teachers and higher education faculty’ in the U.S.” (Or as Ben Riley wryly put it, “Google and ISTE+ASCD announce new partnership to destroy US education.”)
This "sizable investment" (an undisclosed amount) will flow into ISTE+ASCD under the guise of "AI training" and “AI literacy,” the latter of which, as MIT’s Justin Reich told The 74, is a phrase without an agreed-upon meaning, let alone any substantive research supporting its application. (As Justin has argued elsewhere, we got “web literacy” really really wrong for a long time, and we miseducated a couple of generations of students as a result. So why exactly are we rushing into this whole “AI literacy” thing? I mean, other than the obvious grift, of course.)
Interestingly, the Pew Research Center released some survey data this week on teens’ use and views of “AI,” and somehow, without schools providing adequate (or any) “AI training," more than half of them are using “AI” to do their homework. Why, it’s almost as if chatbots are just another in a long line of consumer-facing technologies that, like posting on Facebook or watching YouTube, don't actually require any special courses or classes.
To be clear, when Google (or OpenAI or Anthropic or Microsoft or whoever) says they’re offering teachers and/or students “AI training” (let alone promoting “AI literacy”) what they’re really doing is brand marketing. This is simply an effort to get more users to outsource their thinking to their particular product.
Perhaps what we need is not technology training but technology un-training – the former is cognitive surrender; the latter will be the only way we can actually pursue learning. Of course, what anti-democratic billionaire technoauthoritarian would ever pay for that?
“What’s the Point of School When AI Can Do Your Homework?,” asks Matthew Gault in 404 Media. Arguably, one point might be to help people not ask such stupid fucking questions.
The story covers “a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein’s website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions.”
And I know that we’re all supposed to freak out about this stuff and wring our hands and teachers are supposed to sign up for the “AI” training programs so that they can be “AI” ready and engineer “AI ready” classrooms and churn out “AI ready” students, but my god. This is bullshit. Einstein is bullshit. It’s a scam, a fraud, a con, a grift.
I mean, yes, it’s bad. The promise of a magic button that will do your work for you is bad. This is bad: the founder who says
agentic AIs are a method of freeing people from the labor of education. ‘I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us,’ he said. ‘We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?’
But also: agentic AI cannot do all that, most of the time not at all, and even if it can, certainly not consistently – and that’s despite all the ways in which universities have sought over the past few decades to instrumentalize and standardize everything, primarily through the terrible technology of the learning management system. (Einstein claims to complete coursework hosted on Canvas.) Agentic AI cannot do this because all classes are different and all instruction is messy and no two professors, even those in the same department, teach the same or grade the same or set up their course in an LMS the same way. I know that the “AI” hype-monsters think we’re on the cusp of the “cheat on everything,” “automate education” world. But we’re not. (One demo, hell, 20 demos of an agentic AI posting on a discussion forum or answering a quiz question does not an agentic anti-education revolution make.)
That’s not to say that this threat is meaningless or irrelevant. We have to confront the beliefs and practices that underlie Einstein’s promises, let alone its supposed adoption. We have to challenge them and examine them with students, with professors, with university administrators, with parents, etc etc etc.
For the past seventy years or so, everyone has been told that education is the silver bullet and going to college will lead to financial stability, if not success. That was a lie but not because learning is worthless or because school is a rip-off. Rather, it’s because capitalism never cared about your bachelor’s degree, and society has been purposefully structured and restructured in such a way that opportunity has declined and precarity increased.
Two more school-related stories from that Garbage Day newsletter on Monday:
- “Student who punched another student holding pro-ICE sign at Lake Zurich High School received 2-day suspension”
- “He made a fake ICE deportation tip line. Then a kindergarten teacher called.” Called, that is, to report parents of a student in her class.
Yik Yak is back, I learned this week. The pseudonymous social network launched over a decade ago, but shut down after a number of high-profile scandals and cyberbullying incidents (and after college students lost interest in its toxicity). It was apparently relaunched a couple of years ago, because Silicon Valley insists on shipping its shitty, exploitative, democracy-destroying ideas to the young.
Any space, any place where there is a potential for community and growth will be surveilled and poisoned.
A little pushback on a comment that Justin Reich makes in that article in The 74 in which he claims that we don’t understand how LLMs work, that even Google engineers don’t understand how LLMs work. We do.
As Rusty Foster insists in a glorious Today in Tabs missive, AI “isn’t a black box. It’s a statistical model of data connected to a mechanism for producing more data that resembles the data in the model.” Yes, sure, there’s a lot more math and a lot more code going on in there (and much of it is beyond my pay-grade), but it’s not actually a mystery, despite those heavily invested in the Great and Powerful Oz sort of rhetoric about the technology.
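If you want to see Rusty's description at cartoon scale, here is a minimal, hypothetical sketch (mine, not his, and a bigram toy rather than anything resembling a real LLM): count which words follow which in some data, then sample from those counts to produce more data that resembles the original. The trick in an actual language model involves vastly more math and vastly more data, but the shape is the same -- statistics in, statistically-plausible text out. No wizard required.

```python
import random
from collections import defaultdict

# A toy "statistical model of data": bigram counts from a tiny corpus.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Record which words follow which word in the data.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """The 'mechanism for producing more data that resembles
    the data in the model': repeatedly sample a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = model.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every "sentence" it emits is built entirely from patterns already present in its training data -- which is the whole point.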
Sam Altman suggested this week (again) that humans are radically inferior to “AI” and, when challenged about the amount of electricity that it takes to manufacture Oz, made some dumb quip about the amount of food it takes to raise a human, an amount that results in a far inferior “intelligence.” What an utter misanthrope.
But Rusty writes something rather lovely/loving while comparing the ways in which “AI” ostensibly learns and children actually learn (the latter really is marvelous, miraculous, beautiful), as he offers a critique of a recent piece by Gideon Lewis-Kraus in The New Yorker:
Lewis-Kraus writes: “At the dawn of deep learning, a little more than a dozen years ago, machines picked up how to distinguish a cat from a dog… Once they had seen every available image of a cat, they could reliably sort cats from non-cats.” Later he asserts that “If a language model can bootstrap its way to linguistic mastery, we can no longer rule out the possibility that we’re doing the same thing.”
I’ve watched all three of my children learn what a cat is, and in each case the number of pictures of a cat they needed to see was not “all of them.” It was like, two or three? Half a dozen, tops. I helped them learn to speak and read fluently, and the number of Reddit posts required was not “every Reddit post.” I don’t need to know what mechanism underlies human intelligence to rule out the possibility that it’s the same as what a large language model does. The whole trick underlying the apparent magic of modern A.I. is simply giving it tons of data. Give it the whole internet. Give it every book ever written. This is required — it does not work with less training data....
Read all of Rusty's essay, particularly if you think that Anthropic are "the good guys."
And then let's ask this quite serious question: what is to be gained by arguing that humans have been surpassed by “AI”? Why push for an end to education? Why insist that your LLM is more intelligent than your child? What does this say about your belief in humanity, in your vision for the future? Why do "AI" advocates hate humans so much, why are they so committed to engineering away all the complexities and richness of the human mind, the human life, the human psyche, the human experience?
Still more links, mostly without commentary:
- “The Right-Wing Nonprofit Serving A.I. Slop for America’s Birthday”
- Ongoing coverage at The LA Times on the FBI’s raid of the home of LAUSD Superintendent Alberto Carvalho. Surprise surprise, the investigation seems to involve AI and fraud
- “Bill Gates Apologizes to Foundation Staff Over Epstein Ties” – where “Epstein ties” mean having two affairs. Good thing no one put this fool in charge of education policy or anything
- “X Really Is Pulling Users to the Right.” Algorithmic manipulation shapes people’s thinking? Gee, who’d have thought? Good thing no one is handing over their cognitive development to a probabilistic technology
- “Overselling the Mississippi Miracle” by Jennifer Berkshire
“They Built Stepford AI and Called It ‘Agentic’” by Abi Awomosu. “Women’s ‘ick’ for AI isn’t technophobia or a gap to close. It’s wisdom to act on.”
The industry narrative about AI automation tells a story about factories — robots replacing assembly workers, self-driving trucks replacing drivers. This is the visible, masculine-coded story about production.
But look at what’s actually being automated first: customer service (predominantly female), administrative assistants (94% female), data entry (predominantly female), scheduling and coordination (predominantly female), contact centers (70%+ female), emotional support (feminized).
The factory narrative is the cover story. The actual automation is happening in the reproductive economy—the care, attention, organization, and emotional labor that women have always performed.
The labor was always treated as mechanical. If a machine can do it, the implication is the work was never truly human. Essential but not skilled. Now it’s being replaced by software that doesn’t need to be paid.
Women don’t need “AI training.” Teachers don’t need “AI training.” They need their work -- all their productive and reproductive labor -- recognized and valued. Politically. Culturally. Materially.

Today’s bird is the Eurasian hoopoe. According to Wikipedia, “The call is typically a trisyllabic oop-oop-oop, which may give rise to its English and scientific names, although two and four syllables are also common. An alternative explanation of the English and scientific names is that they are derived from the French name for the bird, huppée, which means crested.”
And isn’t that just exemplary of Internet informatics: could be this, could be that, who knows, but let’s hit “publish” anyway. Wonder and curiosity once prompted more scientific investigation, but now we just have intellectual choose-your-own-adventures and chatbots that (wrongly) reassure their users that “that’s just how it is” and “there’s nothing more to know or do.”
There's plenty to do. There's plenty still to know.
Thanks for reading Second Breakfast. I'm exhausted.