The Alpha Bet

Sometimes you have to repeat yourself. Sometimes you didn't say things clearly the first time. Sometimes your intended audience didn't hear you or they didn't listen. Sometimes there were louder voices, different messages that drowned yours out. Sometimes you need a do-over. Sometimes you are certain, "ah, this time, this time, I'll get it right."
To be clear, this is not my making the case for yet another attempt at a Fantastic Four movie – good grief, Marvel. Stop it already.

It's simply a lament that, back in January, Dan Meyer wrote an excellent essay on MacKenzie Price's "2 Hour Learning" hustle – schools in which students spend just two hours a day on math and reading and, thanks to "AI" instruction (the school boasts "no teachers"), achieve "2.6x," even "6.5x" growth. They "crush it," the company's website reads, echoing the language of startup and hustle-culture figures like Gary Vee and Tim Ferriss.
Dan's essay really should have been the final thing anyone had to say about this whole "2 Hour Learning" endeavor, which, a lot like Vee and Ferriss's very popular shtick, sells a certain story that a certain audience finds very appealing. Problem is: it's mostly bullshit.
As Dan quipped, "They haven’t replaced teachers with AI. They have replaced poor kids with rich kids." (They haven't replaced teachers with anything, I'll add. They simply call the adults in the classroom "guides" instead; in some job announcements, they still require that these adults have teacher certifications.)
There are, in fact, two versions of the startup product that MacKenzie Price is selling: one, a private school that costs $40K a year, in which students spend those 2 hours doing interactive worksheets (it's not "AI"; it's just plain ol' "adaptive learning" software, cleverly rebranded) and then engage in various hands-on projects for the rest of the day. Shockingly, these affluent students seem to turn out okay!
Price's other startup, Unbound Academy, is a virtual charter school, and she's expecting you to conflate the two. She's hoping you won't ask obvious questions like "what the hell do students do for the rest of the day once they've done their obligatory click-farming" – because virtual charter schools don't exactly cater to days filled with fun, hands-on group activities. Even more damning, we know that virtual charter schools – "AI"-enhanced or not – are bad, bad news, so bad that even the Walton Family Foundation, which has regularly funded all sorts of truly terrible educational initiatives, has admitted as much. So bad that students would learn as much math by not attending school at all as they do by attending an online charter school.
I wasn't totally shocked when The New York Times pronounced this week: "A.I.-Driven Education: Founded in Texas and Coming to a School Near You" – an Alpha School is quite literally coming to a neighborhood near me in NYC. But reading the story, you can see how this vigorous handwaving that some folks are doing about "AI" is already shutting down our critical faculties well before the LLMs have had a chance to do so.
It's all a con. A dangerous, dangerous con.
ICYMI - Attack drones will be deployed at select U.S. schools to stop school shooters in emergencies, by Campus Guardian Angel, a company founded by U.S. military and defense contractors.
— Disclose.tv (@disclosetv) July 28, 2025
AI is cop shit.
Sonja Drimmer writes, "Every so often someone like Mark Zuckerberg or Sam Altman will dribble out some unadorned text, announcing with stentorian certitude the advent of a new world that their latest product will avail. Zuck seems to love dressing up his thought bubbles in Times New Roman for the purposes of LARPing intellect, which I find funny and tragic."
The CEO of Meta typed out some deep thoughts on "Personal Superintelligence" on Wednesday. Or maybe he typed them out earlier – days, weeks, months ago – and it was simply on Wednesday when the folks in PR decided it was okay to hit "publish," lest all the discussions about OpenAI's educational endeavors and Anthropic's astronomical valuation push Meta out of the "AI" limelight yet again.
Zuckerberg argues that, with superintelligence (whatever that is) "now in sight," we will be freed from the chains of productivity software – a claim I do find quite interesting as I believe this software (the spreadsheet, the "doc," the PowerPoint) has profoundly shaped our thinking over the course of the past few decades. A claim I find interesting, but not appealing because the vision that Zuckerberg has instead – blah blah blah "more time creating and connecting" – is at best totally banal. (There are echoes of Altman here, whose "gentle singularity" is also incredibly vapid.)
The tech oligarchs talk a lot about the coming capabilities of their "AI" to utterly transform everything everything everything but particularly "work"; and yet they seem to have no fucking clue what "work" is, other than writing a few lines of code or sending a few emails. That group chat with Andreessen maybe. Work, to them, is a white collar affair, almost exclusively managerial at that.
The kinds of reproductive labor that are foundational for everything, that actually maintain the world, are so absent from their vision because they literally do not see the people – Black, brown, immigrant, women – who do this work.
But as Zuckerberg tries to carve out his visions for an "AI" that is about everything else beyond work – something that "helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be" – it's so painfully clear there is absolutely no vision at all.
It's all a con. A dangerous, dangerous con.
OpenAI has launched something called "study mode" in ChatGPT, which Wired says is "designed around the Socratic method" because god knows, if you can reference an ancient Greek philosopher when extolling the benefits of AI tutors, you are home free.
Study mode, according to OpenAI, will not just give students the answers to their homework questions. "Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something." The system prompt specifically instructs the chatbot to
DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response. Instead: **talk through** the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing.

None of this will stop students from using plain ol' regular ChatGPT to do their homework for them, of course, but I guess we're supposed to still clap that OpenAI "takes education seriously" or some shit like that.
The system prompt also says
Be warm, patient, and plain-spoken; don't use too many exclamation marks or emoji. Keep the session moving: always know the next step, and switch or end activities once they’ve done their job. And be brief — don't ever send essay-length responses. Aim for a good back-and-forth.

As Benjamin Breen observes in his testing of study mode, there are a lot of assumptions here about what "good teaching" looks like. (Socrates, clearly – renowned for his warmth, patience, and plain-speaking.) Breen finds he's able to get the chatbot to be quite agreeable, raising the specter of the recent update that made its responses too sycophantic even for the very sycophantic Sam Altman. "A future of LLM tutors which are optimized to keep us using the platform happily — or, perhaps even worse, optimized to get us to self-report that we are learning — is not a future of Socratic exploration," Breen writes. "It’s one where the goals of education have been misunderstood to be encouragement rather than friction and challenge."
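Worth remembering what a "study mode" actually is, mechanically: not a new model, just a block of instructions prepended to the conversation. Here's a minimal sketch of that convention – the prompt text, function name, and message format are illustrative, not OpenAI's actual code:

```python
# A hypothetical "study mode" persona: the behavioral rules live in a
# system message that gets prepended to every exchange. The prompt text
# below is an illustrative paraphrase, not the real system prompt.
STUDY_MODE_PROMPT = (
    "DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. "
    "Talk through the problem one step at a time, asking a single "
    "question at each step. Be warm, patient, and plain-spoken."
)

def build_messages(history, user_turn):
    """Prepend the persona prompt to the running conversation."""
    return (
        [{"role": "system", "content": STUDY_MODE_PROMPT}]
        + history
        + [{"role": "user", "content": user_turn}]
    )

messages = build_messages([], "Solve 3x + 5 = 20 for me.")
# The persona is just more text in the same context window as the
# student's text -- there is no separate enforcement mechanism.
```

Which is partly why Breen could coax the bot into agreeableness so easily: the guardrail isn't a rule, it's a suggestion competing with everything else in the conversation.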
The vision of the future, as imagined by Altman and Zuckerberg and Thiel et al, is one in which they cannot fathom anyone ever pushing back, ever bristling at them. It's a world without friction. A world without disagreement. It's a con. A dangerous, dangerous con.
(Related: Timothy Burke argues "Generative AI IS the Marshmallow Test." And Rusty Fowler contends "We Need to Talk about Sloppers" – those people who use ChatGPT to make every decision.)
There have been a number of posts recently on LinkedIn, that hub of AI hype, about how students are going to use AI agents to do every task assigned in the LMS and how teachers are going to use AI agents to do every task in the LMS and what-are-you-going-to-do-about-it. And listen, I get the urge to sing "the Doom Song." But as I watch some of these, I am utterly unimpressed with the technology, mostly because it's the goddamn LMS. I mean, yeah – we've built an utterly templated pedagogy with templated tasks on top of a templated online portal, and someone's trained a bot to ingest the templates and automate the templates and click the little boxes, and we're supposed to be panicked / thrilled?!
The Wall Street Journal reports that "The Most-Taught Books in American Classrooms Have Barely Changed in 30 Years," drawing on a recent NCTE survey. The top 10: Romeo and Juliet. The Great Gatsby. The Crucible. Macbeth. Of Mice and Men. To Kill a Mockingbird. Night. Hamlet. Fahrenheit 451. Frankenstein.
"The staying power of the classics...has as much to do with inertia as literary merit" – inertia. The ol' "schools haven't changed in hundreds of years" (or in this case decades) narrative strikes again.
But it's so much more complicated than that. (It always is.) Book bans are at an all-time high, and attempts to introduce different texts, diverse texts, are met with hostility, even violence. According to the NCTE data, 20% of teachers reported having no choice in book selection; even more said they were following a scripted curriculum.
Such a strong push for "AI" agents; barely a word about teacher autonomy. Perhaps that's the point.
Such a strong push for automated text extrusion in the classroom, but little questioning about why the machinery might write so passably about Romeo and Juliet.
"Waiting until kindergarten to start teaching AI literacy misses a key window of opportunity," says sponsored content in The 74, so that's a depressing way to begin a learner's life (and end this newsletter).
Thank you for reading Second Breakfast – or for scrolling all the way to the bottom of this email before hitting "archive" or "delete."
Some digital housekeeping: I'm going to start publishing my weekly essays – that is, those missives I mail you that are not primarily links and reactions to the news – on Tuesdays. On Mondays, I'll send out a personal newsletter, which is an idea I've stolen from Kin, who brilliantly decided that this was the best way to keep friends updated without having to participate in the whole social media or blogging data-extraction / LLM-training machine.
There's no way in hell I was just going to opt you into receiving yet another email from me, particularly one in which, these days, I mostly talk about marathon training – what I've eaten and how many miles I've run. But if you log into your Ghost account here – that button in the upper right corner that says "Account" – you can update your email preferences and sign up if you like. (As a subscriber to Second Breakfast, you will actually be able to read these updates from me simply by visiting the website. You really needn't get another email. You needn't.)