Stuck Character Service
"It's still early days," some guy splutters in anger, peeved that people like myself are cackling loudly at Sal Khan's admission that Khanmigo has been a big flop. "It's simply too soon to say" anything about "AI" in education, he insists, posting angrily at the sheer audacity that someone, anyone (but specifically Dan Meyer and also specifically me) would dare offer an "RIP" for Khanmigo and for those "edtech industry dreams of AI tutors."
"Someday soon" we will have robot tutors, this guy vows.
And perhaps someday, god help us, we will – not because the technology will ever be all that good but because technological autocrats will give us no choice, because we will have abandoned public education for some promise of instantaneous individualization – maybe for our kids but much more likely for those ones.
I don't believe that "AI tutors" are the future, to be clear. Most people don't, and increasingly they are questioning this weird vision of their kid being trained in some Ender's Game click-factory.
"AI tutors" are not the future; they're the past.
"It's early days" – people toss out that cliche all the time to defend the failures of "AI" to work well, to be embraced, to make money, and so on. But it's not early days. Not remotely. We've been trapped in Sam Altman's ChatGPT hustle for almost five years now. More broadly, the field of "AI" – in education or otherwise – can only be described as nascent if all technologies that emerged in the Cold War era – the dishwasher, the television, the intercontinental ballistic missile, the chatbot, for example – are also still in their infancy.
Several of the earliest "AI" researchers – Marvin Minsky, Seymour Papert, Roger Schank, Herbert Simon – were interested in human, not just machine learning. And they (and their grad students) then went and built computerized systems that tried to teach. The first intelligent tutoring system was developed in the late 1960s. "AI tutors" are older than Mark Zuckerberg; they're older than Marc Andreessen.
Alas, some people do love to ignore or dismiss history (ironically, many doing so while also embracing "AI," which is in its own way just an amassment of historical data algorithmically re-presented). I'd quip that these folks have the memory of a goldfish, but apparently goldfish can actually remember things for up to a couple of weeks; whereas some folks can read the admission one day that Khanmigo didn't really revolutionize education and then turn around, just a few days later, and not blink at all at the news that Khan Academy, along with ETS and TED, is going to revolutionize education.
The brain fog here isn't just this strange, impenetrable aura of awesomeness surrounding Khan Academy either. (Having very deep financial resources along with the backing of industry and media surely helps keep many people duly reverent.)
This new venture, the Khan TED Institute, will supposedly offer a competency-based bachelor’s degree in "AI" “for as little as $10,000.”
Sound familiar? Yeah, it's an initiative that sure sounds a lot like all the coding bootcamps that sprang up a decade or so ago when the "jobs of the future" were all purportedly going to be in software development. Remind me, what happened to that trend?
While some bootcamps did partner with colleges, the primary thrust of the initiative was to bypass the university – part of that larger Silicon Valley narrative (ugh, again with those technological autocrats) that traditional educational institutions and practices were too slow. Too human (too humanities). Too feminized. Too impractical. Too "woke." Silicon Valley has been more than happy to join forces with those sowing discord (plenty of it, frankly, well deserved) about the costs and content of higher education. As Jeppe Klitgaard Stricker observes, this new Khan TED Institute plans to "reimagine higher education without it." While partners include Google and Microsoft and Bain and McKinsey (LOL. Consultants), there is not a single university involved.
For this and for many obvious reasons, KhanTED is, in the words of John Warner, "bullshit." Bullshit yet again. He is right that Khan's "history of failure relative to his stated intentions is both instructive and encouraging"; but I still worry because these sorts of projects are precisely the exploitative, deceptive (and sometimes expressly fraudulent) visions of the future of education (and ed-tech) that actively serve to make things worse for students, and often the most vulnerable ones at that.
Ben Riley has published a very useful “illustrated guide to resisting 'AI is inevitable' in education,” which points to recent research and recent news. (Still more research and news from the past week alone: “AI Assistance Reduces Persistence and Hurts Independent Performance.” "Delegation to artificial intelligence can increase dishonest behaviour." “The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought.” “How AI Disrupts the Teen Mental Health Field.")
Do also please read The New York Times profile of Ben and his father: “He Warned About the Dangers of A.I. If Only His Father Had Listened.” It's incredibly heartbreaking and enraging, and just one of so many stories of how this technology is tearing us apart.
(I still feel like the "'AI' psychosis" discussion is missing the mark and failing to capture the larger social shifts around sociopathy that digital technologies encourage. I really will try to finish up some writing on that.)
On Tuesday, I published an essay (for paid subscribers, hint hint) on "The Productivity Software Way of Thinking" – what was meant to be a 20-minute talk, but ugh. COVID-related cancellation.
I haven't stopped stewing about some of these ideas (as one does when one actually reads and writes and thinks rather than relying on the fancy autocomplete to perform a sad parody of these tasks. But also as one does when one is feverish with COVID.)
As I noted in my remarks, I think it's significant that "AI" is being injected into the productivity suite of products. "AI" is, after all, at its core about squeezing more productivity out of workers. But it's also the site in which the artifacts of knowledge production have been, well, produced for the past few decades. And now, under the guise of the "unreasonable effectiveness of data," you get a product like Notebook LM, in which users (students and teachers and principals and professors alike) believe that "the answers" will magically emerge, without theory or thinking.
Why is it that the most vocal cheerleaders of generative A.I. are always the hackiest motherfreakers around? – Colson Whitehead
I'll be at the Humans First: Adolescent Education in the Age of AI conference in Atlanta this summer. This is going to be a stellar event – no vendors! no sponsors! – focused on "keeping human formation — not efficiency or automation — at the center of conversations around student development as we examine the social and ethical implications of AI."

Today's bird is the common kestrel (Falco tinnunculus), a.k.a. the European kestrel, the Eurasian kestrel, the Old World kestrel, or in countries like the UK where there are no other related birds, simply the kestrel. The species name "tinnunculus" comes from the Latin "tinnulus," or "shrill." While the kestrel is notably smaller than other birds of prey, it was once – according to Wikipedia at least – known as the "windfucker," because of its habit of hovering while hunting.
Thanks for subscribing to Second Breakfast.