12 Years and 60 Minutes Later
This will be the last Friday you'll receive a Second Breakfast newsletter in 2024. I'm not writing my usual "year-in-review" series either, as easy as it could have been to just copy-and-paste "artificial intelligence" a thousand times over and call it good.
Paid subscribers will continue to hear from me – lucky lucky! On Monday, for example, I'll write about what was surely one of the most-discussed AI stories this week: Casey Newton's rather disingenuous claims about "the phony comforts of AI skepticism." Lots of other people have responded already (and rightly so), but I'd like to talk a bit about what technology criticism does and why technology criticism matters.
Spoiler alert: criticism is a form of reading, one that is both implicitly and explicitly under attack by forces that seek to privilege a specific kind of reasoning – an algorithmic reasoning, best exemplified (arguably) by ChatGPT. Criticism is a political act; but then again, so is AI – far less a technology of knowing or thinking than an ideology of prediction and control.
But AI is a technology that CNN's Anderson Cooper pronounced on Sunday night "could one day change the way every student is taught." And that’s an assertion that might seem exciting and innovative if it weren't the same argument we've been hearing about various technologies, including AI, for almost one hundred years now.
Yes, Khan Academy was back on 60 Minutes, returning to the venerable Sunday night news program for the first time since we were breathlessly told that Sal Khan and his videos were poised to revolutionize education, way back in 2012. (Ah yes, "the year of the MOOC," that other AI-experts-understand-education disaster.)
Anderson Cooper, the correspondent for this week's segment, does not delve deeply into the last decade-plus of Khan Academy's ed-tech revolution – on the question of, say, whether the thousands and thousands of videos watched and multiple-choice quizzes taken on the platform have changed things at all, let alone for the better. “Things” could mean students’ test scores, I guess. Could mean students’ happiness. Could mean teachers’. Could mean what Khan himself has learned. But in fact, Cooper does not delve into any of that at all. Cooper does not have any follow-up questions on Khan's prior work, on the goals he stated back in 2012 for his organization: that classrooms would some day be comprised of “20 to 30 students working on all different things,” moving at their own pace, with a teacher who simply "administers the chaos." Indeed today, that doesn't seem to be the goal – in all the classrooms we're shown, the students seem to be working on the same lesson, one designed (we're led to believe, at least) by the new Khan Academy AI, Khanmigo.
It's very striking in that first segment a decade-plus ago how little consideration Sal Khan gives to pedagogy. He tells Dr. Sanjay Gupta that when he has to make a video about a subject he hasn't thought about since high school, he goes out and buys four or five textbooks and reads all he can online about the subject. "I’m 95% of the time working through that problem real time or thinking it through myself as I’m explaining something,” he admits. There is no consideration of how best to communicate, how to teach. Teaching, if it’s considered at all, is merely a matter of content delivery; and somehow his short, animated videos are not lectures – the bad teaching he imagines pervades all classrooms, across all schools – but are instead transformed into "active learning" because a student can hit "pause" or "rewind," because they're accompanied by a quiz.
So what has changed in the intervening years? You can see in that 2012 episode some of the early expressions of the "platforming" of education – something we're neck deep in today. Khan Academy's CTO boasts of the "massive amounts of data" that the site has amassed – "a gold mine for learning what paths through learning are effective." (In fact, it is just massive amounts of data about how people use Khan Academy, but hey. No one would be silly enough to try to train AI on this or anything.)
Oh. It is this data trove that was seen as a gold mine by OpenAI – the reason why the organization first reached out to Khan Academy, requesting to use the platform to train and test ChatGPT. (That is to say, ChatGPT was built, in part, on the labor of students. Education platforms are extractive machinery, in this case, a machinery out of which OpenAI and Khan Academy built Khanmigo.)
Teaching machines teaching machines – and I'd say it's "turtles all the way down" except that all that data still seems to be the selling point that Khan Academy believes will lure teachers to the product too. While managers and engineers might love a data dashboard, I'm not sure teachers really do. In both episodes – then and now – teachers who are beta-testing Khan Academy in their classrooms explain how you can click click click click to "drill down" into what each individual student is doing, how many seconds they've spent on a problem, what exactly they were doing at 8:13am, what they typed, when.
But you don't really get the sense that that's what happens – I mean, who has time for that?! As though teachers, already grossly overworked, are supposed to find the time to micro-manage each individual student's "digital footprint" after class, not just during class. (Perhaps they are supposed to do so with all the free time they have now that Khanmigo can create lesson plans in "minutes.") It all feels quite speculative — that data dashboards reveal something that teachers could never see and then enable them to… I don’t know. Nonetheless, this greater surveillance is positioned in the storyline as an act of greater care.
“I can imagine a lot of teachers watching this and saying ‘this will replace me,’” Cooper chuckles in the closing sequence. Does Khan really believe there will always be a need for human teachers? he asks. “Yes,” Khan replies earnestly, “that’s what I’d want for my children.”
“The hope is that we can use AI and other technologies to amplify what a teacher can do so they can spend more time standing next to a student, figuring them out, having a person-to-person connection.” Or as education psychologist Sidney Pressey wrote in 1926, also hoping to convince the world of the need to automate the classroom, a teaching machine could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”
There are a few moments in the segment where you see teachers doing that — the very excited chemistry teacher, amping up her 8am class. And in sharp contrast, the AI feedback in Sunday’s segment is far from inspirational. "Don't patronize me," Cooper responds at one point when ChatGPT tells him his attempt to draw a human body is "a good start." You can argue – and my god, do people lean into this – that soon the technology will be much, much better. But when you look at Khan Academy in 2012 and Khanmigo now, it's fair to wonder, will it? And even if it does get better – if, say, AI is able to recognize the hypotenuse of a right triangle as it fails to do in a demo by OpenAI's Greg Brockman – is that where "the problem" really lies? I mean, does Khan Academy, after all this time, know what it should do to get “better”? Would that even matter?
In a different scene, Cooper uses Khanmigo to get feedback on an essay he wrote about his mother back when he was in sixth grade – again, the emphasis here is on how quickly the AI responds. Efficiency, that's the priority (and as with surveillance, this speed is confused with care). Sarah Robertson, a former seventh-grade English teacher who's now a product manager for Khan Academy, walks Cooper through the process, remarking that, when she was teaching, she had 100 students; if she spent 10 minutes on each of her students' first drafts, that was 17 hours of labor. "The burden that we place on teachers to give that specific, timely, actionable feedback is just so great that it's just not possible," she says. But perhaps the solution here isn't to have students write their papers for robots; perhaps teachers should not have 100 students.
In that first 60 Minutes appearance, there's a clip of Bill Gates at the Aspen Ideas Festival, boasting that he'd been using Khan’s YouTube videos to teach his own children. We have Gates to thank for – waves hands around – a lot of this shit. But of course, Gates' children were not taught by Khan Academy. Nor were they taught by Gates himself. They attended private school in Seattle. (Average class size: 17.) Here's Dan Meyer with pure poetry:
Anderson Cooper (a Vanderbilt scion) says, “If every kid could have a private tutor, that would level the playing field,” and Sal Khan (making >$1MM / year, per Khan Academy disclosures) responds, “Yeah, that’s the dream.” It should not surprise us to see economic elites dream of leveling the playing field through technology contracts that would benefit other economic elites.
Other countries level the playing field for their kids through redistributive policies like child welfare, public housing, nationalized health care, etc, all of which would require increasing the tax burden on economic elites. It should not surprise us to see which of those dreams receive fawning coverage during primetime corporate media.
For all the talk I hear about ed-tech revolutions – AI or otherwise – the only example I think the mainstream media consistently talks about is Khan Academy. On 60 Minutes. On Oprah. There are no “endless possibilities” for AI in education. This is it. This is the vision of the future of education that’s backed by the world's most powerful people, not because it is good but because, for them, it is expedient and it is profitable. We’re supposed to dream of a future where AI "could one day change the way every student is taught" as Cooper puts it, and not confront the realities of classrooms today.
Other robots that are coming for your children: Embodied, a "social robot" for kids, goes broke; robot breaks. "Startup will brick $800 emotional support robot for kids without refunds," as Ars Technica puts it. "A chatbot hinted a kid should kill his parents over screen time limits." I made the mistake of looking at Edsurge because this is an actual headline: “Does Facial Recognition Belong in Schools? It Depends on Who You Ask.” Gotta hear all sides when it comes to hustling “cop shit,” I guess. "Your AI clone could target your family, but there’s a simple defense." You'd think the "simple defense" would be "don't build an AI clone, you fucking fool." But I guess we're going with something like "use a strong password" instead.
Building bodies: "Designer Babies Are Teenagers Now—and Some of Them Need Therapy Because of It," says Wired. Totally normal world we’re building.
Who's zooming who: According to Popular Science, "This Android Can Experience Feelings Humanity Has Never Felt." Um. Just re-define “feelings” and then, sure. A "rebrand" of sorts from the video-conferencing tool Zoom, which now declares itself an "AI first company." "By summarizing meeting tasks, drafting email responses, and preparing you for meetings, AI Companion is your digital assistant that reduces your overall workload. Over time, we believe these capabilities will translate into a fully customizable digital twin equipped with your institutional knowledge, freeing up a whole day’s worth of work and allowing you to work just four days per week." (I guess you're going to want to read that article, linked above, on how to defend yourself and your family from your AI clone?)
It's been a rough year for Comparative Literature — RIP Fredric Jameson — but next semester looks even worse: "UCLA offers comp lit course developed by AI," TechCrunch reports. "Kudu" (I keep wanting to write "Kudzu," after the invasive plant) was developed by a physics and astronomy professor, so clearly someone with deep expertise in literary studies. It's almost like asking a hedge fund analyst with 3 MIT degrees and an MBA from Harvard to design your classroom.
AI under Trump: "The Big Shift" in AI, according to the CTO from the Emerson Collective (that is, Laurene Powell Jobs' venture philanthropy; Arne Duncan's employer), will involve the "new regulation reality" under the Trump Administration. His first point – "It's all going to be about beating China in the AI race, with every policy decision viewed through this more permissive lens" – is important and should serve to remind us that, at the end of the day, AI is fundamentally a weapon of war, not (despite folks wanting it to be so very badly) a nifty gadget for personalized tutoring or workplace productivity.
AI slop: OpenAI launches (then un-launches) its video production AI, Sora. $200/month for ChatGPT Pro – "Maybe there's a market?" "AI slop is already invading Oregon’s local journalism," OPB reports. The LA Times billionaire owner "Patrick Soon-Shiong adding AI ‘bias meter’ to the LA Times to convince readers it’s not biased." Tom Scocca finds that "The Washington Post burns its own archive." "I Went to the Premiere of the First Commercially Streaming AI-Generated Movies," writes Jason Koebler of 404 Media. "All of these films are technically impressive if you have watched lots of AI-generated content, which I have. But they all suffer from the same problem that every other AI film, video, or image you have seen suffers from. The AI-generated people often have dead eyes, vacant expressions, and move unnaturally." [Insert joke about how I look after watching Khan Academy videos here.]
Brain rot, but make it cinematic: Thanks to AI, "the celebrity machine never dies," says The Atlantic. Thanks, but no thanks. Werner Herzog's new film, Theater of Thought, explores the benefits and risks of neurotechnology. That's something to look forward to.
"The GPT Era is Already Ending," according to The Atlantic at least. Well. Phew. I guess that's a wrap, folks.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Your support helps me do this work – sifting through all this AI bullshit that I know we're all incredibly exhausted by, trying to make sense of the senselessness. Paid subscribers hear from me twice a week, which is about as fun as a Herzog movie on neurotechnology, for sure.