Surrender

Surrender, surrender (but don't give yourself away) – Cheap Trick
"Does AI coaching cause more injuries?" asks Raziq Rauf in his latest Running Sucks newsletter. I've heard these concerns about the "personalized training" app Runna for a while now– call it "personalized training" or call it "AI," the marketing is similar in fitness tech as in ed-tech. I've heard how Runna's algorithmically-generated training plans were incredibly demanding, asking people to run more, farther, faster than they were used to. So maybe there are more injuries now, particularly among the new users who've joined since the company was acquired earlier this year by Strava (and, as with every app "AI" is constantly shoved in your face); or maybe it's just fall-marathon season, and you know, people are pretty banged up.
"That’s the word on the message boards," Rauf writes. "Complaints of Runna’s training plans center around ramping up both distance and intensity too quickly leading to overuse injuries like stress fractures, muscle tears and shin splints. But I’m not so sure it’s AI’s fault."
I'm not sure it's AI's fault.
I've heard a lot of that lately. If you get an incorrect response from an LLM, I'm not sure it's AI's fault. Rather, it's your fault for not continuing to prompt it until it surfaces the appropriate answer. When you are angry – and angry not just about one error but about the whole immoral illogic of this "thinking machine" chicanery – it's your fault for not being satisfied with the technology's "imperfections." "AI" might be mediocre; it might even be bad (emphasis on "might be"), but humans are bad too (oh, definitely). Humans make mistakes. And humans who question "AI" – my god – they're the worst.
Implied here: give up. Surrender. Lower your expectations. Things might be bad, sure, but if you struggle, you'll just make it worse.
That refusal and resistance get labeled Luddism, that critics are deemed "dinosaurs" and "alarmists" – this isn't surprising. It's a fairly common rhetorical maneuver to insist that one's own beliefs and practices mark the future – inevitable even if imperfect – while others' are simply longing for, clinging to the past.
But let's be clear: the rejection of "AI" is not nostalgia; it is not a longing for some once-upon-a-time of school.
Certainly many educators – at both the K-12 and university levels – say they're experiencing a great sense of loss; many describe what "AI" is doing (or threatens to do) to education using the language of grief. But this isn't a denial of the future; it is grief about a future denied: their students', their own.
The university, to borrow from Tressie McMillan Cottom, does not love you. It never has. We know that. (Or at least, some of us do.) Perhaps it's easier for pundits – still primarily white male elites – to invoke the "thousand years" of higher education history – ancient practices, medieval curricula, and other such clichés – and imagine finding some solace, some camaraderie in there. Perhaps it's easier for them to overlook that universities have only very very recently (and reluctantly) included faculty and students who did not (quite literally) embody that white western ideal.
I do wonder: what moment do these folks imagine we Luddites yearn to return to? Those four weeks in May 1968, I suppose. Or maybe the few years that marked the height of California's Master Plan. But wait, those were years before the advent of Black Studies or gender studies or disability studies programs, years before women or people of color were admitted as students or hired as professors or, god forbid, earned tenure at many colleges and universities. That's what you think we want to go back to? Seriously?
To reject "AI" isn't to demand the restoration of some former glory of the utopian campus. There is no such place, as the original Greek reminds us; there never has been. Rather to reject "AI" involves a political and intellectual commitment to better institutions. And better institutions, better practices now not just off in some hand-wavy tomorrow.
Better now – and so we reject technologies and practices that foreclose thinking, that extract value (in both a financial and moral sense), that surveil teaching and learning, that circumscribe agency and freedom, that outsource and privatize, that reduce everything everything everything to a transaction to be financialized, optimized, leveraged, controlled.
This is about much much more than "AI," but this compulsion to acquiesce to an algorithmic vision of education has raised the stakes and threatened the very core of what those who are most hopeful – knowing all we know about the flaws of school – see as its possibilities: education as a practice of freedom.