AI Slop Education

Oxpeckers on a warthog (Image credits)

There's a tendency to write about technological change as an "all of a sudden" occurrence – even if you try to offer some background, some precursors, some concurrent events, or a longer, broader perspective, people still often read "all of a sudden" into any discussion about the arrival of a new technology. That it even has something like an arrival. A recent oral history, published in Quanta Magazine, of what happened to the field of natural language processing with the release of GPT-3 is, arguably, exemplary here – of how hard it is to tell a story when the "everything changes" narrative is so very very strong in how we frame things. There are many different voices in this article – many researchers recollecting their first reading of a paper outlining the transformer technology that now undergirds generative AI; many debates and divisions within the field (was this a flash in the pan? did scaling really solve everything?); and many concerns that corporate interests were shaping what "success" meant in research, overriding any other perspectives. I don't think that I blame Thomas Kuhn specifically, but damn, it sure seems we'd like everything to be a paradigm shift, everything to be a scientific revolution.

A lot of people are heavily invested – some quite literally – in an AI revolution, and the "everything changes" story gets invoked all the time, often strategically to undermine and shame certain sectors (to threaten labor, broadly speaking) that they've deemed in the way, resistant to change. It cedes a lot of power to technology; it obscures a lot of other machinations (say, capitalism).

"Everyone is Cheating Their Way Through College," says James Walsh in The Intelligencer. This is the "everything changes" story everyone's talking about this week, right? "In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments," Walsh writes, and his interviews – with professors and students – seem to confirm this narrative: all of a sudden, everyone is cheating with ChatGPT; “it’s short-circuiting the learning process, and it’s happening fast.”

Perhaps it's happening fast. And perhaps it's been happening for a very long time. Certainly if you read Neil Postman, you can see that there have been grave concerns for a while now about how technologies – it was television for Postman – were distorting both knowledge and civic participation and what that meant for education (and democracy). "Technopoly," he called it. You can go farther back too: to Jacques Ellul, for example, who wrote about "technique" and cautioned about its pressures: that "the human brain must be made to conform to the much more advanced brain of the machine. And education will no longer be an unpredictable and exciting adventure in human enlightenment, but an exercise in conformity and an apprenticeship to whatever gadgetry is useful in a technical world."

I wrote about "academic honesty" last week, and I really don't feel like rehashing that essay here. Cluely and its founders have already received way too much air-time. But I will say this: I don't think that the availability of a new technology – ChatGPT or similar – explains everything that's happening; nor does "cheating."

Indeed, I think what we are facing is as much a cultural and social problem as it is a technological one. Students' utter disengagement with college (hashtag not all students) stems from a variety of factors, both within and well beyond education policy and practice: the incessant standardized testing in K-12 (and the curricular changes that this has wrought – less art, music, and PE, and no recess, for example); a frenzied emphasis on learning outcomes, not on the process of learning; blah blah blah screens/social media; a decline (in and out of the classroom) in reading fiction and reading for pleasure (the decline in parents reading out loud); students' experiences during the pandemic; shifting narratives about the instrumental value of higher education – it's all job skills or nothin'; the increasing cost of college, alongside an insistence that everyone must attain a college degree; growing economic inequality, hyper-competition, and the fears – rightly so, it seems – that there are no good jobs, even for those college graduates; there is no future – so why bother?!

AI, as I've argued elsewhere, is a technology of the apocalypse (and not just for young adults; it is definitely a symptom of a lot of men's mid-life crises).

When it's unsafe to take risks (and learning is all about taking risks), unsafe to do hard things (and learning is all about doing hard things), the push-button oracle of essay generation makes a lot more sense than this ridiculous mess we now expect students to "adult" in.

Much of the post-publication chatter about Walsh's article blames teachers for not adapting quickly enough to AI. The students cheating isn't the problem, apparently; the problem is that school – its practices, its professoriate – is outmoded, having failed to keep up with the Internet, let alone this latest innovation. This strikes me as far too pat a story, particularly since almost two-thirds of faculty in higher education are now contingent – that is, the majority of professors are adjuncts working without long-term contracts, let alone tenure; professors with very limited pedagogical freedom, whose institutional precarity positions them quite far from the stereotype of an academic concerned only with his research and not with his teaching. The idea that teachers are all Peter O'Toole-as-Mr. Chips types is quaint AF.

(Side note, while I think of it: you know what else is quaint AF? "Prompt engineering." You'd like to think that anyone telling students that that is the future of work is now deeply ashamed, but you know that they're not.)

Millions and millions of dollars have already been spent on reshaping the teaching profession to meet the goals of the ed-tech industry – an "affective construction of teacher innovation" – and certainly there is money to be made shoving everyone head first into the AI university. (God forbid we actually fund public education instead of funding the privatization of education via ed-tech.) Conveniently too, the focus on teachers always deflects from the parties that never seem to bear any responsibility for fucking up teaching/learning/knowing, for enshittifying the planet: university administration and the technology industry. (I am also willing to blame school management, FWIW.)

Case in point, the news this week that Google plans to roll out its Gemini AI chatbot to children under 13. Despite all we know about the potential and real problems of generative AI and chatbots – sexual predation, addiction, mental health crises, misinformation, data extraction, and privacy violations (and all we don't know about the long-term effects of this on any of us), Google is very clear that these issues are not its problem, as it explicitly shifts the burden onto parents in the announcement of its impending product release. Parents, Google says, need to remind their child not to share sensitive information with the chatbot. Parents, it says, need to "help your child think critically." Parents, it says, need to remind their child that "Gemini isn’t human."

We can see something similar when Anthropic and OpenAI et al sign contracts with universities. These companies are clearly ravenous for more data to extract and new markets to exploit; and they're keen to build lifelong customers out of the only part of the population that seems to be consistently using their products. But they care nothing for the values or goals of the institutions to which they're selling. Indeed, "academic integrity" – the practice of citation – is an absolute joke to those who have built their AI models, quite literally, by stealing the work of others. These companies have made it very clear: it's not their problem if students cheat. It's not their responsibility if students graduate "essentially illiterate," as one instructor in Walsh's story put it, after relying on AI tools for every assignment and assessment. AI is engineering us for conformity, banality – no wonder the Trump Administration wants every student and every school to embrace it.

AI companies want us to become utterly dependent upon them, not just for academic inquiry and scholarly output, but for every possible communicative utterance, every major and minor life decision, for all facets of companionship, including spiritual guidance. It's not simply a matter of automating formal teaching and learning (as if that's not bad enough); it's about undermining knowing, undermining caring, undermining democracy. That's the business.

The bullshit machine might feel good for a little while, sure – I get it, kids. But as it is part of a larger political project of "soft eugenics," where the rich get richer, the weak get sick, and the clever get to breed, there's not anything coming out of that machine long term but slop and rot.

Love yourself, love the world enough to think and learn.


"AI is not a collection of algorithms, rather it is a social relation among people, mediated by algorithms” – signed G. Debord."
– "A Psychogeography of AI" by Dan McQuillan
