War Pigs

While I was pleased to see millions of Americans turn out for last weekend's protests, I have to say I've spent this week more depressed and more worried than ever about the road ahead. Perhaps it was the arrest of NYC Comptroller Brad Lander – the latest in a series of arrests of Democratic Party leaders. Perhaps it was the Supreme Court decision to strip transgender youth of their right to healthcare. Perhaps it was the Trump Administration's announcement that they will end the 988 suicide prevention service for LGBTQ+ youth. Perhaps it was the military strikes on Iran by Israel, and (this is my child-of-the-80s fear, I recognize) the looming threat of nuclear war.
And perhaps it's because my work is focused on "AI" and education, and a good number of folks seem to care very little that the technology that they're extolling and compelling everyone else to use is bound up in all this eschatology, in all this violence. It's such casual carelessness too...
"This is the gentle singularity?" Brian Merchant writes in disbelief in response to Sam Altman's claim that the "AI" project promises a painless (although as I noted in last Friday's newsletter, a rather banal) future – all while OpenAI seeks funding from Saudi Arabia and the UAE, two countries with some of the worst records on human rights; all while OpenAI works with fossil fuel executives and the UAE to expand energy production to fuel its hyperscale data centers; all while OpenAI's Chief Product Officer joins a number of high profile tech leaders (from Facebook and Palantir) in some new division of the Army Reserves called "Detachment 201"; all while OpenAI has pursued Department of Defense contracts to integrate "AI" into military technology; all while OpenAI and other tech companies have successfully lobbied the Republicans in Congress to ban all regulation of "AI" for a decade; all while the leadership of OpenAI and Anthropic cackle about the devastating effect their technology will have on workers globally.
[Image: Killmonger gif – "Is this your king?"]
"Now more than ever, we need to think about 'AI' not merely as consumer technology, but as an idea and a logic that is shaping political economy around the globe," Merchant argues.
I think this is why so many of the debates about whether or not "AI" "reasons" utterly miss the mark. It does not matter – to me at least – that "AI" "works" or doesn't work or works-well-enough-because-who-cares. It matters that it's unethical, immoral, politically regressive, environmentally destructive, wrong.
There was yet another research paper released this week that posits there are significant "cognitive costs" of using "AI," in this case for essay writing. Listen, I'm not a big fan of arguments that rely on electroencephalography, but that's just me and my own disciplinary preferences – the pictures do make for nice slides in a keynote, I guess. Needless to say, at this point, there's so much evidence that one can draw on to affirm that "AI" is an anti-education technology, that automated bullshit machines are very very bad for our thinking. But I confess: even if someone published some sparkling, gold-standard results that showed massive "learning gains" or marked improvement on standardized test scores, I still wouldn't bend a knee to the tech oligarchs, ya know? "AI" would still be unethical, immoral, politically regressive, environmentally destructive, wrong.
Mattel announced this week that it's partnering with OpenAI to release its first "AI" toy this year – in its words, to "bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety." But see, you can't have privacy and safety – or at least, you can't have those words mean anything – if you're building products that rely on the surveillance and engineering of childhood. Children cannot have privacy and safety and "AI" toys. Children will not be free to play, free to explore, free to choose who they are in an algorithmically scripted world.
I saw an educator post a video with "AI"-generated students in it, and I had to comment (violating my own personal rule to never comment) that it was worth considering that one of the ways in which technology companies have made these recent breakthroughs in image and video generation is by the ingestion of so much pornographic content, including CSAM (child sexual abuse material). "AI" is the result of and now the vector for CSAM. Every time you "AI" an image, you're exploiting the exploited.
Too many educators and parents hear the marketing from Mattel or [insert your favorite K12 "AI" app] and believe the promises that their "AI" is different: good, safe, private, secure. But it's not – all the large language models have been trained on child pornography because they've been trained on the Internet.
But it actually gets worse – what happens after that initial training is "reinforcement learning from human feedback." That is, human data labelers must go through and review, label, and flag content, in the process watching and reading hours' worth of sexually explicit and graphically violent material.
These labelers – usually contract workers who are grossly underpaid, with little to no job protections, let alone mental health support for the highly traumatizing aspects of the labor they perform – work for companies like Scale AI. Scale AI has been in the headlines a lot over the last week or so, as its founder has struck a deal to work with Meta, to help the latter "catch up" in the "AI" race. (Scale AI has been sued multiple times by contract workers for psychological trauma but – hmmm – the Department of Labor has just dropped its investigation into the company.)
When The New York Times says "Meta has offered seven- to nine-figure compensation packages to dozens of researchers," you know it does not mean the expert labelers in Africa and Southeast Asia. Indeed, these workers are rarely mentioned at all, because, as Mattel says in its press release, companies want us to believe this technology is "magic" – that this new "superintelligence" is the work of computers, not the work of people.
"AI is neither artificial nor intelligent" – Kate Crawford
"AI" is harmful through and through – even if overtly harmful output is flagged and eliminated, the harmful inferences remain in underlying associations and linkages.
You don't need EEGs to see that this shit sucks. You just need to care.
There's something remarkably Scarlett O'Hara-like about many "AI" evangelists – except for the part where they're overwhelmingly men, of course. (Women are lagging behind on AI but they can catch up, The Financial Times assures us. LOL. No thanks.) I mean the whole "fiddle-dee-dee" attitude, the dismissal of the material suffering of others, the scorn when reminded of such things, the self-assurance that one's social status will be sufficiently protective. All in the name of the ability to turn a bulleted list into an email. Slavery.
Albert Burneko, for his part, has a different take on the "AI" boosters: "Toward a Theory of Kevin Roose."

Anyway, here are a bunch of other stories related to ed-tech and anti-education tech: "Coding Provider Tynker Sold for $2.1M as Byju’s Bankruptcy Plays Out." "Chinese tech firms freeze AI tools in crackdown on exam cheats." "What's Happening to Reading?" "Critical thinking was in decline before AI." "AI Is Poised to Rewrite History. Literally." "AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums." "AI Chatbots Are Making LA Protest Disinformation Worse." "The off-ramp: assessment security and the rollback of equity and inclusion." "Amid AI Plagiarism, More Professors Turn to Handwritten Work." "Kids are protesting against I.C.E. in Roblox." The villain in Toy Story 5 is a tablet.
I learned from Karen Hao's devastating rebuke to the tech industry, Empire of AI, that Bill Gates told OpenAI he wouldn't be impressed with ChatGPT until it could pass AP Biology. And so the team came back to him with a demo in which ChatGPT could indeed pass AP Biology, and Gates was apparently amazed. If you don't know why this is ridiculous and hilarious and very very sad and stupid (and there are many reasons), I can't help you.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Your support makes this work possible. And I do try to help. But it's Friday, and I am fucking exhausted.