The Agents of Unproductivity

"Could AI slow science?" Sayash Kapoor and Arvind Narayanan ask, and proceed to poke a sizable hole in one of the major claims about "AI" – about its application in healthcare, in education, and beyond: that "AI" will "enable dramatic scientific progress: curing cancer, doubling the human lifespan, colonizing space, and achieving a century of progress in the next decade. Given the cuts to federal funding for science in the U.S., the timing seems perfect, as AI could replace the need for a large scientific workforce." Kapoor and Narayan, best known perhaps for their book AI Snake Oil, posit that "AI" might simply serve to exacerbate the current "production-progress paradox" – that is, the extraordinary growth in the number of scientific papers published alongside the stasis or even decline in scientific progress.
Kapoor and Narayanan's essay coincided with a big story in Wired this week that asked "Where Are All the AI Drugs?" and described the "AI" companies quite literally invested in pharmaceutical research. In typical Wired fashion, the prose revels in excited futurist speculation:
After that, what else might be possible with AI? What if it were shown all the drugs that have ever existed, with all the data about how they work, and then set loose on a database of untried molecules to identify others to explore? What if—and this is where the discussion around machine learning has gotten to now, in 2025—the software could take in a decent chunk of all the information about biology generated by humankind and, in an act both spooky and profound, suggest entirely new things?
Many people seem so feverish, so hungry for "AI" to be useful, for it to be good. In some ways, this makes sense. Faced with the complexity of the problems the planet faces – the "polycrises" (you can't even just go with the singular "polycrisis" any more, things are that fucked up) – "AI" has emerged with solutions, salvations. "AI" makes all sorts of glorious promises – you'll never be lonely, you'll never have writer's block, you'll never suffer.
So despite the obvious villainy of almost all the giant "AI" startups and their founders – MechaHitler and eyeball-scanning orbs and a growing number of techno-fascist companies named after various Tolkien characters – many people keep insisting (wishful, wishful thinking) there are good guys and good uses and good models and someday, someday soon even, with the right tweaks, "AI" will surely cure cancer. (Few of these people are using "AI" to cure cancer, of course; the biggest proponents seem to simply be autogenerating PowerPoint slides for their "AI" consultancy side-hustles.)
It seems much more likely, as Dave Karpf pointed out this week, that "AI" is merely a "satisficing technology" – "satisficing" being a term he borrows from Herbert Simon, a portmanteau of "satisfy" and "suffice." That is, "AI" isn't going to be great; it's not even going to be good. But it doesn't have to be. It's not going to unlock the big secrets of the universe, but for a lot of the things it'll be used for – shrug – who cares. It'll be merely good enough to get by – echoes of Tressie McMillan Cottom's observation that "AI" is decidedly "mid." And I suppose we could wait and see how much mediocrity and slop the world is willing to tolerate as the mediocre slop machines destroy the environment.
But even calling it "satisficing" or "mid" feels a little too generous to me, and I'm really not sure we should just cross our fingers and hope it all works out okay – hope not just for a cure for cancer but an avoidance of environmental collapse, a reprieve from the growing threat of fascism. Because this shit is bad. And "we need to talk about AI in terms of values, not vibes," as Neil Selwyn argues.
(That said, I am probably going to talk at length about the vibes in Monday's newsletter. I'm very keen to think through the research released this past week that suggested that, despite feeling like they were being more productive using "AI" coding tools, people were substantially less so. Because there is definitely something afoot with how "AI" is making people feel – all that glazing and sycophancy – that is really key to getting people to use this bullshit machinery and to abandon human relationships and political solidarity in turn.)
Cop Shit
- "The campaign to make it illegal for ChatGPT to criticize Trump."
- "Grok Adds Pornographic Anime Girlfriend, Lands $200M Defense Contract."
- "Why are liberals cozying up to race science?"
- "Two Days Talking to People Looking for Jobs at ICE."
- "Inside ICE’s Supercharged Facial Recognition App of 200 Million Images."
- "Tech billionaire Trump advisor Marc Andreessen says universities will 'pay the price' for DEI." "Marc Andreessen is a Traitor" by Adam Gurri. "The Techno-Fascist Soul of Marc Andreessen" by Alejandra Carabello.
- The EFF warns that Axon's new product, Draft One – an AI tool to autogenerate police reports – is "designed to defy transparency." I'm really just including this news here to remind you that Hadi Partovi, the founder of Code.org, remains on the board of directors of Axon, a company that makes all manner of surveillance technologies. Learning to code cannot be a path to student liberation or agency when it's literally bound up in "cop shit."
- "The Columbia hack is a much bigger deal than Mamdani’s college application."
- Cal State LA has moved online because of the threats that ICE agents pose to students.
- "The Enshittification of American Power" by Henry Farrell and Abraham L. Newman.
Some Sort of Futurism
- "What the f*ck is futurism?"
- "What Do Commercials about A.I. Really Promise?"
- "Why my p(doom) has risen, dramatically."
Acts of Refusal
- "What it’s like to be in school, trying not to use A.I."
- "Against Compression" by Nicholas Carr.
The War on Children
- Nearly 3 in 4 teens have used AI companions, according to Common Sense Media, which strongly urges those under 18 not to use this technology. (More on this in Monday's newsletter.)
- "How ICE’s Arrest of a High School Student Activated a Massachusetts Town."
- "Queens student to be released from ICE detention after month in Texas facility."
- "How Are Students Really Using AI?"
- The Supreme Court, which seems to be rubber-stamping all of Trump's questionable declarations lately, has ruled he can proceed with the mass firings at the Department of Education and continue his administration's plans to close the department.
- "Kicking Away the Ladder."
- "Youth Sports Are a $40 Billion Business. Private Equity Is Taking Notice." (A reminder that 20% of parents think their child-athlete can play in college and 10% think their kid can go pro. Instead of addressing economic precarity, we've decided to monetize and optimize child's play.)
- "This year is the first time that more U.S. college students will learn entirely online compared to being fully in-person. And research shows most online programs cost as much or more than in-person." Good job on addressing the whole "Baumol's Cost Disease" thing, everyone.
- "Scholastic Became a Children’s Publishing Giant. Now It Needs a Turnaround." It's turning to YouTube apparently, which as Ryan Broderick has observed, is full of AI slop.
- "Google’s Veo 3 Is Spawning an Icky Wave of AI Slop Coaches" – the future of teaching and learning is creating AI slop.
Misogyny is a Feature
- "How incel language infected the mainstream internet — and brought its toxicity with it."
- "Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People." (Hugging Face being one of those "ethical" "AI" companies.)
- "a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise"
- "Grok's new porn companion is rated for kids 12+ in the App Store."
- "ChatGPT advises women to ask for lower salaries, study finds," It's gonna be awesome when they put ChatGPT inside of Barbie. "Math is hard. Let's go shopping," deja vu.
Productivity Sweet
One of the ideas I'm working through (for that whole "book" thing) involves the ways in which the "productivity suite" of tools has shaped education, and in turn has shaped teaching and learning and thinking. "A spreadsheet way of knowledge," to borrow from Steven Levy – the epistemology of Microsoft and then Google, bending "understanding" towards "productivity," then tying the logic of schooling to their platforms. So the news this week, via The Information, that OpenAI is preparing ChatGPT agents that will "challenge" PowerPoint and Excel caught my attention – particularly this wording: "OpenAI’s agents will instead write code to generate a spreadsheet or presentation that looks like what the user wants." That looks like what the user wants. Vibes, not values.
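And to be clear about how banal that trick is: "writing code to generate a spreadsheet" is a few lines of boilerplate. Here's a minimal sketch of the sort of throwaway script such an agent might emit – I'm using Python's openpyxl library here, and the filename and budget figures are my own invention, not anything from OpenAI:

```python
# A sketch of "write code to generate a spreadsheet": the agent emits a
# script like this rather than driving Excel itself.
# (openpyxl is a real library; the data below is entirely made up.)
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Budget"

ws.append(["Item", "Cost"])             # header row
ws.append(["Laptops", 12000])           # invented line items
ws.append(["Software licenses", 4500])
ws["B4"] = "=SUM(B2:B3)"                # Excel computes the total on open

wb.save("budget.xlsx")                  # something that "looks like what the user wants"
```

It is trivially easy, in other words, to produce something that looks like a spreadsheet; whether the numbers in it mean anything is another matter entirely.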
Imagine a future where none of us need to know what shitty Microsoft or Google software looks like. Imagine! But I beg of you: imagine a future where we are in control of our thoughts, our writing, thank you very much, not Sam Altman or any of his fraudster cronies.
Related, from the New York Magazine profile of Robert Caro on the 50th anniversary of The Power Broker:
That Caro’s work is still done on paper, with no digital backup to speak of, marks him as one of the last of his kind. (He had never seen a Google doc until I offered to show him one. He was mildly startled to discover that, in a shared document, the person on the other end can be seen typing in real time: “That’s amazing. What’s it called? A doc?”)
Thanks for reading Second Breakfast. Consider becoming a paid subscriber to read all of Monday's newsletter and to support my whole anti-AI pro-human endeavor.