Who Cares about the Holidays?
I'd originally planned to take today off from the newsletter – a little post-Thanksgiving respite from my writing (and your having to read it). But the steady drumbeat of AI hype hasn't stopped, so how can I? (You're free, as always, to not-click.)
I really can't say if I'll be emailing you regularly on the final few Fridays of the year. I'd like to get my book proposal written in December, and I have such a massive stack of books to read before then – it's all a little daunting. Nonetheless, paid subscribers will continue to hear from me on Mondays with updates on my research and writing (and, of course, on my running and eating).
Autogenerated holiday content
From the Google blog: "5 ways I'm handling the holidays with Gemini." Absolutely depressing AF. As Caitlin Dewey wrote this week, you probably should not outsource acts of care – even brainstorming about acts of care – to a goddamn machine.
Capitalism, for what it’s worth, tries to convince us that the act of care is the gift, or even the gift-giving. But it’s not: it’s the thought that counts. “Handling the holidays” with AI is thoughtless.
Who cares about work?
"Pretty much all Gen Z knowledge workers are using AI, survey finds" – or so Axios tells us. Do we actually believe this? I mean, I am skeptical of any story that uses "pretty much all" in the headline, I won't lie.
The story summarizes a recent survey administered by Google (a totally unbiased source of computing-related content). The findings run counter to other polls that have been conducted on workplace usage of AI, which show far less enthusiasm.
But we sure do love the story that young people are the first to grok the latest technologies – it's the future, man – and that these tools will eventually/inevitably find widespread adoption among older workers too, who'll learn from their younger peers how to get things done faster/better/cheaper.
Even if we do believe that Gen Z knowledge workers are pretty much all using AI, there are other ways to interpret this finding, I’d argue. Perhaps it's less about some glorious capabilities of the tools – they're using them for note-taking in meetings? oh, you mean to summarize the transcript of a Zoom call? wowie-zowie – and more because Gen Z (a group that has disproportionately high rates of unemployment) feels like they must do everything they possibly can to keep their jobs, to appear innovative and productive, and to survive in a heavily surveilled workplace, a place where their presence and their ideas are not valued at all. Perhaps it's more "fuck this bullshit job" than "yay, AI makes my workday so amazing."
(I mean, I'd rather see people join unions than "quiet quit" or “phone it in” but hey.)
Or maybe the Gen Z knowledge workers who responded to Google’s questionnaire were mostly male technologists, a group that, as Judy Wajcman wrote in 2019, “sets time” — a group that is unencumbered by domestic demands and that has embraced the “engineering mindset, [where] economic rationality and efficiency become virtues in and of themselves.”
So maybe it’s not that this technology is transforming the workplace (I mean, summarizing emails that are already bullet points? That is a low fucking bar); rather, it’s that these workers are. That is, this group of young men is (and has been for a while now) the model for what workers should be: hyper-productive, busily automating their own task list so there need not be a subsequent hire for the job. Not a care in the world.
Speaking of doing nothing at work: "Are Overemployed ‘Ghost Engineers’ Making Six Figures to Do Nothing?" asks 404 Media. God, I sometimes do wish more people in technology were doing nothing. The planet would be better for it.
"The AI Reporter That Took My Old Job Just Got Fired," Guthrie Scrimgeour reports.
Misunderstanding understanding
It's "A Revolution in How Robots Learn," James Somers declares in The New Yorker. And phew, there's a lot going on in this article – a grand display of that desperate belief among AI researchers and technology journalists that the human is a machine and this time for sure we're on the cusp of engineering the latter to match if not outpace the former – intellectually and physically. A snippet:
From the moment my son was born, he’s been engaged in what A.I. researchers call “next-token prediction.” As he reaches for a piece of banana, his brain is predicting what it will feel like on his fingertips. When it slips, he learns. This is more or less the method that L.L.M.s like ChatGPT use in their training. As an L.L.M. hoovers up prose from the Internet, it hides from itself the next chunk of text, or token, in a sentence. It guesses the hidden token on the basis of the ones that came before, then unhides the token to see how close its guess was, learning from any discrepancies. The beauty of this method is that it requires very little human intervention. You can just feed the model raw knowledge, in the form of an Internet’s worth of tokens.
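(If you're curious what that "beauty" amounts to mechanically, here's a toy sketch of hide-the-next-token-and-guess – plain Python, a made-up ten-word corpus, and a lookup table of counts standing in for the neural network. An illustration of the idea only, nothing like an actual LLM training pipeline:)

```python
# Toy "next-token prediction": hide the next token, guess it from the
# ones that came before, unhide it, learn from the discrepancy.
# A lookup table of counts stands in for the neural network; the
# "corpus" is made up.
from collections import Counter, defaultdict

text = "the cat sat on the mat because the cat was tired".split()

counts = defaultdict(Counter)  # previous word -> counts of what followed it
correct = 0

for i in range(1, len(text)):
    context, hidden = text[i - 1], text[i]            # "hide" the next token
    if counts[context]:
        guess = counts[context].most_common(1)[0][0]  # best guess so far
        correct += guess == hidden                    # "unhide" and compare
    counts[context][hidden] += 1                      # learn from the discrepancy

print(f"guessed {correct} of {len(text) - 1} hidden tokens")
```

Scale that up to an Internet's worth of tokens and swap the lookup table for a neural network, and you have the training loop the article is rhapsodizing about. What you don't have is a baby.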
Contrary to the formulation above, human learning does require a lot of human intervention — formally and informally. That is, in fact, how and why we learn – because we are social. You cannot just feed babies "raw knowledge." Knowledge is social. Yes, learning is embodied — brilliant insight, guys — and it is intertwined with others’ bodies. Channeling Lev Vygotsky here: a gesture of an infant is not an act-in-itself but an act-for-others.
So much of this piece — its fantasies about unsupervised machine learning and autodidactic robots — made me wonder what this vision means for teaching: for teachers and, more broadly, for those who work in all sorts of female-coded jobs. I mean, in the final paragraphs, the author asks who, in the future, will do the work of caring for babies (and spoiler alert: it isn’t Nurse Marsha).
Standardized testing for robots
From Live Science: "Mathematicians devised novel problems to challenge advanced AIs' reasoning skills — and they failed almost every test." There's a lot to unpack about the use of "standardized testing" to measure AI – they call it "benchmarking," but whatever.
For all the arguments that education activists have made about the flaws of standardized testing, it's pretty fascinating to see computer science double down on this particular mode of assessment for measuring "intelligence" – but they would, wouldn't they.
Elsewhere in education technologies
From The Wall Street Journal: "5 Ways Students Can AI Proof Their Careers." (This is mostly just “evergreen” content — “advice” about learning to get along with people and to complete your work on time. But it's funny how these sorts of stories get re-packaged and retold, no matter what "innovation" is forced down our throats.)
Benjamin Riley marks "A tiny victory in the battle against AI-generated stupidity."
According to Axios, "AllHere founder arrest shows it's easy for startups to scam VCs." Lots of folks sure want to reassure us that this isn't an ed-tech thing; it's just a startup thing. Not great either way. And certainly not a good look for Forbes' "30 Under 30," which honored Joanna Smith-Griffin back in 2021. I'm not saying everyone who's made that list is bad news but I’m not not...
"ChatGPT Has No Place in the Classroom," Emily from Mystery AI Hype Theater 3000 argues.
More bad chatbots
"Character.AI Is Hosting Pro-Anorexia Chatbots That Encourage Young People to Engage in Disordered Eating." "Google's AI Chatbot Tells Student Seeking Help with Homework 'Please Die'." "Users Furious as Character.AI Deletes Countless Beloved "Harry Potter" Chatbots."
The future of the mind/body
Wired wants us to believe that "Combining AI and Crispr Will Be Transformational." And "Neuralink Plans to Test Whether Its Brain Implant Can Control a Robotic Arm." Hard pass.
"The Enclosure of the Human Psyche" by L. M. Sacasas.
Alone together
"Can a fluffy robot really replace a cat or dog?" – Justin McCurry goes there, I guess.
Japan’s 125 million people appear to have the emotional and financial capacity for pets and robots, but less so for children. According to a 2023 survey, Japan has more pets – including 7.1 million dogs and 8.9 million cats – than it has children under 15 (14.7 million).
AI versus the planet
Via Palo Alto Online: "Tesla spills chemicals that cool its AI supercomputer into local creek." It's gotta be bad if officials in Palo Alto are objecting.
Police state
Via MIT Technology Review: "How the largest gathering of US police chiefs is talking about AI."
Brian Merchant has some thoughts on "tech under Trump."
The alleged apolitics of AI
"Bluesky is ushering in a pick-your-own algorithm era of social media," Chris Stokel-Walker argues in New Scientist. I can't help but think about the research published last month that AI supposedly can help build consensus, help us "find common ground." And about what Mike Caulfield wrote about this week about the "hyperreal presidency." That is to say, I'm not sure that "pick-your-own algorithm" is necessarily a win?
De-platforming
From the Institute for Advanced Study's "5 Theses on the Gravity of Platforms":
Tragically, platforms have been sticky in that they offer a solution for that problem too. Especially economists and political scientists have theorized this issue. One prominent example is Albert O. Hirschman’s essay on Exit, Voice and Loyalty. If you don’t like a platform, the story goes, consider exit (leaving the platform for another one), voice (raising your concerns to those who operate a platform), or reconsider loyalty (the reasons why you might stick to the platform in spite of everything). The problem is of course that voice ceases to be effective when the platform does not need you, that exit is difficult if there are no alternatives, and that loyalty is not a quality of customers but rather a de facto condition of our participation.
Stolen goods
Via 404 Media: "Someone Made a Dataset of One Million Bluesky Posts for 'Machine Learning Research'." Side note #1: I have long assumed that the name of the AI company "HuggingFace" — whose employee is the “someone” in this headline — was a reference to the monster in Alien. Apparently, it's this emoji: 🤗. But I don't think I'm wrong. Side note #2: There are few things I loathe as much as this belief by the technology industry that every aspect of our lives — all our creativity and expression and activity — is just “data,” there for the taking.
"There’s No Longer Any Doubt That Hollywood Writing Is Powering AI" – more thievery of content by tech companies in order to train their AI.
John Warner has some thoughts on that Scientific Reports study about people's preference for AI-generated poetry. Max Read argues that "People prefer A.I. art because people prefer bad art."
More AI slop
Via The Bookseller: "New publisher Spines aims to 'disrupt' industry by using AI to publish 8,000 books in 2025 alone."
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word.
All our knowledge brings us nearer to our ignorance,
All our ignorance brings us nearer to death,
But nearness to death no nearer to GOD.
– T. S. Eliot, Choruses from The Rock
Via Wired: "Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated." Hardly a surprise that all of Microsoft will soon be overrun with AI slop, considering its relationship with OpenAI. The arc of the technology universe is not terribly long but Microsoft still manages to bend it towards enshittification. Always.
AI as dipshit accelerationism
From Erin Kissane's "against the dark forest":
yes, the existence of dipshits is indeed unfixable, but building arrays of Dipshit Accelerators that allow a small number of bad actors to build destructive empires defended by Dipshit Armies is a choice. The refusal to genuinely remodel that machinery when its harms first appear is another choice. It's mega-platform executives, themselves frequently dipshits, who make these choices, lie about them to governments and ordinary people, and refuse to materially alter them. But in the Dark Internet Forest, the mega-platforms and their leaders are missing from the frame except as shadowy super-predators.
"Watching the Generative AI Hype Bubble Deflate" by Mars Hicks and David Gray Widder. Good riddance.
Unserious considerations
The Verge’s Elizabeth Lopatto responds to a video of OpenAI's CEO discussing how he takes notes – in a spiral notebook?! WTF?! Then he rips out the pages and throws them on the ground?! Are you 12, Sam? "I have some notes on Sam Altman’s note-taking advice," she writes – and while I think she's wrong about pens and pen color here, she is correct about this: "This is a man who has not carefully considered his tools and expects someone else to pick up after him. That does explain a lot about OpenAI, doesn’t it?"
Meanwhile, "OpenAI is funding research into ‘AI morality’" Techcrunch reports. Guessing "AI morality" doesn't include picking your goddamned, crumpled-up, spiral notebook pages up off the floor. That's just more invisible labor that technologists never deign to consider.
To consider: to think with care.
Thanks for being a subscriber to Second Breakfast. I am immensely grateful for your ongoing support. Please consider upgrading a free subscription to a paid one so that I can continue to do this work. I'll post a link to this newsletter on Bluesky (sigh), but if you want to chat, just hit "reply" to the email.