When the Robots Have Brain Rot
The Oxford University Press's word of the year is "brain rot." I believe that's two words, but whatever. Let an academic press have its moment in the sun, with headlines that, this time around, don't involve selling off its authors' IP to giant technology companies. "Brain rot" is defined as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration."
Incidentally, the first recorded use of "brain rot," according to the OUP, comes from Henry David Thoreau's Walden – the result of living at odds with the natural world, perhaps (and something about laundry and labor). And yes, you can bet I'm going to try to write about it in my book, because my god, B. F. Skinner strikes again.
Speaking of technology and brain rot, we learned this week that the almighty knowledge machine ChatGPT cannot handle the name "David Mayer." "What's going on?" asks Mashable, after learning from Reddit that querying the name elicits an error message. Some people speculated that this might be a result of the EU's Right to Be Forgotten laws – do note how AI's shortcomings are often blamed on EU regulations. And it's not just "David Mayer," apparently. 404 Media found that the names of two law professors, Jonathan Turley and Jonathan Zittrain, produce similar errors.
Thank goodness no one is suggesting that ChatGPT be used in any sort of capacity where accurate information is desirable, let alone demanded. Like, say, education. Or the law. Or journalism.
But it is a bit of a dilemma, isn't it. Because we recognize that Google sucks; indeed we're told – by The Wall Street Journal at least – that "Googling Is for Old People." Then we're scolded for trying to get answers from ChatGPT. When Esquire columnist, old person Charles Pierce, wrote an op-ed this week lambasting those complaining about Biden's pardon of his son Hunter, it looks like he turned to ChatGPT for some research assistance. Pierce argued Biden's pardon was NBD, as other presidents have pardoned family members – namely George H. W. Bush, who'd pardoned his son Neil for his involvement in the S&L crisis of the 1980s. Except, oops. Bush did not.
If it’s not the EU screwing things up, it’s you, dear user, who just doesn’t know how to use the technology correctly.
Or maybe you do, because hell, everyone just seems to be making shit up, citing made-up shit, leaning on and leaning into the brain rot – all in an attempt to stay relevant, I guess. Like Guy Kawasaki, who I sure hadn't thought about in a good long time, who's now telling people that "AI is God." There are rarely any consequences for this sort of brain rot. (Charles Pierce was among the 200 Hearst employees laid off last week, but that's not because he made a mistake; that's because the publisher says it's "reimagining" the writing business.)
Speaking of which, Amazon Web Services held its annual event in Las Vegas this week, so there were a number of very dull press releases about various AI products blah blah blah. (While there’s been a lot of talk lately that generative AI is facing insurmountable scaling issues, the industry has invested too much — emotionally as much as financially — to back away from this future they’re selling.)
This press release caught my eye: "Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks." I'm fascinated by the use of the word "hallucination" – one of many words that attempt to anthropomorphize AI, making it seem as though statistics-at-scale is something like, something better than how humans think. (See also: words like reasoning, inference, learning, intelligence, memory, comprehension.) The word "hallucination" taps into a long history of artistic and psychological "mind-altering" explorations, one that implies imagination, creativity, awe, innovation, rather than simply admitting that your math is totally fucking wrong. Hallucinations are also, of course, a symptom of brain disease — colloquially speaking, “brain rot.”
What computers still can't do: Ed Zitron says "I told you so." (At length, as Zitron is wont to do.) "Sam Altman lowers the bar for AGI." Indeed.
Anti-teacher sentiment: Inside Higher Ed ran a dystopian op-ed predicting that AGI agents will start displacing people in 2025. A teacher friend shared this NBC News story about a Texas private school that teaches students with AI. No teachers. All the schoolwork done in just two hours a day. Totally efficient. Totally not snake oil.
Dan Meyer admits to finding "an interesting AI math edtech product." Sure, Dan. I loved the kicker:
At some point, we're going to have to sit backwards on a chair and have a serious chat with district tech leads about how their incentives do and don't align with the needs of teachers and students, how they are frequently rewarded for inflating expectations about new technologies.
"A Response to OpenAI's Student Guide to Writing with ChatGPT" by Dave Nelson.
AI is "cop shit": "OpenAI Is Working With Anduril to Supply the US Military With AI," Wired reports. Also via Wired: "AI-Powered Robots Can Be Tricked Into Acts of Violence." I mean, I'm sure cops or the military would never do that. Meanwhile, "83 Percent of ShotSpotter Alerts Might Not Have Been Gunfire at All," Hellgate reports. A new photo-sharing startup "shows how much Google's AI can glean from your photos."
Surveillance is a fundamental part of the logic of computing. We're building a world of authoritarian command and control because we cannot bear to trust one another. (I worry that this is far more damaging to the future of teaching and learning than ChatGPT.)
Expecting more from machines than we do from each other: The Verge on AI relationships and romances. Via Fortune: "Gen Z men could ditch real women for AI, warns ex–Google CEO Eric Schmidt." Via Business Insider: "I have an AI Boyfriend. We Chat All Day Long."
Use AI, or else: "Why Are Women Less Likely to Use AI?" asks Bloomberg, concluding with some real galaxy brain bullshit: "If women are more risk-averse and fearful of technology, at least on average, and if an appetite and willingness to adopt new technology is a precondition of being able to thrive in a brave new labor market, generative AI could feasibly exacerbate the gender pay gap."
"Study of ChatGPT citations makes dismal reading for publishers," writes TechCrunch, in its typical "you have no choice but to get on board with generative AI" framing.
You do, in fact, have a choice.
Resisting AI: Research from UCLA's Tiera Chante Tanksley: "Black students using critical race algorithmic literacies to subvert and survive AI-mediated racism in school." There's a popular narrative right now that the people who aren't using AI – e.g. women – simply don't understand the technology. And once they do, they'll get on board. But Tanksley's research shows something else entirely: once Black students understand the technology – the racist and extractive logics that are fundamental to artificial intelligence – they are better prepared to resist and survive.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber – you'll get an additional email from me on Mondays with more thoughts on technologies of mind and body, that is, on my book research and writing, my running, and my eating. Fewer links; more prose.