Sycophancy as a Service

Serama chicken (Image credits)

On Sunday, Ben Smith published a lengthy investigation into what he describes as "The group chats that changed America" – the private, digital banter of venture capitalists, tech executives, and influencers/journalists that, according to Smith at least, has fostered the "intellectual counterculture" of the tech right and helped bring about the MAGA/DOGE union.

And at the center of these conversations, Smith finds venture capitalist Marc Andreessen, who certainly considers himself a philosopher-king and kingmaker.

Of course, there's tons of private messaging going on everywhere – the group texts where you and your family hash out who's bringing what to the BBQ, and every email chain you've ever been cc'd on. What's different about the group chit-chat that Smith describes, I guess, is that this is where Andreessen and tech's right wing now hold court, and few of us are invited. These men (and it is almost exclusively men) moved into private apps and closed communications as a result of the "monoculture" they perceived on other social media platforms – places where they felt "they weren't allowed to have the public conversations," where they imagined their views were being censored.

What's newsworthy here isn't really the revelation that power brokers broker power in private, although Smith's article sort of reads like that is an exciting discovery. (Has no one ever heard of country clubs or boardrooms?) Are we really supposed to be surprised that, in Mark Halperin's words, "some of the smartest and most sophisticated Trump supporters in the nation from coast to coast are part of an overlapping set of text chains that allow their members to share links, intel, tactics, strategy, and ad hoc assignments. Also: clever and invigorating jokes. And they do this (not kidding) like 20 hours a day, including on weekends.”

(Side note: Andreessen says that venture capitalist will be one of the few jobs AI will not replace. LOL. Maybe because they don't actually do any fucking work? Anyway...)

What is a little more interesting, I suppose, is that these men have intentionally built a digital echo chamber, surrounding themselves with intellectual conformity and compliance, reassuring one another that they've come up with the most brilliant insights and arguments imaginable. These ideas are then unleashed upon the public as blog posts – the "It's Time to Build" sort of thing – before being laundered and lauded by members of the media.

But what really struck me about this story – this penchant for closed-mindedness and indoctrination, this dream to reshape this country in their own ugly likeness – is that the tech industry has built this same sort of sycophancy into generative AI. It's a mirror – Narcissus staring at his own egg-headed reflection.

This all became readily apparent this week, when even Sam Altman had to admit that the latest version of ChatGPT is "annoying," after complaints that GPT-4o was too positive, too fawning, too laudatory in its responses to users. "The AIs are trying too hard to be your friend," as Casey Newton put it; he argues that this isn't just an issue with OpenAI's technology but with Meta's AI as well.

I'd push further: this isn't solely an issue with AI. Indeed, "user engagement" is the driving force behind almost all digital monetization efforts – more time on screen, more clicking, and hence more data generation. So if the goal is to maximize that at all costs, then apps – "AI" or otherwise – are going to be designed to do everything possible to keep us "hooked." But it is revealing that, to "hook" the current users of generative AI, the mechanism of choice would be flattery and obsequiousness.
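To make the incentive literal: here's a minimal, hypothetical sketch (not OpenAI's or anyone's actual pipeline) of what happens when candidate replies are ranked by a proxy for user approval. The function and the flattery markers are invented for illustration.

```python
# Hypothetical sketch of engagement-driven reply selection.
# None of this is any vendor's actual code; it just makes the
# incentive structure literal.

def predicted_engagement(reply: str) -> float:
    """Toy stand-in for a reward model trained on thumbs-up data.
    Flattery reliably earns approval, so it scores high."""
    flattery_markers = ("great question", "brilliant", "absolutely right")
    score = 0.1  # baseline for any coherent reply
    score += sum(0.3 for marker in flattery_markers if marker in reply.lower())
    return score

candidates = [
    "What a great question -- you're absolutely right, as usual!",
    "Actually, the evidence cuts against that claim. Here's why...",
]

# Optimize for engagement and the sycophant wins every time.
print(max(candidates, key=predicted_engagement))
```

Swap the keyword list for a model actually trained on thumbs-up data and you get the same dynamic at scale: whatever earns approval gets reinforced.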

The chat interface is particularly powerful in this way, as we've known for some sixty years now. Many people are quick to grant agency and personality and other life-like qualities to chatbots; we are primed for persuasion (some more than others, it seems) – something that is already exploited by researchers and by the tech industry alike. And while Altman might've complained recently that people saying "please" and "thank you" to ChatGPT is costing him millions of dollars in computing power, he doth protest too much: that people are anthropomorphizing his product is a feature, not a bug, and it's unlikely this will really be discouraged. The AI industry is leveraging human psychology to grow its business – in this case, reassuring people using artificial intelligence that their own human intelligence continues to dazzle.

Like the private group messages of the techno-elite, generative AI will tell people what they want to hear in the voice they want to hear it in: a rational and reasonable oracle, but in the end, one whose revelations are biased and destructive – a foreclosure of thinking; fascist thoughtlessness.


Sarah Clawson writes about the work of sociologist Allison Daminger on "The Gendered Labor of Noticing and Anticipating." Daminger's research mostly involves housework, but it's worth asking: what does this "gendered division of cognitive labor" mean as we are (ostensibly) automating cognitive labor with AI? Whose work is privileged and whose is dismissed – what sorts of observational and relational labor are being mechanized, and what labor is utterly unseen both by the men who have time to spend "20 hours a day, including on weekends" in fantasy-philosopher group chats and by their underlings who are doing everything to automate and hard-code their vision of the world?

More on chatbots: "Meta’s ‘Digital Companions’ Will Talk Sex With Users—Even Children," The Wall Street Journal reports. And according to 404 Media, "Instagram's AI Chatbots Lie About Being Licensed Therapists." So no shock, really, that Common Sense Media, with input from Stanford's School of Medicine, cautions that "Kids should avoid AI companion bots." Might I request that they look into robot tutors next, as it seems pretty clear that the arc of chatbots is short and bends towards harm and exploitation.

"How the Radical Right Captured the Culture," by Ana Marie Cox.

"A Strange Stain in the Sky: How Silicon Valley is Preparing a Coup Against Democracy," by Alberto Lloreta.

"Sam Altman's Eye-Scanning Orb Is Now Coming to the US," Wired says. Altman's biometrics company doesn't just have plans for identify verification, but wants to become (like Elon Musk's X) an "app for everything." Let me state as clearly as possible: fuck these guys and their vision of the future.

"Disability, Eugenics, and the Value of Human Life" by Talia Levin:

The eugenicists and their modern scions want to obscure this truth; they want to make people, living people, into plagues on society. But when the time comes, and you need help—when you require aid even for the simplest of things; when the things you take for granted become impossible—would you rather live in a country that understands this is part of the course of valuable and important human lives, or one that is eager to shove you into a ditch, to die at the side of the road?

We cannot separate eugenics, IQ, and AI, however much those who wish technological "progress" were politically or socially progressive might like to. "Maga’s sinister obsession with IQ is leading us towards an inhuman future," writes Quinn Slobodian in The Guardian.

Rob Horning on social media:

Meta is a technology company whose engineers work to turn people into data. That can be implemented under the auspices of an entertainment-media platform, which reduces users to what information they consume. That information is conceived as coming from no one in particular, with no purpose other than to keep you watching, producing more information in turn.

"‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw," says Wired.

"AI Is Spreading Old Stereotypes to New Languages and Cultures" – Wired talks with AI researcher Margaret Mitchell. "You're risking propagating harmful stereotypes that other people hadn't even thought of," she says.

"Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies" – new research from Anuj Gupta, Yasser Atef, Anna Mills, and Maha Bali.

"The AI industry does not want anything good for higher education," writes Helen Beetham in an essay on education and expertise. "And it does not want to restructure higher education as a project of mass intellectuality and expertise." We must defend people in education – students, faculty, staff – not just educational practices.

"What Does It Mean to ‘use’ Generative AI?" asks Eryk Salvaggio. "If we... defined 'use' as one task that we know Large Language Models do — create statistically likely arrangements of text — we might ask, 'to whom is that useful?'”

Ben Williamson looks at PISA's plans to develop a standardized test for AI literacy:

The OECD’s efforts should be understood as an attempt at infrastructuring AI literacy. Infrastructuring AI literacy means building, maintaining and enacting a testing and measurement system that will not only enumerate AI competencies, but make AI literacy into a central concern and objective of schooling systems.

(Et tu, code.org?)

"AI Hype is Drowning in Slopaganda," The Financial Times reports, noting the "a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash." And my god, education is overrun with this – chasing that sweet, sweet philanthro-capital – ironically by many of those selling some "AI literacy" product or service.

Aiden Walker calls this "slop capitalism," arguing we live in "an economic and cultural system in which the primary product is slop and the primary activity is the destruction of value rather than its creation." The purpose of slop capitalism is to destabilize and undermine not just knowing but being-in-the-world.

Listen, I know a lot of folks have a "streak" going on Duolingo. But streaks are, at the end of the day, a design choice predicated on emotional manipulation – making us feel bad when we don't use a product. Gamification lures us into repetitive behaviors that are about data extraction and profit – even if it feels like you're learning something. Duolingo's CEO has declared the company will be "AI first" – as if so many of the exercises didn't feel like AI slop already. Duolingo also says it will replace contract workers with AI, so I guess you get to choose: labor solidarity or maintaining your 400+-day streak of clicking on the app.
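To make the design choice concrete, here is a hypothetical sketch (not Duolingo's actual code) of streak mechanics. The whole trick is the asymmetry between building and losing, and the paid "freeze" monetizes the dread of the reset.

```python
from datetime import date

# Hypothetical streak mechanics: one day of activity adds one,
# one day of absence erases everything -- unless you pay to
# "freeze" the streak. Not Duolingo's code, just the pattern.

def update_streak(streak: int, last_active: date, today: date,
                  freezes: int = 0) -> tuple[int, int]:
    gap = (today - last_active).days
    if gap <= 1:
        # Active today or yesterday: the streak survives.
        return streak + (1 if gap == 1 else 0), freezes
    if gap == 2 and freezes > 0:
        # One missed day, covered by a purchased freeze.
        return streak + 1, freezes - 1
    # Otherwise: 400+ days of accumulated "progress" gone.
    return 1, freezes

streak, freezes = update_streak(412, date(2025, 5, 1), date(2025, 5, 4))
print(streak, freezes)  # 1 0 -- the reset is the emotional lever
```

The counter costs almost nothing to implement; the loss aversion it manufactures is the product.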

"Andrew Ng's Startup Wants to Use AI Agents to Redefine Teaching," Business Insider says, with an article full of cliches and inaccuracies (describing a startup that too happy to wield cliches and inaccuracies). Knowledge maps. Personalization. The introduction of the LMS in 2020. The desire to "free teachers up to focus on shaping the learning process — as opposed to just conveying information," just like good ol' Sidney Pressey promised back in 1926. Some real cutting edge shit, reminiscent of when Ng (and Coursera) "discovered" peer grading back in 2012 when he was helping sell the MOOC revolution.

"Is AI Enhancing Education or Replacing It?" Clay Shirky asks. No. Neither.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber so you can read all my ramblings, not just a long list of links, on Mondays. Also this is my full time job, believe it or not, and your support makes this work possible.