Influencers and Expertise

"It's Bad." It's very very bad. And each week, when I gather up the education and AI-related news here, I keep wishing I could say we've reached the peak of this mountain of horrors and that things will get easier, things will get better from here. It's quite apparent that the plan is to make things worse.
It's too easy to tie all this simply to Project 2025, I'd argue. There are other efforts, older efforts long underway to exacerbate inequality. (We might call these "capitalism." We might point to white supremacy, to patriarchy as well. Hell maybe we might want to point a little harder at computing.)
Everything is connected, and one of the problems, I think, of information silos – whether algorithmic or academic – is that we don't always see that. Neil Postman, for his part, blamed expertise, an "important technical means by which Technopoly strives furiously to control information," not just by a specialization of knowledge but by its complete management — how expertise is wielded to control all aspects of our lives. And that's rough because, while "experts" have certainly screwed us pretty good — it's honestly no surprise we don't trust them any longer — we're facing huge crises (epistemological, ethical, educational) as a result, in part because we are still stuck with the technical machinery of the expert and the bureaucrat (in Postman's framework): not simply the computer, but the questionnaire, the form, the standardized test, the benchmark, IQ.
And ah, that last one. It’s re-emerged as the new machinery of ranking, rating, control.
Yes, I'm currently re-reading Postman's Technopoly, which was published way back in 1992 but still has plenty to say about computing. In his newsletter this week, engineering professor Josh Brake invokes Jacques Ellul: "Technique's Deception." Ellul's The Technological Society was published in 1954 (and translated into English in 1964) and is similarly insightful. There's a very long, very rich tradition of thinking about technology that doesn't accept its inevitability, that challenges its simplistic identification with progress, that questions where our tools might be taking us. (Here's looking at you, Plato.) Folks like Postman and Ellul (and Ursula Franklin and Joseph Weizenbaum and Lewis Mumford and David Golumbia, to name a few others) pointed to computing and to AI ages ago and uttered a very loud "yikes!" — this should remind us that what we're witnessing now isn't necessarily new but part of a longer trajectory towards autocracy.
You can't simply talk about AI as though it emerged, with the release of ChatGPT, fully formed – a gadget, a goddess of wisdom – from Zeus's head in 2022. AI is inextricably tied to the history of military and surveillance technology R&D; to decades of data collection and data extraction; to a massive investment in computing power and storage to the detriment of the environment; to the recent failure of cryptocurrency (at least until this new grifter presidency came to power) to subvert governments' monopoly over money; to the rise of venture capital and the financialization of everything; to austerity, individualism, neoliberalism, and libertarianism – an utter hatred towards the government, not just towards regulation but towards democracy itself, by many in the technology sector.
AI as White Supremacy: Bloomberg reports that the largest call center in the world will use Sanas AI in order to alter the voices of its workers, making them "sound less Indian." I've talked about this previously (on the podcast with Helen Beetham, if I recall correctly), but this is important, so I'll repeat myself. AI is neo-imperialism; AI is cultural erasure; AI is white supremacy. AI presents white voices – with certain kinds of British or American accents – as "neutral" and natural, a vicious cycle for machine and human learning, because the less we hear voices different from these, the more jarring, the more "wrong" other accents and other languages become. This doesn't just happen aurally. When you use generative AI for reports or emails and when you use AI tools like Grammarly or ChatGPT for essay composition, you're engaging in a similar process for writing.
AI versus the Future: A trio of important arguments this week about AI and energy usage: Ed Zitron writes about Microsoft's cancellation of several leases for data centers in "Power Cut." Edward Ongweso Jr. writes about "The Silicon Valley Consensus and AI Capex (Part 1)." And Helen Beetham talks to Alistair Alexander, and they ask "How much of this stuff do we really need?"
In short: AI – model building and training, along with data storage – is making our climate crisis worse. Much worse. The petroleum industry and the technology industry are working hand in hand on this – "Oil is the New Data," as JS Tan put it years ago – as AI promises fossil fuel companies it will help them accelerate extraction while (hahahahahahahahaha) also promising everyone else that it will magically help address climate change. (Or not, as Eric Schmidt recently argued: "we're not going to hit the climate goals anyway.")
If we're working towards building a sustainable future – and that is, in some ways, the mission of education, right? – we will have to decompute.
Computing's Visions for Childhood: 15-year-old Iman Pabani writes for Fortune on trusting influencers over experts (something I don’t think is just a problem for “kids these days,” to be fair):
Gen-Z isn’t just trusting influencers over experts, they’re redefining what “expert” even means. Doctors, journalists, and scientists are dismissed, not because they are wrong, but because they are inconvenient, a straw poll of teens told Fortune. Influencers, on the other hand, are fast, familiar, and on the medium we turn to most: our phones.
It’s not that Gen Z doesn’t believe in experts. Rather, it’s that social media has rewired the way they think about credibility. TikTok influencers are now our “friends.” The algorithm repeats and reinforces what we already believe. And a well-edited, engaging video is much more convincing than a long, complicated explanation from a professional. Credibility today isn’t about expertise but about who tells the most compelling story. This change is slowly reshaping how an entire generation decides what is true and what is not—sometimes with demonstrably negative results.
Of course, generative AI has a similar allure: it's convenient. And the chat interface: it sure seems friendly.
We’ve totally failed with almost all “digital literacy” efforts — to such an extent that it seems naive to take seriously all the calls for some sort of new “AI literacy.”
Common Sense Media has released its latest report on children's technology usage, and it's grim. By age 2, 40% of children have their own tablet, and babies are watching an hour of video a day. The number of children who read or are read to has fallen by 10% since the last report. One in three kids ages 5 to 8 has used AI for learning, according to their parents, a third of whom say that it's helping their children with critical thinking. What does that phrase even mean? What are we even doing?!
Via The Guardian: "‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI." So yeah, I get it. As Daniel Greene argues in The Promise of Access, we have dismantled the social safety net and told people not to worry: the Internet will save them.
It hasn't. It won't. It can’t.
Education's Politics: And speaking of dismantling... "'Final Mission' for Education Dept. Begins Now," says the new Secretary of Education, wrestling executive Linda McMahon. Diane Ravitch and Jennifer Berkshire each put the attack on public education in historical/political context: desegregation and re-segregation and, of course, vouchers.
Idaho plans to deregulate childcare, incidentally – allowing day care centers to set child-to-staff ratios as they see fit. More screen time for babies, natch — another reminder that it's never that "robots are coming for your jobs." It's always that capitalism wants to replace workers with something it thinks will be cheaper.
"Will Harvard Bend or Break?" asks Nathan Heller in The New Yorker. He doesn't really mention AI at all, but you can see all of what he discusses – particularly how the institution's priorities are to capital, to management, and not to professors or to students – as the greater context for AI's efforts to privatize, outsource, destabilize, erase certain kinds of knowledge practices, certain kinds of politics, certain bodies on campus.
Chris Newfield writes on “The Fight about AI.” "We are at the bottom of the knowledge pile," he observes of the professoriate. "And the slope of that pile got much steeper with the election of Donald Trump and his executive assistant Elon Musk, who are conducting a war on knowledge. It’s a war specifically on knowledge workers whose expertise offers perspectives that compete with corporate information technology, or what we might call managerial IT."
And yet the MOOC madness déjà vu continues, with a bunch of universities clamoring to join OpenAI's NextGenAI initiative. Here's Oxford's press release.
Meanwhile, "OpenAI Plots Charging $20,000 a Month for PhD-Level Agents," The Information reports. Pretty sure that's more than what I earned in a year as a grad student. Pretty sure that's more than what anyone's earning as a grad student since we've axed all that federal funding for research. Good work, DOGE team.
Technologies of Eugenics: Byline Times reports on "Trump's War on 'Woke' and DEI: Incubated by a Nazi Eugenics Foundation." That'd be the Pioneer Fund, which bankrolls a number of high-profile conservative thinkers – and the ones in the article, incidentally, all have newsletters on Substack, which continues to be home to a lot of Nazi content. (A friendly reminder to move your newsletter elsewhere.)
Justin Kirkland writes in The Guardian on "‘The basis of eugenics’: Elon Musk and the menacing return of the R-word."
Ars Technica says that "Researchers puzzled by AI that praises Nazis after training on insecure code." I'm not? But then again, I read books.
Resurrecting Pigeons: Big week for transgenic mice, I guess, and not just because Colossal Biosciences announced that, in its quest to resurrect the woolly mammoth, it has genetically engineered a woolly mouse. The company, which recently raised $200 million in venture capital (from the likes of Peter Jackson and the CIA, incidentally), is also seeking to bring back the dodo. And listen, y'all know I love pigeons. (I'm thinking my next tattoo might be the dodo from Alice in Wonderland — the book, not the Disney movie, for what it's worth.) But when your CEO says something like "why leave nature to chance?" I choose to skip over the obvious Jeff Goldblum response and just straight up say "you can fuck right off."
We have to protect everything we have now.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. I really cannot do this work without the support of my readers. Coming up on Monday: some initial thoughts about AI agents and agency, and how we seem unable to escape B. F. Skinner's vision of a fully engineered "utopia."