Nobody's Business

“If you’re so rich, how come you’re not smart?”
The quotation above, from Dan Davies's new book The Unaccountability Machine: Why Big Systems Make Terrible Decisions – And How the World Lost Its Mind, has been playing on repeat in my head since I read it this week. It is, I'd argue, a sentence that perfectly expresses almost everything about politics and technology right now, particularly regarding AI.
I asked this question over and over as I read Reid Hoffman's book Superagency: What Could Possibly Go Right with Our AI Future – a book I was going to review for you here but 1) he doesn't really say anything about AI and education (I thought he would as he was one of the first investors in Edmodo, an app that marked the resurgence of ed-tech investing in the early 2010s) and 2) it was – congratulations! – the dumbest thing I have read about artificial intelligence, and I have read a lot of dumb shit (a lot of it, no surprise, on the site Hoffman founded, LinkedIn). Since Dave Karpf recently (and spectacularly) skewered fellow venture capitalist Balaji Srinivasan and his book The Network State, I figured I'd just let Hoffman's book slide on by, its substance (or, truly, lack thereof) mostly unremarked upon.
That said, it really is worth talking about the stupidity – the dangerous and destructive stupidity – of venture capital, I think, because it is a crucial aspect of this historical moment, again politically and technologically. And again, Dave Karpf beat me to it, with his essay this week on "the three types of money behind Silicon Valley's rise to dominance" – "two of them are real. The third, notsomuch": 1) government contracts; 2) revenue from actually selling a product; and 3) investment and financial speculation.
It's the third, Karpf argues, that has become disproportionately powerful: the technology industry is now far, far less interested in revenue or profits or government contracts (although surveillance and military tech – "cop shit" – is certainly poised to thrive under Trump) and is instead fully committed to investing in investment, to a fantasy finance almost entirely disconnected from any traditional notion of business or bookkeeping. That is, the business of start-ups is raising venture capital; and the business of venture capital, let's not forget, is all about extending the wealth and power of a handful of very rich men. "If the money is all in vibes and gambles," Karpf argues, "then we will have a vibes-and-gambling based economy."
Nowhere is this clearer than with generative AI, which is plainly not a viable business. The numbers just do not make sense. This is not a product that people asked for (side-note: OpenAI said last year that the majority of its weekly active users are students – this is truly damning, for education and for AI. More on this later); and it is not a product that its producers can afford to make. It's not one that the planet – in terms of environment or equality – can afford us to pursue (but of course the latter has never "counted").
Ed Zitron offered his own industry audit this week, noting that "OpenAI is on course to burn over $26 billion in 2025 for a loss of $14.4 billion." 14. Billion. Dollar. Loss. The biggest generative AI company is not just a little bit unprofitable; its business is utterly unsustainable. And yet its valuation? $340 billion. One of its competitors, Anthropic, is currently raising $3.5 billion in funding at a $60 billion valuation – a number also totally disconnected from its actual business: 2 million users, less than a billion dollars in revenue last year, and around $5.6 billion in losses. There is no there there, Zitron argues. This industry exists only insofar as it continues to be propped up by venture capital and, of course, subsidized by running on Google and Microsoft infrastructure.
Vibes and gambling.
One of the many dangers, I'd argue, of schools going "all in" on this whole AI future is, quite obviously, this financial con game. (Reminder: the money that VCs play with includes a lot of university endowments and retirement funds. The coming bust – and it is coming – will hurt regular folks more than investors.)
Perhaps it's worth remembering the broader and longer history of ed-tech investing here too, how speculative finance has involved the telling of certain kinds of stories about the future of education in the hopes of hyping a certain investment portfolio. You know, "in the future there will only be 10 universities and Udacity will be one of them" sort of silliness.
Education technologies have long been part of ongoing efforts to profit from what VCs have seen as an "untapped" market. If I'm generous, I'll put this in Karpf's financial framework as numbers 1 and 2: government contracts and the sale of products to teachers and students. But ed-tech investment has also been an attempt to shape the education sector, not just sell to it. The result has been the increasing privatization of education – the outsourcing of core institutional capacity, capability, expertise, function; the invention of new functions (the average K-12 school district apparently uses over 2500 different tech products); and the creation of for-profit alternatives like bootcamps, MOOCs, and charter schools.
And crucially, schools have increasingly oriented themselves around not just technology usage – framed, of course, as "up-skilling" or what have you – but around digital data. Data gathering, data extraction, data analysis. And data, Jathan Sadowski argues in The Mechanic and the Luddite, is a new form of techno-capital. "Meeting the demands of this data imperative, and squeezing value out of that data, has become a prime directive for the design and use of capitalist innovation," he writes. "In other words, the reason why ubiquitous surveillance is built into our digital society is not because it’s a technical requirement or inevitable feature but because it’s valuable for capital."
Educational data – data from students and from teachers – has always been a tool for managing people, of course. But now there's a whole speculative market based on this lever for (behavioral) assessment and (behavioral) control – a nightmare technic and a nightmare economy and a nightmare politics.
As we watch DOGE dismantle various governmental departments and agencies, slashing funding for research grants and programs, do recognize that venture capitalists see this as a boon for their efforts: research, they believe, can and should be privatized, without any of the academic expectations of "openness" or peer review, without any of the burden of "theory" (in the sense of "woke" or "non-applied").
I know it seems counterintuitive and ridiculously short-sighted – how on earth can the US continue to lead in science and technology without scientific research? I'll just point you to that Dan Davies quote above: "If you're so rich, how come you're not smart?"
The vibes. They’re not good.
Margaret Renkl writes in The New York Times that "letting a robot structure your argument, or flatten your style by removing the quirky elements, is dangerous. It’s a streamlined way to flatten the human mind, to homogenize human thought. We know who we are, at least in part, by finding the words — messy, imprecise, unexpected — to tell others, and ourselves, how we see the world. The world which no one else sees in exactly that way."
I hate to break it to some of you, but it's pretty obvious that you're letting a robot structure your argument. You've emailed me the quintessential 5-paragraph essay, with all the dullness that that structure too often entails. The word choice is banal at best, its mundanity punctured from time to time by weird moments of terminal thesaurus abuse. ChatGPT writes like a prototypical college freshman [derogatory, yikes, sorry] probably because its corpus includes millions of shitty plagiarized essays.
AI slop is "deathly boring," says Mike Caulfield. Ted Gioia also yawns at "The New Aesthetics of Slop." 404 Media has been tracking the rise of AI slop publishing, which is infiltrating library catalogues and ripping off authors' materials to sell books at lower prices. Remind me again: what's the difference between AI slop and OER textbooks?
Jeppe Klinkaard Stricker warns of "The Synthetic Knowledge Crisis":
Universities are not innocent victims in the ongoing synthetic knowledge crisis; they are accelerating it. The relentless pressure to secure funding, generate research, rack up citations, and deliver 'impactful outputs' has created an environment where quantity trumps depth. Research has become a commodity, and generative AI is only amplifying a trend that was already underway: the capitalisation of knowledge production, where intellectual labor is increasingly evaluated by metrics that reward speed over substance.
Meanwhile, Chegg is suing Google, arguing that AI Overviews are "eroding demand for original content and undermining publishers' ability to compete."
It's too easy to say "karma's a bitch." But listen, I won't judge if you do say it.
The gamble. It’s not good.
Justine Calma writes in The Verge on "The women who made America’s microchips and the children who paid for it."
There are real, incalculable costs to computing – environmental degradation, human suffering. Ongoing, if hidden from the view of those in the Global North who continually hype its adoption. The story, told in Superagency and elsewhere, is that AI will unlock and enhance human potential. But for which humans? Because as it stands, the industry is built on the extraction, exploitation, and endangerment of workers in the Global South. And AI is going to make things worse.
The AI empire wants to extend its reach everywhere – the destruction of the American public infrastructure only one small part of a global plan. In her latest newsletter, Helen Beetham offers a thorough and damning examination of the UK government's embrace of AI. There is enormous overlap in the players here: the PayPal mafia (which includes Elon Musk, Peter Thiel, Reid Hoffman, David O. Sacks) strikes again. And there is overlap too in this fantasy of (financialization of) data capital.
"The fast-fading sheen of generative AI is being used, it seems, as cover for social automation, social discrimination, and control of social data."
I guess I didn’t really cover the week’s AI in education news here, did I. Ah well.
On Monday, I'll send you an essay that tries to weave together all the recent stories about AI and the workplace: AI supposedly determining which federal workers get to keep their jobs, for starters; Y Combinator's backing of an AI surveillance system for factories; bots instead of school counselors. I'll talk (once again) about the fantasy of automating the classroom and explore how school surveillance hurts teachers and students. But in it, I also borrow from Lee Skallerup Bessette's recent explorations of the "ed-tech imaginary." And I discuss the crucial work of care and reciprocity that exists outside the financial fantasies of these very dumb, very rich men.
Meanwhile, do try to enjoy the weekend. At least we made it through February?
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Your financial support helps me do this work.