Venture-Backed Conspiracies

I regret to inform you that Paul Graham, investor and co-founder of the startup training program Y Combinator, has published a new essay: "The Origins of Wokeness."* You can tell the political bent of his screed by the invocation of "woke," no doubt. But at this stage, I'm not sure we even need to see this sort of thing in writing – it's become abundantly clear that Silicon Valley has fully embraced reactionary politics.

"Wokeness," Graham claims, is a "mind virus," the latest version of "political correctness" which he says emerged from universities – surprise, surprise – in the 1980s. Those origins are not in their math or engineering departments, of course. "Obviously it began in the humanities and social sciences." Obviously. These are fields that are too "soft" and too feminized – intellectually inadequate to prepare students for a high-tech future, as we're often told, and yet somehow also so incredibly powerful that they can bend all of society to suit their outrageous political demands, all while making it dangerous for rich white men like Graham to openly speak their minds.

"I saw political correctness arise," he tells us. He was there. "When I started college in 1982 it was not yet a thing. Female students might object if someone said something they considered sexist, but no one was getting reported for it."

Oh.

It had been safe in earlier decades, he claims, to be a physics professor and be a radical – hahaha, tell me you haven't seen Oppenheimer without telling me you haven't seen Oppenheimer, Paul – but things soon changed. And now, thanks to the media (both traditional journalism and social media), folks get really mad and, even worse, really loud when you're racist or sexist. (The last time I interacted with Paul Graham, incidentally, was on Twitter circa 2016. He was upset that someone had called him a racist and I reminded him – before he blocked me – that there was one good way of not being called a racist....)

Graham rails against women – women, he argues, enjoy being "moral enforcers" more than men. He rails against leftism – Marxism, of course, and the Cultural Revolution – and against recent civil rights movements – Black Lives Matter, the Me Too Movement. He insists that these forces are all "aggressively conventionally minded" and engage in "aggressively performative moralism" – which writing a 6000-word essay decrying political correctness on the eve of Trump's second inauguration is definitely aggressively not.

Graham's essay is very long and very bad, but it's not necessarily the worst thing that a powerful technology investor wrote or said this week. On that front, Graham has competition from Peter Thiel, who penned an essay in The Financial Times calling for "truth and reconciliation" – or at least an "apokálypsis," an unveiling of the secrets that the government has hidden from us: who killed Jeffrey Epstein, who shot JFK, who started COVID, who "debanked crypto entrepreneurs," and other important issues. Thiel, like Graham, is hardly an underdog or an outsider; this is a powerful, influential person at the center of Silicon Valley and DC circles. And crucially too, this is a man who literally traffics in secrets ("intelligence"), as well as in conspiracy theories, as the co-founder of Palantir, a surveillance company with billions of dollars in military and police contracts.

Mark Zuckerberg was at it again too, appearing on Joe Rogan's podcast on the heels of axing Facebook's content moderation teams and its DEI initiatives. There he lamented that corporations have become "neutered or emasculated," and he opined that he wanted to see more "masculine energy" at Facebook – where about two-thirds of current employees are men. Zuckerberg's language echoes Graham's: male aggression is good, necessary even. Zuckerberg has, in recent years, famously embraced martial arts – physically and now corporately, showing Nick Clegg the door and adding UFC's Dana White to Facebook's board. And all of this emphasis on strong, pure male bodies, as I've written before, is part of a longer history of "physical culture," masculinity, eugenics, and fascism.

Indeed, much of what these tech industry leaders have said and written has echoes in "The Futurist Manifesto" of Filippo Tommaso Marinetti – a proto-fascist manifesto that, as others have noted, Marc Andreessen seemed to directly channel in his 2023 "Techno-Optimist Manifesto." Where Marinetti wanted to drink from "the factory gutter" and savor "a mouthful of strengthening muck which recalled the black teat of my Sudanese nurse," today's techno-fascists want to inhale the data stream and swallow the extractions of a renewed racist order, a renewed imperialism. Marinetti – like Graham, like Zuckerberg, like Thiel, like Andreessen – railed against women, against schools: "We want to demolish museums and libraries, fight morality, feminism and all opportunist and utilitarian cowardice."

We will sing of the great crowds agitated by work, pleasure and revolt; the multi-colored and polyphonic surf of revolutions in modern capitals: the nocturnal vibration of the arsenals and the workshops beneath their violent electric moons: the gluttonous railway stations devouring smoking serpents; factories suspended from the clouds by the thread of their smoke; bridges with the leap of gymnasts flung across the diabolic cutlery of sunny rivers: adventurous steamers sniffing the horizon; great-breasted locomotives, puffing on the rails like enormous steel horses with long tubes for bridle, and the gliding flight of aeroplanes whose propeller sounds like the flapping of a flag and the applause of enthusiastic crowds.

The manifesto was a call for violence – epistemological violence and literal violence – and it remains one: a demand to erase history, to re-inscribe hierarchy and order. Eugenics will be reaffirmed through today's discrimination engine: the Internet, the algorithm, AI.

For all that AI seems to promise "the future of knowledge," this is a call for ignorance and for, as Helen Beetham puts it, a radical thoughtlessness. This is the antithesis of care – care and criticality are feminine, weak.

This is the future, they tell us. But it is not a rupture. It is a continuation, a future unbroken from computing’s past.

Over the past few decades, technology has played an instrumental role in neoliberalism and austerity – as we have dismantled public institutions and "the welfare state," we've told people they should just rely on the Internet instead. AI is absolutely the next step in this – as Sam Altman wrote in his latest blog post, "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies." The system wants us to be ever-more productive, and if AI cannot deliver that as an "agent" or "assistant" – the bet the British government sure seems to be making with its horrifying plans to "unleash AI" on its people – then workers will be fired. Perhaps replaced with robots, but more likely replaced with a more compliant, more precarious workforce.

Technology enables this structurally, but it does this work culturally and intellectually as well, through its core ideological underpinnings – libertarianism, individualism. Technology has aided in the political project of reorienting the mission of institutions like schools away from civic goals (as well as away from any sort of human flourishing) and towards the goals of industry – hence all the talk about "technical skills" and "digital literacy" as the way in which children are supposed to "read" and interact with the world.

Critical thinking (and critical theory) is, despite Paul Graham's vehement mischaracterizations, one way in which we can help students to understand the world, to understand its diversity – to deal positively, and dare I say generatively, with people and ideas that are different from their own. Despite all the handwaving about the importance of critical thinking and creativity for "the jobs of the future" (more on this in the coming weeks too), it seems clear that what the system wants is for us to be productive and compliant. The algorithms (and the storytelling) it's unleashing will serve to identify those whom it deems useless at best, dangerous at worst.

We want to glorify war - the only cure for the world - militarism, patriotism, the destructive gesture of the anarchists, the beautiful ideas which kill, and contempt for woman. – Filippo Marinetti

Nation at risk: The US's banning of TikTok is a mess of nationalism and Sinophobia – fears about algorithmic manipulation that don't seem to extend to the American corporations doing that very thing. Obligatory reminder that ByteDance, the Chinese parent company of TikTok, is (in part) an ed-tech company; TikTok is arguably more of a social shopping company than a viral video platform, but I don't really have many insights here other than this: AI is the next frontier for war. (See: Henry Farrell on "America's plan to control global AI.") And the return of Reaganism in all of this is going to be ugly for schools, as it will work in concert with many of the trends I've noted above – defunding, privatization, surveillance, compliance, control. That is what AI in education will do. Do not kid yourself that generative AI can be some sort of progressive path forward; it is part of a deeply reactionary dismantling.

Elsewhere in ed-tech: Tutoring (virtual or otherwise) – definitely a narrative to monitor as "Bloom's 2 Sigma Problem" remains a central part of the argument for teaching machines. Speaking of teaching machines, "Has ChatGPT killed the coding bootcamp?" asks Edsurge, which not so long ago was certain coding bootcamps were the future. Fool me once, amirite? A new study finds "a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading" (echoing earlier research that links "metacognitive laziness" and AI usage). "The School Shootings Were Fake. The Terror Was Real" – Wired on swatting at school.

AI Professing: History professor Seth Bruggeman on "A Crisis of Trust in the Classroom." Chemistry professor Chuck Pearson on "The ethical case for resisting AI."

Weights and measurements: So, you're saying first-year college enrollment did not drop by 5% this fall? Good thing we’re not building vast datafied systems of decision-making in education that are reliant on the accuracy of counting things. And come for Ben Riley's thoughts on how we might measure artificial IQ; stay for his thoughts on love and AI ethics.

The reading crisis: "Are men’s reading habits truly a national crisis?" asks Vox. I mean, when the New Statesman tries to sell you on "How 4chan became the home of the elite reader," you know we're dealing with some serious bulllllllshit.

The loneliness crisis: Derek Thompson argues in The Atlantic that we're living in "The Anti-Social Century." Are we? IDK. But no surprise, startups are here to monetize the hell out of this. Meanwhile, Kashmir Hill writes about a 28-year-old woman who's "in love with ChatGPT." As Sherry Turkle prophetically put it more than a decade ago, it seems we do expect more from technology than we do from each other.

Listen up: Erin Kissane says we're in bad shape. "AI Is Like Tinkerbell: It Only Works If We Believe in It" writes Jathan Sadowski. Stop clapping, everyone. Stop.

Bat signals: "Researchers ‘Translate’ Bat Talk. Turns Out, They Argue—a Lot" – AI is coming for everyone's inner thoughts and dreams, apparently – predictive policing for the more-than-human world.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber – you'll receive an additional essay from me on Mondays, a nice bonus (YMMV) for helping support my research and writing. Coming up: some thoughts on technology training and "literacy."

*At the bottom of his essay, Graham thanks Sam Altman for reading a draft. Before becoming the face of OpenAI, Altman was a member of the first cohort at Y Combinator; Graham named him President of the organization in 2014. Education startups funded by Y Combinator include Clever and Codecademy. Thiel, for his part, helped shape the technology industry's stance towards college education with his articulation of the "higher education bubble" and his fellowship that paid young people to drop out of college. Andreessen famously worked on Mosaic, one of the first graphical Web browsers, while a student at the University of Illinois Urbana-Champaign, and used that knowledge to build Netscape and become a billionaire. He's funded tons of ed-tech startups, including the massive failure AltSchool. Zuckerberg, meanwhile, has given billions through his philanthropic venture fund the Chan Zuckerberg Initiative in order to shape the future of ed-tech, defined as "personalization" through data extraction and automation. As these men embrace reactionary politics and run a technology industry that serves those ends, it does sure seem as though schools should sever ties now rather than deepen them with automated systems and chatbots.

But what do I know... I mean, I'm a woman, with a background in the humanities, no less.