Self-Driving School

Eastern Wood Pewee (Image credits)
“The only thing that matters is the future,” he told me after the civil trial was settled. “I don’t even know why we study history. It’s entertaining, I guess -- the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow.”

I reference this passage – the kicker from a 2018 article in The New Yorker – all the time. I cite it in my book. I invoked it last week when I spoke to a group at the Center on Ethics at San Jose State about “the history of the future of education.”

Those few sentences, uttered by software engineer Anthony Levandowski, perfectly encapsulate a fundamental piece of Silicon Valley ideology: an utter repudiation of history and a relentless orientation towards a future, unmoored not only from the past and the present, but also, strangely enough, from science (which does actually require that you understand what’s come before).

Maybe it’s been almost a decade since you’ve read it; I mean, it’s been almost a decade since it was published. So I’ll recap: the article details the lawsuit that Google had filed – and the criminal charges the federal government later brought – against Levandowski, a former employee who’d left the company to join Uber, allegedly taking with him some of the key trade secrets about Google’s self-driving cars. (Levandowski was indicted in 2019 on some thirty counts of theft, but was pardoned during the final days of Trump’s first presidency.)

Levandowski had made headlines for reasons beyond industrial espionage, when he’d founded a new religion, the Way of the Future, which involved the worship of artificial intelligence. “What is going to be created will effectively be a god,” Levandowski told Wired in 2017.

I want to rehash all this, in part because I fear that that tendency to embrace amnesia has seeped out of the geographical confines of Silicon Valley, beyond the tech elite – no surprise, perhaps, as their products seem to have brutally damaged the cognitive capabilities of many of those who spend all day looking at screens and whose understanding of the world is thoroughly dictated by algorithms. I want to rehash all this too because there is this steady drumbeat of stories that “now” “suddenly” we are “in an AI moment,” when if you take even the slightest of glances backwards into the past, you can see we have been inundated with these "AI" moments for a good long while now.

I will keep pointing at this recent history of “AI” – there is a much longer history as well – for many reasons, but today, it's to remind everyone of how, at the height of the hubbub about self-driving cars in the 2010s, the news was full of predictions about a coming jobs apocalypse for drivers: not just taxi drivers, but delivery drivers and long-haul truckers as well. Millions of jobs – and trucking is one of the most common jobs in most US states – were soon going to go away.

“Soon.”

“Any day now.”

And here we are, with some 8.5 million people still employed in the US as "drivers," now forced once again to listen to the captains of the “AI” industry repeat similar sorts of claims about the coming collapse of white-collar jobs. Perhaps it might behoove us to look at some history and to consider how well their previous predictions have turned out. To ask what happened to the last “AI” “revolution” (or two or three or so).

The libertarians in their midst – hell, they’re all libertarians – will surely rant and rave about the regulatory measures that have stymied the expansion of self-driving cars. But in fact, much of what’s prevented the widespread automation of cars and trucks and the elimination of driving jobs is the technology itself: it simply cannot do what these engineers promised it could. It is a hard problem to be sure, but as it stands their “AI” cannot and will not replace these workers anytime soon. Workers’ skills – physical and mental – still far surpass the machinery.

Maybe not everyone gleans this lesson from the history, but I sure do. It’s a good reminder that much of the talk of automation and replacement – whether of truckers or of office workers – rests on an utter misanthropy; a blend of an ignorance of and a loathing for the work that real people actually do; a dismissal and denigration of humans’ capacities to think, to decide, to learn, to move, to adapt, to react. Capacities that exist beyond what can be measured and monitored and mimicked by software and hardware.

More history still: the 2010s-era vision for the automation of roadways was intertwined with a vision for the automation of education. It was at TED 2011, after all, where Sebastian Thrun – Levandowski’s colleague in the DARPA self-driving car challenges and his boss at Google – gave his talk on his work developing the driverless car. It was at TED 2011 when Thrun witnessed Sal Khan give his famous speech, “Let’s Use Video to Reinvent Education.” Thrun claimed that he was immediately inspired to do something similar, to videotape a bunch of lectures and offer a free online version of the AI course that he and fellow Googler Peter Norvig taught at Stanford. And thus – ah, mythology -- the MOOC was born.

Remember MOOCs, dear reader? (Jay Caspian Kang, New Yorker writer of two embarrassingly bad recent essays on “AI” and higher education, I am asking you specifically. Or is this history, much like Neanderthals and the Industrial Revolution, stuff that doesn’t matter?)

Back in the 2010s, we were told over and over and over that MOOCs (massive open online courses) were going to “change higher education forever.” Professors would be replaced. Courses would be automated. Thrun himself predicted that in fifty years’ time (that is, by 2062), there would only be ten universities left in the world, and that his MOOC startup Udacity would be one of them.

And here we are, with tens of thousands of colleges worldwide still in operation; and those that have closed their doors in the decade since have done so not because of MOOCs as much as financial mismanagement. (Although the two, one might argue, might be connected!)

Udacity was acquired by Accenture in 2024. Rumors put the price tag of the deal at just $80 million, even though the startup had once been hailed as a “unicorn” and valued at over a billion dollars. Levandowski’s “AI” church, the Way of the Future, has been shuttered too; or at least the URL wayofthefuture.church is dead. Levandowski claims that there are still “a couple thousand” people who worship the “AI” Godhead with him, and he told Bloomberg in 2023 that the church was back in business. (Is that the right phrase? It feels like it is...)

Sometimes I wonder if the delusional thinking that accompanies this latest explosion of chatbot usage is actually a feature, not a bug. The kind of nonsense that the technology’s most fanatical users regularly spout often feels like they’re parroting the kinds of things that Silicon Valley’s elites have always said – things like that Levandowski quotation, or the one from Thrun, or hell, any TED Talk that claims to solve the world’s most complex problems with a “hack” (or an airport book) – that “one simple trick.” This is the stuff that LLMs have been trained on, and they readily churn out a lot of banal punditry and more than a little casual sociopathy that sure does echo the standard VC blog post. Brain rot as a mirror; brain rot as a service. How else do you get something like the suggestion in that latest Kang article that feels precisely like some regurgitated mix of MOOCs+AI+stupid: that “the future of college could look like OnlyFans”?!

“Social science is going to matter so much less when your daughter goes to college,” Hollis Robbins, an English professor at the University of Utah tells Kang. “It is already on its way out. A.I. can do it. And here’s an example of the type of inquiry I’m talking about: I have a weird, funny Twitter group about life on Mars. Someone will ask, for instance, if it’s true that you’re going to need kidney dialysis on the way back from Mars. Another person is theorizing about a 3-D printer that’s going to use Mars soil, which will allow people to build on Mars using its materials instead of shipping everything there. These sorts of inquiries are obscure, specialist, niche, at the edge.”

These sorts of inquiries are, I'm sorry, not “at the edge.” These sorts of inquiries are not science – social science or otherwise. They're just repackaged stories that the tech billionaires have already spun: "AI" gods and Mars colonies. And they’re bullshit.

In the end, that’s the Silicon Valley game: bullshit that is always full of historical inaccuracies and scientific impossibilities drowned out by great bravado; big talk that always skirts criticality because it's actually quite small-minded; bad ideas divorced from history and expertise, from any of the actual practices of real people in real places doing the real things that their tools are supposedly going to disrupt, upend, replace.

But the bullshit now is wrapped in a chatbot, and this seems to have fooled some otherwise clever people into thinking that this is all very new and very exciting and very very smart.

Maybe the chatbot makes you feel good, as Rusty Foster notes in a recent Today in Tabs that absolutely excoriates Kang’s article (drawing on Kate Davies’s analysis of “AI”-generated knitting podcasts to do so), but whatever those warm fuzzies might be for you, they’re still strangely, psychotically wrapped up in a coming apocalypse – destruction that’s going to wreak havoc on someone else’s life. Maybe that should be a tell?

(Robbins claims that her school will be okay and won’t shutter because it has a football team, which funnily enough was something I noted back in 2013 in response to Thrun’s soothsaying about MOOCs: for starters, to get just ten universities, we’d have to end college sports. And maybe, I suggested, some schools would just relaunch as sports teams. The University of Oregon, for example, could make it official that it was, in fact, the Phil Knight Ducks.)

Increasingly people want me to weigh in on why I think there’s growing pushback against ed-tech in schools, and this ridiculous New Yorker interview certainly gets at some of it: this is a stupid vision of the future. And it's an incredibly mean one to boot.

Even if we’ve memory-holed the past – bless our hearts – I think more and more people now respond quite viscerally to these tired cliches that tech evangelists keep trotting out. More and more people are starting to recognize, even if it's still mostly subconsciously, that anyone who in 2026 is still pronouncing that “there’s going to be some move-fast-and-break-things” in education is a fool, a dangerous fool even, who really shouldn't be making crucial decisions about what and how our children should learn, who and what they can become.


Elsewhere...



Today’s bird is the eastern wood pewee, not to be confused with the western wood pewee. (They look identical, but their songs are totally different.) According to All About Birds, this small flycatcher “is inconspicuous until it opens its bill and gives its unmistakable slurred call: pee-a-wee!” I listened to recordings of the bird’s call as I typed this email, and I am not so sure it is unmistakable or slurred. I'm not so sure it's a “pee-a-wee” at all, let alone that the wee bird's call ends with that particular punctuation mark. But what do I know? (Not much, as many ed-tech practitioners will be more than happy to tell you.)

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. I'm afraid it won't stop the dumb shit from happening. But at least I can always reassure you that, yes, it is very dumb shit.