The Booster Shot

African grey parrot (Image credits)

A couple of weeks ago, Ed Zitron published one of his epic rants -- the kind that, as he warned newsletter readers, is probably better read on the web than via email: it's 16,000 words long, so long that he added a Table of Contents to aid navigation.

Ed’s piece is titled “How to Argue with an AI Booster,” but honestly (and contrary to what some people seem to believe about me), I’m not interested in arguing with these people. Frankly I don’t think there’s anything that one can say to change their minds. It’s like arguing with addicts or cultists -- what’s the point?! Boosters will hear none of it -- no surprise, since they’re spending their days basking in the sycophancy and comfort of their machine-oracles.

I do think Zitron's piece could be useful -- certainly for those who still like to pick fights on social media, bless your hearts -- as he details how one can respond to some of the most commonly trotted-out pro-"AI" talking points. He positions these as "zingers" and "gotchas" with which you can retort, but like I said, I don't know if that's a rhetorical strategy that'll change a true believer's mind.

Thing is, most people aren’t true believers.

Those enamored with "AI" are in the minority (which, to be clear, does not mean they're being oppressed when people call them on their bullshit). Recent polling (LOL, I know, I know) still suggests that most people remain pretty uncomfortable with "AI". They're concerned, not even close to boosting. They view "AI" with suspicion, recognizing it as a threat to their personal economic security as well as a threat to the larger environment -- literally and metaphorically, to the information ecosystem as well as the entire planet's.

I don't know that we need to be arguing with "AI" boosters so much as figuring out how to protect the rest of us from that mania.

Depending on what your personal "information ecosystem" looks like, you might find yourself surrounded by a lot of "AI" boosters, which might make it seem as though everyone is "all in." You might have a boss who gives you no choice. This is particularly true if you work in or adjacent to education/technology, as the field is financially and ideologically obligated to adopt and promote every new gadget that comes along. This is often justified with some "the future demands it" rhetoric, of course -- not only is the latest tech product the greatest tech product, but it's an inevitable product too. Students must be trained to use it (with some lip service paid here to "using it critically"); their economic and social well-being depends on it.

Everything -- curriculum, community, curiosity, cognition -- has become secondary to digital technology itself. There is no point in resisting.

Or so the boosters would have you believe, because boosterism is, as Ed Zitron points out, not so much about what a technology -- in this case, "AI" -- can do. (Mostly, it can't.) It is about allegiance.

(That was bad enough when this involved an allegiance to surveillance capitalism; now it’s allegiance to a violent, racist surveillance state as well.)

Of course, we can easily argue that educational institutions -- past and present -- also rely heavily on allegiance, on compliance. We call academic subjects "disciplines" for a reason, as Michel Foucault (and others) would remind us. Schools supposedly encourage thinking -- but, asterisk or hashtag or whatever, not all thinking. It's never been "all thinking." And "AI" will certainly hard-code that.

As such, it's no wonder people bristle at the ways in which schools assess, interrogate, and invalidate their children, the ways in which they sort and rank, reinforce hierarchy and control. It's no wonder people want something different; it's no wonder that the stories that boosters tell -- whether they're boosting "AI" or just "ed-tech" -- can sound so damn appealing, particularly when these stories talk about frictionlessness, future success, and, importantly, individual freedom.

The response, then, shouldn't be simply to challenge the "AI" boosters -- although yes, by all means, call them on their bullshit if you feel like it. Rather, it needs to also dismantle the larger context in which their harmful boosterism makes for such an appealing hustle. Some of that surely does involve pointing out, as Zitron always does, where the claims about "AI" are inflated or wrong. But I think we also need to ask why any of this bullshit resonates with workers (and not just the wealthy), what it means that people are finding comfort in techno-authoritarianism, why folks are desperate for an oracle-bot or a therapy-bot, and how we need to strengthen and/or change institutions like schools if we have any hope of preserving democracy.

"AI" is a symptom of a much larger disease.


I listened to the first half of the latest (70-minute-long, phew) Hard Fork episode on AI and education, in which tech journalists -- and arguably two of the biggest "AI" boosters in the industry -- Kevin Roose and Casey Newton talk to MacKenzie Price, founder of Alpha Schools, whose chain of "2 hour learning" schools keeps getting so much media attention -- I'd argue because she's smartly marketing them as 1) "AI" and 2) teacher-free.

I've been writing about ed-tech for quite some time now, and I know my ears are tuned to that frequency, but still: it does shock me a little how often the very same lines about "traditional classroom practices" and the very same stories about ed-tech innovation -- new technological capabilities, same as the old technological capabilities -- get trotted out, year after year after year after year.