The Terminator

Noam Chomsky's savage review of B. F. Skinner's Beyond Freedom and Dignity, published in The New York Review of Books in 1971, is sometimes credited as the coup de grâce to behaviorism – its widely read condemnation of that particular strand of psychology (alongside what Chomsky argued were its political ramifications) published just as the rest of the field made its so-called cognitive turn. "As to its social implications," Chomsky wrote, "Skinner's science of human behavior, being quite vacuous, is as congenial to the libertarian as to the fascist."

But as I argued in Teaching Machines, behaviorism's fall from favor – at least in popular parlance – was less a reflection of disputes in academic journals and more the result of another important cultural event that same year as Chomsky's book review: the release of Stanley Kubrick's A Clockwork Orange.

Behavioral psychologists can protest all they want, insisting that the film (and the novella it's based upon) does not depict Skinner's ideas or methods; but that hardly matters. For the public, behavioral therapy – conditioning – will be forever associated with the image of Malcolm McDowell's bulging eyes wired open.

I find it deeply, deeply ironic that director James Cameron has emerged as a big proponent of AI, most recently joining the board of directors of Stability AI, a generative AI company responsible for a lot of shitty avatars on social media. (Not to be confused with Cameron's shitty Avatar.) I'd argue that Cameron is responsible for one of the most powerful images of AI in popular culture – an image that encapsulates all of our fears of robots run amok, of the dangers to humankind that artificial intelligence poses: the Terminator, the cyborg assassin sent by Skynet to kill Sarah Connor and destroy the human resistance to AI.

One of the reasons I am not particularly worried about AI – I come to you as Ed-tech's Cassandra here – is that the vast majority of people don't want it, don't like it, truly believe it's all bad-fucking news. Folks aren't selling anyone a new story of salvation with their algorithmic wizardry: they're selling them a movie they've seen before, one that involves the end of civilization. Hard pass.

Yes, I spent some of my hard-earned Second Breakfast revenue on this t-shirt

As I noted in last week's newsletter, technologists like Salman Khan like to cite science fiction as inspiring their particular vision of the future. That they often praise dystopian science fiction is weird and disconcerting, to be sure. But most normal people hear these stories and think "well, that's fucked up. Let's not do that" rather than "wouldn't it be amazing if we built Skynet." As long as Terminator is the dominant cultural figure when it comes to sentient machines, AI proponents have a huge uphill battle in the PR department. (And I'm just not sure, if you look at the oeuvre of James Cameron, he's your go-to messaging guy if you want a happy ending.)

Justine Bateman on Instagram: “World Science Festival 2024 4/”

Building the surveillance state: "The Secret Service Spent $50,000 on OpenAI and Won’t Say Why." Facebook "smart" glasses: what could go wrong? (Related, maybe: Rob Horning on "The Envisioners.")

No, really. Fascists love AI: "How neo-Nazis are using AI to translate Hitler for a new generation." Via Wired: "Trae Stephens Has Built AI Weapons and Worked for Donald Trump. As He Sees It, Jesus Would Approve."

From the folks who gave you Clippy: "An AI companion for everyone."

Elsewhere in Microsoft updates that nobody wants: "OpenAI raises $6.6bn in funding, is valued at $157bn," The Guardian reports. "OpenAI Is A Bad Business," says Ed Zitron. (He has another interesting argument this week too, on the SaaS bubble – definitely worth thinking about the ways that schools and businesses alike have outsourced a lot of their functionalities to hundreds of very expensive, very mediocre startups.) Brian Merchant asks "Is OpenAI too big to fail?" (Is James Cameron?)

Gavin with the Good Hair does it again: "In California, no AI bill is safe." (Well, at least he'll never be President.)

I guess the Web does forget: "The Internet Archive’s Fight to Save Itself" via Wired.

Education and AI: "Artificial Intelligence and Writing: Four Things I Learned Listening to my High School Students" – a guest post by Brett Vogelsinger in John Warner's education newsletter. "Kids Who Use ChatGPT as a Study Assistant Do Worse on Tests," KQED reported earlier this fall. "OpenAI Hires Former Coursera Executive to Expand AI Use in Schools," Bloomberg reports. "Massive E-Learning Platform Udemy Gave Teachers a Gen AI 'Opt-Out Window'. It's Already Over," 404Media reports. Great to see all the MOOC players still up to shady shit though, right?

Feeding the AI beast: How dead people feed the AI algorithms. "This baby with a head camera helped teach an AI how kids learn language." Sure, Jan. "AI Is Triggering a Child-Sex-Abuse Crisis," says The Atlantic's Matteo Wong. "Has Social Media Fuelled a Teen-Suicide Crisis?" asks Andrew Solomon in The New Yorker. Maybe not. Maybe everything is fueling a teen-suicide crisis because adults have decided to go all in on the apocalypse?

Elsewhere in technologies of care

Cooking-related analogies: "AI Is a Language Microwave," writes Stephen Marche. "Easy, convenient, and far from perfect." I don't love the analogy, frankly. As the FDA explains it, "microwaves cause water molecules in food to vibrate, producing heat that cooks the food." But that's really how all heating of food works – vibrating molecules produce heat. What AI does with human language, however, isn't simply faster cooking-cognition. I mean, folks do lean on analogies a lot, but sometimes we forget that they're just that. "The mind" is not a machine – even though that is a powerful "metaphor we live by." (Related: another academic dispute involving Noam Chomsky.)

Our culture sure loves convenience though. Convenience. Efficiency. And doom.

Thanks for being a subscriber to Second Breakfast, which once upon a time was a newsletter about health technologies – aging bodies and grieving bodies and that sort of thing. Now that this aging and grieving body is writing a book on artificial intelligence and education, you get to enjoy my wading back into the tech news in general – which is really all AI all the time, it seems. On Monday, I'll send out "The Extra Mile" to paid subscribers, which will contain some thoughts on Sal Khan's new book and some questions about Aristotle and Alexander the Great.