Irrational Exuberance


There's been quite a bit of mumbling and grumbling in the last few days (weeks? months?) that this whole AI thing might be a bubble and that the bubble might be about to burst. "The AI bubble is looking worse than the dot-com bubble," MarketWatch cautioned this week. Me, I find it a little ironic that this is the worst sort of damage that some folks can imagine, considering it was the post-pop era – the time of a supposed "tech-lash" – that ushered in today's "digital economy." And phew, I wish we lived in a world where the demise of Netscape meant we were done with Marc Andreessen's bullshit.

Competing with this steady stream of caution about tech companies' over-valuation and fretting about a coming "market correction" is the constant drumbeat of another rhythm: the one that insists that "AI is never going away." That there's nothing you can do, there's no turning back – the genie won't go back in the bottle, cliche, cliche, cliche.

I'm not keen on the framing of either of these stories. Neither seems to get to the heart of the vast inequalities we face today, with or without the threats of algorithms and automation. Neither narrative, despite all the handwringing that economists and pollsters and other prognosticators like to do, gets to the heart of what "risk" means or feels like for the vast majority of people. Mostly, I'm irked by the way in which these stories frame things as inevitable (and inevitable disaster as just something you gotta live with). In doing so, these narratives strip from us any sort of agency – the agency to invest financially, politically, culturally, intellectually, in alternative futures. A future for people, not machines.


Please Regulate This: "A Trump Win Could Unleash Dangerous AI," Wired cautions. A Trump win will clearly unleash lots more listeria too — I’m more worried about the food supply than generative AI. (You know the joke: Ayn Rand, Peter Thiel, and Rand Paul walk into a bar...)

Other AI Predictions: Matteo Wong says that CEOs shouldn't put a date on their prophecies about the future of AI because someone might hold them to account when the "expiration date" comes and goes. Ha ha ha ha ha ha ha.

"Technologically, as I have argued earlier, machines will be capable, within twenty years, of doing any work that a man can do." – Herbert Simon (1960)
"Within a generation … the problem of creating artificial intelligence will substantially be solved" – Marvin Minsky (1967)

I could go on…

Still More AI Hype: "Using Artificial Intelligence is Easier Than You Think" – via Wired, of course. Ariel Bleicher reviews three books on AI in MIT Technology Review. I think we need to be prepared for a deluge of books on the topic. (And when I say "we" I mean "me" as mine is going to have to compete with a lot of verbiage, ChatGPT-generated and otherwise.) "5 ways I actually used ChatGPT this year to improve my life" – bless this guy’s heart.

What Computers Still Can't Do: "AI Can’t Teach AI New Tricks," writes Andy Kessler in The Wall Street Journal. "LLMs can’t outperform a technique from the 70s, but they’re still worth using — here’s why," says VentureBeat.

Automating Justice: "This AI Tool Helped Convict People of Murder. Then Someone Took a Closer Look" – via Wired. Worth taking a closer look at the judgment in this court case, particularly the section on "use of artificial intelligence."

Automating X-Rays: "AI to help doctors spot broken bones on X-rays" – AI has been coming for the radiologists' jobs for a helluva long time now. Indeed, for so long that there's now a shortage of radiologists, as students have been steered away from the career.

Hollywood versus AI: Via The Wall Street Journal: "Meet Hollywood’s AI Doomsayer: Joseph Gordon-Levitt." (His wife, Tasha McCauley, was on the board of OpenAI, so "doomsaying" is probably a bit of a stretch here.) Via The Guardian: "Thom Yorke and Julianne Moore join thousands of creatives in AI warning." "Elon Musk sued for using AI-generated Blade Runner imagery at robotaxi event." (Ugh. Ridley Scott, machismo, violence, and AI. That’s a whole other discussion we’ll save for the premiere of the new Gladiator film, I reckon.)

Big Tech Updates: "Microsoft Has an OpenAI Problem" writes John Herrman in NY Mag. Also by Herrman, from earlier this month: "What If Google's Biggest Problem Isn't AI?" "Microsoft introduces ‘AI employees’ that can handle client queries," The Guardian reports. I’m still thinking the whole Microsoft-ification of our world is responsible for the terrible state of everything — from software to education. Via Ars Technica: "ByteDance intern fired for planting malicious code in AI models." (ByteDance, a Chinese ed-tech company, is the owner of the popular app TikTok, fwiw.) "X updates its privacy policy to allow third parties to train AI models with its data," says Engadget.

No, Thanks: "Anthropic Wants Its AI Agent to Control Your Computer," Wired reports. Anthropic, for those keeping score at home, is one of the major competitors to OpenAI. Meanwhile… "Sam Altman's Eye-Scanning Orb Has a New Look – and Will Come Right to Your Door."

Reality, Personalized: "AI mediation tool may help reduce culture war rifts, say researchers" – The Guardian reports. Yeeaah, no. See: "Habermas machines" by Rob Horning. (I am hearing more and more about this very bad idea, however: that we all just get to live in our own algorithmic soup, never having to confront ideas we don’t like.)

Definitely the Most Upsetting Thing I Read about AI This Week: "Can A.I. Be Blamed for a Teen’s Suicide?" asks The New York Times.

Elsewhere in Psychology: "Cats beat babies at word-association game," says Science. "Under an L.A. Freeway, a Psychiatric Rescue Mission" – via The New York Times.

Putting the Eugenics Back in "Intelligence": "Google, Microsoft, and Perplexity Are Promoting Scientific Racism in Search Results," Wired reports. "US startup charging couples to ‘screen embryos for IQ’," The Guardian reports.

What You Pay Me to Pay Attention to: "ChatGPT doesn't have to ruin college," Tyler Austin Harper argues in The Atlantic, suggesting a "robust honor code" stops students from cheating — at least at the small liberal arts college he visits. It does feel like this is a question of ethics, of priorities and resources. And no doubt, it seems clear that turning the university into a surveillance machine — using these technologies of predictive autocomplete both to cheat and to catch students cheating — doesn't really work. As Bloomberg Businessweek writes, "AI Detectors Falsely Accuse Students of Cheating—With Big Consequences."

We're coming up on 70 years of the phrase "artificial intelligence." And yet, leave it to my favorite "don't know much about history" org, Khan Academy, to say "it's early days for AI." Anyway. "Here's what we've learned," Kristen DiCerbo, KA's chief learning officer, writes in District Administration. Yikes.

Transcripts of student chats reveal some terrific tutoring interactions. But there are also many cases where students give one- and two-word responses or just type “idk,” which is short for “I don’t know”. They are not interacting with the AI in a meaningful way yet. There are two potential explanations for this: 1.) students are not good at formulating questions or articulating what they don’t understand or 2.) students are taking the easy way out and need more motivation to engage.

As usual with these folks, it's the students' fault when ed-tech doesn’t work as promised. Always blame the students.

Thanks for subscribing to Second Breakfast. Paid subscribers will hear from me on Monday, with more thoughts about food, aliens, running, and yes yes sure obviously AI and education.