The State of the "Art"
It's been absolutely fascinating to watch the narrative about AI shift over the last few weeks – from very boisterous claims that "AGI (artificial general intelligence) is just around the corner" to a rather big "maybe not." According to some researchers, the current approach isn't going to work after all.
While much of the attention has been on problems with scaling the current models, I think it's wrong to just look at this shift in storyline as an issue with the technology – as though what we're grappling with here is simply a matter of explaining what works or doesn't work technologically. Whether or not tech works (let alone works elegantly or efficiently) hardly matters a lot of the time – I mean, look around you; look at the crappy apps on your phone. Indeed, as I argued in Teaching Machines, technology adoption is never simply about the best science or the best product “winning.” (Bless your heart, B. F. Skinner.)
The Wall Street Journal noted this week that, while investments in AI have been "booming," "venture-firm profits are at a historic low." That is, the money-men have been placing a lot of bets on AI, but they haven't seen a lot of returns on the chips they've already got on the table – in AI startups or elsewhere. I’m not saying that the money-men drive the story or the research; but their response here will be interesting to watch: how will their story (and their investment portfolio) change?
Of course, VCs did place a very big, very successful bet recently on one Donald J. Trump for President – a President who some observers see as unleashing a very AI-friendly future (a future very unfriendly for humans and other living things, mind you). But this AI-friendly future is an austere, authoritarian future, and frankly it doesn't need more whiz-bang, neat-o generative AI to function smoothly – or to function at all. It just needs more of the same ol' predictive algorithms – more racist cop shit. (Andreessen Horowitz bankrolled Trump, and they directly fund this shit. And do watch the redemption arc the media crafts for Palmer Luckey.)
Anil Dash penned a blog post (and from what I gather, a thread on Threads) this week about the newsletter startup Substack, reminding folks that it's "a political project made by extremists with a goal of normalizing a radical, hateful agenda by co-opting well-intentioned creators' work in service of cross-promoting attacks on the vulnerable." So while Substack might not give its investors – oh look, Andreessen Horowitz, among them – an "exit" for the $80 million they've poured into it, they will (they hope) get a solid kickback on their bet on right-wing media, and on the kinds of anti-regulatory measures they hope the new Trump administration will bring about.
That is to say, maybe all that crypto bullshit will finally be worth something real.
I can't help but wonder if we're going to hear a lot more from technology's meme-hustlers about bitcoin and blockchain in the coming months and a lot less about AGI. Again, it's not as though they're going to give up on AI or walk away from their massive investments in the field. Indeed, Marc Andreessen said this week that while we might be "running out of human knowledge" to train AI, this is going to mean a hiring boom as companies turn to "experts to actually craft the answers to be able to train the AI."
I mean, maybe. Funny thing though. This guy is just perpetually wrong. Remember when Andreessen wrote this last year: "Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love." Remember when he called the Kno tablet "the most powerful tablet ever made"? Do you even remember the Kno tablet? Yeah... My point exactly.
Anyway.
It kind of bums me out that “media literacy” is still often posited as the best response to all this hype and handwaving. I mean, we do need to ask better questions, sure sure.
Like, when you read this headline from Nature, "AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably," ask why it might be that "non-expert readers" couldn't identify the authorship of poetry correctly or why they preferred text generated from a corpus consisting of 5 poems each from 10 well-known English-language poets. (Like, I am not sure I could tell the difference between AI-generated music and [whatever band your teenager is listening to right now] and that's not necessarily because the former is amazing, yo.)
Or what about this, from Fast Company: "AI is already taking jobs, research shows. Routine tasks are the first to go." They're not, of course, despite the "just so" story that folks like to tell about "routine tasks" and automation. Life – at work and at school and at home – continues to be full of "routine tasks" in part because we are encouraged to organize it thusly, through structure and consistency. Atomic Habits is a bestseller not because it's some profound tome on the order of things but because "routine tasks" are part of a culture that praises this sort of thing – efficient routines, good habits as key to productivity and morality. Even if you reject "optimization culture" – because ugh, fuck that – routines are an essential part of reproductive labor. Someone's gotta get the kids up and dressed and fed and out the door to school every goddamn day. And it ain't gonna be Elon Musk or his new robot, is it. Finally, despite all this talk about AI eliminating jobs, AI entrenches bureaucracy – it seems far more adept at creating more busywork, more "routine tasks," more bullshit jobs.
There was this one from The New York Times this week too: "ChatGPT Defeated Doctors at Diagnosing Illness." Did it? Did it really? Perhaps, instead of marveling at how good ChatGPT might be at making predictions based on a corpus of medical case studies, we should also ask why these doctors were so bad. Ask why doctors might be reluctant to outsource their diagnostic thinking to a machine – not simply because they're all just terrible Luddites or because they have some sort of God-complex. And perhaps we might wonder how scrambling for a more robotic, technical prediction of disease might further undermine medicine as a practice of care. ("PSA: You shouldn’t upload your medical images to AI chatbots" – some friendly advice after Elon Musk encouraged X users to post their medical files to his website in order to train its AI.)
I dunno – I guess for all the talk that “LLMs have indeed reached the point of diminishing returns,” I wonder how much the performance of AI even matters. (And to my earlier point, how much “AI literacy” matters.) What matters – at least for Trump, for Andreessen Horowitz, for Elon Musk, for the podcast bros and tech bros alike – is the aesthetics and the politics of AI: algorithmic authoritarianism.
Teaching machines: 1 in 8 NYC public school students experienced homelessness last year. Good thing so many folks in education are obsessed with AI. That'll sure fix things – wrestle things, whatever – pick your metaphor. “OpenAI releases a teacher’s guide to ChatGPT, but some educators are skeptical.” This “teacher’s guide” is a good example of the ways in which tech products – not just AI but definitely AI – attempt to craft the classroom (the pedagogy, etc.) to suit their needs (their need for users now and later), rather than to solve any sort of problem that teachers or students might actually have. “Founder of company that created LAUSD chatbot charged with fraud.” The Chronicle of Higher Education's podcast series, "Meet Professor Robot," is just a long series of "yikes." Also yikes: "Explicit deepfake scandal shuts down Pennsylvania school."
Turn PDFs to Brainrot videos with MemenomeLM: "Supercharge your learning."
Or not.
Automating "embodiment":
Physical Intelligence believes it can give robots humanlike understanding of the physical world and dexterity by feeding sensor and motion data from robots performing vast numbers of demonstrations into its master AI model. “This is, for us, what it will take to ‘solve’ physical intelligence,” Hausman says. “To breathe intelligence into a robot just by connecting it to our model.”
There's so much to unpack in that paragraph alone. From "Inside the Billion-Dollar Startup Bringing AI Into the Physical World," in Wired.
Automating disembodiment: "AI pimping."
Racist robots: "Audio AIs are trained on data full of bias and offensive language." That’s gotta be great for Google NotebookLM, right?
Right-wing influencers love racist robots: "Kim Kardashian has befriended Optimus, the Tesla bot." I bet. "How Donald Trump could help Elon Musk with his robotaxi plans." "AI didn’t sway the election, but it deepened the partisan divide."
Eugenics and efficiency: "Elon Musk efficiency panel seeks 'high IQ' staff, plans livestreams."
“AI abundance” is a nice new bullshit catchphrase – thanks Marc Benioff. Elsewhere in big tech: John Herrman on "Why the Government’s Google Breakup Plan Is Such a Big Deal." "Relevance! Relevance! Relevance! At 50, Microsoft Is an AI Giant, Open-Source Lover, and as Bad as It Ever Was" – Steven Levy still out there, doing his thing in Wired.
Robot rebellion: “AI-Powered Robot Leads Uprising, Talks a Dozen Showroom Bots into ‘Quitting Their Jobs’ in ‘Terrifying’ Security Footage.”
Slop: "AI isn’t about unleashing our imaginations, it’s about outsourcing them. The real purpose is profit," writes James Bradley in The Guardian. Related: "AI-generated shows could replace lost DVD revenue, Ben Affleck says." Yeah, we should definitely take advice from that guy.
AI slop on Amazon. AI slop on Spotify. AI slop on Buzzfeed. AI slop on Substack.
Publishing bad faith: "HarperCollins Confirms It Has a Deal to Sell Authors' Work to AI Company." (Stay strong, MIT Press. Please.)
A few final AI observations: "What if today's LLMs are as good as it gets?" asks Benjamin Riley. "Fake Personableness" by Josh Brake. "Nobody Asked for This AI Future" by Marc Watkins. Henry Kissinger – may he rot in hell – has a new book out on AI, which pretty much says it all about the politics of artificial intelligence.
Thanks for subscribing to Second Breakfast. I'm off to Philadelphia today to run the half marathon tomorrow. Paid subscribers might hear from me on Monday with a race report; they might not. We'll see how it goes – the race, that is. I've got a zillion other ideas floating around in my head right now about artificial intelligence and education, so paid subscribers are going to get something on Monday – it may or may not be about running, is all I'm saying.