I Am Waiting at the Counter for the Man to Pour the Coffee

Political reporters love a Midwest diner. Even though most Americans live in urban areas, even though a third of Americans live in California, Texas, Florida, or New York, even though around 40% live on a coast (east or west), even though more than half are under age 40, the depiction of the "average American" – particularly in an election year – is often an older, middle-aged white person who just happens to be sitting in the booth next to the reporter, casually eating pancakes on a Wednesday morning but more than willing to chit-chat politics (and endorse Trump's policies).
Come to find out (although it's never remarked upon in the actual reporting), this diner also happens to be a former Republican Party co-chair for the county, or the sister-in-law of an RNC delegate, or the nephew of a major party donor, or a former board member of the Heritage Foundation. Nevertheless, their opinions get laundered as representative, and readers' takeaway from this sort of article is supposed to be that "average Americans" hold these truths – whatever story that Midwest diner patron happens to be telling, that is – to be awfully self-evident.
So think the same, but for digital technology, particularly for ed-tech: the young white teacher – formerly TFA, but that's not mentioned. The innovative principal – a featured speaker at SXSWedu, according to their resume but not the article copy. An evangelist from Google. An employee of a think-tank funded by Google. A parent – also, coincidentally, an investor. A star pupil (if in college, they attend an Ivy League school, rarely a state school and almost never community college). All very excited to endorse whatever product is being launched. (Or sometimes, the inverse: a teacher, close to retirement. A concerned parent, persistently making phone calls. Everyone slightly hysterical in their refusal to change, to adapt.)
If you read a lot of technology-oriented publications – and I'm guessing you might if you subscribe to Second Breakfast – or if you spend a lot of time on social media with a network of "friends" and "follows" who talk primarily about technology, then you're inundated with stories that sure start to feel like they're representative of what "regular folks" are thinking about and doing with AI. Everyone is talking about AI. Everyone is using AI. "Everyone is cheating their way through college." No one cares about privacy. No one cares about copyright. No one can stop it. That sort of thing.
But I don't know... I reckon far more people are resistant to technology than these stories want us to believe (certainly more than the industry wants us to believe), whether or not those people have intricately developed notions about technology's political or economic dangers.
Mostly, people know technology sucks. The phone sucks. The laptop sucks. The WiFi sucks. The apps suck. The Web sucks. Not in a "yikes, this sure seems like techno-fascism" sort of way, but in a "this goddamn thing does not work, but my boss/my doctor/my principal/my bank/the IRS makes me use it anyway" sort of way. In a "this is fucking ridiculous" sort of way. AI? AI?! R U SRS?!
Cory Doctorow is right, I think, that technologies now undergo rapid enshittification. But we're being very generous – too generous – when we grant that they were, once-upon-a-time, ever that good.
There is, nonetheless, some very weird nostalgia for that once-upon-a-time, and I think we should be very wary of those who want to "make computers great again" for what I hope are obvious reasons.
Someone asked me the other day how I approach "the news" about various technologies, how I formulate my analysis, what shapes my thinking. And I admitted that these days, one of the very first things I notice is who is telling the story. Computing (in general and ed-tech specifically) has long been a bastion of white male privilege; and while there have been efforts to change that – in pipelines and on panels and whatnot – AI is clearly a re-entrenchment of that power, explicitly so with the Trump Administration's dismantling of civil rights protections, echoed by the tech industry's dismantling of its own DEI initiatives. It's hardly a surprise that the loudest advocates of AI (in education or otherwise) are white men, many of them remarkably aggrieved not just by the critics of technology but by anyone in technology who they feel has displaced them from the center of the conversation: particularly women, particularly women of color.
Much of what's posted on Substack newsletters and LinkedIn is like that stereotypical Midwest diner, populated by very motivated hustlers masquerading as "the common man."
Don't eat that bullshit. Step outside. Talk to people who are not at the counter.
I took a week off from writing this newsletter and tried to take a week off from paying attention to the specials they're hawking on that AI diner menu. I tried and failed, and the rest of this newsletter was going to be a long list of links from the past ten days. But that feels a little hypocritical (and quite depressing), I realize: here I am encouraging you to get offline, and then I send you away with hours of suggested digital reading to do.
If there is one thing that you must read – no really, I insist – it is Helen Beetham's excellent piece describing "how the right to education is undermined by AI," surely one of the clearest and most thorough articulations of why we must resist and refuse the future that the tech oligarchs are engineering for us – a future that is absolutely contrary to the kinds of rights outlined in international conventions: "education shall be directed to the full development of the human personality and the sense of its dignity… to enable all persons to participate effectively in a free society, [and] promote understanding, tolerance and friendship among all nations and all racial, ethnic or religious groups."
A "data-fied" and "platformed" education infrastructure – one that certainly predates the release of ChatGPT – circumscribes the development of humans, of humanity. Education technology, despite all the stories of innovation and possibility, has served to make education less accessible, less accountable, less effective, less interesting, less reliable. It has removed control from students and from communities; it has undermined the expertise of educators and researchers; it has eroded knowledge; and it has centralized power in the hands of the technology industry and its oligarchical philanthropists.
"Young people in education are therefore subject to three distinct threats to their educational rights" from AI, Beetham argues:
1. So-called predictive AI is used to collect their data, to make unaccountable and discriminatory decisions about their futures, and to surveil and discipline their interactions with educational systems
2. So-called generative AI is used to capture and monopolise cultural and educational content, to advance hegemonic languages and perspectives, and to replace pedagogical relationships with automated agents
3. The narrative of ‘AI futures’ is used to refocus educational outcomes around the automation of intellectual work, stunting the development of young people's capacities, and preparing them for work that will be precarious, exploitative, and algorithmically disciplined
Beetham makes very clear that we already have a long list of documented harms that digital technologies, including AI and chatbots, have done to young people. It's a long long long long list.
Incidentally, I keep hearing some of the patrons of those digital diners say that "we don't have any research yet" about the effects of AI. (Others, just as loudly, are busily hyping fraudulent research and claiming AI is unbelievably, utterly transformative.)
Folks, we have at least 50 years of research on the effects of AI on students. We do, in fact, know a lot about chatbots and algorithms and various media/computing literacies; there are decades of debate on how best to design assignments and assessments – digital assignments and assessments. Just because some people started paying attention to ed-tech in 2023 doesn't mean no one has ever thought about or studied this stuff until now.
And no matter what that research on AI and education might say – it's good! it's terrible! it's mediocre AF! – people do still have every right to say "no thanks." Bless your heart, social scientists, but often your findings are wrong, even if they're statistically significant. Life is so much messier, so much more complex than experimental design.
(Much like my embrace of the word "Luddite," I'm going to start nodding and cheering for "moral panic" when it's used to belittle people's opposition to technology. "Yes, in fact, I do have a moral code. Don't you?" That said, I'm not panicked as much as furious. Yet exhausted.)
We are suffering, in part, from an amnesia that erases the past (all the coverage this week, for example, that touted Google's AI announcements at its flagship developer conference as if it has suddenly added artificial intelligence to its products – as if it has not been, since its very founding, an AI company).
We are suffering at the same time from a nostalgia that keeps us stuck in an invented past: I can almost guarantee you that Jony Ive's company, acquired this week by OpenAI, will be making AI-enhanced spectacles, because AI is a Cold War fantasy and these men are all still fixated on some juvenile, comic book con about X-ray glasses.
But mostly, it seems, people are suffering, and the story selected from the diners, at least, is that AI promises salvation. It's way too easy a story to tell, but not so easy a tall tale to swallow. A lot of us – even more than you might think – are choking on it.
Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Reading and writing about education technology (and reading and not writing about education technology) is my full-time job, and your support makes that possible.