Education as Prediction Market

In their bestseller AI Snake Oil, Arvind Narayanan and Sayash Kapoor open their chapter on "How Predictive AI Goes Wrong" with a story from Mount St. Mary's University: in 2015, the school conducted a survey of freshmen to identify those who were struggling – an attempt, it said, to help improve its retention and graduation rates. But the plan was not to offer more support to the students at risk of dropping out. Rather, as the president allegedly told faculty, it was to encourage those very students to leave. "My short-term goal is to have 20-25 people leave by the 25th [of September]," he said – that is, before the deadline for reporting the data to the federal government. "This one thing will boost our retention 4-5%."
Faculty objected, arguing that it's impossible to judge someone's trajectory after just a few weeks as a college student. "This is hard for you because you think of students as cuddly bunnies, but you can't," the president responded. "You just have to drown the bunnies... put a Glock to their heads."
While his words might have been shocking – and he did subsequently resign – there are many companies that sell schools software promising precisely these sorts of insights: who is likely to succeed, who is likely to fail, what majors students should be steered toward in order to increase the likelihood of the former, not the latter. These sorts of analytics are ubiquitous in schools, often built right into the software that students and faculty are compelled to use: the learning management system and the student information system, for example.
Of course, these tools haven't until recently been marketed as "AI" – they have been sold as "learning analytics" or "predictive analytics" or "student success solutions." Despite the euphemistic language that tries to make it sound like this is something for students, this is something done to them – tracking and evaluating them without their knowledge or consent. And as Narayanan and Kapoor caution, the consequences can be devastating: these predictive systems increasingly make life-altering decisions automatically, with little to no human oversight or input.
Such is the promise of automation; such is the promise of "AI." The machine can drown the bunnies; you needn't bloody your hands.
In order to make these predictions, these tools need a lot of data. Some of it comes from internal systems – from the LMS, the SIS; from admissions or financial aid offices. Some of it comes from external systems – from data available online (whether openly published or extracted through web scraping). As I've argued before, this is data about the past – not even an individual student's past, although sometimes that is included. It's historical data; and we know that the history of education – whether higher education or K-12 – is full of racist and sexist practices, unequal access, unjust disciplinary measures (by "disciplinary" here, I mean behavioral infractions), and unfair disciplinary biases (by "disciplinary" here, I mean the prioritization of STEM departments over the humanities. Or that we listen to the business school, like, at all).
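To make that concrete, here's a minimal, purely hypothetical sketch in Python – the fields (first_gen, pell_eligible) and the data are invented, and this is not any vendor's actual model – of what a "student success" score amounts to. A predictor fit to historical outcomes can only hand a new student the base rates of the students who came before them.

```python
# Hypothetical sketch of a "student success" predictor: the score for a new
# student is just the historical dropout rate of past students who looked
# like them on paper. Fields and records are invented for illustration.

# Past records: (first_gen, pell_eligible, dropped_out)
history = [
    (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 0),
    (1, 0, 1), (0, 1, 0), (0, 0, 0), (0, 0, 0),
]

def predicted_dropout_risk(first_gen: int, pell_eligible: int) -> float:
    """Share of *past* students with this profile who dropped out."""
    matches = [r for r in history if r[:2] == (first_gen, pell_eligible)]
    if not matches:
        return 0.0  # no history, no "prediction"
    return sum(dropped for _, _, dropped in matches) / len(matches)

# A new first-generation, Pell-eligible freshman inherits whatever inequities
# produced the old numbers: this toy model flags them at 75% "risk" before
# they have set foot in a classroom.
print(predicted_dropout_risk(first_gen=1, pell_eligible=1))  # 0.75
```

Real products dress this up with more features and fancier models, but the underlying move is the same: the past, projected forward onto students who had no say in it.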
Predictive analytics deny agency; they deny people the ability to control their lives by constraining them to others' pasts. If education is the practice of forging one's own future, predictive AI is fundamentally anti-education.
"Grammarly says its AI agent can predict an A paper," The Verge reported this week. It cannot.
It will, according to the company, use data – "publicly available information" – about professors in order to help students shape their writing. That means RateMyProfessors.com, doesn't it – a site known to be particularly hostile to women. While the infamous chili pepper rating – in which students could rate their professors' "hotness" – was removed in 2018, Grammarly still reinscribes this antagonism between faculty and students, undermining the human relationship in the classroom with these algorithmic false promises.
This antagonism is being hard-coded – deliberately so, one has to think – as the technology industry actively works to undermine the trust we have in one another. Zoom will predict who's going to attend a class, who's going to participate. Turnitin will predict who's going to cheat (or, if you're a student, whether you'll get caught). EAB will predict who's going to drop out. There is no time – at least as the president of Mount St. Mary's would have it – to learn about one another, to care about one another. No time, no money, no incentive when everything is merely a market to be manipulated, gamed.
It may well be that folks have been making too much out of "generative AI" in education as something new and transformative, when in fact it's all "predictive AI" – it's all harmful and exploitative.
It's all eroding our future in an attempt to sustain an elite who'd hate for our future (the future for education, our future by way of education) to be more just.

A mixed bag:
- From Princeton's Center for Information Technology Policy: "Emotional Reliance on AI"
- From MIT Technology Review: "How churches use data and AI as engines of surveillance"
- "Personalized AI is rerunning the worst part of social media's playbook" by Miranda Bogen
- EdWeek's Market Brief reports that "Duolingo to expand in Music Education after UK Acquisition." And in The 74, "Language Learning App Giant Duolingo Thinks It Can Conquer Math, Too." I know many folks still love Duolingo (and feel obligated to maintain their "streaks") despite all the recent bad press and weird marketing strategies. I kinda feel like one is obligated to include a CEO's shitty comments about schools and teachers when covering their company's efforts to woo the ed-tech market. But maybe that's just me.
- Paul Musgrave says "Classroom Technology Was a Mistake"
- Katie Day Good makes the case for the blue book exam
- Move over PayPal Mafia. There's a new, more bloody sheriff in town: "The Palantir Mafia Behind Silicon Valley’s Hottest Startups"
- Charlie Warzel declares that "AI is a Mass Delusion Event"
- Via Asterisk Magazine: "Why Are There So Many Rationalist Cults?"
- "The twilight of tech unilateralism" by Henry Farrell
- "How Meta Became Uniquely Toxic for Top AI Talent" by John Herrman
- "What My Daughter Told ChatGPT Before She Took Her Life" by Laura Reiley

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, as your financial support makes this work possible.