Discrimination Engines

Digital technology and algorithmic recommendations have, as the story goes, segmented society into a million little data-driven designations, where we all watch and listen to different content – a radical individualization that has shattered any shared consensus about culture or politics, about reality. Even the classroom, particularly thanks to "personalized education," is being drained of the power of that communal experience of learning with and from others. That means one of the very few things we still do together – at the same time, if not in the same space – involves live sporting events. And while I think Kendrick Lamar's Super Bowl show was genius and subversive on many, many levels – surely an incredible pedagogical moment, as José Vilson rightly argues – I wonder if part of the outraged response from some white/right viewers wasn't based simply on its messages about Black excellence or racist capitalism but on the fact that a world of algorithmic isolation has given them a segregated digital world – one they're now trying so hard to reinstate in the material world.

They're busily trying to hard-code it into the digital world too, no doubt, as we have witnessed this week with the purging of data and of references to women or people of color on government websites; the banning of keywords pertaining to race or gender in federal grant applications – the erasure of language, the erasure of knowledge, and (they are quite explicit about this) the erasure of people.

"You're not a colleague, you're a colonizer" – Kendrick Lamar

More about this in Monday’s newsletter. Google and the Gulf of Mexico and the false promises of truth and reason and LLMs and such...


There is such a steady stream of concessions to autocracy – in both its MAGA and Musk formulations – that I am struggling to keep track of it all, even just within the education sector. And I'm torn on whether to keep the Friday newsletter as a round-up of news.

Nothing says "masculine energy" like the automation of reproductive labor

The AI Coup, Continued:

"Elon Musk's DOGE feeds AI sensitive federal data to target cuts," The Washington Post reports. "Musk's superteam of former iPad babies" – a phrase I picked up from Garbage Day that, for today at least, I refuse to apologize for invoking here – has been poking around the troves of data that the Department of Education has collected. And phew damn, I warned you – this has the potential to put all sorts of students and schools at risk. (Related: The Cut on ICE raids and schools: "'I Can't Teach Students Who Don't Feel Safe.'")

The confirmation hearings for professional wrestling executive Linda McMahon, President Trump’s nominee for Secretary of Education, have begun. Plans are well underway to dismantle the department. Trump says he hopes she puts herself out of a job; Elon Musk has tweeted that “no such department exists in the federal government.” As the apparent leader of the country, I guess he’d know!

"$900 Million in Institute of Education Sciences Contracts Axed," Inside Higher Ed reports. The NAEP, the standardized test known as "the Nation's Report Card," will continue, we're promised – and of course it will, as it's been a useful political tool for undermining public education for decades now. Do make note of the people who are gleeful about any aspect of this.

A federal judge has temporarily blocked the Trump Administration's new cap on the overhead costs of National Institutes of Health grants. This 15% cap is far, far less than the 30-70% that institutions like universities and medical research facilities receive to administer the grants – a cut that surely imperils the future of publicly-funded research in the US. This is, of course, the point.


Elsewhere in ed-tech (AI or otherwise):

Elin Sundström Sjödin and Lina Rahm on “Robots, Dogs, and Drags: The Politics of Reading and Being Read.” A very good article on libraries, literacy, and performing "matters of care" in public spaces.

One example of an often repeated ‘straw man fallacy’ is about how adults in schools inherently judge and undermine children’s confidence. Consequently, the proposed solution often gravitates towards the implementation of ostensibly neutral, personalized robots and comforting dogs as a superficial remedy to a complex and multifaceted issue.

Jennifer Sano-Franchini, Maggie Fernandes, and Megan McIntyre on "Evaluating Arguments About GenAI." Some helpful thinking (and verbiage) for resisting AI in the classroom.

Students push back on AI in art classes at the University of Maryland, Inside Higher Ed reports. "Making Space for Student ‘Sorrow’ Over AI."

From Wired, "Meet the Hired Guns Who Make Sure School Cyberattacks Stay Hidden." (It's been a big couple of weeks for neuroscientist Gary Marcus, who keeps saying "I told you so": he's long argued that, if Silicon Valley has its way steering the future of AI and digital tech more broadly, we will see more cybercrime, more cyberattacks, and more bias — congrats in advance to all the AI-in-schools advocates on making ed-tech worse.)

Another example of how readily student data – in this case mental health data – gets weaponized: "Seattle-Area Schools Say Deeply Personal Survey Saved Lives. Then They Released Student Data," reports The 74's Linda Jacobson.

"Roblox, Discord, OpenAI and Google found new child safety group," Engadget tell us. Platformer's Casey Newton is starry-eyed, but color me (as always) pretty skeptical. Google announced this and other initiatives in celebration of Safer Internet Day 2025 – pretty fucking rich to be marking this invented internet day since the company says it just doesn't have the bandwidth to include holidays like Black History Month on Google Calendar any longer.

Rob Nelson says that "The CSU system and OpenAI have an alignment problem."

From the Civics of Technology project: "The rise of metrics-based school discipline: How ClassDojo is changing discipline practices in schools."

Helen Beetham released part 2 of the Second Breakfast x Imperfect Offering podcast collab.

"It Is Fun to Pretend That Hard Things Are Easy!" says Dan Meyer, pointing to Michael Pershan's recent review of Math Academy, an online education company that promises math learning at miraculous speeds.


They Not Like Us:

In Monday's newsletter, I reviewed John Warner's new book More Than Words: How to Think about Writing in the Age of AI, which I highly recommend. (All his non-fiction books are great – can't vouch for the novel. Sorry, John.)

In one part of More Than Words, Warner explores the frequently made comparison between ChatGPT and calculators – a comparison that gets wielded in the service of all sorts of arguments for and against students' use of technology in the classroom. I've actually heard someone say they don't want their kids to use calculators but they're fine with them using generative AI for brainstorming the first draft of an essay. (No surprise, this was a software engineer.) But mostly, I think, people parrot something along the lines of what Stanford economist Erik Brynjolfsson claims: "ChatGPT will be the calculator for writing," both of which serve to eliminate "mindless rote work."

The overarching argument of Warner's book is that writing is the antithesis of mindlessness, the opposite of "mindless rote work" – writing and thinking are inseparable, and to outsource the former to autocomplete will only serve to undermine our ability to do the latter. The use of a calculator, on the other hand, doesn't really offload the mathematical thinking to a machine; it just offloads the calculating, the already mechanical process of adding, subtracting, multiplying, or dividing.

(Ooooh. “Already mechanical.” I don’t like that phrasing. And I will say, I did briefly wonder "is a calculator bad for thinking?" but then I remembered how I was ride-or-die for Desmos, so I didn't pursue the thought any further.)

A calculator is a computer is a calculator, which means that even this "simple" process happens differently inside the machine than it does inside our heads (or on our fingers or scribbled out on paper), as a calculator first converts numbers into binary – 1s and 0s – before sending out electrical currents and switching transistors on and off.
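
To make that concrete: here's a minimal Python sketch of the kind of bitwise process an adder circuit performs – XOR for the sum bits, AND for the carries. This is purely an illustration of the general idea, not a claim about how any particular chip is wired:

```python
# A toy illustration of binary addition as an adder circuit does it:
# XOR produces the sum bits, AND (shifted left) produces the carries,
# and the two are recombined until no carries remain.
def binary_add(a: int, b: int) -> int:
    while b:
        carry = (a & b) << 1  # bits that spill over into the next column
        a = a ^ b             # sum of the bits, ignoring carries
        b = carry
    return a

print(bin(19), bin(23))    # the 1s and 0s the machine actually manipulates
print(binary_add(19, 23))  # 42 -- same answer as our heads, very different process
```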

Funny thing about generative AI – for all the talk of amazing "reasoning" capabilities, we don't actually understand how it does even the "simplest" of mathematical tasks. Researchers just published a paper on arXiv that they claim offers the first account of how LLMs represent numbers and perform arithmetic:

We reverse engineer how three mid-sized LLMs compute addition. We first discover that numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic. We then propose that LLMs compute addition by manipulating this generalized helix using the "Clock" algorithm: to solve a+b, the helices for a and b are manipulated to produce the a+b answer helix which is then read out to model logits.

All that is to say: "Language Models Use Trigonometry to Do Addition." Sounds super efficient. Let’s burn down the rainforest so Claude can give us the answers to a first grader’s math homework.
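
If you're curious what that "Clock" trick looks like, here's a toy Python sketch of the general idea: represent each number as an angle on a circle, add by composing rotations, then "read out" whichever number's angle best matches. To be clear, this is my cartoon of the mechanism the paper describes, not the paper's actual model or code:

```python
import numpy as np

T = 100  # period of the toy clock

def embed(a: int) -> complex:
    # represent a as a point on the unit circle at angle 2*pi*a/T
    return np.exp(2j * np.pi * a / T)

def clock_add(a: int, b: int) -> int:
    # multiplying unit complex numbers adds their angles:
    # e^(2*pi*i*a/T) * e^(2*pi*i*b/T) = e^(2*pi*i*(a+b)/T)
    answer = embed(a) * embed(b)
    # "read out" the result: find the number whose angle best matches
    scores = [(answer * embed(c).conjugate()).real for c in range(T)]
    return int(np.argmax(scores))

print(clock_add(27, 15))  # 42 (addition mod 100)
```

The punchline: the model isn't "doing sums" the way we do on paper; it's rotating vectors.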


AI agents versus human agency:

"Are you high agency?" Taylor Lorenz asks, pointing to a new phrase she's hearing the Silicon Valley hustle crowd utter. So of course, of course, ed-tech's AI hustlers are uttering it too – something something something "agentic learners." Yes, I realize this is related to the push for AI agents (i.e. automated task completion) – part of the ongoing efforts to come up with some plausible, profitable use-case for generative AI.

There's actually a long history within AI of "agents" that's worth writing about, no doubt, but this latest OpenAI (etc) stuff ain't that. And there's something to be said too, drawing from the history of philosophy and science – something about whether or not humans (and which humans) and/or machines have agency and/or willpower and/or souls. I have an essay about all this on my To Do list, but I keep procrastinating, because for the time being at least, I control my mind and body.

Meanwhile: "Elon Musk's A.I.-Fuelled War on Human Agency" by Kyle Chayka.


More thinking about unthinking:

Elon Musk has made a $97.4 billion bid – supposedly – for OpenAI, which is caught between its desperation to ditch its non-profit status and make some money for its investors and its unease with handing Musk the keys to the AI kingdom.

Boing Boing reports that reCAPTCHA has resulted in "819 million hours of wasted human time and billions of dollars in Google profits." Elsewhere in Luis von Ahn's influence on our world: the Duolingo owl died. (Cringe.) Keep clicking, I guess.

"Should you let ChatGPT write your grandma’s obituary?" asks Vox. I don’t know. Are you a thoughtless person?

Marc Watkins writes about "AI's Illusion of Reason," cautioning that "when we [describe] AI systems in humanizing terms, we create false expectations about their capabilities and their limitations." He uses the eighteenth-century Mechanical Turk as an analogy here – "an automated marvel" that appeared to play chess but was, in the end, a hoax. But there's a problem with this historical reference, I would argue, when the imperialism, the "exoticized alterity," of this automaton – then and now – goes unexamined.

Yesterday, I tuned in to the launch of the Data Labelers Association, a group in Kenya trying to organize and advocate for those who work in content moderation and data labeling – workers, mostly from the Global South, who are essential for digital technologies, particularly for AI, to "work." Their labor is precarious and poorly paid, but it is highly skilled and too often involves dealing with psychologically traumatizing content. While I don't disagree with Watkins that there is an illusion going on with AI, I wonder if that illusion isn't simply in its so-called "intelligence" but in its "artificiality" as well. That is, this is human work, the work of people who historically have not been seen – seen at all, seen as intelligent, seen as human; people who are displaced, disregarded, obscured, erased in order to make "the magic" of the machine appear as such.

Via 404 Media: "Microsoft Study Finds AI Makes Human Cognition 'Atrophied and Unprepared'." A rough summary: the more one uses generative AI, the less one thinks critically about the task; the less confidence one has in generative AI to do a task, the more critically one assesses its capabilities. I've seen some pushback on this study as it's based on self-reported assessments of "critical thinking" — what is critical thinking, etc etc. I'm sure psychometricians would be happy to help us understand all this with a good old-fashioned multiple choice test if the IES hadn't just been gutted. Oh well, all science is just "vibes" now.


What can we do? What can we do? What can we do? There is a case to be made, I think, for doing nothing. A radical do-nothingness. Do not cooperate. Do not comply. Move slowly. Misspell and misfile things. Lose things altogether. Forget. Delete. Look things up in the library. Wander around the stacks. Write your refusals out in long-hand. Say "I'll think about it." But then don't. Go outside. Walk away.

Thanks for reading Second Breakfast. Please consider becoming a paid subscriber, which enables me to do this work, to "read" the education technology industry narratives and help everyone make sense of the futures that technology oligarchs want to engineer for us.