Ballistic Misses

Great white pelican (Image credits)

Simon Ramo published his essay "A New Technique in Education" in 1957 in Engineering and Science, a journal published by Caltech to showcase the institution's research. "A noted scientist proposes some radical changes in our educational system to bring it in line with our increasingly technical world," reads the subheader, noting in small text at the bottom that Ramo was a research associate in electrical engineering at the school and the executive vice president of The Ramo-Wooldridge Corporation. Ramo's essay was made famous a year later when its ideas became the basis of an illustration in Arthur Radebaugh's popular Sunday comic strip series about the future – the future, in this case, of the "push-button classroom." (I've written about this cartoon and this essay many times before – in Teaching Machines and elsewhere.)

But it's that connection to The Ramo-Wooldridge Corporation and to Ramo's other work – work that falls (ostensibly) beyond the scope of this still-lingering ed-tech imaginary – that I want to underscore here. For elsewhere, Simon Ramo is best known as "the father of the intercontinental ballistic missile" – that is, a missile used to deliver a bomb across a range of more than 3,400 miles. A missile that, according to Wikipedia at least, is "primarily designed for nuclear weapons delivery," a weapon whose early versions had "limited precision, which made them suitable for use only against the largest targets, such as cities."

You cannot separate the history of education technology or the history of computing technology from the history of military technology, as much as one might want to. So when the UK government unveiled its recent AI Action Plan and signed a memorandum with OpenAI promising to "develop sovereign solutions to the UK's hardest problems," one shouldn't be at all surprised that the list of these challenges includes "justice, defence and security, and education technology." They are deeply intertwined – ideologically as well as economically.

In the US, education technology has always been bound up in concerns about national security, not simply because of the military's efforts in funding and building various teaching machines, but because of the steady drumbeat of narratives about looming educational crises – think Sputnik, think A Nation At Risk – that supposedly reveal the weakness of our students (physically as well as mentally – more on that below), the weakness of our institutions (too feminized, too undisciplined), and the weakness of the curriculum (too soft), and that necessitate technological interventions.

These interventions demand gadgetry, of course. But they have always, always demanded data – standardized testing and intelligence testing and ranking and sorting and tracking and now, of course, massive, massive data extraction.

In her latest essay, Helen Beetham traces the details of this push to quite literally weaponize public data in the UK. She concludes, "The neoliberal state is being transformed into an authoritarian, securitised state, with AI acting both as a powerful narrative for the manufacture of consent, and as a means of co-ordinating commercial and state capacities to govern through the use of personal and social data. The government’s role in collectivising risk to provide shared social security has been replaced by an obsession with ‘national security’ and military spending, as illustrated by the Orwellian rebranding of AI Safety as AI Security. From monitoring AI risks to using AI for surveillance and war."

In The New York Times, Mike Isaac claims that "AI Has Ushered in Silicon Valley's 'Hard Tech' Era," though he's not exactly clear on what is "hard" here or what that adjective means – although it certainly conjures up military imagery of defense bunkers and networks (the former of which many powerful tech executives are not ashamed to admit they're preparing for themselves). But it does feel like a lot of people – tech journalists and educators included – have been looking the other way at the implications of this build-out of surveillance and behavioral technologies. Or perhaps they really believed Google when it said "Don't be evil," willing to ignore all the evil that has occurred even during the "fun" decade (LOL?) of Web 2.0.


I think it's absolutely right that the storytelling is so critical to understanding how these companies operate. And ultimately, you know, the argument that I make is that we really need to think of these companies as new forms of empire. And a pillar of empire building is the narrative that is wrapped around what they're doing and the consequences of it. You know, historically, empires always had this banner of: we are ultimately engaging in all this plundering and labour exploitation and aggressive expansion in the name of progress. Like, that was a really critical part of justifying what they did, enabling widespread public support within the British Empire at the time, or within other empires at the time.
And that's essentially what is happening in Silicon Valley: they're using progress and this imperative to move forward – and this very narrow definition of what progress is, specifically as technical advancement or advancement of AI systems – to portray themselves and their quests and why they need all of these resources. And to your point, there's kind of this really interesting dynamic of... not only do they have a narrative of the utopia, they also have this counter-narrative of this dystopia.
And ultimately, it's kind of just the same thing.

– A conversation with Karen Hao, author of Empire of AI, and journalist Carole Cadwalladr



I told you so:

"Students have been called to the office — and even arrested — for AI surveillance false alarms."

"AI teacher tools display racial bias when generating student behavior plans, study finds."

"In federal lawsuit, students allege Lawrence school district’s AI surveillance tool violates their rights."


President Trump signed an executive order this week proclaiming the return of the Presidential Fitness Test, part of some ridiculous effort to "Make America Healthy Again" by ritualistically shaming students who cannot meet arbitrary physical activity goals. "Rates of obesity, chronic disease, inactivity, and poor nutrition are at crisis levels, particularly among our children," the order reads. "These trends weaken our economy, military readiness, academic performance, and national morale."

Cue the "national security" klaxon once again.

Aubrey Gordon and Michael Hobbes offered a pretty definitive history of the test on their podcast Maintenance Phase back in 2020, in which they clearly outline many of the reasons why the test was and is bullshit.

There are no details – surprise, surprise – in Trump's executive order that outline what this fitness test will entail. Running a mile? Climbing a rope? All the horrible things that made you (read: me) never want to participate in any physical activity again for the rest of your (read: my) life?

(Jill Twiss has a good suggestion: perhaps all of us – not just kids – should work on strength training with the goal of being able to deadlift the President. That'd be a ~245-pound deadlift for the current occupant of the White House – a very reasonable goal. If that's too daunting, you can start with the fourth President, James Madison, who weighed in at a very slight 100 pounds.)


Nudges are perhaps best understood, though, as attempts to demarcate liability. They imply concern for users’ well-being and offer tools to help them manage it. But they also make it clear that, should these new products drive you sort of insane, that’s ultimately your problem, not theirs. You were warned! Or at least you were nudged.

– John Herrman, "Why ChatGPT Is Now Encouraging Breaks But Not Breakups"


Thanks to advances in AI, we are no longer allowed to truly die. The peace of the grave and the finality that defines the human condition are all being sacrificed on the altar of artificial intelligence. We are normalizing digital necromancy, a grotesque spectacle where the dead are reanimated as hollow puppets, forced to speak the words of the living. This is not progress; it is a fundamental assault on our humanity, a desperate, technologically enabled refusal to accept the reality of loss or to engage in the most human of acts, which is processing grief.

– Alejandra Caraballo, "Digital Necromancy in the Age of AI"

Former CNN correspondent Jim Acosta interviewed an AI avatar this week, a simulacrum of Joaquin Oliver, a student who was killed in the 2018 Parkland high school massacre, on what would have been the young man's 25th birthday. This is, as Brian Merchant notes, just incredibly bleak: a media stunt, yes, but one orchestrated in part by Oliver's grieving parents, who have – like so many parents of children killed in their classrooms – lobbied long and hard, and to no avail, to change gun legislation in this country. So now they've "generated" this robotic ghoul to join in their so-far-fruitless advocacy – to be not just a face but a voice, to become an "agent," I imagine, to extrude text and video content for social media.

Of all the fantastic and dangerous promises of "AI," the promise to transform your dead child into a political actor seems among the most heinous; believing this, this promise of agency after death, the most despairing.


What do cultish and fundamentalist religions often do? They get people to ignore their common sense about problems in the here and now in order to focus their attention on some fantastical future.

– Greg Epstein, Harvard chaplain, in "Rise of Silicon Valley's Techno-Religion" by The NYT's Cade Metz


All those stories about verdant futures are being spun by men who are, at the same time, arming the secret police, designing missiles to shoot across continents to snuff out the lives, the ways of life, they deem subversive and un-American.

But hey, I hear there's a new ChatGPT release. I bet it's simply earthshaking.


Thanks for reading Second Breakfast. Please consider becoming a paid subscriber. Your financial support enables me to do this work: smashing the AI looms – or at least the Kids in the Hall version, where I repeat "I'm crushing your head, I'm crushing your head" as I make a little pincher gesture and think about what a world without billionaires would be like.