Everything Everywhere All at Once
Apparently there’s been an "incident" at PowerSchool. An incident. A breach. A hack. NBD, just one of the largest ed-tech companies in the world, provider of a student information system for about 16,000 schools (75% of districts in the US), the central piece of digital infrastructure for K-12 education, containing about 60 million students' data. Based on the note that PowerSchool sent its customers, it does sound as though the company paid some sort of ransom; it says it "received reasonable assurances from the threat actor that the data was deleted and that no additional copies exist." So not to worry, I guess.
There's not a lot of other information out there – mostly local reporting about local schools affected. Everyone is too busy, I guess, salivating over the future – the magic of AI in education – to talk about what's actually going on with the ed-tech already deployed in schools.
PowerSchool was sued late last year, funnily enough, accused of selling student data. Whether or not it’s selling that data — and whether or not hackers now are — the company is certainly using student data to power whatever new algorithms and analytics it will sell back to schools as “AI.” Hilarious.
Schools have a lot of problems with and because of technology, and “AI” both is and is decidedly not one of them.
The Consumer Electronics Show has been going on in Las Vegas this past week, and “AI” was everywhere. CES is always full of the most ridiculous bullshit gadgetry that then gets spun by the slightly addled tech reporters into glorious visions of the future. (I got a newsletter today from Vox, reporting from CES, that “gadgets are good now.” FFS.) A decade ago, the promise was "Internet-connected" everything – toothbrushes, dildos, refrigerators. This year, no surprise, companies are slapping "AI" onto all their products: ChatGPT on your television, your car, your fitness watch, your hairdryer. No one needs this. No one wants this. (The guy from Vox is the exception that proves the rule.) But the marketing onslaught will continue. Steel yourself — and be prepared to hear in the coming months that those of us who are skeptical about artificial intelligence are actually the real problem.
Meanwhile, Nevada's neighbor, California, is on fire – Los Angeles's worst fires ever, and the worst, we're told, is still to come. While many of us remain in deep denial about the climate crisis, it does seem increasingly clear that none of this — none of this week’s news, ed-tech or otherwise — bodes well, particularly as one can readily imagine that the incoming administration will do everything it can to punish us and exacerbate the apocalypse rather than aid people or the planet.
Meta, parent company of Facebook**, is leaning into the doom, announcing this week that it will end its content moderation program, something that a former FB employee says is "a precursor to genocide." (Facebook has already incited genocide in places like Myanmar, so this seems not just plausible but likely.) "Mark Zuckerberg has gone full Maga," Siva Vaidhyanathan contends (although arguably he was all along). danah boyd calls Zuckerberg's announcement Orwellian, and certainly we can see that the Internet writ large – not just Facebook, not just social media – is a vast disinformation machine. War is peace. Freedom is slavery. Surveillance is care. Googling is research. And so on. In the words of Charlie Warzel and Mike Caulfield in a terrific op-ed in The Atlantic (rare words because yikes, The Atlantic), "A rationale is always just a scroll or a click away."
I struggle to see how generative AI – trained largely on this vast corpus of hate and bullshit – is supposed to lead us into a future that's bright and cheery and good for living creatures.
Political scientist Henry Farrell argues that the problem with social media isn't disinformation – the kind of stuff that "fact-checking," on Facebook and elsewhere, was supposedly meant to address. Rather, he claims, the issue is "degraded democratic publics" – something I’d say is exacerbated by Internet technologies, to be honest. Indeed, I'm not sure that these two are neatly separable – it's not a matter of supporting or undermining one or the other. The whole notion of "information abundance" – something wrapped up in the ideology of digital technologies well before Mark Zuckerberg was even born (and my, he’s looking extraordinarily mid-life-crisis these days) – has actually been shown to hold little allegiance to either “the truth” or “democracy.”
The stakes are high – incredibly, incredibly high and not just for education, although ostensibly that’s the focus of this newsletter – as what we know and how we live and learn together are being systematically undermined. And AI is central to this, despite all those who are joyfully embracing its promise (particularly on LinkedIn. JFC, I mean, the bullshit on Facebook is bad, but the stuff on LinkedIn is next level in its own way). As historian Timothy Burke writes,
The problem is that [AI] is not being used as a prosthesis to work beyond the frontiers of human capacity. It is being deployed in service to an anti-human ideology by a small class of oligarchs who loathe mass society, who hate democracy, who fear constraint. It is not being used to go where we cannot go, but in hope of replacing people in almost everything, to make a manorial society where a new feudal lord would hold court with a small group of loyal humans bound to his service while most of his needs and wants were satisfied by AI-controlled stewards, robots and simulations. That generative AI cannot do such a thing no matter how it is improved is not the point: it is what they are dreaming about doing with it, and are content if much gets wrecked along the way to discovering that they in fact live in a society and always will. These are people who have a half-assed belief in the Singularity without having really absorbed the point of the idea at all. They do not dream of being downloaded into a Moravec bush robot or being part of a post-scarcity transhumanity oozing across the stars one self-assembling machine at a time. They believe in ridding the universe of anyone who could say no to them, in order to stay just as they are. If they dream of immortality, it is the immortality of their present bodies, their present power, their present wealth.
Burke's point about the "half-assed belief in the Singularity" is, I think, particularly important. Because while there are many people who have suddenly become "true believers" in AI's supposed capabilities, I'm not sure that those at the top of the technology pyramid scheme believe in the immense cognitive power of AI so much as they loathe that the people beneath them still have any power at all – still think, still question, still resist, still act in ways that are unpredictable, uncontrollable, unrestrained. That is, it's not so much the cognitive power of AI that’s alluring; it's power, raw financial and political power. And that's the goal here, not some sort of streamlined, science fictional uploading of human consciousness into the machine.
OpenAI CEO Sam Altman recently wrote that "We are now confident we know how to build AGI as we have traditionally understood it." (He goes on, in the next sentence, to say something far more dangerous politically and economically, but let's stop here with this first claim.) Cognitive scientist Gary Marcus, for one, has a long list of reasons why OpenAI is not close to AGI – "traditionally understood" or otherwise – technically speaking. But as I’ve long argued, much of this hype cycle is not as much about technology as it is about ideology. As such, it's Ben Riley's response to Altman's claims that I think is most salient, as Riley draws on his own experiences to make an analogy to Enron – to the fraud perpetrated by a different sort of extractive technology company, happy to invent all sorts of convoluted accounting methods and grandiose storytelling practices to obscure that there was no "there" there.
I'm guessing a lot of people have forgotten Enron (although the corporation has recently resurfaced – a prank, but my god, a beautiful one – now offering "The Egg, an At Home Nuclear Reactor" that'll surely help power the new personal supercomputer that Nvidia promised at CES). But that's how our collective amnesia always seems to work: protecting corporate executives and politicians from accountability in part by forgetting how royally they’ve fucked us over.
Related: The Chronicle of Higher Education examines the ways in which Obama-era school reforms – reforms that were funded and promoted by technology companies and philanthropists – have shaped incoming college students' attitudes about – and aptitudes for – learning (and yikes, probably not for the better). While there seem to be some pretty serious repercussions for these students, as well as for the teachers who work with them, no one ever seems to say to the reformers, "guys, that’s it. GTFO." These same folks are still being placed on various boards and advisory committees; they're still keynoting. They're still founding new companies. They’re still on 60 Minutes. Hell, they’re still asked for comment in CHE articles. Cough, cough. They're repackaging the same old shit as shiny new shit, and – bless their hearts – some folks are still eating it up with a smile.
Like, say, Unbound Academy, which garnered a lot of headlines recently with its new online charter schools that say they’ll replace teachers with AI and teach students in under two hours a day. Oh, you mean like virtual charter schools? Something we know is worse than awful for kids? Recall, if you will, the 2015 CREDO study that found outcomes from virtual charters were so poor that it was “literally as though the student did not go to school for the entire year.” But we don’t recall, and we’re caught up in hyping virtual schools and automated tutoring systems yet again — always some sort of shortcut to the hard work of public education. As Dan Meyer writes in a glorious unpacking of the Unbound Academy's dubious claims, "the hardest part about public schooling is obviously the 'public,' not the 'schooling.' The hardest part is our commitment to educating everybody rather than just a wealthy few, a commitment that gets harder and harder to meet as life gets more and more precarious for more and more people."
And we return, full circle, I guess, to Henry Farrell's argument: we have hollowed out the public sphere – including so much of public education – and handed control of it to a small number of technology-sector billionaires (Gates, Zuckerberg, Andreessen, Musk, Bezos, etc.) who are actively trying to circumscribe how we even imagine, let alone talk about, the future – god forbid we build towards one that’s more progressive, more just.
We aren't going to "generative AI" ourselves out of their grip.
(Mutual aid, OTOH... It’s probably a good place to start today.)
Thanks for being a subscriber to Second Breakfast. I'll be back on Monday with some thoughts on convivial technology and congestion pricing in NYC — or really, an essay about highways and information superhighways and how you don’t need to be a car driver or an AI user to see what bad infrastructural decision-making looks like.
** I didn’t even have space to talk about Facebook’s AI bot rollout, a horrific performance of digital blackface, if nothing else. Maybe I’ll come back to it — this dream of filling up social media with fake people, part of a weird vision for education where robots teach classes and robots take classes and humans get bumped from the system altogether. IDK.