Automating Mistrust

According to some of the surveys that its evangelists are prone to cite, AI is now everywhere in education – or if not now, then soon, widely and excitedly adopted by the majority of teachers and students.
To that, I utter the words of Marcia Brady: "Sure, Jan."
When you step outside the echo chamber, you're much more likely to find apprehension toward AI, even revulsion. And a whole lot of indifference.
So the urging, the scolding must continue: "If you're not working hard at it, you're going to fall behind, and you're not serving your students well," one industry official recently told Edsurge/ISTE in a story insisting AI is already here and is here to stay, "because in three to five years, every business is going to expect it. Even today, many businesses expect students coming out of high school to have the skill to be able to use these tools in the workplace."
It's not really clear what the "it" is in that paragraph, other than some amorphous obedience to the demands of industry, or rather to the fantasy of tech. This is the same "jobs of the future" rhetoric we've been hearing for decades now, the same rhetoric about "skills" that's prompted the policies and procurement that have, in turn, largely de-skilled us, that have undermined our capacity to think and grow and flourish.
And you're surprised people are skeptical?!
We should know by now: anytime someone maintains that a technology is inevitable, they're asking us to give up any say we might have over that technology – asking us to give up our ability to question, to alter, to refuse; they're asking us to abandon our agency, our control over the future.
This is the antithesis of freedom, of course. This is B. F. Skinner's vision of behavioral technology – an old vision, and a pretty bad one at that. And because of AI's inseparability from autocracy, imperialism, and environmental destruction, all this talk of AI's inevitability is the antithesis of the future itself.