Agents versus Agency

There's an argument I make in Teaching Machines that I'd like to repeat here – a setup to this first of many essays in which I work through some ideas about AI, agents, and agency:
In 1971 the psychologist B. F. Skinner published his most controversial book, provocatively titled Beyond Freedom and Dignity. In it, he argued that freedom was a deception, a psychological "escape route" that persuaded people that their behaviors were not controlled or controllable, when in fact, they were. Utterly. "Autonomous man serves to explain only the things we are not yet able to explain in other ways," Skinner wrote. "His existence depends on our ignorance, and he naturally loses status as we come to know more about behavior." The literature on freedom and dignity – in other words, much of Western philosophy – "stands in the way of the future of human achievement." What was necessary to break from this false sense of autonomy and agency, Skinner argued, was the science and, most importantly, the technology of behavior.
Beyond Freedom and Dignity was published at the height of the counterculture movement and the Vietnam War, and Skinner's arguments were received as an attack on the core tenets of democracy. He had been a well-known public intellectual for decades prior – and I'd argue he remains the second most famous and influential psychologist in the world after Sigmund Freud – so people should have been quite familiar with his theory of behaviorism. But the contents of this book were seen as both new and shocking.
Even Skinner's colleagues in Harvard's psychology department had begun to turn against him (that's how he framed it, at least), with Herbert C. Kelman, for example, telling Time, "For those of us who hold the enhancement of man's freedom of choice as a fundamental value, any manipulation of the behavior of others constitutes a violation of their humanity, regardless of the 'goodness' of the cause that this manipulation is designed to serve."
The problem with Skinner's theory of behaviorism wasn't simply the science; it was its politics. Such was the crux of one of the most influential critiques of Beyond Freedom and Dignity, Noam Chomsky's review in The New York Review of Books: "The Case Against B. F. Skinner." "As to its social implications," Chomsky wrote, "Skinner's science of human behavior, being quite vacuous, is as congenial to the libertarian as the fascist."
In challenging Skinner, Chomsky insisted that modern science must investigate "internal states." Indeed, cognitive science's break from behaviorism is, in part, a response to and reversal of Skinner's rejection of mentalism or "the mind" as an object of inquiry. By refusing to look "inside," "Skinner reveals his hostility not only to 'the nature of scientific inquiry' but even to common engineering practice," Chomsky wrote. (Interestingly, observing and theorizing how machines function internally was a crucial element of cybernetics, the precursor to AI – a field that was also committed to ideas of feedback, regulation, and control, and which was profoundly behaviorist. But who’s counting.) "By objecting, a priori," to the examination of these inner workings, Chomsky sneered, "Skinner merely condemns his strange variety of 'behavioral science' to continued ineptitude."
Even if it was inept, Chomsky cautioned, that didn't mean Skinner's behaviorism wasn't dangerous. "There is nothing in Skinner's approach that is incompatible with a police state in which rigid laws are enforced by people who are themselves subject to them and the threat of dire punishment hangs over all." Skinner had insisted that behavioral engineering would make the world safer – as the main character in his novel Walden Two says, "The fact is, we not only can control human behavior, we must." But Chomsky did not believe for a moment that utopia, even democracy, once society was engineered and managed by behavioral scientists, would be the result.
In many accounts, both Skinner's theory of behaviorism and the man himself were thoroughly discredited by Chomsky's eviscerating book review. (If only eviscerating book reviews were so powerful, my god.) Certainly, in psychology departments around the world, cognitive science displaced behaviorism as the dominant approach in the field.
And yet. And yet. Behaviorism and behavioral engineering are still everywhere, particularly in computing – and even, for all the talk about its modeling of "the mind," in artificial intelligence.
This was the observation that Shoshana Zuboff made in her groundbreaking book The Age of Surveillance Capitalism. In it, Zuboff chronicled how deeply embedded in our digital infrastructure behaviorism and behavioral engineering have become. Methods of behavioral conditioning, more commonly known as "nudges," are used to shape our online behaviors: our viewing habits, our status updates, our clicks. And to shape them, to be clear, not in the service of what Skinner had imagined – building a better world – but rather in engineering a highly profitable business model, one based on the datafication and manipulation of our lives and decisions: a futures market for human behavior.
Interestingly, some of the pushback on Zuboff's book and on her assertions about surveillance capitalism relies on the strange faith that behaviorism was vanquished decades ago – thanks, Chomsky! – and that behavioral technologies like operant conditioning are ineffective, attached to a science that is passé. And most crucially, that contrary to Skinner's declarations about the impossibility of human agency or autonomy, we doggedly retain the ability – the will – to choose, to be free. If Skinner was wrong, Zuboff must be wrong too.
I’d add to that last point on freedom and agency: many people are also fully committed to the narrative that digital computing is inseparable from freedom, refusing to recognize its origins in military "command and control," its deeply exploitative history and trajectory, its right-wing allegiances, or its explicit efforts to undermine democratic oversight and governance.
And here we are: as democracies are under threat from autocratic forces – from techno-autocratic forces who clearly seek to circumscribe our agency, our ability to act democratically – we are simultaneously being sold stories about the tech industry's newest product, one that conveniently, explicitly also invokes agency (and sometimes – ha – "democratization" and freedom): the AI agent.