Dishonor Code

Tressie McMillan Cottom recently offered a neat little summary of how universities have responded to generative AI: "Academics initially lost our minds over the obvious threats to academic integrity. Then a mysterious thing happened. The typical higher education line on A.I. pivoted from alarm to augmentation. We need to get on with the future, figure out how to cheat-proof our teaching and, while we are at it, use A.I. to do some of our own work, people said."
Although others might have "got on with the future," I confess I'm still stuck on those “obvious threats” – and not just the threats to academic integrity but to integrity in general.
What happened, I wonder, that prompted this narrative shift, that made AI – a tool of deception (see: the very substance of the Turing Test), if not one for cheating – no longer an educational or even a social concern?
I mean, we know the answer, at least in part: what happened is money – economic precarity, and the fear of economic precarity, that has everyone chasing the rapidly disappearing research dollars, investment dollars, and employment prospects not utterly beholden to AI hype.
But maybe something else has shifted too, quite generally, with how we feel about honor and integrity.
Even if we just look at AI in school, I think it's still worth asking: what happened to those concerns about widespread cheating? Have they grown? Have they been resolved? Or have they simply been dismissed? Perhaps it's just a reflection of a louder refrain that those promoting AI in education repeat: that cheating isn't actually a problem – or at least, no more a problem than it's ever been. We can't blame generative AI for cheating, because students have always cheated.
If and when students do cheat, advocates of AI go on to argue, it's not the fault of the technology; the blame lies mostly with teachers and their antiquated teaching practices. Cheating is the result of not making assignments relevant, meaningful, and "cheat proof" – something educators should have done well before now anyway. Shame on them, serves them right, or something like that.
"Cheating is not caused by technology but by culture," says Richard Culatta, the head of ISTE and former director of the Department of Education's Office of Ed-Tech.
Of course, this distinction presumes that technology and culture are distinct, when they surely are not. Technology and culture are inseparable, shaping and shaped by one another.

A new AI startup called Cluely released an ad last week that, as Garbage Day’s Ryan Broderick put it, marks “a true low point for the human race.” It certainly underscores how central deception and dishonesty are to the culture of AI and of computing technology and Silicon Valley more broadly.
“Cluely is out. cheat on everything.” — Roy Lee (@im_roy_lee), April 20, 2025
Cluely’s founder, 21-year-old Chungin “Roy” Lee, has raised $5.3 million in venture funding for an AI tool to “cheat on everything” – “exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver,” according to TechCrunch.
Earlier this year, Lee posted a video of himself using the tool, initially called Interview Coder, to cheat during the coding portion of an internship interview with Amazon. After the video went viral (Amazon had it taken down with a copyright claim), Lee and his co-founder Neel Shanmugam faced disciplinary action from Columbia University, where they were students – a one-year suspension for Lee. They’ve now both dropped out to become entrepreneurs, apparently – something that speaks volumes, I'd say, about entrepreneurship.
The two told the Columbia Spectator that “they had checked the University’s student handbook prior to developing the software and concluded Interview Coder did not violate it.”
“This tool cannot be used to cheat in class, except in some really artificially contrived scenario that has never existed before in a Columbia class. And that’s kind of what they’re trying to get at in a very artificial way,” Shanmugam said. “So it felt like they’re trying to discipline something they had zero right to discipline. But they’re basically reciting academic honesty, and [Interview Coder] does not let you do that.”
One way to read this is that cheating and "academic integrity" are contrivances, made-up scenarios of meaningless gotchas. Honesty, integrity – these seem to be irrelevant here, with or without the appellation of "academic." And those who demand integrity – culturally or technologically, academic or otherwise – are overreaching.
While Lee initially claimed that the Amazon stunt was meant to draw attention to the problems with corporations using LeetCode-style interviews for programmers, he has now fully embraced the “cheating” angle as the key selling point for his new tool. He's posted a manifesto on the company website that positions Cluely as part of the legacy of the calculator, spellcheck, and Google, which are all described as technologies of cheating. Indeed, all technological innovation is at first a kind of trick or deception. “Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it's normal.” The work of the future, the Cluely manifesto declares, will not involve “effort,” but shortcuts.
“So, start cheating. Because when everyone does, no one is.”