Chit Chat

Common hill myna (Image credits)

Chit: an official note, sometimes a voucher or an IOU; a slip of paper that grants permission

Chat: an informal conversation; a friendly talk

Chit-chat: "A reduplication with vowel variation of chat," according to the OED, like "bibble-babble" and "tittle-tattle." "The reduplication implies repetition or reciprocation, possibly with diminutive effect," the dictionary says.

"AI" chatbots might appear to be conversational. They might seem friendly, communicative, understanding. But what they're offering is closer to chit than to chat – orders from oligarchs and would-be overlords who deign to sell us "agency" by ensnaring us in their machinery of extraction and acquiescence.


“Are Chatbots Safe for Kids?” Education Week asked last week, on the heels of the FTC’s announcement of an inquiry into the effects of the technology on children and teens.

According to Betteridge’s Law of Headlines, at least, the answer to Ed Week’s question is “no.” But we might want to ask the follow-up: “Are chatbots safe for anyone?” Indeed, there’ve been a number of high-profile cases of “AI”-related deaths (and near-deaths and psychological breaks) – several suicides and at least one murder; and while many of these have involved teenagers, not all of them have. To a certain extent, handwringing about whether “AI” – specifically “‘AI’ companions” – harms children distracts us from broader conversations about the negative consequences of the technology: the harms to all of us, to knowledge and art, to democracy, to the environment.

We have seen this with regard to social media and cellphones. We should know the trick by now. In several US states and in other countries (the UK and Australia, for example), there has been a recent push to enact age-based restrictions on social media, and one can readily imagine that any new regulatory efforts in the US regarding “AI” will follow a similar path: “think of the children” theatre that (maybe, maybe) results in a few half-hearted “guard rails” – curbing access, censoring content, and most likely shifting responsibility entirely onto parents. These will be “guard rails” that are easily bypassed or ignored, that change nothing about the underlying issues of exploitation and manipulation. Issues that affect everyone.


This summer, Common Sense Media issued a report on teens’ use of “‘AI’ companions,” one subset of the generative “AI” market that claims to offer creative conversation and companionship via chatbots, with products pitched around “play” and “creativity” and often explicitly targeting children and teens.

The report found that 72% of teens had used “AI companions,” with 13% saying they used them daily – slightly more girls than boys. Almost half of those surveyed viewed these products as merely “tools” or “programs”; and a third described their usage as “entertainment.” But 12% reported using these chatbots for emotional and mental health support, the same percentage who said they tell the “AI” things they wouldn’t say to friends or family. The majority of those surveyed indicated they still prefer human friendships; and yet one third said they’d choose “AI” over people.