Who Cares to Chat?

Brown pelican (Image credits)

In 2016 Microsoft made headlines when it debuted Tay, a chatbot modeled on the speech of a teenage girl, which rather dramatically turned into a "Hitler-loving sex robot within 24 hours," as The Telegraph put it. The Twitter bot was designed to “learn” from the other Twitter users who interacted with it, parroting the content and style of their text; and – because, well, you know, Twitter – those users quickly realized that they could get Tay to say some truly horrible things. The bot's responses grew increasingly incendiary: denying the Holocaust and linking feminism to cancer, for starters.

Despite the public relations disaster – Microsoft promptly deleted the Tay bot – just a few days later Bloomberg Businessweek declared that “the future of Microsoft is chatbots.” “Clippy’s back,” the headline read.

For all the chatter that we're now, in 2025, entering the age of chatbots, we should recognize that we've been there a while. The automation of tasks through some sort of social, conversational bot – a "user interface agent" – has long been a goal of technology companies, a lineage that can be traced not just back to Tay or even to Clippy, but to one of the very earliest AI programs, ELIZA. Chatbots represent a lengthy effort in technological development, sure, but also a steady drumbeat of marketing and PR insisting that this is the future businesses and consumers somehow want, that this is the future schools and students want.

User Interface Agent as Pedagogical Agent

Clippy was the user interface agent that came bundled with Microsoft Office starting in 1997. It remains arguably the best known and most hated user interface agent in computer history. 

The program’s official name was Office Assistant, but the paperclip was the default avatar, and few people changed it. Indeed, almost every early website offering guidance on using Microsoft’s software suite included instructions on how to disable the feature. (Microsoft turned off the Assistant by default in Office XP and removed Clippy altogether from Office 2007.)

The Office Assistant can trace its lineage back to Microsoft Bob, released in 1995 and itself one of the software company’s most storied failures. (TIME named it one of “The 50 Worst Inventions” – “Imagine a whole operating system designed around Clippy, and you get the crux of Microsoft Bob.”) Bob was meant to provide a more user-friendly interface to the Microsoft operating system, functioning in lieu of the Windows Program Manager. The challenge – quite similar to the one that Clippy was supposed to tackle – was to make computer software approachable to novice users. In theory at least, this made sense, as the number of consumers being introduced to the personal computer was growing rapidly: according to US Census data, 22.8% of households had computers in 1993, a figure that had grown to 42.1% by 1998.

How do you teach people to use a personal computer? And more significantly, can the personal computer do the teaching?

Microsoft drew on the work of Stanford professors Clifford Nass and Byron Reeves (who later joined the Bob project as consultants) and their research into human-computer interactions. Nass and Reeves contended that people preferred to interact with computers as “agents” not as tools. That is, computers are viewed unconsciously as social actors, even if consciously people know they’re simply machines. And as such, they argued, people respond to computers in social ways and in turn expect computers to follow certain social rules.

“The question for Microsoft was how to make a computing product easier to use and fun,” Reeves said in a Stanford press release timed with the Consumer Electronics Show’s unveiling of Microsoft Bob. “Cliff and I gave a talk in December 1992 and said that they should make it social and natural. We said that people are good at having social relations – talking with each other and interpreting cues such as facial expressions. They are also good at dealing with a natural environment such as the movement of objects and people in rooms, so if an interface can interact with the user to take advantage of these human talents, then you might not need a manual.” In other words, if you made the software social, people would find it easier to learn and use.

Microsoft Bob visualized the operating system as rooms in a house, with various icons of familiar household items representing applications – the clock opened the calendar, the pen and paper opened the word processing program.

This new “social interface” was hailed by Bill Gates at CES as “the next major evolutionary step in interface design.” But it was a flop, panned by tech journalists for its child-like visuals, its poor performance, and – perhaps ironically, considering Microsoft’s intentions for Bob – its confusing design.

Nevertheless, Microsoft continued to build Bob-like features into its software, most notably with Clippy, which offered help to users as they attempted to accomplish various tasks within Office.

Clippy as Pedagogical Agent

Start writing a letter in (pre-Office XP) Microsoft Word. No sooner have you finished typing “Dear” than Clippy appears in the corner of your screen. “It looks like you’re writing a letter,” Clippy observes. The talking paperclip then offers a choice: get help writing the letter or continue without help. Choosing the former opens up the Letter Wizard, a four-step process for setting the letter’s format, layout, and style.

Other actions within the Office suite triggered similar sorts of help from Clippy – offers to implement various features, or advice if, according to the program, the user appeared “stuck” or struggling. The Office Assistant also provided access to the Answer Wizard, suggesting a series of possible solutions to a user’s query. And it sometimes appeared alongside certain dialog boxes – saving or printing, for example.

In all these instances, Clippy was meant to be friendly and helpful. Instead it was almost universally reviled.

Of course, software can be universally reviled and still marketed as useful and necessary. (The learning management system, for example.) But it seems doubtful that Clippy was all that effective at helping newcomers understand Office’s features, as Luke Swartz found in his study of Clippy and other user interface agents.

Swartz suggests that part of the problem with Clippy was that it was poorly designed and then (mis)applied to the wrong domain. If you follow Nass and Reeves’ theories about humans’ expectations for interactions with computers, it’s clear that Clippy violated all sorts of social norms. The animated paperclip was always watching, always threatening to appear, always interrupting. For users busy with the rather mechanical tasks of typing and data entry, there was never a right moment for a friendly chatbot to interject, let alone to "teach" software skills.

And yet, despite our loathing and mockery of Clippy, pedagogical agents have been a mainstay in education technology for at least the past forty years – both before the infamous Microsoft Office Assistant and since. These agents have frequently been features of intelligent tutoring systems and, by extension, subjects of education research. As with so much of ed-tech, that research is, at best, inconclusive: depending on their design and appearance… pedagogical agents may or may not be beneficial… to some students… under some conditions… in some disciplines… when working with certain content or material.

Agents of Manipulation

What interests me at least as much as whether or not any technology, educational or otherwise, works technologically is the cultural milieu in which the very assessment of that functionality and utility occurs. That is to say: what makes "chat" so appealing now (and to whom), when it hasn't been for decades?

According to Luke Swartz's research on Clippy (a Master's thesis approved in 2003), people preferred asking other people for help with Office rather than relying on the talking paperclip (or even the user manual) for guidance. Novice computer users were more likely to turn to their co-workers for help and support. If no one was around to help them, they said they'd wait. They preferred to learn from other people.

But by the time Microsoft released Tay in 2016, things seemed to be shifting culturally; and as Sherry Turkle succinctly put it in the subtitle of her 2011 book Alone Together, we'd come to expect more from technology and less from each other.

We feel afraid to ask other people for help. People are too slow, too frustrating, and often don't have the answer either. We feel it's too risky – professionally, politically – to show weakness, ignorance, vulnerability. We feel it's safer to trust the chatbot instead.