New research that I conducted with my colleagues at the University of Oxford—Felipe Thomaz, Rhonda Hadi and Andrew Stephen—reveals that making chatbots more humanlike is a double-edged sword. On one hand, when customers are neutral, happy or even sad, interacting with humanized chatbots can boost customer satisfaction. On the other, when customers are angry, interacting with humanized chatbots only deepens their dissatisfaction, meaning that a company’s most dissatisfied customers are often the ones handled worst.
More important, this lower satisfaction doesn’t just affect the single chat interaction or the customer’s feelings about the chatbot itself; it extends to negative feelings toward the entire company and decreases consumers’ desire to purchase from that company in the future.
Chatbots are becoming more common across a host of industries as companies replace human customer-service agents on their websites, social-media pages and messaging services. Designed to mimic humans, these anthropomorphized chatbots often have human names (such as Amtrak’s Julie or Lufthansa’s Mildred), humanlike intonation (for instance, Amazon’s Alexa or Apple’s Siri) and humanlike appearances in the form of avatars or virtual characters. Companies even design their chatbots to have cute or quirky personalities and interests.
Mickey and Doughboy
Typically, this trend toward humanization serves companies well, strengthening their brands, products and technologies, chatbots included. Companies that humanize their brands through anthropomorphized mascots such as the Pillsbury Doughboy, Disney’s Mickey Mouse and the M&M’s Red, Yellow and Green characters, among others, develop more personal relationships with their customers.
Companies also humanize the products themselves, with advertising that depicts a Gatorade bottle as a heavyweight fighter or a BMW car as an enticing woman. Beyond advertising, products can be made to look more human, such as the popular British vacuum Henry and car grilles designed to make the cars appear to smile. Past research shows that humanization benefits products: consumers rate humanized products more highly, choose them over alternatives and are more reluctant to replace these treasured “friends.”
Existing research has also shown that, in general, humanized chatbots benefit companies. Humanized chatbots have been shown to be more persuasive, to increase enjoyment and to provide the added benefit of social presence. Consumers are more likely to trust humanlike technology interfaces because they believe them to be more competent and less prone to violations of trust. Avatars can make online shopping more enjoyable because the experience feels more like a social shopping trip with a friend. Little wonder, then, that the general industry trend is toward the humanization of technology, including chatbots.
Not quite human
Yet a growing body of research suggests that the effect of humanization is more nuanced. For instance, research shows that people’s preference for humanized robots has a limit, beyond which it falls dramatically, a phenomenon known as the “uncanny valley.” Robots that are too humanlike are “creepy,” make people feel unsettled and evoke an avoidance response.
Our research shows another instance where humanizing robots—in this case, chatbots—can backfire. We found that angry customers react negatively to humanized chatbots because, in humanizing them, companies often inadvertently raise consumers’ expectations that the chatbots will be able to plan, communicate and perform like a human.
That works fine when chatbots are fulfilling fairly simple tasks, such as tracking a package or checking an account balance, because a chatbot can likely complete those tasks effectively. But often humanized chatbots are simply unable to meet expectations, which leads to disappointment. Both angry and nonangry customers feel the disappointment of unmet expectations, but angry customers are more sensitive to it and more prone to act on it. They hold the humanized chatbot more responsible, respond aggressively, and “punish” the chatbot and the company it represents with lower ratings and diminished intentions to purchase.
In light of these findings, companies can use several strategies for deploying chatbots effectively. First, companies should determine whether customers are angry before they enter the chat, and then deploy an appropriate chatbot. If companies don’t have the technological capability to deploy different chatbots in real time, they could use nonhumanized chatbots in customer-service situations where customers tend to be angry, such as complaint centers, and humanized chatbots in neutral settings, such as product queries.
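For teams wiring up this kind of routing, the short Python sketch below illustrates the idea. It is a minimal illustration under stated assumptions, not a production design: the keyword heuristic stands in for a real sentiment classifier, and every name in it (detect_anger, route_chat, the persona settings) is hypothetical rather than any vendor’s actual API.

# Hypothetical sketch: detect anger before the chat starts, then pick a persona.
ANGER_CUES = {"furious", "angry", "unacceptable", "worst", "ridiculous", "refund now"}

def detect_anger(message: str) -> bool:
    """Crude keyword heuristic standing in for a trained sentiment classifier."""
    text = message.lower()
    return any(cue in text for cue in ANGER_CUES)

def route_chat(opening_message: str) -> dict:
    """Choose a chatbot persona before the conversation begins."""
    if detect_anger(opening_message):
        # Angry customer: a plain, tool-like bot with no name or avatar.
        return {"persona": "nonhumanized", "name": None, "avatar": None}
    # Neutral or positive customer: a humanized bot is likely to help.
    return {"persona": "humanized", "name": "Julie", "avatar": "friendly_face.png"}

if __name__ == "__main__":
    print(route_chat("Hi, where is my package?"))                    # -> humanized
    print(route_chat("This is unacceptable. I want a refund now."))  # -> nonhumanized

In practice the heuristic would be replaced by a trained sentiment model, but the routing decision itself stays this simple: assess anger first, then choose how humanlike the bot should appear.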
If companies wish to continue using humanized chatbots in all contexts for the sake of brand consistency, they should play down the bot’s capabilities at the beginning of the chat. By lowering customers’ expectations that the chatbot will perform as well as a human, the company decreases the chance that customers will be disappointed and potentially respond negatively. Some companies already employ this strategy effectively. For example, Slack’s chatbot introduces itself by saying, “I try to be helpful (But I’m still just a bot. Sorry!).” Other companies are less astute, describing their chatbots as “geniuses” or as having high IQs. In these instances, the companies are setting their chatbots up for failure, and their angriest, most dissatisfied customers up for disappointment.