Dec 08, 2020

Top Considerations for Chatbot Development and Chatbot Personality Design

By Celene Osiecka

Sr. Director, Conversation Design

As I recently wrote (“Personality is Not Optional in Chatbot Design”), every chatbot has a personality whether or not you consciously install one, so you’d better plan for it. Now the question becomes: How do you design a chatbot personality that meets customer expectations in an engaging experience? 

Designing the perfect conversational chatbot interface can be a bit like the tale of Goldilocks. A comfortable yet engaging experience requires just the right amount of personality, humanity, and emotiveness. 

This blog draws on a lively panel discussion I recently hosted for [24]7.ai—now available on demand: Personalities in Conversational Chatbot Interfaces.

Design Chatbots with Audience and Use in Mind

It’s crucial to consider the chatbot’s application or use case when developing its personality. A healthcare AI chatbot, for example, will have a different tone from a financial services one, or from one designed purely for entertainment.

Likewise, it’s important to align the chatbot persona with the brand. A customer accustomed to a fun, light brand will be disappointed by a dry, boring bot personality on your company website.

“Personality is so context dependent,” said Margaret Urban, Staff Interaction Designer at Google. “You might have a bit more flourish when the user is there for entertainment. For a persona hosting a game show, for instance, there might not be such a thing as too much flourish.”

Don’t, on the other hand, imbue your conversational chatbot with more personality than it needs. “If you can’t 100 percent justify the personality, it runs the risk of being too much personality,” said Phillip Hunter, AI Product Consultant, CCAI Services. 

Locale matters too. When dealing with multiple markets, you have to consider how chatbot interactions are perceived in each region. Markets have their own characteristics, so you can’t apply one blanket design across all of them.

“You really do need to think about how each market perceives this kind of interaction,” said Dr. Joan Palmiter Bajorek, CEO of Women in Voice. “In some places, it's perfectly fine to jump in with a ‘You need to do this’… It's all about getting down to brass tacks. In other places, you need to have some back and forth to establish the rapport in the moment.”
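To make that concrete, here’s a rough sketch of market-aware openers. The market labels, wording, and tone assignments are purely illustrative placeholders, not guidance about any specific region; the point is only that directness versus rapport-building is a per-market design decision.

```python
# Hypothetical market-keyed openers: one market expects a direct, task-first
# opener, another expects a brief rapport-building exchange before the task.
OPENERS = {
    "market_direct": "What would you like to take care of today?",
    "market_rapport": "Hi! Hope your day is going well. What can I help you with?",
}

def opener(market: str) -> str:
    # Fall back to a neutral opener for markets that haven't been designed yet.
    return OPENERS.get(market, "How can I help you today?")
```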

“Consistency and cohesion are really important when it comes to personality,” Hunter said. “People come to your device, system, kiosk, or voice application to do something, right? It might be a little thing they’re trying to take care of, but anything that gets in their way will frustrate and possibly derail them.”

Start with Chatbot Persona

We strongly advise working from a shared frame of reference throughout the development process.

“We actually have a document that we can look back on and say, ‘Oh, this persona is Stephanie,’” said Dr. Palmiter Bajorek. “We go into what Stephanie is like and the type of responses she might give. Does she use emojis? Does she prefer exclamation marks? I imagine this person is almost like a friend to whom I’m giving a voice.”

She noted it’s best to develop the persona with a team. “Especially if you have three to five people who can sit down and say, ‘We agree today that this is the persona we’re homing in on.’”
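To give a sense of what such a reference document can capture, here’s a minimal sketch of a persona definition in code. The Persona fields and the “Stephanie” values are illustrative placeholders, not the actual document Dr. Palmiter Bajorek described.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    # A single source of truth the team can point back to during design reviews.
    name: str
    traits: list            # e.g., ["warm", "concise", "plain-spoken"]
    uses_emojis: bool
    uses_exclamation_marks: bool
    sample_responses: dict  # intent -> example reply in this persona's voice

stephanie = Persona(
    name="Stephanie",
    traits=["warm", "concise", "plain-spoken"],
    uses_emojis=False,
    uses_exclamation_marks=True,
    sample_responses={
        "greeting": "Hi there! How can I help today?",
        "clarification": "Just to be sure I have this right: you want to update your address?",
    },
)
```

Writing the persona down this way makes the team’s agreement testable: any drafted response can be checked against the fields everyone signed off on.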

Sorry (or Not Sorry)

How much should the persona of the chatbot emulate human behavior and empathy? It’s essential to find the right blend so you create rapport without seeming creepy.

For example, should a chatbot apologize? 

“The short answer is No,” said Warren Oshita, Senior UX Writer at [24]7.ai. “You want to start with sympathy, then get to empathy, and then compassion.” He suggested that saying “Sorry” stops the conversation at the level of sympathy when it should move beyond that. It also isn’t believable, he added. Of course, it might not be believable coming from a person either: “I’ve read lots of chat transcripts where people accuse the live chat agent—a human being—of lying when saying they’re sorry.”

Urban had a different perspective. “There’s a difference between the discourse marker—Sorry—and the emotional declaration, ‘I’m sorry,’” she said. “It’s actually really useful to use the discourse marker as a way to show participants in the conversation, ‘Hey, we’re not making the progress you were hoping to make… and I’m going to pivot to something else.’”

She added that pivoting from “Sorry” is valuable for correcting an initial misrecognition or for better matching the user’s request.
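As a rough illustration of that idea, here’s a hypothetical sketch in which “Sorry” works as a discourse marker: it flags the course correction and immediately moves the conversation forward, rather than dwelling on an emotional apology. The function and wording are assumptions for illustration only.

```python
def pivot_after_misrecognition(heard: str, reprompt: str) -> str:
    # "Sorry" here signals a pivot, not an apology: acknowledge the miss
    # briefly, then steer straight back toward the user's actual request.
    return f"Sorry, I thought you meant {heard}. {reprompt}"

print(pivot_after_misrecognition(
    heard="your billing address",
    reprompt="Did you want to update your shipping address instead?",
))
```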

The Helpful Chatbot Heart

Helpfulness plays a big part in providing the right amount of empathy.

“I think a lot of folks transpose the idea of empathy and whether to apologize,” Hunter said. Empathy is really about helpful action. And the most effective way to help, for a system, is to make sure that someone continues making progress toward what they want. “So, yes, we all want our feelings to be acknowledged,” said Hunter. “If you do that in a way that feels good, that will be seen as empathy.”

Hunter continued: “Some of the worst experiences are when the bot just says, ‘I’m sorry it didn’t work.’ You’re left with no recourse. You can’t fix it. And so even though that sounds nice, it’s a terrible experience.”
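One way to read Hunter’s point in practice: never acknowledge a failure without attaching a next step. Here’s a minimal, hypothetical sketch; the message text and options are placeholders.

```python
def fallback_with_recourse(task: str) -> dict:
    # Acknowledge the miss, but never leave the user at a dead end:
    # always attach actions that keep them moving toward their goal.
    return {
        "message": f"That didn't go through. Let's get your {task} handled another way.",
        "options": [
            "Try rephrasing your request",
            "See step-by-step instructions",
            "Chat with a live agent",
        ],
    }
```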

Learn More About Chatbot Personality Design

Read the next post in the series: Too Human for Humans: Ethical Issues in Chatbot Personality Design.
 
Read my first post in the series: Personality is Not Optional in Chatbot Design.
 
Watch the full panel discussion as an on-demand webinar: Personalities in Conversational Interfaces.
 
Explore our AIVA conversational AI technology.
 
As always, feel free to contact us at info@247.ai.
