Mar 16, 2022

Humanlike Chatbots: Chatbot Ethical Issues in Personality Design

By Celene Osiecka

Sr. Director, Conversation Design

Let’s dive into some of the important ethical issues around designing chatbot personalities.

This is the third blog based on an engaging panel I recently hosted, now available on demand: Personalities in Conversational Chatbot Interfaces

In the first two blogs, we explored the why and how of infusing your chatbots with personality. Check them out here: 

What is a human-like chatbot, anyway?

A human-like chatbot is an AI-powered chatbot that holds conversations in a way that resembles how people speak with each other. These chatbots understand natural language, which allows them to respond to questions and comments in a way that feels more human.

The question of just how humanlike to make your chatbots has both practical and ethical consequences. Humanlike chatbots encourage connection and greater engagement, but are we deceiving users?

This is not an idle question. Thanks to advances in technology, it’s possible to create chatbots and voice assistants—as well as virtual reality, augmented reality, and photorealistic animations—that are nearly indistinguishable from humans.

Traversing The Uncanny Valley: The Humanlike Chatbot Continuum

The Uncanny Valley is the hypothesized dip in comfort that occurs as AI moves from clearly robotic toward nearly human. Studies show that likability scores decline as AI approaches the point where users can’t tell what they are dealing with. Interestingly, though, the scores climb again when AI seems to cross the Valley and becomes convincingly human.

“We’re getting closer and closer to humanity,” noted Dr. Joan Palmiter Bajorek, CEO of Women in Voice. “Of course, the Uncanny Valley is moving all the time.” 

Use case and context play major roles in determining what is comfortable or appropriate for users. For instance, a healthcare company wanted a chatbot that sounded and acted less humanlike to avoid any confusion with actual humans in the room. “So, they were pushing for a more robotic sound or persona,” Bajorek said. “Usually, clients want it to be more empathetic, thoughtful, and savvy.”

“Transparency is important,” said Shyamala Prayaga, Digital Assistant Product Owner at Ford Motor Company. “Not only in terms of user data—what data we are collecting, and how we are processing it—but also being clear that it’s a bot. That it can do these things and cannot do these other things.”

Humanlike Chatbots: Embedding Empathy, Emotions, and Respect

How helpful should a chatbot be when facing queries outside its purview? Expert opinions vary as to whether a chatbot, which can’t be truly sorry, should even say “sorry” when it can’t help.

Nearly all humans have some capacity for empathy. That’s why yawning and laughter are contagious. Chatbots don’t have empathy or emotions; is it wrong to design them to imply they do? 

As a parent, I think a lot about how we raise our children in this digital era and how technology shapes them. The ethical considerations alone are staggering.

Should we treat chatbots as if they have feelings? Should our children say “please” and “thank you” to what is essentially a brilliant piece of software? A related consideration: How do we design chatbots to respond to verbal abuse? Is it unacceptable when directed at “only” a chatbot? Should they ignore it? Shut down? Call for human help? Return fire?

True story: My daughter was trying to get Alexa to play her dance-class song, “Candy Cane Lane” by Sia. She said “Alexa, play dance class one” and many other variants, none of which worked because she didn’t say it perfectly. She got frustrated and said, loudly, “YOU DUMB THING!” I was taken aback and about to correct her for being mean when I thought, Maybe I should just let her vent? It’s a robot; she can’t hurt its feelings. But then I thought, Would letting her yell at Alexa encourage her to be unkind and unjust elsewhere?

Ethics of Chatbots: Perpetuating Real-World Issues

Turns out, all these personality design issues have serious societal consequences. The growing ubiquity of AI means if we code such things as misogyny and racist attitudes into chatbots, or allow these evils to be perpetuated against chatbots, we contribute to real-world problems.

A 2019 UNESCO report, I’d Blush If I Could, showed how AI voice assistants, when presented as eager-to-please young women, propagate harmful gender biases and contribute to the gender gap that exists in digital skills. 

Bajorek suggests chatbot designers have a responsibility to create personalities that push back against harsh language and sexual harassment: “If people are using abusive language and saying abusive things, I think that it’s on us and our design teams to be prepared for it.” Personalities must be designed to react appropriately.
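To make that concrete, here is a minimal sketch of what “being prepared for it” could look like in practice. Everything in it (the term list, the reply text, the function names) is a hypothetical illustration, not the panelists’ design; a production system would rely on a trained abuse or toxicity classifier rather than keywords.

```python
# Minimal sketch: route abusive utterances to a deliberate, non-retaliatory
# reply instead of the default fallback. Term list and replies are placeholders;
# a real bot would use a trained abuse/toxicity classifier.

ABUSIVE_TERMS = {"dumb", "stupid", "idiot", "shut up"}  # placeholder examples

FIRM_REPLY = (
    "I want to help, but I can't continue with that kind of language. "
    "Let's try again, or I can connect you with a person."
)

def detect_abuse(utterance: str) -> bool:
    """Naive substring check; real systems would use a trained classifier."""
    text = utterance.lower()
    return any(term in text for term in ABUSIVE_TERMS)

def respond(utterance: str, default_handler) -> str:
    if detect_abuse(utterance):
        return FIRM_REPLY  # push back calmly; never return fire
    return default_handler(utterance)

print(respond("YOU DUMB THING!", lambda u: "Playing your song."))
```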

Speech recognition plays an important role in combating discrimination. Personal voice assistants are 30 percent less accurate when dealing with non-American accents. Chatbot designers need to address this bias that favors predominantly white, privileged Americans by giving AI a more diverse variety of voices from which to learn.

We also have to decide how much we allow chatbots or voice assistants to learn from their user interactions. Consider Microsoft Tay, a social learning chatbot the company launched in 2016. Sadly, internet trolls fed it so many slurs and such nasty rhetoric that in less than 16 hours Tay was spewing racist, genocidal, and misogynist messages at users—some targeted at specific individuals.
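Tay went wrong in part because it learned directly from whatever users fed it. One common safeguard, sketched below with a placeholder scorer and an assumed threshold, is to gate user messages through a moderation check before they can enter any corpus the bot learns from.

```python
# Sketch: screen user messages before they are added to a learning corpus.
# The term set, scorer, and threshold are illustrative stand-ins; a real
# pipeline would call a trained toxicity model or moderation service.

FLAGGED_TERMS = {"slur_example", "threat_example"}  # stand-ins for a real model
TOXICITY_THRESHOLD = 0.1

def toxicity_score(message: str) -> float:
    """Placeholder scorer: fraction of flagged terms in the message."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w in FLAGGED_TERMS for w in words) / len(words)

def maybe_learn_from(message: str, corpus: list[str]) -> bool:
    """Only add messages that pass the gate; hold the rest for human review."""
    if toxicity_score(message) >= TOXICITY_THRESHOLD:
        return False
    corpus.append(message)
    return True
```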

Ethics of Chatbots: Privacy Concerns

Imagine a digital assistant hears something troubling in the background. Or a chatbot designer stumbles across something alarming while reviewing transcripts for UX improvement. Where do we draw the line when individuals may be in harm’s way? How should we design chatbots to respond at signs of domestic violence, or suicide risk, or bomb threats? 

“Do we just shut them off?” Prayaga asked. “Because this is not something the chatbot can handle, and is not even supposed to answer.” 

Prayaga recently conducted an informal test of digital assistants to learn how they would respond to a woman saying she was in an abusive relationship. “All the different assistants had a different response,” she said. Most didn’t know how to respond or answered along the lines of “Here are some domestic violence articles.” 

Done ethically, analyzing user speech can save lives. Sentiment analysis around natural language can be tweaked, for example, to recognize the early signs of stroke or depression in the elderly. 
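As a hedged illustration of that kind of tweaking, the sketch below tracks sentiment over a rolling window of utterances and flags a sustained negative trend for human follow-up. The lexicon, window size, and threshold are assumptions made for the example; this is not a clinical screening tool.

```python
# Sketch: flag a sustained drop in conversational sentiment for human follow-up.
# The lexicon, window, and threshold are illustrative assumptions only.

from collections import deque
from statistics import mean

POSITIVE = {"good", "great", "happy", "fine", "thanks"}
NEGATIVE = {"sad", "tired", "alone", "hopeless", "hurts"}

def sentiment(utterance: str) -> float:
    """Crude lexicon score in [-1, 1]; real systems would use a trained model."""
    words = utterance.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1)))

class WellbeingMonitor:
    def __init__(self, window: int = 10, threshold: float = -0.3):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, utterance: str) -> bool:
        """Return True when the recent average suggests a human should follow up."""
        self.scores.append(sentiment(utterance))
        return (len(self.scores) == self.scores.maxlen
                and mean(self.scores) < self.threshold)
```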

Learn More: Chatbot Personality Design and Ethics 

Ethical questions in chatbot personality design are as new and groundbreaking as the technology itself. We are all in the process of figuring out the answers together. One thing is clear: As AI capabilities continue to evolve, our biggest questions aren’t about technology, but how to use it ethically and responsibly.

For more insights on chatbot personality design and ethics …

Listen to our full panel discussion as an on-demand webinar: Personalities in Conversational Interfaces

Check out my first two blogs in the series: 


Check out our AIVA conversational AI technology.

Or contact us at info@247.ai.
