Let’s dive into some of the important ethical issues around designing chatbot personalities.
This is the third blog based on an engaging panel I recently hosted, now available on demand: Personalities in Conversational Chatbot Interfaces.
In the first two blogs, we explored the why and how of infusing your chatbots with personality. Check them out here:
The question of just how humanlike to make your chatbots has both practical and ethical consequences. Humanlike chatbots encourage connection and greater engagement, but are we deceiving users?
This is not an idle question. Thanks to advances in technology, it’s possible to create chatbots and voice assistants—as well as virtual reality, augmented reality, and photorealistic animations—that are nearly indistinguishable from humans.
The Uncanny Valley describes the dip in likability that occurs as AI approaches, but does not quite achieve, human likeness. Studies show that likability scores decline as AI nears the point where users can't tell what they are dealing with. Interestingly, though, the scores climb again once AI seems to cross the Valley.
“We’re getting closer and closer to humanity,” noted Dr. Joan Palmiter Bajorek, CEO of Women in Voice. “Of course, the Uncanny Valley is moving all the time.”
Use case and context play major roles in determining what is comfortable or appropriate for users. For instance, a healthcare company wanted a chatbot that sounded and acted less humanlike to avoid any confusion with actual humans in the room. “So, they were pushing for a more robotic sound or persona,” Bajorek said. “Usually, clients want it to be more empathetic, thoughtful, and savvy.”
How helpful should a chatbot be when facing queries outside its purview? Expert opinions vary as to whether a chatbot, which can’t be truly sorry, should even say “sorry” when it can’t help.
Nearly all humans have some capacity for empathy. That’s why yawning and laughter are contagious. Chatbots don’t have empathy or emotions; is it wrong to design them to imply they do?
As a parent, I think a lot about how we raise our children in this digital era and how technology shapes them. The ethical considerations alone are staggering.
Should we treat chatbots as if they have feelings? Should our children say “please” and “thank you” to what is essentially a brilliant piece of software? A related consideration: How do we design chatbots to respond to verbal abuse? Is it unacceptable when directed at “only” a chatbot? Should they ignore it? Shut down? Call for human help? Return fire?
True story: My daughter was trying to get Alexa to play her dance-class song, “Candy Cane Lane” by Sia. She said “Alexa, play dance class one” and many other variants, none of which worked because she didn’t say it perfectly. She got frustrated and said, loudly, “YOU DUMB THING!” I was taken aback and about to correct her for being mean when I thought, Maybe I should just let her vent? It’s a robot, she can’t hurt its feelings. But then I thought, Would letting her yell at Alexa encourage her to be unkind and unjust elsewhere?
Turns out, all these personality design issues have serious societal consequences. The growing ubiquity of AI means if we code such things as misogyny and racist attitudes into chatbots, or allow these evils to be perpetuated against chatbots, we contribute to real-world problems.
A 2019 UNESCO report, I’d Blush If I Could, showed how AI voice assistants, when presented as eager-to-please young women, propagate harmful gender biases and contribute to the gender gap that exists in digital skills.
Bajorek suggests chatbot designers have a responsibility to create personalities that push back against harsh language and sexual harassment: “If people are using abusive language and saying abusive things, I think that it’s on us and our design teams to be prepared for it.” Personalities must be designed to react appropriately.
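Bajorek's point about being prepared for abusive language can be sketched in code. The following is a minimal, hypothetical abuse-handling layer: the keyword list, function names, and response strings are illustrative assumptions on my part, not a production approach (real systems would use a trained classifier and a reviewed response policy).

```python
# Minimal sketch of an abuse-handling layer for a chatbot.
# The blocklist and responses are illustrative placeholders only.

ABUSIVE_TERMS = {"stupid", "idiot", "shut up"}  # hypothetical blocklist


def respond(message: str) -> str:
    """Return a firm, non-retaliatory reply to abusive input,
    or a normal acknowledgment otherwise."""
    lowered = message.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        # Push back without escalating or "returning fire."
        return "I can't continue with that language. Let's try again."
    return "Happy to help! What do you need?"
```

The key design choice is that the pushback response is firm but never retaliatory, which reflects the panel's view that personalities must react appropriately rather than ignore abuse or mirror it.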
Speech recognition plays an important role in combating discrimination. Personal voice assistants are 30 percent less accurate when dealing with non-American accents. Chatbot designers need to address this bias, which favors predominantly white, privileged Americans, by training AI on a more diverse range of voices.
We also have to decide how much we allow chatbots or voice assistants to learn from their user interactions. Consider Microsoft Tay, a social learning chatbot the company launched in 2016. Sadly, internet trolls fed it so many slurs and such nasty rhetoric that in less than 16 hours Tay was spewing racist, genocidal, and misogynist messages at users—some targeted at specific individuals.
Imagine a digital assistant hears something troubling in the background. Or a chatbot designer stumbles across something alarming while reviewing transcripts for UX improvement. Where do we draw the line when individuals may be in harm’s way? How should we design chatbots to respond at signs of domestic violence, or suicide risk, or bomb threats?
“Do we just shut them off?” Prayaga asked. “Because this is not something the chatbot can handle, and is not even supposed to answer.”
Prayaga recently conducted an informal test of digital assistants to learn how they would respond to a woman saying she was in an abusive relationship. “All the different assistants had a different response,” she said. Most didn’t know how to respond or answered along the lines of “Here are some domestic violence articles.”
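One way to avoid the inconsistent responses Prayaga observed is an explicit crisis-routing step: if a message matches a crisis pattern, hand off to a human and surface resources rather than letting the bot improvise. The patterns, categories, and reply text below are illustrative assumptions, not vetted clinical guidance.

```python
# Sketch of a crisis-routing step for a digital assistant.
# Patterns and replies are illustrative only; a real deployment would
# involve trained classifiers, human review, and expert-approved scripts.

CRISIS_PATTERNS = {
    "abusive relationship": "domestic_violence",
    "want to hurt myself": "suicide_risk",
}


def route(message: str) -> dict:
    """Classify a message and decide whether to escalate to a human."""
    lowered = message.lower()
    for pattern, category in CRISIS_PATTERNS.items():
        if pattern in lowered:
            return {
                "escalate": True,  # hand off to a trained human
                "category": category,
                "reply": "I'm connecting you with someone who can help.",
            }
    return {"escalate": False, "category": None, "reply": None}
```

The point of the sketch is the shape of the decision, not the pattern matching: every crisis category gets a deliberate, pre-approved escalation path instead of a generic "here are some articles" answer.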
Done ethically, analyzing user speech can save lives. Natural language sentiment analysis can be tuned, for example, to recognize early signs of stroke or depression in the elderly.
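To make the idea concrete, here is a deliberately crude sketch of tracking a rolling average of per-message sentiment so that a sustained negative drift can raise a flag. The keyword scorer, window size, and threshold are all illustrative assumptions; a real system would use a validated sentiment model with clinical oversight.

```python
# Sketch: rolling sentiment average to flag sustained negative drift.
# The keyword-based scorer is a stand-in for a real sentiment model.
from collections import deque

NEGATIVE_WORDS = {"tired", "hopeless", "alone", "sad"}  # illustrative


def score(message: str) -> int:
    """Crude sentiment score: -1 per negative keyword found."""
    words = message.lower().split()
    return -sum(1 for w in words if w in NEGATIVE_WORDS)


class SentimentTrend:
    def __init__(self, window: int = 5, threshold: float = -1.0):
        self.scores = deque(maxlen=window)  # keep only recent messages
        self.threshold = threshold

    def observe(self, message: str) -> bool:
        """Record a message; return True if the rolling average
        has drifted below the alert threshold."""
        self.scores.append(score(message))
        avg = sum(self.scores) / len(self.scores)
        return avg <= self.threshold
```

Note that the alert fires on a trend across messages, not on any single utterance, which is what distinguishes this from the keyword-triggered crisis handling discussed above.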
Ethical questions in chatbot personality design are as new and groundbreaking as the technology itself. We are all in the process of figuring out the answers together. One thing is clear: As AI capabilities continue to evolve, our biggest questions aren’t about technology, but how to use it ethically and responsibly.
For more insights on chatbot personality design and ethics …
Listen to our full panel discussion as an on-demand webinar: Personalities in Conversational Interfaces.
Check out my first two blogs in the series:
Check out our AIVA conversational AI technology.
Or contact us at [email protected].