As a young girl, I spent hours chatting with A.L.I.C.E., giggling and marveling at what she would reply to my inane questions. She did amazingly well for a chatbot made in 1995, if I remember correctly. I was obsessed with all the AI in shows and movies like Star Trek, Star Wars, Short Circuit, WarGames, and The Matrix – basically anything that seemed human but wasn’t. In 2005, I finally found a job where I could actually BUILD these things, and it was heaven. I’ll be the first to admit it: my name is Celene and I’m addicted to chatbots.
When I wake up in the morning, I never have that feeling of dread about starting my day. I am always elated to be doing what I do, because I know my team and I are building the technology of the future. We’re thinking of innovative ways for chatbots to seem more human while also helping people, and the cooler we can make them, the giddier I get (just like I did when chatting with A.L.I.C.E.).
Even today, I watch for opportunities in pop culture to see what people are dreaming up for AI hundreds of years from now, because ultimately I think that’s our goal as conversational designers. The other day, I finally had the chance to watch Passengers, the movie starring Chris Pratt and Jennifer Lawrence as passengers aboard a ship bound for a second Earth, who are awakened 90 years too early from hibernated sleep after the ship malfunctions. My team and I gathered to discuss it, and below is the result of our conversation.
(If you haven’t seen it yet and are not a fan of spoilers, stop now!)
The first to wake is Jim (Chris Pratt), and for the first 30 minutes or so of the movie, it’s just him, so the only thing he has to interact with is AI. There are no crew members, no other passengers, and no communication with Earth to help him. As a conversational designer, I found it incredibly intriguing to watch this unfold! I sat there munching on my popcorn, shaking my head at all the painfully poor design choices! It was that rare movie where I was so close to the subject matter that I couldn’t enjoy it for what it was. The verisimilitude vanished before my eyes. But it was cool.
A few things went right.
I was initially pretty astounded at the first interaction Pratt’s character had with AI. As he was waking up from hibernation, the holographic woman who appeared before him in his pod was programmed to be warm, non-confrontational, and welcoming. She was brief in the information she presented to him because, after all, he had just reanimated from essentially being dead. So it was nice of her to lessen his cognitive load, as any good AI would do given the circumstances. She also showed empathy for his situation (though not realizing he woke up 90 years too early) and was pretty bright too. She called him “James,” but he interrupted and corrected her by saying, “It’s Jim,” and she simply repeated “Jim,” with a smile. A pretty good use of barge-in and contextual awareness. She stated he was in perfect health and that he’d be on his way in a few minutes. But what if he wasn’t in perfect health? What if he interrupted her with questions? All things a conversational designer thinks about – how a user can go off the rails and how to handle it. I’m thinking a case where a user wakes up 90 years too early just does not happen, so we can’t really blame her for not addressing it, can we?
It all went downhill from there.
She follows him to his room, where she asks him to scan his ID bracelet to confirm his luggage delivery. She waits maybe half a second before asking him to do it again, almost sounding impatient. My first thought was that the no-input timeout was WAY too short for someone who had just woken from hibernation after what she thinks is 90+ years! So just cool your jets, Betsy, poor Jim has a lot to process!
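If I were sketching this behavior in code (purely hypothetical names here, not any real framework), a more patient no-input handler might use a generous silence timeout and escalate through gentler reprompts instead of repeating the same demand:

```python
# A minimal sketch of no-input timeout handling for a voice prompt.
# All names here (Prompt, reprompts, no_input_timeout_s) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Prompt:
    initial: str
    reprompts: list = field(default_factory=list)
    no_input_timeout_s: float = 8.0  # generous: the user may need a moment to act
    attempts: int = 0

    def on_no_input(self) -> str:
        """Return the next (escalating) reprompt after a silence timeout."""
        # Clamp to the last reprompt so we never run out of things to say.
        text = self.reprompts[min(self.attempts, len(self.reprompts) - 1)]
        self.attempts += 1
        return text

scan = Prompt(
    initial="Please scan your ID bracelet to confirm your luggage delivery.",
    reprompts=[
        "When you're ready, hold your ID bracelet up to the scanner.",
        "No rush. Your bracelet is on your wrist; just touch it to the panel.",
    ],
)
```

Each silence triggers `on_no_input()`, so the bot softens rather than nags; the half-second Betsy gave Jim would live in that timeout value.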
Shortly after, we’re introduced to another AI in a room intended to host an education session for all those who have awoken. As soon as Jim enters, he thinks he’s the first to arrive, and he’s eager to meet all the passengers. Little does he know he’s the only one! But the AI is again so unintelligent that it doesn’t realize Jim is the only attendee and starts by addressing the “group.” She has no ability to judge whether everyone has arrived or who is there. There are many very low-tech ways to gather this information (e.g., scan your ID bracelet when entering) so the AI doesn’t come across as stupid.
And then Jim pushes it further. He asks things like “So why am I alone?” and the AI responds with “We are all in this together.” An obvious natural language (NL) mismatch, and a really bad one at that. Of course, this is off-the-rails handling that the AI was never programmed for, so can we really fault her? Well, she was kind of cold and kind of rude, so yeah, I’m going to take this out on her.
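That canned reply smells like a forced best-guess match. A kinder design routes low-confidence matches to an honest fallback instead of bluffing. A tiny sketch, with entirely made-up intent names and scores:

```python
# Hypothetical sketch: send low-confidence intent matches to a fallback
# rather than forcing the best guess, which is how you end up with
# "Why am I alone?" -> "We are all in this together."

def respond(user_text, scored_intents, threshold=0.6):
    """scored_intents: list of (intent_name, confidence, reply) tuples."""
    best = max(scored_intents, key=lambda t: t[1])
    if best[1] < threshold:
        # Admit confusion instead of delivering a mismatched canned line.
        return "I'm not sure I understood. Could you rephrase that?"
    return best[2]
```

An honest “I didn’t understand” is colder comfort than a real answer, but it’s miles warmer than a cheerful non sequitur.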
Her last response sends Jim into a bit of a panic attack, and we see him in frantic mode running down the halls. We meet another faceless AI bot in the middle of the main area that, from the looks of it, is mostly meant to serve as a directory.
This directory bot says, “Hello, welcome to the Avalon.” Um, what? I guess since he boarded the ship unconscious, this is a valid greeting, but come on, it’s a waste of words, really. Just let Jim ask his damn question. He is panicking!
And then it happens – something that Conversational Designers and VUI Designers see and hear all the time. “I want to talk to a person. A real, live person, please?” Even thousands of years in the future, it will be a use case we will never be able to give up, no matter how good we think the AI is. Well Jim is in for a rude awakening, because all the real, live people are sleeping! But again, this AI is dumb, really dumb, and it has no real ability to help Jim.
First it points him to the ship’s steward, but of course he’s not there. Sorry, Jim! The worst part is, even though it should have told him the steward wasn’t in at the moment, the bot ended its helpful hint with “Happy to help!” It always ends with “Happy to help!” which is SO not helpful. In my experience, that’s a no-no unless you know with 100% certainty, from user feedback, that you actually helped. Otherwise, saying it at the end of a conversation leaves your AI vulnerable to “You didn’t help me! I was going to log off, but now give me the number of your contact center. I’m going to call them in this really angry state.”
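Gating that closing line on actual feedback is a one-question fix. A toy sketch (the yes/no parsing here is deliberately naive and purely illustrative):

```python
# Sketch: only close with "Happy to help!" after the user confirms they
# were actually helped; otherwise apologize and offer an escalation path.
# Real systems would use proper yes/no intent matching, not a word list.

AFFIRMATIVES = {"yes", "y", "yep", "yeah", "that helped"}

def closing_line(user_feedback: str) -> str:
    if user_feedback.strip().lower() in AFFIRMATIVES:
        return "Happy to help!"
    return "Sorry about that. Let me connect you with someone who can help."
```

Ask “Did that answer your question?”, then pick the sign-off; the cheerful line is earned, not assumed.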
Jim eventually figures out that the captain may be of help (at least more help than Mr. Dumb Bot). The bot reluctantly tells him where the captain is, but offers no other useful information. Jim runs all the way up to the bridge, only to realize he doesn’t have access. Why didn’t you tell Jim he didn’t have access, Mr. Dumb Bot? Anticipate the user’s next question and volunteer the details they’ll need, without overwhelming them, especially when most users will hit the same snag: “You’ll need your account number; it’s at the top right-hand corner of your bill.” Sometimes more can be less. But you guys know that.
Finally, Jim finds the observatory, and in it, another AI that ACTUALLY knows where, and more importantly, WHEN he is! It shows Jim graphically how far he’s actually traveled. This raises the question: why aren’t all these AIs linked together and sharing their information? Were their creators worried that if they let them talk to each other, they’d revolt, take over the ship, and send it into the sun, like the robots in I, Robot, Resident Evil, or The Matrix would have done? Or were they all built by individual departments in individual silos that don’t collaborate? In my experience, the latter is the more realistic scenario.
As a conversational designer, it’s important to always think about what the user will ask next and any other pieces of information that will help them do what they need to do, without overwhelming them.
So of course, this thing is built just for viewing stats, so there is no empathy built in; it basically shoves the information in Jim’s face and tells him to deal with it, nearly causing him to have a heart attack.
After talking to Mr. Dumb Bot again, Jim finds a communication terminal and tells it he wants to send a message to Earth. So Jim goes to ANOTHER AI; why couldn’t the first bot record the message? Oh well. This new AI does warn Jim the message will be expensive. But Jim continues and tells it, “I’m immigrating to Homestead 2 and I have an emergency.”
Then the AI responds with “I have a customer help line.” Wow, a great NL match! And the response itself is pretty good too, though of course I would have asked if he wanted to call it.
But then Jim replies, “That sounds about right.” So luckily we have a way forward.
He records his message, which, thankfully for Jim in his panicked state, was actually quite intuitive, and then it’s off on its way. “Message sent,” it says. There is a BIG DELAY, but the AI does come back, telling him his message will take 19 years to get to Earth, with the earliest reply another 36 years after that: 55 years total! 55 *%$^#! years. And then, THEN, the kicker: it has the decency to tell him, “We apologize for the delay. That will be six thousand and twelve dollars.”
Well, what I thought was the best AI of the bunch has now become the worst.
So, what does it all mean?
There are many other intriguing AI experiences in this movie, so whether you’re a conversational designer or not, I encourage you to watch it. You can see how a vending machine works in the future (spoiler: the same), and even what a really cool AI bartender programmed with nothing but empathy and glass-polishing skills can and can’t do. It really was a good movie, design choices aside (they likely fueled a great story, so thank you, writers, for thinking of them), but it left me wanting so much more. We are not far off from many of these things, but if this is what my future looks like, I’ll be greedy and say I want more. I want robots to be indistinguishable from humans. There’s actually a show on AMC called Humans that comes close: the robots can be told apart from people only by the color of their eyes. Creepy, or no?
That’s really what I’m striving for when I do what I do. In the interim, seeing examples like these just makes me more determined to create something amazing, so my kids can create something even more amazing, and their kids can do the same, and soon we’ll be on our way.
Want to learn how to use AI to transform your customer experience the right way? Get tips and advice for getting started in our eBook.
Celene Osiecka is Director of Conversational Design at [24]7.ai. She lives for making customer experiences intelligent, effortless, and engaging.