
Lost in translation: What happens when a Facebook Messenger bot can’t understand you


“Oops, I didn’t catch that.” That’s a rough spot in any conversation, but it’s really rough when that conversation is with a computer. With a person, you can respond by asking what part they didn’t catch, and they should be able to tell you. With a bot, not so much.

I ran into that problem last month when Facebook opened up its messaging app, Facebook Messenger, to chat bots. One of the first bots to hit Messenger was Poncho, which tells you the weather in the voice of a computerized cat (it’s very internet). Rather than have Facebook’s computers automatically pass on my location to Poncho, I wanted to tell Poncho myself. Easier said than done.

“Conversational UI is tricky.” pic.twitter.com/DOwxdLNi86 — Tim Peterson (@petersontee), April 12, 2016

Eventually, Poncho understood me, and I figured out how to phrase things so it would. But it took a while, and along the way Poncho never told me which part of my message it didn’t understand or what it wanted me to say instead. If Poncho had been a person, I might have sent them a middle-finger emoji and pulled up The Weather Channel app.

“What you experienced [with Poncho’s bot] is the initial learnings in an industry where early chatbots require you to learn their language instead of vice versa,” said Eyal Pfeifel, CTO at Imperson, a tech firm that helps brands like Disney and Universal Pictures build chat bots.

Apparently, I’m not alone. “It’s definitely a critique we’ve been getting a lot from a lot of users,” said Greg Leuch, who serves as head of product at Betaworks, the startup incubator that gave birth to Poncho. “It’s a problem that all bot makers face.”

Solving that problem is crucial if chat bots are to become a significant new way for people to interact with brands, publishers and other companies online. If people — especially people who aren’t accustomed to communicating with computers and don’t care how difficult these bots are to build — run into a bunch of problems when communicating with a messaging bot, they could decide altogether that messaging bots aren’t worth their time.

“This is one of those things where users can get frustrated a lot easier than with a [graphical user interface] or an email message. So more vigilance on user research, user experience and design is going to be very beneficial for not only us but any brand that’s looking to do a bot,” said Leuch.

That vigilance explains why, within a couple of days of Poncho’s launch on Facebook Messenger, Leuch and his team of developers made the bot more helpful when people ran into problems communicating with it. If someone encountered the same error twice, Poncho would tell them that they could type “help” to get some guidelines on how to talk to the bot.
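
In pseudocode terms, that guardrail is just a per-user counter. Here’s a minimal Python sketch, assuming a hypothetical `try_to_understand` intent matcher and in-memory state; Poncho’s actual implementation isn’t public:

```python
from collections import defaultdict

def try_to_understand(text):
    """Stub intent matcher; a real bot would do NLP or keyword matching here."""
    return "It's 72F and sunny." if "weather" in text.lower() else None

# Consecutive misunderstandings per user; a production bot would persist
# this (e.g., in Redis) so it survives across worker processes.
error_counts = defaultdict(int)

FALLBACK = "Oops, I didn't catch that."
HELP_HINT = "Psst: type 'help' and I'll show you what I understand."

def handle_message(user_id, text):
    """Answer if we can; after two straight failures, surface the help hint."""
    reply = try_to_understand(text)
    if reply is not None:
        error_counts[user_id] = 0          # a success resets the streak
        return reply
    error_counts[user_id] += 1
    if error_counts[user_id] >= 2:
        return FALLBACK + " " + HELP_HINT
    return FALLBACK
```

The key detail is that a successful match resets the streak, so the help hint only surfaces when someone is genuinely stuck.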

“We’ve also spent a lot of time reconfiguring our on-boarding, seeing how people were using it, seeing where people were dropping off at, finding those frustration points where it didn’t quite match on a location name because of a hiccup on our end or our provider’s end or maybe it was a misspelling,” Leuch said. “We’ve been very acute and very aware of how all of this is happening. This is a learning point for us and for the bot community and all the developers out there right now.”


The product teams at media companies Mic and Complex Media are similarly trying to anticipate where people might run into roadblocks when interacting with the Messenger bots they are planning to roll out. Mic has already gone through the process with the two bots it’s released for another messaging app, Kik.

Mic’s TrumpChat bot gives people an alternative way to catch up on news related to the Republican Party’s presidential frontrunner. Each day an editor on Mic’s politics desk writes a script that mimics The Donald’s Twitter persona when relaying a piece of election news. That impersonation has also carried over into how the bot deals with errors. “If you send something that’s an error, the Trump bot says, ‘I didn’t understand that. We’ve got to make your responses great again.’ And then we give you choices,” said Mic’s chief strategy officer, Cory Haik.

Predefined choices also allow Mic to sidestep errors with its DisOrDatBot on Kik, which presents people with two choices — chocolate or vanilla, Spotify or Apple Music — and then, after someone’s picked one, tells them what percentage of other people agreed with them. If people don’t get the game, they can click the “Huh?” button, which triggers a link to a Mic story on the subject. That preset path and crystal-clear panic button may help explain how Mic’s bot has been able to keep people in conversation for 20 to 25 questions at a time, on average.
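
The same pattern runs through both of Mic’s bots: offer a fixed set of buttons so free text never has to be parsed, and make the escape hatch one of those buttons. A rough Python sketch, with the question, percentages and explainer link all invented for illustration (Kik’s actual keyboard API differs in its details):

```python
QUESTION = "Chocolate or vanilla?"
VOTES = {"Chocolate": 62, "Vanilla": 38}              # invented percentages
EXPLAINER_URL = "https://mic.com/disordat-explainer"  # hypothetical link

def build_prompt():
    # Only preset buttons are offered, so free text never has to be parsed.
    return {"text": QUESTION, "buttons": list(VOTES) + ["Huh?"]}

def handle_choice(choice):
    if choice == "Huh?":                   # the panic button
        return "Here's how the game works: " + EXPLAINER_URL
    if choice in VOTES:
        return f"{VOTES[choice]}% of players agreed with you. Next one?"
    return build_prompt()["text"]          # off-script input: just re-ask
```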

Facebook is also touting predefined options as early guardrails to prevent people from getting frustrated with the early Messenger bots. “A lot of what Facebook has given out when it comes to their guidance, they’re giving you a story that you tell and then you have [call-to-action buttons] underneath it. So it’s not just this open-ended conversation,” said Complex’s VP of product, Ayalla Barazany.
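
Messenger supports this directly through a “button template” in its Send API. Here is roughly what such a request body looks like, following the structure in Facebook’s developer documentation at launch; the recipient ID, story text, URL and postback payload below are placeholders:

```python
import json

# A Messenger Send API "button template" message (sent via POST /me/messages).
# The recipient ID, text, URL and postback payload are placeholders.
message = {
    "recipient": {"id": "USER_PSID"},
    "message": {
        "attachment": {
            "type": "template",
            "payload": {
                "template_type": "button",
                "text": "Here's today's top story. What next?",
                "buttons": [
                    {"type": "web_url", "title": "Read it",
                     "url": "https://example.com/story"},
                    {"type": "postback", "title": "More like this",
                     "payload": "MORE_LIKE_THIS"},
                ],
            },
        },
    },
}
print(json.dumps(message, indent=2))
```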

Those call-to-action buttons may remove some of the magic from chatting with a seemingly artificially intelligent computer, but Haik said they also “remove the margin for error.” Eventually, artificial intelligence technology, like natural language processing software, should make chat bots more conversationally adept. But this early into the messaging bot trend, a solid if basic foundation is more important than more complex bells and whistles that can expand the scope of a conversation.

“As we get deeper into Messenger and developing out some of the automatic responses [for the caption contest bot Mic’s building for Messenger], that’s a question that we’ll grapple with. But because of the user frustration piece, we don’t want people to bounce because it’s too complicated and they’re not getting what they want,” Haik said.

“The point of the bot is to give you what you want as fast as possible,” said Complex’s Barazany. And the point of the guardrails is to get the conversation back on track as fast as possible. But it can be hard to figure out where the guardrails should be without knowing all the different directions someone might steer a conversation.

Poncho’s Leuch was able to rely on his past experience at Know Your Meme — one of those sites, like Reddit and 4chan, that can showcase the best and worst parts of internet culture — to anticipate how people might try to eff with Poncho’s Messenger bot like they did with Microsoft’s Tay.

“One of our developers wrote what we call the ‘apology pit.’ So if you use a profane word, a racist term, a slur, anything that was a negative in our books, it puts you into the apology pit, and Poncho calls you out saying, ‘That’s not cool. You shouldn’t use that type of language. Do you apologize: yes or no?’ If you say ‘no,’ then Poncho will actually ignore you for five or 10 minutes,” Leuch said.
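
Stripped of the cat persona, the apology pit is a small state machine: flag the user, ask for an apology, and either reset or start an ignore timer. A minimal sketch, with a placeholder word list and a fixed five-minute timeout (Poncho’s real list and timings aren’t public):

```python
import time

BANNED = {"badword1", "badword2"}   # placeholder; the real list would be curated
IGNORE_SECONDS = 5 * 60             # Leuch says five or 10 minutes

ignored_until = {}                  # user_id -> epoch time when the timeout lifts
awaiting_apology = set()            # users who were just called out

def apology_pit(user_id, text):
    """Return a pit response, '' while ignoring, or None if the pit doesn't apply."""
    now = time.time()
    if ignored_until.get(user_id, 0) > now:
        return ""                   # still in the pit: say nothing
    if user_id in awaiting_apology:
        awaiting_apology.discard(user_id)
        if text.strip().lower() == "yes":
            return "Apology accepted. Now, where were we?"
        ignored_until[user_id] = now + IGNORE_SECONDS
        return "Alright, I'm going to ignore you for a while."
    if BANNED & set(text.lower().split()):
        awaiting_apology.add(user_id)
        return "That's not cool. Do you apologize: yes or no?"
    return None                     # normal message: let the regular handler run
```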

The attention Poncho’s team has devoted, and continues to devote, to understanding how bot conversations can get derailed appears to have paid off.

On average, people are spending two to three minutes at a time talking to Poncho, according to Poncho’s head of marketing, Margot Boyer-Dry. “Some people stick around as long as 15 or 20 [minutes], just kinda shooting the breeze, which is beyond our wildest dreams,” she said. But that might not have become a reality if Poncho’s team hadn’t worked on eliminating the conversational nightmares.
