Bot design is at a fork in the road. It keeps circling back to the same issues, like a chatbot that keeps saying “I didn’t quite get that” to a user getting more frustrated by the minute.
Ahead of us, there are two roads:
A future based on artificial intelligence, building toward a real understanding of what a person is saying, then generating the right response. This is hard to do with a high degree of accuracy, and leads to all sorts of potentially creepy sci-fi futures.
A future of controllable, scripted responses to a fixed set of commands. These bots may be full of personality and follow complex scripts, but their understanding is basic, and they can’t guarantee a nuanced response.
Choosing the right road means choosing the right way to understand the other side of the conversation. To what degree should a bot anticipate what’s happening on the other side of the wall?
What’s a conversation?
Anticipating the other side of a conversation is not a new communication problem. Even between humans, communication is incredibly faulty.
Our participation in conversations means not just listening to words and parsing them, but subconsciously listening to dozens of other factors — how a person sounds, what they look like, their body language, where you both are. The semiotics of conversation are complex, even before you get to the content — and even then, the content has layers: the topic of conversation, the goal you’re trying to get to, how you feel about all of it.
It’s a wonder we ever manage to communicate at all.
To make sense of all of this, the designer Paul Pangaro came up with a model of conversation called CLEAT:
Context
(Shared) Language
Engagement or exchange
Agreement
Transaction or action
He says that to reach an action, you must first reach agreement. To reach agreement, you must have engagement. To even begin to engage, you must have a shared language. And shared language, in turn, depends on a shared context.
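As a rough sketch (ours, not anything from Pangaro's own work), you can read that chain as an ordered checklist: a conversation can only move to a stage once every earlier stage is in place. Here's what that might look like in TypeScript, with illustrative names:

```typescript
// The CLEAT stages, in the order a conversation has to establish them.
// The names and data shapes here are illustrative, not from Pangaro's own work.
const STAGES = [
  "context",
  "sharedLanguage",
  "engagement",
  "agreement",
  "transaction",
] as const;
type Stage = (typeof STAGES)[number];

// A conversation simply records which stages have been established so far.
type Conversation = Set<Stage>;

// A stage is only reachable once every earlier stage is already in place.
function canReach(conversation: Conversation, target: Stage): boolean {
  const index = STAGES.indexOf(target);
  return STAGES.slice(0, index).every((stage) => conversation.has(stage));
}

// Shared context and language are enough to start engaging,
// but not enough to transact.
const convo: Conversation = new Set<Stage>(["context", "sharedLanguage"]);
console.log(canReach(convo, "engagement")); // true
console.log(canReach(convo, "transaction")); // false
```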
If done right, Pangaro says, we reach lofty goals — shared history, forged relationships, trust, and unity. And all of this should be the goal of good software, he says, because “Software with conversation at its heart is more human.”
Just enough communication
Over the last year, we’ve begun experimenting with bot design at Intercom. We’ve written before about some of the design decisions we made along the way: its role, its name, its personality, and the jobs it was designed to do.
Our bot intentionally handles only a few small jobs, the most essential of which is to keep customers and businesses (teams) in contact, even when the team is away.
As a participant in a conversation, it’s deliberately restrained. Everything about its conversation style is intended not to keep the conversation going, but to spark only a limited reply. Because we favor human-to-human conversation over automation for its own sake, our bot is allowed just enough communication to get the job done, and no more. That means we drop a lot of the non-essentials: no hellos, no names or genders, no apologies.
Without all the usual niceties of human conversation, this could be the rudest bot in existence. Yet we’ve found, as we’ve beta-tested the bot, that people don’t see it that way. They understand that it’s automated, find it helpful, and even thank it for getting them what they need.
Most of all, they don’t expect it to do the job they want a human to do. They don’t demand or expect more conversation once the transaction is achieved.
If we wanted to design the world’s first introverted bot, we seem to have succeeded.
The key to better conversations
Our bot philosophy mirrors Pangaro’s conversation model. The key to that model is that every conversation has a goal. While customers usually have a question in mind — their goal — the way the bot tries to engage them is much more about understanding their context and motivation than it is about understanding the actual semantics of their conversation.
We spend most of our time understanding that context, and making sure we can deliver the right answer, in the right way, with the right tone.
We also make sure the bot has a job to do, to exchange information or engage the user.
And we make sure there’s agreement about what the bot is, and what its role is.
Finally, we reach a transaction.
Our hope is that this model creates stronger relationships, built on trust, between the teams that use Intercom, the customers they speak to, and Intercom itself.
Designing nuances
In the real world, even when we understand someone’s words perfectly, we still don’t have a guarantee of successful communication. Real conversations have to tease out these nuances by looking for context and clues.
And real conversations can get it wrong: we make assumptions about others’ goals and motivations all the time. We’re human. Making assumptions based on context is how we pattern-match, so that we can engage in conversations in the first place. This is exactly what we do as designers, and what machine learning algorithms do — we look for patterns, and match our responses to them.
But in our pattern-matching, as in the real world, we hear things imperfectly. We get things wrong. In the real world, a little imperfection is fine. In the bot world, it’s a risk.
Allowing for imperfection
The way we get around mistakes is by narrowing our constraints — being very specific about the goal or job of our conversation — and by enriching the context signals we’re aware of as we design.
Constraints are our way of dealing imperfectly with imperfection. We do just what we know will be helpful, and no more, and we err on the side of silence.
Context is our way of building nuance into an interaction, and making the interaction with our bot feel simple, direct, and positive.
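As a loose illustration of what constraints and erring on the side of silence can look like in practice (the jobs, keywords, threshold, and matching logic below are invented for the sketch, not how our bot actually works), a bot might reply only when a message clearly matches one of its few known jobs, and otherwise stay quiet and leave the conversation to a human:

```typescript
// A deliberately small, fixed set of jobs the bot is allowed to handle.
// Everything here (jobs, keywords, replies, threshold) is illustrative only.
interface Job {
  name: string;
  keywords: string[];
  reply: string;
}

const JOBS: Job[] = [
  {
    name: "away-message",
    keywords: ["anyone", "there", "around", "available"],
    reply: "The team is away right now. They'll reply as soon as they're back.",
  },
  {
    name: "expected-response-time",
    keywords: ["when", "reply", "response", "hear", "back"],
    reply: "The team usually responds within a few hours.",
  },
];

// How clearly a message has to match a job before the bot will speak at all.
const CONFIDENCE_THRESHOLD = 0.5;

// A crude match score: the fraction of a job's keywords present in the message.
function score(message: string, job: Job): number {
  const words = new Set(message.toLowerCase().split(/\W+/));
  const hits = job.keywords.filter((keyword) => words.has(keyword)).length;
  return hits / job.keywords.length;
}

// Returns a reply only when one job matches clearly enough;
// otherwise returns null, staying silent and leaving the conversation to a human.
function respond(message: string): string | null {
  let best: { job: Job; confidence: number } | null = null;
  for (const job of JOBS) {
    const confidence = score(message, job);
    if (!best || confidence > best.confidence) {
      best = { job, confidence };
    }
  }
  if (!best || best.confidence < CONFIDENCE_THRESHOLD) {
    return null; // err on the side of silence
  }
  return best.job.reply;
}

console.log(respond("Is anyone around? When will I hear back?"));
console.log(respond("My invoice total looks wrong")); // null: not the bot's job
```

The point of the sketch isn't the matching technique; it's that the fallback is silence and a human, not a guess.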
That’s why a bot has to lean back from the conversation in order to listen more closely: if we get something wrong, the trust that’s lost is far greater than it would be between two humans. And our bot is there to forge trust between a team and their customers. That’s a big job, even when it’s done quietly and with restraint.