- Contexts
- Lifespan of a context
- The optimal value
- But I need to do more work to keep track!
- Exceptions
Note: this is a somewhat advanced topic, and certainly a bit opinionated. I wouldn't recommend beginners dive into this article until they have built at least a toy bot and explored all the features in API.AI.
Contexts
If you are building bots using API.AI, you are probably aware of contexts: they are used to maintain state. As a state management mechanism, I find their implementation quite fascinating, simply because of this concept of a "lifespan".
Lifespan of a context
What is the lifespan of a context, you ask? It is the number of "steps" for which a context stays alive. As your user interacts with your bot, the remaining lifespan ticks down by 1 per interaction until it hits zero and the context becomes inactive.
While the default value is set at 5, I suggest you immediately change it to the optimal value.
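For reference, lifespan is just a field on the context you send back from your fulfillment webhook. Here is a minimal sketch in Python of building such a response, using the field names from the API.AI v1 webhook format (`speech`, `displayText`, `contextOut`); the function name and context details are my own illustration, so adapt them to your setup:

```python
import json

def webhook_response(speech, context_name, parameters):
    """Build an API.AI (v1-style) webhook response that sets an
    outgoing context with a lifespan of 1, so it stays active for
    exactly one more user interaction."""
    return json.dumps({
        "speech": speech,
        "displayText": speech,
        "contextOut": [
            {
                "name": context_name,
                "lifespan": 1,  # context expires after the next request
                "parameters": parameters,
            }
        ],
    })

resp = json.loads(webhook_response(
    "OK, so how many red roses would you like to buy?",
    "buying-roses",
    {"flower": "rose"},
))
print(resp["contextOut"][0]["lifespan"])  # -> 1
```

The same `lifespan` field is what you are editing when you change the value in the API.AI console.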
The optimal value
And the optimal value, in my view, is 1. Here are the reasons.
The best conversations tend to meander
What I mean is, the bot asks the user a very pointed question, such as "OK, so how many red roses would you like to buy?"
To which the user says, "I am not sure. How much per rose?" If you didn't expect the user to ask this question, you now have a ticking lifespan clock, and you had better get the answer you want out of the user before the remaining lifespan hits zero.
I don't suggest designing bots with the strange constraint of trying to get the user to the right answer within a given number of tries. The user is not playing a game of hangman with your chat bot, and the conversation will turn weird very fast if you enforce such constraints.
The machine learning is good, but not perfect
In other words, sometimes the user gives you a variant of the expected answer, but it is not recognized as a defined intent. I would argue this is actually a worse outcome than the user typing in something unrelated. Why? Simply because whatever you say to "recover" from this error is very likely to further confuse a user who had already typed in a perfectly reasonable answer.
Don't forget the first step in voice-to-text conversion
This is related to my previous point, of course. Remember, there is a small probability that the user's words were incorrectly transcribed into text. In that case, you run into the same issue if you suggest the user may have been on the wrong track while keeping a lifespan clock ticking.
Pre-existing domains can add unexpected complexity
I talked a little about this in my article on building a cricket stats chatbot with API.AI. The issue is that API.AI already populates entities for certain pre-existing domains. A good example is the name Steve Smith, a popular cricket player with a very common last name. API.AI identifies "Smith" as a common name, but doesn't do a good job of mapping the phrase to a user-defined entity called "Steven Smith". It seems to think they are separate: it extracts "Smith" as a predefined entity and leaves the word "Steve" hanging at the end of the sentence, basically mapped to nothing.
Maybe you would argue it is doing the right thing. I don't think so, but even if it were, this is the kind of silent failure that is sure to mess up your context lifespan.
The implicit state diagram stays deterministic
Using contexts, it is possible to translate any state-diagram-based conversation flow into a chatbot. However, the state diagram becomes much harder to reason about with lifespans greater than 1, because your conversation can now effectively be in two different states at once. Again, don't make your chat bot any harder to reason about than it already is.
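To make the determinism argument concrete, here is a toy sketch, plain Python rather than anything API.AI-specific, of a conversation modeled as a state machine where the single active context is the state. The flow and intent names are hypothetical. With a lifespan of 1, each (context, intent) pair maps to exactly one next context:

```python
# Hypothetical flower-shop flow: each (active_context, intent) pair
# determines exactly one next context -- a deterministic transition table.
TRANSITIONS = {
    ("start", "buy_flowers"): "awaiting_quantity",
    ("awaiting_quantity", "give_quantity"): "confirm_order",
    ("awaiting_quantity", "ask_price"): "awaiting_quantity",  # answer, then re-ask
    ("confirm_order", "yes"): "done",
}

def step(context, intent):
    # With lifespan 1 there is only ever one active context,
    # so the next state is a single table lookup.
    return TRANSITIONS.get((context, intent), "fallback")

state = "start"
for intent in ["buy_flowers", "ask_price", "give_quantity", "yes"]:
    state = step(state, intent)
print(state)  # -> done
```

With lifespans greater than 1, two or more contexts can be active simultaneously, and the "state" becomes a set of contexts at various remaining lifespans; the lookup table above no longer captures the flow.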
But I need to do more work to keep track!
Yes, you do. And that is great!
You see, one of the big issues I have seen with all the chat bot building frameworks is the degree of non-determinism surrounding them. In other words, what is considered AI today is very opaque and not at all easy to reason about. (That should be another blog article by itself someday.) Don't make it even harder for yourself by having non-deterministic context lifespans.
And you don't have to take this statement at face value. Just check out the API.AI forum and see how many people get flummoxed by the behavior of intent mapping.
Are there any exceptions? No. 🙂
I mean, I am sure there are some, although I would still recommend you try to rewrite that logic.
Have you had success with keeping lifespan greater than 1? I would love to know your thoughts in the comments.