On my website there is a SupportBot.
Since I have hooked it up to Chatbase, I can create funnels for different conversation paths.
Here is a possible path that the user could take. In this case, the funnel tells me how many people (out of those who start chatting) are interested in coaching services.
For the purposes of this discussion, the last step in the funnel could be any action that you might want the user to take.
Here is my question:
How can you be confident that the percentage (here 3.6%) is a reasonably accurate representation of the proportion of people who reach the end of the funnel?
First, we should consider false positives. This is the case where Dialogflow classifies a user as interested in coaching even when they are not. The user could have typed a phrase which wasn't really about coaching, but Dialogflow still mapped it to the intent indicating interest in coaching (the last one, marked EOC_UserInterestedInCoaching). In this case, your overall funnel "conversion rate" would actually be lower than what is being reported.
Next, we should consider false negatives. This is the case where the user does type a phrase indicating they are interested in coaching, but Dialogflow doesn't recognize it. In this case, your funnel's true "conversion rate" would actually be higher than what is being reported.
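If you can estimate both error rates (say, by manually reviewing a sample of your conversation logs), you can sketch how far the reported percentage might sit from the true one. The error rates below are made-up numbers purely for illustration:

```python
def true_conversion_rate(observed_rate, fp_rate, fn_rate):
    """Estimate the true end-of-funnel rate given classifier error rates.

    observed_rate: fraction of chats Dialogflow mapped to the coaching intent
    fp_rate: fraction of NON-interested users wrongly mapped to the intent
    fn_rate: fraction of interested users Dialogflow failed to map

    Model: observed = true * (1 - fn_rate) + (1 - true) * fp_rate
    Solving that equation for `true` gives the expression below.
    """
    return (observed_rate - fp_rate) / (1 - fn_rate - fp_rate)

# Hypothetical: 3.6% observed, 1% false positives, 5% false negatives
print(round(true_conversion_rate(0.036, 0.01, 0.05), 4))  # → 0.0277
```

Even modest error rates move the estimate noticeably, which is why the intent-hygiene steps later in this article matter.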
The role of input contexts
Suppose you set an input context for an intent. In the case of the final intent in this funnel, it happens to be awaiting_service_choice.
Here is what you can infer:
- this intent will not be invoked unless the awaiting_service_choice context is already set
- unless the previous intent sets the awaiting_service_choice as the output context, this intent will never get mapped
These are two sides of the same coin, but keep them in mind for the explanations that follow.
In a previous article, I introduced the idea of the intent candidate list.
This is basically all the intents that could theoretically fire when a user types a phrase.
Note that an intent with no input context is a selection candidate at every step of the conversation.
This is quite an important point, and one that is often overlooked. People who don't understand this concept clearly are usually also the ones who are surprised when Dialogflow maps an intent they never expected. (The truth is, while Dialogflow can occasionally seem inexplicable, most of the time it follows a very predictable set of rules, and you can reason about your chatbot far more precisely than you might think.)
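The candidate-selection rule can be sketched in a few lines. This is an illustrative model, not Dialogflow's actual code; the AskAboutServices intent and awaiting_topic context are made-up names, while EOC_UserInterestedInCoaching and awaiting_service_choice come from the funnel above:

```python
# Hypothetical model of how input contexts narrow the intent candidate list.
intents = [
    {"name": "Default Welcome",              "input_contexts": []},
    {"name": "Default Fallback",             "input_contexts": []},
    {"name": "AskAboutServices",             "input_contexts": ["awaiting_topic"]},
    {"name": "EOC_UserInterestedInCoaching", "input_contexts": ["awaiting_service_choice"]},
]

def candidate_intents(active_contexts):
    """An intent is a candidate only if ALL of its input contexts are
    currently active. An intent with no input contexts therefore passes
    this filter at every step of the conversation."""
    return [i["name"] for i in intents
            if all(c in active_contexts for c in i["input_contexts"])]

print(candidate_intents(set()))
# → ['Default Welcome', 'Default Fallback']
print(candidate_intents({"awaiting_service_choice"}))
# → ['Default Welcome', 'Default Fallback', 'EOC_UserInterestedInCoaching']
```

Notice that EOC_UserInterestedInCoaching only appears in the list once awaiting_service_choice is active, while the two context-free intents compete at every turn.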
Minimize competing intents
Here is your goal if you want Chatbase-friendly bots:
Minimize competing intents at every step of your conversation
In practice, these are some steps which help:
- assign an input context to ALL your intents (except for Default Welcome and Default Fallback)
- don't overload your Default Welcome with unnecessary user says phrases (that is, avoid adding domain specific words into your Default Welcome intent)
- don't set a context lifespan of more than 1 for your output contexts (can you see why?)
- it follows that most of your intents should set an output context (unless they are end-of-conversation intents)
- for intents which share the same input context, try to aim for as much inter-intent variation as possible
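The first three rules in this list are mechanical enough to check automatically. Here is a rough sketch of such an audit over a list of intents (the field names are assumptions; adapt them to however you export your agent, e.g. the ZIP export from the Dialogflow console):

```python
# Hypothetical audit sketch for the checklist above. The dict shape
# ("name", "input_contexts", "output_contexts") is assumed for illustration.
DEFAULT_INTENTS = {"Default Welcome Intent", "Default Fallback Intent"}

def audit_intents(intents):
    """Return a list of warnings for intents that break the rules:
    every non-default intent needs an input context, and no output
    context should have a lifespan greater than 1."""
    warnings = []
    for intent in intents:
        name = intent["name"]
        if name not in DEFAULT_INTENTS and not intent.get("input_contexts"):
            warnings.append(f"{name}: no input context (competes at every step)")
        for ctx, lifespan in intent.get("output_contexts", {}).items():
            if lifespan > 1:
                warnings.append(f"{name}: context '{ctx}' has lifespan {lifespan} > 1")
    return warnings

sample = [
    {"name": "Default Welcome Intent", "input_contexts": [], "output_contexts": {}},
    {"name": "ChooseService", "input_contexts": [],
     "output_contexts": {"awaiting_service_choice": 5}},
]
for warning in audit_intents(sample):
    print(warning)
```

Running this over the sample flags ChooseService twice: once for the missing input context and once for the lifespan of 5.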
Having discussed these options, remember that the steps above are not, by themselves, enough to drive false positives and false negatives as low as possible. For that, you also need to put more thought into how you design each individual intent.
In my "Conversation Design" course, I teach you many tips which will help you design your intents in such a way that you can reduce both false positives and false negatives. (In particular, the chapter on "Dissecting Intent Mapping" gives you tips on reverse engineering how Dialogflow does its intent mapping. Learning about this will immediately improve the intent mapping accuracy of your bot).