This is a case study that demonstrates a proposed approach to developing flowcharts for API.AI chatbots.
(The approach should also work for bot-building frameworks that have similar concepts of intents, entities, and contexts.)
I have created a chatbot which helps users debug their API.AI webhooks. At the moment, the chatbot only supports two types of webhooks: the one I wrote about recently in a tutorial, which deploys a weather webhook example to Heroku, and another one based on a forum answer about PHP webhooks.
Here are the reasons why it helps to create such a chatbot:
- Getting webhooks to work in API.AI is a multi-step process, and much of the documentation online leaves out details of some of the important steps
- Sometimes people follow the instructions but miss just one tiny step
- This is a checklist-based diagnostics tool, and a checklist approach is an excellent debugging technique both in programming and in real life
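To make the checklist idea concrete, here is a minimal sketch of how such a diagnostics flow works in code. The check names and advice strings below are illustrative only; they are not taken verbatim from the actual agent.

```python
# A checklist-based diagnostics flow: walk an ordered list of checks and
# report advice for the first one that fails, just like the chatbot does.

def diagnose(checklist, answers):
    """Return advice for the first failing step, or an all-clear message."""
    for question, advice in checklist:
        if not answers.get(question, False):
            return advice
    return "All checks passed - the webhook should work."

# Hypothetical checks, for illustration only.
checklist = [
    ("webhook_enabled", "Enable the webhook in the Fulfillment section."),
    ("url_is_https", "API.AI requires the webhook URL to use HTTPS."),
    ("intent_uses_webhook", "Tick 'Use webhook' in the intent's Fulfillment."),
]

print(diagnose(checklist, {"webhook_enabled": True, "url_is_https": False}))
# -> API.AI requires the webhook URL to use HTTPS.
```

The chatbot is essentially this loop, with each check phrased as a yes/no question to the user.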
With that said, here is what the flowchart looks like for the webhook assistant bot. I have shown both the complete flowchart (which is too big for this page) and the two main branches of the chart. In Google Chrome, you can right-click on the image and choose “Open image in new tab” to see the full view.
Python webhook branch:
PHP Webhook branch:
How to read this flowchart
While I already wrote an article about reading the flowcharts on this site, this flowchart has an additional visual element which I would like to explain.
When you create followup intents in API.AI, it automatically generates some fairly complex context names. To avoid writing all those names out in the flowchart (as we usually do in the other flowcharts), I have simply used the word “followup” inside the diamond to indicate that the context is system-generated and is used for followup intents.
In addition, the green rectangles mark the end of a conversation from the chatbot’s point of view. In the actual agent, you know this because the bot says “Bye!” whenever it decides it cannot offer any further help in diagnosing the problem.
Creating the agent from the flowchart
Now we will see how we can create the agent from the flowchart.
The user initiates the conversation by typing a phrase such as “My webhook doesn’t work”. The bot responds by asking the programming language the user is using.
The user can type either Python or PHP (anything else triggers the fallback intent, since we do not handle it).
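This first branching step can be sketched in code. The reply strings below are made up for illustration; the point is simply that two answers are handled and everything else falls through to a fallback, just as an unhandled reply would in API.AI.

```python
# Sketch of the first branch: the bot has asked for the programming
# language, and only "Python" or "PHP" are handled.

def handle_language_reply(user_reply):
    """Route the user's language answer; anything else is the fallback."""
    reply = user_reply.strip().lower()
    if reply == "python":
        return "OK, let's debug your Python webhook."
    if reply == "php":
        return "OK, let's debug your PHP webhook."
    # Fallback: the agent did not understand the answer.
    return "Sorry, I can only help with Python or PHP webhooks."

print(handle_language_reply("Python"))
# -> OK, let's debug your Python webhook.
```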
This is what the intent +followup structure looks like in the Intents list view:
This is the individual intent for the first message from the user:
And this is the intent for when the user chooses Python in their response:
And this is the intent where the user chooses PHP:
Notice the following:
- In the Intent name, we use the following pattern: “user says > bot says”. That is, we type out an abbreviated version of what the user says, then a greater than sign (>), and then an abbreviated version of what the bot says
- Each rectangle in the flowchart has both the user says and the bot says, marked as “User:” and “Bot:” in their own respective lines
- Also note the system-generated contexts for the followup intents: API.AI uses fairly long, descriptive names for these contexts (useful when debugging, but quite awkward to represent inside the context box), so we just put the word “followup” there instead
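If you want to keep the naming pattern consistent while filling out many intents, a trivial helper makes the convention explicit. The example name parts below are hypothetical abbreviations, not the exact names used in the agent.

```python
# The "user says > bot says" naming pattern as a tiny helper function.

def intent_name(user_says, bot_says):
    """Build an intent name: abbreviated user phrase > abbreviated bot reply."""
    return f"{user_says} > {bot_says}"

print(intent_name("webhook doesnt work", "which language"))
# -> webhook doesnt work > which language
```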
Filling out the remaining intents
Now, hopefully you are not reminded of the owl joke.
The idea is to repeat the process for each box in the flowchart, choosing the appropriate followup intent type (custom, yes/no, etc.) for each.
In our case, only the first branch is a custom followup intent (the one where the user has to answer Python or PHP). After that, all the branches are yes/no followup intents.
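Since everything after the first branch is yes/no, the whole flowchart is really a binary tree of questions. Here is a small sketch of that structure; the question and reply texts are hypothetical and do not come from the real agent.

```python
# The flowchart after the language branch, modelled as a yes/no tree.
# Terminal nodes carry a "say" key (the green "Bye!" rectangles).

tree = {
    "question": "Did you enable the webhook in the Fulfillment section?",
    "yes": {
        "question": "Did you tick 'Use webhook' in the intent itself?",
        "yes": {"say": "Bye!"},  # end of conversation
        "no": {"say": "Tick 'Use webhook', then try again. Bye!"},
    },
    "no": {"say": "Enable the webhook first, then try again. Bye!"},
}

def walk(node, answers):
    """Follow yes/no answers down the tree until a terminal 'say' node."""
    answers = list(answers)
    while "say" not in node:
        node = node["yes" if answers.pop(0) else "no"]
    return node["say"]

print(walk(tree, [True, False]))
# -> Tick 'Use webhook', then try again. Bye!
```

Each internal node corresponds to one yes/no followup intent pair in the agent.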
Here is one more example.
The PHP branch intent list view:
The intent where the user answers yes:
The intent where the user answers no:
And for good measure, here is the full intent tree.
So your only job now is to fill out all the intents for the chatbot. 🙂
Or, you can just grab a ZIP file of the agent from the MBD Bonus Pack as always. The folder is named [WebhookAssistant_v1] and contains the ZIP file of the chatbot (WebhookAssistant_v1.zip) and the XMind file (WebhookAssistant_v1.xmind).
You saw how to create flowcharts for a simple diagnostics chatbot which uses followup intents. In the next part of this series, I will describe how to get the same chatbot working without using followup intents, as well as the pros and cons of doing so.