I have been wanting to write about an interesting problem recently: choosing the intent granularity in API.AI.
But I wasn’t sure how to phrase the motivation for writing about it, until I saw the question come up on the API.AI forums.
Disclaimer: API.AI’s pattern matching is a black box
API.AI’s pattern matching capabilities are something of a black box. To the best of my knowledge, the underlying algorithms are not published anywhere.
I have come up with some rules of thumb to help out with the chatbots I build for myself. These rules of thumb are based on my knowledge of Natural Language Processing and Machine Learning, but I cannot guarantee that any of these will work consistently over time.
With that disclaimer out of the way, here are my general suggestions for deciding on the granularity of your intents.
1 The intention behind the intent
We can start by looking at the word intent – which means an intention to do something.
We can ask if the two statements:
“Which vendor has the greatest opportunity?”
“Which vendor has the greatest market share?”
express the same intent. (The answer is somewhat subjective and based on the specific topic).
So the first suggestion is to ask yourself whether all the userSays phrases inside a given intent are indeed expressing the same intention.
2 Proper Nouns vs Common Nouns
It is probably a better idea to club phrases into a single intent when they differ by a proper noun rather than a common noun. If you club “greatest opportunity” and “greatest market share” into a single intent, the differentiating factor (opportunity vs market share) is a common noun, not a proper noun.
Consider a different example: “which vendor sells Apple products” and “which vendor sells Samsung products” are better candidates for clubbing into a single intent (and even then it may not always be the case). An advantage of this approach is that API.AI can help you with some automated expansion – for example, it could also map “which vendor sells Lenovo products” to the same intent, based on the user phrases for Apple and Samsung.
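As a rough sketch of this rule of thumb (and certainly not API.AI’s actual logic), you could compare two candidate phrases and use capitalization as a crude proxy for “proper noun” to judge whether clubbing looks safe. The function names here are hypothetical:

```python
# Crude heuristic: find the words that differ between two candidate phrases,
# then treat capitalization as a rough proxy for "proper noun".

def differing_words(phrase_a, phrase_b):
    """Return the words present in one phrase but not the other."""
    a, b = set(phrase_a.split()), set(phrase_b.split())
    return (a - b) | (b - a)

def looks_clubbable(phrase_a, phrase_b):
    """True if every differing word is capitalized (a crude proper-noun proxy)."""
    diff = differing_words(phrase_a, phrase_b)
    return bool(diff) and all(w[0].isupper() for w in diff)

print(looks_clubbable("which vendor sells Apple products",
                      "which vendor sells Samsung products"))  # True
print(looks_clubbable("which vendor has the greatest opportunity",
                      "which vendor has the greatest market share"))  # False
```

Of course, real proper-noun detection needs POS tagging (more on NLP at the end of this post), but even this toy version captures the spirit of the rule.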
3 Prominent words and phrases
Most of the time, user messages in chat applications are short. And your intent definitions should reflect this fact.
This means the phrases in your intents cannot be very long either. You need to be able to distinguish intents based on what I call “prominent words”. For example, longer words tend to stand out in a userSays phrase – “opportunity” would be one such word.
Prominent words are like landmarks in the midst of a sentence – they help API.AI “map” the user’s message to the correct intent because they are distinct in some way. Contrast this with non-prominent words: stop words are too common to serve as markers, and smallish words are easily misidentified when they contain typos. I realize this explanation is somewhat vague, but those who are familiar with why TF-IDF is used in search engines will understand the point I am trying to make. Perhaps this is a topic for a future, expanded article.
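To make the TF-IDF connection concrete, here is a tiny illustrative sketch (a toy corpus of my own, not API.AI’s internals): words that are frequent in one phrase but rare across all phrases score high, which is roughly what makes a word “prominent”.

```python
import math
from collections import Counter

corpus = [
    "which vendor has the greatest opportunity",
    "which vendor has the greatest market share",
    "which vendor sells the most products",
]

def tf_idf(word, doc, docs):
    """Term frequency in one phrase times inverse document frequency."""
    words = doc.split()
    tf = Counter(words)[word] / len(words)
    df = sum(1 for d in docs if word in d.split())
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

doc = corpus[0]
print(tf_idf("opportunity", doc, corpus))  # distinctive word: non-zero score
print(tf_idf("the", doc, corpus))          # appears everywhere: score is zero
```

A stop word like “the” scores zero because it appears in every phrase, while “opportunity” stands out – it works as a landmark.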
The same applies to prominent phrases – e.g. “market share” becomes distinctive when grouped into a phrase, provided it also appears frequently in phrasal form within the given intent (that is, it is present in multiple userSays phrases you define).
Analyze your userSays phrases within a single intent and see if they fit the following pattern:
- each userSays phrase should preferably contain at least one prominent word
- a prominent word is repeated at least a few times (helps with the machine learning) within a given intent
- you don’t have too many distinct prominent words inside a given intent
For example, clubbing “opportunity” and “market share” would fail the third point. But use your discretion!
4 Don’t do the heavy lifting on the webhook
Let API.AI do the heavy lifting: in the example above, we had two different phrases: “greatest opportunity” and “greatest market share”.
Suppose you did group them into a single intent. You would then need to decide, in your webhook, what the user actually meant, and you would soon be handling corner cases: unexpected punctuation, slightly different word orderings, a different word with the same stem. The more such phrases you try to parse in your webhook, the more special-case rules you will add, until you have built a mini-API.AI inside your webhook code.
My suggestion: this is too hard to do on the webhook, so let API.AI do as much of the work as possible. Choose your intents in such a way that API.AI does most of the heavy lifting.
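To see why webhook-side parsing gets painful, here is a deliberately brittle sketch of the kind of routing code you would end up writing. Every failure mode in the comments would need yet another special case:

```python
# A deliberately naive webhook-side router that only survives
# the exact wording it expects.

def brittle_route(text):
    if "greatest opportunity" in text:
        return "opportunity"
    if "greatest market share" in text:
        return "market_share"
    return "unknown"

print(brittle_route("which vendor has the greatest opportunity"))   # "opportunity"
print(brittle_route("greatest opportunities by vendor"))            # "unknown": same stem, different word
print(brittle_route("which vendor's market share is greatest"))     # "unknown": word order differs
```

Patching each of these cases (stemming, reordering, punctuation stripping) is exactly the work API.AI is already doing for you.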
5 Create a mock/prototype and see what works
If all else fails, simply user test your way to a better chatbot. 🙂
When you create a chatbot, you can define some success metrics.
- How often do people fail to get to the end of the conversation?
- How frequently does API.AI invoke the fallback intent even when the user says something your chatbot should handle?
- On the other hand, how frequently does API.AI map a user’s phrase to an intent when the user actually said gibberish?
Having defined these metrics, create a mock/prototype chatbot and allow your prospects to interact with it. If you find a certain intent granularity achieves the best results, just use it! Nothing beats real world data.
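As a sketch of how you might compute the last two metrics, suppose you keep hypothetical chat logs recording the intent API.AI matched and the intent a human reviewer says it should have matched (empty string meaning the message was gibberish):

```python
# Hypothetical review logs: "matched" is what API.AI did,
# "expected" is the reviewer's judgment ("" = gibberish).
logs = [
    {"matched": "vendor_opportunity", "expected": "vendor_opportunity"},
    {"matched": "fallback",           "expected": "vendor_market_share"},
    {"matched": "vendor_opportunity", "expected": ""},  # gibberish, wrongly matched
    {"matched": "fallback",           "expected": ""},  # gibberish, correctly rejected
]

def metrics(logs):
    handleable = [e for e in logs if e["expected"]]
    gibberish = [e for e in logs if not e["expected"]]
    return {
        # should-have-handled messages that hit the fallback intent
        "missed_fallback_rate": sum(e["matched"] == "fallback" for e in handleable) / len(handleable),
        # gibberish messages that were wrongly mapped to a real intent
        "false_match_rate": sum(e["matched"] != "fallback" for e in gibberish) / len(gibberish),
    }

print(metrics(logs))  # {'missed_fallback_rate': 0.5, 'false_match_rate': 0.5}
```

Run the same logs through each candidate intent design and compare the rates – whichever granularity drives both numbers down wins.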
Long term tip: Learn about Natural Language Processing
OK, so this certainly isn’t a quick suggestion. But if you think you are going to be working on chatbots for a while, then it really helps to peek under the hood a little.
NLP is at the core of how chatbots work today. I think if you understand the fundamental concepts of NLP, such as POS tagging, Named Entity Recognition and Parse trees, you will be able to get a really good intuition into how chatbots work.
At the moment, I have two suggestions for books which provide a fairly good intro to NLP – Taming Text and NLTK Cookbook. Not that these are the only books on the topic – they just happen to be well written and fairly easy for the layperson to grasp.