In today’s era of digital transformation, Artificial Intelligence (AI) driven technologies such as chatbots play a significant role, taking human-computer interaction and experience to a new level.
Chatbots are driven by natural language processing, handling both voice and text conversations to perform pre-defined tasks. The challenge in building a chatbot is not so much technical as it is a matter of user experience: the most successful bots are the ones that deliver consistent value against the user’s requirements.
Chatbot testing is a critical enabler of an effective and efficient chatbot: as testers, we verify that all required features are correctly incorporated into the bot and that it responds appropriately to user queries. However, chatbot testing is quite different from traditional software application testing. Unlike web and mobile applications, where interaction follows a predefined path, a chatbot can be given any input at all. As a result, a chatbot should be developed and tested against unexpected scenarios as well as expected ones.
Based on our experience, we have documented what’s involved in chatbot testing, which can be divided into the following four areas:
1. Conversation Design Testing
2. Entities Testing
3. Fulfilment Testing
4. User Acceptance Testing (UAT)
1. Conversation Design Testing
Natural language understanding (NLU) tools let us map out the user’s inputs, the bot’s responses, and calls to external sources, giving an overview of the whole conversation. The NLU layer extracts both the intent and the entities from the user’s input so the bot can provide a precise response.
From a testing perspective, Conversation Design testing has the following key areas:
1.1 Conversation Flow
Technically, a conversation is a process in which two speakers exchange meaningful sentences. This back-and-forth exchange is also known as a dialog. Since chatbots are built on the same concept, testing the conversation flow is an essential step in chatbot testing.
We can test the conversation flow by adding happy-path scenarios (expected talk) and negative-path scenarios (unexpected talk) to the conversation flow. We can also add “Yes” (approval) and “No” (denial) expressions to test the bot’s behaviour. A well-designed conversation flow responds tactfully and keeps the user engaged with relevant replies, while maintaining a balance between message length and meaning.
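As a minimal sketch, conversation-flow tests can be expressed as assertions over a `reply(text)` call. The `reply` function below is a hypothetical stub standing in for the real bot under test; the test names and responses are illustrative, not any particular platform’s API.

```python
def reply(text: str) -> str:
    """Stub bot: recognises a coffee order, approval, and denial."""
    text = text.lower().strip()
    if "coffee" in text:
        return "Which coffee would you like?"
    if text in ("yes", "yep", "sure"):
        return "Great, your order is confirmed."
    if text in ("no", "nope"):
        return "No problem, order cancelled."
    return "Sorry, I didn't get that."  # fallback for unexpected talk

def test_happy_path():
    assert reply("Can I order a coffee?") == "Which coffee would you like?"

def test_approval_and_denial():
    assert "confirmed" in reply("yes")
    assert "cancelled" in reply("no")

def test_negative_path():
    # unexpected input should hit the fallback, not crash
    assert "Sorry" in reply("Which colour shirt are you wearing?")

test_happy_path()
test_approval_and_denial()
test_negative_path()
print("all conversation-flow checks passed")
```

In practice the stub would be replaced by a call into the bot’s runtime, but the structure of the checks (happy path, approval/denial, negative path) stays the same.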
1.2 Intents matching, Training phrases, and responses
Intents are the goals or purposes of the user’s input; an intent can be thought of as a collection of sentences with the same meaning. A chatbot typically has multiple intents, which together define the scope of the application. Consider the simple example of ordering a coffee: we ask the bot to “make a coffee” and expect an order confirmation. We can test this scenario by adding more matching training phrases and checking whether the bot still clearly understands the user’s input.
If the intent is to order a coffee, we can test it against training phrases such as:
“I would like to order a coffee.”
“May I have a coffee.”
“May I have a long black.”
The bot’s response, in turn, should confirm the order, for example “Which coffee would you like to have?”, “Your order is confirmed.”, or “Thank you for your order.”
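A simple way to automate this is to loop over the training phrases and assert that each one resolves to the expected intent. The `detect_intent` function here is a toy keyword classifier standing in for the real NLU engine, and the intent name `order.coffee` is an assumed label for illustration.

```python
# Training phrases that should all match the coffee-ordering intent.
TRAINING_PHRASES = [
    "I would like to order a coffee.",
    "May I have a coffee.",
    "May I have a long black.",
]

def detect_intent(text: str) -> str:
    """Toy keyword classifier standing in for the real NLU model."""
    text = text.lower()
    if any(word in text for word in ("coffee", "long black", "cappuccino")):
        return "order.coffee"
    return "fallback"

EXPECTED_RESPONSES = {
    "order.coffee": "Which coffee would you like to have?",
    "fallback": "Sorry, I didn't get it.",
}

for phrase in TRAINING_PHRASES:
    intent = detect_intent(phrase)
    assert intent == "order.coffee", f"{phrase!r} matched {intent}"
    print(f"{phrase!r} -> {intent}: {EXPECTED_RESPONSES[intent]}")
```

Adding new phrasings to `TRAINING_PHRASES` as users invent them keeps this check growing alongside the bot.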
1.3 Small talk
Small talk is casual conversation between the user and the bot. Example:
User: How are you?
Bot: Wonderful as always. Thanks for asking.
We can test this aspect by adding more and more casual exchanges and comparing the bot’s replies against the expected responses. A good collection of small talk greatly improves the user’s experience: it makes the bot more conversational by handling casual topics such as greetings or jokes instead of redirecting the user to a fallback response.
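One lightweight way to keep small talk stable across releases is a table-driven regression check: a dictionary of casual inputs and the replies we expect to keep getting. The `small_talk` lookup is a hypothetical stub for the bot’s small-talk module, and the joke is purely illustrative.

```python
# Expected small-talk pairs; extend this table as new casual topics are added.
SMALL_TALK = {
    "how are you?": "Wonderful as always. Thanks for asking.",
    "tell me a joke": "Why did the coffee file a police report? It got mugged.",
    "hello": "Hi there! How can I help?",
}

def small_talk(text: str):
    """Stub lookup standing in for the bot's small-talk module."""
    return SMALL_TALK.get(text.lower().strip())

for utterance, expected in SMALL_TALK.items():
    assert small_talk(utterance) == expected
print("small-talk replies unchanged")
```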
1.4 Fallback
Fallback is the bot’s response to unmatched input, and fallback testing measures how the bot behaves when it receives one.
Example: while ordering a coffee, the user suddenly asks the bot an unmatched query such as “Which colour shirt are you wearing?”. In human conversation the expected response would be “Sorry, I didn’t get it.”, and we expect the same natural response from the bot. We can test these scenarios by writing test cases designed to trigger the chatbot’s fallback.
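Fallback tests can be sketched as a list of off-topic queries that must all land on the fallback message rather than an unrelated in-scope answer. As before, `reply` is a hypothetical stub for the bot under test.

```python
KNOWN_KEYWORDS = ("coffee", "latte", "cappuccino", "long black")
FALLBACK = "Sorry, I didn't get it."

def reply(text: str) -> str:
    """Stub bot: in-scope orders get a prompt, everything else falls back."""
    if any(keyword in text.lower() for keyword in KNOWN_KEYWORDS):
        return "Which coffee would you like to have?"
    return FALLBACK

# Off-topic or nonsense inputs must hit the fallback, never an in-scope reply.
off_topic = [
    "Which colour shirt are you wearing?",
    "What is the capital of France?",
    "asdf qwerty",
]
for query in off_topic:
    assert reply(query) == FALLBACK
print("all off-topic queries hit the fallback")
```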
1.5 Navigation
In a conversational interface there is no back button or search box to help users move from one part of the interaction to another. Yet users have the same needs they had with traditional interfaces: they change their minds and want to go back, they want to skip steps, and so on. Our testing needs to cover such navigational scenarios to make sure the bot understands and handles these needs appropriately.
Example: while ordering a coffee, if the user wants to cancel or change the order, what are the expected steps for the user to return to his or her previous state?
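Because navigation involves state, these tests need a stateful bot. `OrderBot` below is a hypothetical stand-in that tracks one pending order; the test asserts that “cancel” mid-flow returns the user to a clean state instead of leaving a stale order behind.

```python
class OrderBot:
    """Toy stateful bot: tracks one pending coffee order."""

    def __init__(self):
        self.order = None

    def reply(self, text: str) -> str:
        text = text.lower()
        if "cancel" in text:
            self.order = None
            return "Your order has been cancelled. Anything else?"
        if "change" in text:
            self.order = None
            return "Sure, what would you like instead?"
        if "cappuccino" in text or "flat white" in text:
            self.order = text
            return "Got it. Anything else?"
        return "Sorry, I didn't get that."

bot = OrderBot()
bot.reply("A cappuccino please")
assert bot.order is not None
# The user changes their mind mid-flow and should get back to a clean state.
assert "cancelled" in bot.reply("Actually, cancel that")
assert bot.order is None
print("cancel mid-flow returns the user to a clean state")
```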
1.6 Emotion
Emotion is the tone of the language, including anger, fear, joy, sadness, and disgust. Some chatbot platforms use emotion detection in their messengers to understand the user’s mood from the text typed into the chat window. Test cases need to be written to verify that the bot recognises these emotions and responds to them appropriately.
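A hedged sketch of such a test, using a toy word-list sentiment detector; a real bot would call its platform’s sentiment or emotion API here, and the word list and responses are illustrative only.

```python
# Toy anger lexicon standing in for a real sentiment/emotion service.
ANGRY_WORDS = {"terrible", "awful", "angry", "worst"}

def detect_emotion(text: str) -> str:
    words = set(text.lower().replace("!", "").replace(".", "").split())
    return "anger" if words & ANGRY_WORDS else "neutral"

def reply(text: str) -> str:
    if detect_emotion(text) == "anger":
        return "I'm sorry about that. Let me connect you to a human."
    return "How can I help you today?"

# Angry messages should get an empathetic response, not a cheerful default.
assert "sorry" in reply("This is the worst service ever!").lower()
assert detect_emotion("I'd like a coffee") == "neutral"
print("angry messages get an empathetic response")
```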
2. Entities Testing
An entity is a keyword extracted from a training phrase. When the user speaks or types, the chatbot looks for the values of the various entities it needs in the context of the conversation. We use entities to automatically extract information from what the user says. Example:
“I am Andy, can I have a Cappuccino with 2 sugars please”
The intent here is “Ordering a coffee.”
The entities are:
Andy — Given Name
Cappuccino — Coffee Type
Two — Count of Sugars
To carry out a command, the bot first needs to understand the intent, then extract the entities from the utterance. From there the bot asks what type, how many, and so on until every required value is collected; technically this is known as slot filling. It keeps the conversation basic, though it can feel laborious for the user.
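The entity extraction above can be sketched with a toy regex extractor; the entity names (`given_name`, `coffee_type`, `sugar_count`) are assumed labels, and a real NLU model would do this far more robustly.

```python
import re

COFFEE_TYPES = ("cappuccino", "flat white", "long black", "latte")

def extract_entities(text: str) -> dict:
    """Pull name, coffee-type, and sugar-count entities from one utterance."""
    entities = {}
    name = re.search(r"\bI am (\w+)", text)
    if name:
        entities["given_name"] = name.group(1)
    for coffee in COFFEE_TYPES:
        if coffee in text.lower():
            entities["coffee_type"] = coffee
    sugars = re.search(r"(\d+) sugars?", text)
    if sugars:
        entities["sugar_count"] = int(sugars.group(1))
    return entities

result = extract_entities("I am Andy, can I have a Cappuccino with 2 sugars please")
assert result == {"given_name": "Andy", "coffee_type": "cappuccino", "sugar_count": 2}
print(result)
```

Tests at this level should cover every entity, its synonyms, and its value variations (e.g. “two sugars” vs “2 sugars”), which a real entity model handles and this regex sketch deliberately does not.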
These entities can be uttered at any time during the conversation:
User: Can I have a flat white
Bot: Do you need any sugars?
Assertion: if the milk type is not mentioned, default it to “Regular”.
Testing needs to cover all entities, their values and variations, as well as assertions like this.
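Slot-filling behaviour, including that default, can be tested with a small state object: the bot should prompt only for missing required slots and silently apply documented defaults. `CoffeeSlots` is a hypothetical stand-in, not any platform’s API.

```python
class CoffeeSlots:
    """Toy slot-filling state for a coffee order."""
    REQUIRED = ("coffee_type", "sugar_count")

    def __init__(self):
        self.slots = {"milk_type": "Regular"}  # documented default, never asked

    def fill(self, slot: str, value):
        self.slots[slot] = value

    def next_prompt(self):
        for slot in self.REQUIRED:
            if slot not in self.slots:
                return f"What {slot.replace('_', ' ')} would you like?"
        return None  # all required slots filled

order = CoffeeSlots()
order.fill("coffee_type", "flat white")      # "Can I have a flat white"
assert order.next_prompt() == "What sugar count would you like?"
order.fill("sugar_count", 0)                 # user answers the prompt
assert order.next_prompt() is None
assert order.slots["milk_type"] == "Regular" # default applied, never asked
print("slot filling complete:", order.slots)
```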
The diagram shows how the bot can extract entity values at different stages of the conversation.
3. Fulfilment Testing
Once the user’s request (along with the entity values to be sent with it) has been received, the bot needs to fetch the information required to fulfil it. This data is sent to a webhook, which retrieves the required information and returns the response to the user in the desired format. The response is the content delivered to the user once fulfilment has completed.
Example: the coffee order needs to be passed to the existing POS so that the barista receives it.
Fulfilment testing should cover all integration points as well as data sources.
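A common pattern is to test fulfilment with the webhook mocked out, so the test exercises the bot’s request/response handling without a live POS. Every name here (`pos_webhook`, `fulfil_order`, the payload fields) is a hypothetical stand-in for the real integration.

```python
def pos_webhook(payload: dict) -> dict:
    """Mock POS endpoint: accepts an order and returns a ticket number."""
    assert "coffee_type" in payload, "fulfilment payload must carry entities"
    return {"status": "accepted", "ticket": 101}

def fulfil_order(entities: dict, webhook=pos_webhook) -> str:
    """Send the extracted entities to the webhook and build the user reply."""
    response = webhook(entities)
    if response["status"] == "accepted":
        return f"Your order is confirmed. Ticket #{response['ticket']}."
    return "Sorry, the order could not be placed."

# Happy path: the mocked POS accepts the order.
message = fulfil_order({"coffee_type": "cappuccino", "sugar_count": 2})
assert message == "Your order is confirmed. Ticket #101."

# Failure path: a rejecting webhook should yield an apology, not a crash.
message = fulfil_order({"coffee_type": "latte"},
                       webhook=lambda payload: {"status": "rejected"})
assert message.startswith("Sorry")
print("fulfilment happy and failure paths covered")
```

Swapping the mock for the real endpoint turns the same assertions into an integration test against each data source.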
4. User Acceptance Testing (UAT)
Assuring the quality of a chatbot is essential, so the main priority is to verify that the chatbot’s functionality meets its requirements and goals. User testing must be carried out on a chatbot before releasing it to the market, and it is important to test it with different users and people with different personalities.
UAT should be carried out after the full implementation of the bot is complete, including the integration of intents, entities, small talk, fallback, and fulfilment.
---
Article is written by Gita Patra, Testing Lead at Ako AI