
How Do Chatbots Learn?

Written by Daniel Balaceanu | Mar 6, 2020 1:11:07 PM

Chatbots are getting a lot of press these days. Which is great for chatbot companies, right? Well, yes, of course. And also, no. You see, what isn’t getting nearly as much press is how chatbots actually work. How they do what they do so well. Think about it: the last time you visited your bank’s website, did you see that chat window down in the bottom right? Maybe it popped up and asked if there was anything it could help you with. Did you use it? If so, at some point while the chatbot was retrieving your account balance, or setting up a money transfer, or whatever you asked of it, did you stop and wonder how on earth it knew how to do that?

Experience has shown us that the better someone understands a technology, the more likely they are to make the best use of it. So, in the interest of helping you see not only how beneficial a chatbot could be for your business scenario, but also just what that bot is doing for you and your customers, today we bring you a brief look at how chatbots learn.

Chatbot Conversation Models

Before we jump in, it’s key to understand that there are two primary types of chatbot: selective and generative. Each model has its strong points, just as with any technology. The main difference to understand here is how each type of chatbot handles incoming requests and retrieves appropriate answers. Selective bots parse the question asked by a user and compare the words used to their pre-programmed queries. Then the algorithm selects what it believes to be the most appropriate answer from a list of available answers, again pre-programmed into the bot’s database.
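To make that concrete, here is a minimal sketch of the selective idea in Python. It is not how any particular product implements it; the question-and-answer pairs are made up, and the “matching” here is simply word overlap.

```python
# Minimal sketch of a selective (retrieval-based) bot: match the user's
# words against pre-programmed questions and return the stored answer.
# The question/answer pairs below are hypothetical examples.

FAQ = {
    "what is my account balance": "Your current balance is shown on the Accounts page.",
    "how do i transfer money": "Go to Payments > New Transfer and follow the steps.",
    "how do i reset my password": "Use the 'Forgot password' link on the login screen.",
}

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def select_answer(user_question: str) -> str:
    user_words = tokenize(user_question)
    # Score each stored question by word overlap and pick the best match.
    best_question = max(FAQ, key=lambda q: len(tokenize(q) & user_words))
    if not tokenize(best_question) & user_words:
        return "Sorry, I don't have an answer for that yet."
    return FAQ[best_question]

print(select_answer("How can I transfer some money to a friend?"))
# -> "Go to Payments > New Transfer and follow the steps."
```

Real selective bots use far richer matching than word overlap, but the shape is the same: pre-programmed questions, pre-programmed answers, and a scoring step in between.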

On the other hand, a bot built on a generative model uses its AI and machine learning algorithms to compose unique responses based on the question asked. These models use Natural Language Understanding (NLU) and Natural Language Processing (NLP) structures to build their own database of answers, growing these databases with each new question asked.

Selective Conversation Model Chatbots Learn by Example

At their most fundamental level, selective chatbots learn by reading their own history of questions and answers. When a question comes in, the bot selects the most appropriate answer from its database, then checks whether that answer satisfies the customer. If so, it marks the transaction as a success and stores that information for future reference.

Then, when another question comes in that uses similar words, syntax, and so on, the bot knows the answer that worked last time is likely to work again, so it serves up that same answer. If this time the customer is not happy and rephrases their question, the bot starts over, searching for a more appropriate answer. And the process continues, with the bot getting better with each successive round of Q&A.
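One simplified way to picture that feedback loop (a sketch with hypothetical questions and answers, not the actual mechanism inside any given bot) is to keep a running success score for each question/answer pairing and prefer the answer that satisfied users before:

```python
from collections import defaultdict

# Simplified sketch of "learning by example": track how often each stored
# answer satisfied users for a given kind of question, and prefer the
# answer with the best track record next time. All data here is made up.

ANSWERS = [
    "You can check your balance under Accounts.",
    "Transfers are under Payments > New Transfer.",
]

# success_scores[question_key][answer_index] -> net count of satisfied users
success_scores = defaultdict(lambda: defaultdict(int))

def question_key(question: str) -> str:
    # Crude normalization so similarly worded questions share history.
    return " ".join(sorted(question.lower().split()))

def pick_answer(question: str) -> int:
    scores = success_scores[question_key(question)]
    # With no history yet, this falls back to the first answer.
    return max(range(len(ANSWERS)), key=lambda i: scores[i])

def record_feedback(question: str, answer_index: int, satisfied: bool) -> None:
    success_scores[question_key(question)][answer_index] += 1 if satisfied else -1

# First round: the bot guesses, the customer is unhappy, the bot learns.
idx = pick_answer("how do I move money?")
record_feedback("how do I move money?", idx, satisfied=False)
record_feedback("how do I move money?", 1, satisfied=True)
print(ANSWERS[pick_answer("how do I move money?")])
# -> "Transfers are under Payments > New Transfer."
```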

These bots use a form of machine learning to make these connections and to “learn” when they need to go back to the database and pick a different answer. The same machine learning algorithms also help the bot learn when a question is related to a past one, or when the intents of the questions differ, so the bot can get more efficient at selecting appropriate answers over time.
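One common way to handle that kind of intent matching (a sketch assuming the scikit-learn library; the intents and training phrases are invented purely for illustration) is to train a small text classifier on example phrasings of each intent:

```python
# One possible way to teach a bot which intent a new question belongs to:
# a small text classifier trained on example phrasings per intent.
# Assumes scikit-learn; the phrases and intent labels are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "what is my balance", "how much money do i have",
    "send money to a friend", "make a transfer",
    "i forgot my password", "reset my login",
]
intents = [
    "check_balance", "check_balance",
    "transfer_money", "transfer_money",
    "reset_password", "reset_password",
]

intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_phrases, intents)

print(intent_model.predict(["could you move some money for me"])[0])
# -> one of the trained intents, most likely "transfer_money"
```

With only a handful of training phrases the prediction can wobble; in production these classifiers are trained on thousands of labeled examples.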

This version of AI does not include NLP, however, so a selective chatbot can only respond using the syntax and language it was programmed with. This means that if a customer uses slang or regional vernacular in their question, the answer will not contain similar structures. Rather, all answers served by the bot will sound the same, which can sometimes come across as out of place or stilted.

Generative Conversation Model Chatbots Learn by Listening

Conversely, chatbots built on a generative model learn by listening. What we mean is that these bots can alter their syntax and word usage to match that of the questioner. They also develop each answer on the fly, so to speak, making up unique responses for each conversation they engage in. This is cutting-edge AI, using machine learning algorithms not only to parse the words used by their conversation partner, but also to work out the intent behind those words.

The basics underpinning generative bots are actually quite similar to selective ones, in that the developers start with a set of labeled data containing questions and answers (most often supplied by the company that will be deploying the bot). From there the algorithms diverge, with generative algorithms predicting possible responses to unlabeled data in the form of new questions asked by users of the chat interface. Whether for customer service, tech support, or internal enterprise functions, these bots are capable of learning to parse the intent behind users’ questions and assembling their own unique answers based on these predictions.
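For a rough feel of the generative side, here is a sketch that leans on an off-the-shelf pretrained text-generation model via the Hugging Face transformers library. That library and model choice are assumptions for illustration only; in practice the model would first be fine-tuned on the company’s own labeled question-and-answer data before it produced anything useful.

```python
# Rough sketch of a generative bot composing an answer on the fly, using
# an off-the-shelf pretrained model. Assumes the Hugging Face `transformers`
# package; the model, prompt format, and settings are illustrative, and a
# real deployment would fine-tune on the company's labeled Q&A data first.

from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def generate_answer(question: str) -> str:
    prompt = f"Customer: {question}\nSupport agent:"
    output = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    # Keep only the newly generated text after the prompt.
    return output[0]["generated_text"][len(prompt):].strip()

print(generate_answer("How do I set up a recurring transfer?"))
```

Unlike the selective sketch earlier, nothing here is picked from a fixed list; the response is assembled word by word, which is what lets generative bots mirror the questioner’s phrasing.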

This predictive model relies on the same historical foundation as the selective conversation model; the difference lies in the ability to predict future questions before they’re asked and to generate new answers based on those predictions. The bot then adds these new responses to its ever-expanding database to inform future conversations. All of this also means that generative bots can respond by mirroring their conversation partner’s syntax and grammar, sounding much more natural to their human counterpart. In fact, many generative chatbots have been mistaken for live humans in test settings.

Other Types of Machine Learning Algorithms Used to Teach Chatbots

There are other categories of algorithms being used in the training of chatbots. These algorithms further aid in the bot’s ability to parse intent, learn what mood people are more likely to be in based on word selection, and more.

Sentiment analysis

This class of algorithm helps a chatbot learn to detect how a person is feeling based on the specific words they use. It also takes into account context, previous interactions with the same person, and even cues such as capital letters and punctuation.
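As an illustration, one widely used approach is a lexicon-based analyzer such as NLTK’s VADER, which already weighs cues like capitalization and exclamation marks. The threshold below is an arbitrary example, not a recommended setting:

```python
# Illustrative sentiment check using NLTK's VADER analyzer, which accounts
# for cues like capitalization and exclamation marks. Assumes nltk is
# installed; the -0.3 threshold is an arbitrary example value.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def user_seems_upset(message: str) -> bool:
    # compound ranges from -1 (very negative) to +1 (very positive).
    return analyzer.polarity_scores(message)["compound"] < -0.3

print(user_seems_upset("THIS IS THE THIRD TIME MY TRANSFER FAILED!!"))  # likely True
print(user_seems_upset("Thanks, that solved it."))                      # likely False
```

A bot that detects frustration like this can change tone, apologize, or hand the conversation to a human agent.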

Anomaly detection

Particularly useful in healthcare settings, anomaly detection is used to alert a user, on either the customer end or the company end (hospital or doctor’s office), of potentially hazardous situations. For example, if a user enters their blood pressure into a chatbot daily for entry into their medical records, the bot can alert them that it may be time for an appointment when it detects a spike in the readings.
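A toy version of that blood-pressure example might look like the sketch below; the readings and the threshold are made up, and any real system would be built with clinical input rather than a simple statistical rule:

```python
from statistics import mean, stdev

# Toy version of the blood-pressure example: flag a new reading that sits
# well outside the user's recent history. Readings and the z-score
# threshold are invented for illustration only.

def is_anomalous(history: list[float], new_reading: float, z_threshold: float = 2.0) -> bool:
    if len(history) < 5:
        return False  # not enough history to judge
    avg, spread = mean(history), stdev(history)
    return spread > 0 and abs(new_reading - avg) / spread > z_threshold

systolic_history = [118, 121, 119, 122, 120, 117, 123]
print(is_anomalous(systolic_history, 121))  # False: within the usual range
print(is_anomalous(systolic_history, 158))  # True: worth suggesting an appointment
```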

Speech recognition

With the rise of Alexa and Siri, voice-to-text and text-to-voice are rapidly growing areas of research and development. The next frontier for chatbots is training that includes listening to tone of voice and speech patterns, then using those cues to inform the bot about the person’s intent.

Predictive analysis

The reverse of anomaly detection, this class of algorithm helps chatbots determine what a user may want before they ask for it. Already appearing in some emerging generative bots, predictive analysis is making its way into mainstream bots to help with things like customer returns and FAQs for customer service centers.
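A very simple form of this (a sketch with hypothetical conversation data) is to count which request typically follows another in past sessions and offer the likely next step proactively:

```python
from collections import Counter, defaultdict

# Simple sketch of predictive analysis: learn which request typically
# follows another in past conversations and offer it before the user asks.
# The session histories and intent names are hypothetical.

past_sessions = [
    ["track_order", "start_return", "refund_status"],
    ["track_order", "start_return"],
    ["track_order", "contact_support"],
]

# next_intent_counts[current_intent] counts what users asked for next.
next_intent_counts = defaultdict(Counter)
for session in past_sessions:
    for current, nxt in zip(session, session[1:]):
        next_intent_counts[current][nxt] += 1

def predict_next(current_intent: str) -> str | None:
    counts = next_intent_counts[current_intent]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("track_order"))   # -> "start_return"
print(predict_next("start_return"))  # -> "refund_status"
```

A bot using this could, for instance, offer the returns form as soon as a customer finishes tracking a late order.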

So, at a fundamental level, chatbots learn via algorithms. These algorithms are put in place by teams of developers and data scientists to help the bot better serve the company deploying it, whether that’s a customer service chatbot like the one at your bank, or an enterprise bot giving internal users information on their HR status or preparing reports.