How are you gauging the quality of your customer experience? How do you decide whether your customer is really satisfied at the end of each call?

Is it part of your agents' ACW? In that case, how objective is your agent in marking your customer's mood? Or are you using a survey system? Sadly, we know that surveys are not always answered. Besides, every customer judges on a personal scale. One customer's 4-star rating can mean they're not completely satisfied, whereas that may be the highest rating another customer gives. The scale just isn't standardized.

Or are you still using metrics such as first call resolution to judge satisfaction quality? Do these metrics give you a complete picture? For example, consider a customer who calls, speaks to your agent for 5 minutes, hangs up and doesn't call again. Was that really a call resolution, or have you just lost a customer? And we all understand that even the best QA team can only go through a small sample of call recordings. We don't know how many unhappy (or happy) customers hide within the call recordings we missed.

What you really need is an objective, intelligent analysis of customer sentiment at the end of every single call. That's where Sentiment Analysis comes in.

Sentiment Analysis is a function of your Speech Analytics system. A Speech Analytics system first transcribes every word your customer speaks into text. Then it analyzes the text using Natural Language Processing to understand what customers meant by what they said. Based on the words used, and their context, Sentiment Analysis is able to gauge whether the customer's mood was:

a. Positive
b. Neutral
c. Negative

And here is why it helps:

Fastest Feedback Loop. A great plus point is that the results are instant. Your agent gets immediate feedback, and studies show that immediate feedback is the most powerful way of improving performance.

100% analysis. 100% of your calls are analyzed.
You don't miss a single negative (or positive) remark. This makes your judgment of agent performance 95% more accurate than before.*

Deep Dive. The results can be further scanned with various keywords to find common causes and trends. For example, are the negative results all pointing to a product or service flaw? Are there common complaints that can be used to improve your product offering? Are there triggers in your script that cause a change in customer sentiment?

Find your star players. Sentiment can also be scanned at the beginning and end of the call. Are there star agents who can turn customer sentiment around from negative to positive? You can automate call routing of certain customers to your star agents.

Take immediate action. You can develop APIs that allow barge-ins or call redirects whenever sentiment drops to negative.

Go beyond first call resolutions. Let's go back to the customer who calls, speaks to your agent for 5 minutes, hangs up and doesn't call again. Was this a [...]
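To make the Positive/Neutral/Negative bucketing described above concrete, here is a minimal Python sketch of a lexicon-based classifier. The word lists and scoring rule are illustrative assumptions for this sketch, not the model an actual commercial Speech Analytics system uses.

```python
import re

# Tiny hand-built lexicons -- purely illustrative, not a production word list.
POSITIVE = {"happy", "great", "thanks", "resolved", "helpful"}
NEGATIVE = {"unhappy", "angry", "slow", "useless", "cancel"}

def classify(transcript: str) -> str:
    """Bucket a transcript into positive / neutral / negative by word counts."""
    words = re.findall(r"[a-z]+", transcript.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("Thanks, the agent was helpful and my issue is resolved"))  # positive
print(classify("I am unhappy and want to cancel"))                         # negative
```

Because a rule like this runs in microseconds per call, it is easy to see how a real system can cover 100% of recordings rather than a QA sample.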
At the beginning and end of every call, your Speech Analytics system displays customer mood on your agents' dashboard. How does a machine understand whether your customer is happy or not? It uses two things:

Natural Language Processing
Sentiment Analysis

Natural Language Processing. Conventionally, people used programming languages to "speak" to computers. But now we see Alexa, Siri and Cortana following instructions we give in our "natural" language. This is thanks to NLP. NLP, or Natural Language Processing, is the ability of a computer program to understand natural human language. It is an aspect of Artificial Intelligence.

Sentiment Analysis. People don't just communicate information using language. We communicate emotions too. Sentiment Analysis is a layer placed over Natural Language Processing, where a program is "trained" to understand sentiment in a text passage.

How does this work?

The short explanation. Very simply speaking, your Speech Analytics (SA) system first translates your call recordings to text. It then scans for positive and negative words, relates each word to the surrounding 5 words, and uses those words as context to understand word meanings. Like humans, machines, too, get better and better at understanding language through exposure and experience. So the very short explanation would be: your Speech Analytics system can tell if your customer is happy or not because of experience.

The slightly longer explanation. What we mean by experience is that your Speech Analytics system has been exposed to billions and billions of words, until it has learnt to understand meanings. This is similar to how a well-trained, educated (and possibly middle-aged) person gets good at understanding what others mean to say. It learns to understand slang, humor and sarcasm. For example, it learns to differentiate this sentence:

That is so sick!

From this sentence:

I'm sick of you.

It also assigns intensities. For example, in this sentence:

I am very unhappy with your service.
SA will consider this sentence negative because of the presence of "unhappy", and then assign it an intensity of +2 due to the presence of "very".

It then gives you an aggregate. For example, in this sentence:

I am happy with your service but your product is too costly.

SA will be trained to assign "costly" as negative, and it will add ++ for intensity because of "too". But then it will also calculate for the positive emotions. Finally, it will use appropriate aggregators, like sums or averages, to assign a final value to your sentiment.

How accurate is SA at analyzing sentiment?

SA is more accurate than humans. BUT, this statement comes with two disclaimers.

Disclaimer 1: SA is only more accurate than humans over large quantities of data that have to be analyzed within tight deadlines. This is because humans get bored with repetitive tasks. Pressure to meet deadlines, tiredness and boredom can all affect human accuracy.

Disclaimer 2: SA is not 100% accurate. On average, SA can be said to be 60-85% accurate. But know this: within the professional setup of gauging sentiment from text, humans are not 100% accurate either. [...]
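The scoring walk-through above (a negative word, an intensifier, then an aggregate) can be sketched in a few lines of Python. The polarity values and intensifier weights below are made-up assumptions for illustration; a real SA engine learns these from training data rather than a hand-written table.

```python
import re

# Illustrative lexicons: polarity of sentiment words and multipliers for
# intensifiers like "very" and "too". Values are assumptions for this sketch.
POLARITY = {"happy": 1, "unhappy": -1, "costly": -1}
INTENSIFIERS = {"very": 2.0, "too": 2.0}

def score_sentence(sentence: str) -> float:
    """Score one sentence: intensifiers boost the next sentiment word."""
    words = re.findall(r"[a-z]+", sentence.lower())
    total, boost = 0.0, 1.0
    for w in words:
        if w in INTENSIFIERS:
            boost = INTENSIFIERS[w]        # remember the intensifier
        elif w in POLARITY:
            total += POLARITY[w] * boost   # apply it to the sentiment word
            boost = 1.0                    # reset for the next word
    return total

def aggregate(sentences) -> float:
    """Combine per-sentence scores into one final value (here: the average)."""
    scores = [score_sentence(s) for s in sentences]
    return sum(scores) / len(scores)

print(score_sentence("I am very unhappy with your service"))  # -2.0
# "happy" (+1) plus "too costly" (-2) nets out negative:
print(score_sentence("I am happy with your service but your product is too costly"))  # -1.0
```

Note how the second sentence shows the aggregation step from the text: the positive and negative clauses are both counted, then summed into a single value.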
Contact centers have thousands of valuable bytes of data stashed away as call recordings. Unfortunately, it takes trained personnel 3.5 minutes just to listen to a 2-minute call for keyword detection alone. Natural Language Processing (NLP) combines with Machine Learning to analyze every single call in minutes. The use cases for this are unlimited, but an immediate application is in quality. Currently, quality assurance teams can go through only about 5% of total recordings. Speech Analytics can go through every single recording, making the process 95% more accurate.

Ozonetel's Speech Analytics system is already in use. Clients are using the system to get real-time feedback on 4 parameters: greeting detection, agent quality, phrase detection and customer response. The system is 15x faster than a QA team at judging these basic parameters, giving real-time feedback to the agent and improving quality drastically.
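As a rough illustration of the checks such a system automates, here is a hypothetical phrase-detection pass over a transcript, covering two of the parameters mentioned above (greeting detection and phrase detection). The phrase lists and the `qa_checks` helper are invented for this sketch and are not Ozonetel's actual parameters.

```python
# Hypothetical QA automation sketch: scan every transcript for a greeting
# and for complaint phrases. Phrase lists are illustrative assumptions.
GREETINGS = ("good morning", "good afternoon", "thank you for calling")
COMPLAINT_PHRASES = ("speak to a manager", "cancel my account", "still not working")

def qa_checks(transcript: str) -> dict:
    """Run simple substring checks a QA reviewer would otherwise do by ear."""
    text = transcript.lower()
    return {
        "greeting_detected": any(g in text for g in GREETINGS),
        "complaints": [p for p in COMPLAINT_PHRASES if p in text],
    }

result = qa_checks("Thank you for calling Acme support. I want to cancel my account.")
print(result)  # {'greeting_detected': True, 'complaints': ['cancel my account']}
```

Running checks like these over every recording, instead of a 5% sample, is what closes the coverage gap described above.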