The 3 Pitfalls of Your Quality Assurance Program, and How to Solve Them

With Shoby Abraham, Customer Care Manager at Samsung Electronics New Zealand


Show Notes

Shoby Abraham is the Customer Care Manager of Samsung Electronics New Zealand. He’s COPC certified and has 20 years’ experience in the contact centre industry.

From his extensive experience in Quality Assurance, he’s identified there are 3 common pitfalls of most QA programs. He shares what they are and how to solve them.

The approach he shares helped Samsung shift Customer Satisfaction from 75% to 90%+.

The 3 Common Pitfalls of QA Programs:

  1. Calls are often given an overall score. Instead, give a call an overall ‘pass’ or ‘fail’ (03:34).

  2. There is often bias in how calls are picked for evaluation – Team Leaders and QA often pick them based on length. Instead, random calls should be evaluated (06:51).

  3. Team Leaders and Quality Analysts are often not calibrated in how they assess calls. Calibrating them ensures alignment of call quality across your contact centres (08:33).

You'll Learn:

  • The ‘sword’ metaphor that Shoby uses, which defines what a Quality Assurance program should and shouldn’t be (10:35).

  • The different way of thinking Team Leaders and QA need to adopt when evaluating calls, to ensure great customer outcomes (11:09).

  • What feedback should be given to agents – and when – which will make a massive difference to your Customer Satisfaction (11:28).

Connect with Shoby here: https://www.linkedin.com/in/shoby-abraham-08381587/

If your priorities include boosting employee engagement, lifting customer experience, or increasing your focus on coaching and development, I can help you out with a few ideas that you can steal. Send me an email (at blairs@bravatrak.com) and we can organize a 15-minute call over Zoom or Teams.

 

Transcript

Blair Stevenson (00:00)
Welcome to the Secrets to Contact Center Success podcast, connecting you with the latest and greatest tips from the best and the brightest minds in the industry.

I am Blair Stevenson, founder of BravaTrak, the Coaching System for Contact Centres. It helps you to boost employee engagement and customer experience, by measuring and improving your Team Leaders' coaching effectiveness.

Today, I'm joined by Shoby Abraham, who is the Customer Care Manager at Samsung Electronics in New Zealand. He is COPC certified, and has 20 years' experience in the contact center industry.

And from that experience, Shoby believes most contact center Quality Assurance programs are fundamentally flawed. And you may too, after listening to him. Today, he's going to take you through the 3 common pitfalls most Quality Assurance programs succumb to.

So welcome, Shoby, great to have you along.

Shoby Abraham (00:57)
Hi Blair. Nice to be here.

Blair Stevenson (00:59)
Yeah. Cool. So for the people who don't know, just to start out, tell us a bit about your background and your experience.

Shoby Abraham (01:07)
As you said, my name is Shoby Abraham. I've been in the contact center industry for almost 20 years now, with different logos, like Dell, Vodafone, SITEL, Salmat, which is currently Probe [Group].

And currently, I'm a Customer Care Manager at Samsung, managing the contact center operations and the local team for escalations. Any customer facing role, I am part of it.

And this entire journey has been mainly focused in contact centers, and I'm COPC certified as a vendor manager, and also COPC certified as a CSP (Customer Service Provider), which is an outsourcer [standard]. So those are the certifications I have from COPC Inc.

Blair Stevenson (01:58)
Cool. Cool. So you mentioned COPC. Just for listeners who may not be aware of that organization, can you just tell us a little bit about COPC and their customer experience standard?

Shoby Abraham (02:11)
Yeah. So COPC was formed in 1996, I think, by a group of people who took all the top standards from their operations, and put it together and made a single standard out of it.

So what that means is when you follow those standards, you will get a high-performance [contact] center, mainly focusing on quality. If you follow their standard, your customer satisfaction goes up, your revenue goes up, and your cost goes down. So those are the three critical numbers to run any contact center or any customer operations center.

So that is what COPC is doing. They have gone further down into back office operations, healthcare and different kinds of operation. But their expertise is in anything which is customer related.

Blair Stevenson (03:08)
Cool, cool. So you mentioned to me that in your opinion, most contact centre Quality Assurance programs succumb to three specific pitfalls. And you've mentioned those as:

  1. The quality of the call.

  2. Bias [in call selection]

  3. Calibration [of calls].

So let's start with the quality of calls. What's the pitfall you're noticing there?

Shoby Abraham (03:34)
So let's take a scenario of a quality form, which has got 10 attributes you are marking an agent on. So traditionally, we will mark the agents on all those 10 parameters - or attributes - and there will be an overall score.

Now let's take an example of a survey going out to the customer. Now, the customer will be asked "How satisfied are you with the service you received from the agent?" And the customer's response is either 'satisfied' or 'not satisfied'. Now you have got an overall score of 80%. But the customer is saying "Not satisfied". There is a big gap between these two.

So that's one of the pitfalls, we believe. You shouldn't be having an overall score. Instead, you should be looking at a different way of measuring it.

Blair Stevenson (04:23)
Okay. So what's the solution, what's the different way of measuring it?

Shoby Abraham (04:27)
Okay. So now I'll give you an example of the traditional way, what we did at Samsung, and what we implemented, and I'll share the numbers also, so there's some perspective to that.

Now traditionally, you have got all these parameters and you measure the agent on those ones. What COPC recommends - and what we implemented - is an overall 'pass' or 'fail' on a call. Now let me break it down. There are three critical errors - or critical factors - in a call.

One is a 'customer critical', which is directly impacting the customer. Second is a 'business critical', which is all your policies and procedures to be followed. And third is a 'compliance critical', which is your compliance metrics, like a regulatory requirement or a government requirement.

Now you categorize all your attributes under these critical errors. If an agent is not doing what he's supposed to do - for example, resolution was not provided and he didn't resolve the call - the call should be marked as 'failed'. Because if you survey that customer, the customer is not happy.

So that's what the measuring should be. So it's a big paradigm shift from the traditional way, to a new way of either you did it, or you didn't. It's black and white.

Blair Stevenson (05:52)
Yeah. So you're moving from a kind of internal compliance focus to a customer perspective.

Shoby Abraham (06:00)
Exactly. You got the point. And just to give you an example. At Samsung, we’ve had the same operations with our vendor for almost six to seven years now. We've used the same vendor in the Philippines.

Our Customer Satisfaction used to be somewhere between 75 to 80%. Then we moved from the traditional way to the COPC way, measuring with an overall 'pass' or 'fail' of the call. And we have done a lot of other work like calibration, and things like that. And our satisfaction score is now consistently 90-plus.

So there has been a 15 point jump in our satisfaction score. And this score is evaluated by a third party. That is the shift you can achieve.

Blair Stevenson (06:51)
Fantastic. So the second pitfall you mentioned to me was bias in relationship to the length of the call - the calls that get assessed for quality. So what's the problem you've noticed, and what's the implication of that problem?

Shoby Abraham (07:08)
When you want to evaluate a call, you look at the call recording, and then you think, "Okay, let me take a call which is between three minutes and possibly six or seven minutes." Team Leaders and Quality Analysts don't want to spend time on a 30-minute call or a longer call.

The guideline we follow, as per COPC, is that you shouldn't be biasing call selection. You shouldn't be picking a call based on any number. Your calls should be any random call, whether it's a one-minute call or a 45-minute call. Once you've picked it, you have to mark it.

Blair Stevenson (07:45)
And so what's the problem with that little range that you referred to?

Shoby Abraham (07:51)
Generally you would set that range for an ideal call. But you don't want to measure your ideal calls - you want to measure your opportunities, because quality is a tool to measure your opportunities. Your opportunities may be lying in the shorter calls or the longer calls.

In shorter calls, maybe the agent is doing everything perfectly. And that's the benchmark you want to share with the rest of the business. Or, if you have a longer call, then there is an opportunity to create a training plan and coaching plan for the agent and the rest of the business.

So that's one of the biggest pitfalls we see.

Blair Stevenson (08:27)
Makes sense. So the third and final pitfall is 'calibration'. So just talk us through that issue.

Shoby Abraham (08:33)
I'll first give an example for people to understand that better. Now let's look at a traditional way of calibrating a call. You have got 10 attributes. One of the attributes is, "Has the agent done everything to resolve the call?" Another one is, "Has the agent taken ownership of the call?"

Now a Team Leader marks the call and believes the agent did not take ownership, so the overall score of that call is 90% - nine of the 10 attributes passed. Now, the Quality Analyst marks the same call, but believes the resolution was not done. And again, the Quality Analyst will mark it as 90%.

Both show an overall score of 90%. And if your overall scores are within a 3 to 5% variance of each other, you're considered calibrated. But one is failing on ownership. One is failing on resolution. So that is what we say is an uncalibrated scenario between the Quality Analyst and the Team Leader.

Blair Stevenson (09:37)
So what's the solution to that Shoby?

Shoby Abraham (09:40)
What I believe, and as COPC also says very clearly about it, is you should be calibrating on each attribute.

I spoke about three critical errors - the 'customer critical', 'business critical' and 'compliance critical' - and each may have different attributes under it. Now you should be measuring each of those attributes and seeing whether you're calibrated.

And if you're not calibrated, then there is coaching required for the Team Leader to get calibrated. The Team Leader shouldn't be marking calls until they get calibrated. And that's an opportunity for the Team Leader to be aligned with the rest of the business.

Blair Stevenson (10:18)
Makes perfect sense. You've given us another way to think about Quality Assurance, Shoby. I really appreciate it. So perhaps just to wrap up, what would be your top three tips for contact centre leaders, as to how they could best structure the Quality Assurance program?

Shoby Abraham (10:35)

Tip #1

First thing I would say - obviously, these are frameworks which we can follow, and it's a shift, as I said - but definitely one thing I will say to all the people in the contact center who want to grow: a Quality Assurance program, or the quality evaluation, is not a sword. It's not a sword to fight and slice and dice the agent. It is a tool for us to improve our performance. So use it as a tool rather than as a scary evaluation method.

Tip #2 (11:09)

And second is, I would definitely say, evaluate a call from the customer's perspective. Don't just look at whether the agent did everything they are supposed to do. Look at, "Did the agent do everything to make the customer happy and resolve the call?" So that's a different way of thinking.

Tip #3 (11:28)

And third is, make sure that you are providing the coaching and the feedback on all the calls you evaluate. Don't leave any. Even if it is a 100% call, or a perfect call, share the feedback with the agent so that they can repeat their behavior.

And if it is a call which has got opportunity, share the feedback that same day. Don't wait for the right opportunity. Spend at least five to 10 minutes to share that feedback. That will make a massive difference.

And audit. This is for all the Operations Managers and Contact Center Managers - do a random audit of whether the feedback has been provided exactly where and when it should be.

Those are my tips.

Blair Stevenson (12:11)
Well, that's all we've got time for today. Thank you to Shoby for coming on the show. You've given us some incredibly useful insights into the common pitfalls, as you see them, in Quality Assurance programs. Now, for listeners, you'll find the link to the show notes in the episode description below.

And if you'd like to connect with Shoby on LinkedIn, you'll also find a link to his LinkedIn profile in the description too (https://www.linkedin.com/in/shoby-abraham-08381587/).

Now, if your priorities this year include boosting employee engagement, lifting customer experience, or increasing your focus on coaching and development, I can help you out with a few ideas that you can steal.

Send me an email (at blairs@bravatrak.com) and we can organize a 15-minute call over Zoom or Teams. My contact details are also in the episode description below.

Well, that's it from us today. Have a productive week.