How to Measure Customer Service Quality: Methods & Tools

Just because a customer clicked a smiley face in your post-service feedback survey does not mean you gave them high-quality service.

They might love the product, and your service is just okay enough not to make a difference. Or they might be very happy with an answer, not knowing you’ve given them incomplete or out-of-date information.

Customer satisfaction and customer service quality are not necessarily linked, and that’s a problem because plenty of customer service teams rely on CSAT and NPS surveys to judge their performance.

In this post, we're going to explore the critical difference between customer satisfaction and customer service quality, and then we’ll show you, step by step, how to build an effective system for measuring the quality of the service experience you deliver to your customers.


Learn more about measuring support quality beyond CSAT and NPS in this webinar, featuring Beth Trame of Google Hire, Shervin Talieh of PartnerHero, and Mathew Patterson of Help Scout.

Step 1: Define customer service quality for your company

How can you know whether your customer support department is consistently delivering high-quality service? You need to measure quality directly, which means first understanding what “quality” service means for your company.

It is your customer base who will ultimately decide whether you are delivering great service, but that leaves us with a conundrum: What if those customers don’t agree with each other’s assessments of your service?

What one person considers spectacular service might be merely acceptable to another, based on their unique expectations and past experiences.

Your team needs a way to consistently measure customer service quality, a measure they can use before the service is delivered instead of afterward. Start by pulling together data from the following sources:

  • Your company and team values.

  • Your customer service vision or philosophy, if you have one.

  • Existing CSAT and NPS comments that focus on positive or negative customer service interactions.

  • Reviews of your product or service that mention customer service.

  • Examples of excellent customer service your team has delivered in the past, as well as instances of service failure.

As you collect data, you will likely identify some common themes — the things that matter to your company and your customers, and their relative priorities. Do your customers value detailed, one-to-one service? Does your customer feedback mention the speed of replies more often than anything else?
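If you have a backlog of survey comments to sift through, even a rough keyword tally can surface those themes. Here’s a minimal sketch in Python; the theme names and keyword lists are illustrative assumptions, not a standard taxonomy:

```python
from collections import Counter

# Hypothetical theme keywords: swap in the vocabulary your customers actually use.
THEMES = {
    "speed": ["fast", "quick", "slow", "wait", "response time"],
    "accuracy": ["correct", "wrong", "accurate", "mistake"],
    "friendliness": ["friendly", "kind", "rude", "helpful"],
}

def tally_themes(comments):
    """Count how many comments touch on each theme's keywords."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

comments = [
    "Super fast reply, thank you!",
    "The answer was wrong the first time.",
    "Friendly, but I had to wait two days.",
]
print(tally_themes(comments))  # Counter({'speed': 2, 'accuracy': 1, 'friendliness': 1})
```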

Use those themes to shape your answer to the basic quality question: What should a great customer service answer look like? Write down everything you can think of, have your team contribute suggestions, and refer to examples of your best customer service work.

That list will form the basis of your customer service quality scorecard, or rubric.

Step 2: Create a customer service quality rubric

A rubric is a list of criteria that you can measure a customer service answer against. With a clear, well-written rubric, two people should be able to review the same customer service interaction and come up with similar scores.

As a general guide, a customer service quality rubric might include these areas:

  • Voice, tone, and brand: Does the answer feel like it comes from our company (while allowing for individual personalities)?

  • Knowledge and accuracy: Was the correct answer given, and were all of the customer’s queries addressed?

  • Empathy and helpfulness: Were the customer’s feelings acknowledged and needs anticipated?

  • Writing style: Were spelling and grammar correct, was the answer clear, and was the layout helpful?

  • Procedures and best practices: Were the correct tags and categories added, and were links to the knowledge base included?

Having four to five main criteria is probably enough, though each one may include multiple elements. Keeping it relatively light will make the rubric much more likely to be used.
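To make that concrete, here is a minimal sketch of a rubric as a weighted scorecard. The five criteria mirror the list above, but the weights and the 1-to-5 scale are assumptions for illustration; use whatever scale your team agrees on:

```python
# Example criteria and weights; these are illustrative assumptions, not a standard.
RUBRIC = {
    "Voice, tone, and brand": 0.20,
    "Knowledge and accuracy": 0.30,
    "Empathy and helpfulness": 0.20,
    "Writing style": 0.15,
    "Procedures and best practices": 0.15,
}

def score_conversation(ratings):
    """Combine per-criterion ratings (1-5) into a weighted overall score."""
    for criterion, rating in ratings.items():
        if criterion not in RUBRIC:
            raise ValueError(f"Unknown criterion: {criterion}")
        if not 1 <= rating <= 5:
            raise ValueError(f"Rating out of range for {criterion}: {rating}")
    return sum(RUBRIC[c] * r for c, r in ratings.items())

# One reviewer's ratings for a single conversation.
review = {
    "Voice, tone, and brand": 4,
    "Knowledge and accuracy": 5,
    "Empathy and helpfulness": 3,
    "Writing style": 4,
    "Procedures and best practices": 5,
}
print(f"Overall quality score: {score_conversation(review):.2f} / 5")  # 4.25 / 5
```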

To save you some time, we’ve put together some of the most common elements of a quality rubric in this spreadsheet, which also includes a scorecard.

Customer Service Quality Rubric & Scorecard

Grab a copy and use the example rubric in this spreadsheet to create (and score) your own customer service quality standard.

Share your completed rubric with your team, and try applying it to some existing conversations together. You will quickly identify any missing or confusing elements and areas that may need to be reconsidered.

You’ll know your rubric is working when different people reviewing the same conversation reliably arrive at similar scores.
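One quick way to sanity-check that reliability is to compare two reviewers’ scores for the same conversation. A minimal sketch, assuming each reviewer rates every criterion on the same 1-to-5 scale:

```python
def average_score_gap(reviewer_a, reviewer_b):
    """Mean absolute difference between two reviewers' per-criterion scores."""
    shared = reviewer_a.keys() & reviewer_b.keys()
    return sum(abs(reviewer_a[c] - reviewer_b[c]) for c in shared) / len(shared)

a = {"Voice, tone, and brand": 4, "Knowledge and accuracy": 5, "Writing style": 4}
b = {"Voice, tone, and brand": 3, "Knowledge and accuracy": 5, "Writing style": 4}

# A small average gap (say, under one point on a five-point scale) suggests
# the rubric is specific enough to be applied consistently. That threshold
# is a judgment call, not a standard.
print(f"Average score gap: {average_score_gap(a, b):.2f}")  # 0.33
```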

Finally, it is time to build a quality review process that works for your team.

Step 3: Select a quality assurance review process

Quality assurance can take many forms at differing levels of complexity. For example, when the wider Help Scout team takes part in whole-company support “power hours,” we use a Slack channel to share draft answers with our Customers Team. They review for accuracy and tone and offer suggestions for improvement.

The right choice for you will depend on your team size, conversation volume, and resources. Here are four common options to consider. You may use more than one or shift between them over time. We present them here in no particular order (though self-review would be our lowest-priority choice).

1. Leader reviews

Either team leaders review their direct reports’ work or a manager reviews work for the whole department.

Upsides:

  • With fewer people reviewing, it is easier to create consistent review styles and feedback.

  • It’s helpful for leaders to compare work created across their teams to identify issues and trends.

Downsides:

  • It's time-consuming for leaders to review a reasonable number of conversations.

  • Feedback and insights from only one source limit the speed and amount of improvement possible.

2. Quality assurance specialist reviews

Common in larger companies, a permanent QA role (or team) can focus full time on monitoring and addressing quality.

Upsides:

  • Specialists can get very good at reviewing and giving feedback.

  • It allows for a higher percentage of interactions to be reviewed.

Downsides:

  • Specialists require a larger financial investment.

  • You aren’t developing QA skills in the individuals on your support teams.

3. Peer-to-peer reviews

Each support person reviews the work of other support people on the team, scoring them against the agreed rubric. Typically, each person would review a small number of conversations each week.

Upsides:

  • People learn directly from their peers by seeing different approaches and new information.

  • It promotes an open and collaborative culture.

  • You can review a lot of conversations when everyone is sharing the work.

Downsides:

  • Some people may be harsher or less consistent markers, requiring extra training.

  • It can be tricky to get people to do the reviews when their queue is full of customers waiting for help.

4. Self-reviews

Individuals select a handful of their own customer interactions and measure them against the agreed standard to identify areas that can be improved. This should generally be your review option of last resort.

Upsides:

  • It allows for individual growth and self-improvement.

  • It’s simple to implement and much better than no reviews at all.

Downsides:

  • People are less likely to identify their own problem areas because they know what they intended to say, making it harder to see where a customer might be confused.

Step 4: Pick which conversations you'll review

Whichever model you use, you cannot realistically review every customer interaction. So which conversations should you review, and how should you find them? Here are some suggestions — use what works for you:

  • Random sampling: Take whichever conversations pop out from your QA tool or blindfold yourself and poke your cursor at a screen full of conversations. You’ll get started — and that’s the main thing — but you may have to sift through some uninteresting conversations first. (There’s a small sampling sketch below.)

  • New team members' conversations: When onboarding a new support agent, reviewing their work is critical both to protect the customer and to help the newcomer learn your tone, style, approach, and tools.

  • Complaints and wins: Work through conversations that resulted in complaints or praise, as they may be more likely to involve learning opportunities.

  • High-impact topics: Use tags or workflows to find conversations on particularly important areas of your product or service where customer service quality might make the biggest impact — e.g., during trials, pricing conversations, or with VIP customers.

  • Highly complex conversations: Focus on detailed conversations or those involving multiple people where new scenarios and surprises lurk to be explored by the team.

The specific process of finding and opening those conversations for review will of course depend on the system you are using to perform those reviews.
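If your help desk exposes conversations through an API or export, a few lines of code can combine random sampling with tag-based targeting. Here is a minimal sketch; the tag names and the shape of each conversation record are hypothetical:

```python
import random

PRIORITY_TAGS = frozenset({"trial", "pricing", "vip"})  # hypothetical tag names

def pick_for_review(conversations, sample_size=10):
    """Review everything carrying a priority tag, plus a random sample of the rest."""
    tagged = [c for c in conversations if PRIORITY_TAGS & set(c.get("tags", []))]
    remainder = [c for c in conversations if c not in tagged]
    sampled = random.sample(remainder, min(sample_size, len(remainder)))
    return tagged + sampled

# In practice these records would come from your help desk's API or an export;
# plain dictionaries stand in for them here.
conversations = [
    {"id": 1, "tags": ["trial"]},
    {"id": 2, "tags": []},
    {"id": 3, "tags": ["billing"]},
    {"id": 4, "tags": ["vip", "billing"]},
]
print([c["id"] for c in pick_for_review(conversations, sample_size=1)])
```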

Step 5: Select a quality assurance tool

Your quality assurance tool does not need to be complicated. A simple spreadsheet scorecard will work fine in many cases and is an enormous improvement over not reviewing interactions at all.

If a spreadsheet is no longer working for you (perhaps because of higher volumes, larger teams, or a need for better reporting), there are plenty of customer service quality assurance tools on the market.

Here are some of the key considerations when selecting the right QA tool for your team:

  • Does this tool support the style of reviews you want to do (e.g., can it arrange peer-to-peer reviews)?

  • Will it integrate with your help desk, and, if so, how good is that integration?

  • Is the pricing acceptable at the volume of reviews you would like to do?

  • Will its reporting options help you answer the questions you have about your team’s performance?

  • Does it perform well, and is the user experience smooth? A tool with a clunky review experience is less likely to be used regularly.

  • Will it help you identify the types of conversations you are interested in reviewing?

  • How good is the customer service experience at the tool's company?


Step 6: Roll out your new quality assurance process

In addition to a clear, agreed-upon rubric, launching a successful QA process requires the right environment for the team to work in and training on how to review effectively.

1. Build trust and psychological safety within the team

If people don’t feel safe to raise problems or disagree, it will be difficult to identify and improve on any quality issues. In How to Build a Strong Customer Service Culture, we share some relevant advice.

2. Share your rubric and discuss quality as a team

As part of developing your rubric, you should be holding discussions with the team, listening to their perspectives, and coming to understand together what quality service looks like. That may also involve higher-level metrics like average response times, CSAT, and NPS.

3. Train reviewers on giving good feedback

Feedback should be specific and include suggestions for improvement when needed. Share examples of good feedback and unhelpful feedback.

4. Begin your chosen review process

Try running the process, keeping an eye out for any confusion, disengagement, or training issues.

5. Share feedback and take action

Use the review data to identify people or situations where quality could be improved, and share that feedback with the relevant people.

Your quality assurance process will need to be modified over time as your team structure, conversation volume, and underlying work change. The process should always be in service to the goal of delivering higher-quality help, so do not hesitate to adjust it when you identify issues.

Set (and raise) your own bar for customer service quality

Customers come and go, markets change, products launch, and staff are promoted; through it all, you need a way to know whether your quality is improving or declining.

If you rely on your customers telling you when you haven’t done a good job, you will always be reacting to problems that have already happened.

Instead, by setting your own quality standard — and then building tools and systems to measure against — you can chart a course of continual improvement and stop worrying about that one person who somehow always clicks the “sad” face even though they leave a positive comment.

It’s Jerry. Or Garry.
