Why Basecamp’s Support Team Tags With Intent

Track this! Tag that! Look at all those numbers!

It’s no surprise that the data-driven mindset in tech worked its way into customer support. Pick any support app out there and you’ll find plenty of reports and metric dashboards. You can slice it, dice it, and even export it for more data wizardry.

Once you can measure something, there's pressure to measure it. It's particularly noticeable with customer emails in your help desk. Every email that arrives in the inbox asks for a tag or label. Without those on the email, you lose a data point for your metrics. Now you've got this incredible pressure to tag or label everything; otherwise your reports won't be "accurate." Author and speaker Scott Berkun said:

"It’s easy to have processes/metrics for their own sake. Instead ask: what important question does this help us answer? If none, remove it."

Here at Basecamp, we’re big believers in Scott’s approach. There is no over-engineered process to generate TPS reports that no one ever uses. Each report has a defined purpose and focuses on answering specific questions.

Let’s dive into one reporting tool — tagging incoming emails — to see how this approach works.

Tagging with intent

Instead of tagging every single thing, we use tags sparingly. As a team, we decide when tagging makes the most sense. This often starts with a person looking to answer a specific question (I've included some examples below). From there, we talk it over as a team and decide if tags will really help answer that question, and if so, which emails we should tag.

It's OK if these tags change, too! If you find that a specific question has been answered, it's OK to stop tagging those emails. Or if you tweak your question a bit, you might need to tweak which emails you tag. Be flexible as you work to answer that question.

Using tagging to answer business questions

The Basecamp support team makes judicious use of tagging on incoming support questions to help us answer questions like these:

How can we let folks know when bugs and downtime events are over?

Bugs happen. Downtime happens. Once you’ve got everything fixed up, you’ll want to let customers know. Tagging emails around specific bugs and events makes it much faster to reply to customers (and to report on the overall customer impact of the issue).

What’s the simplest way for our support programmers to see what we need from them?

With some tricky situations, you’ll need help from a programmer. For those, we tag the email with an “on call” label. From there, the programmer can look at emails with just that tag and easily see which ones need help.

How can individual teams find customer conversations of interest to them?

When members of our iOS team participate in all-company support, they like to talk with customers who are using the iOS app. To group those, we automatically apply an “iOS” tag to all emails coming from our iOS app. When they start their all-company support shift, it’s easy to pull up that group of emails and start working.
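The article doesn't describe how the automatic "iOS" tag is applied, but the mechanic is a simple filter rule. Here's a minimal sketch in Python; the `X-Client` header name, the `Message` structure, and the rule table are all illustrative assumptions, not Basecamp's actual setup.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    subject: str
    headers: dict
    tags: set = field(default_factory=set)

# Hypothetical rule table: header value -> tag to apply automatically.
AUTO_TAG_RULES = {
    "ios-app": "iOS",
    "android-app": "Android",
}

def auto_tag(message: Message) -> Message:
    """Apply a client tag based on an assumed X-Client email header."""
    client = message.headers.get("X-Client", "").lower()
    tag = AUTO_TAG_RULES.get(client)
    if tag:
        message.tags.add(tag)
    return message

msg = auto_tag(Message("Calendar question", {"X-Client": "iOS-App"}))
print(msg.tags)  # {'iOS'}
```

The point of keeping the rule table tiny mirrors the article's advice: an automatic tag earns its place only when a specific team will actually pull up that group of emails and act on it.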

You'll notice that in all of these examples, there's no real "report," because none is necessary. The business goal is to organize information to provide answers to each question, so that action can be taken. A report could be created after the fact, but if the desired action has already been successfully completed, then it's not worth the time or effort. You don't have to report on everything just because you can.

When tagging doesn’t make sense

There are a few groups that we specifically don’t tag. We don’t have any tags around customer type, such as free versus paid versus VIP. Our approach is to offer all customers the best support we can. Splitting them up based on how much their account is worth doesn’t help us much.

We also don't tag feature request emails. Suppose you decide to tag all calendar ideas with "calendar." But now what? You can go to the product team and say, "We've seen a 5% increase in customers requesting this calendar improvement."

But how is that information helpful? The raw data alone doesn’t really help, and our support team couldn’t really put together a story from those tagged cases. Every customer was in a different situation when they shared the idea.

It's also tough to make sure everyone on your team applies the appropriate tag to every email. When you rely solely on those tags for feature request metrics, you can get misleading data really quickly.

It's all about tags in these examples, but you can apply the same thinking to any metric and any report your team has. Every company is different, so the set of reports you'll need will be different from the ones we use at Basecamp.

The cost of over-reporting

“Not everything that can be counted counts. Not everything that counts can be counted.”

- William Bruce Cameron

With all this data at your fingertips, there’s an urge to report on everything. Resist that urge.

When you report on everything, it's easy to lose sight of what truly matters. Those specific questions you're answering become just another point lost in the sea of reports. You'll pull your hair out trying to report on everything when many of those reports simply don't have real value.

Take a moment to review what you’re tracking and what reports you’re putting together each week. Look at that original question — what important question does this help us answer? Are we tracking this metric just to say we’re tracking it?

If you can’t clearly identify the question you’re looking to answer, then skip the reporting and go back to helping customers.

Chase Clemons

Chase is the guy behind Support Ops, a community devoted to bringing humanity back to the world of customer support. He works on the stellar support team at Basecamp.
