It’s the most compelling customer interview I’ve seen all year.

Emil Sundberg, CEO of Snowfire, tells our product manager, Ryan Brown, that he wants nothing to do with AI in customer support. And he doesn’t mince words: “We are all about human support. We don’t care about AI. We don’t want to be AI-first…I don’t like the bot, no matter how good it is.”

Then Ryan has him try the chatbot in question — and everything changes. Emil is impressed. He smiles at the answers it gives. He even says it sounds like his team wrote the responses. By the end, he’s all in: “I want to activate this immediately,” he says.

You can watch it unfold for yourself in this quick video:

So what happened here? There was a gap between Emil’s perception of the AI’s function and its actual function. As Help Scout’s senior copywriter, I want to bridge this type of gap before the customer or prospect sits down to demo the product — or at least open their minds to the possibility that the gap can be bridged.

As part of that work, I’ve reviewed dozens of customer interviews and sales calls while also conducting my own interviews, both with Help Scout customers and with customers of our competitors. The primary goal? To learn more about AI perception gaps like the one in Emil’s video.

Defining the perception gaps

In this post I want to share my takeaways from the conversations I’ve been having — the perception gaps that support and business leaders have about the use of AI. Keeping these viewpoints top of mind is crucial as we look to help more companies navigate what is probably the most impactful technological shift in support since the internet itself.

While each person and business had their own take on the issue of AI, I’ve noticed that the concerns they mentioned landed in one (or more) of three main categories. I like to think of them as the heart gap, the hype gap, and the nuance gap.

Let’s explore what each of these looks like.

The heart gap

Many people I’ve spoken to share Emil’s initial negative feelings around AI in support.

We are all about human support.

They believe that:

  • AI chatbots or agents are designed to fake human interaction.

  • They’re like a bouncer at the front door of your customer support, requiring a hostile exchange before you can get “real help.”

  • They complicate support instead of simplifying it.

In other words, AI hollows out the heart that’s essential in great support, turning it cold, impersonal, and even inaccurate.

It’s easy to see why this is their initial take. Think of all the factors that impact people’s opinions of AI. Social feeds are crammed with AI content so objectionable it’s earned the name “slop.” Every day it feels like another CEO is flexing layoff numbers while tipping their cap to AI. And that’s before mentioning the endless support chatbot loops we’ve all experienced.

So when people are greeted with “AI-powered support” or “AI support agent” features, many steer clear.

The hype gap

Of course, not everyone I spoke to was trying to avoid AI. Many are actively seeking it out. These seekers are typically founders, VPs, or senior support leaders who are either looking for ways to give their teams an efficiency edge or simply following company-wide mandates to use the technology more. Raise your hand if you’ve been in a meeting where your boss has posited, “How can we be using AI more?”

In the last several conversations I’ve had with folks who head up support teams, they each brought up AI before I could even get to my first question about it. They were all searching for more or better ways to leverage it in both customer-facing and internal contexts. 

But something else interesting came out of those conversations: The seekers were also skeptical.

 I am automatically skeptical.

That’s what a head of support recently told me in response to our claim that our AI Answers feature has a 70% average resolution rate. Not because of the percentage, but because of that squishy little word, resolution.

“Any tool that I turn on could generate an answer,” she said. “I just don't have the confidence that it's going to be aligned with the standard of support that I care about.”

Welcome to the hype gap, home of those who have seen the big promises and have been burned before. Keep in mind, this is one of the people already sold on AI’s potential impact — a seeker. Yet I get the skepticism. There is a huge range of answer quality across AI-powered support tools. “Resolution” leaves a lot of room for uncertainty — not to mention that the current industry standard is to charge for each one, creating a financial incentive to count them.

While you could also argue that companies are incentivized to count only quality resolutions so customers keep using their products, it’s easy to see why support leaders might be skeptical.

This is just one example, but it’s important to remember that AI in support is still new enough that measuring its performance can feel mysterious. There aren’t many automatically understood and accepted metrics like the open and click rates of email tools, so education and transparency go a long way.

The nuance gap 

When shopping for any kind of solution, people tend to fixate on their edge cases. We think of less common scenarios and use them as reasons not to adopt a new product or feature, whether it’s AI-powered or not.

But I believe this feeling is even more powerful with AI support tools because customer-centric companies know that those edge cases are where quality support shines. They’re opportunities to make a customer’s day and win their loyalty. How could you not fixate on them?

The answer is not always as black and white as we’d hope.

This quote came from a customer explaining his team’s hesitancy to use an AI chatbot, despite being sold on the potential upsides. It’s an example of what I’ve been thinking of as the nuance gap.

If the answer isn’t necessarily straightforward, can AI really capture the right level of detail? Will it understand the product’s warts and how to help people work around them?

Sure, it’s reasonable to believe an AI chatbot or agent can handle the easy stuff. But what happens when the answer isn’t black and white, to borrow a term from the quote above? Will it make up an answer? Respond with something kind of right, but not really? When we leave room for uncertainty about these scenarios, it’s tough to earn trust.

Bridging the gaps

The external factors that impact people’s perceptions of AI are largely beyond our control as makers, marketers, and educators of support software, but we can control how we talk about AI and describe it to our customers. That’s where it’s easy to fall short.

After all, what do you see on the vast majority of B2B software websites right now? It’s some variation of a “We have AI” statement, often wrapped around a headline that reads like it came straight from an investor pitch deck. Instead of focusing on benefits to the customer, we get caught up with:

  • What do investors want to hear?

  • What are our competitors saying?

  • What makes us sound more innovative?

This does little for the folks I spoke with, who are much more concerned with questions like:

  • How will this make customers feel about working with us?

  • How does it make my support team’s life easier?

  • Isn’t this just going to replace the human touch customers love about us?

The task for me as our copywriter is to help current and potential customers “mind the gaps” and see how AI can be used to solve real problems that support pros face every day.

AI isn’t a person, but it doesn’t have to feel impersonal

In the video above, Emil entered one of his customers’ most common questions and received a response informed by his company’s help center, complete with links to relevant articles.

In reality, this interaction is just a different spin on the customer searching the help center themselves or perhaps even receiving a saved reply from a team member. The reply is informed by Snowfire’s knowledge base, which was written by Emil’s team. The voice and tone can be fine-tuned to match the company’s. It’s a remix of their own words, curated for the question’s phrasing.

The difference is that the customer can get a response instantly rather than having to manually comb through the knowledge base on their own or wait a few hours for an email response. The customer gets the fastest possible help, and the support team gets more time to focus on the things that only they can do. 

Of course, that’s often not the type of interaction that comes to mind when you think of AI support. Instead, the truly irreplaceable human exchanges come to mind — the complex situations that require your team to go above and beyond to find a solution or the ones that show customers how much you care. Skepticism is fair, and it’s OK to acknowledge that. People will always be best suited to take on these more complicated issues, and the best AI products don’t try to deflect in these situations.

Showing the moments AI is best suited for creates “aha” moments like the one Emil had, which is one of the reasons we’re showcasing his story on our AI Answers page:

Emil - HS AI Answers LP

Favor clarity over hype

Data comes in handy when you’re trying to prove the benefits of a new technology, but it can also breed skepticism. We’ve all seen the claims that “AI boosted our efficiency by 1000%.” It’s easy for something like a 70% resolution rate to blend into the noise.

In both writing and marketing, we often talk about showing, not just telling. With AI, showing feels more important than ever. What does a resolution look like? How is one counted?

In Help Scout, it’s fairly straightforward. A resolution is a single conversation that’s resolved without human help: the customer receives an AI response and doesn’t escalate to a person, search the knowledge base, ask more questions, or indicate that they need more help.
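To make the definition concrete, here’s a rough sketch in Python of how a rule like that could be modeled. This is an illustration only, not Help Scout’s actual implementation — the field and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    """A single support conversation. Field names are hypothetical."""
    got_ai_response: bool     # the AI produced an answer
    escalated: bool           # customer asked for a human
    searched_docs: bool       # customer went back to the knowledge base
    asked_followup: bool      # customer asked additional questions
    flagged_unresolved: bool  # customer said they still need help

def is_ai_resolution(c: Conversation) -> bool:
    """Count as resolved only if the AI answered and the customer
    showed no signal that they needed more help."""
    return c.got_ai_response and not any(
        (c.escalated, c.searched_docs, c.asked_followup, c.flagged_unresolved)
    )

def resolution_rate(conversations: list[Conversation]) -> float:
    """Share of conversations resolved without human help."""
    if not conversations:
        return 0.0
    resolved = sum(is_ai_resolution(c) for c in conversations)
    return resolved / len(conversations)
```

The point of spelling it out is that every “needs more help” signal vetoes the resolution — an AI answer alone isn’t enough to count.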

AI Answers - Need more help

The mechanics of a resolution are something we’ve communicated inside the product and in our help documentation, but not as much in marketing assets. That will change going forward. Claims need detail beyond “based on internal data” to bridge the hype gap.

We want our customers to know that we stayed away from chatbots for many years. Now that that has changed, we don’t expect them to buy into the hype just because we say so.

Embrace the nuance of support

So what about situations when a simple yes or no answer won’t suffice — can AI handle the gray areas of customer support? The answer is often yes, but not always immediately. 

While you might be surprised by what AI can answer when it has full knowledge of your help center, website, and other custom sources, it also benefits from fine-tuning. Like most forms of digital automation, it’s not a flip-the-switch, set-it-and-forget-it panacea. You get out of it what you put into it, even if the baseline is rapidly improving.

For example, some tools offer ways to give the AI more context around a specific topic or question it has failed to answer. In Help Scout, these are called “improvements.”

Suggested Improvements

Improvements let you fine-tune the AI’s knowledge without having to rewrite sources you’ve fed it or draft new ones. Think of it like custom instructions in ChatGPT but for specific types of support requests. You can add them manually, and Help Scout will also recommend them to you proactively based on the email conversations your team is already having.

While this is a familiar type of feature to AI power users, it’s new to many folks. That’s easy to forget when you’re working in the middle of this stuff day after day. Crossing this gap takes education and a willingness to admit that AI might not always be able to answer a question the first time it’s asked — hence the term improvement. 

When you’re simultaneously promising accuracy and resolution rates, that can feel scary. But we’ve learned that acknowledging that AI is fallible right out of the box isn’t going to send prospects running for the hills. People even find it refreshing.

“It’s never going to be [perfect] out of the box, so just be upfront about that,” one head of support told me. “You have to put the work in to say that was a good response or that was a bad response.”

Still, there are times when the level of nuance — or simply the customer’s preference — will make human support the better choice. That’s why Help Scout is designed to make human escalation easy rather than burying it in menu options. AI isn’t meant to handle every question, and you can feel that from the start. On the front end, it’s presented as just one of multiple options to get help:

Beacon w/AI Answers [in-line blog image]

If you decide to chat with AI Answers and then determine you’d rather connect with a person, it only takes two clicks.

AI Answers - Help In 2 Clicks

That’s on purpose.

We know that AI isn’t right for every job, and we make sure that when your customers interact with Help Scout, there isn’t a doom loop in sight.

No matter how great it is, AI is always going to run into questions it can’t answer. Instead of minimizing these moments or pretending they don’t happen, we need to be upfront with how they’re handled. 

Illustrating AI’s adaptability, coachability, and, yes, even fallibility will go a long way toward building trust with companies in the nuance gap.

Narrowing the perception gap

When I think about the best ways to narrow these gaps, customers like Luke Tristiani, the director of onboarding and success for the SaaS platform eCatholic, come to mind.

When eCatholic started using AI Answers in early 2025, it resolved 36 conversations in the first week. By August, it was resolving nearly 300 per week — which represented 15.46% of their total volume. 

“We’re not looking to replace support teams with AI,” he said, “but if we can let AI answer the simple questions, it frees us up to create ‘wow moments’ for customers.”

Getting that result didn’t happen overnight. eCatholic needed to add instructions for voice and tone. They uploaded custom sources to expand AI Answers’ knowledge of their product and customers. They monitored responses and provided improvements when necessary. As with any tool, how you wield it matters.

Bridging the AI perception gap is not only about telling more stories like the ones above; it’s about elevating the key idea within the stories. At its best, AI in support unlocks your team instead of pushing them to the sidelines.
