Artificial Integrity: Trust, AI, and Customer Service

Every customer service person knows that the quickest way to guarantee an angry customer is to mess with their money. Billing problems, pricing changes, refused refunds — anything that comes with an easily measured financial impact.

That is what made the introduction of automated teller machines (ATMs) in the 1970s such a challenge. People trusted that a bank teller would give them the correct money, and they knew exactly who to shout at if they did not. Not everyone was willing to extend that same trust to a machine embedded in a wall. 

To trust means to rely on the character, ability, or truth of something or someone, and it implies a willingness to accept possible (though unlikely) risks. 

Curt Miller, writing in The New York Times in 1973 about the arrival of ATMs, claimed that "the younger and highly mobile white collar population" were open to using the new machines, but that "Older people, however, seem to have a basic mistrust of machines that banks are finding difficult to overcome."

Those customers were not willing to risk the machine making a mistake with their money. The speed and convenience of a machine were not enough to overcome the distrust, at least at first. Eventually, after enough positive experiences (and perhaps through a lack of alternatives), most people accepted ATMs as a normal part of banking and trusted that they would reliably do the job they were intended for.

Every interaction between a customer and a business requires some trust, but the level needed can vary widely. Consider a customer asking an ecommerce store about shipping costs before a purchase. 

The answer might be incorrect, but the customer could verify it before paying for an order, either in a published FAQ or as part of the transaction itself. When both the risk and the cost of verification are low, there is only a small gap for trust to bridge.

This sort of low-trust customer service interaction is also relatively low value for all parties. The customer service agent is acting as a sort of human search engine, adding little value for the customer beyond knowing the right page to search for and making no use of the agent's deeper skills and knowledge. For the customer, the interaction merely provides information they should not have had to search for in the first place.

Deploying AI as a more conversational search engine, trained on your own knowledge base, could save everyone some time and build customer trust in the help documents and in the company in general.

AI-driven interaction of that type is likely to become normal, especially for high-volume situations with a lot of repetitive questions to handle (and, often, a pretty mediocre base level of service quality).

As the questions become more complex, the need for trust in the company and the customer service team rises quickly. In "The Role of Trust in Consumer Relationships," the authors found that "Overall trust was most influenced by the customers' trust in their interaction with front line employees, self-service technologies and marketing communications, followed by the service providers' management policies & practices and thirdly customers' previous experience."

Every interaction with a customer is both an opportunity to grow trust and a chance to destroy it. So we should be very thoughtful about introducing artificial intelligence into customer service scenarios beyond the most limited forms.

You would never accept your customer service team members gaslighting your customers or confidently giving out made-up information. Yet that is where the current crop of AI chatbots is today.

Those instances, at least, are out in the open. The greater risk to customer trust will come from the use of AI tools behind the scenes. For example, this could occur when mortgage applications are turned down or people are sent back to prison based on biased training data and an algorithmic decision that nobody can fully explain. 

Artificial intelligence as a field is developing very quickly, and there will be pressure on every business to adopt AI tools. They will save time and money, and they will inevitably take over some roles currently filled by people.

As we decide how best to use AI in customer service, we must look to protect and honor the trust our customers have placed in us. Humans are imperfect, yes. We are biased, and we make mistakes. But part of building trust is acknowledging that mistakes have happened, explaining them, and apologizing effectively. That requires someone in the business to understand how an incorrect decision was made.

It takes a long time to develop trust but only moments to destroy it. Customer-centric businesses that want to thrive in an AI age will need to develop their own principles of AI usage and be prepared to roll out only the systems that lead to consistent, high-quality, and trustworthy service experiences.
