Feedback is the nexus between user needs and product development. But gathering and using it effectively doesn’t come easily; it’s one of the trickiest things to get right.
Users are fickle, often misguided about what they want, and will give feedback you'll be tempted to build for but should completely ignore. This is a tough racket.
Like any business challenge, great execution comes down to focusing on the right things while filtering out the noise. Gathering and using feedback for product development is no different.
By getting in the right state of mind when talking to customers, you can avoid missteps and use feedback to its full potential.
Divorce yourself from the product
Researcher Charles Liu wrote a great article on asking better questions in user interviews. Referencing Erika Hall’s book, Just Enough Research, Liu reminds us of the number one rule for user research:
Never ask anyone what they want.
Doing so inevitably leads to missing the root cause of the problem. To get the best feedback, you need reality, not projections.
If you don't divorce yourself from the product, you'll be tempted to lead the interview with your own product ideas or suggestions. This can get scary. You'll start bringing your engineering team a list of your own ideas that you've mistakenly assumed are customer needs. In the end, you'll have no one to blame but yourself if the customer decides they're “not ready to buy.”
Liu suggests applying the 5 Whys approach to uncover the root cause of a customer’s problems. By asking instead of suggesting, you can help customers find new workarounds, give process feedback, and save valuable engineering resources.
Focus on uprooting the core problem your users are facing. Find out what is currently being done to solve this problem and what could be made better.
Remember that not all feedback is created equal
A great product manager will set up a diverse mix of methods to get a holistic view of what users think—they might incorporate exit surveys, user interview requests, and NPS email campaigns, just to name a few.
But context matters. Placing all feedback on equal footing and overlooking the source is bad research; you’ll be met with a deluge of data and not a drop of insight.
A customer’s parting words as she fills out the cancel screen paint a much different picture than what power users say on a 30-minute call. Keeping a universal list of product feedback is a must, but use segmentation to explain the “why” behind a feature request.
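To make segmentation concrete, here is a minimal sketch in Python (the feedback records and channel names are hypothetical) of grouping requests by the channel they came from, so the same ask can be weighed in context:

```python
from collections import defaultdict

# Hypothetical feedback records, each tagged with its source channel
feedback = [
    {"source": "cancel_survey", "request": "bulk export"},
    {"source": "power_user_call", "request": "bulk export"},
    {"source": "nps_email", "request": "dark mode"},
]

# Group requests by channel so identical asks can be weighed differently
by_source = defaultdict(list)
for item in feedback:
    by_source[item["source"]].append(item["request"])

for source, requests in by_source.items():
    print(f"{source}: {requests}")
```

Even this trivial grouping makes the point: “bulk export” from a cancel survey and “bulk export” from a power-user call are the same string but very different signals.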
“Best practices” can vary to the point of being contradictory. Should you maintain neutrality? Always go in with a hypothesis? Have mocks and drafts to show? Or should you only ask questions?
Though each has its merits, the trick is to know when to use each technique most effectively:
If you’re conducting user test interviews to find friction in your onboarding flow, it’s best to sit back, maintain neutrality, and let your customer’s confused mouse path and voice tell the story.
In A/B testing, it does pay to have a hypothesis, but be careful not to let observer bias derail your efforts.
Taking every piece of feedback at face value is a surefire way of dragging your team into what David Bland calls the “Product Death Cycle.”
“This is what I'm calling the Product Death Cycle.” — David J Bland (@davidjbland), May 16, 2014
Warning signs for the product death cycle:
- Inability to convert customers who always want “one more” feature
- Inefficient use of developer time and resources
- Short-sighted product roadmap
By giving context to feedback, you can achieve greater clarity on not only what customers are asking for, but their motivations behind it.
Consider the “minimum viable solution” for small requests
This one is specifically for the support folks. Not every piece of feedback requires a product overhaul or a new feature to be built.
In a piece on the Framed blog called “Stop Providing Answers, Start Providing Solutions,” we asked: “Does this customer’s request really need three more months of development, testing, and implementation? Or is there a ‘minimum viable solution’ that we can provide today?”
A personal example: while cross-shopping CRM solutions for our growing team, I found one I mostly liked, but it had a glaring fault. I needed the ability to populate the CRM from multiple email addresses (one company-personal address and one support-specific address).
Unfortunately, adding additional email addresses would count as additional “desks” for our account, with additional costs. Instead of telling me the feature would be “on the way in X months” with the danger of not delivering, their support team suggested adding the extra account and waiving the extra desk fee, since they knew I was the only one using the account.
Does each piece of feedback need formal development time, or can a problem be solved by a manual process? If the request reoccurs enough times, you’ll have justification to move forward with a new feature.
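As an illustrative sketch (the request strings and the threshold are made up), counting how often a request recurs can tell you when a manual workaround has earned real development time:

```python
from collections import Counter

# Hypothetical log of requests resolved with manual workarounds
requests = [
    "extra email address",
    "dark mode",
    "extra email address",
    "bulk export",
    "extra email address",
]

counts = Counter(requests)
THRESHOLD = 3  # illustrative cutoff for "recurs enough times"
justified = [req for req, n in counts.items() if n >= THRESHOLD]
print(justified)  # -> ['extra email address']
```

The threshold itself is a judgment call; the point is to let recurrence, not a single loud voice, trigger the move from workaround to roadmap.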
Raise the bus factor
Achieving clarity on customer feedback is something we strive for. But how this feedback is delivered to your team is just as important as the gathering process. Don’t lead the race only to stumble at the finish line.
The muddling of customers’ thoughts isn’t just a possibility; it’s inevitable, especially when a misinterpretation is amplified as it passes through multiple people.
To avoid communication entropy, maintain transparency and consistency of communication across your organization. Shared inbox tools, shared documentation, and weekly syncs should all be staples of any company’s toolkit.
Even better, consider putting everyone on the front lines by establishing a culture of support. Having engineers watch a customer fumble through a frustrating onboarding is much different than having them hear about it secondhand in a support report.
The key is to make sure everyone involved with a product is relying on primary sources, not interpretations and assumptions.
Don’t make support and sales speak for the customers. Let the customers speak for themselves.
Know that most feedback needs time to simmer
A common mistake in gathering feedback is treating each campaign as its own time silo.
“The feedback we just received on this new feature we rolled out is awful! No one likes it, and we need to scrap it ASAP.”
Feedback as an ongoing process extends the window of opportunity to get a more comprehensive view of how people feel.
Say a poorly received feature update requires a bit more coaching before users get its full value. With a well-crafted “mini-onboarding” experience, you might get very different feedback in one to two months. Knee-jerk feedback shouldn’t beget knee-jerk reactions.
Whenever you ship a new feature, plan a cadence for soliciting feedback (pre-release, on release, 1 week after, 2 weeks, 4 weeks, 2 months, and so on).
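One way to keep that cadence honest is to compute the touchpoint dates up front. A minimal sketch, assuming a hypothetical ship date:

```python
from datetime import date, timedelta

release = date(2024, 6, 1)  # hypothetical ship date

# Days after release for each feedback touchpoint
cadence = {
    "on release": 0,
    "1 week": 7,
    "2 weeks": 14,
    "4 weeks": 28,
    "2 months": 60,
}

for label, offset in cadence.items():
    print(f"{label}: {release + timedelta(days=offset)}")
```

Dropping these dates straight onto the team calendar makes it much harder to judge a feature on launch-day reactions alone.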
Finally, accept that mistakes will happen
Mistakes will happen in any ambitious product strategy. Rarely does a single, catastrophic failure strike a built-out process; instead, mistakes creep in one at a time.
It’s a given. Unavoidable. But if you consistently strive to keep the right state of mind when collecting feedback, you’ll be well on your way to developing real insights.