Organizing and Making Sense of Customer Feedback
With thoughts from leaders at Reforge, Chime, SpotHero, Quizlet, Slack, Gojek and more
Hi there, it’s Adam. I started this newsletter to provide a no-bullshit, guided approach to solving some of the hardest problems for people and companies. That includes Growth, Product, company building and culture. Subscribe and never miss an issue. If you’re a parent or parenting-curious, check out Startup Dad. Questions you’d like to see me answer? Ask them here.
Q: Everyone says to be more customer-centric, but how do I get better at tracking and making sense of customer feedback?
I’ve written about how to do product discovery in Mastering Product Discovery Part 1 and Part 2 (part 3 is coming soon!) and a few weeks ago I wrote about successes and failures in doing discovery around pricing. But what do you do if you’re sitting on a mountain of customer feedback and need to organize and prioritize it? Who should be involved in collecting that feedback? How do you operationalize it across a growing (or already-massive) organization!??!
I wanted to get a broader perspective from other people who have dealt with this challenge at their companies. So I got input from Dan Wolchonok (VP of New Products at Reforge, former Hubspot), Ely Lerner (Product Advisor and former Head of Product at Chime, Yelp and more), Jenny Wanger (Product Operations Leader and Advisor), Crystal Widjaja (fmr CPO at Kumu, SVP Growth at Gojek, and guest author of this newsletter post), Behzod Sirjani (fmr Meta and Slack, research phenom, advisor, instructor, and co-author of my product discovery newsletters), and Natalie Rothfels (exec coach and former product leader at Quizlet - also contributor to this newsletter from the archives).
In this article I’ll cover:
The most common sources of feedback you should be leveraging
Mechanisms for organizing that feedback
How to surface and prioritize insights from that feedback
By the end of this article you should know where to find and what to do with your customer feedback.
Leverage the Most Common Sources of Feedback
Behzod Sirjani built an entire program around something he calls “Decision First Research.” To paraphrase: know what decision you want to make first, then identify the evidence you need to make that decision, then choose a research approach that gets you that evidence.
This is great if you know what decisions you want to make, but what shapes the initial decision target is observation. And the easiest way to make these observations is via feedback that is easy to collect; in fact, you might already be collecting it.
These are:
Longitudinal (set-and-forget) surveys like CSAT or NPS
Sales calls
Customer support
Direct email outreach
Ratings and reviews
Set-and-forget Surveys
I don’t care if this is customer satisfaction (CSAT), net promoter score (NPS), or a triggered email that asks “How are we doing?” This stuff is so easy to set up and collect that you should do it at the very beginning of your company, once you have customers.
From Mastering Product Discovery, Part 2:
Asking about satisfaction is reasonably straightforward, and as we mentioned above, this often shows up as “How satisfied or dissatisfied are you with [product/service]?” The important part is to make sure that you’re not misconstruing the results of this question. If you’re asking about an overall product experience, you can report that X% of people are “satisfied” with the product, but you cannot say the same for specific features or areas of the product unless you asked for them. Attribute your results to the subject in question — the whole, not the parts.
And NPS:
For a variety of reasons, NPS is often one of the earliest and most consistent self-report metrics that an organization tracks. This means that even if your organization uses NPS incorrectly, the way that product or business changes impact NPS scores can often help you contextualize those changes relative to historical changes since you have longitudinal NPS data. You shouldn’t worry about tracking NPS until you’re already tracking customer satisfaction and feedback, since satisfaction is usually a precursor to evangelism (though not necessarily – you can recommend a product that is the best solution to a problem, even if you aren’t highly satisfied with it).
Crystal Widjaja likes standardized metrics like these for a few reasons:
I recommend the use of standardized metrics to quantify qualitative feedback. This could include scaling responses or implementing consistent metrics like Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), or Customer Effort Score (CES). These metrics provide a quantitative basis for comparison and trend analysis, facilitating rational and objective decision-making.
How do I organize CSAT and NPS feedback?
Your CSAT can be set up with tools like Qualtrics, SurveyMonkey, Typeform or even Google Forms. NPS is supported by many of the same tools plus other tools like Delighted and Retently.
Each of these collects information with a score (a 5-7 point scale for CSAT if you’re doing it right, or a 0-10 scale for NPS) and an open-ended, qualitative response.
What type of magical technology is great at storing numbers and open text? A SPREADSHEET.
I recommend directly connecting your CSAT or NPS tool to your spreadsheet of choice – Airtable, Google Sheets, Excel, Zoho or even LibreOffice if that’s your thing. Most allow for direct integrations, or you can use something like Zapier to connect them (“when a response happens in your CSAT/NPS tool, send that information to Spreadsheet X”).
If you want to create real-time visibility into these scores you can also send them to a Slack (or Slack-like) channel. At Patreon we had an #nps Slack channel where we sent each response to our Delighted NPS survey. I formatted it so the information was nicely organized and flagged the 9s and 10s and the detractor (6 and under) scores.
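If you want to wire this up yourself without Zapier, here’s a minimal sketch of that pattern. It assumes your survey tool can POST each response to a webhook and that you’ve created a Slack incoming webhook; the payload field names are hypothetical and will depend on your tool.

```python
# Minimal sketch: receive survey responses on a webhook, append them to a
# CSV (a stand-in for your spreadsheet), and post each one to Slack.
# Assumes a Slack incoming-webhook URL; the payload field names below are
# hypothetical and depend on your CSAT/NPS tool.
import csv

import requests
from flask import Flask, request

app = Flask(__name__)
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # yours here
CSV_PATH = "nps_responses.csv"

@app.route("/nps-webhook", methods=["POST"])
def nps_webhook():
    payload = request.get_json(force=True)
    score = int(payload.get("score", -1))
    comment = payload.get("comment", "")
    email = payload.get("email", "unknown")

    # Append to the "spreadsheet" so it stays filterable later.
    with open(CSV_PATH, "a", newline="") as f:
        csv.writer(f).writerow([email, score, comment])

    # Flag promoters (9-10) and detractors (0-6) so they stand out in Slack.
    if score >= 9:
        label = ":tada: Promoter"
    elif 0 <= score <= 6:
        label = ":rotating_light: Detractor"
    else:
        label = "Passive"
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"{label} ({score}/10) from {email}: {comment}"},
        timeout=10,
    )
    return "", 204
```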
Jenny Wanger used a similar system at SpotHero:
One of my favorite systems is the constant information stream – at SpotHero we did this by having every "contact us" form that came in via our mobile app get published to a Slack channel. This meant that every day we were seeing dozens of comments about what was and wasn't working for our users. It was a great technique to get a high-level sense of what is and isn't working, and was also one of the first places we would get signal for an odd bug making things harder. It included customer emails, so following up on any comment was as easy as clicking their email and shooting off a note to get more data.
I don’t love piping information into Slack exclusively, because if you have a lot of feedback this becomes a constant feed of information that is difficult to navigate. The real-time nature of it is helpful and grabs people’s attention, but it’s less useful for synthesis and insight gathering. Slack provides a jumping-off point, as Jenny mentions, so I always recommend sending feedback to both a messaging platform like Slack and a filterable system like a spreadsheet. Pivot tables are your friend.
Sales Calls
If you sell something to an end customer which requires some form of face-to-face interaction (virtual or in person) then you have a built-in mechanism for product feedback just waiting to be consumed. Hooray! There are a few ways that you can take advantage of this treasure trove.
The one I always recommend is to join the call yourself.
But Adam, we have all these recording and transcription tools now!
Yes.
And you still recommend that I join the call?
Also yes.
There are a few reasons why. First, you will hear something that the robots don’t. This happens 101% of the time. The robots are good at giving you the “gist” of the conversation and a pretty effective summary, but less good at the specifics. Second, if you’re on the call you can ask follow-ups OR nudge your counterpart in sales to ask a follow-up on your behalf. The robot will not do this (yet). Third, you’re not the salesperson, so you can ask more product-specific questions that aren’t necessarily aimed at closing in that exact conversation – it also shows that you care about feedback as a company.
Adam, don’t salespeople also care about feedback?
Yes, and still they come with the inherent baggage of being a salesperson whose known job is to sell something to the customer. You don’t have that baggage.
But let’s say you can’t join the call because… … … reasons. Read the notes. All of them. Just make sure you understand whether it’s a prospective customer conversation or a renewal conversation as the feedback will vary based on product familiarity and usage.
How do I organize sales feedback?
Your sales conversations are only as useful as your method of organization.
“What was that one customer that had that one piece of feedback that one time?” Exactly. And again, while robots can help us mine for this information more quickly, not everyone has access to the robots.
You’ll want to store customer conversations in some type of CRM, task manager, or spreadsheet (but probably one of the first two). I don’t care if it’s Salesforce, Gong, Highrise, Hubspot, Notion, Coda, Asana, or something free that bombards you with ads until you break or upgrade. Just use something with tags so you can categorize the entries.
If you’re reviewing sales feedback asynchronously then I recommend blocking this time out on your calendar each week so that you’ll actually do it. As Nir Eyal said on my Startup Dad podcast, make time for traction.
There’s another great way you can leverage your sales and account management team to help organize feedback: a submission process. In addition to writing raw notes into their CRM system, ask them to submit something if they receive product-specific feedback on a call that you’re not on.
This is exactly what we did at ResortPass. We had a submission form that was attached to an Airtable sheet. Any time a member of the sales or account management team had an interesting piece of product feedback from a current or prospective customer they submitted it in the form. We made it very lightweight:
What’s the problem?
Which of our company OKRs does this relate to?
Who does this impact (list out customers)?
Anything else you’d like to let us know?
When the form was submitted it posted the information to an Airtable database and a Slack channel. We could acknowledge that the feedback had been received and seen in Slack, categorize and group the feedback together in Airtable on a weekly or monthly basis, ask follow up questions in Slack, and even attach it to in-stream projects to let people know that we were already working on addressing a particular problem. I’ll share how this helped us with insight generation in the next section.
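For the curious, here’s a rough sketch of what that plumbing can look like in code. This isn’t the exact ResortPass setup; the Airtable base, table, field names, and Slack webhook URL are all hypothetical placeholders.

```python
# Sketch of the "submission form -> Airtable + Slack" plumbing described
# above. Base/table IDs, field names, token, and webhook URL are placeholders.
import requests

AIRTABLE_TOKEN = "pat_xxx"  # Airtable personal access token
AIRTABLE_URL = "https://api.airtable.com/v0/appXXXXXXXX/Feedback"
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def submit_feedback(problem: str, okr: str, customers: list[str], notes: str = "") -> None:
    """Record one piece of sales/AM feedback in Airtable and announce it in Slack."""
    fields = {
        "Problem": problem,
        "Related OKR": okr,
        "Impacted Customers": ", ".join(customers),
        "Notes": notes,
    }
    # Create the Airtable record (REST API: POST /v0/{baseId}/{tableName}).
    resp = requests.post(
        AIRTABLE_URL,
        headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}"},
        json={"fields": fields},
        timeout=10,
    )
    resp.raise_for_status()
    record_id = resp.json()["id"]

    # Post to Slack so the team can acknowledge it and ask follow-ups in a thread.
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"New feedback ({record_id}): {problem} "
                      f"[OKR: {okr}] impacting {', '.join(customers)}"},
        timeout=10,
    )
```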
There’s another treasure trove of feedback just waiting for you to tap into: your Customer Support tickets.
Customer Support
Thanks to Dan Wolchonok for pointing me in the direction of his favorite tweet of all time:
“[Customer Support] is also the single best tool for product development: you will have a complete view into the sentiment around your features.
And when you have your agents pasting recurring feedback & bugs into Slack, it will galvanize your engineers to fix those problems.”
I agree.
There’s a reason that a lot of phenomenal product managers come out of the customer support function. There’s also a reason why every product builder in your company should do a rotation through customer support when they join the company, and once or twice more per year. It is a firehose of information from your customers, and it’s mostly about what’s broken with your product or where it’s falling short of their expectations.
The problem is that it can be very noisy and un-targeted feedback.
Ely Lerner solves this problem by making sure there’s a product-minded person in CS:
One related thing I’ve seen to be really useful is hiring, cultivating, or training a ‘product-minded’ person in the CS team to work closely with the team and do at least that first pass of aggregating and triage.
Thank you Ely, that’s a beautiful segue.
How do I organize customer support feedback?
I’ve experimented with a few different ways of organizing this information over time – from a weekly or bi-weekly meeting with the support leaders to review an Asana board of the “top” feedback, to having a login to the Zendesk/HelpDesk/FreshDesk/Helpscout/whatever platform.
My favorite? The Monthly Support Eagle created by Angela Raiford at Patreon.
The “Monthly Eagle” was an organized summarization of the top feedback received by our support team. They highlighted whether it was a new or rolling trend, whether it was trending up or down, and whether it was an identified bug. It was subdivided into our two audiences: creators and patrons. Each section was hyperlinked to a deeper dive, and only the most important feedback made the list. As the title says, it came out monthly. It’s one of the clearest systems I’ve seen for organizing and communicating customer support feedback, and I read it voraciously every month. In my time at Patreon it was the system that led to the fewest problems and disagreements, and it was the most helpful for the product team.
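If you want to approximate something like this yourself, here’s a hedged sketch of a Monthly Eagle-style rollup (not Angela’s actual system). It assumes you can export your support tickets to a CSV with a created-at date, an audience column, and a tag column.

```python
# Rough sketch of a "Monthly Eagle"-style rollup (not Patreon's actual
# system). Assumes a tickets.csv export with created_at, audience
# (e.g. creator/patron), and tag columns.
import pandas as pd

tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])
tickets["month"] = tickets["created_at"].dt.to_period("M")

# Count tickets per audience/tag/month.
counts = (
    tickets.groupby(["audience", "tag", "month"])
    .size()
    .unstack("month", fill_value=0)
    .sort_index()
)

# Compare the latest month to the prior one to label each trend.
this_month, last_month = counts.columns[-1], counts.columns[-2]
delta = counts[this_month] - counts[last_month]
counts["trend"] = delta.map(lambda d: "up" if d > 0 else ("down" if d < 0 else "flat"))
counts["new"] = counts[last_month] == 0  # no tickets on this tag last month

# Keep only the biggest themes for the monthly writeup.
top = counts.sort_values(this_month, ascending=False).head(10)
print(top[[last_month, this_month, "trend", "new"]])
```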
There are two other straightforward and easy methods of collecting customer feedback: direct email outreach and ratings/reviews.
Direct Email Outreach
Apparently I’m on the Dan Wolchonok hype train because this next feedback mechanism was highlighted by him (again). And because he’s the VP of New Products at Reforge he shared a Reforge artifact with me that you can take a look at. Here’s what he had to say:
This is a non-automated way of doing it [author note: the artifact is called ‘automated user feedback’ so we’ve gotta work on that title]. That’s usually my go-to. I’m a big fan of sending mail from my Google account directly. Looks more real, I respond faster, and I assume I get better inbox placement.
From the artifact:
One of the things that we love about this type of process is that the responses show up in the sender’s inbox and we can respond to them as we would any other email. The responses can be filtered to a label so they don't clog up your inbox but you can use whatever workflow you’re already using with your email client. It feels so personal and we hope that it comes across that way with these audiences.
Ratings and Reviews
If you’ve got an app, a B2B product, or an HR department, you’ve got some ratings and reviews somewhere. Time to collect and aggregate them. I recommend using something like Zapier to go out and grab reviews as they’re posted, then dump them into a spreadsheet-like thing and tag them the same way you might tag your Customer Support feedback. There are also other tools like AppBot that shortcut this process and might be worthwhile for you to consider.
You don’t need to capture every single review; only the ones paired with qualitative feedback. And even then, you might only care to collect the bad reviews. At ResortPass, for example, if there was ever a poor review (1-star) of our iPhone app we would grab it, put it into a spreadsheet, and notify people via email. You could also pipe this into a Slack channel in addition to (or instead of) sending the email.
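As a rough sketch, here’s one way to pull recent iOS reviews and keep only the 1-star ones with written feedback. It assumes Apple’s legacy customer-reviews RSS/JSON feed, which may change or disappear; the app ID is a placeholder.

```python
# Sketch: grab recent App Store reviews and keep the 1-star ones that include
# written feedback. Assumes Apple's legacy customer-reviews RSS/JSON feed is
# still available for your app ID; adjust if the feed or its fields change.
import csv

import requests

APP_ID = "123456789"  # hypothetical App Store app ID
FEED = f"https://itunes.apple.com/us/rss/customerreviews/id={APP_ID}/sortBy=mostRecent/json"

entries = requests.get(FEED, timeout=10).json()["feed"].get("entry", [])
bad_reviews = [
    e for e in entries
    # Entries without a rating (e.g. app metadata) are filtered out here.
    if e.get("im:rating", {}).get("label") == "1" and e.get("content", {}).get("label")
]

with open("bad_reviews.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for e in bad_reviews:
        writer.writerow([
            e["author"]["name"]["label"],
            e["title"]["label"],
            e["content"]["label"],
        ])
print(f"Captured {len(bad_reviews)} one-star reviews with comments")
```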
So now you know some of the easiest ways to collect customer feedback on an ongoing basis and the mechanisms for organizing it. But as one of my childhood cartoons used to say: knowing is half the battle.
So what’s the other half?
Surfacing and prioritizing insights!
How to Make Sense of All This Feedback
It’s story time! Here are a few ways that others and I have made sense of the firehose of (now neatly organized) customer feedback. I’m going to step you through a few examples of increasing complexity and end with an AI-based solution (which I believe is the future).
ResortPass
Remember that Airtable of customer feedback collected by sales and account management?
We did a few things with the information as it was submitted and then on an ongoing, scheduled basis.
First, because we piped it into a Slack channel we could engage with it in near real-time. We would bring the account manager or salesperson into the discussion, ask for clarification and more detail, and dig into the underlying problem. A lot of the submissions were solutions a customer had proposed, but we needed to understand why they had proposed them. We would follow up to understand:
Who the customer was (our customers could come from multiple different roles inside a hotel)
What they were trying to accomplish
Whether this was a problem for other customers the account/sales team interacted with
Whether other account/sales team members had similar feedback
…etc.
All of this would be captured in a conversation thread inside of Slack, which could then be appended to the Airtable record so we didn’t lose track of it.
Second, because all of our submissions (and follow-up information) were stored in Airtable we could group, organize, and quantify the scale of the feedback. We could also attach it to specific product priorities and annual objectives for different teams. Since Airtable was also how we organized our quarterly roadmaps, when we went to do planning we had a whole set of quantified, thematic feedback. We knew the scale of each problem and whether there were multiple groupings of customers who shared it. We were able to then understand “How big of an issue is this? Does it impact our target customer or a tangential one? How does it relate to one of our major product priorities?”
The organization and grouping happened every two weeks. By having this information at our fingertips we avoided recency bias in planning, so we were able to focus on the most impactful work rather than the freshest memories or loudest individual voices.
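Here’s a toy illustration of that grouping step, with made-up themes and field names; at ResortPass this lived in Airtable views rather than code.

```python
# Toy illustration: group tagged feedback by theme and count how many
# distinct customers share each problem. The records below are made up.
from collections import defaultdict

feedback = [
    {"theme": "reporting", "customer": "Hotel A", "okr": "Retention"},
    {"theme": "reporting", "customer": "Hotel B", "okr": "Retention"},
    {"theme": "check-in flow", "customer": "Hotel A", "okr": "Guest experience"},
]

by_theme = defaultdict(set)
for item in feedback:
    by_theme[item["theme"]].add(item["customer"])

# Sort themes by how many distinct customers they affect -- a rough proxy
# for "how big of an issue is this?"
for theme, customers in sorted(by_theme.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(customers)} customers ({', '.join(sorted(customers))})")
```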
Slack
As organizations get larger and better at collecting and organizing feedback, the trove of information grows very large.
Behzod Sirjani summarizes this quite well:
A challenge I see very often is that organizations don’t know things, people do. While there’s a lot of work that goes into trying to address this asymmetry at a micro level (person to person), I think the more valuable work is often done at a macro level (through systems).
Here’s how they were able to derive insights and share knowledge at Slack:
The way that we addressed this was by building a wiki that summarized “What We Know About X” for a range of topics. These were meant to be a source of truth that articulated what the organization knew or believed to be true at a given moment in time and help anyone in the company get up to speed on a topic.
They were structured like a Q&A, with headers like “What do people think about Slack’s brand?” and responses that included citations linking out to research studies, dashboards, customer tickets, and so on. They often had a few levels of detail, so the page that summarized “What We Know About Slack’s Brand” would have questions like:
What do people think about Slack’s brand?
How does that change by customer segment?
How does that change by geography?
How does that change by tenure?
How does Slack’s brand compare to competitors?
Because they were written as text-based articles that linked out to other information, they functioned like a map of the world and were things that anyone who had the most up to date information could contribute to (rather than being a siloed source or tool).
Now you might be wondering how a company might get this started or operationalize it.
Behzod again:
We started this effort by asking the leaders of each discipline what they wanted everyone on their team to know about our customers, products, and business in their first 90 days on the job. We clustered these different topics into 10 categories and a number of subcategories, then wrote plain language questions that corresponded to each of them. From there we enlisted the help of different teams like data science, customer support, customer success, and so on, hoping to create a more holistic perspective (and drive traffic to develop habits).
And how they maintained this over time and measured success:
As we conducted more research, experiments, or customer conversations and developed a new perspective, we would update these pages. We also would review them at the end of the quarter to make sure they were up to date. We knew we were successful when people would stop DMing us with questions like “what was our recent NPS score?” and instead were sharing links to articles on the wiki.
My favorite part of this is translating the insight and information into text-based articles. I find a lot of “insights shareouts” are done in slides prepared for meetings. That’s “OK” if you’re in the meeting, but not very easy to navigate if you missed it (which the majority of the company will). It also requires clicking into 47 different slide decks to mine for information, isn’t easily searchable, and is optimized more for visual presentation than for proper dissemination of information.
Gojek
Your insights are only as good as the “cleanliness” of your data. In order to derive insights from customer feedback, Crystal Widjaja does the following. I refer to this as “science-ing the sh*t out of the feedback.”
I focus on rational decision-making processes, bias minimization, and systematic analysis. I start with clearly defining what I want to learn from customer feedback—is it improving product features, customer service, or user experience?
Okay, but what does that mean?
I use Structured Aggregation Methods—feedback can be categorized by theme, customer segment, product feature, or any other relevant dimension. Categorization is an art more than a science so I can't say I have one that works for everyone. For categorization, back in the day I used Latent Dirichlet Allocation (LDA) as a topic modeling technique on user rating feedback responses but I don't know if this is now archaic given the improvements in LLMs.
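Since Crystal mentions LDA, here’s a minimal example of what topic modeling feedback comments looks like with scikit-learn; the comments and topic count are toy values.

```python
# Minimal LDA topic-modeling example in the spirit of Crystal's approach,
# using scikit-learn on a handful of toy feedback comments.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

comments = [
    "app crashes when I open the map",
    "driver was late and the map froze",
    "love the new payment options",
    "payment failed twice with my card",
    "map takes forever to load",
]

# Bag-of-words counts are the standard input for LDA.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic so a human can name the themes.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_words = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top_words)}")
```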
Then you have to eliminate bias in your results. Crystal uses a few different mechanisms:
Random Sampling: Randomly select feedback entries for detailed analysis to avoid selection bias.
Blind Analysis: Ensure that analysts reviewing feedback are not aware of the specifics of the customer (like their spending habits or loyalty status) to avoid bias in processing.
Determine Signal vs. Noise: This is a framework to systematically organize feedback, making it easier to identify patterns and trends that are not immediately obvious. For example: if a navigation issue is reported more than a threshold number of times in a week, escalate it to immediate app development review; if less, accumulate the data for monthly review (see the sketch after this list).
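Here’s a small sketch of that escalation rule; the themes and thresholds are made up.

```python
# Sketch of the signal-vs-noise escalation rule: escalate a theme when it
# crosses a weekly threshold, otherwise let it accumulate for the monthly
# review. The thresholds here are hypothetical.
WEEKLY_THRESHOLDS = {"navigation": 25, "payments": 10}
DEFAULT_THRESHOLD = 50

def route_theme(theme: str, weekly_count: int) -> str:
    threshold = WEEKLY_THRESHOLDS.get(theme, DEFAULT_THRESHOLD)
    if weekly_count >= threshold:
        return "escalate: flag for immediate app-development review"
    return "accumulate: hold for the monthly review"

print(route_theme("navigation", 31))  # escalate
print(route_theme("payments", 4))     # accumulate
```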
A lot of Crystal’s advice, while advanced, is slowly becoming obsolete thanks to the rise of LLMs (as she mentioned above).
The Rise of AI in Surfacing Insights
One of the benefits of the new wave of AI and LLMs is their ability to synthesize large swaths of information and identify patterns. I think we’re in for a very transformative next few years when it comes to feedback organization and analysis.
Natalie Rothfels thinks this is the future and recommended a tool called Enterpret. I’m not an investor in the company, they don’t sponsor this newsletter, and I have no relation to it other than meeting the founders several years ago. Here’s how it works:
It’s built on an LLM, so it handles categorization and company-level taxonomy automatically. When you integrate your product, it creates its own taxonomy of all the problems and interrelated issues, cross-linking them to one another – essentially doing the manual job we were doing at ResortPass in Airtable.
These models can also understand the content of the feedback and generate human-intelligible reasons for it. Natalie’s example:
Let’s say you get a lot of feedback about your sign-in flow. It’s one thing to be able to say ‘Oh shit, we’re getting a lot of tickets about sign-in!’ and it’s completely different to see ‘Oh, half of them are complaints, 10% of them are people saying it’s amazing, 20% of them are various bugs in a particular part of the flow, etc.’ It generates actual reasons for the feedback and not just the raw feedback itself.
For deriving insights you can now have back-and-forth conversations with these models. You could start by saying, “Give me feedback about the sign-in flow.” Then engage further and ask, “How is this feedback changing over the last two weeks since we deployed X change?” Then take it further by asking for a list of the users who submitted feedback so you can close the loop after the bug is fixed.
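To make the general idea concrete (this is not how Enterpret works under the hood), here’s a toy sketch of LLM-based categorization. It assumes the OpenAI Python client with an API key in your environment; the model name, labels, and prompt are made up.

```python
# Toy sketch of LLM-based feedback categorization -- the general idea, not
# Enterpret's implementation. Assumes the OpenAI Python client and an
# OPENAI_API_KEY environment variable; the labels and prompt are made up.
from openai import OpenAI

client = OpenAI()

def categorize(feedback: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Classify the feedback as one of: complaint, praise, bug, "
                "feature request. Then give a one-sentence reason."
            )},
            {"role": "user", "content": feedback},
        ],
    )
    return response.choices[0].message.content

print(categorize("I can't sign in since the last update, it just spins."))
```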
Eventually, Large Action Models will advance to a point where you won’t even have to ask these questions. They’ll be asked, prioritized, fixed, and followed-up on automatically.
But until then, you might just want to follow the simpler advice in this newsletter.
Wrap Up
Everyone says you should be more customer-centric, but not everyone has the ability to consistently talk to customers and run advanced discovery processes. What you do have the ability to do is look for opportunities to collect and organize feedback from the customer touchpoints you already have: CSAT scores, sales calls, customer support, sending a simple email, and your ratings and reviews.
It can be overwhelming once you realize that you have a whole bunch of customer feedback just waiting there for you. Start by organizing it in a simple spreadsheet or database, pipe it into a Slack channel, and eventually leverage AI to make sense of it all and derive insights.
Thanks for reading! If you made it this far I’d love your feedback on the below (quick) poll: