Many companies feel like they don’t have enough data to support their investment in customer experience. In fact, Gartner predicts that by 2020, more than 40% of all data analytics projects will relate to an aspect of customer experience. Whether that means deciding on new functionality for your product and support offering or doing a deep dive on customer feedback insights, those projects will take up almost half of a company’s data analysis resources in the near future.
If most companies feel like they don’t have enough data, what are leading CX professionals doing to combat this? We asked CX experts from leading SaaS teams to provide insights on how they manage CX data analysis, and how they use the insights once they have them!
Checking metrics on a weekly basis is a great way to better understand your customers’ immediate needs and wants. There are some important pulse-check metrics to review more frequently—ones that are good indicators that something is awry or needs to shift. Here’s what each of our interviewees kept track of:
According to Nick Sayers, VP of Customer Success at Biteable, measuring CSAT is the bare minimum for him to see whether his company is pleasing customers. “It can also tell us about trends when it comes to bugs and if we are acquiring the wrong customers.” In addition to CSAT, Biteable also collects two different types of data:
- Activation metrics: Biteable tracks how many people have been "onboarded" by taking a set of actions in the app that they’ve found correlate with an account conversion. They then use that metric to drive all of the customer success messaging and webinars that they do.
- Engagement metrics: For Biteable, that’s tracking any paying user who has logged at least two sessions and rendered a video. Nick uses this to see if company messaging is bringing people back to the app and continually engaging them, no matter where they are in the lifecycle.
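Biteable’s engagement definition above boils down to a simple boolean filter. Here’s a minimal sketch of that idea; the field names are purely illustrative, not Biteable’s actual schema:

```python
# Hypothetical sketch of a Biteable-style engagement check: a paying user
# with at least two sessions and at least one rendered video counts as
# "engaged". Field names are made up for illustration.

def is_engaged(user: dict) -> bool:
    return (
        user.get("is_paying", False)
        and user.get("session_count", 0) >= 2
        and user.get("videos_rendered", 0) >= 1
    )

users = [
    {"id": 1, "is_paying": True, "session_count": 3, "videos_rendered": 1},
    {"id": 2, "is_paying": True, "session_count": 1, "videos_rendered": 0},
    {"id": 3, "is_paying": False, "session_count": 5, "videos_rendered": 2},
]

engaged = [u["id"] for u in users if is_engaged(u)]
print(engaged)  # [1]
```

In practice this would run as a query against product analytics rather than a Python loop, but the definition itself stays this simple.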
At Biteable, they’ve added a quarterly focus on churn. “All of our work, whether it be onboarding or customer support, is to stop customers from churning. We track it monthly and do deep dives quarterly.” While 50% of customers naturally churn every 5 years, only 1 out of 26 unhappy customers complain. Keeping a finger on the pulse of how high churn is—and what is causing it—is one of the leading indicators of your customer experience team’s success.
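To make that 5-year stat comparable with Biteable’s monthly tracking, you can convert it to an equivalent monthly rate. A quick worked calculation, assuming a constant rate:

```python
# If half of customers churn over 5 years (60 months), the equivalent
# constant monthly churn rate m satisfies (1 - m) ** 60 == 0.5.

def monthly_churn_from_horizon(total_churn: float, months: int) -> float:
    """Constant monthly churn rate equivalent to `total_churn` over `months`."""
    return 1 - (1 - total_churn) ** (1 / months)

m = monthly_churn_from_horizon(0.5, 60)
print(f"{m:.2%}")  # roughly 1.15% per month
```

In other words, a team seeing monthly churn well above ~1% is losing customers faster than that "natural" baseline.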
Did you know that 53% of customers think that 3 minutes is a reasonable amount of time to wait for an answer via phone? Richard Myers, Vice President of Customer Support & Success at Linode, thinks that this is a key indicator of loyalty and CX health for their customers. At Linode they measure the following metrics:
- Time to first response, in minutes
- Average number of responses per support ticket
- Average closure time per support ticket
- Customer happiness month over month
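The first three metrics on Linode’s list fall straight out of ticket timestamps. A small sketch of how you might compute them from raw ticket records (the records and field names here are invented for illustration):

```python
# Computing Linode-style support metrics from hypothetical ticket records.
from datetime import datetime
from statistics import mean

tickets = [
    {
        "opened": datetime(2024, 1, 1, 9, 0),
        "first_response": datetime(2024, 1, 1, 9, 12),
        "closed": datetime(2024, 1, 1, 15, 0),
        "responses": 4,
    },
    {
        "opened": datetime(2024, 1, 2, 10, 0),
        "first_response": datetime(2024, 1, 2, 10, 4),
        "closed": datetime(2024, 1, 2, 11, 0),
        "responses": 2,
    },
]

# Time to first response, in minutes
first_response_minutes = mean(
    (t["first_response"] - t["opened"]).total_seconds() / 60 for t in tickets
)
# Average number of responses per support ticket
avg_responses = mean(t["responses"] for t in tickets)
# Average closure time per support ticket, in hours
avg_closure_hours = mean(
    (t["closed"] - t["opened"]).total_seconds() / 3600 for t in tickets
)

print(first_response_minutes)  # 8.0
print(avg_responses)           # 3
print(avg_closure_hours)       # 3.5
```

Customer happiness month over month is survey-driven rather than timestamp-driven, so it doesn’t reduce to arithmetic like this.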
At Hubspot, Michael Redbord, General Manager for ServiceHub, noted that they’ve been able to gain a lot of insight by looking at smaller-scale metrics across a large span of time: for example, daily NPS scores broken down by customer type. Additionally, Hubspot offers omnichannel support, so knowing where most of their volume comes from helps with staffing and with planning future innovations. They track:
- Channel percentage mix
- Case intake sorted by customer type (Are enterprise customers sending in more tickets, or free trial users?)
- SLA achievement by communication channel
- NPS by customer type
- Incident rate by customer type
- Bug rate by product area
- Support-qualified leads
Chris Taylor, a support manager at Tyk.io, is particularly keen to know how quickly customer issues are resolved. It doesn’t matter whether it is in sales, support, or with a bug report to engineering—the quicker the resolution, the better. That’s why they focus on only a few metrics, including:
- Customer satisfaction
- Issue severity
How to make an impact
It’s all well and good to know what data successful CX leaders are looking at, but how are they using it? We asked our interviewees about the most impactful ways that they had used data in the past, and got some really excellent responses.
1. Connect & correlate data cross-functionally
Richard at Linode said that the most impactful thing his team had done was to measure the efficacy of customer experience initiatives against the goals of other teams. “For example, by measuring Customer Happiness per ticket category, we can identify areas for our Training Team to focus on. In our case, we either needed to change our new-hire training or run a continuing education class on the category.”
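Richard’s per-category happiness idea can be sketched in a few lines: group satisfaction ratings by ticket category and rank categories by their average score to see where training should focus. The categories and scores below are invented for illustration:

```python
# Sketch of Linode-style "happiness per ticket category": group ratings
# by category, then surface the lowest-scoring category as the training
# team's focus area. Data is hypothetical.
from collections import defaultdict
from statistics import mean

ratings = [
    ("networking", 4.8), ("billing", 4.6),
    ("networking", 4.7), ("kernel", 3.2),
    ("kernel", 3.5), ("billing", 4.9),
]

by_category = defaultdict(list)
for category, score in ratings:
    by_category[category].append(score)

for category, scores in sorted(by_category.items(), key=lambda kv: mean(kv[1])):
    print(category, round(mean(scores), 2))
# "kernel" scores lowest here, so it would be the training team's focus
```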
Another good example could be measuring the number of tickets opened in a particular ticket category, then using that to identify product, usability, or documentation issues. It's important to gather a variety of different types of data—both quantitative counts and qualitative sentiment. While the data itself is telling, the analysis and correlation of different metrics is where the most impact can be made.
2. Make a blueprint
Things are always easier when you have a map, and Nick from Biteable has been using his data to create one that steers his team away from churn.
According to him, “the most impactful thing I've done is analyzed where people were churning in the user lifecycle and what their behavior was leading up to it. Then, we use that to build a good/ideal customer profile. We now use this as the blueprint for onboarding and to inform our ‘activation metric’ which bubbles up to all of our other data.”
According to Michael at Hubspot, the most impactful data exercise they've done is getting their staffing model right; specifically the support staffing model that they have used for the past several years. “It's a ‘supply and demand model’ for support. By using it, we can project customer demand by measuring the number of cases per day against agent productivity to understand when and how we need to get our hiring done. Getting that model right has been the single most transformative data exercise we've ever done.”
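The core arithmetic of a supply-and-demand staffing model like the one Michael describes is simple: projected daily case volume divided by per-agent productivity gives the head count needed. A hedged sketch—the utilization factor and all numbers are assumptions, not Hubspot’s actual model:

```python
# Sketch of "supply and demand" support staffing: how many agents are
# needed to cover projected case volume? The utilization padding accounts
# for breaks, training, and uneven case arrival; values are illustrative.
import math

def agents_needed(cases_per_day: float,
                  cases_per_agent_per_day: float,
                  utilization: float = 0.85) -> int:
    """Head count needed to handle projected demand at a given productivity."""
    return math.ceil(cases_per_day / (cases_per_agent_per_day * utilization))

# e.g. 600 projected cases/day, 25 cases/agent/day
print(agents_needed(600, 25))  # 29
```

A real model would layer in hiring lead time, ramp-up, channel mix, and seasonality, but this ratio is the piece you’d tweak month after month.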
The important thing to remember, though, is that the process never ends. You’ll likely tweak the same model every month for years, much like Hubspot.
The worst metrics
As there are winners, so there are losers. Metrics and data, while useful, do not always tell the whole story. They can be easily skewed, manipulated, massaged, and sometimes even willfully misinterpreted by their readers. Here’s what our interviewees unanimously nominated as the least helpful.
Net promoter score (when used incorrectly)
This may (or may not) come as a surprise to you. Net Promoter Score, NPS for short, is one of the most well-known customer experience metrics, and it can be extremely effective when used properly. Unfortunately, given how long it has been around, many people assume that it can be used in a vacuum. It’s actually best used alongside other metrics, and as a way to understand customer feedback rather than just report on the raw score. Richard from Linode, a past NPS facilitator and user, weighs in: “numerous professional studies have been conducted to investigate the correlation between NPS and customer loyalty, customer satisfaction, customer happiness, and company growth. While many have found the metric unreliable in measuring customer loyalty, some have even found it harmful.”
"A big challenge with the methodology is that organizations tend to focus on the metric as the objective instead of gaining the insight to learn and act on to improve the customer experience."
- Steve Bennett, ex-CEO, Symantec
Nick from Biteable agrees, and finds that the mathematics behind the metric makes little sense in the context of actual customer experience: “the scores are only transparent to those who understand what NPS is. If someone were told that giving a score of 6 would make them a "detractor,” they might answer it differently.” To some, NPS is abstract compared to something like customer effort score or customer satisfaction. Those two metrics measure tangible, real things. It may not be easy to know whether you’d recommend a product, but you definitely know if you had to put forth effort to get an answer, or had an excellent or terrible support experience.
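For readers unfamiliar with the arithmetic Nick is pointing at: in standard NPS, scores of 9–10 count as promoters, 7–8 as passives, and 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Note that a passive counts for nothing at all:

```python
# Standard NPS arithmetic: %promoters (9-10) minus %detractors (0-6).
# Passives (7-8) drop out of the score entirely.

def nps(scores: list[int]) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Three promoters, one passive, one detractor out of five respondents:
print(nps([10, 9, 9, 8, 6]))  # 40.0
```

This is exactly the opacity Nick describes: a respondent handing out a "good" 8 has no idea their answer moves the score no more than not answering at all.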
No bad metrics, just bad strategies
Both of the leaders from Hubspot and Tyk.io agree that the issue isn’t the metric you use, but how you use it. The trouble starts when you over-focus on one metric to the point where it causes issues. Focus is critical, but focus to the point of blindness is a curse. Take response time, for instance: customers value a fast response, so it’s arguably a “good metric.” But it can go bad when used as the only metric. Leadership that over-focuses on one number can trigger a hyper-optimization cycle that produces negative patterns at all levels. If you are constantly trying to fix or adjust for one specific metric, you move away from a holistic view of experience and success.
Just because you can measure something doesn’t mean that you have to, or should. Oftentimes, hyper-focusing on a single metric draws your attention away from other, more meaningful signals.
Metrics are valuable to everyone. They let us see where we are excelling and where we could do better. They draw team focus to an important aspect of the business and clue us in to things that we might not otherwise see. Keep your metrics focused on the same things at both the weekly and quarterly level, but not so hyper-focused that you lose sight of the actual customer experience. Remember: there’s no such thing as bad metrics, only bad strategies built around them.