CCMC's Blog

Customer Rage – Most companies don’t really understand it because their customer experience metrics are insincere

    Posted as a guest blog for the International Customer Service Association (ICSA), Scott M. Broetzmann (President & CEO of CCMC) shares his views on how too many companies use insincere customer experience metrics and thereby perpetuate mediocre service.

    • What percentage of the customers contacting your customer care center would rate their experience seeking help as “worthwhile?”
    • What are the specific types of remedies or outcomes that your customers seek most when reaching out to your company with a problem?
    • What percentage of customers with a problem would say that they got “nothing” in return for getting in touch with your corporate contact center?

    In more than one quarter century of consulting with companies worldwide, I’ve yet to see any of these sorts of data appear on a corporate dashboard of contact center performance. There’s plenty of call volume, occupancy rate, ASA, mystery shopper, IVR survey, call quality monitoring and other assorted myopic, internal and boring data. But tough-love, customer-point-of-view data is a rarity.

    Yet, data from these telling questions – focused on the customer’s return on investment (ROI) – are essential to understanding a true level of contact center performance and are indispensable when it comes to achieving a better contact center ROI.

    In fact, the only place you’re likely to find such insights is in the National Customer Rage surveys, a 13-year body of research that I’ve co-authored with my colleagues at CCMC and Arizona State University’s Center for Services Leadership.

    Consider these disquieting facts from the 2015 National Customer Rage survey:

    • Customers are experiencing an ever-increasing level of problems with products and services; 54% of households reported a problem in 2015 – that’s up from 32% in 1976.
    • Two-thirds (66%) of customers reported experiencing customer rage (being extremely or very upset) in association with their most serious problem.
    • On the whole (putting aside the handful of companies that are spectacular), corporate responsiveness to customer complaints is horrid.
    • Nearly one-half (49%) of customers report that the time spent complaining to a company (about their most serious problem) was not worthwhile.
    • About two-thirds of complainants report that they got “nothing” when they complained to a company (about their most serious problem).
    • Only 17% of complainants indicate that they were satisfied with the action taken by a company that they complained to (about their most serious problem). In 1976, fully 23% of complainants were satisfied!

    The reality today is more problems and less satisfaction, which seems to be out of step with the marketplace hyperbole about customer satisfaction. Who isn’t a J.D. Power winner in some category these days? Who doesn’t advertise and promote the supercalifragilisticexpialidocious level of service that they provide? Which contact center doesn’t achieve a 95%+ rating on its call quality monitoring results?

    How are we to reconcile these competing truths?

    My own view is that most companies are not knowingly providing mediocre service – virtually no company that is a going concern would be willing to accept the levels of non-performance uncovered in the National Customer Rage survey.

    Rather, too many companies today don’t truly know how unexceptional their service is because they are all too often lulled into a sense of self-satisfaction by their use of tepid, ineffectual metrics. Companies are awash in data, but are frequently data ignorant; they get a low ROI for the significant time and money that they spend collecting data because those data aren’t then used to operationalize meaningful and customer-driven change in the way that the company does business.

    Take two examples – one pertaining to the metrics used in contact centers and the other relevant to measuring the broader customer experience.

    Contact centers may possess the richest source of customer data in most companies. Yet I would argue that the lion’s share of contact centers lack a reliable, unvarnished point of view about the customer experiences that they are creating. Far too many contact centers rely on weak, statistically unreliable and non-actionable voice of the customer (VOC) surrogates, for example:

    • IVR surveys with 3 to 5 questions that address only the representative’s demeanor and yield low response rates
    • Text analytics tools that are useful for data mining purposes but offer no more than anecdotal data about actual outcomes
    • Call quality monitoring scores that say more about compliance with silly standards – think “said customer’s name twice” – than about a customer’s ROI for the contact experience

    A good starting place for reinventing contact center VOC metrics is to get serious about asking the right questions – to include more powerful “key drivers.” Our research with hundreds of companies over the past four decades has shown that the number one key driver of contactor satisfaction is “getting what you wanted.” Most companies don’t measure this attribute and are dismissive of it because they assume that they will get low scores (i.e., “we can’t give customers what they want because all they want is free product and money.”). The fact of the matter is that satisfaction with “getting what you wanted” accounts for 50% or more of the overall satisfaction with the contact center interaction. Not measuring it is reckless. And assuming a poor score is also flawed reasoning (as the National Customer Rage survey shows that the things customers most want are simple, non-monetary remedies).
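    The “accounts for 50% or more of overall satisfaction” claim is a variance-explained statement from key-driver analysis. As an illustrative sketch only – the survey data below are invented, and a real study would regress overall satisfaction on many attributes at once – the simplest version of the calculation is the squared correlation between one attribute and the overall rating:

    ```python
    # Illustrative key-driver calculation: share of variance in overall
    # satisfaction explained by a single attribute ("got what you wanted").
    # The ratings below are invented for demonstration purposes.

    def r_squared(x, y):
        """Squared Pearson correlation: fraction of variance in y explained by x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov ** 2 / (vx * vy)

    # Hypothetical 1-5 ratings from ten contactors
    got_what_wanted = [5, 4, 5, 2, 1, 3, 4, 1, 5, 2]
    overall_sat     = [5, 4, 4, 2, 1, 3, 5, 2, 5, 1]
    print(f"{r_squared(got_what_wanted, overall_sat):.2f}")  # prints 0.84
    ```

    In this made-up sample, the single attribute explains roughly 84% of the variance in overall satisfaction; the point is only that the question is cheap to ask and the driver share is cheap to compute once it is asked.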

    What about the broader customer experience?

    Take, for instance, Net Promoter Score (NPS). In our white paper, “How Net Promoter Score (NPS) Is Like Global Warming,” we argue that NPS (and similar omnibus metrics) often induce more of a state of complacency than a sense of urgency. While such metrics are uncomplicated to calculate, easy to understand and simple enough to “rally the troops,” we have observed that many executives are less exuberant about acting on NPS (likely because they don’t know what drives NPS!).
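    Part of NPS’s appeal – and part of the complacency problem – is that it reduces to trivial arithmetic over 0–10 “likelihood to recommend” ratings. The scoring bands below are the standard NPS definition; the sample responses are invented:

    ```python
    def nps(ratings):
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    # Invented example: ten survey responses on the 0-10 scale
    scores = [10, 9, 9, 8, 8, 7, 6, 5, 3, 10]
    print(nps(scores))  # 4 promoters, 3 detractors -> 10.0
    ```

    The number is easy to produce and to report upward; what it does not tell you is *why* a detractor is a detractor, which is exactly the diagnostic gap the argument above is pointing at.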

    Customer rage – a real and consequential customer sentiment – has a profound impact on the overall customer experience and offers a wider portal into the sub-optimized VOC measurement approaches that many companies have adopted. A key to understanding customer rage – and the customer experience more broadly – is embracing a more sincere set of customer experience metrics.
