CCMC's Blog

Target Setting – Folding Science Into Art

    “What next?” is a question that is hopefully asked as the end of any study nears. As a manager or executive, you ideally have good data and great plans to move the needle. But how high must the needle jump for the effort to count as a success? Setting targets helps clarify where that goal line lies.

    Truth be told, many corporate customer satisfaction and loyalty programs achieve a less-than-optimal ROI as a result of faulty target setting.

    The process of setting credible and realistic customer satisfaction and loyalty targets is part science and part “art.” Companies achieving the best ROI for measuring the customer experience have developed a genuine appreciation for the need to balance rigor and managerial considerations when setting such performance targets.

    So while target setting may be a process that resists definition, CCMC has established a set of best practices – essentially posing and answering four questions – to help standardize this process:

    • Will the target be based on a single item (e.g., recommend intention) or be a composite of multiple metrics (e.g., overall satisfaction and recommend intention)?
    • How will the target be numerically expressed (e.g., a frequency or an index score)?
    • Will the target be stated as a single or tiered objective (e.g., one target or a graduated set of targets ranging from a minimum to a stretch target)?
    • What statistical and managerial considerations will be used to calibrate the targets (e.g., statistical significance, past performance, etc.)?

    Will the target be based on a single item (e.g., recommend intention) or be a composite of multiple metrics (e.g., overall satisfaction and recommend intention)?

    Perhaps contrary to conventional wisdom (i.e., NetPromoter), there’s no single best measure for target setting. Our experience suggests three guiding principles should be used for selecting the “right” metrics for your organization.

    First, whenever possible, simpler is better. Using fewer metrics intensifies focus and simplifies the process of communicating results across the organization. Moreover, the fewer the metrics, the less the likelihood of anomalies where one or more measures improve, while others decline (resulting in an overall target score that doesn’t change from year to year). When multiple metrics are used, it’s best to weight them equally to mitigate the complexity of more intricate weighting schemes.

    Second, and perhaps more importantly, use metrics “with teeth.” The target metric(s) should be those that influence the strategic relationship with the customer. Typical measures of consequence include overall satisfaction, overall loyalty (e.g., recommend intention), or selected key drivers of overall satisfaction and/or loyalty. Adhering to this guideline ensures that the targets are aligned with the organization’s focus on bottom-line improvements.

    Third, give consideration to metrics aligned with specific areas of improvement that the organization is intentionally trying to address (which would increase the likelihood of using related diagnostic metrics) as well as those areas of the customer experience that are outside the direct influence of the organization (which would decrease the likelihood of using related diagnostic metrics).

    For example, when one of our clients in the health care industry shifted its goals from a composite score of seven metrics to a single attribute aligned with the organization’s top strategic priority, focusing on that one target greatly strengthened the cross-organizational push toward success. Teams were held accountable for improvement, knew exactly what to focus on, and hit their target.

    How will the target be numerically expressed (e.g., a frequency or an index score)?

    My wife and I needed a quick bite to eat last week and we debated between grabbing some burgers or ordering pizza. Both would have filled us up – but we went with what worked best for us that night (we swung by Five Guys). The same can be said about the use of indices or frequencies for target setting. Both can be equally effective; the key issue is fitting the selection to the unique needs of the organization.

    An index score is an arithmetic mean. Using values assigned to each response category (e.g., Very Satisfied=100, Somewhat Satisfied=75, etc.), the results are summed and averaged. The key strength of an index score is its sensitivity to movement across all response categories. At the same time, index scores can mask the exponential, positive impact of moving higher up the “ladder” (e.g., the loyalty lift is often greater when moving from “Somewhat Satisfied” to “Very Satisfied” than when moving from “Neither Satisfied Nor Dissatisfied” to “Somewhat Satisfied”).
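    The index arithmetic can be sketched in a few lines. The value assignments below extend the two named above (100 and 75) to 50, 25, and 0 for the remaining categories, and the sample of 100 responses is invented for illustration:

```python
# Sketch of an index-score calculation on a 5-point satisfaction scale.
# Category values below 75 are assumed for illustration.
SCALE = {
    "Very Satisfied": 100,
    "Somewhat Satisfied": 75,
    "Neither Satisfied Nor Dissatisfied": 50,
    "Somewhat Dissatisfied": 25,
    "Very Dissatisfied": 0,
}

def index_score(responses):
    """Arithmetic mean of the values assigned to each response, to one decimal place."""
    values = [SCALE[r] for r in responses]
    return round(sum(values) / len(values), 1)

# Hypothetical sample of 100 respondents
responses = (["Very Satisfied"] * 40 + ["Somewhat Satisfied"] * 35
             + ["Neither Satisfied Nor Dissatisfied"] * 15
             + ["Somewhat Dissatisfied"] * 7 + ["Very Dissatisfied"] * 3)

index_score(responses)  # → 75.5
```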

    Frequency calculations are generated by dividing the number of respondents selecting a specific response (e.g., Very Satisfied) by the total number of respondents to that question. Frequency scores have the innate advantage of being easier to understand and communicate than index scores.

    When using a frequency based on satisfaction, it’s imperative to consider the implications of using a “Top Box” (e.g., Very Satisfied on a 5-point Likert Scale) or “Top Two Box” (e.g., Very Satisfied and Somewhat Satisfied on a 5-point Likert Scale) approach. This decision should be guided by an analysis of the implications of each. For example, if there is a significant decline in loyalty (e.g., recommend intention) when moving from “Top Box” satisfaction to “Top Two Box” satisfaction – as is often the case – then a “Top Box” definition should be used.
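    The difference between the two definitions is easy to see on a hypothetical sample (the distribution below is invented for illustration):

```python
# Sketch of "Top Box" vs. "Top Two Box" frequency scores on a
# hypothetical sample of 100 respondents.
responses = (["Very Satisfied"] * 40 + ["Somewhat Satisfied"] * 35
             + ["Neither Satisfied Nor Dissatisfied"] * 15
             + ["Somewhat Dissatisfied"] * 7 + ["Very Dissatisfied"] * 3)

def frequency(responses, boxes):
    """Percent of all respondents whose answer falls in the given categories."""
    hits = sum(1 for r in responses if r in boxes)
    return round(100 * hits / len(responses), 1)

top_box = frequency(responses, {"Very Satisfied"})                        # 40.0
top_two = frequency(responses, {"Very Satisfied", "Somewhat Satisfied"})  # 75.0
```

    If loyalty drops sharply among the “Somewhat Satisfied,” the 35-point gap between those two scores is exactly the group a “Top Box” target refuses to count as success.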

    Regardless of the metric used – an index score or a frequency – best practices suggest that the score should be calculated to one decimal place. This approach offers sufficient sensitivity to uncover true differences (that might otherwise be masked by whole-number rounding) and is simple enough to avoid overwhelming users.

    Will the target be stated as a single or tiered objective (e.g., one target or a graduated set of targets ranging from a minimum to a stretch target)?

    Targets can be stated in one of two ways: 1) a single target to which the organization aspires; or, 2) a graduated set of targets with a “floor” and a “ceiling.”

    While a single target is simpler, we’ve observed two key implementation pitfalls. First, setting a single score too high can be perceived as unachievable, especially if the organization fails to reach the target. Conversely, setting a single target too low can invite complacency.

    Perhaps a better approach is the use of a tiered target. A typical tiered target has three levels: threshold, target, and stretch. The “threshold” goal represents “maintenance” of performance (and is sometimes calibrated to allow for a small decline in performance to recognize that small drops might be attributable to statistical error rather than true declines in performance). The “target” goal represents expected improvements that are significant enough to be meaningful, but are still achievable (e.g., there is a 50% chance that this goal will be met). The “stretch” goal represents transformational change (and, as such, should be set high enough that there is only a 20% likelihood of being achieved).

    What statistical and managerial considerations will be used to calibrate the targets (e.g., statistical significance, past performance, etc.)?

    The biggest pitfall, by far, is the actual calibration of the target. For example, a manager in an association we were working with selected 80.0 as her year-end target. It certainly seemed like a nice round number, and it made sense until you considered that the department’s current performance was 71.6 and its score had never moved more than 1.8 points in a single year. Was 80.0 a laudable goal? Sure, but by ignoring a few simple guidelines, the target was setting our client up for failure.

    We’ve found that seven considerations should be taken into account when calibrating quantitative targets: the

    • Spread between current performance and “100” (i.e., to account for the incremental, increased difficulty of improvement as you approach “100”)
    • Historical levels of average absolute change (i.e., how much do the scores typically move on average?)
    • Historical “floor” and “ceiling” levels of performance (i.e., what are the lowest and highest scores ever achieved?)
    • Direction/magnitude of year-over-year change in performance (i.e., are scores going up or down and by how much?)
    • Existence of any key marketplace/internal events that could influence performance (i.e., are there any mitigating circumstances that should be taken into account?)
    • Statistical significance of goal achievement (i.e., what level of statistical significance is achieved if the goal is met?)
    • Credibility of the targets (i.e., will the organization view the targets as believable and achievable?)
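    The statistical-significance consideration can be sketched with a standard two-proportion z-test, treating the anecdote’s 71.6 and 80.0 as Top Box percentages for illustration; the sample sizes below are assumed:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical check: would a Top Box score at the goal level be statistically
# distinguishable from current performance? Two-proportion z-test with pooled
# variance; sample sizes (400 per wave) are assumed for illustration.
def goal_significance(p_now, p_goal, n_now, n_goal):
    """Two-sided p-value for the difference between two proportions."""
    pooled = (p_now * n_now + p_goal * n_goal) / (n_now + n_goal)
    se = sqrt(pooled * (1 - pooled) * (1 / n_now + 1 / n_goal))
    z = (p_goal - p_now) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = goal_significance(0.716, 0.80, 400, 400)
# p is well below 0.05 here, so hitting 80.0 would be a statistically
# detectable change, not noise -- a different question from whether it
# is achievable, which the other six considerations address.
```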

    These guidelines were of great benefit to our association client. They helped her develop a new set of goals that were achievable and representative of a real, meaningful improvement in customer satisfaction. And while at year’s end her department did not hit the stretch goal, they did meet the target goal, providing positive reinforcement for the continuous improvement efforts that she planned for the following year.

    So, by helping to answer these four key questions, our guidelines can lead you in the direction of setting achievable and realistic targets that work best for your company.

    Scott Broetzmann

    President & CEO, CoFounder at Customer Care Measurement & Consulting
    Scott Broetzmann has over thirty years’ experience advising companies on how to invest limited customer experience dollars wisely to ensure happy customers and a return on investment.
