There are three key metrics to track our progress when striving for the ideal of continuous improvement. What we call continuous improvement (CI) is in fact unattainable. Something that is continuous is uninterrupted and never rests. Even if we dedicated 100% of our time to CI activities, which is impractical since our daily work would not get done, the improvements we make would be in fits and starts, not a steady and unbroken stream. Most organizations have improvement cultures that are sporadic or crisis-driven. Those of us who strive for excellence practice *continual* improvement at best, in which we repeatedly improve with breaks in between. Continuous improvement is an ideal state. But with that caveat out of the way, what are the three key metrics in continuous improvement?

The first metric simply shows whether we are making things safer, better, easier, faster, or cheaper. The metric is derived as the difference between the old condition and the new condition as a percentage. Using workplace safety as an example, the formula could be:

% safety improvement = ((accidents before – accidents after) / accidents before) x 100

If an increase was the desirable condition such as for productivity or cash on hand, the difference between old and new would be “*productivity after minus productivity before*” with the denominator remaining productivity before. This first metric should be familiar and intuitive.
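As a small sketch (the function name and sample numbers are my own, for illustration), the first metric can be computed the same way for both "lower is better" and "higher is better" conditions:

```python
def improvement_pct(before, after, higher_is_better=False):
    """Metric 1: percent improvement between an old and a new condition.

    For metrics where lower is better (accidents, defects), the gain is
    (before - after); for metrics where higher is better (productivity,
    cash on hand), it is (after - before). The denominator is always
    the 'before' value.
    """
    gain = (after - before) if higher_is_better else (before - after)
    return gain / before * 100

# Safety example: 20 accidents before, 15 after -> 25% improvement
print(improvement_pct(20, 15))
# Productivity example: 100 units/day before, 110 after -> 10% improvement
print(improvement_pct(100, 110, higher_is_better=True))
```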

Unlike the first measure, which is a results metric, the second is a process metric. It does not measure how much we improve, but how good we are at improving. Since continual and continuous improvement both value repetition and persistent activity over time, it is important to get better at getting things done. The completion percentage metric is derived as the total number of CI ideas implemented divided by the total number of continuous improvement opportunities identified:

% completion = (CI ideas implemented / (ideas generated + problems observed)) x 100

The numerator is straightforward, but the denominator needs some explanation. The number of CI opportunities includes both concrete CI ideas and observed problems that do not yet have a solution. We may be tempted to cheat this metric by counting only fully developed CI ideas or proposed solutions. If we don’t report a problem, our % completion is not penalized when we have no solution for it. This skews the number favorably, as the hard problems observed without good solutions are left out. In an honest calculation, if we have 50 CI ideas and 50 problems observed, and successfully implement 40 of the ideas, the score is 40% (40 out of 50 + 50) and not 80% (40 out of 50). This second metric helps to track how good we are at developing countermeasures to our problems and seeing them through to completion.
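The honest calculation above can be sketched as follows (the function name is my own):

```python
def completion_pct(implemented, ideas_generated, problems_observed):
    """Metric 2: CI ideas implemented over all CI opportunities.

    The denominator counts concrete CI ideas AND observed problems that
    have no solution yet, so hard unsolved problems cannot be quietly
    left out of the score.
    """
    opportunities = ideas_generated + problems_observed
    return implemented / opportunities * 100

# The article's example: 40 implemented, 50 ideas, 50 observed problems
print(completion_pct(40, 50, 50))  # 40.0, not the inflated 80.0
```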

The third metric is also a process metric. It measures how good we are at spotting problems and bringing them to attention. It is also a pokayoke of sorts for metric 2. We take the denominator from metric 2, use it as the numerator, and divide it by a severity value for the problems. The formula for the third metric is:

Spotting score = ((ideas generated + problems observed) / RPN average) x 100

We can borrow from the FMEA (Failure Modes and Effects Analysis) method to assign a risk priority number (RPN) based on the frequency, severity, and ease of detection or control of each risk. Or we can simply use a 1 to 5 ranking, from not-so-bad to really bad. Ideally, the average RPN should be 1, the smallest whole number. This maximizes our spotting score, and it also means that we are catching the vast majority of problems early, while they are still small.
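Putting the pieces together, a sketch of the spotting score (function name and sample RPN values are my own, assuming one RPN or 1-to-5 rank per observed problem):

```python
def spotting_score(ideas_generated, problems_observed, rpn_values):
    """Metric 3: how good we are at spotting and surfacing problems.

    rpn_values holds one risk priority number per observed problem,
    from an FMEA-style RPN or a simple 1-5 severity rank. A lower
    average RPN means problems are being caught while still small,
    which raises the score.
    """
    rpn_average = sum(rpn_values) / len(rpn_values)
    return (ideas_generated + problems_observed) / rpn_average * 100

# 50 ideas + 50 observed problems, all caught small (every RPN is 1):
# the average RPN is 1, so the score is simply (50 + 50) x 100.
print(spotting_score(50, 50, [1, 1, 1, 1, 1]))
```

Note how surfacing many small problems raises the numerator while keeping the average RPN near 1, exactly the bias the article describes.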

How does this spotting metric act as a pokayoke for the completion % metric? There are several ways. First, we create a bias and reward toward surfacing problems early, when they are still small and their potential impact is limited. Spotting small problems helps to keep the average RPN low, even when averaged together with a few big problems. Second, if someone were to leave out a few big problems for which there was no good solution, intending to improve metric 2, it would backfire. Real problems don’t get smaller over time if left alone; they do the opposite. Small problems are easier to solve, raising our completion % score. Third, because metric 3 rewards both the generation of CI ideas and the observation of problems without proposed solutions, more ideas and problems come to the surface. A better spotting score in turn fuels the need to focus on metric 2, the skill of developing countermeasures and seeing them through to completion.

The long-term aim of striving for continuous improvement is threefold: to deliver results, to raise peoples’ skills in problem resolution, and to make everyone better at and more comfortable with raising problems and ideas. These three factors work together to give us a chance at making our improvement efforts continuous.
