Hoshin kanri involves the identification and deployment of various initiatives (or countermeasures) to eliminate major obstacles to reaching the organization’s goal and to further improve the organization’s performance. Typically, organizations subsequently track their performance to see whether they are actually improving. In hoshin kanri these performance indicators are called “control items”. But what organizations track far less is whether they are actually deploying what they set out to deploy. These so-called “check items” are often missing. And exactly this is one of the key aspects in which hoshin kanri differs from traditional management by objectives (MBO).
Check items are the quantitative elements used in measuring deployment progress. For instance, unplanned downtime of a piece of equipment may be a control item, whereas the percentage of machines under proper autonomous maintenance could be a related check item. Or on-time delivery may be a control item, whereas the number of part numbers under true kanban control could be a check item. Hoshin kanri sets time-phased ambitions for both.
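To make the pairing concrete, here is a minimal sketch of a control item linked to a related check item, each with time-phased targets. All names and numbers are invented for illustration; they are not taken from any real deployment:

```python
# Hypothetical sketch: pairing a control item (a result metric) with a
# related check item (a deployment metric), each with time-phased targets.
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    unit: str
    targets: dict  # period -> target value


control = Item("on-time delivery", "%", {"Q1": 90, "Q2": 93, "Q3": 95})
check = Item("part numbers under true kanban control", "%",
             {"Q1": 30, "Q2": 60, "Q3": 85})


def gap(item: Item, period: str, actual: float) -> float:
    """Signed gap to the period's target (positive = ahead of plan)."""
    return actual - item.targets[period]


print(gap(control, "Q2", 91))  # -2: result is behind the Q2 target of 93
print(gap(check, "Q2", 65))    #  5: deployment is ahead of the Q2 target
```

Tracking both gaps side by side is the point: a lagging control item with a lagging check item suggests the countermeasure simply has not been deployed yet, while a lagging control item despite an on-track check item questions the countermeasure itself.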
But judging that a machine is under “proper” autonomous maintenance control, or a part number under “true” kanban control, is not that simple. What do we call “proper” and “true”? “Proper” and “true” should relate to the company’s vision of what autonomous maintenance and kanban control are, and will involve various elements. It should not be in the eye of the beholder. So what do you need to see? What is the evidence upon which anyone can consistently conclude that a certain policy has been deployed properly?
This is where checklists are often used as part of audits or process confirmations. But checklists may lead to personal judgments, inconsistent results from different assessors, a lack of feedback and even a generally negative perception of the Lean journey. Less common, but better I think, is the use of so-called “rubric assessments” in evaluating deployment.
According to Wikipedia, a rubric originally is a word or section of text that is traditionally written or printed in red ink for emphasis. The word derives from the Latin rubrica, meaning red ochre or red chalk, and originates in Medieval illuminated manuscripts from the 13th century or earlier. Various figurative senses of the word have been extended from its original sense, among which the use of rubric as a category, i.e., something under which a thing is classed (Merriam-Webster’s Collegiate).
In this last sense, a rubric assessment is often used in education, where it is described as an evaluation tool to measure the attainment of certain standards against a consistent and coherent set of criteria. A rubric thereby clearly describes expectations and helps to ensure consistency in the evaluations. In her book, Susan Brookhart adds that the essence of a rubric is that you are able to assess something by matching what you observe with the description of the criteria and the expected values of those criteria. In that way, she says, it helps to avoid handing out a grade or score without discussing the evidence. Furthermore, a rubric helps clarify expectations upfront, which is rarely a luxury in the deployment of Lean initiatives, whose intent is not always clear at all levels and in all areas of the organization.
A rubric basically consists of a vertically organized list of specific criteria on which an organization, department or even area will be assessed. Horizontally, the specific quality standards are provided that will be used to assess the entity. These quality standards may detail what needs to be observed, demonstrated or done in order to be at a certain assessment level. Often these are descriptive in nature, but they could even include quantitative criteria. One university puts it as follows: “a rubric can be useful when you find yourself making the same comments again and again and when students repeatedly ask you about the requirements”. Just think how often this happens within your deployment…
The quality standards for each criterion are split into several levels of achievement, indicated by a number (level or step 1, 2, 3, etc.), a letter (A, B, C, etc.) or a descriptive name. I often find that they are not so far off from Dreyfus’ five-stage model of skill acquisition, i.e., (1) novice, (2) advanced beginner, (3) competent, (4) proficient and (5) expert, but applied to organizations. This then often leads to qualifications such as (1) not applied, (2) partial or launched, (3) systematic or in deployment, (4) mature and successful, (5) excellent and sustainable.
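The criteria-by-levels structure described above can be sketched as a small lookup table. Everything below (the criteria, the level labels, the one-line standards) is hypothetical shorthand; a real rubric would carry much richer, company-specific descriptions:

```python
# Hypothetical sketch of a rubric: criteria (rows) x achievement levels
# (columns), plus an assessment step that turns scores into feedback.

LEVELS = {1: "not applied", 2: "partial or launched",
          3: "systematic or in deployment", 4: "mature and successful",
          5: "excellent and sustainable"}

# Each criterion maps a level to the observable standard for that level.
RUBRIC = {
    "autonomous maintenance": {
        1: "no cleaning/inspection standards exist",
        2: "standards exist for pilot machines only",
        3: "operators follow standards on most machines",
        4: "operators improve the standards themselves",
        5: "zero unplanned stops sustained over a year",
    },
    "kanban control": {
        1: "no pull signals in use",
        2: "kanban piloted on a few part numbers",
        3: "most part numbers on kanban, loops sized periodically",
        4: "kanban loops resized from actual demand data",
        5: "self-correcting pull across the value stream",
    },
}


def assess(scores: dict) -> dict:
    """For each criterion, report the level reached, the evidence that
    level requires, and the description of the next level to aim for."""
    feedback = {}
    for criterion, level in scores.items():
        standards = RUBRIC[criterion]
        feedback[criterion] = {
            "level": LEVELS[level],
            "evidence expected": standards[level],
            "next target": standards.get(level + 1, "sustain level 5"),
        }
    return feedback


result = assess({"autonomous maintenance": 2, "kanban control": 3})
print(result["kanban control"]["next target"])
```

Note that the feedback names the next level's standard explicitly; this is what turns the rubric from a scoring device into a coaching device, as discussed next.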
More than with a checklist, a rubric helps the assessor provide useful feedback to the organization being assessed. Instead of just checking off the boxes, a rubric helps the assessor fulfill his or her coaching role and actually help the organization progress. A rubric helps make feedback clearer, more detailed and more useful, as it improves the identification and communication of the gaps to be closed — and therefore the required actions — to get to the next level. It also helps sites or areas that are still at the beginning of their Lean journey to stay motivated, compared to a one-dimensional checklist that will punish them and paint them blood-red. Not the type of effect you want, particularly not at the start of your Lean journey.
But rubrics require careful preparation. And they can only be created in parallel with, or after, the standards have been defined, as a rubric mirrors those same standards. It requires a clear and detailed vision, not only of the ideal state, but also of the subsequent target states on the road to that ideal state. And since standards evolve, rubrics evolve as well. So take care not to blindly copy general Lean assessments that are on the web or available from consultants or even from other companies. They may provide initial ideas and can serve as examples, but in the end it is up to you to make your own system work for you!