What Drives Continuous Improvement

Let’s start with the really big picture – think about improvement over decades and longer. What has changed since the eighteen hundreds, the sixteen hundreds, or across even longer spans of time?
Here is a hint at the first driver: “The wood and stone age. Tin. Copper. Iron. Steel. Aluminum. Plastics. Semiconductors.” New Materials!
The formal scientific method is relatively new, but inquiring minds have been exploring materials science for millennia. Advances and discoveries of new materials have defined and delimited what was possible throughout human history. Today, these advances come every couple of decades.
Every new advance (graphene, perovskites, nanomaterials, metamaterials, etc.) opens new possibilities for the second driver: invention, engineering, and automation.
While some products, for good reasons, are still made in the tested and traditional manner, new technologies periodically replace the old. We have a saying in our approach to quality and improvement: “Management provides the facility and is ultimately responsible for process capability, IF Operations runs the facility correctly and consistently.”
If both parties to this division of labor work together, processes can produce excellent products – and each can contribute to the goal of producing more with less in the future.
Management can commission engineering to rebuild processes with the latest inventions, materials, and technology. Operations can continue to expand their knowledge beyond the “user manual” – and fully master these new tools. This brings us to the third driver of continuous improvement: operations excellence and operations sustainment training.
An important component of operations management is continual training: reviewing methods, correct use of tools and technology, safety protocols, and so on. Professional athletes, performers, and musicians never stop practicing and rehearsing.
Improvement only happens by accident, or when we learn something. The reverse is also true: process performance sometimes worsens by accident, or when we forget. Accidental improvements are temporary. Retained and applied knowledge is the key.
Just Enough, and Just In Time Management
(Definition: Tactical management is management of -available- means to an objective.)
The “micromanaging” topic always raises the question, “How often should tactical managers check in on the progress of a project or task set?” Any intervention by a manager (whatever the form) generally amounts to: Communicate, Coordinate, and Redeploy resources as necessary. You change assignments and reallocate available resources.
To find a balance between time spent talking about work and time actually working, I generally apply this guideline:
I ask: how many opportunities for action should I budget in order to make project adjustments? Of course, you want to place as much trust in the team as possible, and not frustrate them by signaling too-frequent changes of priority and direction.
So, ask yourself, what is the “pace” of the project? If an objective is 16 weeks in the future, can I adapt to any issues and unforeseen surprises if I review monthly? That gives me only three opportunities to act. Given what I know about my team and our objective, does that seem like enough? Can you count on your teams to “send up a flare” if they encounter unforeseen issues?
What about management in a crisis, where we are trying to contain a serious problem in hours, and not days? How often is too often to check status?
Let’s explore the example of the 16-week project. After 4 weeks and 1 review, one fourth of the budgeted resources has been spent, and there are only two remaining reviews before deliverables are due.
With only three fourths of the time and money left, are the remaining resources (if judiciously redeployed) going to be sufficient to bring things back on track? In most scenarios, this seems a little tight. 6-12 checkpoints seem like a better minimum for most projects, depending on the team, of course.
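To make the cadence arithmetic concrete, here is a small Python sketch (illustrative only; the function name and the assumption that a review held on the delivery date is too late to act are mine) that counts the opportunities to act for a given review cadence:

```python
# Illustrative sketch: how many chances to act does a review cadence give us?
# Assumption (mine): a review held on the delivery date is too late to help,
# so only checkpoints strictly before the final week count.

def checkpoints(project_weeks: int, cadence_weeks: int) -> list[int]:
    """Weeks at which a review still leaves time to redeploy resources."""
    return list(range(cadence_weeks, project_weeks, cadence_weeks))

# The 16-week example from the text: monthly reviews give only 3 chances to act.
print(checkpoints(16, 4))        # [4, 8, 12] -> 3 opportunities
# A bi-weekly cadence lands inside the suggested 6-12 checkpoint range.
print(len(checkpoints(16, 2)))   # 7 opportunities
```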
Of course, tactical management of -processes- is a different topic, and totally different approaches are needed. Many technology project managers use an iterative development process and a pace established by weekly or bi-weekly “sprints”.
Guided by a rendering, or “wireframe”, each iteration is intended to produce an incremental deliverable which will either converge on a finished product or continually improve an existing one. A week or two is typically long enough to add and test a couple of features, and short enough to keep the scope of very abstract work conceptually manageable.
“Process Improvement”

On a company walk-through, we observed a clerk trying to receive and read new customer orders on a remote CRT display terminal tied to the customer’s mainframe.
The clerk struggled to interpret the old, coded “punch card” fixed-field format. Then the clerk would carefully type the order for manufacturing into the company’s own in-house computer system.
The manager saw no problem with this process but thought that perhaps training would improve things a bit. It is likely that this order-intake/order-entry area would never have been targeted for “improvement” or flagged as a problem area. “If it’s not broke….”
Some processes don’t need to be improved. Some processes don’t have a problem that needs to be fixed. Some processes simply need to be eliminated.
What is often called for is a fresh look that is not prejudiced by how the work was previously done. Start with the objective. Don’t sub-optimize. Is the objective “improved” order-entry, or making it as easy as possible for the customer to place orders?
The very first step in process improvement is, “Show me everything. Let’s walk through the big picture.” Problems like the order-entry issue will then immediately stand out to a fresh set of experienced eyes.
The “Grouping Error”
Observations can be turned into data by measurement. Measurements can be summarized by statistical analysis, and then decision-making ideas start to emerge from the numbers.
But wait! There is this thing called “the grouping error”.
Here is our classic “classroom” story illustrating this problem:
Ten machines are “bubble-packing” a consumer product, and a statistical summary concludes that about 1% of the packages are mangled, damaged – crushed in the packaging machines. It appears to be an alignment failure of conveyor, product, and heat-sealing stamp/press.
This idea begins to emerge: what do these ten machines have in common that causes an occasional alignment failure? Is it a timing mechanism? Are there plastic parts that should be replaced with steel? Do we need to rebuild or replace these machines with precision stepping-motor components? (Don’t raid the capital expenditure budget yet!)
Here is how the 1% came to be: ONE machine had a 10% scrap rate and the remaining nine had little or none. A DIFFERENT IDEA emerges from the numbers: what is different about machine number ten?!!
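To see the arithmetic behind that 1%, here is a short Python sketch with invented counts (hypothetical numbers, chosen to match the story): nine nearly clean machines plus one at a 10% scrap rate pool to roughly 1%:

```python
# Hypothetical per-machine inspection counts for the classroom story.
scrap_counts = [2, 1, 0, 3, 1, 2, 0, 1, 2, 1000]   # damaged packages per machine
inspected = [10_000] * 10                           # packages inspected per machine

pooled_rate = sum(scrap_counts) / sum(inspected)
print(f"Pooled scrap rate: {pooled_rate:.1%}")      # ~1.0% -- the misleading summary

for machine, (bad, n) in enumerate(zip(scrap_counts, inspected), start=1):
    print(f"Machine {machine:2d}: {bad / n:.1%}")   # machine 10 stands out at 10.0%
```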
I have seen this exact issue in more than one industry and process, and of course there are ways to stay vigilant and catch this mistake before the data is rolled up into a final report.
A data analyst might know that a histogram can show multiple peaks (“bimodal”), indicating that a single average does not describe the population. A statistician might look at data clusters, perform an F-test, or test for goodness of fit to a normal distribution. Any of these checks and more could be employed to examine suspicious data for a grouping error.
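For readers who want to see what such checks look like, here is a minimal Python sketch on simulated data (the measurements and machine setup are invented; only the numpy/scipy calls are standard):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulate a measurement for nine similar machines plus one outlier machine.
groups = [rng.normal(loc=1.0, scale=0.3, size=200) for _ in range(9)]
groups.append(rng.normal(loc=10.0, scale=1.0, size=200))  # "machine ten"
pooled = np.concatenate(groups)

# Check 1: goodness of fit to a normal distribution on the pooled data.
stat, p = stats.normaltest(pooled)
print(f"Normality test on pooled data: p = {p:.2e}")  # tiny p: not one population

# Check 2: a one-way F-test across groups before merging them.
f_stat, f_p = stats.f_oneway(*groups)
print(f"F-test across machines: p = {f_p:.2e}")       # tiny p: group means differ

# Check 3: a histogram of the pooled data will show two peaks ("bimodal").
counts, edges = np.histogram(pooled, bins=30)
```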
However, there are facts we know from simply observing the thing we measure. CNC machining data should probably not be merged too early in the analysis with “old school” machining technology or additive manufacturing. Defective or damaged products manufactured from wood should be studied apart from the same products made from metal. Call-center calls handled with translators should not be prematurely grouped with calls handled by native speakers.
Working with data does NOT mean shutting out every other fact and observation available to us, and this other information guides us as we extract the right conclusions from the data we collect.