Archive
“Process Improvement”

On a company walk-through, we observed a clerk trying to receive and read new customer orders on a remote CRT display terminal tied to the customer’s mainframe.
The clerk struggled to interpret the antiquated, coded “punch card” fixed-field format, and would then carefully re-type each order for manufacturing into the company’s own in-house computer system.
The manager saw no problem with this process but thought that perhaps training would improve things a bit. It is likely that this order-intake/order-entry area would never have been targeted for “improvement” or flagged as a problem area. “If it’s not broke….”
Some processes don’t need to be improved. Some processes don’t have a problem that needs to be fixed. Some processes simply need to be eliminated.
What is often called for is a fresh look that is not prejudiced by how the work was previously done. Start with the objective. Don’t sub-optimize. Is the objective “improved” order-entry, or making it as easy as possible for the customer to place orders?
The very first step in process improvement is, “Show me everything. Let’s walk through the big picture.” Problems like the order-entry issue will then immediately stand out to a fresh set of experienced eyes.
The “Grouping Error”
Observations can be turned into data by measurement. Measurements can be summarized by statistical analysis, and then decision-making ideas start to emerge from the numbers.
But wait! There is this thing called “the grouping error”.
Here is our classic “classroom” story illustrating this problem:
Ten machines are “bubble-packing” a consumer product, and a statistical summary concludes that about 1% of the packages are mangled, damaged, or crushed in the packaging machines. It appears to be an alignment failure among conveyor, product and heat-sealing stamp/press.
An idea begins to emerge: What do these ten machines have in common that causes an occasional alignment failure? Is it a timing mechanism? Are there plastic parts that should be replaced with steel? Do we need to rebuild/replace these machines with precision stepping-motor components? (Don’t raid the capital expenditure budget yet!)
Here is how the 1% came to be: ONE machine had a 10% scrap rate and the remaining nine had little or none. A DIFFERENT IDEA emerges from the numbers: what is different about machine number ten?!!
I have seen this exact issue in more than one industry/process, and of course there are ways to be vigilant and catch this mistake before the data are rolled up into a final report.
A data analyst might know that a histogram can show multiple peaks (“bimodal”), indicating that a single average does not describe the population. A statistician might look at data clusters, perform an F-test, or test for goodness of fit to a normal distribution. Any of these checks and more could be employed to examine suspicious data for a grouping error.
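To make the check concrete, here is a minimal Python sketch using made-up counts that mirror the story (nine well-behaved machines and one scrapping about 10% of its packages). The chi-square test of homogeneity shown is just one of the checks a statistician might reach for before rolling the data up.

```python
# A minimal sketch of a pre-roll-up check for the grouping error,
# using made-up counts: nine clean machines and one machine
# damaging about 10% of its packages.
from scipy.stats import chi2_contingency

packed = [1000] * 10                         # packages run per machine (assumed)
damaged = [2, 1, 3, 0, 2, 1, 2, 1, 0, 100]   # damaged packages per machine (assumed)

pooled_rate = sum(damaged) / sum(packed)
print(f"Pooled damage rate: {pooled_rate:.1%}")    # ~1.1% -- looks like a "system" problem

for i, (n, d) in enumerate(zip(packed, damaged), start=1):
    print(f"Machine {i:2d}: {d / n:.1%}")          # machine 10 stands out at 10%

# Chi-square test of homogeneity: do all machines share one damage rate?
table = [[d, n - d] for d, n in zip(damaged, packed)]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}")               # a tiny p-value says these are NOT one group
```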
However, there are facts we know from simply observing the thing we measure. CNC machining data should probably not be merged too early in the analysis with “old school” machining technology or additive manufacturing. Defective/damaged products manufactured from wood should be studied apart from the same products made from metal. Call-center calls handled with translators should not be prematurely grouped with calls handled by native speakers.
Working with data does NOT mean shutting out every other fact and observation available to us, and this other information guides us as we extract the right conclusions from the data we collect.
“We Tried TQM, Lean, Just In Time, Six Sigma, etc…but”
A helpful post contrasting the definitions of Lean versus Six Sigma made me think about the skeptical reaction many have when the latest improvement buzz phrase or acronym appears in the media. There are always successful and credible case studies, but many are left thinking that surely a key ingredient has been left out of the tasty recipe.
The “Silver Bullet”
For years, we attributed this to the wish for a “silver bullet”, a quick solution to that performance, time, cost or risk problem. Perhaps the way solutions are presented (Six Sigma versus lean versus Kanban versus KPIs versus dashboards, team building, etc.) contributes to the misunderstanding and misuse.
Maybe it is the way that older strategies are renamed, rebranded, and upgraded with larger and larger data requirements. If SPC didn’t work, then maybe DOE, regression (sorry: “Predictive Model Building”), multiple regression, data lakes, simulations, linear algebra and AI are the answer.
Certainly, greater computational power, technology improvements and more data are positives; but these various methods and tools should not be treated as “siloed” solutions. There is often a missing ingredient in many approaches: integration of these tools with each other and with a conceptual view of the processes one wishes to improve.
Quality, Performance & Value
Many struggled with TQM (“Total Quality Management”) because of the tendency to conflate “Quality” with “Performance”. To clarify this, I would ask teams, “What would Total COST Management represent? What is the value of an absolutely perfect wedding cake one day late?” When they came to see quality as value to the customer, TQM began to be integrated with the famous conceptual formula from project management: “Value = FunctionOf(Performance, Time, Cost, Risk)”. (Not every chicken dinner needs to be a $1,000/plate celebrity-chef masterpiece to have value.)
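As a toy illustration only (the multiplicative form and the 0-to-1 scores are my assumptions for this sketch, not part of TQM or any project-management standard), something like the following captures why a perfect cake delivered a day late carries almost no value:

```python
# Illustrative sketch of Value = FunctionOf(Performance, Time, Cost, Risk).
# The multiplicative form and 0..1 scoring are assumptions for illustration only.
def value_score(performance: float, on_time: float, cost_ok: float, low_risk: float) -> float:
    """Each input is scored 0..1; a multiplicative form means a zero on any
    dimension (e.g., the wedding cake delivered a day late) drives value to zero."""
    return performance * on_time * cost_ok * low_risk

print(value_score(1.0, 0.0, 1.0, 1.0))  # perfect cake, a day late -> 0.0
print(value_score(0.8, 1.0, 0.9, 0.9))  # good-enough chicken dinner, on time and on budget -> ~0.65
```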
When high-scoring “Quality as Value” KPIs fail to tell us that customers were disappointed, we must add the knowledge that metrics and measures are not the same; that actionable descriptive statistics rely on homogeneous groups; and that outliers and trends can hide in a statistical analysis that ignores time sequence and variability.
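A minimal sketch of that last point, using simulated numbers: the same values presented in a different time order produce identical summary statistics, so a steady drift is invisible to the averages and only shows up when the sequence itself is examined.

```python
# The same numbers in a different time order give identical summary
# statistics, yet one sequence is stable noise and the other is a drift.
import random
from statistics import mean, stdev

random.seed(1)
stable = [10 + random.gauss(0, 1) for _ in range(30)]   # random scatter around 10
drifting = sorted(stable)                               # same values, re-ordered into a steady climb

for name, series in [("stable", stable), ("drifting", drifting)]:
    print(f"{name:9s} mean={mean(series):.2f} stdev={stdev(series):.2f}")
# Identical means and standard deviations -- only a run chart (time order) reveals the drift.
```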
Variability & Outliers
When descriptive statistics are integrated with probability functions and Statistical Process Control (SPC), we begin to get a near real-time picture of variability, quick recognition of outliers, and objective evidence of homogeneous groups for acceptance testing. We then need to integrate this with an action plan for outliers. We need tools to connect causes (“Things and how they behave”) with effects (changes in variability and outliers).
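For the SPC piece, here is a minimal individuals-chart sketch with assumed measurements; the 2.66 factor is the standard individuals-chart constant (3/d2 with d2 = 1.128) for estimating 3-sigma limits from the average moving range.

```python
# A minimal individuals (I) control chart: 3-sigma limits estimated
# from the average moving range, flagging points outside the limits.
measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 13.5, 10.0, 9.7, 10.2]  # assumed data

center = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
for i, x in enumerate(measurements, start=1):
    flag = "  <-- outlier: investigate" if x > ucl or x < lcl else ""
    print(f"sample {i:2d}: {x:5.2f}{flag}")
```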
Visualizing Process Dependencies
When Cause/Effect or “dependency” diagrams give us visual relationships between “What we do” and “What we get”, we can integrate them with our metrics, measures, process and product targets, and data-collection strategies. With this additional tool, we can integrate team building, leadership exercises, sustainment training and “learn, share, teach, improve” training with experiment days.
The “Base Camp”
When we finally have what we call a “base camp” from which further improvement can occur, we are ready to try and test advanced optimization techniques, large-data-set tools, technology upgrades and more.
Whether our improvement initiatives began with process inputs, customer deliverables, a bottleneck process, or in quality, lab and measurement, we continue to integrate, connecting what we know and have learned about one area to others, and we may use one of the various “value chain” techniques.
Continuous Improvement
We match inputs of one internal process to the outputs of another to optimize and align. As variability is reduced, buffer stocks can be gradually reduced and Just-In-Time techniques can be incorporated.
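As a rough illustration of that last point, the textbook safety-stock relationship ties the buffer directly to demand variability (assuming a constant lead time); the service level, standard deviations and lead time below are illustrative assumptions.

```python
# Safety stock = z * sigma_demand * sqrt(lead_time), assuming demand
# variability only and constant lead time. Numbers below are illustrative.
from math import sqrt

def safety_stock(z: float, sigma_demand: float, lead_time_days: float) -> float:
    return z * sigma_demand * sqrt(lead_time_days)

z = 1.65  # roughly a 95% service level
for sigma in (40.0, 20.0, 10.0):  # daily demand std. dev. before and after variability reduction
    print(f"sigma={sigma:5.1f} units/day -> buffer ~ {safety_stock(z, sigma, lead_time_days=4):.0f} units")
# Halving the variability halves the buffer stock needed for the same service level.
```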
There are other tools in the process-improvement toolbox, some of which are optimized for manufacturing and some for the service industry. The principle is the same. Regardless of which techniques come first and where in the process we begin, there is a need for integration into an expanding process-knowledge database structured to support -human- intelligence and a conceptual understanding of work and deliverables.
Actionable Metrics
An analysis, dashboard, diagram or computer output that has not been integrated into the sum of what we know is -not- actionable. If you have seen daily, weekly, and monthly status reports on paper or screen that do not drive action, you know exactly what I mean. It’s the Monday-morning “Read and File” process, and that may be why these approaches sometimes fail.