It’s Hard To Make Things Easy…
I have often been asked how I came into the field of consulting, and how that career path forked and diverged into successful projects in several manufacturing and service industry companies. It began when I went to work with James Abbott. I was an early hire for his training organization and continued to work with him as we transitioned into a training and consulting group.
James had a vision. He saw the training and professional development need that every company has, and he decided to address that need with in-depth training seminars on a wide variety of subjects.
I had an academic background in physics and math and a strong work background in Information Technology. I have always had an appetite for mastering new material, and I told James that I would come up to speed on any training material he wanted to offer. This turned out to be a perfect fit for my personality and skill set!
The Secret To Great Training
We worked hard to develop and organize the training materials in proper order from the basic to the advanced. Our course design approach was to always establish a context for new ideas. We often reminded each other that counting comes before arithmetic, arithmetic before algebra, and algebra before calculus.
This conceptual approach allowed us to present a large amount of material effectively in a compressed period of time. When community colleges began teaching computer skills classes, their instructors took two days of our material as the template for a semester-long course.
What Students Said
James has a saying, “It is easy to make things hard and complex. It is hard to make things easy and simple!” This principle was obvious to me after every seminar that I taught. The student reviews typically fell into two categories.
I was disappointed with reviews like this: “The material was very advanced. I learned a lot. The instructor was an expert in the material and presented it very well.” Instead, I wanted to see reviews like this: “The class was easy. The time went by fast. I think I may have already known a lot of the material beforehand.”
With the same subject, similar audiences reported different experiences. The difference always came down to how I had organized the material. Proper order, I could tell, was one important key to participants “getting” it. (Arithmetic Before Algebra!)
The Principle Applied to Process Improvement
Any business, service industry or manufacturing process can improve if we learn something new and integrate it into the context of our existing knowledge. New process tools, knowledge and metrics are the principal drivers of improvement. Every process degrades when we start to forget and let known, manageable factors return to the dark unknown.
Learning and sustainment begin by building that contextual foundation. What do we know? When we start a consulting project, we often encounter this: “It’s all the same. A call is a call. An order is an order. A cast iron part is a part. It’s pretty obvious.”
When pressed for details on how things work and how tools are used, this posture then flips and becomes: “Our processes are so complex. You wouldn’t understand. No two calls in the call center are the same. No two customer manufacturing orders are the same. No two system failures are the same.”
We call this retrenching position the “no two snowflakes are alike” response, and we then explain what we are looking for:
(1) A difference in degree that is a difference in kind: For example, 2- or 3-minute calls in the support center versus 30-minute calls. “Short calls” versus “extended calls”, or “quick feature training calls” versus “software installation and configuration calls”. These might be approached as two different processes (see the sketch after these examples).
(2) In manufacturing, we might want to think about similar products made from copper or aluminum versus wood. These might be considered as two processes if the end result differs only in material. “Soft Metal” versus “Woodworking”.
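Here is a minimal sketch of example (1) in Python. The call durations and the 10-minute cutoff are invented for illustration, not client data; the point is only that stratifying first makes each “process” simple to summarize.

```python
# Minimal sketch: split call records into two "processes" by a duration
# threshold, then summarize each stratum separately. The durations and
# the cutoff are invented assumptions for illustration.
from statistics import mean

calls = [2.5, 3.1, 2.8, 31.0, 2.9, 28.4, 3.3, 35.2]  # minutes

CUTOFF = 10.0  # the "difference in degree that is a difference in kind"
short_calls = [c for c in calls if c < CUTOFF]
extended_calls = [c for c in calls if c >= CUTOFF]

for name, stratum in [("short", short_calls), ("extended", extended_calls)]:
    print(f"{name} calls: n={len(stratum)}, mean={mean(stratum):.1f} min")
```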
Is This Too Simple?
When we teach these first process concepts to beginners, they object at first that it seems too simple. It is simple, once this “factor analysis” has been done!
Is This Too Hard?
Charles Hobbs was an important thought leader in the field of Time Management, and he once told a parable of a woman who found purpose when she was encouraged to start by examining what was in her own back yard. She found a rock, a stepping-stone near her back door, and began to ask herself, “What kind of rock is this? What else can it be used for? What is its mineral content?” After a time, she became quite knowledgeable about minerals, and when she felt that she had reached a goal she asked for advice again. Her mentor replied: “What was under the rock?”
When even experienced engineers grasp the implications of recursively drilling down into processes to learn more, to “look under the rock”, they often experience a moment where it all seems never-ending and too hard.
Simple is not the same as easy. Creating a culture of learning in yourself and in an organization is hard work. However, a culture of learning results in better decisions, better products and services, lower costs and, hopefully, a virtuous circle of continuous improvement.
Have you heard this saying?
“You can’t inspect quality into the product! Quality begins in the process.”
I’ll never forget the night a glass tabletop in our home suddenly exploded – shattering into an uncountable number of tiny pieces. It had not been struck, abused, or even touched. There was nothing on the tabletop, and the glass had been resting gently on several rubber supports for months, possibly years.
(See: Why Spontaneous Explosion Occurs to Tempered Glass, LandGlass)
Without any evidence of defects or non-conformities of any kind, this shattering event was likely born in the -process- which created it. But can product inspection tell us anything about the temperature or pressure differentials present when the glass tabletop was created? Most often, it cannot.
Perhaps there are exotic techniques that can extrapolate these facts, but once the product is formed, the opportunity to measure and to know is lost.
Remember that SPC is Statistical –Process– Control? Metrics of process are just as important as, and sometimes more important than, finished product dimensions, weights, colors, and tolerances.
A real-time product metric does have some ability to “look back” into the process, and we can infer process consistency to an extent. But time-series temperatures, processing speeds, and other measures of activity provide the direct look that many operations want and need.
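As one minimal sketch of such a process metric at work, here is an “individuals” control chart on a temperature time series in Python. The readings are invented; estimating sigma from the average moving range is a standard SPC convention, not a detail from the glass story.

```python
# Minimal sketch: an "individuals" control chart on a process temperature
# series. Sigma is estimated from the average moving range (MR-bar / 1.128,
# the d2 constant for n=2), per standard SPC practice. Readings are invented.
from statistics import mean

temps = [348, 351, 350, 349, 352, 350, 365, 349, 350, 348]  # deg C, per batch

center = mean(temps)
moving_ranges = [abs(b - a) for a, b in zip(temps, temps[1:])]
sigma_hat = mean(moving_ranges) / 1.128
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

for i, t in enumerate(temps, start=1):
    flag = "  <-- out of control, investigate" if not (lcl <= t <= ucl) else ""
    print(f"batch {i:2d}: {t} C{flag}")
print(f"center={center:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")
```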
Whether it is food prep in the kitchen, pouring and curing concrete, or making tempered glass – process metrics tell an important part of the Quality story that product inspection alone cannot.
Just One More Person…
Many times, a team will make the case that they need “just one more person”, but when pressed for details, the exact role is poorly defined. So, next comes the attempt to be specific.
A committee is then formed to generate job requirements. Someone might start with a Word and Excel requirement and someone else will suggest that C# programming or a master’s degree would be nice. It’s resume poker. “See and Raise”.
Instead, back up and take a look at the business processes and match roles and skills to the value stream.
In environments where there is a progression of task complexity, create a career path that lets new associates make an early contribution without umpteen weeks of training, and then let them advance as they continue to learn the operation.
When All You Have Is A Hammer…
In every line of work, people strive to conceptualize, to mentally simplify work. We want to understand. We want reminders of how things work. We want to continue to learn and to improve. Diagrams, drawings and pictures are tools we all use for this very purpose.
There is more than one way to approach this, to visualize relationships between time, dependencies, roles, inputs, actions, cost and results. Look at these four examples and ask yourself: what is the right tool for the job?
When do we need a flowchart approach with loops and branches?
https://en.wikipedia.org/wiki/Business_Process_Model_and_Notation
When do we need a matrix of “as is” business silos on which to analyze some new or existing workflow?
Layering: A New Approach to Business Process Mapping – isixsigma.com
When is a Gantt chart and a critical path analysis required?
When is the “front line worker” perspective (a dependency diagram and decision table) appropriate?
Operation Improvement Inc. » Every process has a product and a by-product.
What is the Goal? Who is the audience? How does the visualization really help you to see the process and its relationship to the product, any decisions and actions, and any relevant metrics and measures? Don’t make your analysis about paperwork. Make it about understanding and communicating.
You may have a favored approach, but you need to remember that not everything is a nail.
Future-Proof Your Programming Skills
If you started with Java when it was the “new kid”, and then moved on to Node.js, Rust or Python, you know that the language you are using today may not be the one you are using in two or three years. Structured programming, objects, encapsulation, and open-source frameworks and packages are common features of all modern platforms. So, when we migrate to a new environment, we expect to find these familiar constructs along with a new twist.
Read on, and you will see how developing your logical thinking skills is actually your core competency.
As we all come “up to speed” in a new development environment, we reach for familiar job aids in the first two or three weeks. A good code editor will “grammar check” your code and helpfully fill in key reserved words. Syntax cheat sheets and code examples will get you rapidly oriented in your new language.
In quick references, you will find the hundred or so built-in functions that do the usual type conversions, string and numeric manipulations and so on. Are they implemented “SQL” style, or are they called as an object’s method? Is code structure defined with keywords beginning and ending code blocks, or does indentation create structure? Can I create a data type and assign it values, or am I limited to traditional data types? How does it handle JSON or XML? Where are the database APIs?
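By way of illustration, here is how one language, Python, answers a few of those quick-reference questions. This is a sketch, not a tutorial; the Order type is invented for the example.

```python
# How one language (Python) answers the quick-reference questions above.
import json
from dataclasses import dataclass

# Built-ins: some are free functions, some are methods on the object.
n = int("42")            # "SQL style" free function for type conversion
s = "hello".upper()      # string manipulation as an object's method

# Structure comes from indentation, not BEGIN/END keywords.
for ch in s:
    if ch == "L":
        print("found an L")

# You can create your own data type, not just use the traditional ones.
@dataclass
class Order:
    order_id: int
    material: str

# JSON is handled by a standard-library module.
order = Order(**json.loads('{"order_id": 7, "material": "copper"}'))
print(order)
```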
None of these “quick start” tools will help you to debug one of the most common logic mistakes in programming. The order in which code is executed matters, and you must learn how to control it.
The Beginner Debugging Challenge
When we read code, the default assumption is that code is executed in the order presented, and this is often true. Beginner mistakes include such things as attempting arithmetic on an undefined variable and missing code dependencies. Language nuance does matter a bit here. Do we declare a variable or object before using it, or is it implicitly declared and typed on first use?
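Here is a minimal Python illustration of that beginner mistake and its fix; the variable names are invented.

```python
# Classic beginner bug: arithmetic on a variable defined "later" in the file.
# Python raises NameError here, because code runs top to bottom:
#
#   total = subtotal + tax   # NameError: name 'subtotal' is not defined
#   subtotal = 100.00
#   tax = 8.25
#
# The fix is simply to control the order of execution:
subtotal = 100.00
tax = 8.25
total = subtotal + tax
print(total)  # 108.25
```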
The Intermediate Debugging Challenge
In many languages, code is not always executed in the order presented. The simplest example is a nested, and not so obvious, break from a loop or a function. A more complex example is the “callback function”, a bit of code that is triggered upon the completion of another piece of code.
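A minimal callback sketch in Python, with an invented fetch() standing in for real I/O:

```python
# Minimal callback sketch: report() is not called where it is defined;
# it runs only when fetch() decides to invoke it. In asynchronous code
# the same pattern means the callback may run long after the lines
# surrounding it have already executed.
def fetch(url, on_done):
    data = f"<contents of {url}>"   # stand-in for real network I/O
    on_done(data)                   # the callback is triggered here

def report(data):
    print("callback fired with:", data)

fetch("https://example.com", report)
```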
The next debugging challenge in the intermediate category is event handling code – code that is triggered by a change to system state. In the cloud it could be code that is triggered when a file is added to a directory. In integration with real-world sensors and devices it could be code that is triggered by light, movement, a magnetic sensor, and so on.
The most challenging entry in this category is concurrency and “race condition” issues. When the language and platform permit parallel execution of code, one routine may complete too soon and “turn out the lights and put out the cat” before another routine has finished its work. Or, two or more routines may compete for a shared resource and “gridlock” the application so that no routine can proceed.
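Here is a minimal Python sketch of that lost-update race. The counter, loop counts, and thread count are arbitrary; the lock is the fix.

```python
# Minimal race-condition sketch: two threads do a read-modify-write on a
# shared counter. Without the lock, some updates are usually lost and the
# final total comes up short; with the lock, the total is always correct.
import threading

counter = 0
lock = threading.Lock()
USE_LOCK = False  # flip to True to fix the race

def worker():
    global counter
    for _ in range(100_000):
        if USE_LOCK:
            with lock:
                counter += 1
        else:
            tmp = counter       # read ...
            counter = tmp + 1   # ... write; another thread may slip in between

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(f"expected 200000, got {counter}")
```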
Even non-procedural tools like SQL are vulnerable to concurrency logic issues. SQL logic grows into scripts. Scripts become stored procedures. Stored procedures become multiple “cron” jobs that are released into systems on independent schedules. Suddenly, completion order and race conditions are again in play and must be sorted out.
The Advanced Debugging Challenge
Here is a little true story that illustrates an advanced issue of logic and order of execution:
There once was a very big bank that had to reload customer accounts from backup. The most recent and tiniest of transactions, a bank charge of a few pennies, was not reloaded into several customer accounts. All other historical transactions were recovered, and IT was satisfied that all ending balances were correct.
A very conscientious customer had kept, printed and filed every monthly statement from the bank for years. It was satisfying for them to see each ending balance transferred to the next month, all transactions present and accounted for, and a new correct ending balance drawn from the beginning balance and the monthly activity.
WHOOPS! The online new month’s balance now suddenly does not agree with the last printed statement’s ending balance!
And the online system now showed new and different beginning and ending balances for the previous month, the month before that, and so on.
There was no explanation. No transaction was present that accounted for the missing few pennies and no obvious reason why all online records by month were now retroactively different from the printed and saved monthly statements!
What do you think happened?
Perhaps a programmer reasoned, and built the online system, like this:
IF New Balance = Old Balance plus Transaction data,
THEN Old Balance = New Balance minus Transaction data.
It is a common mistake to think we can always cleverly recreate the past state of a database or system. That is most likely the banking programmer’s error in logic. Bank ledgers are unidirectional. Calculations always proceed from closed accounting period to closed accounting period. Here, it is not just the order in which code is executed; it is the order in which data is handled that produces correct and auditable results.
Conclusion
Banking systems must retain the past state of customer accounts, and not “reverse-engineer” past ending balances. The same is true for many business processes where programming must take and retain snapshots of the “current state” of the operation.
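As a sketch of the difference, with invented amounts mirroring the story above: reverse-derived balances silently shift history when one old transaction is lost in a restore, while stored period-ending snapshots do not.

```python
# Sketch of the two ledger designs from the bank story. Amounts are invented.

transactions = [500.00, -120.00, 0.03, 75.00]  # 0.03 = the tiny bank charge

# Correct design: snapshot each closed period's ending balance as it closes.
snapshots, balance = [], 0.0
for t in transactions:
    balance += t
    snapshots.append(round(balance, 2))

# The restore drops the tiny charge; the flawed design then re-derives the
# past by walking surviving transactions backward from the new total:
# "Old Balance = New Balance minus Transaction data".
surviving = transactions[:2] + transactions[3:]
restored_total = sum(surviving)
derived_prior = restored_total - surviving[-1]

print(f"printed statement / stored snapshot: {snapshots[-2]:.2f}")  # 380.03
print(f"online reverse-derived balance:      {derived_prior:.2f}")  # 380.00 -- WHOOPS
```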
Solutions to these and other kinds of logic issues will not be found in a programming language quick reference. They come from experience and a willingness to think and understand more about what you are asking the system to do, in addition to how you ask the language to do it.
How do you diagnose process control problems?
First, make sure you understand the process!
[See this example: Ceramic Manufacturing Process in 10 Steps (khatabook.com)]
Starting with ore (Rocks!), we organize the ceramic process into a few mental “buckets”, then further break down each process step to a desired level of detail.
Ore->(Procurement & Beneficiation->Mixing & Forming->Green Machining->Drying & Sintering->Glazing & Firing)->Ceramic Part.
Like geographic maps that cover countries, states, counties and cities with only four levels of detail, these mental “maps” that we make of a process help us to “zoom in or zoom out” on particular aspects as we begin to problem solve.
We now have a context for data collection, true process control (not “product” control) and capability assessment.
Next, you can think of each part of the process as a “test point”. At each level of detail, add the appropriate metrics and measures such as particle size, viscosity, machining tool selection, speeds, feeds, dimensional targets and tolerances, moisture content, temperature, and so on. If you haven’t been tracking these key metrics, better late than never! Use this past and current process control data to determine exactly when and where things changed.
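As a sketch, the same idea can be written down as a simple data structure: each step is a test point that carries its own metrics, and “zooming in” is just descending a level. The step names follow the outline above; the metric lists are examples, not a complete specification.

```python
# The ceramic process as nested "maps": each step is a test point carrying
# its own metrics. Step names follow the outline above; metrics are examples.
process = {
    "Procurement & Beneficiation": ["particle size", "ore purity"],
    "Mixing & Forming":            ["viscosity", "moisture content"],
    "Green Machining":             ["tool selection", "speeds", "feeds"],
    "Drying & Sintering":          ["temperature profile", "shrinkage"],
    "Glazing & Firing":            ["kiln temperature", "glaze thickness"],
}

for step, metrics in process.items():
    print(f"{step}: track {', '.join(metrics)}")
```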
A “cause” is a thing – a noun. You are searching for some -thing- that is present, missing, or behaving differently. Good process control data tells you when and where to look.
The tough problems are the ones with interactions – multiple causes. The really tough problems are sometimes caused by a neighboring process instance that shares plumbing, power or other facilities infrastructure with the process in trouble.
AI In The Recruitment Process
A client we once helped began by complaining, “Our customers don’t follow our processes.”
For customers, at least, processes should be like vending machines. They do not need to know what makes all the sounds as the soft drink drops. I don’t have the source at hand, but here is a great and relevant quote: “Customers do not need to know what goes on behind the door marked EMPLOYEES ONLY.”
To some extent, this attitude should be extended to employees. How hard is it to schedule shift swaps? Vacation days? Of course, this is a big selling point for the latest HR automation tools.
However, AI classification algorithms sort resumes, not people. In my quality management classes, I have taught and reminded participants: “The Measure is not the Metric”.
Like any automation attempt:
1. You must understand in principle how to perform the work manually if AI is to be successful.
2. You must periodically audit the automated process from the ground up.
See: Operation Improvement Inc. » Automation Technology & Business Operations
Every process has a product and a by-product.
Early in my consulting work, we were doing a series of Quality training classes at a plant in Tennessee. My associate, James, asked the class about “scrap”. The class said, “We don’t have a scrap problem,” and James held up a little tip of rubber, a trimming from a finished product that added up to a significant quantity every month.
Every process has a product and a by-product. The “product” of an operation is often subjected to inspections, reporting and analysis that we call “burnt biscuit reports”. Even if a facility is fortunate and has no “burnt biscuits”, there are still opportunities to learn and improve by, for example, examining the by-products of production.
Of course, Statistical Process Control looks at the processes that -make- the biscuits. It moves the focus from biscuits to baking. When we are “baking”, “mixing”, “pouring”, or “cooling”, we want metrics that quantify critical aspects of those actions in addition to those of the product.
Finally, the input of every process is the output of a predecessor. A “Walkabout” style visualization pulls together the important relationships between incoming product, process, outgoing product metrics, and by-product! We describe it as a way to “learn, share, teach and improve”.
Some call this style of documentation a “functional block diagram”. It is really just our early development of what is today called the “value stream”. Our use of the technique began with its traditional role in “divide and conquer” signal tracing in the electronics industry. We then adapted it to latex form and mold manufacturing, hydraulic valve machining and assembly, chemical baths and plating, IT support processes, call center/telephony processes, and more.
If you find this approach helpful, check out these books: The Abbott Walkabout Series & Thirtysevenideas.com
What is It? Where is It? What Does It Do?
How Does It Work?
Automation Technology & Business Operations
Process automation can and does mean anything from data collection and repetitive manufacturing or maintenance task execution to service organization reporting, software testing and deployment, and more.
For operations monitoring in particular, and for automation in general, there are three principles that should be followed as a guide to success.
- To successfully automate a task, you must understand in principle how to perform the work manually. With data collection, you need to have a clear picture of the thing or activity, its attribute, the type and units of measure, and the measurement method. Automation is a tool that allows one to perform the work of many.
- You must keep the focus on the ‘why’ of automating a manual process. In the case of data collection, does automation produce information that is more timely, more accurate and less expensive than other means?
- You must periodically audit the automated process from the ground up. With data collection, you need to audit the flow of data from the point measurement takes place, up through any rollups, summary statistics and reporting. (See #1)
Finally, with operations monitoring automation, remember that the goal is conceptual understanding and ideas that are useful/actionable. Don’t ‘bury the lead’ in pages and pages of unnecessary reporting, data analysis and out-of-context graphics.
Check out the Keyence.com case studies library for more on Automation Tech: https://www.keyence.com/solutions/case-studies/
Software Project Management Callback
WATERFALL PROJECT MANAGEMENT
“Waterfall” project management is a style of management best suited to the assembly and integration of proven processes and skills. In residential contracting, we work from plans, measure twice and cut once. The materials, techniques and skills required are known quantities. The quantities of steel, concrete, wiring, plumbing and fixtures can be estimated from the plans and established labor ratios can be applied to quantify the human component.
The challenge in these kinds of projects is wise management of critical path, and the ability to adapt to early and late delivery, price surprises, material defects, resource scheduling conflicts, human error and project changes. In “waterfall”, the project manager is given an objective expressed as performance, time and cost and must optimize these three variables while considering project risk.
As a tactical manager, the “go-to” skill in the face of challenges is the ability to re-deploy the available resources when and where they are needed! The hardest lesson for some is that attempting to schedule every task ASAP can make a project take longer!
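To make the critical path point concrete, here is a minimal sketch with an invented construction task graph: the longest dependency chain sets the project duration, and only work on that chain can shorten it, which is why pulling every task earlier does not by itself make the project finish sooner.

```python
# Minimal critical-path sketch. Tasks and durations are invented.
tasks = {                 # task: (duration_days, prerequisites)
    "plans":      (5, []),
    "foundation": (7, ["plans"]),
    "framing":    (10, ["foundation"]),
    "plumbing":   (4, ["framing"]),
    "wiring":     (3, ["framing"]),
    "drywall":    (5, ["plumbing", "wiring"]),
}

finish = {}               # earliest finish time per task
def earliest_finish(t):
    if t not in finish:
        dur, prereqs = tasks[t]
        finish[t] = dur + max((earliest_finish(p) for p in prereqs), default=0)
    return finish[t]

duration = max(earliest_finish(t) for t in tasks)

# Walk back from the latest-finishing task to recover the critical path.
path, current = [], max(tasks, key=lambda t: finish[t])
while current:
    path.append(current)
    prereqs = tasks[current][1]
    current = max(prereqs, key=lambda p: finish[p]) if prereqs else None

print(f"project duration: {duration} days")
print("critical path:", " -> ".join(reversed(path)))
```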
Software project management is different. By software development, I am not referring to the low-code/no-code projects where workflows are tweaked, data is recoded, and BI engine reports are created. The alternative to waterfall is certainly an option for these kinds of assignments, particularly the first time they are attempted. Software development, as meant here, is code and database creation, and integration with other pieces of software.
ITERATIVE DEVELOPMENT
The conceptual alternative to “waterfall” can generically be termed “iterative development” and it is an approach that is decades old. It predates computer technology and code creation, and has many applications outside of IT.
Frederick Brooks, the well-known UNC computer scientist and thought leader, had some success with waterfall techniques when he redesigned the roles of a lean programming team. His original book, The Mythical Man-Month, is a classic of software development and gave us, among other guiding principles, this one: “Adding manpower to a late software project makes it later.”
In subsequent editions of the book 20 years later, he explored the concept of iterative development. In simplest terms, it builds on the lean programming team model and adds iterative development constraints. A development cycle is a short, relatively fixed time period in which an application is constructed with only the functionality that the time budget allows. He calls this “growing a program”. This approach turns learning, discovery and invention into a manageable process.
At each stage of development, the result is both a working product, and a prototype for the next iteration. With this approach, nuances of a new language or strengths/limitations of a chosen framework of building block functions can be explored and adjustments can be made as necessary.
Project planning is developmental. Much like curriculum development for first, fifth and twelfth grade education, project planning has a vision, or outline, for what the product could look like at different stages of development. You can see that this lean style of iterative project management is “baked into” the DevOps and CI/CD methods and tools offered by AWS cloud services.
With either “waterfall” or “iterative” management, one wants control and desires to avoid the pitfall of the illusion of control. Project management skeptics who are aware of such illusions sometimes ask: why plan at all?
I ask them to imagine productivity if we planned 100% of the time (the answer is none), and then I make the case that activity without performance-time-cost objectives will not deliver value. What is the value of the perfect wedding cake delivered a week after the wedding? What if the cake is cheap and on-time but looks and tastes awful?
Without a sense of where we are going, where we are, and an idea of how to get from here to there, the risk of failure is almost certain. A sweet spot of planning and control lies somewhere between no plan at all and an obsessive attempt at over-control. However, when we encounter those things we cannot control, we realize that plans must change; and that is OK.
An interesting Twitter (“X”) post had detailed criticisms of Scrum and other recent attempts to build management layers on the thirty-year-old lean development framework of Dr. Brooks. The writer did not reference The Mythical Man-Month, but the book certainly came to my mind when I read the online comments and replies.
On reflection, it also made me remember the advice, “…things I can control, the things I can’t. The wisdom to know the difference.”