
Just One More Person…

November 3, 2023

Many times, a team will make the case that they need “just one more person”, but when pressed for details, the exact role is poorly defined. So next comes the attempt to be specific.

A committee is then formed to generate job requirements. Someone might start with a Word and Excel requirement and someone else will suggest that C# programming or a master’s degree would be nice. It’s resume poker. “See and Raise”.

Instead, back up and take a look at the business processes and match roles and skills to the value stream.

In environments where there is a progression of task complexity, create a career path that lets new associates make an early contribution without umpteen weeks of training, and then let them advance as they continue to learn the operation.

When All You Have Is A Hammer…

October 19, 2023

In every line of work, people strive to conceptualize, to mentally simplify work. We want to understand. We want reminders of how things work. We want to continue to learn and to improve. Diagrams, drawings and pictures are tools we all use for this very purpose.

There is more than one way to approach this, to visualize relationships between time, dependencies, roles, inputs, actions, cost and results. Look at these four examples and ask yourself, what is the right tool for the job?

When do we need a flowchart approach with loops and branches?

When do we need a matrix of “as is” business silos on which to analyze some new or existing workflow?

Layering: A New Approach to Business Process Mapping

When is a Gantt chart and a critical path analysis required?

When is the “front line worker” perspective (a dependency diagram and decision table) appropriate?

Operation Improvement Inc. » Every process has a product and a by-product.

What is the Goal? Who is the audience? How does the visualization really help you to visualize the process and its relationship to the product, any decisions and actions, and any relevant metrics and measures? Don’t make your analysis about paperwork. Make it about understanding and communicating.

You may have a favored approach, but you need to consider that not everything is a nail.

Future-Proof Your Programming Skills

October 7, 2023

If you started with Java when it was the ‘new kid’, and then moved into Node.js, Rust or Python, you have come to know that the language you are using today may not be the one you are using in two or three years. Structured programming, objects, encapsulation, open-source frameworks and packages are common features of all modern platforms. So, when we migrate to a new environment, we expect to find these familiar constructs along with a new twist.

Read on, and you will see how developing your logical thinking skills is actually your core competency.

As we all come “up to speed” in a new development environment, we reach for familiar job aids in the first two or three weeks. A good code editor will “grammar check” your code and helpfully fill in key reserved words. Syntax cheat sheets and code examples will get you rapidly oriented in your new language.

In quick references, you will find the hundred or so built-in functions that do the usual type conversions, string and numeric manipulations and so on. Are they implemented “SQL” style, or are they called as an object’s method? Is code structure defined with keywords beginning and ending code blocks, or does indentation create structure? Can I create a data type and assign it values, or am I limited to traditional data types? How does it handle JSON or XML? Where are the database APIs?
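As an illustration (not from the original post), here is how a few of those questions answer out in one particular language, Python: conversions are built-in functions, string manipulation is method-style, structure comes from indentation rather than begin/end keywords, and JSON lives in a standard-library module.

```python
import json

# Built-in conversions are function-style...
n = int("42")

# ...while string manipulation is method-style, called on the object.
shout = "hello".upper()

# Code structure is defined by indentation, not begin/end keywords.
if n > 0:
    sign = "positive"
else:
    sign = "non-positive"

# JSON is handled by a standard-library module, not language syntax.
payload = json.loads('{"count": 42}')
```

A different language would answer each question differently; the point is that a quick reference resolves all of these in an afternoon.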

None of these “quick start” tools will help you to debug one of the most common logic mistakes in programming. The order in which code is executed matters, and you must learn how to control it.

The Beginner Debugging Challenge

When we read code, the default assumption is that code is executed in the order presented, and this is often true. Beginner mistakes include such things as attempting arithmetic on an undefined variable and missing code dependencies. Language nuance does matter a bit here. Do we declare a variable or object before using it, or is it implicitly declared and typed on first use?
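A minimal Python sketch of this class of mistake (the function names here are hypothetical, for illustration only):

```python
# A classic beginner bug: arithmetic on a name that was never defined.
# Python raises a NameError at run time; some languages instead supply a
# silent default such as 0, null, or undefined.
def total_with_tax(price):
    try:
        return price + tax  # 'tax' is never defined anywhere
    except NameError:
        return None

# The fix: make every dependency explicit before it is used.
def total_with_tax_fixed(price, tax_rate=0.07):
    tax = price * tax_rate
    return price + tax
```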

The Intermediate Debugging Challenge

In many languages, code is not always executed in the order presented. The simplest example is a nested and not-so-obvious break from a loop or a function. A more complex example is the “callback function”, a bit of code that is triggered upon the completion of another piece of code.
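A small Python sketch of the callback pattern (fetch_data and the other names are hypothetical):

```python
# A callback is a function handed to another routine and invoked when that
# routine finishes, so execution order no longer matches reading order.
def fetch_data(on_done):
    data = [1, 2, 3]   # imagine a slow download happening here
    on_done(data)      # the callback fires only after the work completes

results = []
fetch_data(lambda data: results.append(sum(data)))
# 'results' was filled inside the callback, not on the line that defines it.
```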

The next debugging challenge in the intermediate category is event handling code – code that is triggered by a change to system state. In the cloud it could be code that is triggered when a file is added to a directory. In integration with real-world sensors and devices it could be code that is triggered by light, movement, a magnetic sensor, and so on.
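Here is a minimal, hypothetical event-handler sketch in Python; the on/fire names are invented for illustration, but the principle is the same as a cloud file trigger or a hardware sensor:

```python
# Handlers are registered up front but run only when their event fires,
# which can be long after (and far away from) where they appear in source.
handlers = {}

def on(event, handler):
    """Register a handler for a named event."""
    handlers.setdefault(event, []).append(handler)

def fire(event, payload):
    """Invoke every handler registered for the event."""
    for handler in handlers.get(event, []):
        handler(payload)

seen = []
on("file_added", lambda path: seen.append(path))
fire("file_added", "/incoming/report.csv")  # simulates the state change
```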

The most challenging entry in this category is concurrency and “race condition” issues. When the language and platform permit parallel execution of code, one routine may complete too soon, and “turn out the lights and put out the cat” before another routine has completed its work. Or, two or more routines may compete for a shared resource and “gridlock” the application so that no routine can proceed.
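A small illustration of the shared-resource problem, using Python’s standard threading module:

```python
import threading

# Four threads perform read-modify-write on a shared counter. The lock
# serializes the critical section so no increment is lost; without it,
# interleaved updates can silently drop counts.
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # remove this and the final count may fall short
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held for each update, counter is exactly 40,000 every run.
```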

Even non-procedural tools like SQL are vulnerable to concurrency logic issues. SQL logic grows into scripts. Scripts become stored procedures. Stored procedures become multiple “cron” jobs that are released into systems on independent schedules. Suddenly, completion order and race conditions are again in play, and must be sorted out.

The Advanced Debugging Challenge

Here is a little true story that illustrates an advanced issue of logic and order of execution:

There once was a very big bank that had to reload customer accounts from backup. One most recent and tiny transaction, a few pennies of a bank charge, was not reloaded into several customer accounts. All other historical transactions were recovered, and IT was satisfied that all ending balances were correct.

A very conscientious customer had kept, printed and filed every monthly statement from the bank for years. It was satisfying for them to see each ending balance transferred to the next month, all transactions present and accounted for, and a new correct ending balance drawn from the beginning balance and the monthly activity.

WHOOPS! The online new month’s balance now suddenly does not agree with the last printed statement’s ending balance!

And the online system has new and different beginning and ending balances for the previous month, and the month before that, and so on.

There was no explanation. No transaction was present that accounted for the missing few pennies and no obvious reason why all online records by month were now retroactively different from the printed and saved monthly statements!

What do you think happened?

Perhaps a programmer reasoned like this when building the online system:

IF New Balance = Old Balance plus Transaction data,
THEN Old Balance = New Balance minus Transaction data.

It is a common mistake to think we can cleverly and always recreate past state of a database or system. That is most likely the error in logic for the banking programmer. Bank ledgers are uni-directional. Calculations always proceed from closed accounting periods to closed accounting periods. Here, it is not just the order in which code is executed, it is the order in which data is handled for correct and auditable results.


Banking systems must retain the past state of customer accounts, and not “reverse-engineer” past ending balances. The same is true for many business processes where programming must take and retain snapshots of the “current state” of the operation.
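A hypothetical sketch of the difference, in Python. The figures are invented, but they show how recomputing a closed period from restored data shifts every prior balance, while a stored snapshot stays auditable:

```python
# Hypothetical ledger: snapshot each month's ending balance when the period
# closes, and never recompute a closed balance from later state.
transactions = {"Jan": [100, -30], "Feb": [50, -5], "Mar": [-20]}

snapshots, balance = {}, 0
for month, txns in transactions.items():
    balance += sum(txns)
    snapshots[month] = balance      # retained once the period closes

# A 5-unit February transaction is lost during a restore from backup.
transactions["Feb"].remove(-5)
restored_total = sum(sum(t) for t in transactions.values())   # now 100

# "Reverse-engineering" February's ending balance from the restored state
# walks the error backward into every prior month:
feb_recomputed = restored_total - sum(transactions["Mar"])    # 120

# The stored snapshot still holds the true, auditable figure of 115.
```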

Solutions to these and other kinds of logic issues will not be found in a programming language quick reference. They come from experience and a willingness to think and understand more about what you are asking the system to do, in addition to how you ask the language to do it.

How do you diagnose process control problems?

October 6, 2023

First make sure we understand the process!
[See this example: Ceramic Manufacturing Process in 10 Steps]

Starting with ore (Rocks!), we organize the ceramic process into a few mental “buckets”, then further break down each process step to a desired level of detail.

Ore->(Procurement & Beneficiation->Mixing & Forming->Green Machining->Drying & Sintering->Glazing & Firing)->Ceramic Part.

Like geographic maps that cover countries, states, counties and cities with only four levels of detail, these mental “maps” that we make of a process help us to “zoom in or zoom out” on particular aspects as we begin to problem-solve.

We now have a context for data collection, true process control (not “product” control) and capability assessment.

Next, you can think of each part of the process as a “test point”. At each level of detail, add the appropriate metrics and measures such as particle size, viscosity, machining tool selection, speeds, feeds, dimensional targets and tolerances, moisture content, temperature, and so on. If you haven’t been tracking these key metrics, better late than never! Use this past and current process control data to determine exactly when and where things changed.

A “cause” is a thing, a noun. You are searching for some -thing- that is present, missing, or behaving differently. Good process control data tells you when and where to look.

The tough problems are the ones with interactions – multiple causes. The really tough problems are sometimes caused by a neighboring process instance that shares plumbing, power or other facilities infrastructure with the process in trouble. 

AI In The Recruitment Process

September 20, 2023

A client we once helped began by complaining, “Our customers don’t follow our processes.”

For customers, at least, processes should be like vending machines. They do not need to know what makes all the sounds as the soft drink drops. I don’t have the source at hand, but here is a great and relevant quote: “Customers do not need to know what goes on behind the door marked EMPLOYEES ONLY.”

To some extent, this attitude should be extended to employees. How hard is it to schedule shift swaps? Vacation days? Of course, this is a big selling point for the latest HR automation tools.

However, AI classification algorithms sort resumes, not people. In my quality management classes, I have taught and reminded participants that: “The Measure is not the Metric”. 

Like any automation attempt:

1. You must understand in principle how to perform the work manually if AI is to be successful.

2. You must periodically audit the automated process from the ground up. 

See: Operation Improvement Inc. » Automation Technology & Business Operations

Every process has a product and a by-product.

September 18, 2023

Early in my consulting work, we were doing a series of Quality training classes at a plant in Tennessee. My associate, James, asked the class about “scrap”. The class said we don’t have a scrap problem, and James held up a little tip of rubber – a trimming from a finished product that amounted to a significant quantity every month.

Every process has a product and a by-product. The “product” of an operation is often subjected to inspections, reporting and analysis that we call “burnt biscuit reports.” Even if a facility is fortunate and has no “burnt biscuits”, there are still opportunities to learn and improve by, for example, examining the by-products of production.

Of course, Statistical Process Control looks at the processes that -make- the biscuits. It moves the focus from biscuits to baking. When we are “baking”, “mixing”, “pouring”, “cooling”, we want metrics that quantify critical aspects of those actions in addition to those of the product.

Finally, the input of every process is the output of a predecessor. A “Walkabout” style visualization pulls together the important relationships between incoming product, process, outgoing product metrics, and by-product! We describe it as a way to “learn, share, teach and improve”.

Some call this style of documentation a “functional block diagram”. It is really just our early development of what is today called “value stream”. Our use of the technique began by understanding its traditional use for “divide and conquer” signal tracing in the electronics industry. We then adapted it to latex form and mold manufacturing, hydraulic valve machining and assembly, chemical baths and plating, IT support processes, call center/telephony processes, and more.

If you find this approach helpful, check out these books: The Abbott Walkabout Series &

What is It? Where is It? What Does It Do?

How Does It Work?

Automation Technology & Business Operations

September 14, 2023

Process automation can and does mean anything from data collection to repetitive manufacturing or maintenance task execution, service organization reporting, software testing and deployment, and more.

For operations monitoring in particular, and for automation in general, there are three principles that should be followed as a guide to success.

  1. To successfully automate a task, you must understand in principle how to perform the work manually. With data collection, you need to have a clear picture of the thing or activity, its attribute, the type and units of measure, and the measurement method. Automation is a tool that allows one to perform the work of many.
  2. You must keep the focus on the ‘why’ of automating a manual process. In the case of data collection, does automation produce information that is more timely, more accurate and less expensive than other means?
  3. You must periodically audit the automated process from the ground up. With data collection, you need to audit the flow of data from the point measurement takes place, up through any rollups, summary statistics and reporting. (See #1)

Finally, with operations monitoring automation, remember that the goal is conceptual understanding and ideas that are useful/actionable. Don’t ‘bury the lead’ in pages and pages of unnecessary reporting, data analysis and out-of-context graphics.

Check out the case studies library for more on Automation Tech:

Software Project Management Callback

August 31, 2023


“Waterfall” project management is a style of management best suited to the assembly and integration of proven processes and skills. In residential contracting, we work from plans, measure twice and cut once. The materials, techniques and skills required are known quantities. The quantities of steel, concrete, wiring, plumbing and fixtures can be estimated from the plans and established labor ratios can be applied to quantify the human component.

The challenge in these kinds of projects is wise management of critical path, and the ability to adapt to early and late delivery, price surprises, material defects, resource scheduling conflicts, human error and project changes. In “waterfall”, the project manager is given an objective expressed as performance, time and cost and must optimize these three variables while considering project risk.

As a tactical manager, the “go-to” skill in the face of challenges is the ability to re-deploy the available resources when and where they are needed! The hardest lesson for some is that attempting to schedule every task ASAP can make a project take longer!

Software project management is different. By software development, I am not referring to the low-code/no-code projects where workflows are tweaked, data is recoded, and BI engine reports are created. The alternative to waterfall is certainly an option for these kinds of assignments, particularly the first time they are attempted. Software development here means code and database creation and integration with other pieces of software.


The conceptual alternative to “waterfall” can generically be termed “iterative development” and it is an approach that is decades old. It predates computer technology and code creation, and has many applications outside of IT.

Frederick Brooks, the well-known UNC computer scientist and thought leader, had some success with waterfall techniques when he redesigned the roles of a lean programming team. His original book, The Mythical Man-Month, is a classic of software development, and gave us this guiding principle, among others: “Adding programmers to a late project will make it later.”

In subsequent editions of the book, twenty years later, he explored the concept of iterative development. In simplest terms, it builds on the lean programming team model and adds iterative development constraints. A development cycle is a short, relatively fixed time period in which an application is constructed with only the functionality that the time budget allows. He calls this “growing a program”. This approach turns learning, discovery and invention into a manageable process.

At each stage of development, the result is both a working product and a prototype for the next iteration. With this approach, nuances of a new language or strengths/limitations of a chosen framework of building-block functions can be explored, and adjustments can be made as necessary.

Project planning is developmental. Much like curriculum development for first-, fifth- and twelfth-grade education, project planning has a vision, or outline, for what the product could look like at different stages of development. You can see that this lean style of iterative project management is “baked into” the DevOps and CI/CD methods and tools offered by AWS cloud services.

With either “waterfall” or “iterative” management, one wants control and desires to avoid the pitfall of the illusion of control. Project management skeptics who are aware of such illusions sometimes ask: why plan at all?

I ask them to imagine productivity if we planned 100% of the time (the answer is none), and then I make the case that activity without performance-time-cost objectives will not deliver value. What is the value of the perfect wedding cake delivered a week after the wedding? What if the cake is cheap and on time but looks and tastes awful?

Without a sense of where we are going, where we are, and an idea of how to get from here to there, the risk of failure is almost certain. A sweet spot of planning and control is somewhere in between no plan and an obsessive attempt at over-control. However, when we encounter those things we cannot control, we realize that plans must change; and that is ok.

An interesting Twitter (“X”) post had detailed criticisms of SCRUM and other recent attempts to build management layers on the thirty-year-old lean development framework of Dr. Brooks. The Twitter writer did not reference The Mythical Man-Month, but the book certainly came to my mind when I read the online comments and replies.

On reflection, it also made me remember the advice, “…things I can control, the things I can’t. The wisdom to know the difference.”

A Picture…One Thousand Words: Process

August 28, 2023

How do you identify the most important business processes?

August 23, 2023

At the highest level, a business is defined by its goals and strategies. (Ends and Means) Within that context, a process is a plan for creating a capability, a potential to produce. When executed, a good process plan delivers the correct product, consistently with measurable results.

The product could be that “widget” every book mentions: “we can produce 1000 widgets an hour from iron castings”. Product could even be an intangible like electric power or bandwidth: “we can provide energy for a 100-kilowatt demand for up to 8 hours out of 24 from sunlight”.

Note the words “can” and “from”. Process improvement simply means improving our ability to make more with less. The “less” could mean less raw material, less expensive equipment, less time, less risk or less waste. (Every process has a product and a by-product.) Improvement only happens by accident, or when we learn something. The reverse is also true. Process performance worsens sometimes by accident or when we forget. Accidental improvements are temporary. Knowledge is the key.

So, the question of most important process takes us in three directions.

First: It is good advice to “play to one’s strengths.” What are our strengths? What are the core competencies that we can develop and improve? What are those processes that should never be outsourced? Those are most certainly highest in importance.

Second: The Sustainment process. An important component of operations management is continual training: reviewing methods, the correct use of tools and technology, safety protocols, and so on. Professional athletes, performers and musicians never stop practicing and rehearsing.

Third: The Learning process. Since accidents do happen, an incident should trigger a learning process. Was the consequence of the accident a good thing or a bad thing? Why did the accident happen? I like a simple classic approach to causal analysis.

A. Identify the immediate cause. – (The roadway nail that caused a flat tire.)

B. Look for a causal chain of events. – (The carpentry truck ahead with the open toolbox. The speed bump. The speed at which the carpenter drives.)

C. Examine human action. What choices and decisions could have been, and will be different? – (Lock the toolbox. Slow down for speed bumps.)

Finally, don’t wait for accidents to happen. A fire drill is sustainment. An experiment-day is a controlled and managed “accident”, a way for us to learn what makes a process better or worse.