AI In The Recruitment Process

September 20, 2023

A client we once helped began by complaining, “Our customers don’t follow our processes.”

For customers, at least, processes should be like vending machines. They do not need to know what makes all the sounds as the soft drink drops. I don’t have the source at hand, but here is a great and relevant quote: “Customers do not need to know what goes on behind the door marked EMPLOYEES ONLY.”

To some extent, this attitude should be extended to employees. How hard is it to schedule shift swaps? Vacation days? Of course, this is a big selling point for the latest HR automation tools.

However, AI classification algorithms sort resumes, not people. In my quality management classes, I have taught and reminded participants that “The Measure is not the Metric”.

Like any automation attempt:

1. You must understand in principle how to perform the work manually if AI is to be successful.

2. You must periodically audit the automated process from the ground up. 

See: Operation Improvement Inc. » Automation Technology & Business Operations

Every process has a product and a by-product.

September 18, 2023

Early in my consulting work, we were doing a series of quality training classes at a plant in Tennessee. My associate, James, asked the class about “scrap”. The class said, “We don’t have a scrap problem,” and James held up a little tip of rubber – a trimming from a finished product that amounted to a significant quantity every month.

Every process has a product and a by-product. The “product” of an operation is often subjected to inspections, reporting and analysis that we call “burnt biscuit reports.” Even if a facility is fortunate and has no “burnt biscuits”, there are still opportunities to learn and improve by, for example, examining the by-products of production.
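James’s point can be made with back-of-the-envelope arithmetic: a tiny trim per part, multiplied across monthly volume, becomes a significant by-product stream. A minimal sketch in Python, with hypothetical weights and volumes:

```python
# The "little tip of rubber" per part, scaled to monthly production.
# All figures here are hypothetical, for illustration only.
trim_grams_per_part = 1.5
parts_per_month = 200_000

scrap_kg_per_month = trim_grams_per_part * parts_per_month / 1000
print(f"monthly trim scrap: {scrap_kg_per_month:.0f} kg")
```

At these assumed figures, the “no scrap problem” plant is quietly discarding hundreds of kilograms of rubber a month.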

Of course, Statistical Process Control looks at the processes that -make- the biscuits. It moves the focus from biscuits to baking. When we are “baking”, “mixing”, “pouring” or “cooling”, we want metrics that quantify critical aspects of those actions in addition to those of the product.
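The shift from biscuits to baking can be sketched in a few lines: chart the process metric itself, with classic three-sigma control limits. The oven-temperature readings below are hypothetical, and this is only a minimal illustration of the idea, not a full SPC implementation.

```python
# Chart the process ("baking"), not just the product.
# Hypothetical oven temperature readings from consecutive bakes.
temps = [348, 351, 350, 347, 352, 349, 353, 350, 348, 351]

mean = sum(temps) / len(temps)
# Sample standard deviation
var = sum((t - mean) ** 2 for t in temps) / (len(temps) - 1)
std = var ** 0.5

# Classic three-sigma control limits
ucl = mean + 3 * std
lcl = mean - 3 * std

out_of_control = [t for t in temps if t < lcl or t > ucl]
print(f"mean={mean:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}, signals={out_of_control}")
```

A reading outside the limits signals that the baking process itself has changed – before a single burnt biscuit shows up on a report.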

Finally, the input of every process is the output of a predecessor. A “Walkabout”-style visualization pulls together the important relationships between incoming product, process, outgoing product metrics, and by-product. We describe it as a way to “learn, share, teach and improve”.

Some call this style of documentation a “functional block diagram”. It is really just our early development of what is today called “value stream” mapping. Our use of the technique began with its traditional use for “divide and conquer” signal tracing in the electronics industry. We then adapted it to latex form and mold manufacturing, hydraulic valve machining and assembly, chemical baths and plating, IT support processes, call center/telephony processes, and more.

If you find this approach helpful, check out the books of The Abbott Walkabout Series:

What is It? Where is It? What Does It Do?

How Does It Work?

Automation Technology & Business Operations

September 14, 2023

Process automation can and does mean anything from data collection to repetitive manufacturing or maintenance task execution, service organization reporting, software testing and deployment, and more.

For operations monitoring in particular, and for automation in general, there are three principles that should be followed as a guide to success.

  1. To successfully automate a task, you must understand in principle how to perform the work manually. With data collection, you need to have a clear picture of the thing or activity, its attribute, the type and units of measure, and the measurement method. Automation is a tool that allows one to perform the work of many.
  2. You must keep the focus on the ‘why’ of automating a manual process. In the case of data collection, does automation produce information that is more timely, more accurate and less expensive than other means?
  3. You must periodically audit the automated process from the ground up. With data collection, you need to audit the flow of data from the point measurement takes place, up through any rollups, summary statistics and reporting. (See #1)
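Principle #3 can be made concrete with a small sketch: recompute a published rollup directly from the raw measurements and flag any disagreement. The records and the reported figure below are hypothetical; a real audit would walk every stage of the data flow, not just one.

```python
# A minimal "ground up" audit: recompute a rollup from raw measurements
# and compare it to what the reporting layer published.
# Records and the published total are hypothetical.
raw_records = [
    {"station": "press-1", "reading": 12.0},
    {"station": "press-1", "reading": 11.5},
    {"station": "press-2", "reading": 12.2},
]

published_total = 35.7  # value shown on the summary report

recomputed = sum(r["reading"] for r in raw_records)
if abs(recomputed - published_total) > 0.01:
    print(f"AUDIT FLAG: raw sum {recomputed} != reported {published_total}")
else:
    print("Rollup matches the raw data.")
```

The same pattern repeats up the chain: each summary statistic is re-derived from the layer beneath it, all the way down to the point where measurement takes place.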

Finally, with operations monitoring automation, remember that the goal is conceptual understanding and ideas that are useful/actionable. Don’t ‘bury the lead’ in pages and pages of unnecessary reporting, data analysis and out-of-context graphics.

Check out the case studies library for more on automation technology.

Software Project Management Callback

August 31, 2023


“Waterfall” project management is a style of management best suited to the assembly and integration of proven processes and skills. In residential contracting, we work from plans, measure twice and cut once. The materials, techniques and skills required are known quantities. The quantities of steel, concrete, wiring, plumbing and fixtures can be estimated from the plans and established labor ratios can be applied to quantify the human component.

The challenge in these kinds of projects is wise management of the critical path, and the ability to adapt to early and late deliveries, price surprises, material defects, resource scheduling conflicts, human error and project changes. In “waterfall”, the project manager is given an objective expressed as performance, time and cost, and must optimize these three variables while considering project risk.

As a tactical manager, the “go-to” skill in the face of challenges is the ability to re-deploy the available resources when and where they are needed! The hardest lesson for some is that attempting to schedule every task ASAP can make a project take longer!
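Critical-path thinking can be sketched with a toy schedule. The tasks and durations below are hypothetical; the point is that the longest dependency chain alone sets the finish date.

```python
# A toy critical-path sketch: earliest start/finish per task, derived
# from its predecessors. Tasks and durations are hypothetical.
tasks = {            # task: (duration_days, predecessors)
    "plans":    (2, []),
    "framing":  (5, ["plans"]),
    "wiring":   (3, ["framing"]),
    "plumbing": (4, ["framing"]),
    "fixtures": (2, ["wiring", "plumbing"]),
}

earliest = {}
for name, (dur, preds) in tasks.items():  # predecessors are listed first
    start = max((earliest[p][1] for p in preds), default=0)
    earliest[name] = (start, start + dur)

# The longest chain (plans -> framing -> plumbing -> fixtures) determines
# the finish; starting "wiring" as early as possible cannot shorten it.
project_end = max(end for _, end in earliest.values())
print("project length (days):", project_end)
```

Here wiring finishes on day 10 while plumbing finishes on day 11, so wiring carries a day of slack: rushing it ASAP gains nothing, and tying up shared resources on it can delay the chain that actually matters.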

Software project management is different. By software development, I am not referring to low-code/no-code projects where workflows are tweaked, data is recoded, and BI engine reports are created. The alternative to waterfall is certainly an option for these kinds of assignments, particularly the first time they are attempted. Software development, as I mean it here, is code and database creation and integration with other pieces of software.


The conceptual alternative to “waterfall” can generically be termed “iterative development”, and it is an approach that is decades old. It predates computer technology and code creation, and has many applications outside of IT.

Frederick Brooks, the well-known UNC computer scientist and thought leader, had some success with waterfall techniques when he redesigned the roles of a lean programming team. His original book, The Mythical Man-Month, is a classic of software development and gave us (among others) this guiding principle: “Adding programmers to a late project will make it later.”

In subsequent editions of the book 20 years later, he explored the concept of iterative development. In simplest terms, it builds on the lean programming team model and adds iterative development constraints. A development cycle is a short, relatively fixed time period in which an application is constructed with only the functionality that the time budget allows. He calls this “growing a program”. This approach turns learning, discovery and invention into a manageable process.

At each stage of development, the result is both a working product and a prototype for the next iteration. With this approach, the nuances of a new language, or the strengths and limitations of a chosen framework of building-block functions, can be explored and adjustments can be made as necessary.

Project planning is developmental. Much like curriculum development for first, fifth and twelfth grade education, project planning has a vision, or outline, for what the product could look like at different stages of development. You can see that this lean style of iterative project management is “baked into” the DevOps and CI/CD methods and tools offered by AWS cloud services.

With either “waterfall” or “iterative” management, one wants control and wishes to avoid the pitfall of the illusion of control. Project management skeptics who are aware of such illusions sometimes ask why plan at all.

I ask them to imagine productivity if we planned 100% of the time (the answer is none), and then I make the case that activity without performance-time-cost objectives will not deliver value. What is the value of the perfect wedding cake delivered a week after the wedding? What if the cake is cheap and on time but looks and tastes awful?

Without a sense of where we are going, where we are, and an idea of how to get from here to there, the risk of failure is almost certain. A sweet spot of planning and control lies somewhere between no plan and an obsessive attempt at over-control. However, when we encounter those things we cannot control, we realize that plans must change; and that is OK.

An interesting Twitter (“X”) post had detailed criticisms of SCRUM and other recent attempts to build management layers on the thirty-year-old lean development framework of Dr. Brooks. The writer did not reference The Mythical Man-Month, but the book certainly came to mind when I read the comments and replies.

On reflection, it also made me remember the advice: “…the things I can control, the things I can’t. The wisdom to know the difference.”

A Picture…One Thousand Words: Process

August 28, 2023

How do you identify the most important business processes?

August 23, 2023

At the highest level, a business is defined by its goals and strategies. (Ends and Means) Within that context, a process is a plan for creating a capability, a potential to produce. When executed, a good process plan delivers the correct product, consistently with measurable results.

The product could be that “widget” every book mentions: “we can produce 1,000 widgets an hour from iron castings”. The product could even be an intangible like electric power or bandwidth: “we can provide energy for a 100-kilowatt demand for up to 8 hours out of 24 from sunlight”.

Note the words “can” and “from”. Process improvement simply means improving our ability to make more with less. The “less” could mean less raw material, less expensive equipment, less time, less risk or less waste. (Every process has a product and a by-product.) Improvement only happens by accident, or when we learn something. The reverse is also true. Process performance worsens sometimes by accident or when we forget. Accidental improvements are temporary. Knowledge is the key.

So, the question of the most important process takes us in three directions.

First: It is good advice to “play to one’s strengths.” What are our strengths? What are the core competencies that we can develop and improve? What are those processes that should never be outsourced? Those are most certainly highest in importance.

Second: The sustainment process. An important component of operations management is continually training and reviewing methods, the correct use of tools and technology, safety protocols, and so on. Professional athletes, performers and musicians never stop practicing and rehearsing.

Third: The Learning process. Since accidents do happen, an incident should trigger a learning process. Was the consequence of the accident a good thing or a bad thing? Why did the accident happen? I like a simple classic approach to causal analysis.

A. Identify the immediate cause. – (The roadway nail that caused a flat tire.)

B. Look for a causal chain of events. – (The carpentry truck ahead with the open toolbox. The speed bump. The speed at which the carpenter drives.)

C. Examine human action. What choices and decisions could have been, and will be different? – (Lock the toolbox. Slow down for speed bumps.)
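The three steps above can be sketched as a simple record that a team fills in after an incident. The structure below just restates the flat-tire example from the text in Python form.

```python
# The A/B/C causal analysis, captured as a simple record.
# Values come from the flat-tire example in the text.
incident = {
    "immediate_cause": "roadway nail punctured the tire",
    "causal_chain": [
        "carpentry truck ahead had an open toolbox",
        "truck hit a speed bump",
        "carpenter was driving too fast",
    ],
    "root_cause_decisions": [  # choices that could have been different
        "lock the toolbox",
        "slow down for speed bumps",
    ],
}

for step in ("immediate_cause", "causal_chain", "root_cause_decisions"):
    print(step, "->", incident[step])
```

Keeping all three fields forces the analysis past the nail (the immediate cause) to the decisions that could have been different (the root causes).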

Finally, don’t wait for accidents to happen. A fire drill is sustainment. An experiment-day is a controlled and managed “accident”, a way for us to learn what makes a process better or worse.

A Few Thoughts about “Problem Solving”

August 10, 2023

Among other techniques, I have encouraged teams to begin an analysis with the immediate cause of a problem, and work backwards along a causal chain. This form of analysis documents two components of causality: 1) causes – things acting inevitably in accordance with their nature – and 2) decisions and choices (root causes) that could have been different.

If I imagine dealing with a mechanical product failure, one could learn about (1) the strength, durability, duty cycle, etc. of a particular material or component; or (2) the many alternatives, including different materials, that could have been chosen in the product design.

However, a thorough understanding of a problem should begin with a clear understanding of the purpose that cannot be achieved due to an obstacle. Sometimes we can become so fixated on the next immediate objective that we forget about the ultimate goal. Can we go “over, under, around”, or must we work through the problem in order to succeed?

Years ago, someone asked me for help with a spreadsheet issue. Their computer was complaining about running out of space, and they wanted to know how their computer could be expanded. I asked them how many of these huge spreadsheets they had, and the answer was one! They had -one- spreadsheet!

The rest of the story…
When the computer was new, they started their first calculation in row 1, column 1 – and kept going. It never occurred to them to open a second or third file for subsequent projects! When we talked about how multiple projects can be organized into multiple spreadsheet files, they immediately realized their “problem” was not the size of their computer – but their inexperience with the new software.

If you are experienced with office software suites and had a little laugh over this story – don’t forget that sometimes we all have these moments where enlightenment from a “Brilliant Flash of the Obvious” (a BFO) makes an apparent problem vanish and we get a clearer picture of how to proceed.

Check your assumptions!

AWS Lessons Learned

June 12, 2023

In every field of endeavor today, the need for continuous learning and self-improvement never ends.

“Hybrid” skills – knowledge that spans traditional HR silos and categories – have always created opportunities for me. So, I always take the opportunity to expand in new directions.

After my own business requirements led to dabbling in Amazon Web Services for email delivery, cloud desktop services and small cloud web servers, I decided to try the AWS certification tests, starting with what they call the “Cloud Practitioner”.

The certification programs also most certainly bring Amazon more business. For example, I quickly learned that the domain registration and name server services at Amazon were quite superior to the service I had used for years – and I have already begun the migration of the first four of many domain names to Amazon’s Route 53 service.

Next up for me is the AWS Solutions Architect certification. Fortunately, there are many hours of free preparation materials available at the Amazon web site. There is also much that can be learned from sample tests offered by third-party training organizations. These free tests are an enticement to sample their websites and consider their paid training programs.

Here are some tips for others who may want to explore Amazon AWS services and certifications.

  • Think big. Really REALLY big!

On the one hand, AWS has services for small organizations and their “only pay for what you use” pricing lets almost anyone experiment with web servers, cloud telephony, email, cloud data storage and more for tens of dollars per month and not thousands.

On the other hand, consider what would be involved in configuring and launching thousands of servers in dozens of data centers – while sharing and securing data between these centers and millions of end users. No “on premises” data center experience prepares you for the scale, or for the “wholesale” mindset you must develop to use AWS effectively for substantial workloads.

  • Don’t Panic!

AWS services are modular and are intended to be loosely connected to produce a resilient and highly available result that can be evolved in small iterations to continuously improve. However, you must first break the “code” of AWS naming conventions.

If you have programmed, you may be familiar with concepts such as “objects”, “structured code”, “event driven routines” and more. When you pick up a new programming language, these elements and common built-in functions are typically available if you can “google” the new language’s terminology for the familiar code building blocks. (Do arrays start at zero or one? Does this language use braces or brackets? Indented structure blocks of reserved words? And so on.)

AWS often coins creative names for functions and services that you must work to match up conceptually with functions you may have encountered before. On top of this new nomenclature, AWS condenses these names to acronyms, and one must remember and retain the difference between SES, SNS and SQS, or ELB, ALB and NLB, and so on. When you expand the acronym and “translate” the service name, you often find something familiar.
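The “expand the acronym” habit amounts to a small lookup table. The expansions below are the standard AWS service names for the acronyms mentioned above:

```python
# Translating AWS acronyms back to their full service names often
# reveals the familiar concept underneath (email, pub/sub, queues,
# load balancing).
aws_names = {
    "SES": "Simple Email Service",
    "SNS": "Simple Notification Service",
    "SQS": "Simple Queue Service",
    "ELB": "Elastic Load Balancing",
    "ALB": "Application Load Balancer",
    "NLB": "Network Load Balancer",
}

for acronym, full in aws_names.items():
    print(f"{acronym} = {full}")
```

Once expanded, SQS is “just” a message queue and ALB is “just” an HTTP-aware load balancer – concepts most practitioners already know.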

Of course, there is much you will find that is new and much that is arcane. Overall, you will find the AWS view of “good architecture” is logical, secure, and labor and time saving if used properly. Ask yourself, “if I were designing AWS itself – how would I do it?”

  • AWS is always changing.

The “free” sample tests and training videos on the web are not 100% accurate in their scoring. One test heavily emphasized the “access control list” for security, while AWS now prefers resource policies unless ACLs are the only option. Another test marked an answer wrong, claiming AWS did not support a particular option or feature – but that was “old news”. New features often change the way cloud applications are designed and deployed.

Training and Improvement Tools

April 28, 2023

Learn, Share, Teach, Improve

March 13, 2023

Some years ago, I saw a clever poster that diagrammed systems of the human body. It was the most complex flow diagram with dozens of boxes and lines representing the interaction of electrical and chemical activity and their effects on the whole.

This diagram was presented as a “holistic systems” view of human anatomy, but it was mind-stopping in its complexity. Such an analysis might be appropriate as input for a computer simulation, but it did not conceptualize anatomy and biological processes in a form suitable for learning, teaching and sharing knowledge.

I have seen this same kind of complexity in business. I have seen telephone routing protocols that make it impossible for a human being to predict where calls will be delivered, who will be overloaded and who will be left idle.  I have seen workflows in service and manufacturing operations with a multitude of steps, loops and IF-THEN-ELSE forks in the road that would leave workers uncertain how to proceed.

From cross-discipline experience in many different work environments we derived a different method of describing work that is suited to learn, share, teach and improve business processes.

You know that humans need conceptual structure. Sentences can’t run too long. Articles benefit from paragraphs. Social Security numbers and phone numbers are hyphenated so that they can be remembered as three things instead of 10.

For example, we think of the first three digits of a phone number as an area code. Applying a conceptual structure, grouping, and naming things of like kind is what empowers human thought and decision-making.

Did you ever try to assemble a jigsaw puzzle as a child? The first step was always to group. Pieces of a certain color were pulled aside, and in our minds we said “sky”, “grass”, “people”. Our process dependency diagrams are approached the same way.

At a high level, we group the inputs, outcomes (nouns) and process steps (verbs) into a mentally manageable number that can be presented on a single page. This “umbrella” process diagram shows the overall dependencies for a successful outcome. Each earlier process step is usually earlier in time, but it is always a prerequisite for success.

Each process step is broken down onto a detail sheet and more details about the inputs and outputs emerge as well. A process guidebook organized in this way makes it easy to access the level of detail that the user needs for their level of experience.
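The umbrella-plus-detail-sheets structure can be sketched as nested data: a single-page view of step names in dependency order, with each step expandable to its own detail. The step names and inputs/outputs below are hypothetical, loosely echoing the latex-manufacturing example earlier.

```python
# An "umbrella" view: grouped steps in dependency order, each with
# its own detail sheet of inputs and outputs. Names are hypothetical.
umbrella = {
    "Mixing":   {"inputs": ["latex", "pigment"],   "outputs": ["compound"]},
    "Molding":  {"inputs": ["compound", "mold"],   "outputs": ["raw part"]},
    "Trimming": {"inputs": ["raw part"],
                 "outputs": ["finished part", "trim scrap"]},
}

# The single-page view: just the step names, in dependency order.
print(" -> ".join(umbrella))

# Drill-down: one step's detail sheet.
print(umbrella["Trimming"])
```

Note that each step’s input is an output of its predecessor, and that the by-product (“trim scrap”) appears explicitly rather than disappearing from the record.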

Decisions are handled by referencing what we call factor tables. We either know in advance the decisions that will be made in a process or not. If not, the work is “out of process” and needs to be reviewed at a higher level; a supervisor, manager, or engineer – depending on the operation. This feedback is a crucial part of the improvement process.
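A factor table can be sketched as a simple lookup: known factor combinations map to known actions, and anything else is “out of process” and escalated. The factors and settings below are hypothetical.

```python
# A "factor table": decisions known in advance are looked up; anything
# else is out of process and must be escalated for review.
factor_table = {
    ("thick", "hot"):  "trim setting A",
    ("thick", "cold"): "trim setting B",
    ("thin",  "hot"):  "trim setting C",
}

def decide(thickness, temperature):
    key = (thickness, temperature)
    if key in factor_table:
        return factor_table[key]
    return "OUT OF PROCESS: escalate to supervisor for review"

print(decide("thick", "hot"))
print(decide("thin", "cold"))   # combination not in the table
```

The escalation path is the feedback loop described above: each out-of-process case is reviewed at a higher level and, once understood, becomes a new row in the table.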

The result: ideas about work and results are presented in a linear, easy to understand format without loops and branches. We describe work conceptually, and not in the fashion a computer requires.