Our Management Philosophy

August 19, 2024

THINK AND COMMUNICATE CLEARLY

Practice and encourage the policy of only using words and acronyms you are prepared to define. You needn’t be a surgeon to discuss brain surgery, but you should be able to define brain and surgery. If it is true that you can’t effectively manage without measuring, then you surely can’t manage what you cannot define. Define your acronyms!

BE DECISIVE
The time for action and the decision to act are two different things. The difference between Decisiveness and Impulsiveness is patient and prudent timing of action. Decisiveness is the ability to mentally adjudicate a matter so that it no longer consumes your most precious resource – your focus.

DON’T BE A BOTTLENECK
Successful follow-through takes a network of key individuals and massively parallel, well-organized activity. If you try to do everything yourself, then you will limit managed work to your personal ability to process information and make decisions.

HOLD PEOPLE ACCOUNTABLE FOR THE THINGS THEY CAN CONTROL
Properly apportion work and responsibility. An objective division of labor is based on product, process, decision-role and human factors. Holding people accountable for the wrong things is self-deceiving, self-defeating and the biggest destroyer of productivity and morale. Make sure you understand the difference between accountability and blame.

BUILD REAL PROCESSES

Processes are intentional methods of achieving repeatable results at a predictable cost. Many operations claim to have processes, but upon examination they obviously don’t. If every little undertaking is approached as a first-time initiative, then a company only achieves a fraction of its potential for productivity.

PAY ATTENTION AND MAKE EVERY DAY A DAY OF REAL JOB EXPERIENCE
When we were young, we were told to “pay attention in school”. However, at any skill level, the essence of work is attention. Learn and encourage the policy of learning something new every day. Evaluate what you learn. Call a bad theory just that; not a “good theory that doesn’t work in practice”.


Organizing IT Work – Keeping Teams Focused

August 20, 2024

IT work can be generally organized as:

1) Process (Things we know how to do),
2) Development (We need to learn, research, experiment, try) and
3) Project work (We assemble #1 and #2 into something new.)

The management of these three requires different tools.

1) Tier I support should be organized as a process. Think, manage, and staff using the concepts of demand, capability, and capacity. Integrate staff into the process with a “career path” approach that allows them to become productive quickly and then handle more complex support requests as they learn. Implement sustainment training. Well-managed ticketing systems are the proper tool here.
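To make the demand/capability/capacity view concrete, here is a minimal Python sketch of a rough Tier I staffing estimate. Every number below is a hypothetical placeholder, not a benchmark from this article.

```python
# Rough Tier I staffing estimate from demand, capability, and capacity.
# All figures are hypothetical placeholders, not benchmarks.

tickets_per_day = 240                       # demand: average tickets opened per day
avg_handle_minutes = 12                     # capability: average minutes to resolve one ticket
productive_minutes_per_agent = 6.5 * 60     # capacity: productive minutes per agent per day

workload_minutes = tickets_per_day * avg_handle_minutes
agents_needed = workload_minutes / productive_minutes_per_agent

print(f"Daily workload: {workload_minutes} minutes")
print(f"Agents needed (before peaks, breaks, and training): {agents_needed:.1f}")
```

The point of the sketch is only that demand, capability, and capacity can be stated in the same units and compared; real staffing models add arrival peaks, shrinkage, and service-level targets.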

2) Development work should be organized around small teams of 2–6 people, each completely responsible for an objective or sub-objective. Use iterative project management (now called “agile” by many) – the simplest form being a task board approach: A) TBD, B) Planned for this iteration, C) Done. Every big problem can be broken down into smaller ones.

(If you have never done this: each iteration is a relatively short, fixed block of calendar time. It is similar to the classic time management approach and its prioritized daily actions list, but the time window of each iteration can be a day, a week, or more, depending on the nature of the work.)
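For readers who want something tangible, here is a minimal Python sketch of the three-column task board described above; the task names are purely illustrative.

```python
# A bare-bones iteration board: tasks move TBD -> Planned -> Done.
# The task names are illustrative placeholders.

board = {
    "TBD": ["research backup options", "draft API spec"],
    "Planned": ["write login tests"],
    "Done": [],
}

def plan(task):
    """Pull a task from TBD into the current iteration."""
    board["TBD"].remove(task)
    board["Planned"].append(task)

def finish(task):
    """Mark a planned task as done."""
    board["Planned"].remove(task)
    board["Done"].append(task)

plan("draft API spec")
finish("write login tests")

for column, tasks in board.items():
    print(f"{column}: {tasks}")
```

A whiteboard and sticky notes accomplish exactly the same thing; the structure, not the tool, is what matters.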

Choosing an appropriate interval to pause, assess, and plan the next iteration is probably the single most important skill in managing this type of work: it keeps the focus and sustains progress without falling into the “micromanaging” trap.

Avoid the illusion of control offered by “buzz word” and overly complex management tools. Keep review and tracking simple. Developers justifiably hate it when the work management system is too bureaucratic. Don’t over-complicate this! Just judiciously use the best knowledge-base, version control, sharing and team building tools you have available today.

3) Major project work usually calls for Gantt chart/critical path resource assignment, baseline tracking, leveling, materials cost, Earned Value calculations, etc. Network, infrastructure, and facilities build-outs are all examples of projects that typically require waterfall project management.

You need a software tool that handles all of these plus advanced scheduling methods (ASAP, ALAP, start and finish dependencies, leads, lags, etc.). Look at Project Plan 365 or Microsoft Project. Other popular “big name” tools are missing some of these features. Everyone needs to see the big picture. Not everyone needs to see every detail.
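As a hedged illustration of the Earned Value calculations mentioned above, here is a minimal Python sketch using the standard EVM definitions (PV, EV, AC, SV, CV, SPI, CPI); the monetary figures are invented for the example.

```python
# Standard earned value metrics; the monetary figures are invented.
planned_value = 50_000   # PV: budgeted cost of work scheduled to date
earned_value = 42_000    # EV: budgeted cost of work actually completed
actual_cost = 48_000     # AC: actual cost of the work completed

schedule_variance = earned_value - planned_value   # SV < 0: behind schedule
cost_variance = earned_value - actual_cost         # CV < 0: over budget
spi = earned_value / planned_value                 # Schedule Performance Index
cpi = earned_value / actual_cost                   # Cost Performance Index

print(f"SV={schedule_variance}, CV={cost_variance}, SPI={spi:.2f}, CPI={cpi:.2f}")
```

A full-featured project tool rolls these numbers up from task-level baselines automatically; the value is in the interpretation, not the arithmetic.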

What Drives Continuous Improvement

July 29, 2024

Let’s start with the really big picture – think about improvement over decades and longer. What has changed since the eighteen hundreds, the sixteen hundreds, or even longer spans of time?

Here is a hint to the first driver: “The wood and stone age. Tin. Copper. Iron. Steel. Aluminum. Plastics. Semiconductors.” New Materials!

The formal scientific method is relatively new, but inquiring minds have been exploring the field of materials science for millennia. Advances and discoveries of new materials have defined and delimited what was possible in human history. Today, these advances in materials come every couple of decades.

Every new advance (graphene, perovskites, nanomaterials, metamaterials, etc.) opens new possibilities for the second driver: invention, engineering, and automation.

While some products, for good reasons, are still made in the tested and traditional manner, new technologies periodically replace the old. We have a saying in our approach to quality and improvement: “Management provides the facility and is ultimately responsible for process capability, IF Operations runs the facility correctly and consistently.”

If both parties of this division of labor work together, processes can produce excellent products – and each can contribute to the goal of producing more with less in the future.

Management can commission engineering to rebuild processes with the latest inventions, materials, and technology. Operations can continue to expand their knowledge beyond the “user manual” – and fully master these new tools. This brings us to the third driver of continuous improvement: operations excellence and operations sustainment training.

An important component of operations management is continual training: reviewing methods, the correct use of tools and technology, safety protocols, and so on. Professional athletes, performers, and musicians never stop practicing and rehearsing.

Improvement only happens by accident, or when we learn something. The reverse is also true: process performance sometimes worsens by accident, or when we forget. Accidental improvements are temporary. Retained and applied knowledge is the key.

Just Enough, and Just In Time Management



(Definition: Tactical management is management of -available- means to an objective.)

The “micromanaging” topic always raises the question, “How often should tactical managers check in on the progress of a project or task set?” Any intervention by a manager (whatever the form) generally amounts to: communicate, coordinate, and redeploy resources as necessary. You change assignments and reallocate available resources.

To find a balance between time spent talking about work and time actually working, I generally follow this guideline:

I ask, how many opportunities for action should I budget in order to make project adjustments? Of course, you want to place as much trust in the team as possible, and not frustrate them by signaling a too frequent change of priority and direction.

So, ask yourself, what is the “pace” of the project? If an objective is 16 weeks in the future, can I adapt to any issues and unforeseen surprises if I review monthly? That gives me only three opportunities to act. Given what I know about my team and our objective, does that seem enough? Can you count on your teams to “send up a flare” if they encounter unforeseen issues?

What about management in a crisis, where we are trying to contain a serious problem in hours, and not days? How often is too often to check status?

Let’s explore the example of the 16-week project. After 4 weeks and 1 review, one fourth of the budgeted resources have been spent and there are only two remaining reviews before deliverables.

With only three fourths of the time and money left, are the remaining resources (if judiciously redeployed) going to be sufficient to bring things back on track? In most scenarios, this seems a little tight. 6-12 checkpoints seem like a better minimum for most projects, depending on the team of course.
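To make the budgeting exercise concrete, here is a small Python sketch using the same 16-week example; the review intervals shown are arbitrary choices, not recommendations.

```python
# How many chances to act does a given review interval really give you?
# The 16-week length matches the example above; the intervals are arbitrary.

project_weeks = 16

for review_interval_weeks in (4, 2, 1):
    # Reviews held before the final delivery are the real opportunities to redeploy.
    opportunities = project_weeks // review_interval_weeks - 1
    print(f"Review every {review_interval_weeks} week(s): "
          f"{opportunities} chances to adjust before delivery")
```

Monthly reviews give three chances to act; bi-weekly reviews give seven, which already lands in the 6–12 range suggested above.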

Of course, tactical management of -processes- is a different topic, and totally different approaches are needed. Many technology project managers use an iterative development process and a pace established by weekly or bi-weekly “sprints”.

Guided by a rendering, or “wireframe”, each iteration is intended to produce an incremental deliverable which will either converge on a finished product or continually improve an existing one. A week or two is typically long enough to add and test a couple of features, and short enough to keep the scope of very abstract work conceptually manageable.

“Process Improvement”


On a company walk-through, we observed a clerk trying to receive and read new customer orders on a remote CRT display terminal tied to the customer’s mainframe.

The clerk struggled to interpret the really old and coded “punch card” fixed field format. Then, the clerk would carefully type in the order for manufacturing on the company’s own in-house computer system.

The manager saw no problem with this process but thought that perhaps training would improve things a bit. It is likely that this order-intake/order-entry area would never have been targeted for “improvement” or flagged as a problem area. “If it’s not broke….”

Some processes don’t need to be improved. Some processes don’t have a problem that needs to be fixed. Some processes simply need to be eliminated.

What is often called for is a fresh look that is not prejudiced by how the work was previously done. Start with the objective. Don’t sub-optimize. Is the objective “improved” order-entry, or making it as easy as possible for the customer to place orders?

The very first step in process improvement is, “Show me everything. Let’s walk through the big picture.” Problems like the order-entry issue will then immediately stand out to a fresh set of experienced eyes.

The “Grouping Error”

July 28, 2024

Observations can be turned into data by measurement. Measurements can be summarized by statistical analysis, and then decision-making ideas start to emerge from the numbers.

But wait! There is this thing called “the grouping error”.

Here is our classic “classroom” story illustrating this problem:

Ten machines are “bubble-packing” a consumer product, and a statistical summary concludes that about 1% of the packages are mangled or damaged – crushed in the packaging machines. It appears to be an alignment failure of conveyor, product, and heat-sealing stamp/press.

This idea begins to emerge: what do these ten machines have in common that causes an occasional alignment failure? Is it a timing mechanism? Are there plastic parts that should be replaced with steel? Do we need to rebuild or replace these machines with precision stepping-motor components? (Don’t raid the capital expenditure budget yet!)

Here is how the 1% came to be: ONE machine had a 10% scrap rate and the remaining nine had little or none. A DIFFERENT IDEA emerges from the numbers: what is different about machine number ten?!!

I have seen this exact issue in more than one industry/process, and of course there are ways to be vigilant and catch this mistake before the final roll-up of data into a final report.

A data analyst might know that a histogram can show multiple peaks (“bimodal”), indicating that a single average does not describe the population. A statistician might look at data clusters, perform an F-test, or test for goodness of fit to a normal distribution. Any of these checks and more could be employed to examine suspicious data for a grouping error.
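Here is a minimal Python sketch of the kind of per-group check that catches the error before the final roll-up; the machine counts are invented to mirror the classroom story (nine machines near zero scrap, one at roughly 10%).

```python
# Scrap rates per packaging machine; counts are invented to mirror the story:
# nine machines near zero scrap, one machine at roughly 10%.
packages = {f"machine_{i}": 1000 for i in range(1, 11)}
damaged = {f"machine_{i}": 1 for i in range(1, 10)}
damaged["machine_10"] = 100

total_rate = sum(damaged.values()) / sum(packages.values())
print(f"Rolled-up scrap rate: {total_rate:.1%}")        # ~1% -- the misleading summary

for machine, count in packages.items():
    rate = damaged[machine] / count
    if rate > 3 * total_rate:                           # crude screen for a standout group
        print(f"{machine}: {rate:.1%}  <-- investigate this machine, not all ten")
```

The rolled-up figure still reads “about 1%”, but the per-machine breakdown points straight at machine number ten.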

However, there are facts we know from simply observing the thing we measure. CNC machining data should probably not be merged too early in the analysis with data from “old school” machining technology or additive manufacturing. Defective or damaged products manufactured from wood should be studied apart from the same products made from metal. Call center calls with translators should not be prematurely grouped with calls handled by native speakers.

Working with data does NOT mean shutting out every other fact and observation available to us, and this other information guides us as we extract the right conclusions from the data we collect.

“We Tried TQM, Lean, Just In Time, Six Sigma, etc…but”

July 20, 2024

A helpful post contrasting the definitions of Lean versus Six Sigma made me think about the skeptical reaction many have when the latest improvement buzz phrase or acronym appears in the media. There are always successful and credible case studies, but many are left thinking that surely a key ingredient has been left out of the tasty recipe.

The “Silver Bullet”

For years, we attributed this to the wish for a “silver bullet”, a quick solution to that performance, time, cost or risk problem. Perhaps the way solutions are presented (Six Sigma versus lean versus Kanban versus KPIs versus dashboards, team building, etc.) contributes to the misunderstanding and misuse.

Maybe it is the way that older strategies are renamed, rebranded, and upgraded with larger and larger data requirements. If SPC didn’t work, then maybe DOE, regression (sorry: “Predictive Model Building”), multiple regression, data lakes, simulations, linear algebra and AI are the answer.

Certainly, greater computational power, technology improvements, and more data are positives; but these various methods and tools should not be treated as “siloed” solutions. There is often a missing ingredient in many approaches, and that is the integration of these tools with each other and with a conceptual view of the processes one wishes to improve.

Quality, Performance & Value

Many struggled with TQM (“Total Quality Management”) because of the tendency to conflate “Quality” with “Performance”. To clarify this I would ask teams, “What would Total COST Management represent? What is the value of an absolutely perfect wedding cake one day late?” When they came to see quality as value to the customer, TQM began to be integrated with the famous conceptual formula from Project Management: “Value=FunctionOf(Performance, Time, Cost, Risk)” (Not every chicken dinner needs to be a $1000/plate celebrity chef masterpiece to have value.)

When high-scoring “Quality as Value” KPIs do not tell us that customers were disappointed, we must add the knowledge that metrics and measures are not the same, that actionable descriptive statistics rely on homogeneous groups, and that outliers and trends can hide in a statistical analysis that ignores time sequence and variability.

Variability & Outliers

When descriptive statistics are integrated with probability functions and Statistical Process Control (SPC), we begin to get a near real-time picture of variability, quick recognition of outliers, and objective evidence of homogeneous groups for acceptance testing. We then need to integrate this with an action plan for outliers. We need tools to connect causes (“things and how they behave”) with effects (changes in variability and outliers).
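As one hedged example of that integration, here is a minimal Python sketch of an individuals (X) control chart, estimating sigma from the average moving range; the readings are invented.

```python
# Individuals (X) chart: estimate sigma from the average moving range
# (MR-bar / 1.128, the usual constant for subgroups of two) and flag
# points outside mean +/- 3 sigma.  The readings are invented.
readings = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 12.7, 10.0, 9.9, 10.2]

mean = sum(readings) / len(readings)
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat

for i, x in enumerate(readings, start=1):
    flag = "  <-- out of control: look, learn, share, improve" if not (lcl <= x <= ucl) else ""
    print(f"sample {i:2d}: {x:5.1f}{flag}")
```

The single flagged point is the “when and where to look” signal; the action plan for what to do when it appears is the part the chart cannot supply.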

Visualizing Process Dependencies

When Cause/Effect or “dependency” diagrams give us visual relationships between “What we do” and “What we get”, we become able to integrate this with our metrics, measures, process and product targets and data collection strategies. With this additional tool, we can integrate team building, leadership exercises, sustainment training and “learn, share, teach, improve” training with experiment days.

The “Base Camp”

When we finally have what we call a “base camp” from which further improvement can occur, then we are ready to try and test advanced optimization techniques, large data set tools, technology upgrades, and more.

Whether our improvement initiatives began with process inputs, customer deliverables, a bottleneck process, or in quality, lab, and measurement, we continue to integrate – to connect what we know and have learned about one area to others – and we may use one of the various “value chain” techniques.

Continuous Improvement

We match inputs of one internal process to the outputs of another to optimize and align. As variability is reduced, buffer stocks can be gradually reduced and Just-In-Time techniques can be incorporated.

There are other tools in the process improvement toolbox, some of which are optimized for manufacturing and some for the service industry. The principle is the same. Regardless of which techniques come first and where in the process we begin, there is need for integration into an expanding process knowledge database structured to support -human- intelligence and conceptual understanding of work and deliverables.

Actionable Metrics

An analysis, dashboard, diagram or computer output that has not been integrated into the sum of what we know is -not- actionable. If you have seen daily, weekly, and monthly status reports on paper or screen that do not drive action, you know exactly what I mean. It’s the Monday morning “Read and File” process, and that may be why these approaches sometimes fail.

Actionable Data

February 20, 2024

It seems like everyone is talking about “Data” and “Data Analysis”. Of course, what everyone wants is actionable data!

A “play it where it lies” approach to data simply treats any data as a math problem. There are just numbers to be crunched and fed into statistical algorithms with only a loose reminder of their connection to the real world. We then have results and answers without conviction – UNactionable data.

Actionable data always has an explicit logical chain of thought beginning with first-hand observation of something. Then we sharpen our observations with attribute or variable measurement by making a systematic comparison to an objective standard. (We count and quantify.)

With physical characteristics this is often straightforward, but do we all agree on the objective definition of: a NEW customer? an EXCELLENT customer service call? full COMPLIANCE to policy or contract deliverables? a DEFECTIVE part? an IMPROVED process? an OUTAGE in our IT systems?

With even the best measurement system in place, we still have two recurring measurement quality issues: outliers and missing data. Have we investigated outliers as opportunities to learn and integrate new information or do we pretend that they don’t exist? And, what about missing data? Some missing data can be interpolated, but other missing data should also be treated as outlier information.

GPS position and location tracking data might be used to interpolate an in-between location and time, but missing refrigerator temperature data might indicate an equipment malfunction or failure!
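A minimal Python sketch of that distinction might look like the following; the series names and values are invented for illustration.

```python
# Two streams with a missing reading (None): interpolate the GPS gap,
# but treat the missing refrigerator reading as a possible equipment event.
# All names and values are invented for illustration.

def fill_single_gaps(series):
    """Linearly interpolate isolated missing points between two known neighbors."""
    filled = list(series)
    for i in range(1, len(filled) - 1):
        if filled[i] is None and filled[i - 1] is not None and filled[i + 1] is not None:
            filled[i] = (filled[i - 1] + filled[i + 1]) / 2
    return filled

gps_latitude = [35.10, 35.12, None, 35.16, 35.18]   # steady movement: safe to interpolate
fridge_temp_c = [3.9, 4.0, None, 4.1, 4.0]          # a dropout may mean a sensor or power fault

print("GPS latitudes, gap filled:", fill_single_gaps(gps_latitude))

for i, value in enumerate(fridge_temp_c):
    if value is None:
        print(f"Refrigerator reading {i} is missing -- investigate before filling it in")
```

The same arithmetic would “fill” both gaps; the decision about which gap to fill comes from knowing what the data represents.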

If, without grouping errors (see “The Grouping Error” above), we correctly begin to reduce good data with descriptive statistics, then human ideas and abstractions will emerge from the numbers. We begin to confidently speak and accurately describe the typical “new customer”, “service call”, “warranty return” and so on; and we can graphically illustrate the analytical results that back up these sweeping generalizations.

Our confidence in these conclusions rests on our ability to trace all the way back to what could be seen, heard and experienced, and that is what makes this data actionable.

(A “very bad” example of interpolated values:

https://imgs.xkcd.com/comics/sphere_tastiness_2x.png)

Changing How We Think About Work

December 20, 2023

There are many excellent points in this LinkedIn slide presentation, and I especially like the arrangement across time of these popular methods.

(See/Search: “Continuous improvement, encompassing Lean, Kaizen, and Agile methodologies, advocates ongoing excellence through gradual refinement….” on LinkedIn)

There are three things that I would add from my experience.

First, there is a tendency to equate quality with performance. Quality is value as seen by the customer. I have made this distinction in other articles and there are several ways to see this.

a.  The project management connection: value = function(performance, time, cost, risk). That integrates understanding of project management and process management as the two basic skills of “tactical management”. (Management of -available- means to an end.)

b. There is the relationship of “performance”, “specs”, and “targets” to sales and marketing. “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!” –https://quoteinvestigator.com/2019/03/23/drill/

c. The measure is not the metric. Space inside a cardboard box can be measured by length, width, depth, and cubic inches. Or one can count the number of unbroken cookies that will fit securely in the box. (“Cookies or crumbles?”)

Second, these various “schools” of quality are just that. They are different approaches to teaching the same or similar mental skills: experimentation, basic measurement and statistics, cause/effect thinking, organizing and planning, conceptualizing and then drilling down to details, etc.

If one teaches root cause analysis with fishbone diagrams, 8 steps, 5 whys, deductive logic, or Aristotelian causality, the end skill is the ability to trace a causal chain back to choices one could have made and can make in the future.

Finally, when strategic decision makers provide new facilities, new tools, and new processes, performance expectations go up.

For tacticians, the driver of performance improvement is new knowledge about the existing facility. Experimentation techniques are “hunting” for the causes that keep us from hitting the target every time. Control chart techniques are patiently “fishing” for causes. In a sea of countless tiny independent causes of variation, we watch and wait for a cause that surfaces above the noise. That “out of control outlier” tells us when and where to look and learn, share, teach and improve.

We naturally expect that large capital outlays and clever engineering should result in better product performance. What one hundred years of these quality management tools teach us is that changing how we think about work can result in just as big an improvement.

It’s Hard To Make Things Easy…

December 12, 2023

I have often been asked how I came into the field of consulting, and how that career path forked and diverged into successful projects in several manufacturing and service industry companies. It began when I went to work with James Abbott. I was an early hire for his training organization and continued to work with him as we transitioned into a training and consulting group.

James had a vision. He saw the training and professional development need that every company has; and he decided to address that need with in-depth training seminars on a wide variety of subjects.

I had an academic background in physics and math and had a strong work background in Information Technology. I always have an appetite to master new material and I told James that I would come up to speed in any training material he wanted to offer. This turned out to be a perfect fit to my personality and skill set!

The Secret To Great Training

We worked hard to develop and organize the training materials in proper order from the basic to the advanced. Our course design approach was to always establish a context for new ideas. We often reminded each other that counting comes before arithmetic, arithmetic before algebra, and algebra before calculus.

This conceptual approach allowed us to effectively present a large amount of material in a compressed period of time. When community colleges began teaching computer skills classes, their instructors took two days of our material as a template for a semester-long course.

What Students Said

James has a saying, “It is easy to make things hard and complex. It is hard to make things easy and simple!” This principle was obvious to me after every seminar that I taught. The student reviews typically fell into two categories.

I was disappointed with reviews like this: “The material was very advanced. I learned a lot. The instructor was an expert in the material and presented it very well.” Instead, I wanted to see reviews like this: “The class was easy. The time went by fast. I think I may have already known a lot of the material beforehand.”

With the same subject, similar audiences reported different experiences. I could always account for this particular difference by how I organized the material. I could tell that proper order was one important key to participants “getting” the material. (Arithmetic Before Algebra!)

The Principle Applied to Process Improvement

Any business, service industry or manufacturing process can improve if we learn something new and integrate it into the context of our existing knowledge. New process tools, knowledge and metrics are the principal drivers of improvement. Every process will degrade when we start to forget, and we let known manageable factors return to the dark unknown.

Learning and sustainment begins by conceptually building that contextual foundation. What do we know? When we start a consulting project, we often encounter this: “It’s all the same. A call is a call. An order is an order. A cast iron part is a part. It’s pretty obvious.”

When pressed for details on how things work and how tools are used, this posture then flips and becomes: “Our processes are so complex. You wouldn’t understand. No two calls in the call center are the same. No two customer manufacturing orders are the same. No two system failures are the same.”

We call this retrenching position: the “no two snowflakes are alike” response, and we then explain what we are looking for:

(1) A difference in degree that is a difference in kind: For example, 2- or 3-minute calls in the support center versus 30-minute calls. “Short calls” versus “extended calls”, or “quick feature training calls” versus “software installation and configuration calls”. These might be approached as two different processes.

(2) In manufacturing, we might want to think about similar products made from copper or aluminum versus wood. These might be considered as two processes if the end result differs only in material. “Soft Metal” versus “Woodworking”.

Is This Too Simple?

When we teach these first process concepts to beginners, they first object that it seems too simple. It is simple, once this “factor analysis” has been done!

Is This Too Hard?

Charles Hobbs was an important thought leader in the field of Time Management, and he once told a parable of a woman who found purpose when she was encouraged to start by examining what was in her own back yard. She found a rock – a steppingstone near her back door – and she began to ask herself, “What kind of rock is this? What else can it be used for? What is its mineral content?” After a time, she became quite knowledgeable about minerals, and when she felt that she had reached a goal she asked for advice again. Her mentor replied: “What was under the rock?”

When even experienced engineers grasp the implications of recursively drilling down into processes to learn more, to “look under the rock”, they often experience a moment where it all seems never-ending and too hard.

Simple is not the same as easy. Creating a culture of learning in yourself and in an organization is hard work. However, a culture of learning results in better decisions, better products and services, lower costs, and, hopefully, a virtuous circle of continuous improvement.
