Are You Really Ready to Make A Change?


“A new study by Towers Watson has found that only 25% of change management initiatives are successful over the long term. While this may come as no shock – substantive change in organizations with entrenched cultures is always difficult.” Although leadership often knows there is a need for change, the management group frequently initiates the change process with an unclear vision of the change, poor planning, and unclear communications. That poor planning, unclear communication, and poor execution create a lot of fear in the organization about what the change will do to the current status quo, and about whether the result will be better than the current mediocre reality.

In addition, employees usually resist change efforts, no matter how small, for the following five reasons:

  1. Fear of the unknown/surprise
  2. Mistrust
  3. Loss of job security/control
  4. Bad timing
  5. An individual’s negative predisposition toward change

Through our experience in making successful organizational change, we have found that the following questions provide a useful guide for thinking through a change initiative before embarking on it, while also minimizing resistance to change. The questions deal with issues and concerns before the change starts (BC), during the change process (DC), and after the change has been made (AC), at all levels affected by the change. This is shown in the Change Question Checksheet in Figure 1.

Change Vision and Message Questions

  • Do I have a clear vision of the change to be made?
  • Do I have a clear and concise message about the change?
  • Do I have the ability to articulate the change message to all levels of the organization?
  • Do I have sufficient passion for this change to be its champion?
  • Does the message fully and concisely explain the value of the change?
  • Is the message believable to all audiences?
  • Have we made it clear what will change?
  • Do we understand the scale of the change, including potential unintended consequences?
  • Have we established a sense of urgency for the need for this change?

Change Goals:

  • Do we have goal clarity and know exactly what we want to achieve?
  • Are the change goals realistic?
  • Are the goals believable?
  • Can we measure our achievements and progress?
  • Do we have goal alignment to our strategic plan?

Change Plan:

  • Do we have a change plan that sets up a series of quick wins to build momentum or are we trying to hit a home run?
  • Do we understand and have a plan to deal with the technical challenges of the change?
  • Do we have the right systems in place to support the change?
  • Do we understand and have a plan to deal with the adaptive changes people will have to make?
  • Are we clear about the adaptive changes to be made?
  • Are we always discussing the conceptual side of the change and not the details of how we will do it? Remember: the devil is in the details.

Management:

  • Is the management team on board and ready to support the change?
  • Will the management team roll up their sleeves and get fully involved?
  • Will senior management demonstrate a behavior that is fully supportive of the change initiative and walk the talk, not talk the walk?
  • Do the employees trust us?
  • Do we have a way to anchor every change gain we make and not let it slip back to the old status quo?
  • Have we given managers and supervisors the information to really understand the reason for the change and are they able to translate that message to the people that report to them?
  • Are we listening to people’s concerns and reacting to them rather than dismissing them or failing to “hear” them?

People and Change Teams:

  • Do people understand how the change will impact them?
  • Do people understand what they will gain and lose in this change?
  • Do we have the right talent to make the change?
  • Do we have training available to assist in the change?
  • Have we given people reasons to buy in and be engaged with the change?
  • What resistance are we encountering?
  • Can we hold people accountable for making or not making the change?
  • Do we need some coaching to help make the change?
  • Do we have informed, passionate, and engaged change teams in place?

Barrier Removal:

  • Have we cleared the underbrush and removed the weeds that derail change?
  • Have we eliminated mid-management doubt and resistance and do we have their commitment to the change initiative?
  • Have we addressed people’s fears in tangible ways?

 

Change Question Checksheet

Figure 1, the Change Question Checksheet, is a simple working template: each of the questions above is listed down the left (with space at the bottom for your other questions), followed by columns to mark when each question should be asked (Before Change, During Change, After Change) and your readiness answer (Yes, Maybe, No, NA).

Summary:

Successful change comes from developing an organizational atmosphere that is creative, risk-taking, enthusiastic, reflective, and involved, and that inspires people to change. To achieve this elusive set of critical ingredients, the organization must go for the quick wins and create a positive change momentum that contributes to a successful change process. Building and sustaining a conducive change environment cannot be achieved without careful planning before, during, and after the change initiative. It requires the change leaders to constantly question how things are going, whether support is waning, and whether people remain engaged, and to make clear mid-course adjustments, based on what they are hearing, seeing, and sensing, that will help accelerate the change initiative.

Many questions were listed, but one you should ask at the end of a successful change is: “When will we be ready for more change?” Change never stops or takes a holiday; we constantly have to change to survive. W. Edwards Deming said, “It is not necessary to change. Survival is not mandatory.”

Do not fall into the trap that French journalist Jean-Baptiste Alphonse Karr described in 1849: “Plus ça change, plus c’est la même chose” – “The more things change, the more they stay the same.” You want a sustained and lasting change in your organization that improves its overall performance and response to customer needs. The organization needs to look and feel different after the change initiative is completed; a return to the old status quo is not acceptable. Be part of the 25% of change management initiatives that are successful over the long term.

The 5 Step BPM Implementation Cycle

Let us admit it: manual processes are broken, inefficient, time-consuming, prone to errors, and come with a whole baggage of problems. This lands many companies in a pool of undesirable effects that do no one any good. Hence, companies that want to be efficient and smart turn toward business process management (BPM) as a solution to step up their game through process automation. One of the biggest advantages it offers is the ability to orchestrate existing IT better: it helps seamlessly integrate the several pillars of an organization’s IT support system to derive maximum ROI from it.

But is it that simple? Why is BPM considered quite complex and time-consuming? There are several layers in an organization that need to buy in to a drastic change in operating procedures. A BPM solution means a major change in the regular day-to-day operational activities of what could be hundreds of users. This means that a lot of care has to go into choosing a tool that users will find easy to adapt to, because it has the potential to truly change their entire way of operating.


Implementing BPM is a huge responsibility layered with many challenges. Ideally, the implementation process begins with identifying the requirements and ends with executing the tool to meet the desired organizational goals. Interestingly, companies can start small and wrap their heads around a few processes until they gain the confidence to expand BPM usage to a company-wide scale. In the zeal to get automation implemented, one may tend to overlook its possible damaging outcomes. That must be avoided, as the call for automation is purely specific to the use case; without a clear purpose, embarking on the exercise can be a heftily expensive mistake.

There are umpteen other considerations from an end-user standpoint, depending on the industry. This is particularly true in industries like manufacturing, where the employees are largely blue-collar workers; to what extent IT can make inroads into their day-to-day activities is a call organizations need to take. After taking all such key considerations into account, the organization can get into the act of implementing a BPM solution. The five most important stages of implementing BPM in an organization are discussed below.

Choice of product

BPM software comes in all sizes and shapes, budgets, and complexities. Selecting the right BPM software for your company is not as easy as picking the top-ranking vendors in Gartner’s Magic Quadrant. What might be a perfect BPM tool for a multitude of other businesses might not be a close fit for your enterprise, because no two companies are the same.

There are some key considerations when evaluating BPM software; the one-size-fits-all approach does not apply in the BPM paradigm at all. After considering the top features that would add maximum value to the organization, the choices are narrowed down.

The selection process usually begins by identifying your requirements and defining your end goals. Are you looking for rich functionality or do you prefer usability over features? Who is going to use the tools, the technical experts or the average business users? The first step to choosing a BPM tool should ideally begin with creating a list of meaningful questions that clarify your needs and the purpose of adopting a BPM tool.

Answering these questions can hugely affect your vendor selection process and save your company from future frustrations.

Trial run

After the initial list of four is made, it is typically narrowed down to two candidates, which are considered for intricate examination. This evaluation typically takes a couple of months because of the series of tests involved, such as process simulations, pricing and ROI calculation, user-friendliness, scalability, and so on. Typically, companies form a vendor evaluation team that includes BPM experts, and they choose an existing business process to pilot the automation using the chosen products. The team assesses the shortlisted tools from several perspectives, such as:

  • How much time does it save for all?
  • How tight is the data security?
  • How much access control does it allow?
  • What is the cost benefit it brings in?
  • Does it help to improve overall operational efficiency?
  • Does it contribute to organizational productivity?

Once the trial run is over, the process naturally moves to picking the most suitable solution – the one that checks all the right boxes on the above questions. When the process owners and the company’s decision makers are on the same page about adopting the most suitable tool, they initiate a purchasing process that involves internal and external stakeholders.

In larger organizations, this might take a while because of the varying processes they have, the complexities of the processes, and the negotiation with the selected vendor based on your company’s requirements.

Installation

Once the purchase is closed, a company sets up the necessary technologies to roll out the BPM tool into its processes. The installation of on-premise BPM software requires active involvement from both the vendor and the participating enterprise teams. The vendor’s hand-holding role is crucial at this stage because skill gaps in the enterprise may keep teams from fully understanding how the BPM product works. This is why your business should also consider a BPM vendor’s customer support and training as a decisive factor during the evaluation process.

This step can become complex in larger organizations, especially those spread across different geographies. It might encounter initial resistance from the teams and resource lag, but given the right kind of execution and enough preparation, the IT and other technical teams can make the installation robust and ready to deploy on time. At times, language- or code-based customizations may be needed to suit the language and usage preferences of specific geographies. This is particularly true for verticals such as manufacturing.

Evangelization

You might assume that your communication with the vendor about their BPM product ends once installation is complete. However, leading vendors stick by their clients until they have trained their teams and gotten them accustomed to using all the features and functionalities included in their BPM arsenal.

The deployment stage gives way to evangelization and user training. Existing BPM experts, in partnership with vendor teams, coach the designated process owners, admins, end users, and all other stakeholders to get their hands dirty and learn the basics of the product.

This is the best time for the client company to encourage their BPM participants to ask questions specific to their domain and the processes they will be automating. How do we incorporate conditions or handle exceptions? How do we generate reports? Can we map the processes differently? How do we set up an SLA? These are a few questions that require hands-on training, and perhaps proper documentation for future reference.

It is worthwhile for your teams to spend more time with the trainers and get comfortable around knowing the tool inside out rather than rushing through the process and facing difficulties at a later stage.

Integration

Your teams know the product, but how about introducing the new system to the existing applications in your enterprise? Whether your BPM comes pre-integrated with major software or demands API integration, you should ensure that the workflow management system sits well with the legacy systems and other core software in your enterprise network.

Except for the default and basic-level integrations, you might require support from the IT team to connect your BPM software with ADS for authentication (single sign-on) and other similar applications. This is an important step in the BPM’s entry into your organization, because combining the applications of all your software creates a powerful synergy in the enterprise and enables the exchange of data between individual systems when it is needed. It also assimilates the BPM system into the existing IT framework so that it can operate seamlessly within the new environment.
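Where API integration is required, the plumbing is often little more than exchanging JSON over REST. The Python sketch below is purely illustrative – the endpoint, payload fields, and bearer-token scheme are assumptions, not any particular vendor’s API – but it shows the shape of the glue code an IT team typically writes:

    import json
    import urllib.request

    def push_record(record, url, token):
        """POST a workflow record as JSON and return the HTTP status code."""
        req = urllib.request.Request(
            url,
            data=json.dumps(record).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + token,  # hypothetical auth scheme
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example (requires a live endpoint): hand an approved purchase request
    # to a hypothetical ERP intake API.
    status = push_record(
        {"process": "purchase-request", "id": 1042, "status": "approved"},
        url="https://erp.example.com/api/intake",
        token="YOUR_API_TOKEN",
    )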

Business process automation is a major leap for any organization seeking to improve its overall efficiency and scale its operations to greater heights. Tedious as it may seem in the initial stages, proper implementation offers several tangible benefits to the organization and ensures predictable outcomes that are in alignment with the company’s profit goals. Of course, embarking on a BPM journey takes a great deal of conviction at both the strategic and operational levels, but it brings several long-term benefits and helps shape a productive, efficient organization. Oftentimes BPM pricing is accused of being opaque; one reason is the inherent cost involved at each stage in terms of time and resources.

It’s important not to treat the BPM implementation as a “project” with a beginning and a definitive end; it is a continuous learning process of absorbing the BPM solution completely into your organization. A step-by-step approach to implementing a BPM solution leads to excellence in business processes, benefitting the organization at large. The right BPM software is customizable to your needs and offers optimum functionality to amplify the processes that differentiate your business from the market competition. It makes business outcomes more predictable and holds key stakeholders accountable.

Why Leadership Commitment Won’t Guarantee Lean Success


If you do a Web search on the phrase “Why continuous improvement programs fail,” you’ll get about 8 million “hits,” give or take a few hundred thousand. I haven’t read all 8 million articles, but I’ve read a lot of them, and most point to leadership failure as the root cause of program failure. The usual line is that “lack of leadership commitment causes most continuous improvement failures.”

It’s hard to disagree with this, of course. We’ve all seen our share of improvement initiatives that died on the vine when leadership seemed to only be interested in supporting the effort with lip service and not much else. But I’ve also seen improvement initiatives fail, or, at least, run into a lot of difficulty, even when top leadership gave every appearance of being committed to the success of the program. We throw a lot of reasonably good leaders under the bus when we paint them all with the same brush of “lack of commitment.”

So, what’s going on, then, in those cases where the root cause of failure isn’t lack of commitment?

I’ve always thought that one of the things that makes lean difficult to implement is the fact that it’s a strategy wrapped up in a lot of tactics. In other words, it looks easier to implement than it turns out to be because lean strategy is camouflaged by an array of methods and techniques that seem as if they ought to be pretty straightforward. Company leaders, not understanding the strategy at the foundation of lean, strive to implement a few of the techniques, find they are hard to install and even more difficult to sustain, then drop them. Later, they claim that they “tried lean but it just didn’t work for us.”

“Lean is the building of capabilities that enable the company to get more customers.”

When lean works well, there is a clear, direct line between the tactics (tools and methods) and the broader strategy. But when a strong linkage between “lean as methods” and “lean as strategy” is never formed in a manager’s mind, he or she never develops the necessary commitment to lean as a strategy. That’s not to say that managers who fail at lean have no strategic vision at all. It is to say that such strategic vision as they do have isn’t in synchrony with what lean is intended to do.

The disconnect between “lean as tactics” and “lean as strategy” is made even more of a problem by “practitioners” and consultants who espouse purposes for lean that it just isn’t designed to meet. Recently, I read an article on LinkedIn that purported to discuss the “dark side” of lean. The author presents an account of a lean implementation. Here’s a small excerpt:

“[The company owner] decided to double his production, building a bigger factory and hiring an additional 50 workers. He did not understand the market or why people bought Swiss Cheese, his sales only increased by 10%. He began to lose a lot of money. In desperation, he hired a manufacturing consultant to implement lean. The consultant suggested he automate his manufacturing process to cut costs.”

Simply and directly put, lean is not a strategy to cut costs, so lean tactics are not designed to directly reduce costs. Lean isn’t about automating or getting rid of personnel. But managers listen to writers and consultants who tout lean tools as effective means to cut costs, implement those tools in the hope that costs will quickly fall, and lose interest when that doesn’t happen. Lean implementations that have cost cutting as their central purpose will almost certainly fail.

If Not Cost Cuts, Then What?

So, if “cutting costs” isn’t the strategy that lean is meant to achieve, what is? Lean is the building of capabilities that enable the company to get more customers.

Let’s look at a simple example that I hope will illustrate my point. A plant installs shadow boards so that operators will put their tools on them when they are finished using them. We all know that a shadow board keeps tools visible and organized. A shadow board makes it easy to find a tool, to know where to put a tool, to tell if a tool is in use or missing altogether.

The use of a shadow board is directly connected to the broader strategy of increasing market share and improving margins. Here’s the way it works: A shadow board allows operators to retrieve tools more readily. This, in turn, impacts reductions in set-up and changeover times. As set-up/changeover times are reduced, shorter runs are possible. Shorter runs mean reduced inventory, better adherence to production schedules, better on time shipping, improved agility and flexibility in responding to changing demands of customers. Reduced set-up times also provide additional machine capacity, which is how all the new orders that arrive because the company has so markedly improved its service will be produced.

Shadow boards themselves, then, are tactical, it’s true. But they directly support and enable a broader strategy. The same is true for all the other elements of lean, from kanban, to 5S, to value stream mapping, to lean teams, to leader standard work…well, you get the idea.

Successful lean implementations, then, start with a clear view of company strategy and the role of lean in supporting that strategy. If company leaders don’t have that clear view, no matter how committed they are initially to implementing tools and methods, they will lose that commitment and the energy that goes with it.

 

Lean Six Sigma to Reduce Excess and Obsolete Inventory

Excess and obsolete inventory write-offs are chronic supply chain problems costing businesses billions of dollars each year. Unfortunately, improvement projects that are deployed to eliminate these problems often have a short-term focus. In other words, the current levels of excess and obsolete inventory are usually addressed, but not the root causes of the problem. Often such inventory is reduced by selling it below standard cost or donating it to charitable organizations. Sometimes competing business priorities keep businesses from developing effective long-term solutions that eliminate the root causes; sometimes it is the difficulty of unraveling the complexity of those root causes.

Lean Six Sigma methods have been shown to be very effective in finding and eliminating root causes, and thus preventing arbitrary year-end reductions in inventory investment.

Higher- and Lower-Level Root Causes

An analysis of excess and obsolete inventory often shows that its major root causes are associated with long lead times, poor forecasting accuracy, quality problems or design obsolescence. However, these higher-level causes can be successively broken down into lower-level root causes as shown in the figure below.

As the figure suggests, from an inventory investment perspective, a long lead time may be caused, in part, by large lot sizes. For example, if the actual lead time or order cycle time is 30 days, but the required lot size for purchase is 90 days of supply (DOS), then this lot size drives a higher average inventory level than lead time by itself. In this case, the average on-hand inventory (neglecting a safety-stock calculation) increases from 15 to 45 DOS assuming a constant usage rate. Of course, the actual reasons for large lot sizes would have to be investigated by a Lean Six Sigma improvement team. The root causes of long lead times also could be due to complicated processes having numerous rework loops and non-value-adding operations as well as scheduling problems and/or late deliveries.
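Spelling out the arithmetic: with a constant usage rate and no safety stock, average on-hand inventory is roughly half the lot size, so

    Average on-hand ≈ lot size / 2
    30-day lot size: 30 / 2 = 15 DOS
    90-day lot size: 90 / 2 = 45 DOS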

The second major cause of excess and obsolete inventory is poor demand management practices. Some lower-level root causes may include inaccurate historical demand data, a poor forecasting modeling methodology or other issues such as overly optimistic sales projections. Lean Six Sigma projects also can be used to attack lower-level root causes in this area. Lean Six Sigma is frequently used to improve quality levels to reduce waste and rework caused by a multitude of diverse factors within a process workflow. Finally, Design for Six Sigma can be used to improve the design processes for new products or services.

Using DMAIC to Find Root Causes

Lean Six Sigma improvement teams can drive to the root causes of their excess and obsolete inventory problem using the DMAIC problem-solving methodology (Define, Measure, Analyze, Improve, Control) in conjunction with Lean tools as well as process workflow models. In fact, building simple Excel-based inventory models, or using off-the-shelf software, is a good way to identify the key process input variables (KPIVs), or drivers, of excess and obsolete inventory problems. Inventory models follow a generalized Six Sigma root-cause philosophy – Y = f(x). They also are effective communication vehicles, showing sales, marketing, manufacturing and other supply chain functions the impact of lead time and demand management practices on excess and obsolete inventory.

In an actual improvement project, the team begins an inventory analysis by defining the project’s goals in the Define phase. Using these goals as guidelines, relevant questions are developed to enable the team to understand how the system operates. Data fields corresponding to these questions are identified and extracted from information technology (IT) systems. The data fields are then organized in the form of an inventory model to provide the information necessary to answer the team’s questions and understand the root causes of the inventory problem.

After the Define phase, the team begins to evaluate measurement systems and plan data collection activities. This is the Measure phase of the project. An important activity in this phase is an on-site physical count, by location, of inventoried items associated with the problem. This is done to measure valuation accuracy relative to stated book value. Measurement analyses are also conducted of management reports and their related workflow systems. These analyses determine the accuracy of key supply chain metrics such as lead time, lot size, expected demand and its variation, forecasting accuracy (different from demand variation), on-time delivery and other metrics that may be related to an inventory investment problem. Unfortunately, supply chain metrics often are scattered across several software systems within an organization. These systems include the forecasting module, master production schedule module, materials requirements planning module, inventory record files, warehouse management system module and similar IT systems.

After verification of a system’s metrics, the improvement team begins data collection to capture information necessary to answer the team’s questions developed during the Define phase. Relevant information, which may help the team in its root-cause investigation, usually includes suppliers, lead times, expected demand and its variation, lot sizes, storage locations, delivery information, customers and other facts.

Analyzing Data and Using Inventory Model

The Analyze phase begins after the required data has been collected and a simple inventory model has been created using classic inventory formulas such as those found in operations management textbooks. These models are used to analyze an inventory population to understand how key process input variables impact excess and obsolete inventory investment (i.e., key process output variables). A value stream map also should be constructed as part of the overall analysis. In fact, in many projects, a value stream map, once quantified, becomes the basis for the inventory model. This is especially true when the analyses focus on internal process workflows, at system bottlenecks, rather than finished goods inventories.

In addition, a simple inventory balance is calculated for every item and location of the inventory population based on each item’s service level, lead time and demand variation. An inventory balance shows which items and locations may have too much inventory and which items and locations may have too little inventory. In the latter case, inventory investment must be temporarily increased to meet required customer service levels.
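As an illustration, here is a minimal per-item balance calculation in Python using the classic textbook safety-stock formula (normally distributed demand assumed); the function and field names are illustrative, not a prescription for any particular IT system:

    from math import sqrt
    from statistics import NormalDist

    def inventory_balance(on_hand, avg_daily_demand, demand_std_dev,
                          lead_time_days, service_level=0.95):
        """Positive result suggests excess stock; negative suggests a shortfall."""
        z = NormalDist().inv_cdf(service_level)              # service-level factor
        safety_stock = z * demand_std_dev * sqrt(lead_time_days)
        target = avg_daily_demand * lead_time_days + safety_stock
        return on_hand - target

    # Example item: 30-day lead time, 10 units/day demand with a std dev of 3
    print(inventory_balance(on_hand=500, avg_daily_demand=10,
                            demand_std_dev=3, lead_time_days=30))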

After the team determines the root causes of the excess or obsolete inventory problem, it develops countermeasures to eliminate these root causes – the project’s Improve phase. In addition, other needs for improvement may be found as a project winds toward the Improve phase. The Analyze phase often identifies other types of process breakdowns within the supply chain that may serve as a justification for subsequent improvement projects. Lean tools and methods are particularly important in the analysis and execution of these types of projects. In fact, the application of the Lean tool, 5S, or what can loosely be called housekeeping, in the Control phase of a project, can help ensure that the resultant improvements are sustained over time.

Conclusion: The Typical Project Benefits

Typical benefits of defining and implementing improvement projects to reduce and eliminate excess and obsolete inventory include higher system accuracy, creation of quantified inventory models showing relationships between inventory investment versus lead time and demand variation, higher inventory valuation and location accuracies, higher cycle counting accuracies, and – most importantly – permanent reductions in excess and obsolete inventory investment.

In Six Sigma, It Takes Powerful Thinking

six-sigma-critical-thinking-1

The way we think dictates how we function and why we do what we do. In business, all three types of thinking described below are necessary, but the one that will keep your business afloat is critical thinking.

If we were going to assign a type of thinking to be associated with Six Sigma, it would be that of critical thinking. In critical thinking, you are putting all your facts and figures on the table and dealing with the issue.

Types of Thinking

Critical Thinking: Having the ability to think clearly, rationally, and logically.

Constructive Thinking: Understanding our emotions and choosing to think in a way that will benefit our growth and development, and minimize friction in a situation.

Creative Thinking: Looking at problems or situations with a new fresh perspective and point of view — many times resulting in unorthodox solutions.

The Mind Can Play Tricks on Us

Have you heard of the McGurk effect? It occurs when what you see doesn’t match what you hear, and your brain makes the sound match the visual. It is an incredible phenomenon: in order to really hear accurately, you need to close your eyes.

Then there are cognitive biases: we confirm over and over what we already believe to be true, and completely ignore anything that does not fit our beliefs. In doing so, we create our very own subjective social reality.

Our imagination affects how we see the world, meaning the thoughts that we create in our minds can alter our perception of our world.

In Conclusion 

Now that we know how many things can impact the way we think and see the world, perhaps we can take a lesson from our constructive thinkers and choose, or at least intend, to see everything in a way that benefits our continuous growth.

Our world would not survive without our creative thinkers, for they give us the impetus to come out of the box and create brand new products and businesses that make our world exciting. Our critical thinkers make sure those businesses become the legacy that we leave behind for future generations to benefit from.

Next Generation Lean – Tools


Over the course of the first five entries in this seven-part series, a case has been made for the need for Lean practice to evolve. An industrially proven strategy, metric, and process have been proposed as a central part of that transformation. This article discusses tools that lend themselves to the practice of Next Generation Lean. Two of them, Value Stream Mapping and MCT Critical-Path Mapping, have already been discussed earlier in the series. The third is a tool based on queueing theory named MPX.

Very few Lean tools exist beyond those that have been around since the 1990s. As has been discussed, the practice of Lean hasn’t changed much since its inception, so there was probably not much driving the development of new tools. History has shown that it is human nature to resist change; the phrase “If it ain’t broke, don’t fix it” characterizes this tendency. Unfortunately, disciplines such as Lean that fail to evolve become broken even if their regression isn’t initially recognized. Rather, this type of failure manifests itself in the discipline becoming less and less relevant. Where Lean’s vitality is concerned, a lack of new tools should raise an alarm similar to “the canary in the coal mine” gasping for breath.

Tools are important. If a practice is sound, the use of an effective tool assures practitioners a good shot at satisfying customer needs by standardizing the application of best practices. The consistent results they produce can also be used to create statistically significant proof-of-concept process assessments, which can be important in convincing management of the need to support related initiatives. I’ll talk a bit more about this last point in the next—and final—article of this series.

Tool # 1: Value Stream Mapping

There are a multitude of companies offering both online and desktop-based Value Stream Mapping tools. Every year these companies add new bells and whistles to their products in an attempt to achieve product differentiation. The problem with this is that the overriding usefulness of a Value Stream Map is the basic documentation it provides – a picture – of information and processing flows. In my opinion, most of the additional features, while they may provide a bit of eye candy to users, add very little if any functionality to a Value Stream Map. To that point, I have yet to see a Value Stream Mapping tool tie targeted processing to either executive-level financial metrics or the end-use customer. Just as important, I haven’t seen one that provides any level of sophisticated analysis. What do I mean by this?

Analysis of Value Stream Maps today is for the most part intuitive: review the current state; identify manufacturing Critical-Path Time (MCT) reduction opportunities; develop associated MCT reduction solutions; and prioritize each solution based on what is hoped to be its impact on reducing MCT as well as its projected ROI. If Value Stream Maps had analysis capability, they could also be used to compare the MCT of each potential solution to that of the current state, letting you know up front what its impact on MCT would be. This type of analysis is typically done through simulation, but few companies have the budget and expertise to routinely create simulations of the What If changes being considered.

If I were selecting a Value Stream Mapping tool today for my personal use, I would focus on a basic product with few features above and beyond the visual basics. The one feature I would insist on relates to ease of storage and retrieval, including the ability to review “before” and “after” Value Stream Maps side by side. You’ll find that these basic products are for the most part reasonably priced and the easiest to learn how to use. Butcher paper and Post-It notes may do for someone trying out Value Stream Mapping, but for anyone starting up a full-blown initiative – possibly encompassing dozens or hundreds of suppliers – an electronic tool is a wise investment.

Tool # 2: MCT Critical-Path Mapping

I’ll start out by saying I have a financial interest in this tool. My partner Sean Larson and I developed it when we were unable to find an existing product with the capabilities one of our customers asked for, relative both to analysis and to communicating results.

As you learned from the article on Next Generation Lean metrics, an MCT Critical-Path Map is a visual timeline of what a job experiences as it winds its way through processing:

[Figure: an MCT Critical-Path Map timeline]

The main difference between MCT Critical-Path Maps and generic maps is that we’ve added different hues to each of the three main time-segment colors (Green, Yellow, Red). Why is this useful? Because there can be different reasons for the different lengths of a particular color segment, and it is important to understand what each represents. For instance, if you run a one-shift, five-day shop, a significant amount of Red (Non-Value Added—Unnecessary) time will be due to the lack of second and third shifts as well as idle weekends. There are things you can do to address these two issues, for instance, routinely running bottleneck departments extra hours. It is definitely more important, however, to differentiate the portion of Red that is due to scheduled downtime from the time due to waiting for resources (man or machine) or waiting for a lot to finish (remember, MCT tracks the “true” lead time of the first part in a lot).

Here is a list of the variation in hues we have set up for our tool (Note: Green has a single hue):

[Figure: the hue definitions used for each time-segment color]

Here’s a comparison of a generic map against an MCT Critical-Path Map employing this differentiated color scheme. I think you can understand how much more useful the second visual is:

[Figure: a generic map compared with an MCT Critical-Path Map]

We received seed funds from a Fortune 100 OEM for development of this tool and have spent every bit of revenue we’ve generated since then to further its development. A free three-month trial is offered, which you can access at www.criticalpathmapping.com. We plan updates that will add many more features, but we hope that in trying it out you will find the current feature set sufficient and will recognize an MCT Critical-Path Map’s value over more generic MCT mapping tools.

Similar to Value Stream Mapping, manual approaches like crayons and paper will work for anyone testing out the concept of MCT Critical-Path Mapping. However, if you decide to launch a larger initiative, you’ll find a more effective tool and platform to be of great value.

Tool #3: MPX

Most people aren’t familiar with queueing theory. Put simply, it is the basis for analyzing the competition for resources – both personnel and equipment – among a group of jobs needing processing, with outputs that include job flow-through time. That output sounds a bit like MCT (“true” lead time), doesn’t it? And whether you realize it or not, everyone has had experience with queueing theory in everyday life. The following example illustrates this.

Grocery Store Check-Out Lines

The dilemma of selecting the check-out line that will most quickly get you out of a grocery store is something everyone can probably relate to, since most of the time – in my experience, anyway – it seems like there are lines at all available cash registers. In deciding which one to join, you are conducting a queueing theory analysis, albeit an intuitive one. Choosing the quickest check-out line is akin to scheduling at most factories, since jobs usually stack up behind needed processing equipment. The only difference is that in a factory the goal is to reduce the average MCT of all jobs – a much more complicated task – while at a store all you are concerned with is your individual check-out time.

If you wonder why, more often than not, you fail to get in the optimal check-out line, it is because queueing theory solutions aren’t intuitive. Neither are factory MCT reduction solutions. Queueing theory can be used to predict MCTs for all the different jobs in your shop under every conceivable scenario. Understanding queueing theory and using tools based on it will provide you with a quicker, less expensive alternative to simulation, allowing you to test the impact of your various MCT reduction alternatives.

I’ll now translate the above to a manufacturing example.

The Question of Lot Size vs. Resource Utilization

Shop performance is typically evaluated on the utilization level of equipment and personnel—the higher the better. The reasons for this are financial. For personnel it boils down to whether an employee is hitting their highest practical productivity rate. For equipment it comes down to whether a capital investment is delivering its highest possible ROI. Under today’s Standard Accounting Principles this is how manufacturing effectiveness is usually evaluated.

Production supervisors know how to play this productivity game. Larger lot sizes produce higher machine and manpower utilization numbers. The problem, then, becomes a scheduling one since the longer one job stays on a piece of equipment, the longer others that also need it sit waiting to be processed. In other words larger lot sizes increase the time it takes to satisfy customer demand, i.e., they are anti-build-to-demand.

Academics call this competition for resources “dynamic interactions.” Production supervisors call it – among other things – snafus! Regardless, the result is longer job MCTs. So tension is created between internal performance metrics and getting product to the customer on time, along with all of the associated savings that come with reduced MCTs.

So where does Queuing Theory come in?

The standard approach to lot size reduction is reducing set-up times. What many manufacturers don’t understand – and what queueing theory reveals – is that as set-up times are reduced, there comes a point where the smaller the lot size, the longer – not shorter – the MCT! So, in effect, for any specific set-up time there is a lot-size sweet spot that produces the shortest job MCT.

[Figure: lot size versus MCT, showing the “sweet spot” for a given set-up time]

Queueing theory can evaluate the impact of dynamic interactions on MCT, and because of this it can identify those lot-size sweet spots. It can also help manufacturers set the best targets for manpower and equipment utilization. Goals of 100% utilization are worse than meaningless in any market where demand forecasts contain error: they add cost, and they interfere with a company’s ability to deliver product to customers who are willing to pay for a product now but may not be willing to wait if it is not currently available.
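To see why utilization targets near 100% backfire, consider the simplest queueing model of a single work center. The Python sketch below uses the textbook M/M/1 formula – an illustrative simplification, not MPX’s actual model – to show how average flow-through time explodes as utilization approaches 100%:

    def mm1_flow_time(arrival_rate, service_rate):
        """Average flow-through time (waiting + processing) in an M/M/1 queue."""
        if arrival_rate >= service_rate:
            return float("inf")                    # backlog grows without bound
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 10.0                            # jobs per day one work center can process
    for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
        t = mm1_flow_time(utilization * service_rate, service_rate)
        print("utilization {:.0%}: flow time {:.2f} days".format(utilization, t))

At 50% utilization a job flows through in 0.2 days; at 99% it takes 10 days – fifty times longer – before real-world variability is even considered.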

Requirements

MPX is a tool that – once you define your job and processing set-up – can be used to test What If manufacturing improvement scenarios and understand their probable impact without having to first develop simulations or actually implement a change to find out how it will do. Baselining MPX to the current state is relatively easy, with input needed from the six main areas below (a schematic sketch follows the list):

1. General Data—the number of hours per day and the average number of days per year each department in your shop operates.

2. Labor—the job classifications of production employees along with their job skills and availability. To define this you will also need to start with their scheduled hours and subtract when workers are unavailable due to the combined effects of vacation, training, sick days, bathroom breaks, etc.

3. Equipment—defined similarly to labor with machine capabilities and availabilities being the focus. Availability should take into account historic machine breakdown frequencies/durations and/or maintenance schedules and durations. Labor group assignments for each machine type must also be defined.

4. Operations and Routing—this data is perhaps the most detailed model need, but it is also usually the easiest to come by. If you use MRP- or ERP-documented routings and processing times, you’ll need to verify that they reflect actual operations. Tagging data is one way of doing so.

5. Bill of Material—also usually readily available.

6. Demand—reflecting the average production needed over the length of period selected initially in the General Data section.
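To make the shape of the model concrete, the sketch below restates those six input areas as plain Python data structures. The field names are illustrative assumptions only; MPX’s actual input format will differ:

    from dataclasses import dataclass

    @dataclass
    class General:
        hours_per_day: float
        days_per_year: int

    @dataclass
    class LaborGroup:
        name: str
        headcount: int
        availability: float    # fraction of scheduled hours actually available

    @dataclass
    class Machine:
        name: str
        count: int
        availability: float    # net of breakdowns and maintenance
        labor_group: str       # operators assigned to this machine type

    @dataclass
    class Operation:
        machine: str
        setup_hours: float
        run_hours_per_unit: float

    @dataclass
    class Product:
        name: str
        routing: list          # ordered list of Operation steps
        bom: dict              # component name -> quantity per unit
        annual_demand: int

    # Example: one product with a two-step routing
    widget = Product(
        name="widget",
        routing=[Operation("saw", setup_hours=1.0, run_hours_per_unit=0.05),
                 Operation("mill", setup_hours=2.0, run_hours_per_unit=0.10)],
        bom={"bar stock": 1},
        annual_demand=5000,
    )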

MPX uses this data to determine Machine Utilization, Labor Utilization and MCT for each product and production scenario being evaluated. The following visuals represent just a few typical “before” and “after” results using actual MPX output screens. The product is available for a free trial through the website www.build-to-demand.com. Dr. Greg Diehl—developer of the software—is retired and usually very available to answer questions that may come up in your testing of it. (Disclosure: If you pull up the About Us section on his website you’ll find my name listed. It is there because over the years I have purchased a large number of copies of the software and provided significant user feedback on MPX use. I have no financial interest in MPX itself or its sales.)

[Figures: sample “before” and “after” MPX output screens]

MCT changes from infeasible to 15 calendar days due to reduced set-ups and lot sizes. The aggregate of all changes resulted in an MCT reduction from 90 to 15 calendar days – an 83% reduction!

As you know from previous articles in this series, an MCT reduction of this size will have a tremendously positive financial impact. It would also increase the company’s build-to-demand capability, i.e., its Lean-ness.

In the next article I will provide a series recap as well as offer some personal observations about the state of Lean.


Understanding Process Sigma Level

Six Sigma is a data-driven approach to quality, aimed at reducing variation and the associated defects, wastes and risks in any process. This article explores the basics of Six Sigma process quality – definition and measurement.

In a set of data, mean (μ) and standard deviation (σ) are defined as:

μ = (x1 + x2 + x3 + … + xn) / n

where x1, x2, …, xn are the data values and n is the number of data elements, and

σ = √[((x1 – μ)² + (x2 – μ)² + … + (xn – μ)²) / n]

Standard deviation shows the extent of variation, or spread, of the data: a larger standard deviation indicates a wider spread around the mean. Process data usually follows a normal distribution. The distance from the mean μ to a data value can be measured in data units. For example, a data point with a value of x = 31 seconds lies 6 seconds away from a mean of 25 seconds. The same distance can also be expressed as a number of standard deviations: if the standard deviation is 2 seconds, that point is 6/2 = 3 standard deviations from the mean. This count is the sigma level, Z, also known as the Z-score, as shown below.
Z = (x – μ) / σ

Z = (31- 25) / 2 = 3
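As a quick check of these definitions, here is a short NumPy sketch; the sample values are invented so that μ = 25 seconds and σ = 2 seconds, matching the example above:

```python
# A minimal sketch of the mean, standard deviation and Z-score definitions.
import numpy as np

data = np.array([22.0, 24.0, 25.0, 26.0, 28.0])  # cycle times in seconds

mu = data.mean()      # arithmetic mean -> 25.0
sigma = data.std()    # population standard deviation (divides by n) -> 2.0

x = 31.0              # a data value of interest
z = (x - mu) / sigma  # its sigma level (Z-score) -> 3.0
print(f"mu={mu}, sigma={sigma:.2f}, Z={z:.2f}")
```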

Specification Limits and Defect Rates

In a process, deviations from the target or mean are acceptable up to a point defined by the specification limits (SL) around the mean. Any value beyond a specification limit is a defect, or unacceptable result. The farther the specification limits are from the mean, the lower the chance of defects.

A Six Sigma process has a specification limit which is 6 times its sigma (standard deviation) away from its mean. Therefore, a process data point can be 6 standard deviations from the mean and still be acceptable. (See Figure 1.)

Figure 1: Normal Distribution With Mean, Z-score and Six Sigma Specification Limits

In a stable process, the mean naturally shifts as much as 1.5 sigma in the long term on either side of its short-term value. The red lines in Figure 2 (below) show the extreme case of 1.5-sigma mean shift to the right. The right specification limit is at 4.5 sigma from the mean with a defect rate of 3.4 parts per million (PPM). The left specification limit is at 7.5 sigma from the mean with a defect rate of 0 PPM. The overall defect rate, therefore, is 3.4 PPM. A similar argument applies to the extreme case of 1.5-sigma shift to the left. A Six Sigma process is actually 4.5 sigma in the long term, and the 3.4 PPM defect rate is the 1-sided probability of having a data value beyond 4.5 sigma measured from the short-term mean.

Figure 2: Process Mean Shift of 1.5 Sigma and Defect Rate Corresponding to 4.5 Sigma

Because the 1.5-sigma shift drives the defect rate on the opposite side of the shift effectively to 0, even at lower sigma levels, this one-sided defect rate applies to any capable process with 1-sided or 2-sided SLs, even at a 3-sigma level.
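To make these tail probabilities concrete, here is a minimal sketch using SciPy’s normal distribution; the library choice is mine, while the 6-sigma limit and 1.5-sigma shift are the values discussed above:

```python
# Reproduce the defect rates quoted above for a Six Sigma process
# with a 1.5-sigma long-term mean shift toward the right limit.
from scipy.stats import norm

shift = 1.5   # assumed long-term mean shift, in sigma units
z_spec = 6.0  # specification limit at 6 sigma from the short-term mean

# After the shift, the right limit sits 6 - 1.5 = 4.5 sigma from the
# shifted mean; the left limit sits 6 + 1.5 = 7.5 sigma away.
right_tail = norm.sf(z_spec - shift)  # P(Z > 4.5) ~ 3.4e-6
left_tail = norm.sf(z_spec + shift)   # P(Z > 7.5) ~ 3e-14, effectively 0

ppm = (right_tail + left_tail) * 1e6
print(f"Defect rate: {ppm:.1f} PPM")  # ~3.4 PPM
```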

Given the specification limit, SL, the process sigma level, or process Z, is:

Z = (x – μ) / σ = (SL – μ) / σ

In this example, the process sigma level for a specification limit of 31 seconds is:

Z = (SL – μ) / σ

Z  = (31 – 25) / 2 = 3

Therefore, the process is at a 3-sigma quality level. In order to bring the process to the golden Six Sigma quality level, the process sigma would have to be reduced to 1.

Z = (31 – 25) / 1 = 6

In general, the Z formula can be rearranged to calculate the maximum allowable process sigma, or standard deviation, for any sigma level.

Z = (x – μ) / σ

σ = (x – μ ) / Z

For example, given a mean of 25 seconds and SL of 31 seconds, for a Six Sigma quality level, the required process sigma is calculated as:

σ = (31 – 25) / 6 = 1

Similarly, for a 3-sigma quality level, the process sigma must be:

σ = (31 – 25 ) / 3 = 2
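These conversions are simple enough to capture in code. Below is a minimal sketch; the function names are illustrative choices of mine, not from any standard library:

```python
# Convert between process sigma (standard deviation) and sigma level Z
# for a one-sided specification limit, per the formulas above.
def sigma_level(spec_limit: float, mean: float, std_dev: float) -> float:
    """Z = (SL - mu) / sigma."""
    return (spec_limit - mean) / std_dev

def required_std_dev(spec_limit: float, mean: float, z: float) -> float:
    """sigma = (SL - mu) / Z, the largest std dev that still meets level Z."""
    return (spec_limit - mean) / z

print(sigma_level(31, 25, 2))       # 3.0 -> a 3-sigma process
print(required_std_dev(31, 25, 6))  # 1.0 -> std dev needed for Six Sigma
print(required_std_dev(31, 25, 3))  # 2.0 -> std dev needed for 3 sigma
```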

Referring back to the short- and long-term behavior of the process mean, there are two values for Z: short-term Z, or Zst, and long-term Z, or Zlt.

Zlt = Zst – 1.5

In sigma level calculations, use Zst. A Six Sigma process is 6 sigma in the short term and 4.5 sigma in the long term or:

Zst = 6

Zlt = Zst – 1.5 = 4.5

Clarifying Process Sigma and Sigma Level

Sometimes the term process sigma is used instead of the process sigma level, which may cause confusion. Process sigma indicates the process variation (i.e., standard deviation) and is measured in terms of data units (such as seconds or millimeters), while process sigma count Z, or process sigma level, is a count with no unit of measure.

Process Capability and Six Sigma

Another measure of process quality is process capability, or Cp, which is the specification width (distance between the specification limits) divided by 6 times the standard deviation.

Cp = (Upper SL – Lower SL) / 6σ

The recommended minimum or acceptable value of Cp is 1.33. In terms of Six Sigma, this process capability is equivalent to a sigma level of 4 and long-term defect rate of 6,210 PPM. Process capability for a Six Sigma process is 2.
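As a rough check of these figures, here is a short sketch; it assumes a centered process, so that the sigma level is Z = 3 × Cp, and uses SciPy for the normal tail probability:

```python
# Verify the Cp figures quoted above for a centered process.
from scipy.stats import norm

def cp(upper_sl: float, lower_sl: float, std_dev: float) -> float:
    """Process capability: specification width over six standard deviations."""
    return (upper_sl - lower_sl) / (6 * std_dev)

# Example values chosen so that Cp = 8 / 6 ~ 1.33.
print(f"Cp = {cp(upper_sl=29, lower_sl=21, std_dev=1.0):.2f}")

z_short = 3 * cp(29, 21, 1.0)            # sigma level ~4 when centered
ppm_long = norm.sf(z_short - 1.5) * 1e6  # long-term, after 1.5-sigma shift
print(f"{ppm_long:,.0f} PPM")            # ~6,210 PPM

# For a Six Sigma process the spec width is 12 sigma, so Cp = 12/6 = 2.
```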

Why a Robust S&OP Process is Critical to Delivery Performance – and Key Factory Metrics

I’ve always thought that delivery performance is the metric that best reflects how well integrated, and how interdependent, the various functions are when we expect to perform consistently at a high level. From a shop-floor perspective it is easy to see whether the operation is synchronized and well managed. It is also clear when many of the problems on the floor have been sent there by others, leaving the planned schedule impossible to execute.

It’s true that if the plant is not performing well on the metrics we’re covering in this series of articles, the chances of consistently delivering on time to customers are pretty remote. But I would offer that you can fix all of the quality problems, maintenance issues, materials issues, etc., and still have little chance of ever sustaining outstanding delivery performance. Why? Because many companies lack a formal, collaborative and robust sales and operations planning (S&OP) process. The result is that schedules get sent to the shop floor and, in aggregate, are impossible to execute.

S&OP is a critical business process, yet I continue to be amazed that most businesses I visit do not have a mature S&OP process — if they have one at all. This process is absolutely foundational for operational excellence, for breaking down silos, for creating team culture and for taking much better care of our customers. And S&OP has been around for over four decades!

If your business truly expects delivery problems to become rare events, then build this critical infrastructure, require cross-functional collaboration and send executable schedules to the shop floor. And measure the accuracy of performance vs. plan for sales, product mix and operating margins. This process has to be understood and fully supported as a career-long commitment by senior management. There must be the same relentless effort to improve the S&OP process as there is in the factories to fix their process failures.

Finally, S&OP outcomes are directly tied to the operational plan and its execution. No more finger pointing when the plan is missed. The team makes the plan and the team executes the plan. Alignment is not optional.

Here’s an example of a typical planning effort. No manufacturing operation has a chance of delivering outstanding service to customers without accurate data, organized by product groupings (with common product and process parameters, i.e., value streams), for use in a formal sales and operations planning (S&OP) process. Typically the core group in a monthly S&OP meeting consists of leaders from sales, customer service, marketing, scheduling and accounting, while other functions, such as process engineering, supply chain, purchasing and design engineering, attend as ad hoc members when the issues at hand require it.

The outcome of the monthly S&OP process is a consensus forecast, a contract if you will, between all parties on a rolling three-month basis (or whatever timeframe makes sense for your industry and the associated lead times). Accounting rolls up the plan to forecast volumes/mix, costs, margins, earnings, etc., so that all key functions are signed up to deliver one plan. This is the quarterly plan for sales volume/mix, checked against demonstrated capacity: raw materials availability and production capacity, i.e., machines, labor and scheduled shifts.

All of these things must be accounted for in the plan that is released to manufacturing for execution. Too many schedules are missed because the schedule that got “thrown over the wall” was not executable from Day 1 against the above criteria. Yet the shop floor typically catches hell for missing schedules even when the orders were overscheduled in the plan. This is a mindless circle I see repeated week after week in lots of factories. The plan must be in sync with the constraints or it simply cannot be executed. Capacity is not infinite; it is finite at the demonstrated level until actions are taken to expand it.

In addition to the measurements on the S&OP process, the factory metrics below measure the shop’s ability to execute the plan.

Work Order Performance to Current Schedule: The measurement is expressed as the percentage of work orders completed on the day, or within the week, of their current schedule date, e.g., six total orders delivered on time out of 10 orders due and past due = 60% on time. The measurement includes only released work orders for finished goods or sub-assemblies.

Work Order Performance to Original Schedule: The measurement is expressed as the percentage of work orders completed by their original schedule date, i.e., the original due date assigned to the work order. For example, three original schedules delivered on time out of six original schedules = 50% in the current week. (The other order that was delivered was past due in this example.) The measurement includes only released work orders for finished goods or sub-assemblies.

The first metric measures performance against both original schedules and all past-due items that carry over from prior periods. It encourages moving past dues to the front of the queue, as they should be. If a past-due order is missed 10 times, it gets reported as missed 10 times; there is no delivery credit until it’s out the door to the customer. The second metric exists to understand why original commitments to customers are not being met a high percentage of the time. There is only one chance to get credit for on-time delivery with this metric. The challenge is to close the gap between these two metrics and move toward great performance to the original schedule commitment, delighting customers.
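Here is a minimal sketch of how these two metrics could be computed; the WorkOrder fields and dates are illustrative assumptions, not a standard schema:

```python
# Compute on-time percentages against current and original schedule dates.
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrder:
    original_due: date      # due date first assigned to the order
    current_due: date       # due date after any reschedules
    completed: date | None  # None if still open or past due

def pct_on_time(orders: list[WorkOrder], use_original: bool) -> float:
    """Share of due-or-past-due orders completed by the chosen due date."""
    due = [(o, o.original_due if use_original else o.current_due)
           for o in orders]
    on_time = sum(1 for o, d in due if o.completed and o.completed <= d)
    return 100 * on_time / len(due) if due else 0.0

orders = [
    WorkOrder(date(2024, 5, 1), date(2024, 5, 1), date(2024, 5, 1)),
    WorkOrder(date(2024, 5, 2), date(2024, 5, 9), date(2024, 5, 9)),
    WorkOrder(date(2024, 5, 3), date(2024, 5, 10), None),  # still past due
]
print(f"vs current:  {pct_on_time(orders, use_original=False):.0f}%")  # 67%
print(f"vs original: {pct_on_time(orders, use_original=True):.0f}%")   # 33%
```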

As with all the other factory execution measurements, be sure to build the supporting soft infrastructure, including reason codes for missed schedules. This enables easy sorting and analysis of which orders are being missed and why. This data, in concert with quality data and rolled-throughput-yield analysis, will help further refine the assignment of scarce resources for corrective actions. It’s one more way of making sure you’re addressing the processes, the products and the specific issues that are causing the plant’s unreliability. Obviously, this is yet another rich opportunity to significantly improve manufacturing performance.

There are a number of other metrics, typically kept by customer service, that I encourage you to tap into. What is the plant’s record on product returns and the costs associated with them? What is the cost of price adjustments made because of poor plant performance, e.g., late delivery or rejected product? And what is the S&OP team’s record for releasing executable plans to the factories? How well are the factories executing those plans?

We’ve now reviewed safety, quality, inventory and delivery performance. The next installment will address measuring productivity.

“Even if you’re on the right track you’ll get run over if you just sit there.” — Will Rogers, humorist

“I’ve never been satisfied with anything we’ve ever built. I’ve felt that dissatisfaction is the basis of progress. When we become satisfied in business we become obsolete.” — Bill Marriott Sr., former CEO, Marriott Hotels