Next Generation Lean: Why Lean Too Often Requires a Leap of Faith

 

The first six articles of this seven-part series on Next Generation Lean laid out a pretty detailed justification for current Lean to evolve. They also laid out a strategy, metric and process that could form the basis of that evolution. In those articles an honest effort was made to base the discussion on generally accepted facts rather than opinions.

I’ll use this seventh and last article to give my view on the state of Lean practice. Another way of saying this is that readers should regard the following as a “Letter to the Editor” about what, in my estimation, have been (and continue to be) the top three barriers to the advancement of Lean practice. Realize that opinions are just that, and there is little doubt that expressing mine will result in some ruffled feathers!

The Blaming of Management

In their heart of hearts, I believe most Lean advocates understand that Lean has—for the most part—over-promised and under-delivered. One outgrowth of this is that rather than personally accepting and trying to address Lean’s shortfalls, most of these same people have instead tended to point the blame finger at others. And the prime target of much of this finger-pointing seems to be the very managers from whom they are trying to gain support! A strategy of assigning culpability to those you are trying to influence usually isn’t very effective, at least in the world I live in. Further, as my mother used to tell me, “When you point a blame finger at someone else, you should recognize that three of your fingers are pointing back at you.” In other words, there may be a lot of blame to spread around for why Lean as a construct has not moved forward, but today’s Lean community needs to appreciate that it has contributed to this outcome by failing to allow and/or lead the evolution of Lean.

Long-time readers of my column know that I have criticized various facets of current corporate philosophy, but the negative attitude towards Lean is not one of them. All that management has asked of Lean is that it “show them the money,” something that up until now hasn’t consistently been accomplished. Lean practitioners seem to want their managers to accept on faith that supporting Lean will provide an acceptable ROI, or at least that it will over time. I have two points to make in response to this position. First, most managers today are not given extended periods of time—years, or even months—to produce positive impacts. Second, unless things have changed significantly since I was an executive, investments—capital or otherwise—generally aren’t made based on leaps-of-faith.

Take my word for it. There are many, many executives who have supported Lean initiatives only to be let down by their impact. These failures have negatively influenced their reputations and, in some cases, their careers. One result of this is that while you’ll seldom hear higher level executives publicly express negativity about Lean, in discussions between executive peers there is a growing disdain for it as a cost management strategy. Without executive support Lean will not remain relevant. That is the primary reason that Lean needs to evolve.

Academics Need to Step Up to the Plate

Over the last two decades I’ve had a significant amount of interaction with academics in support of developing better business strategies. Overall I’ve been pretty satisfied with the results of these collaborations. As a consequence I have, in general, a high regard for academia. With that being said, though, I am of the opinion that the academic community has dropped the ball relative to Lean. What do I mean by this?

I have previously stated that managers today need to take leaps of faith to justify Lean initiatives. This is because it is rare for Lean practitioners to be able to provide solid ROI projections for the initiatives they advocate. Sure, academics have documented hundreds, if not thousands, of case studies that outline how specific companies have benefited from Lean transformations. This is a good first step. However, individual success stories are usually not effective in developing sound ROI projections. What is needed instead is a more general analysis of accumulated Lean impact data. I’m talking about analyzing data from a statistically sound sample of individual case studies such that the projected impacts of specific Lean activities can be tied to the various metrics used by executives in making business decisions. Why is this important? Executives are generally not willing to wait until the completion of a multi-year transformation to see if an initiative they have sponsored (and maybe staked their career on) will produce the necessary business results.

Impact projections are required to financially justify investments of corporate money. Granted, basing impact projections on statistically sound sample sizes takes more work than writing up individual case studies, but that’s “where the rubber meets the road” relative to gaining management support. And historically, it is the role of academics to deliver this type of process analysis.

Another beef I have with academics is that they, for the most part, have seemed to be along for the ride. The focus of this series has been on the need for Lean to evolve. Where are the academic results that can be cited proposing anything comparable to the strategy, metric, process and tools outlined in this series?  There aren’t any, at least as far as I have seen. Rather, academia has tended to embrace the current practice of Lean, ignoring rather than addressing its shortfalls. Sure, there are exceptions to this, but for the most part academia has let Lean down in addressing process-related concerns.

I’ll give you a simple example of what I mean. To do this I’ll use two questions posed earlier in this series relative to Lean. Specifically, managers often want to know:

  • How far along are they on their Lean initiative?
  • Related to this, how do they know when it will be done?

In this series I’ve proposed concrete ways to answer both of these questions. In talking to both practitioners and academics I’ve heard all of the non-answer answers, such as, “Lean is a journey—you’ll know you’re there when you get there.”

That type of justification doesn’t fly in the corporate world and it’s unrealistic to think that it should. On the other hand, you’d expect that basic questions of this nature would be exactly the type that academics would want to address. In my experience, they haven’t. And until they do, executive level managers will continue to be expected to take leaps of faith to support expenditures on Lean—something they are usually loath to do.

There Really is No Existing Lean Community

It almost seems as if, soon after its initial launch, the people associated with Lean decided that the practice was perfect and needed no further development. And in order to protect its purity they decided to etch the details of its practice in stone. I know this sounds a bit sarcastic, but take the case of Value Stream Maps. VSMs represent a pretty basic—and effective—approach to defining material and communication flow. Although their template was for the most part formalized nearly two decades ago, they are still the most frequently used tool by Lean practitioners. My opinion is that the existing Lean infrastructure is a bit too top-down and tends to be more interested in rearranging the deck chairs on the Titanic rather than actually acting as a strong advocate for increasing the impact of Lean practice, i.e., supporting its evolution.

One of the most effective strategies for getting constructive input and developing best practices is through Communities of Practice, comprised of top-notch practitioners who are willing to objectively discuss the problems they have faced as well as to share practices they have found successful. This is a more bottom-up approach and for that reason has a better chance of identifying and addressing Lean’s strategy, metric, process and tool needs. I would like to see a Community of Practice formed to take on a leadership role in the development of Next Generation Lean.

There would be a role in this for today’s Lean institutions. They could provide infrastructure by creating and financially supporting such a group. Based on my experience with multiple Communities of Practice, I’d recommend it be comprised of a manageable number—no more than two dozen members—who would represent most Lean practice interest groups, i.e., OEMs, small and medium-sized manufacturers, consultancies and academia.

Lean Order Fulfillment

I’ve invested a lot of time and effort in putting together this seven-part, 15,000-word series. And I didn’t do it for fortune or fame. Rather, I wanted to share what I learned through my 20 years of experience in applying Lean at both OEMs and their suppliers. The basis of what I have laid out was learned during a seven-year assignment with a large OEM—believe me, a true school-of-hard-knocks experience. Our first application of the approach we developed was in positioning that OEM for entry into the Big Box marketing channel. We understood that in order to deliver the profits we needed to justify this move we needed to both increase the Lean-ness of our supply chain and to Lean-up our distribution—our own factories were already operationally efficient.

I was responsible for planning and overseeing the supply chain side of the project work. A general description of this was published as a two-part series (Oct.-Nov. 2013) in Industrial Engineer entitled, “Lean’s Trinity.” The distribution side was planned and overseen by a colleague of mine, and his work is described in an article in Interfaces (Jan.-Feb. 2005) entitled, “Improving Asset and Order Fulfillment at…”

I have two comments to make about this second article:

1. First, the sole comment in that article regarding supply chain was that Lean order fulfillment requires “a responsive supply chain with a manufacturing lead time* of three (weeks) or less.” (*The manufacturing lead time referred to was MCT, or Manufacturing Critical-path Time.)

When this article first came out I complimented my colleague—the one who led the distribution side of the initiative—on how efficiently he had summarized seven years of intense supplier-development Lean activity in a single sentence!

2. Many of the executive-level financial impacts alluded to in this series were delivered by this initiative, and are provided in great detail in the article. In fact, when the article was first published I was a bit surprised that my colleague received corporate approval to provide cost reduction results in such detail.

If you are interested in reading either of these two articles, drop me a line—I have a limited number of copies that I am willing to send out.

There is a third document I will include—a reference PowerPoint entitled, “Did Toyota fool the Lean decade community”

I’m not big on conspiracy theories, but I do believe it’s credible that Toyota might not have shared ALL of what it considered to be a corporate competitive advantage when it allowed other companies to benchmark and otherwise study its Toyota Production System. Why? I find it interesting that order fulfillment overview comments by Toyota executives (some of which were cited in the third article of this series, “Next Generation Lean Strategy”) tend to focus on “time to the customer,” but nowhere in what they share do you find a definition of their metric for measuring this. I’m pretty sure they have one.

Quantifying the Financial Benefits of Quality: Bringing Suppliers into the Fold


 

In part two of this series, we discussed the effect governance and reporting practices have on the level of financial benefits organizations derive from their quality efforts. What we found was that, similar to findings in earlier studies, quality governance models and transparency (reporting and standardized measures) improve the efficacy of organizations’ quality efforts.

Given that transparency and cross-functional integration help improve the financial benefits of quality, we also wanted to understand what happens when organizations extend those factors (transparency and collaboration) to partners outside of the organization through training. Hence in this article, we will discuss the relationship between quality training for suppliers and the financial benefits of quality.

Training Suppliers and Financial Value

Suppliers play a major role in any organization’s quality—they provide the materials necessary to create products. To understand the relationship between suppliers and quality efforts, respondents were asked to indicate which suppliers they train on the organization’s quality management practices.

Surprisingly, the vast majority of organizations do not provide any training for their suppliers on their quality management system. Of those that do provide training, it is often limited to tier one suppliers. This means most organizations are missing out on opportunities to ensure transparency between the organizations, create a common language and set expectations with suppliers on quality needs. To test where organizations need to invest in training, we analyzed which tiers of suppliers were provided training against the organizations’ financial benefits of quality (Figure 1).

Figure 1: Training by Tiers and Financial Benefits of Quality

There is a case to be made that training for any suppliers will generally improve the organization’s financial benefits—illustrated by the increase, on average, from a financial benefits range of $100,000 to $500,000 for those who do not provide training to a range of $500,001 to $1 million for those that do. However, the largest improvement in financial benefits comes from training the often overlooked tier three suppliers. This does not mean that organizations should ignore their tier one and two suppliers, but instead should extend their training to their tier three suppliers.
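To make the kind of analysis behind Figure 1 concrete, here is a minimal sketch of the cross-tabulation involved. The column names and records below are hypothetical, invented only for illustration; they are not the study’s actual data.

```python
# Hypothetical sketch of a Figure 1-style cross-tabulation: deepest supplier
# tier trained on quality vs. the financial-benefit bracket each respondent
# reports. Columns and values are invented for illustration only.
import pandas as pd

survey = pd.DataFrame({
    "training_tier": ["none", "tier 1", "tier 1", "tier 2", "tier 3", "none"],
    "benefit_range": ["$100K-$500K", "$500K-$1M", "$500K-$1M",
                      "$500K-$1M", ">$1M", "$100K-$500K"],
})

# Count respondents in each (training tier, benefit bracket) cell.
print(pd.crosstab(survey["training_tier"], survey["benefit_range"]))
```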

However, this then raised the question: What types of training should organizations offer their suppliers?

What Training Matters?

In addition to the general ideas of transparency and common language, organizations need to help their suppliers understand the impact that defects or other setbacks like delays will have on their end customer. This helps the organizations eliminate waste and ultimately improve the efficiency and quality of their products, resulting in higher customer satisfaction and potential price premiums. This theory leads to the hypothesis that organizations that train their suppliers on their quality culture and the end use of their products would see the largest financial benefits. To test this hypothesis, we ran the types of training offered to suppliers against the financial benefits of quality (Figure 2).

Figure 2: Gap Analysis: Training Topics and Impact on Financial Benefits

What we found was that all the types of training discussed in the survey correlated with improved financial benefits. However, the general hypothesis that organizations providing training focused on the organization’s quality values, policies, and end product use would reap higher financial benefits was not supported. Instead, organizations that provided training on their quality KPIs and technology systems (e.g., ERP systems) were more likely to reap higher financial benefits. Though additional value can be created through a shared understanding of customers and quality culture, organizations should start with ensuring clarity around foundational items such as measures and reporting technology.

Conclusion

Best-in-class quality organizations use training with their suppliers to drive quality and are twice as likely to train suppliers. Supplier training ensures that all critical parties in the value chain understand the organization’s standard of quality, including its quality measures and their efficacy, and what it wants to achieve with its product offerings. In the final article of this series, we will look more closely at the relationship between incentives and training for staff and the financial benefits of quality.

 

Tips and Suggestions for Six Sigma Project Success


 

Tools and methodology will only get a person so far. Experience gained from the practical implementation of Six Sigma solutions is priceless. A list of tips, tools and suggestions for Six Sigma practitioners can help avoid many pitfalls of project management.

Some fundamental points for project success are:

  1. Planning project work well.
  2. Determining the exact scope of the work and the required/desired outcomes.
  3. Developing a proper fact-based understanding of the problem.
  4. Leveraging creative tools to develop the highest quality imaginative ideas.
  5. Leveraging selection tools and decision making tools to identify the most appropriate solutions.
  6. Managing stakeholders well, involving them and planning their involvement.
  7. Planning and executing implementation with great care.
  8. Ensuring that benefits are calculated and extracted.
  9. Handing over a complete sustainable finished product to the business.

These issues can be better managed when using a proven methodology like Six Sigma.

Point 1

Planning project work well: Projects with a relatively short timeframe (e.g., three months) require a disciplined approach to planning. Just measuring a problem properly or testing a solution properly may require considerable time, given that many processes have weekly, monthly, quarterly or annual peaks and troughs in volume, or experience other types of seasonal variation.

  • Develop a conceptual plan for the whole project within the first week.
  • In the conceptual plan, identify milestones or key events.
  • Schedule future key meetings with stakeholders based upon that conceptual plan.
  • Arrange these key meetings for the whole project as early as possible (e.g., within the first two weeks).
  • Develop a detailed plan for the first phase within the first week.
  • Avoid overcomplicating the plan. In many cases it is better to be approximately right than precisely wrong.
  • By considering the tools to be used (brainstorming, affinity diagrams, etc), more accurate timeframe estimates will be possible.
  • Try to imagine/estimate/guess the type of outputs that will be produced from each event. (For example, about 100 ideas from brainstorming will probably take about 15 minutes to plot into an affinity diagram.) This helps in estimating timeframes.
  • Arrange all workshops and meetings for the first phase immediately or as soon as possible.
  • Plan the subsequent phases in as much detail as practical and make sure the detailed and conceptual plans align.
  • Do not put the plan in a bottom drawer; it is for daily use.
  • Block out the time on a written schedule for all events, including thinking time. Avoid being driven by other people’s agendas (e.g., block out a day for planning the next phase).

Point 2

Determining the exact scope of the work and the required/desired outcomes: The scope of a project will without doubt change as the team leader or team members develop a better understanding of the problem. However, it is important that once there is a basic understanding of the problem and it has been discussed with the project sponsor, the scope should be locked in. Then that scope should only be changed with sponsor agreement and after carefully considering the pros and cons of the decision. Consider these steps:

  • Create a working scope document from Day 1.
  • Try to get a basic understanding of the problem as soon as possible.
  • Think about the scope that would give the greatest benefit for the effort required.
  • Make sure the project scope is practical. There are plenty of small initiatives that can have massive impact on business performance. Try to identify these.
  • Consider who would be the best sponsor for such an initiative.
  • Discuss this with the project’s current sponsor. Enlist their help to find the right sponsor if necessary.
  • Use a simple in-scope/out-of-scope table.
  • Pay particular attention to out-of-scope items.
  • Make sure the scope is communicated to all key stakeholders.

Point 3

Developing a proper fact-based understanding of the problem: Obviously if project leaders do not understand the problems properly, they will be unable to fix them. The most common mistakes in this area are:

  • Relying on “folklore” as the basis of understanding the problem.
  • Being an intellectual snob, that is, thinking the cause of the problem is obvious.
  • Taking bosses’ or sponsors’ or other key persons’ interpretation of the problem as fact.

To avoid making these mistakes, adhere to the DMAIC (Define, Measure, Analyze, Improve, Control) roadmap. A strength of DMAIC is that it produces a fact-based understanding of problems.

Points 4 and 5

Leveraging creative tools to develop the highest quality imaginative ideas, and leveraging selection tools and decision-making tools to identify the most appropriate solutions: Once the problem is properly understood, the next job is to find a good solution. The most common mistakes made in this area are:

  • Jumping on the first idea as the optimal solution.
  • Accepting “folklore” ideas as viable solutions.
  • Thinking that the solution to the problem is obvious and not exerting any effort to identify alternatives.
  • Taking your bosses’ or sponsors’ or other key persons’ ideas for solutions as the best ones.

Avoid making these mistakes by:

  • Adhering to the DMAIC roadmap – another strength of the DMAIC methodology is in the development and selection of ideas.
  • Leveraging the toolkit for idea generation and idea selection tools.
  • Avoiding silent brainstorming.
  • Using a warm up “non-work” brainstorm prior to generating ideas in work-focused brainstorms.
  • Applying brainstorming rules rigorously – especially the “no judgment” rule.
  • Inviting cross sections of people to the brainstorm session, including “wild card” invitees.
  • Reviewing and using lateral thinking techniques (as developed by Edward De Bono).
    • Exploring ideas known to be wrong to see where they may lead.
    • Considering ideas that at first glance appear opposed to logic.
    • Planting random words into brainstorming sessions.
    • Suggesting the opposite of the last suggestion in a brainstorming session.
  • Using decision-making tools like nominal group technique to quickly reduce the options.
  • Trusting the idea generation and selection process.

Point 6

Managing stakeholders well, involving them and planning their involvement: A project’s stakeholder group will play a significant role in the success or failure of the project. Failure to manage the stakeholder group properly is the number one non-technical cause of project failure. Common mistakes include:

  • Failing to identify a significant key stakeholder(s).
  • Underestimating the power and influence of a stakeholder(s).
  • Failing to identify a negative stakeholder(s).
  • Failing to develop a proper management plan for stakeholders.
  • Ignoring stakeholder issues.
  • Doing whatever stakeholders want.

Avoid these mistakes by:

  • Taking stakeholder management very seriously.
  • Involving others to help identify stakeholders.
  • Making stakeholder identification part of every meeting.
  • Encouraging the project team to be frank about the nature of stakeholders.
  • Treating stakeholders with respect, but not being driven by them.
  • Developing individual management plans for key stakeholders that represent any type of risk or opportunity.

Point 7

Planning and executing implementation with great care: The implementation of change is often the most poorly managed phase of any project. This is clearly the most important phase as it is the phase where ideas come to life and the benefits start to be realized. Failure to implement an idea properly makes all prior work appear futile. It is irrelevant how good an idea is if it is not implemented properly.

Common mistakes here are:

  • Attempting to implement fanciful solutions that cannot or do not work in the real world.
  • Poor planning (e.g., failing to take into account that to train 1,000 call center staff might take many months and cost millions of dollars).
  • Failing to understand the magnitude of the task.
  • Avoiding consideration of details.

Avoid these mistakes by:

  • Planning thoroughly and in great detail.
  • Involving supervisors and line staff in the planning.
  • Testing implementation methods and tools prior to implementation.
  • Involving training, human resources and business leaders in support of the implementation.

Point 8

Ensuring that benefits are calculated and extracted: Understanding the benefits of a project improvement is extremely important. Apart from the obvious importance to the company’s financial performance, benefits also can be a powerful tool to use in stakeholder management. Understanding benefits can be difficult; extracting those benefits can be even more difficult. If the benefits of an improvement cannot be identified, then one can only conclude that the change being proposed is not an improvement at all. Therefore do not implement it.

It is worth remembering that it is impossible to have an improvement that does not have a financial benefit.

Common mistakes in this area include:

  • Being unwilling to make estimates.
  • Being unwilling to take on the hard tasks (manual counts, grunt work, etc.) that are required to gather the required statistics.
  • Being unable to find the information required.
  • Being unwilling to facilitate, encourage, coerce others to assist in these efforts.
  • Calculating theoretical benefits but not having agreement about benefit extraction.
  • Accepting stakeholders’ politically driven explanations regarding why a particular benefit is not achievable.

Avoid these mistakes by:

  • Making estimates early. This is a good way of encouraging involvement and feedback. (It is interesting how enthusiastic people get about proving someone wrong.)
  • Building time into the project plan to gather benefits information and to calculate benefits.
  • Building time into the project plan to plan benefit extraction.
  • Building milestones into the plan where benefit estimates will be updated – and keep to it.
  • Attaching confidence levels (or ranges) to estimates (e.g., $100k per annum, +/- $60k).
  • Making sure the sponsor and other key stakeholders know the level of confidence in all estimates.
  • Working closely with stakeholders to develop benefit extraction plans.
  • Challenging stakeholders when explanations do not make sense.
  • Being creative about how benefits are accessed.
  • Spending time identifying who has the information the project requires.
  • Being prepared to manually take samples, gather information, etc.

Point 9

Handing over a complete sustainable finished product to the business: In order to close the project, the project team will need to have a finished product that can be sustained by the business without the special intervention of the project manager or project team.

Common mistakes here include:

  • Handing over an incomplete product because the project budget has run out.
  • Letting the project timeline slip, and therefore not allowing sufficient time for this part of the project.
  • Not agreeing on the handover to the business.
  • Failing to recognize the tasks involved.

Avoid these mistakes by:

  • Following the Six Sigma DMAIC methodology for the Control phase.
  • Ensuring the project meets its deadlines.
  • Building into the plan sufficient time for this phase.

Are You Really Ready to Make A Change?


“A new study by Towers Watson has found that only 25% of change management initiatives are successful over the long term.” While this may come as no shock, since substantive change in organizations with entrenched cultures is always difficult, much of the problem starts with the management group: although leadership often knows there is a need for change, it frequently initiates the change process with an unclear vision of the change, poor planning, and unclear communications. That poor planning, unclear communication, and poor execution cause a lot of fear in the organization about what the change will do to the current status quo, and whether it will be better than the current mediocre reality.

In addition, employees usually resist any change effort, no matter how small, for the following five reasons:

  1. Fear of the unknown/surprise.
  2. Mistrust
  3. Loss of job security/control
  4. Bad timing
  5. An individual’s negative predisposition toward change

Through our experience in making successful organizational change, we have found the following questions provide a useful guide in helping to think through a change initiative before embarking on it, while also minimizing the resistance to change. The questions deal with issues and concerns before the change starts (BC), during the change process (DC), and after the change has been made (AC), at all levels that are affected by the change. This is shown in the Change Question Checksheet in Figure 1.

Change Vision and Message Questions

  • Do I have a clear vision of the change to be made?
  • Do I have a clear and concise message about the change?
  • Do I have the ability to articulate the change message to all levels of the organization?
  • Do I have sufficient passion for this change to be its champion?
  • Does the message fully and concisely explain the value of the change?
  • Is the message believable to all audiences?
  • Have we made it clear what will change?
  • Do we understand the scale of the change, including potential unintended consequences?
  • Have we established a sense of urgency for the need for this change?

Change Goals:

  • Do we have goal clarity and know exactly what we want to achieve?
  • Are the change goals realistic?
  • Are the goals believable?
  • Can we measure our achievements and progress?
  • Do we have goal alignment to our strategic plan?

Change Plan:

  • Do we have a change plan that sets up a series of quick wins to build momentum or are we trying to hit a home run?
  • Do we understand and have a plan to deal with the technical challenges of the change?
  • Do we have the right systems in place to support the change?
  • Do we understand and have a plan to deal with the adaptive changes people will have to make?
  • Are we clear about the adaptive changes to be made?
  • Are we always discussing the conceptual side of the change and not the details of how we will do it? Remember, the devil is in the details.

Management:

  • Is the management team on board and ready to support the change?
  • Will the management team roll up their sleeves and get fully involved?
  • Will senior management demonstrate a behavior that is fully supportive of the change initiative and walk the talk, not talk the walk?
  • Do the employees trust us?
  • Do we have a way to anchor every change gain we make and not let it slip back to the old status quo?
  • Have we given managers and supervisors the information to really understand the reason for the change and are they able to translate that message to the people that report to them?
  • Are we listening to people’s concerns and reacting to them rather than dismissing them or failing to “hear” them?

People and Change Teams:

  • Do people understand how the change will impact them?
  • Do people understand what they will gain and lose in this change?
  • Do we have the right talent to make the change?
  • Do we have training available to assist in the change?
  • Have we given people reasons to buy in and be engaged with the change?
  • What resistance are we encountering?
  • Can we hold people accountable for making or not making the change?
  • Do we need some coaching to help make the change?
  •  Do we have informed, passionate, and engaged change teams in place?

Barrier Removal:

  • Have we cleared the underbrush and removed the weeds that derail change?
  • Have we eliminated mid-management doubt and resistance and do we have their commitment to the change initiative?
  • Have we addressed people’s fears in tangible ways?

 

Change Question Checksheet

Figure 1 presents the questions above as a checksheet. For each question in the six categories (Change Vision and Message, Change Goals, Change Plan, Management, People and Change Teams, and Barrier Removal), plus any questions of your own, the checksheet provides columns for recording when to ask it (Before Change, During Change, After Change) and the current readiness answer (Yes, Maybe, No, NA).

Figure 1

Summary:

Successful change comes from developing an organizational atmosphere that is creative, risk-taking, enthusiastic, reflective, and involved, and that inspires people to change. To achieve this elusive set of critical ingredients, the organization must go for the quick wins and create a positive change momentum that contributes to a successful change process. Building and sustaining a conducive change environment cannot be achieved without careful planning before, during, and after the change initiative. It requires the change leaders to be constantly questioning how things are going, whether support is waning, whether people are continually engaged, and whether we are making clear mid-course adjustments, based on what we are hearing, seeing, and sensing, that will help accelerate the change initiative.

Many questions were listed, but one you should ask at the end of a successful change is, “When will we be ready for more change?” Change never stops or takes a holiday; we constantly have to change to survive. As W. Edwards Deming said, “It is not necessary to change. Survival is not mandatory.”

Do not fall into the trap that French journalist Jean-Baptiste Alphonse Karr described in 1849: “Plus ça change, plus c’est la même chose,” or “The more things change, the more they stay the same.” You want a sustained and lasting change in your organization that improves its overall performance and response to customer needs. The organization needs to look and feel different after the change initiative is completed. A return to the old status quo is not acceptable. Be part of the 25% of change management initiatives that are successful over the long term.

The 5 Step BPM Implementation Cycle

Let us admit the fact that manual processes are broken, inefficient, time-consuming, prone to errors, and come with a whole baggage of problems. This lands many companies in a pool of undesirable effects that do nobody any good. Hence, companies that want to be efficient and smart turn towards business process management (BPM) as a solution to step up their game through process automation. One of the biggest advantages it offers is the ability to orchestrate their existing IT better. It helps seamlessly integrate the several pillars of an organization’s IT support system to help derive maximum ROI out of it.

But is it that simple? Why is BPM considered quite complex and time-consuming? Well, there are several layers in an organization that need to buy in to a drastic change in operating procedures. A BPM solution means a major change in the regular day-to-day operational activities that could affect tens or hundreds of users. This means that a lot of care has to go into making the right choice of tool that users will find easy to adapt to. It has the potential to truly change their entire way of operating.


But implementing a BPM solution is a huge responsibility layered with many challenges. Ideally, the implementation process begins with identifying the requirements and ends in executing the tool to meet the desired organizational goals. Interestingly, companies can start small and wrap their heads around a few processes until they gain the confidence to expand a BPM’s usage to a company-wide scale. In the zeal to get automation implemented, one may tend to overlook the possible damaging outcomes it can have. That must be avoided, as the call for automation is purely specific to the use case. Without a clear purpose, embarking on the exercise can be a hugely expensive mistake.

There are umpteen other considerations from an end-user standpoint depending on the industry. This is particularly true in the case of industries like manufacturing, where the employees are largely blue-collar workers. How far IT can make inroads into their day-to-day activities is a call organizations need to make. After taking into account all such key considerations, the organization decides to get into the act of implementing a BPM solution. The five most important stages of implementing BPM in an organization are discussed below:

Choice of product

BPM software comes in all sizes and shapes, budgets, and complexities. Selecting the right BPM software for your company is not as easy as picking the top-ranking vendors in Gartner’s Magic Quadrant. What might be a perfect BPM tool for a multitude of other businesses might not be a close fit for your enterprise, because no two companies are the same.

There are some key considerations when evaluating BPM software. The one-size-fits-all approach does not apply in the BPM paradigm at all. After considering the top features that would add maximum value to the organization, the choices are narrowed down.

The selection process usually begins by identifying your requirements and defining your end goals. Are you looking for rich functionality or do you prefer usability over features? Who is going to use the tools, the technical experts or the average business users? The first step to choosing a BPM tool should ideally begin with creating a list of meaningful questions that clarify your needs and the purpose of adopting a BPM tool.

Answering these questions can hugely affect your vendor selection process and save your company from future frustrations.

Trial run

After the initial list of four is made, it is typically narrowed down to two vendors that are considered for detailed examination. This evaluation typically takes a couple of months because of the series of tests involved, such as process simulations, pricing and ROI calculation, user-friendliness, scalability, and so on. Typically, companies form a vendor evaluation team that includes BPM experts, and they choose an existing business process to pilot-run the automation using the chosen products. The team assesses the shortlisted tools from several perspectives such as:

  • How much time does it save for all?
  • How tight is the data security?
  • How much access control does it allow?
  • What is the cost benefit it brings in?
  • Does it help to improve overall operational efficiency?
  • Does it contribute to organizational productivity?

Once the trial run is over, the process naturally moves to picking the most suitable solution that checks all the right boxes for the above questions. When the process owners and the company’s decision makers are on the same page about adopting the most suitable tool, they initiate a purchasing process that involves internal and external stakeholders.

In larger organizations, this might take a while because of the varying processes they have, the complexities of the processes, and the negotiation with the selected vendor based on your company’s requirements.

Installation

Once the purchase is closed, a company sets up the necessary technologies to roll out the BPM tool into its processes. The installation of on-premise BPM software requires active involvement from both the vendor and the participating enterprise teams. The vendor’s hand-holding role is crucial at this stage, because skill gaps in the enterprise may mean it does not fully understand how the BPM product works. This is why your business should also consider a BPM vendor’s customer support and training as a decisive factor during the evaluation process.

This step has a tendency to become complex in larger organizations, especially if they are spread across different geographies. It might encounter initial resistance from the teams and resource lag, but given the right kind of execution and enough preparation, the IT and other technical teams can make the installation robust and ready to deploy on time. At times, language- or code-based customizations may be needed to suit the language and usage preferences of specific geographies. This is particularly true in the case of verticals such as manufacturing.

Evangelization

You might assume that your communication with the vendor about their BPM product is over after the installation process. However, leading vendors stick by their clients until they have trained their teams and gotten them accustomed to using all the features and functionality included in their BPM arsenal.

The deployment stage gives way to evangelization and user training. Existing BPM experts, in partnership with vendor teams, coach the designated process owners, admins, end users, and all other stakeholders to get their hands dirty and learn the basics of the product.

This is the best time for the client company to encourage its BPM participants to ask questions specific to their domain and the processes they will be automating. How do we incorporate conditions or handle exceptions? How do we generate reports? Can the processes be mapped differently? How do we set up an SLA? These are a few questions that require hands-on training, and perhaps proper documentation for future reference.

It is worthwhile for your teams to spend more time with the trainers and get comfortable around knowing the tool inside out rather than rushing through the process and facing difficulties at a later stage.

Integration

Your teams know the product, but how about introducing the new system to the existing applications in your enterprise? Whether your BPM comes pre-integrated with major software or demands API integration, you should ensure that the workflow management system sits well with the legacy systems and other core software in your enterprise network.

Except for the default and basic-level integrations, you might require support from the IT team to connect your BPM software with ADS for authentication (single sign-on) and other similar applications. This is an important step in the BPM’s entry into your organization, because combining the applications of all software creates a powerful synergy in your enterprise and enables the exchange of data between individual systems when it is needed. It also helps the BPM system fit well into the existing IT framework so that it can operate seamlessly within the new environment.

Business process automation is a major leap for every organization to improve its overall efficiency and scale its operations to greater heights. Tedious as it may seem at the initial stages, a proper implementation offers several tangible benefits to the organization and ensures predictable outcomes that are in alignment with the company’s profit goals. Of course, embarking on a BPM journey needs a great deal of conviction at both the strategic and operational levels, but it will have several long-term benefits and help shape a productive, efficient organization. Oftentimes BPM pricing is accused of being opaque, and one of the reasons is the inherent cost involved at each stage in terms of time and resources.

It’s important not to treat the BPM implementation as a “project” with a beginning and a definitive end; it is a continuous learning process of absorbing the BPM solution completely into your organization. Nevertheless, a step-by-step approach to implementing a BPM solution will help achieve excellence in business processes, thereby benefiting the organization at large. The right BPM software is customizable to your needs and offers the functionality to amplify the processes that differentiate your business from the market competition. It makes business outcomes more predictable and holds key stakeholders accountable.

Why Leadership Commitment Won’t Guarantee Lean Success


If you do a Web search on the phrase “Why continuous improvement programs fail,” you’ll get about 8 million “hits,” give or take a few hundred thousand. I haven’t read all 8 million articles, but I’ve read a lot of them on the topic and most point to leadership failure as the root cause of program failure. The usual line is “lack of leadership commitment causes most continuous improvement failures.”

It’s hard to disagree with this, of course. We’ve all seen our share of improvement initiatives that died on the vine when leadership seemed to only be interested in supporting the effort with lip service and not much else. But I’ve also seen improvement initiatives fail, or, at least, run into a lot of difficulty, even when top leadership gave every appearance of being committed to the success of the program. We throw a lot of reasonably good leaders under the bus when we paint them all with the same brush of “lack of commitment.”

So, what’s going on, then, in those cases where the root cause of failure isn’t lack of commitment?

I’ve always thought that one of the things that makes lean difficult to implement is the fact that it’s a strategy wrapped up in a lot of tactics. In other words, it looks easier to implement than it turns out to be because lean strategy is camouflaged by an array of methods and techniques that seem as if they ought to be pretty straightforward. Company leaders, not understanding the strategy at the foundation of lean, strive to implement a few of the techniques, find they are hard to install and even more difficult to sustain, then drop them. Later, they claim that they “tried lean but it just didn’t work for us.”


When lean works well, there is a clear, direct line between the tactics (tools and methods) and the broader strategy. But when a strong linkage between “lean as methods” and “lean as strategy” is never formed in a manager’s mind, he or she never develops the necessary commitment to lean as a strategy. That’s not to say that managers who fail at lean have no strategic vision at all. It is to say that such strategic vision as they do have isn’t in synchrony with what lean is intended to do.

The disconnect between “lean as tactics” and “lean as strategy” is made even more of a problem by “practitioners” and consultants who espouse purposes for lean that it just isn’t designed to meet. Recently, I read an article on LinkedIn that purported to discuss the “dark side” of lean. The author presents an account of a lean implementation. Here’s a small excerpt:

“[The company owner] decided to double his production, building a bigger factory and hiring an additional 50 workers. He did not understand the market or why people bought Swiss Cheese, his sales only increased by 10%. He began to lose a lot of money. In desperation, he hired a manufacturing consultant to implement lean. The consultant suggested he automate his manufacturing process to cut costs.”

Simply and directly put, lean is not a strategy to cut costs, so lean tactics are not designed to directly reduce costs. Lean isn’t about automating or getting rid of personnel. But company leaders listen to writers and consultants who tout lean tools as effective means to cut costs, then implement those tools with the hope that costs will quickly fall. When that doesn’t happen, they lose interest. Lean implementations that have cost cutting as their central purpose will almost certainly fail.

If Not Cost Cuts, Then What?

So, if “cutting costs” isn’t the strategy that lean is meant to achieve, what is? Lean is the building of capabilities that enable the company to get more customers.

Let’s look at a simple example that I hope will illustrate my point. A plant installs shadow boards so that operators will put their tools on them when they are finished using them. We all know that a shadow board keeps tools visible and organized. A shadow board makes it easy to find a tool, to know where to put a tool, to tell if a tool is in use or missing altogether.

The use of a shadow board is directly connected to the broader strategy of increasing market share and improving margins. Here’s the way it works: A shadow board allows operators to retrieve tools more readily. This, in turn, impacts reductions in set-up and changeover times. As set-up/changeover times are reduced, shorter runs are possible. Shorter runs mean reduced inventory, better adherence to production schedules, better on time shipping, improved agility and flexibility in responding to changing demands of customers. Reduced set-up times also provide additional machine capacity, which is how all the new orders that arrive because the company has so markedly improved its service will be produced.
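As a rough illustration of that chain (my own sketch, not a calculation from the article), the classic economic order quantity formula shows how a lower setup or changeover cost supports smaller lots and therefore less average inventory; the demand and cost figures below are assumed.

```python
# Illustrative EOQ-style calculation: cutting setup/changeover cost supports
# smaller lot sizes, which in turn lowers average cycle stock (roughly Q/2).
from math import sqrt

def eoq(annual_demand: float, setup_cost: float, holding_cost: float) -> float:
    """Economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost)

demand = 12_000   # units per year (assumed)
holding = 4.0     # carrying cost per unit per year (assumed)

for setup in (400.0, 100.0):  # before vs. after a changeover-time reduction
    q = eoq(demand, setup, holding)
    print(f"setup cost ${setup:>5.0f}: lot size ~{q:,.0f} units, "
          f"average cycle stock ~{q / 2:,.0f} units")
```

Halving or quartering the effective setup cost shrinks the economical lot size, and with it the inventory that sits between runs, which is the link from the shop-floor tool to the balance sheet.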

Shadow boards themselves, then, are tactical, it’s true. But they directly support and enable a broader strategy. The same is true for all the other elements of lean, from kanban, to 5S, to value stream mapping, to lean teams, to leader standard work…well, you get the idea.

Successful lean implementations, then, start with a clear view of company strategy and the role of lean in supporting that strategy. If company leaders don’t have that clear view, no matter how committed they are initially to implementing tools and methods, they will lose that commitment and the energy that goes with it.

 

Lean Six Sigma to Reduce Excess and Obsolete Inventory

Excess and obsolete inventory write-offs are chronic supply chain problems costing businesses billions of dollars each year. Unfortunately, improvement projects that are deployed to eliminate these problems often have a short-term focus. In other words, the current levels of excess and obsolete inventory are usually addressed, but not the root causes of the problem. Often such inventory is reduced by selling it below standard cost or donating it to charitable organizations. Sometimes competing business priorities keep businesses from developing effective long-term solutions to eliminate the root causes; sometimes it is the difficulty of unraveling the complexity of those root causes.

Lean Six Sigma methods have been shown to be very effective in finding and eliminating root causes, and thus preventing arbitrary year-end reductions in inventory investment.

Higher- and Lower-Level Root Causes

An analysis of excess and obsolete inventory often shows that its major root causes are associated with long lead times, poor forecasting accuracy, quality problems or design obsolescence. However, these higher-level causes can be successively broken down into lower-level root causes as shown in the figure below.

As the figure suggests, from an inventory investment perspective, a long lead time may be caused, in part, by large lot sizes. For example, if the actual lead time or order cycle time is 30 days, but the required lot size for purchase is 90 days of supply (DOS), then this lot size drives a higher average inventory level than lead time by itself. In this case, the average on-hand inventory (neglecting a safety-stock calculation) increases from 15 to 45 DOS assuming a constant usage rate. Of course, the actual reasons for large lot sizes would have to be investigated by a Lean Six Sigma improvement team. The root causes of long lead times also could be due to complicated processes having numerous rework loops and non-value-adding operations as well as scheduling problems and/or late deliveries.
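A minimal sketch of that arithmetic, assuming constant usage and no safety stock: average on-hand inventory is roughly half of the replenishment lot size, so a 90-DOS purchase lot carries about 45 DOS on average, versus about 15 DOS when ordering to the 30-day cycle.

```python
# Average on-hand inventory, in days of supply (DOS), under constant usage and
# no safety stock is roughly half of the replenishment lot size.
def average_on_hand_dos(lot_size_dos: float) -> float:
    return lot_size_dos / 2

print(average_on_hand_dos(30))  # 30-day order cycle  -> ~15 DOS on hand
print(average_on_hand_dos(90))  # 90-DOS purchase lot -> ~45 DOS on hand
```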

The second major cause of excess and obsolete inventory is poor demand management practices. Some lower-level root causes may include inaccurate historical demand data, a poor forecasting modeling methodology or other issues such as overly optimistic sales projections. Lean Six Sigma projects also can be used to attack lower-level root causes in this area. Lean Six Sigma is frequently used to improve quality levels to reduce waste and rework caused by a multitude of diverse factors within a process workflow. Finally, Design for Six Sigma can be used to improve the design processes for new products or services.

Using DMAIC to Find Root Causes

Lean Six Sigma improvement teams can drive to the root causes of their excess and obsolete inventory problem using the DMAIC problem-solving methodology (Define, Measure, Analyze, Improve, Control) in conjunction with Lean tools as well as process workflow models. In fact, building simple Excel-based inventory models, or using off-the-shelf software, is a good way to identify the key process input variables (KPIVs) or drivers of excess and obsolete inventory problems. Inventory models follow a generalized Six Sigma root-cause philosophy: Y = f(x). They also are effective communication vehicles for showing sales, marketing, manufacturing and other supply chain functions the impact of lead time and demand management practices on excess and obsolete inventory.

In an actual improvement project, the team begins an inventory analysis by defining the project’s goals in the Define phase. Using these goals as guidelines, relevant questions are developed to enable the team to understand how the system operates. Data fields corresponding to these questions are identified and extracted from information technology (IT) systems. The data fields are then organized in the form of an inventory model to provide the information necessary to answer the team’s questions and understand the root causes of the inventory problem.

After the Define phase, the team begins to evaluate measurement systems and plan data collection activities. This is the Measure phase of the project. An important activity in this phase is an on-site physical count, by location, of the inventoried items associated with the problem. This is done to measure valuation accuracy relative to stated book value. Measurement analyses also are conducted on management reports and their related workflow systems. These analyses determine the accuracy of key supply chain metrics such as lead time, lot size, expected demand and its variation, forecasting accuracy (different from demand variation), on-time delivery and other metrics that may be related to an inventory investment problem. Unfortunately, supply chain metrics often are scattered across several software systems within an organization. These systems include the forecasting module, master production schedule module, materials requirements planning module, inventory record files, warehouse management system module and similar IT systems.

After verification of a system’s metrics, the improvement team begins data collection to capture information necessary to answer the team’s questions developed during the Define phase. Relevant information, which may help the team in its root-cause investigation, usually includes suppliers, lead times, expected demand and its variation, lot sizes, storage locations, delivery information, customers and other facts.

Analyzing Data and Using the Inventory Model

The Analyze phase begins after the required data has been collected and a simple inventory model has been created using classic inventory formulas such as those found in operations management textbooks. These models are used to analyze an inventory population to understand how key process input variables impact excess and obsolete inventory investment (i.e., key process output variables). A value stream map also should be constructed as part of the overall analysis. In fact, in many projects, a value stream map, once quantified, becomes the basis for the inventory model. This is especially true when the analyses focus on internal process workflows, at system bottlenecks, rather than finished goods inventories.

In addition, a simple inventory balance is calculated for every item and location of the inventory population based on each item’s service level, lead time and demand variation. An inventory balance shows which items and locations may have too much inventory and which items and locations may have too little inventory. In the latter case, inventory investment must be temporarily increased to meet required customer service levels.
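As an illustration of such an inventory balance, here is a minimal sketch that assumes the textbook safety-stock formula (a service-level z-factor times demand variability over the lead time); the item records and field names are invented for the example and do not come from any particular IT system.

```python
import math

# Illustrative per-item/location records; names and values are made up.
items = [
    # (item, location, on_hand, avg_daily_demand, demand_std_dev, lead_time_days, lot_size, service_z)
    ("A100", "DC-East", 1200, 20, 6, 14, 280, 1.65),   # z = 1.65 ~ 95% service level
    ("B200", "DC-East",   90, 15, 9, 21, 300, 1.65),
]

for item, loc, on_hand, d_avg, d_std, lt, lot, z in items:
    cycle_stock  = lot / 2                        # average cycle stock is half the lot size
    safety_stock = z * d_std * math.sqrt(lt)      # textbook safety-stock formula
    target       = cycle_stock + safety_stock
    balance      = on_hand - target               # > 0 suggests excess, < 0 suggests a shortfall
    print(f"{item} @ {loc}: target {target:.0f}, on hand {on_hand}, balance {balance:+.0f}")
```

Items with a large positive balance become candidates for excess and obsolete root-cause work, while negative balances flag the items and locations where investment must temporarily rise to protect service levels.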

After the team determines the root causes of the excess or obsolete inventory problem, it develops countermeasures to eliminate those root causes – the project’s Improve phase. Other needs for improvement also may surface as a project moves toward the Improve phase: the Analyze phase often identifies other types of process breakdowns within the supply chain that can justify subsequent improvement projects. Lean tools and methods are particularly important in the analysis and execution of these types of projects. In fact, applying the Lean tool 5S (loosely, housekeeping) in the Control phase of a project can help ensure that the resulting improvements are sustained over time.

Conclusion: The Typical Project Benefits

Typical benefits of defining and implementing improvement projects to reduce and eliminate excess and obsolete inventory include higher system accuracy, creation of quantified inventory models showing the relationships between inventory investment and lead time and demand variation, higher inventory valuation and location accuracies, higher cycle counting accuracies, and – most importantly – permanent reductions in excess and obsolete inventory investment.

In Six Sigma, It Takes Powerful Thinking


The way we think dictates how we function and why we do what we do. In business, all three of the types of thinking described below are necessary, but the one that will keep your business afloat is critical thinking.

If we were going to assign a type of thinking to be associated with Six Sigma, it would be that of critical thinking. In critical thinking, you are putting all your facts and figures on the table and dealing with the issue.

Types of Thinking

Critical Thinking: Having the ability to think clearly, rationally and logically.

Constructive Thinking: Understanding our emotions and choosing to think in a way that will benefit our growth and development, and minimize friction in a situation.

Creative Thinking: Looking at problems or situations from a fresh perspective and point of view — many times resulting in unorthodox solutions.

The Mind Can Play Tricks on Us

Have you heard of the McGurk Effect? It occurs when what you see doesn’t match what you hear, so your brain adjusts the sound you perceive to match the visual. It is an incredible phenomenon; in order to really hear accurately, you need to close your eyes.

There are also cognitive biases. Confirmation bias, for example, is when we seek out, over and over again, whatever agrees with what we already believe to be true and completely ignore anything that does not fit those beliefs. In doing so we create our very own subjective social reality.

Our imagination affects how we see the world, meaning the thoughts that we create in our minds can alter our perception of our world.

In Conclusion 

Now that we know that so many things can impact the way we think and see the world, perhaps we can take a lesson from our constructive thinkers and choose, with intention, to see things in a way that will benefit our continuous growth.

Our world would not survive without our creative thinkers, for they give us the impetus to come out of the box and create brand new products and businesses that make our world exciting. Our critical thinkers make sure those businesses become the legacy that we leave behind for future generations to benefit from.

Next Generation Lean – Tools


Over the course of the first five entries in this seven-part series, a case has been made for the need for Lean practice to evolve. An industrially proven strategy, metric and process have been proposed as central parts of that transformation. This article will discuss tools that lend themselves to the practice of Next Generation Lean. Two of them have already been discussed earlier in the series, i.e., Value Stream Mapping and MCT Critical-Path Mapping. The third is a tool based on queuing theory named MPX.

Very few Lean tools exist beyond those that have been around since the 1990s. As has been discussed, the practice of Lean hasn’t changed much since its inception, so there probably hasn’t been much driving the development of new tools. History has shown that it is human nature to resist change. The phrase “If it ain’t broke don’t fix it” characterizes this tendency. Unfortunately, disciplines such as Lean that fail to evolve become broken even if their regression isn’t initially recognized. Rather, this type of failure manifests itself in a discipline becoming less and less relevant. Regarding Lean’s vitality, a lack of new tools should raise an alarm similar to a canary in the coal mine gasping for breath.

Tools are important. If a practice is sound, the use of an effective tool assures practitioners a good shot at satisfying customer needs by standardizing the application of best practices. The consistent results they produce can also be used to create statistically significant proof-of-concept process assessments, which can be important in convincing management of the need to support related initiatives. I’ll talk a bit more about this last point in the next—and final—article of this series.

Tool # 1: Value Stream Mapping

There are a multitude of companies offering both online and desktop-based Value Stream Mapping tools. Every year these companies add new bells and whistles to their products in an attempt to achieve product differentiation. The problem with this is that the overriding usefulness of a Value Stream Map is the basic documentation it provides—a picture—of information and processing flows. In my opinion most of the additional features—while they may provide a bit of eye candy to users—add very little if any functionality to a Value Stream Map. To that point, I have yet to see a Value Stream Mapping tool tie targeted processing to either executive-level financial metrics or the end-use customer. Just as important, I haven’t seen one that provides any level of sophisticated analysis. What do I mean by this?

Analysis of Value Stream Maps today is for the most part intuitive, i.e., review the current state; identify manufacturing Critical-Path Time (MCT) reduction opportunities; develop associated MCT reduction solutions; and prioritize each solution based on what is hoped to be its impact on reducing MCT as well as its projected ROI. If Value Stream Maps had analysis capability they could also be used to compare the MCT of each potential solution to that of the current state, letting you know up-front what its impact on MCT would be. This type of analysis is typically done through simulation, and few companies have the budget and expertise to routinely create simulations of the What If changes being considered.
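To show the kind of analysis capability I’m describing, here is a minimal sketch, not tied to any Value Stream Mapping product, that sums the timeline segments of a mapped flow and compares the MCT of the current state with that of a proposed future state; the segment names and durations are invented for illustration.

```python
# Each map is a list of (segment_name, days). MCT is simply the sum of the timeline.
current_state = [
    ("raw material wait", 20), ("machining", 2), ("queue at assembly", 15),
    ("assembly", 1), ("finished goods wait", 12),
]
future_state = [
    ("raw material wait", 5), ("machining", 2), ("queue at assembly", 4),
    ("assembly", 1), ("finished goods wait", 3),
]

def mct(state):
    return sum(days for _, days in state)

before, after = mct(current_state), mct(future_state)
print(f"MCT before: {before} days, after: {after} days "
      f"({(before - after) / before:.0%} reduction)")
```

Even arithmetic this crude answers the up-front question: what would the proposed changes do to MCT?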

If I were selecting a Value Stream Mapping tool today for my personal use I would focus on a basic product with few features beyond the visual basics. The one feature I would insist on relates to ease of storage and retrieval, including the ability to review “before” and “after” Value Stream Maps side-by-side. You’ll find that these basic products are for the most part reasonably priced and the easiest to learn to use. Butcher paper and Post-It notes may do for someone trying out Value Stream Mapping, but for anyone starting up a full-blown initiative—possibly encompassing dozens or hundreds of suppliers—an electronic tool is a wise investment.

Tool # 2: MCT Critical-Path Mapping

I’ll start out by saying I have a financial interest in this tool. My partner Sean Larson and I developed it when we were unable to find an existing product with the capabilities one of our customers asked for, relative to both analysis and communicating results.

As you learned from the article on Next Generation Lean Metrics, a MCT Critical-Path Map is a visual timeline of what a job experiences as it winds its way through processing:

[Figure: sample MCT Critical-Path Map timeline (NextGen-Lean-Tools1)]

The main difference between MCT Critical-Path Maps and generic maps is that we’ve added different hues within each of the three main time segment colors (Green, Yellow, Red). Why is this useful? Because there can be different reasons behind the different lengths of a particular color segment, and it is important to understand what each represents. For instance, if you run a one-shift, five-day shop, a significant amount of Red (Non-Value Added—Unnecessary) time will be due to the lack of second and third shifts as well as idle weekends. There are things you can do to address these two issues, for instance routinely running bottleneck departments extra hours. It is definitely more important, however, to differentiate the portion of Red that is due to scheduled downtime from the portions due to waiting for resources (man or machine) or waiting for a Lot to finish (remember, MCT tracks the “true” lead-time of the first part in a Lot).

Here is a list of the variation in hues we have set up for our tool (Note: Green has a single hue):

[Figure: hue variations used within the Green, Yellow and Red time segments (NextGen-Lean-Tools2)]

Here’s a comparison of a generic map against a MCT Critical-Path Map employing this differentiated color scheme. I think you can understand how much more useful the second visual is:

[Figure: generic map versus MCT Critical-Path Map with the differentiated color scheme (NextGen-Lean-Tools3)]

We received seed funds from a Fortune 100 OEM for the development of this tool and have spent every bit of revenue we’ve generated since then to further its development. A free three-month trial is offered, which you can access at www.criticalpathmapping.com. We plan updates that will add many more features, but we hope that in trying it out you will find the current feature set sufficient to recognize a MCT Critical-Path Map’s value over more generic MCT mapping tools.

Similar to Value Stream Mapping, manual approaches like crayons and paper will work for anyone testing out the concept of MCT Critical-path Mapping. However, if you decide to launch a larger initiative you’ll find a more effective tool and platform will be of great value.

Tool #3: MPX

Most people aren’t familiar with Queuing Theory. Put simply, it provides the basis for analyzing how a group of jobs competes for the resources—both personnel and equipment—needed to process them, with outputs that include job flow-through time. That output sounds a bit like MCT (“true” lead-time), doesn’t it? And whether you realize it or not, everyone has experience with Queuing Theory in everyday life. The following example illustrates this.

Grocery Store Check-Out Lines

The dilemma of selecting the check-out line that will most quickly get you out of a grocery store is something that everyone can probably understand since most of the time—in my experience, anyway—it seems like there are lines at all available cash registers. In deciding which one to join you are conducting a Queuing Theory analysis, albeit an intuitive one. Choosing the quickest check-out line is akin to scheduling at most factories since jobs usually stack up behind needed processing equipment. The only difference is that in a factory the goal is to reduce the average MCT of all jobs—a much more complicated task—while at a store all you are concerned with is your individual check-out time.

If you wonder why, more often than not, you fail to get in the optimal check-out line, it is because Queuing Theory solutions aren’t intuitive. Neither are factory MCT reduction solutions. Queuing Theory can be used to predict MCTs for all the different jobs in your shop under every conceivable scenario. Understanding Queuing Theory and using tools based on it will give you a quicker, less expensive alternative to simulation, allowing you to test the impact of your various MCT reduction alternatives.

I’ll now translate the above to a manufacturing example.

The Question of Lot Size vs. Resource Utilization

Shop performance is typically evaluated on the utilization level of equipment and personnel—the higher the better. The reasons for this are financial. For personnel it boils down to whether an employee is hitting their highest practical productivity rate. For equipment it comes down to whether a capital investment is delivering its highest possible ROI. Under today’s Standard Accounting Principles this is how manufacturing effectiveness is usually evaluated.

Production supervisors know how to play this productivity game. Larger lot sizes produce higher machine and manpower utilization numbers. The problem, then, becomes a scheduling one since the longer one job stays on a piece of equipment, the longer others that also need it sit waiting to be processed. In other words larger lot sizes increase the time it takes to satisfy customer demand, i.e., they are anti-build-to-demand.

Academics call this competition for resources “dynamic interactions.” Production supervisors call it—among other things—snafus! Regardless, the result is longer job MCTs. So a tension is created between internal performance metrics on the one hand and, on the other, getting product to the customer on time along with all of the savings associated with reduced MCTs.

So where does Queuing Theory come in?

The standard approach to lot size reduction is reducing set-up times. What many manufacturers don’t understand—and what Queuing Theory reveals—is that for any given set-up time there comes a point where making the lot size smaller makes the MCT longer, not shorter! So, in effect, for any specific set-up time there is a lot-size sweet spot that produces the shortest job MCT.

[Figure: MCT versus lot size for a given set-up time, showing the sweet spot (NextGen-Lean-Tools4)]
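To see why the sweet spot exists, here is a minimal sketch of a single work center using an M/M/1-style queuing approximation (to be clear, this is not MPX, and the demand, run-time and set-up figures are invented). As the lot size shrinks, more of the machine’s time goes to set-ups, utilization climbs toward 100% and queue time explodes; as the lot size grows, each lot simply takes longer to run. Somewhere in between lies the lot size that minimizes time through the work center.

```python
# Single work center, M/M/1-style approximation. All numbers are illustrative.
demand = 10        # parts per hour that must be produced on average
run_time = 0.05    # hours of processing per part
setup = 1.0        # hours of set-up per lot

def time_at_workcenter(lot_size):
    service = setup + lot_size * run_time                 # hours to process one lot
    utilization = demand * (setup / lot_size + run_time)  # fraction of machine time required
    if utilization >= 1.0:
        return None                                       # demand cannot be met at this lot size
    return service / (1.0 - utilization)                  # avg. waiting + processing time per lot

results = {q: time_at_workcenter(q) for q in range(10, 201, 10)}
feasible = {q: t for q, t in results.items() if t is not None}
best = min(feasible, key=feasible.get)
print(f"Sweet-spot lot size ~{best} parts, about {feasible[best]:.1f} hours at the work center")
```

A real shop has many interacting work centers and products, which is exactly why a tool is needed rather than a hand calculation.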

Queuing Theory can evaluate the impact of dynamic interactions on MCT and, because of this, can identify those lot-size sweet spots. It can also help manufacturers set better targets for manpower and equipment utilization. Goals of 100% utilization are worse than meaningless in markets where demand forecasts contain error: they add cost, which hurts revenue, and they interfere with a company’s ability to deliver product to customers who are willing to pay for it today but may not be willing to wait if it is not currently available.

Requirements

MPX is a tool that—once you define your job and processing set-up—can be used to test What If manufacturing improvement scenarios and understand their probable impact without first having to develop simulations or actually implement a change to find out how it performs. Baselining MPX to the current state is relatively easy, with input needed from six main areas (a rough sketch of how this input data might be organized follows the list):

1. General Data—the number of hours per day and the average number of days per year each department in your shop operates.

2. Labor—the job classifications of production employees along with their job skills and availability. To define availability you start with scheduled hours and subtract the time workers are unavailable due to the combined effects of vacation, training, sick days, bathroom breaks, etc.

3. Equipment—defined similarly to labor with machine capabilities and availabilities being the focus. Availability should take into account historic machine breakdown frequencies/durations and/or maintenance schedules and durations. Labor group assignments for each machine type must also be defined.

4. Operations and Routing—this is perhaps the most detailed data the model needs, but it is also usually the easiest to come by. If you use MRP- or ERP-documented routings and processing times, you’ll need to verify they reflect actual operations. Tagging data is one way of doing so.

5. Bill of Material—also usually readily available.

6. Demand—reflecting the average production needed over the length of period selected initially in the General Data section.
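For readers who want to see how these six input areas might hang together before opening the tool, here is a rough sketch of one way such data could be organized. To be clear, this is not MPX’s file format or API; the structures and names are invented purely to mirror the six areas above.

```python
from dataclasses import dataclass

# Illustrative structures only; these are not MPX's actual input formats.
@dataclass
class General:
    hours_per_day: float
    days_per_year: int

@dataclass
class LaborGroup:
    name: str
    headcount: int
    availability: float       # fraction of scheduled hours actually available

@dataclass
class Machine:
    name: str
    count: int
    availability: float       # net of breakdowns and maintenance
    labor_group: str          # which labor group runs this machine type

@dataclass
class Operation:
    machine: str
    setup_hours: float
    run_hours_per_part: float

@dataclass
class Product:
    name: str
    routing: list             # ordered Operations
    components: dict          # bill of material: component -> quantity per unit
    annual_demand: int

shop = {
    "general": General(hours_per_day=8, days_per_year=250),
    "labor": [LaborGroup("machinists", 6, 0.85)],
    "machines": [Machine("CNC mill", 3, 0.90, "machinists")],
    "products": [Product("bracket", [Operation("CNC mill", 1.0, 0.05)], {"casting": 1}, 20000)],
}
```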

MPX uses this data to determine Machine Utilization, Labor Utilization and MCT for each product and production scenario being evaluated. The following visuals represent just a few typical “before” and “after” results using actual MPX output screens. The product is available for a free trial through the website www.build-to-demand.com. Dr. Greg Diehl—developer of the software—is retired and usually very available to answer questions that may come up with your testing of it. (Disclosure: If you pull up the About Us section on his website you’ll find my name listed. It is there because over the years I have purchased a large number of copies of the software and provided significant user feedback on MPX use. I have no financial interest in MPX itself or its sales.)

[Figures: sample “before” and “after” MPX output screens (NextGen-Lean-Tools5 through 7)]

In this example, MCT changes from infeasible to 15 calendar days due to reduced set-ups and lot sizes. The aggregate of all changes reduced MCT from 90 to 15 calendar days, an 83% reduction!

As you know from previous articles in this series this size of MCT reduction will have a tremendously positive financial impact. It also would increase the company’s build-to-demand capability, i.e., Lean-ness.

In the next article I will provide a series recap as well as offer some personal observations about the state of Lean.
