So What is “Lean” Anyway?

I’ve been involved in manufacturing since 1967, initially as a manufacturing engineer in a precision machine shop and later managing multimillion-dollar programs. Not to brag, but I’ve met many smart people along the way!

Initially, our goal was limited to meeting quality requirements: getting through inspections and testing in order to deliver on time that month. It was a pretty simple life then, but we didn’t know it.

A little later, I managed the manufacturing services department of a geophysical exploration company, and my boss asked me to analyze the flow of our hydrophones and ocean-going seismic cables. Not knowing what he meant by flow, I went to our final assembly area to look around. This was way before Gemba was a word we had heard.

I found that, yes, we were building in batches because of the large setup times, and my boss knew that.

So began my career of looking for continuous improvements.

Now, quite a few years later, we have all used the broad terms for change, such as Lean, continuous improvement, the Toyota Production System and, before that, Total Quality. Still, despite the widespread use of these terms, I’m concerned that they are sometimes being used without a full understanding.

Toyota has been given a lot of press and acknowledgement for their approach to creating TPS, and rightly so. But in today’s implementation of Lean, how many organizations buy into the total culture change that TPS and Lean really require?

It seems to me that, too often, companies run a pilot in their assembly or machine shops to see if it works. If they get good results, they train a few manufacturing employees on 5S, 3P, Poka-Yoke, the seven wastes and all the other tools we know.

Usually, if done well, there are immediate cost savings from reducing waste, so Lean tools are expanded across the manufacturing department. Cost savings become the key metric management looks at. But after a few years, the low-hanging fruit has been picked and cost savings plateau or recede. Then management asks, “Ok, we’ve done Lean, what’s next?”

This is what I call a “manufacturing lean project,” which can lead to short-term gains, but no transformation.

Let’s look at what a Lean Transformation entails. By looking at TPS, the two pillars it is built on are easy to identify:

1. Continuous Improvement

  • Part of the culture and expectations
  • By everyone, every day
  • In every department, from the top down
  • Management goes to the Gemba to view the work being done

2. Respect for People

  • Management asks questions as a form of mentoring, so that workers decide for themselves what is best.
  • Each worker is unique and should be treated with respect and helped by management to fulfill their capabilities and dreams.
  • Communication to all about the company’s goals, plans and results assures that everyone is on the same page.

These two pillars hold true for Lean as well. Many Lean practitioners may not understand that the “Respect for People” pillar is the basis for everything else: trust, motivation, continuous improvement and outstanding performance.

It’s a big step to adopt a Lean strategy as the Lean Management System for the entire company, but it’s important that everyone has the same goals and expectations, i.e., one language. For example, management should be teaching the Lean classes and checking for both continuous improvement and respect for people every day! Then everyone knows it’s important.

Some enlightening questions about a Lean Transformation

  1. Does everyone in the company understand that this is a long-term commitment?
  2. Does the company have a Lean Management System in place that defines these expectations, and does it live by that system daily? Research identifies this as a best practice for companies that have been on the Lean journey for 20 to 30 years.
  3. Does management have standard work? Yes, this includes top management, marketing, engineering, purchasing, quality and everyone else.
  4. Is the company continually looking at the customer’s needs today and tomorrow? For example, is the company willing to change what works today for what will work tomorrow?

Ask yourself these four questions to see if your company is undertaking a Lean Transformation or just doing a Lean manufacturing project.

Simulation Modeling Best Addition to Analysis Toolkit

Because of the rapid growth and increased competition in information technology (IT), business process outsourcing (BPO) and other service sector industries in India, quality and cost of operations have become the major distinguishing factors among such companies. Survival, growth and profits depend on how an organization controls its costs and satisfies its clients or customers.

Many organizations have adopted quality improvement programs, the important ones being Six Sigma and Kaizen. They also have modified the techniques of these programs to best suit the organization’s needs. To generalize, the choice of the quality philosophy has been made on such factors as scope and duration of the projects, the organization’s product or processes, and the statistical intensity required to analyze and improve.

Irrespective of the quality program used, many organizations have found limitations in some of the quality improvement tools they use. At the same time, they are discovering the advantages of using simulation modeling and analysis as a problem-solving tool.

Limitations of Quality Tools Used

The reasons why companies are finding that some analysis methodologies provide sub-optimal results include:

  • Complexity of the System Under Study – The business scenario has become highly complex, with continuous changes with which organizations must cope. When initiating a quality project for a highly complex new or existing system, there are often too many factors affecting the performance of the system. Even Six Sigma may fail, as it becomes impossible to statistically analyze the system or provide statistical alternatives to the existing system. This has prompted project teams to offer ad hoc alternatives as solutions.
  • Sensitivity or Robustness Required – Analysis methods provide a solution to the problem at hand, but a slight change in input or a minor business decision requires the quality project team to “reinvent the wheel” by kicking off a new project to solve the “new problem.”
  • Verification of Analytical Solution – There is no established mechanism to reinforce or verify the solution before implementation. Most quality methodologies require implementing and measuring the solution to determine if the required quality level (or Sigma level) is reached, and then controlling the system to stay at that level. If the project has not met expectations, it must be restarted. This is highly costly to the organization, which must change processes or the workforce, or even make business decisions, based on the project’s analysis. Cost also is incurred when actual experimentation (design of experiments) is done on the system.
  • Inability to Analyze a Stochastic System – When the outcome of an activity can be described completely in terms of its input, the activity is deterministic. When the effects of the activity vary randomly over various possible outcomes, regardless of the complexity of the system, the activity is stochastic. Many systems used in industry today are stochastic and cannot easily be modeled or studied with current quality methodologies; the solutions provided for such systems are ad hoc and rarely satisfactory. Simulation modeling is necessary to study such systems (see the sketch following this list).
  • Inability to Visualize the System – When studying a system for bottlenecks, lead-time reduction and process changes, it can become difficult to visualize it. The quality team requires a scale model to assist it in spotting bottlenecks. Mere numbers such as average handling time (mean time) or standard deviation can be misleading. Even system changes need to be visualized.
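
To make the deterministic/stochastic distinction concrete, here is a minimal Python sketch; the activity names, times and distributions are assumptions invented for illustration. The deterministic activity always returns the same output for a given input, while the stochastic one varies from run to run and can only be characterized by repeated sampling, which is exactly what simulation does.

```python
import random

def deterministic_cycle_time(batch_size: int) -> float:
    """Outcome is described completely by the input."""
    return 2.0 + 0.5 * batch_size  # fixed setup + fixed per-unit time

def stochastic_cycle_time(batch_size: int) -> float:
    """Outcome varies randomly, even for an identical input."""
    setup = random.triangular(1.5, 3.0, 2.0)  # variable setup time
    work = sum(random.expovariate(2.0) for _ in range(batch_size))
    return setup + work

random.seed(42)
print(deterministic_cycle_time(10))   # identical on every run: 7.0
runs = [stochastic_cycle_time(10) for _ in range(1_000)]
print(sum(runs) / len(runs))          # can only be estimated by sampling
```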

These limitations of quality processes can be dealt with by implementing an operations research technique called simulation modeling and analysis.

Sources on Simulation

Discrete Event System Simulation, 3rd edition, by Jerry Banks, John S. Carson II, Barry L. Nelson and David M. Nicol (Pearson Education).

System Simulation, 2nd edition, by Geoffrey Gordon (Prentice Hall).

“Simulation as a Tool for Continuous Process Improvement” by Mel Adams, Paul Componation, Hank Czarnecki and Bernard J. Schroer, in Proceedings of the 1999 Winter Simulation Conference (IEEE Press).

An Introduction to Simulation

Simulation is the imitation of the operations of a real-world process or system over time. It involves generating an artificial history of the system and observing that history to draw inferences about the operating characteristics of the real system.
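
To illustrate what generating an artificial history means, here is a minimal Python sketch of a single-server service process; the call-desk setting and the arrival and service rates are assumptions for the example, not from the article. The simulated history yields one operating characteristic of the real system, the average customer wait.

```python
import random

random.seed(1)
ARRIVAL_RATE = 1 / 5.0  # assumed: one arrival every 5 minutes on average
SERVICE_RATE = 1 / 4.0  # assumed: 4-minute average handling time

def simulate_waits(n_customers: int) -> list[float]:
    """Generate an artificial history of n customers; return their waits."""
    clock = 0.0           # arrival time of the current customer
    server_free_at = 0.0  # when the server finishes the previous customer
    waits = []
    for _ in range(n_customers):
        clock += random.expovariate(ARRIVAL_RATE)  # next arrival
        wait = max(0.0, server_free_at - clock)    # time spent queueing
        service = random.expovariate(SERVICE_RATE)
        server_free_at = max(server_free_at, clock) + service
        waits.append(wait)
    return waits

history = simulate_waits(10_000)
print(f"average wait: {sum(history) / len(history):.1f} minutes")
```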

Operations research and simulation modeling have been used in the past by upper management for decision-making in various areas, including supply chain management, manufacturing applications, semiconductor manufacturing, construction engineering and military applications.


Simulation is now in use in service industries to model and analyze call flows, human resource management and forecasting. In the past, usage was generally a one-time effort because of various disadvantages of the simulation concept, but current technology and development have converted those disadvantages into advantages. Some of the important ones are:


  • Data Availability – Simulation requires a large amount of data. In the past, data was generally not available and had to be collected, a strenuous activity that took a lot of time. Now, enterprise resource planning software and customer relationship management programs provide large volumes of data that can be used as input for simulation.
  • Cost of Modeling – In the past, companies developed their own simulation software to supplement their analysis. That software was costly to procure. Now, many off-the-shelf simulation software products have been developed. They are cheaper and easy to use, and can be applied in different business scenarios. The software packages also provide graphical representations of the model.
  • Extensive Knowledge of Probability and Statistics – Simulation modeling requires the use of probability and statistics to model the system. This was a hindrance in the past, as many system modelers were reluctant to bury themselves in probability and statistics. Current simulation software packages have input analyzers – a feature also available with many quality software tools – which help with the data conversions (a minimal example follows this list). Only a working knowledge of statistics is a prerequisite to effective simulation modeling.
  • Time to Run Simulation – Thanks to current computer processor speeds, simulations can be run quickly, or even slowed down to assist the project team.
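
As an example of what an input analyzer does, here is a minimal sketch using SciPy; the "observed" handling times and the choice of an exponential model are assumptions for illustration. A distribution is fitted to process data, and the fitted parameters then drive the simulation's random inputs.

```python
import random
from scipy import stats

# Stand-in for handling times pulled from an ERP or CRM system.
random.seed(7)
observed = [random.expovariate(1 / 4.0) for _ in range(500)]

# Fit an exponential distribution, holding the location at zero.
loc, scale = stats.expon.fit(observed, floc=0)
print(f"fitted mean handling time: {scale:.2f} minutes")

# Sample the fitted model to feed the simulation's inputs.
simulated_inputs = stats.expon.rvs(loc=loc, scale=scale, size=10)
```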

Figure 1: Quality Improvement Framework

Integrating Simulation Modeling and Analysis

Most organizations have modified quality techniques to suit their requirements, but the basic project methodology for continuous quality improvement remains. Projects follow the basic outline shown in Figure 1.

Simulation modeling and analysis is best used in Steps 2 and 3 of the framework above. It is most useful for studying the system, designing the system, evaluating alternatives and backing up the results of the improved process.

A typical example of how it can be done is shown in Figure 2 for a Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) process. DMAIC applies to an existing process that needs improvement. It is best applicable to continuous defect reduction in a cross-functional/uni-functional environment.

Figure 2: Six Sigma DMAIC with Simulation Tool as an Option

Conclusion: Enhancing Current Methods

Simulation modeling and analysis can be used in a quality improvement framework as an enhancement to current methods. Some key points to remember when deciding to use simulation are:

  • GIGO (garbage in, garbage out) applies to simulation. The way the system is modeled and the data entered determine the usefulness of the model itself. This means the project team, or at least one analyst, must be trained in the use of simulation. The analyst or team must know the system well enough to gauge the factors and the level of detail to be simulated.
  • Simulation should not be attempted if the team lacks the time or resources for a detailed quality project; a rushed model will obviously yield a sub-optimal or stopgap solution.
  • Various off-the-shelf simulation packages are available in the market. A detailed feature study should be made before purchase so the software best fits the organization’s needs.
  • The simulation software is still a cost to the company. The monetary gains will be seen only after successful completion of the project.

Simulation modeling currently is the best addition to a continuous improvement toolkit. Organizations face many new challenges when it comes to quality of service, and these challenges can only be met with newer ways of finding solutions. The quality framework needs to be upgraded as the situation demands.

What Will Process Excellence Look Like in 2025?

It is safe to say that 2016 was a year of immense and unexpected change. As the global political, economic and regulatory environment shifts, the way we do business in the future will fundamentally change. The rule book is being torn up, and disruptive technologies are shaking how organizations operate to the very core.

In the face of unprecedented change, I am reminded of a quote I read in my friend and colleague Diana Davis’ paper, Emergence: the Future of Operational Excellence:

“The average lifespan of an S&P 500 firm is 18 years today, down from 61 years in 1958. By 2027, new firms will replace 75 percent of the companies that were in the Index in 2011.” (Source: Innosight Consulting Research)

It is too early to tell who will be the winners in 2025, so I reached out to the PEX Network Global Advisory Board to find out what they thought process excellence would look like in 2025. Here is what two of our key advisors had to say:

“Process Excellence has to do with strategy and the tactics used in its application. However, there can be no single correct strategy of excellence because there are too many variables which are different in each situation and organizational environment including resources, mission, and society.

What can be done is to develop and implement basic principles which will result in process excellence in widely varying circumstances. Therefore, to the extent that organizations focus on improvements in their ability to develop and implement ways to better serve their constituents within their environments, process excellence will improve on into the future.” –  William A. Cohen, PhD, Major General, USAF, Ret. & President, The Institute of Leader Arts

And…

“It will be entirely different than today. Surviving companies will have adopted an ongoing Enterprise Transformation philosophy that is led by a Digital Transformation strategy that is defined within the context of business strategy. This will set the design requirements for a modernization of IT and result in a new IT infrastructure architecture and portfolio.

Applications will be divided into three groups – core applications (HR, Finance, Legal, etc., that must be in place but offer no advantage) will form the first group. These will never provide a competitive advantage and can be supported on licensed software.

The second group will be custom built backroom applications. BPMS generated applications created by a collaboration between the business and IT will replace current legacy applications in this group. These applications actually are the workhorses of the company.

The third and final group are the applications that provide competitive advantage, which will be created by BPM/BPMS staff located in the business units.

The second and third groups of applications will be generated by people in the business areas using low code (by then probably “no code”) BPMS tool suites. These business process specialists will bridge the business/IT gap and redesign the business in the BPMS environment and then generate the solutions. The OPEX people located in the business areas will work closely in an open collaboration model with IT data, hardware, tool, communication specialists and others. This will produce a process excellence capability that is nimble, responsive, collaborative and both low risk and low cost.” – Dan Morris, Managing Principal, Wendan Consulting

Innovation

What will the future hold? What are the demographic and economic trends that will shape markets and businesses tomorrow? Which technologies will drive fundamental changes to our ways of working, living and developing as individuals and employees? More importantly for PEX Network’s community, what will the impact of these big-picture changes be on approaches to Operational Excellence and the profession itself?

Find out how leading practitioners and businesses are positioning themselves for success in PEX Network’s exclusive report “Emergence: The Future of Operational Excellence,” which covers:

  • How the rise of the Millennial Generation is reshaping the customer experience and driving the need for simpler, faster processes
  • Why operational excellence programs need to get more strategic and how you can make the shift
  • The ways that new technologies – Robotics, Low Code, Artificial Intelligence, Data Analytics, and Process Automation – are causing fundamental change to business processes and how you can start to effectively capitalize on them to promote operational excellence
  • The skills and capabilities you need to develop now and in the future to better support your business to achieve true process and operational excellence

Performance Metrics that Matter: Effective Productivity Measures

The final metrics I’ll address in this series are for productivity.  This is a slippery slope.  Frankly, I can count on one hand the plants I’ve seen with an effective measure over the last 10 years.  Most are not robust enough to represent reality, if they exist at all.

I also find there is often “discounting” going on for things not directly in control of the shop floor that affect product costs.  The most notable, of course, is raw materials, which can have a huge impact on the cost of goods produced.  These costs must be effectively managed by the sourcing/purchasing function, which is typically overseen by the senior operations or supply chain executive.

Bottom line:  I recommend three distinct productivity measurements for the operations function:  shop-floor productivity, plant productivity and total operations productivity.  Let’s take them one at a time.

Shop Floor Productivity Metric = OEE (Utilization x First Pass Yield x Efficiency)

OR, use the following option if you don’t yet have robust OEE reporting:

Actual cost of production dollars for the current year compared to actual prior-year cost of production dollars, adjusted for the new year’s volume and product mix
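
For readers who want to see the arithmetic, here is a minimal sketch of both options; the function and variable names are illustrative assumptions, not a standard.

```python
def oee(utilization: float, first_pass_yield: float, efficiency: float) -> float:
    """Shop-floor OEE as the product of its three factors (each 0 to 1)."""
    return utilization * first_pass_yield * efficiency

def cost_productivity(current_cost: float, prior_cost: float,
                      volume_mix_adjustment: float = 1.0) -> float:
    """Fallback metric: year-over-year cost change as a fraction, with the
    prior-year cost adjusted to the new year's volume and product mix."""
    adjusted_prior = prior_cost * volume_mix_adjustment
    return (adjusted_prior - current_cost) / adjusted_prior

print(f"OEE: {oee(0.90, 0.95, 0.85):.1%}")                           # 72.7%
print(f"cost productivity: {cost_productivity(9.5e6, 10.0e6):.1%}")  # 5.0%
```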

Our readers may remember references to OEE in previous articles. If the plant organization, top to bottom, understands the budgeted OEE that must be met on constrained work centers (as well as other critical support work centers), that’s a pretty good way for operators and first-line supervision to do their part in delivering productivity.

OEE exposes yield issues on raw materials, quality cost issues and labor cost issues, and these metrics are easily monitored visually so that reactions to unfavorability occur in real time. For example, kanbans or FIFO lanes that start drifting off the plan, i.e., starving for material or building inventory, trigger supervisor/operator intervention on the spot. So I like OEE as the shop-floor productivity measurement. The value stream managers/production managers own this one. Here’s a recent experience that makes the point about how plant leaders should be thinking and responding, which is not how this particular plant manager behaved.

I counseled a young plant manager a couple of years ago who was so full of himself that he thought he was the best plant manager in the company. I know because that’s what he told me. He couldn’t understand why the corporate HR trainer had invited me in to spend some time with him. He thought he was running the best plant in what is a multibillion-dollar enterprise. I suggested that his expectations of himself and his team were far too low. Of course he was offended and disagreed. I then attended his staff meeting later that morning, and his plant controller put up a graph that showed significant shortfalls in plant performance to budget for the first two months of the year. March was already more than half over and would end with the same unfavorable outcome.

No analysis of causes or any attempt at corrective action had been done.  I did a quick calculation and asked the staff team if they knew how much extra productivity they must deliver each of the last nine months of the year, from April through December, to make the budget plan for the year.  Of course they did not.  The plant manager quickly spoke up and said that he didn’t believe in extrapolating two or three months of performance into a 12-month projection.  My response:  Then show me the projects and corrective action that will give us all confidence that you and your team will close the gap by year end.  There were none.

OEE is often used as a measure of plant productivity, though I usually find issues with the reporting when I dig into it on my plant visits. Most plants should use the alternative shop-floor productivity measure until robust OEE accounting is in place. Further, and highly problematic, other functional areas typically don’t understand the mechanics of the OEE calculations and can’t tie them directly to products and mix, which is a more common language for them. Also, leaders around the entire business often think that productivity is the responsibility of manufacturing, and they become cheerleaders instead of doing their part in their own functions. These leaders also tend to think more in terms of the overall plant and business financials. The alternative method of reporting I suggested dollarizes the results and has a direct link to the financials. OEE is the shop-floor piece of the plant productivity measure. The costs of products are either going up or down when compared quarter-over-quarter, year-over-year.

But so are the “fixed/period costs.”

Let’s Talk Fixed Costs

The fixed cost piece has the plant manager’s oversight and each functional leader is accountable for making their numbers accordingly.  All staff managers are accountable for their respective pieces.  And the plant manager, of course, is accountable for the sum total of the OEE and fixed costs within the plant organization.

The plant manager must expect that his/her team will remove any obstacles that cause delivery of the promised performance to be negatively affected.  And, of course, that expectation is universal for anyone in a leadership position.  This leads us to our second measure, the plant productivity metric, which is designed to capture the total variable cost spend and the fixed cost spend compared to prior year spending.

Plant Productivity Metric = OEE results dollarized +/- Actual Period Costs spent vs. Prior Year Actual Spend

This simple measurement collects all of the plant spending except capital spending. Capital is appropriated project by project and has to meet certain payback criteria. The productivity that results from capital spending is calendarized into the budget and captured by comparing actual reporting to how it was included in the budget process.

The final measurement I propose is the Total Operations Productivity Metric:

Shop Floor Productivity +/- Year-Over-Year Changes in Fixed/Period Costs +/- Raw Materials Cost Changes (adjusted for the new year’s product mix)
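
Here is a minimal sketch of how the three levels nest; all figures and names are hypothetical, purely to show the rollup.

```python
# Hypothetical year-over-year figures, in dollars (savings are positive).
shop_floor_productivity = 450_000  # dollarized OEE gains
fixed_cost_change = -120_000       # period costs rose, eroding the gains
raw_material_change = 300_000      # sourcing savings, mix-adjusted

plant_productivity = shop_floor_productivity + fixed_cost_change
total_operations_productivity = plant_productivity + raw_material_change

print(f"plant productivity:            ${plant_productivity:,}")
print(f"total operations productivity: ${total_operations_productivity:,}")
```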

These three measurements clearly assign productivity metrics for which very specific groups are accountable. It’s not surprising that if you hold plant managers accountable for unfavorable purchase prices, they’ll protest and become discouraged, since they have little if any influence on them. Holding the sourcing team accountable for purchase prices puts responsibility where it belongs and eliminates the frustration of factory people. Thus purchase price changes, plus or minus, are reported in the Total Operations Productivity number. The VP of operations is accountable for all of this.

For those of you who may be struggling to create a viable productivity metric, I hope you’ve found some help here to put reality-based productivity plans together that you can track all year long to a successful outcome at three levels of operations accountability. Finally, always remember this: If the productivity you report can’t be found on the income statement and/or balance sheet, then it didn’t happen. From a financial standpoint, that’s the final word.

Forget Sales Dollars

Some of you may be wondering by now why there is no measurement here that compares plant performance to any kind of a sales number as many of the measurements I’ve seen in the field measure manufacturing output vs. some form of sales dollars.  Here’s why.

The biggest mistake I see in manufacturing organizations is the tracking of manufacturing production costs as a percentage of sales dollars.  It doesn’t matter whether the scorecard is on net sales or gross sales.  It’s just the wrong way to think about it.  There are three compelling reasons why this is true.

  • First, the amount of sales dollars in a specific period has no relevance to the shop floor’s cost performance.  Even if your plant is delivering on a very short lead time, the accounts payable cycle alone usually puts the recording of a sales dollar into a different month or quarter from when the goods were produced.  Sales dollars simply have no connection to the actual time of when the product was manufactured.
  • The second reason is that several non-manufacturing causes affect the calculation.  Marketing may be running a promotion and selling at discounted prices.  Does a lower sales number have anything to do with today’s cost of manufacturing?  What if the mix of product sales is significantly different than what was being produced in the factory?  (See previous article re: poor S&OP planning.)  This could result in windfall productivity or unfavorability.  Neither has anything to do with the period’s production costs.
  • The third reason, for a 35-year manufacturing guy like me, is just short of criminal. Too often companies still set prices by the long-outdated formula “standard cost + margin = price.” So the factory team busts their tails delivering year-over-year cost reductions, which should flow directly to the bottom line (isn’t that the goal?). The effect: Cost reductions get passed straight through to the customer. Pretty discouraging for factory folks and a very negative outcome for the business. I’ve seen this occur time after time.

    The only formula I know of that really works is to expect sales and marketing people to be close enough to the market that they know the market price being offered by their best competitors. Sales and marketing people are the final implementers of manufacturing productivity when they use the formula “market price – cost = margin.” If the prior-year margin on a product was 30%, and manufacturing is coming off a 5% productivity year, the new calculation yields a margin of roughly 35% using the market price – cost = margin formula. It’s a simplistic example, but you get the point. When the cost + margin = price formula is used instead, sales/marketing/accounting take the new 5% lower manufacturing cost, add a 30% margin, and the entire amount of productivity gets passed straight through to the customer.

In the above example, if margins had dropped from 30% down to 25%, the productivity gain would simply bring margins back to the desired 30%. In either case, Total Operations Productivity would be calculated the same way.
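
Here is a minimal sketch of the two pricing formulas side by side; the numbers are invented (and, like the article's 30%-to-35% example, deliberately simplified). It shows why cost-plus pricing hands productivity gains straight to the customer while market-based pricing keeps them as margin.

```python
market_price = 100.0
old_cost = 70.0             # prior-year margin: (100 - 70) / 100 = 30%
new_cost = old_cost * 0.95  # a 5% manufacturing productivity year

# Market-based: the price holds, so productivity lands in the margin.
market_margin = (market_price - new_cost) / market_price
print(f"market price - cost = margin: {market_margin:.1%}")  # 33.5%

# Cost-plus: the same 30% margin is re-applied to the lower cost, the
# price drops, and the entire gain passes straight to the customer.
cost_plus_price = new_cost / (1 - 0.30)
print(f"cost + margin = price: {cost_plus_price:.2f}, margin still 30%")
```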

“There are so many people who can figure costs, and so few who can measure values.”



Quantifying the Benefits of Quality: Employee Training and Incentives

This four-part series looks at what helps drive financial benefits from quality, including looking at the relationship between financial benefits and:

  1. the role and uses of quality,
  2. governance and standardization of quality,
  3. quality training for suppliers, and
  4. quality incentives and training for staff.

In part three of this series we expanded the discussion on the benefits of transparency and cross-functional integration with external partners, namely suppliers. What we found was that organizations that establish training for their suppliers tend to reap higher financial benefits from quality efforts. Training provides a common language and helps suppliers understand the impact that defects or other setbacks like delays will have on the end customer, ultimately resulting in a unified focus on the customer and increased financial benefits.

The same ideas of transparency, common language, and understanding the impact of role on quality should also hold true for the organization’s employees. Hence in this final article, we will look more closely at the relationship between employee training and incentives.

Training Employees and Financial Value

Training programs help develop competencies, ensure employees understand their role in creating quality for the customer, and establish a quality-focused culture. Hence, respondents were asked to indicate whether they had a formal quality-related training program. Though the majority of organizations still do not have a formal training program, more organizations (43%) are investing in one than were in 2013 (32%).

Though it can be argued there is intrinsic financial value in offering quality training, there are still unanswered questions such as: Who should receive the training and what training should we provide?

Who Should Get Training?

The majority of respondent organizations (56%) offer quality management training (either directly or through compensation for external training) for staff involved in quality activities. Almost half of the respondent organizations (44%) also offer quality-related training to all employees, likely driven by the need to embed a quality-focused culture within the organization.

To understand where organizations should focus their training resources, with regard to ROI, we analyzed which employees were offered training against the organizations’ financial benefits of quality (Figure 1).

Figure 1: Employees Trained and Financial Benefits of Quality

Though conventional wisdom would be to extend quality training to all employees to develop a shared perspective and bolster a culture of quality, the analysis indicates a drop-off in financial benefits for organizations that offer quality training to all employees. Instead, the largest increase in financial benefits comes from providing training for quality-related staff and those who specifically request training.

What Training Topics Matter?

The majority of organizations focus their training on quality fundamentals, auditing, ISO, quality management principles and quality tools. However, few organizations include training on customer-value concepts such as the customer experience, Net Promoter Score (NPS) and lean. To understand what training supports increased financial benefits from quality, we analyzed the type of training provided against the organizations’ financial benefits of quality (Figure 2).

Figure 2: Gap Analysis: Employee Training Topics and Impact on Financial Benefits

What we found was that almost all of the types of training discussed in the survey correlated with improved financial benefits. However, organizations that provided training on customer-value concepts like NPS, lean, Six Sigma and the customer experience were more likely to reap higher financial benefits. This makes sense given the relationship between customer value and financial benefits discussed in the first article. Organizations that use quality as a competitive differentiator, benefiting the customer and enhancing brand image, reap higher financial benefits.

Role of Incentives

Similar to training, incentives help reinforce preferred behaviors, such as a quality focus. To understand the role that incentives play in quality, respondents were asked, “What incentives, if any, do you use to encourage employees to meet critical quality targets?”

Given that most organizations do not include quality measure-based goals in their variable performance compensation, it makes sense that financial and variable compensation are not widely used as incentives. Instead, organizations use informal recognition by management to encourage employees to meet quality targets. Though immediate recognition can help create buy-in on a person-by-person basis, using formal recognition and tying quality goals into performance reviews can generate widespread cultural change faster. However, we wanted to see if specific incentives also had a direct financial impact (Figure 3).

Figure 3: Incentives and Financial Benefits of Quality

What we found was that all types of incentives increase the financial benefits of quality. However, informal manager recognition, followed by honorary awards, has the greatest impact on financial benefits. This does not mean organizations should rule out financial incentives such as tying quality measures to performance goals. As noted earlier, awards and informal recognition are excellent tools for engagement and ongoing motivation, but financial measures tend to drive cultural change in the long run.

Conclusion

Best-in-class organizations use training to drive a commitment to quality and help employees understand their role in quality — including their impact on the end customer and driving value.  However, organizations need to consider the purpose of their quality efforts before making decisions on incentives, the types of training, and even which employees to target for training. If the organization’s goals are to create a widespread culture of quality, then casting a wide net for measures tied to quality and training all employees on the fundamentals of quality and impact on the customer could reap the most benefits. However if the organization is specifically leveraging quality to provide customer value — and potential price premiums — then it should consider incorporating training aimed at the customer experience and related concepts such as lean and NPS.

Holly Lyke-Ho-Gland is process and performance principal research lead with APQC, a member-based nonprofit and one of the leading proponents of benchmarking and best-practice business research.

Start with Leaner Tools to Ease Non-Belts into Six Sigma

Six Sigma offers a variety of powerful tools that help organizations make data-driven decisions. Yet most people in an organization do not hold a degree in statistics and may feel that filling out endless data forms is pointless. When first starting a deployment, it is best to make things as easy and painless as possible for the non-Belt community. Once Six Sigma has gained momentum, Belts can enhance the statistical aspect and refine the methods they use.

Here are three examples of leaner tools that can be used to ease process owners and other non-Belts into the method during an initial deployment:

1. Failure Mode and Effects Analysis (FMEA)

If a Six Sigma team does everything manually in the standard FMEA template, it may need to fill in somewhere between 20 and 30 columns per row. To do that, team members may need to get thousands of data records from the process owner. And once the FMEA is complete, will the Champion even care if the risk priority number is 441 or 810?

When starting out, people may not even be able to tell whether a defect occurs 7 percent or 70 percent of the time. But they do know what practitioners need to look for: their most obvious pains. Most likely, the information Belts need from the process owner is this: What and where could something happen? Why would it happen? How bad is it? Who is going to do what about it, and is it effective?

That is a total of seven questions that almost everybody should be able to answer about their process. Asking these questions allows practitioners to get some data quickly, without misunderstanding or redundancy. As the initiative becomes more sophisticated, practitioners can work to refine the FMEA assessment process. (A sketch of such a simplified record follows.)
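
As an illustration only, a stripped-down record might capture just those seven answers; the field names below are assumptions, not a standard FMEA template.

```python
from dataclasses import dataclass

@dataclass
class SimpleFmeaEntry:
    """The seven questions almost anyone can answer about their process."""
    what: str        # What could happen?
    where: str       # Where could it happen?
    why: str         # Why would it happen?
    severity: int    # How bad is it? (e.g., 1-10)
    owner: str       # Who is going to act on it?
    action: str      # What are they going to do about it?
    effective: bool  # Is the action effective?

entry = SimpleFmeaEntry(
    what="wrong part picked", where="kitting station",
    why="similar bins side by side", severity=7,
    owner="line lead", action="color-code the bins", effective=True,
)
```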

2. Analytic Hierarchy Process (AHP)

The AHP consists of going through a list of options and asking, for each possible pair: Is the first more important than the second, and if so, by how much? But it can become tedious for larger numbers of options.

Time can be saved, however, by reviewing and optimizing the list beforehand, removing the unnecessary comparison questions. If the team already knows that gadget production is three times more important than widget production, why ask later if widget production is more important than gadget production? Taking that to the next level: If the team knows that gadget production outweighs widget production by a factor of three, and that widgets are twice as important as trinkets, why waste stakeholder time by asking whether trinkets beat gadgets?

Optimizing AHP requires a bit of thought and definitely some information technology support. But for Belts doing the AHP on six factors, completing optimization first can make the difference between discussing 30 comparisons and nine. The AHP session may be condensed from two hours to 30 minutes, which key decision makers will appreciate.
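
Here is a minimal sketch of that optimization idea, under assumptions of my own (consistent ratio judgments and invented factor names): known importance ratios are stored once, and missing comparisons are derived transitively instead of being asked in the session.

```python
# Known judgments: known[(a, b)] == r means "a is r times as important as b".
known = {("gadgets", "widgets"): 3.0, ("widgets", "trinkets"): 2.0}

def ratio(a: str, b: str) -> float | None:
    """Look up, or transitively infer, the importance ratio of a over b."""
    graph: dict[str, dict[str, float]] = {}
    for (x, y), r in known.items():
        graph.setdefault(x, {})[y] = r        # stored direction
        graph.setdefault(y, {})[x] = 1.0 / r  # implied inverse
    frontier = [(a, 1.0)]  # breadth-first search, multiplying ratios en route
    visited = {a}
    while frontier:
        node, acc = frontier.pop(0)
        if node == b:
            return acc
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, acc * weight))
    return None  # not derivable; this pair still has to be asked

print(ratio("gadgets", "trinkets"))  # 6.0, inferred without asking
print(ratio("trinkets", "widgets"))  # 0.5, the inverse of a stored judgment
```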

3. Quality Function Deployment (QFD)

QFD is a support process for innovation and change, and also helps in assessing the status quo. It is nearly a science, and performs best in the hands of trained experts.

The information needed to first introduce QFD is not necessarily related to interactions, benchmarks and development status. What practitioners really need to know is: who is doing what, and why?

When practitioners know what requirements a process realizes, and what groups are engaged in the operation of the process, they have a solid basis for process improvement. They can still build intricate houses of quality later, when there is at least a formal requirement process.
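
As a minimal sketch (the requirement, process and group names are invented for illustration), even a plain mapping of requirements to the processes that realize them, and the groups engaged in those processes, answers "who is doing what, and why":

```python
# requirement -> (process that realizes it, groups engaged in that process)
requirements_map = {
    "order confirmed within 1 hour": ("order intake", ["sales", "IT"]),
    "defect rate below 1%": ("final assembly", ["production", "quality"]),
    "48-hour delivery": ("shipping", ["logistics", "warehouse"]),
}

# Who is doing what, and why:
for requirement, (process, groups) in requirements_map.items():
    print(f"{process}: {', '.join(groups)} -> because \"{requirement}\"")
```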

Create Other Simplified Tools

The list does not stop here. With a small time investment studying a tool, chances are practitioners can find a simplified, leaner version that provides the information Belts really need from process owners in order to produce initial results.

Next Generation Lean: Why Lean Too Often Requires a Leap of Faith


The first six articles of this seven-part series on Next Generation Lean laid out a pretty detailed justification for current Lean to evolve. They also laid out a strategy, metric and process that could form the basis of that evolution. In those articles an honest effort was made to base the discussion on generally accepted facts rather than opinions.

I’ll use this seventh and last article to give my view on the state of Lean practice. Another way of saying this is that readers should regard the following as a “Letter to the Editor” about what, in my estimation, have been (and continue to be) the top three barriers to the advancement of Lean practice. Realize that opinions are just that, and there is little doubt that expressing mine will result in some ruffled feathers!

The Blaming of Management

In the depths of their hearts, I believe most Lean advocates understand that Lean has, for the most part, over-promised and under-delivered. One outgrowth of this is that rather than personally accepting and trying to address Lean’s shortfalls, most of these same people have tended to point the finger of blame at others. And the prime target of much of this finger-pointing seems to be the very managers from whom they are trying to gain support! A strategy of assigning culpability to those you are trying to influence usually isn’t very effective, at least in the world I live in. Further, as my mother used to tell me, “When you point a blame finger at someone else, you should recognize that three of your fingers are pointing back at you.” In other words, there may be a lot of blame to spread around for why Lean as a construct has not moved forward, but today’s Lean community needs to appreciate that it has contributed to this outcome by failing to allow and/or lead the evolution of Lean.

Long-time readers of my column know that I have criticized various facets of current corporate philosophy, but the negative attitude toward Lean is not one of them. All that management has asked of Lean is that it “show them the money,” something that up until now hasn’t consistently been accomplished. Lean practitioners seem to want their managers to accept on faith that supporting Lean will provide an acceptable ROI, or at least that it will over time. I have two points to make in response to this position. First, most managers today are not given extended periods of time, years or even months, to produce positive impacts. Second, unless things have changed significantly since I was an executive, investments, capital or otherwise, generally aren’t made based on leaps of faith.

Take my word for it: there are many, many executives who have supported Lean initiatives only to be let down by their impact. These failures have negatively influenced their reputations and, in some cases, their careers. One result is that while you’ll seldom hear higher-level executives publicly express negativity about Lean, in discussions between executive peers there is a growing disdain for it as a cost management strategy. Without executive support Lean will not remain relevant. That is the primary reason Lean needs to evolve.

Academics Need to Step Up to the Plate

Over the last two decades I’ve had a significant amount of interaction with academics in support of developing better business strategies. Overall I’ve been pretty satisfied with the results of these collaborations. As a consequence I have, in general, a high regard for academia. With that being said, though, I am of the opinion that the academic community has dropped the ball relative to Lean. What do I mean by this?

I have previously stated that managers today need to take leaps of faith to justify Lean initiatives. This is because it is rare for Lean practitioners to be able to provide solid ROI projections for the initiatives they advocate. Sure, academics have documented hundreds, if not thousands, of case studies that outline how specific companies have benefited from Lean transformations. This is a good first step. However, individual success stories are usually not effective in developing sound ROI projections. What is needed instead is a more general analysis of accumulated Lean impact data. I’m talking about analyzing data from a statistically sound sample of individual case studies such that the projected impacts of specific Lean activities can be tied to the various metrics used by executives in making business decisions. Why is this important? Executives are generally not willing to wait until the completion of a multi-year transformation to see if an initiative they have sponsored (and maybe staked their career on) will produce the necessary business results.

Impact projections are required to financially justify investments of corporate money. Granted, basing impact projections on statistically sound sample sizes takes more work than writing up individual case studies, but that’s “where the rubber meets the road” in gaining management support. And historically, it is the role of academics to deliver this type of process analysis.

Another beef I have with academics is that they, for the most part, have seemed to be along for the ride. The focus of this series has been on the need for Lean to evolve. Where are the academic results that can be cited proposing anything comparable to the strategy, metric, process and tools outlined in this series?  There aren’t any, at least as far as I have seen. Rather, academia has tended to embrace the current practice of Lean, ignoring rather than addressing its shortfalls. Sure, there are exceptions to this, but for the most part academia has let Lean down in addressing process-related concerns.

I’ll give you a simple example of what I mean. To do this I’ll use two questions posed earlier in this series relative to Lean. Specifically, managers often want to know:

  • How far along are they on their Lean initiative?
  • Related to this, how do they know when it will be done?

In this series I’ve proposed concrete ways to answer both of these questions. In talking to both practitioners and academics, I’ve heard all of the non-answer answers, such as, “Lean is a journey—you’ll know you’re there when you get there.”

That type of justification doesn’t fly in the corporate world and it’s unrealistic to think that it should. On the other hand, you’d expect that basic questions of this nature would be exactly the type that academics would want to address. In my experience, they haven’t. And until they do, executive level managers will continue to be expected to take leaps of faith to support expenditures on Lean—something they are usually loath to do.

There Really is No Existing Lean Community

It almost seems as if, soon after its initial launch, the people associated with Lean decided that the practice was perfect and needed no further development. And in order to protect its purity, they decided to etch the details of its practice in stone. I know this sounds a bit sarcastic, but take the case of Value Stream Maps. VSMs represent a pretty basic, and effective, approach to defining material and communication flow. Although their template was for the most part formalized nearly two decades ago, they are still the most frequently used tool by Lean practitioners. My opinion is that the existing Lean infrastructure is a bit too top-down and tends to be more interested in rearranging the deck chairs on the Titanic than in acting as a strong advocate for increasing the impact of Lean practice, i.e., supporting its evolution.

One of the most effective strategies for getting constructive input and developing best practices is through Communities of Practice, comprised of top-notch practitioners who are willing to objectively discuss the problems they have faced as well as to share practices they have found successful. This is a more bottom-up approach and for that reason has a better chance of identifying and addressing Lean’s strategy, metric, process and tool needs. I would like to see a Community of Practice formed to take on a leadership role in the development of Next Generation Lean.

There would be a role in this for today’s Lean institutions. They could provide infrastructure by creating and financially supporting such a group. Based on my experience with multiple Communities of Practice, I’d recommend it be comprised of a manageable number—no more than two dozen members—who would represent most Lean practice interest groups, i.e., OEMs, small and medium-sized manufacturers, consultancies and academia.

Lean Order Fulfillment

I’ve invested a lot of time and effort in putting together this seven-part, 15,000-word series. And I didn’t do it for fortune or fame. Rather, I wanted to share what I learned through my 20 years of experience applying Lean at both OEMs and their suppliers. The basis of what I have laid out was learned during a seven-year assignment with a large OEM—believe me, a true school-of-hard-knocks experience. Our first application of the approach we developed was in positioning that OEM for entry into the Big Box marketing channel. We understood that in order to deliver the profits needed to justify this move, we had both to increase the Lean-ness of our supply chain and to Lean-up our distribution—our own factories were already operationally efficient.

I was responsible for planning and overseeing the supply chain side of the project work. A general description of this was published as a two-part series (Oct.–Nov. 2013) in Industrial Engineer entitled, “Lean’s Trinity.” The distribution side was planned and overseen by a colleague of mine, and his work is described in an article in Interfaces (Jan.–Feb. 2005) entitled, “Improving Asset and Order Fulfillment at…”

I have two comments to make about this second article:

1. First, the sole comment in that article regarding supply chain was that Lean order fulfillment requires “a responsive supply chain with a manufacturing lead time* of three [weeks] or less.” (*The manufacturing lead time referred to was MCT, or Manufacturing Critical-path Time.)

When this article first came out I complimented my colleague—the one who led the distribution side of the initiative—on how efficient he had been, summarizing seven years of intense supplier-development Lean activity in a single sentence!

2. Many of the executive-level financial impacts alluded to in this series were delivered by this initiative, and are provided in great detail in the article. In fact, when the article was first published I was a bit surprised that my colleague received corporate approval to provide cost reduction results in such detail.

If you are interested in reading either of these two articles, drop me a line—I have a limited number of copies that I am willing to send out.

With those I will also include a third document—a reference PowerPoint entitled, “Did Toyota fool the Lean decade community.”

I’m not big on conspiracy theories, but I do believe it’s credible that Toyota might not have shared ALL of what it considered to be a corporate competitive advantage when it allowed other companies to benchmark and otherwise study the Toyota Production System. Why? I find it interesting that order fulfillment comments by Toyota executives (some of which were cited in the third article of this series, “Next Generation Lean Strategy”) tend to focus on “time to the customer,” but nowhere in what they share do you find a definition of their metric for measuring this. I’m pretty sure they have one.

Quantifying the Financial Benefits of Quality: Bringing Suppliers into the Fold


In part two of this series, we discussed the role that governance and reporting practices play in the level of financial benefits organizations derive from their quality efforts. What we found was that, consistent with earlier studies, quality governance models and transparency (reporting and standardized measures) improve the efficacy of organizations’ quality efforts.

Given that transparency and cross-functional integration help improve the financial benefits of quality, we also wanted to understand what happens when organizations extend those factors (transparency and collaboration) to partners outside the organization through training. Hence, in this article we will discuss the relationship between quality training for suppliers and the financial benefits of quality.

Training Suppliers and Financial Value

Suppliers play a major role in any organization’s quality—they provide the materials necessary to create its products. To understand the relationship between suppliers and quality efforts, respondents were asked to indicate which of their suppliers they train on the organization’s quality management practices.

Surprisingly, the vast majority of organizations do not provide any training for their suppliers on their quality management system. Of those that do provide training, it is often limited to tier one suppliers. This means most organizations are missing out on opportunities to ensure transparency between organizations, create a common language and set expectations with suppliers on quality needs. To test where organizations need to invest in training, we analyzed which tiers of suppliers were provided training against the organizations’ financial benefits of quality (Figure 1).

Figure 1: Training by Tiers and Financial Benefits of Quality

There is a case to be made that training any suppliers will generally improve the organization’s financial benefits—illustrated by the increase, on average, from a financial benefits range of $100,000 to $500,000 for those who do not provide training to a range of $500,001 to $1 million for those that do. However, the largest improvement in financial benefits comes from training the often-overlooked tier three suppliers. This does not mean that organizations should ignore their tier one and tier two suppliers, but rather that they should extend their training to tier three suppliers as well.
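
As a rough illustration of the kind of analysis behind Figure 1 (not the study’s actual method; the data, column names and benefit figures below are invented), survey responses can be grouped by the deepest supplier tier trained and compared on a typical benefit level. A minimal sketch in Python, assuming pandas is available:

    # Hypothetical sketch of a Figure 1-style analysis; each row stands in
    # for one survey response, with the benefit value taken as the midpoint
    # of a reported financial-benefits range (in dollars).
    import pandas as pd

    responses = pd.DataFrame({
        "deepest_tier_trained": ["none", "none", "tier1", "tier1",
                                 "tier2", "tier2", "tier3", "tier3"],
        "benefit_midpoint":     [300_000, 250_000, 600_000, 700_000,
                                 800_000, 750_000, 1_200_000, 1_100_000],
    })

    # Median reported benefit by training depth, from no training to tier 3.
    order = ["none", "tier1", "tier2", "tier3"]
    print(responses.groupby("deepest_tier_trained")["benefit_midpoint"]
                   .median()
                   .reindex(order))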

However, this then raised the question: What types of training should organizations offer their suppliers?

What Training Matters?

In addition to the general ideas of transparency and common language, organizations need to help their suppliers understand the impact that defects, or other setbacks like delays, will have on the end customer. This helps organizations eliminate waste and ultimately improve the efficiency and quality of their products, resulting in higher customer satisfaction and potential price premiums. This reasoning leads to the hypothesis that organizations that train their suppliers on their quality culture and on the end use of the product would see the largest financial benefits. To test this hypothesis, we ran the types of training offered to suppliers against the financial benefits of quality (Figure 2).

Figure 2: Gap Analysis: Training Topics and Impact on Financial Benefits

What we found was that all the types of training discussed in the survey correlated with improved financial benefits. However, the general hypothesis that organizations providing training focused on the organization’s quality values, policies and end product use would reap higher financial benefits was not supported. Instead, organizations that provided training on their quality KPIs and technology systems (e.g., ERP systems) were more likely to reap higher financial benefits. Though additional value can be created through a shared understanding of customers and quality culture, organizations should start by ensuring clarity around foundational items such as measures and reporting technology.
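
A gap analysis like Figure 2’s can be sketched in the same hedged spirit (again, the topic flags and benefit figures are invented): for each training topic, compare the average benefit reported by organizations that offer it against those that do not.

    # Hypothetical Figure 2-style gap analysis. Each row is one survey
    # response; a topic column is 1 if that training is offered, 0 if not.
    import pandas as pd

    responses = pd.DataFrame({
        "trains_kpis":    [1, 1, 0, 0, 1, 0],
        "trains_culture": [0, 1, 1, 0, 0, 1],
        "benefit":        [900_000, 1_000_000, 400_000,
                           300_000, 800_000, 500_000],
    })

    # The "gap" for a topic is the mean benefit of organizations offering
    # that training minus the mean benefit of those that do not.
    for topic in ["trains_kpis", "trains_culture"]:
        offered = responses.loc[responses[topic] == 1, "benefit"].mean()
        not_offered = responses.loc[responses[topic] == 0, "benefit"].mean()
        print(f"{topic}: gap = ${offered - not_offered:,.0f}")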

Conclusion

Best-in-class quality organizations are twice as likely to train their suppliers, and they use that training to drive quality. Supplier training ensures that all critical parties in the value chain understand the organization’s standard of quality—its quality measures and efficacy, and what it wants to achieve with its product offerings. In the final article of this series, we will look more closely at the relationship between incentives and training for staff and the financial benefits of quality.

Tips and Suggestions for Six Sigma Project Success


Tools and methodology will only get a person so far; experience gained from the practical implementation of Six Sigma solutions is priceless. The following tips and suggestions for Six Sigma practitioners can help avoid many of the pitfalls of project management.

Some fundamental points for project success are:

  1. Planning project work well.
  2. Determining the exact scope of the work and the required/desired outcomes.
  3. Developing a proper fact-based understanding of the problem.
  4. Leveraging creative tools to develop the highest quality imaginative ideas.
  5. Leveraging selection tools and decision making tools to identify the most appropriate solutions.
  6. Managing stakeholders well, involving them and planning their involvement.
  7. Planning and executing implementation with great care.
  8. Ensuring that benefits are calculated and extracted.
  9. Handing over a complete sustainable finished product to the business.

These issues can be better managed when using a proven methodology like Six Sigma.

Point 1

Planning project work well: Projects with a relatively short timeframe (e.g., three months) require a disciplined approach to planning. Just measuring a problem properly or testing a solution properly may require considerable time, given that many processes have weekly, monthly, quarterly or annual peaks and troughs in volume, or experience other types of seasonal variation.

  • Develop a conceptual plan for the whole project within the first week.
  • In the conceptual plan, identify milestones or key events.
  • Schedule future key meetings with stakeholders based upon that conceptual plan.
  • Arrange these key meetings for the whole project as early as possible (e.g., within the first two weeks).
  • Develop a detailed plan for the first phase within the first week.
  • Avoid overcomplicating the plan. In many cases it is better to be approximately right than precisely wrong.
  • By considering the tools to be used (brainstorming, affinity diagrams, etc.), more accurate timeframe estimates will be possible.
  • Try to imagine/estimate/guess the type of outputs that will be produced from each event. (For example, about 100 ideas from brainstorming will probably take about 15 minutes to plot into an affinity diagram.) This helps in estimating timeframes.
  • Arrange all workshops and meetings for the first phase immediately or as soon as possible.
  • Plan the subsequent phases in as much detail as practical, and make sure the detailed and conceptual plans align.
  • Do not put the plan in a bottom drawer; it is for daily use.
  • Block out the time on a written schedule for all events, including thinking time. Avoid being driven by other people’s agendas (e.g., block out a day for planning the next phase).

Point 2

Determining the exact scope of the work and the required/desired outcomes: The scope of a project will without doubt change as the team leader or team members develop a better understanding of the problem. However, it is important that once there is a basic understanding of the problem and it has been discussed with the project sponsor, the scope should be locked in. Then that scope should only be changed with sponsor agreement and after carefully considering the pros and cons of the decision. Consider these steps:

  • Create a working scope document from Day 1.
  • Try to gain a basic understanding of the problem as soon as possible.
  • Think about the scope that would give the greatest benefit for the effort required.
  • Make sure the project scope is practical. There are plenty of small initiatives that can have a massive impact on business performance; try to identify these.
  • Consider who would be the best sponsor for such an initiative.
  • Discuss this with the project’s current sponsor. Enlist their help to find the right sponsor if necessary.
  • Use a simple in-scope/out-of-scope table.
  • Pay particular attention to out-of-scope items.
  • Make sure the scope is communicated to all key stakeholders.

Point 3

Developing a proper fact-based understanding of the problem: Obviously, if project leaders do not understand the problems properly, they will be unable to fix them. The most common mistakes in this area are:

  • Relying on “folklore” as the basis for understanding the problem.
  • Being an intellectual snob, that is, thinking the cause of the problem is obvious.
  • Taking a boss’s, sponsor’s or other key person’s interpretation of the problem as fact.

To avoid making these mistakes, adhere to the DMAIC (Define, Measure, Analyze, Improve, Control) roadmap. A strength of DMAIC is that it produces a fact-based understanding of problems.

Points 4 and 5

Leveraging creative tools to develop the highest quality imaginative ideas, and leveraging selection tools and decision-making tools to identify the most appropriate solutions: Once the problem is properly understood, the next job is to find a good solution. The most common mistakes made in this area are:

  • Jumping on the first idea as the optimal solution.
  • Accepting “folklore” ideas as viable solutions.
  • Thinking that the solution to the problem is obvious and not exerting any effort to identify alternatives.
  • Taking a boss’s, sponsor’s or other key person’s ideas for solutions as the best ones.

Avoid making these mistakes by:

  • Adhering to the DMAIC roadmap – another strength of the DMAIC methodology is in the development and selection of ideas.
  • Leveraging the toolkit for idea generation and idea selection tools.
  • Avoiding silent brainstorming.
  • Using a warm-up “non-work” brainstorm prior to generating ideas in work-focused brainstorms.
  • Applying brainstorming rules rigorously – especially the “no judgment” rule.
  • Inviting cross sections of people to the brainstorm session, including “wild card” invitees.
  • Reviewing and using lateral thinking techniques (as developed by Edward de Bono), such as:
    • Exploring ideas known to be wrong to see where they may lead.
    • Considering ideas that at first glance appear opposed to logic.
    • Planting random words into a brainstorming session.
    • Suggesting the opposite of the last suggestion in a brainstorming session.
  • Using decision-making tools like nominal group technique to quickly reduce the options (a minimal vote-tallying sketch follows this list).
  • Trusting the idea generation and selection process.
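
To make the nominal group technique step concrete, here is a minimal tallying sketch (the ideas, ballots and points-for-ranks weighting are invented for illustration): each participant privately ranks their top three ideas, ranks are converted to points, and the highest-scoring ideas move forward for detailed evaluation.

    # Minimal nominal-group-technique tally using hypothetical ballots.
    # Each participant lists their top three ideas, best first; rank 1
    # earns 3 points, rank 2 earns 2 points, rank 3 earns 1 point.
    from collections import Counter

    ballots = [
        ["idea_A", "idea_C", "idea_B"],
        ["idea_C", "idea_A", "idea_D"],
        ["idea_A", "idea_B", "idea_C"],
    ]

    scores = Counter()
    for ballot in ballots:
        for rank, idea in enumerate(ballot):    # rank 0 is the first choice
            scores[idea] += len(ballot) - rank  # 3, 2, 1 points

    # Carry the two highest-scoring ideas forward for detailed evaluation.
    shortlist = [idea for idea, _ in scores.most_common(2)]
    print(scores, shortlist)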

Point 6

Managing stakeholders well, involving them and planning their involvement: A project’s stakeholder group will play a significant role in the success or failure of the project. Failure to manage the stakeholder group properly is the number one non-technical cause of project failure. Common mistakes include:

  • Failing to identify a significant key stakeholder.
  • Underestimating the power and influence of a stakeholder.
  • Failing to identify a negative stakeholder.
  • Failing to develop a proper management plan for stakeholders.
  • Ignoring stakeholder issues.
  • Doing whatever stakeholders want.

Avoid these mistakes by:

  • Taking stakeholder management very seriously.
  • Involving others to help identify stakeholders.
  • Making stakeholder identification part of every meeting.
  • Encouraging the project team to be frank about the nature of stakeholders.
  • Treating stakeholders with respect, but not being driven by them.
  • Developing individual management plans for key stakeholders who represent any type of risk or opportunity.

Point 7

Planning and executing implementation with great care: The implementation of change is often the most poorly managed phase of any project. Yet it is clearly the most important phase, as it is where ideas come to life and benefits start to be realized. Failure to implement an idea properly makes all prior work futile; it is irrelevant how good an idea is if it is not implemented well.

Common mistakes here are:

  • Attempting to implement fanciful solutions that cannot or do not work in the real world.
  • Poor planning (e.g., failing to take into account that training 1,000 call center staff might take many months and cost millions of dollars).
  • Failing to understand the magnitude of the task.
  • Avoiding consideration of details.

Avoid these mistakes by:

  • Planning thoroughly and in great detail.
  • Involving supervisors and line staff in the planning.
  • Testing implementation methods and tools prior to implementation.
  • Involving training, human resources and business leaders in support of the implementation.

Point 8

Ensuring that benefits are calculated and extracted: Understanding the benefits of a project improvement is extremely important. Apart from the obvious importance to the company’s financial performance, benefits can also be a powerful tool in stakeholder management. Understanding benefits can be difficult; extracting those benefits can be even more difficult. If the benefits of an improvement cannot be identified, then one can only conclude that the change being proposed is not an improvement at all, and therefore it should not be implemented.

It is worth remembering that it is impossible to have an improvement that does not have a financial benefit.

Common mistakes in this area include:

  • Being unwilling to make estimates.
  • Being unwilling to take on the hard tasks (manual counts, grunt work, etc.) that are required to gather the required statistics.
  • Being unable to find the information required.
  • Being unwilling to facilitate, encourage or coerce others into assisting in these efforts.
  • Calculating theoretical benefits but not having agreement about benefit extraction.
  • Accepting stakeholders’ politically driven explanations of why a particular benefit is not achievable.

Avoid these mistakes by:

  • Making estimates early. This is a good way of encouraging involvement and feedback. (It is interesting how enthusiastic people get about proving someone wrong.)
  • Building time into the project plan to gather benefits information and to calculate benefits.
  • Building time into the project plan to plan benefit extraction.
  • Building milestones into the plan where benefit estimates will be updated – and keeping to them.
  • Attaching confidence levels (or ranges) to estimates (e.g., $100k per annum, +/- $60k); a simple annualizing sketch follows this list.
  • Making sure the sponsor and other key stakeholders know the level of confidence in all estimates.
  • Working closely with stakeholders to develop benefit extraction plans.
  • Challenging stakeholders when explanations do not make sense.
  • Being creative about how benefits are accessed.
  • Spending time identifying who has the information the project requires.
  • Being prepared to manually take samples, gather information, etc.
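
As a simple illustration of attaching a range to an annualized estimate (the counts, per-unit saving and scaling factors below are hypothetical), a one-week manual sample can be scaled to a yearly figure and bracketed with pessimistic and optimistic assumptions rather than reported to false precision:

    # Hypothetical benefit estimate built from a one-week manual sample.
    defects_avoided_per_week = 40   # counted by hand during the sample week
    saving_per_defect = 50.0        # assumed dollars saved per avoided defect
    weeks_per_year = 48             # allows for shutdowns and holidays

    point_estimate = defects_avoided_per_week * saving_per_defect * weeks_per_year

    # Bracket the estimate with pessimistic and optimistic factors instead
    # of implying single-dollar precision.
    low, high = 0.5 * point_estimate, 1.5 * point_estimate
    print(f"~${point_estimate:,.0f} per annum (range ${low:,.0f} to ${high:,.0f})")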

Point 9

Handing over a complete sustainable finished product to the business: In order to close the project, the project team will need to have a finished product that can be sustained by the business without the special intervention of the project manager or project team.

Common mistakes here include:

  • Handing over an incomplete product because the project budget has run out.
  • Letting the project timeline slip, and therefore not allowing sufficient time for this part of the project.
  • Failing to agree on the handover with the business.
  • Failing to recognize the tasks involved.

Avoid these mistakes by:

  • Following the Six Sigma DMAIC methodology for the Control phase.
  • Ensuring the project meets its deadlines.
  • Building into the plan sufficient time for this phase.