5 factors limiting process improvement success

Although every organization faces a different set of challenges in its process improvement efforts, all of those efforts require dedication, clarity of vision and patience to be effective.

It’s no secret that embarking on an organizational change initiative can be a stressful undertaking. Here are 5 factors that most frequently hold process improvement specialists and their efforts hostage:

1. Poor planning

Process improvement specialists know that the first step should be to put a process improvement framework in place. But organizations that are new to BPM, or those that have given up hope of taming their processes, often fail to recognize the importance of doing so, or they settle for a framework that is very basic.

Set your efforts up for success by appointing someone who will take ultimate process responsibility. This person will work with the management team to obtain the necessary resources and prioritize potential changes.

One of the lead process owner’s responsibilities should be to define a common vocabulary. For example, not everyone understands the difference between policy and process. Terms can mean different things to different people, so ensure your entire organization communicates by using shared definitions to eliminate wasted time and effort, and help clarify discussions and decisions.

The organization also needs a process in place for maintaining and updating processes. Will there be an approval process and if so, what is it? Where will process documentation be stored? How will process improvement efforts be prioritized, measured and reported?

Protect your organization from chaos by kickstarting process improvement initiatives with a solid framework in place. In the same way you’d build a house on a solid foundation, it is often better to take the time to develop a solid framework before rushing in to make other changes.

2. Indifferent execs

When the senior management team isn’t fully committed to meeting an objective, they leave their people adrift. Priorities change. People and teams aren’t empowered to accomplish their assigned goals, and there is a lack of clarity around roles and responsibilities. Chaos can ensue due to the lack of effective communication.

Everyone needs to understand why the organization is taking on a project, and what the upcoming changes will mean to them as individuals, to facilitate real commitment to change. Teams need to comprehend the impact the change will have on the organization and on their day-to-day lives and roles. This is the responsibility of the exec team. Failing to embrace this part of their role is a key reason why BPM projects fail.

3. Resistant teams

When faced with organizational change, most organizations traditionally see three types of behaviour. The first type consists of about 20 percent of employees, who will embrace the change and be excited and eager to get involved in the project. Harnessing this enthusiasm can carry the project to success, and can positively impact the mindset of the next group, who are willing but may be unsure about what is involved.

About 60 percent of employees make up the second behaviour type. They have no objection to the project, and can execute effectively with education, guidance and reassurance about their role.

The last group will always be naysayers, and consists of the remaining 20 percent of people. They are actively negative toward the project, and will say: “We’ve always done it this way” or “We tried that once and it failed.” Ignore their negativity – they will either adapt or leave.

4. Unmanageable processes

Process variations between job sites or business units are commonplace, especially in organizations with locations in multiple countries or regions. Customer segmentation or local practices sometimes lead to variations in the services provided, adding further complexity. While these practices can make process standardization difficult, they don’t make it impossible.

Effective control of process variations can be achieved by mapping the global standard process and the process variations. This helps identify the differences. Then, when job sites or offices argue that their existing process is better or can’t be changed because of a local custom or a unique requirement that can’t be grasped by people outside the area, ask them to document the reasons.

Insist on facts, not opinions. Ask teams to cite the specific law or contract that requires a certain process or step, and to explain why the proposed standard process doesn’t meet the requirement. Insist that they quantify the cost of the extra steps or services they require that other facilities don’t.

Hard analysis will frequently cause the objections and variations to evaporate. If the objection lingers and they produce the requested analysis, facts will help you determine whether it’s warranted to create and manage a process variation or exception.

5. No experimentation

The very definition of organizational agility is trying new things, measuring the results and then quickly rolling them out or changing direction, depending on the results.

There are ways to fast track organizational agility, but it can take time. When things go wrong, exec teams must guide people into identifying the root cause rather than focusing on who’s to blame.

Organizations that reward ingenuity and experimentation have mastered the art of organizational agility. Trying something new should be rewarded, even if the idea fails. Allow a percentage of an employee’s time to be used in the pursuit of new ideas, as Google does, or set aside funds to support trial projects. This signals to the organization that initiative and agility are valued and encouraged. Over time, this attitude will permeate the organization, giving rise to a more agile, innovative culture.

Identifying these 5 challenges and getting commitment from their teams to address them will help organizations create a culture of process improvement.

With the help of a sound approach to business process management, process specialists can actively support business transformation, and break free from the factors hampering their process improvement success.

5 key factors for returns from intelligent automation (IA)

Once upon a time, automation was seen primarily as a lever to replace manual work through the deployment of robots, physical or virtual. While this is still a major business driver for adopting automation, the landscape is changing drastically owing to technological advances across the automation spectrum, which comprises key technologies such as RPA, Cognitive Learning, OCR/ICR (Data Digitization), Virtual Agents/Chatbots, Machine Learning and Predictive Analytics.

Organizations are no longer just talking about replacing their resource pool for back-end processes; they are asking how they can maximize returns by taking the pulse of their customer base. AI, for instance, is being used by Hollywood to predict whether the next movie could be a blockbuster; Amazon can anticipate when we are running short of toilet paper; and Netflix can suggest which TV sitcom we should binge on next. All this while Virtual Agents in the form of AI Assistants (think Google Home and Alexa) are becoming household devices.

Don’t miss the automation train

For businesses, the question becomes inevitable: how do they avoid missing the automation train? But this is no longer about first-mover advantage. Businesses can realize value from their automation investments only if they approach this disruptive market with a clear methodology and the right leadership drive.

It has already been seen that organizations which took a myopic, technology bolt-on approach to applying solutions like RPA have struggled over time. A more holistic view is needed to determine what will best enable a business to embark on an automation journey. This is indeed a journey, not a sprint!

An Intelligent Automation approach promises to help organizations identify the right automation mix to integrate into their processes. Based on information complexity and the degree of ambiguity in their processes, organizations can target different sets of automation levers.

This kind of intelligence is derived through a thorough study of the automation lever being targeted, the characteristics of the process and the organization’s maturity in executing its processes.

The traditional value proposition of BPM and data-driven decision making still holds the key to enabling organizations to make informed decisions and to intervene effectively in their process re-design engagements.

Most of the time, cautious organizations gravitate toward technologies that are mature and that have enough credentials in the market. This should not, however, be a roadblock for far-sighted organizations that want to get ahead of the curve. These are the organizations willing to co-create next-generation offerings, and they are the ones that could benefit the most in due course. We are in an age where even traditional brick-and-mortar businesses are banking on AI to become future-ready. Given that AI and Intelligent Automation will be the buzz for quite some time, it is essential that execution be approached with caution and driven by a substantial leadership push.

Key success factors to ensure maximized returns:

A) Have a dedicated Center of Excellence to drive the overall automation agenda

B) Prepare the roadmap by understanding the current automation maturity of the organization

C) Automation should be part of the vision statement of the organization and must be driven well by the leadership

D) Take a well-balanced investment approach when it comes to investing in new vs. mature technologies

E) Define the value proposition better (beyond FTE reduction)

It’s More Than the Mean That Matters

Confidence intervals show the range of values we can be fairly, well, confident that our true value lies in, and they are very important to any quality practitioner. I could be 95-percent confident the volume of a can of soup will be 390–410 ml. I could be 99-percent confident that less than 2 percent of the products in my batch are defective.

Demonstrating an improvement to the process often involves proving a significant improvement in the mean, so that’s what we tend to focus on—the center of a distribution.

Defects don’t fall in the center, though. They fall in the tails. You can find more beneficial insights in many situations through examining the tails of the distribution rather than just focusing on the mean.

Let’s take a look at some nontraditional confidence intervals that are particularly useful in estimating the percent that fall inside or outside the specification limits.

Figure 1: Estimating the 99th percentile on time to complete hospital lab work. Probability plot generated with Minitab Statistical Software.

A probability plot is most often used to help you determine whether your data follow a normal distribution (based on how linear the points appear to be). It also provides a good starting point for understanding percentiles. A percentile is the value below which a given percentage of the population is estimated to fall.

But did you know probability plots also provide estimates of the percentiles of the distribution? You can choose any percentile you want on the Y axis and find the corresponding data value on the X axis. The confidence intervals shown on the probability plot are confidence intervals for that specific percentile.

I was recently working on a project with a hospital that needed to estimate within how many minutes 99 percent of its pre-surgery lab work falls.

The X value corresponding to Y = 99 on the probability plot was 204 minutes. In other words, 99 percent of all pre-surgery lab work should be complete by 204 minutes.

The confidence interval around the estimate indicated that we could be 95-percent confident that 99 percent of the lab work would be complete by 171–244 minutes. This explained why the surgeries were often falling behind schedule.
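
The article’s estimates came from a probability plot in Minitab. For readers who prefer code, here is a minimal Python sketch of the same idea: it estimates a 99th percentile and wraps it in a bootstrap confidence interval. The data are simulated stand-ins, not the hospital’s measurements.

  # Minimal sketch: point estimate and bootstrap confidence interval for the
  # 99th percentile of turnaround times. The data are simulated placeholders.
  import numpy as np

  rng = np.random.default_rng(1)
  times = rng.lognormal(mean=4.3, sigma=0.45, size=250)  # hypothetical lab turnaround times (minutes)

  p99 = np.percentile(times, 99)

  # Percentile bootstrap: resample, recompute the 99th percentile many times,
  # and take the middle 95% of those estimates as an approximate interval.
  boot = np.array([
      np.percentile(rng.choice(times, size=times.size, replace=True), 99)
      for _ in range(5000)
  ])
  lower, upper = np.percentile(boot, [2.5, 97.5])

  print(f"Estimated 99th percentile: {p99:.0f} minutes")
  print(f"Approximate 95% CI: {lower:.0f} to {upper:.0f} minutes")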

Figure 2: Tolerance interval plot generated with Minitab Statistical Software.

Although percentiles estimate the percentage of observations that fall below a certain value, tolerance intervals provide a range that you can be confident a certain percentage of the population will fall between.

They are very useful for determining where a certain percentage of the population will fall relative to your specification limits.

For example, a medical device company I was working with needed to show 95/99 reliability on the tensile strength of plastic tubing. The specific requirement was that it needed to be 95-percent confident that the tubing would withstand 3 lbs of force for 99 percent of product.

Like the hospital data, these data were non-normal. Because a tolerance interval estimates what is going on in the tails of the distribution—not the mean—the distribution assumption is important. Here, an extreme value distribution provided a good fit. Because the tubing could not be too strong, a one-sided tolerance limit, a lower bound, was appropriate. The value of the 99-percent lower tolerance bound with 95-percent confidence was 10.7 lbs. In other words, we could be 95-percent confident that 99 percent of the tubing population has a tensile strength of at least 10.7 lbs—well above the specification limit of 3 lbs.
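
As a rough companion to the Minitab analysis, the Python sketch below computes a one-sided lower tolerance bound. It assumes a normal distribution purely to keep the mechanics short (the article’s tubing data were non-normal and were fit with an extreme value distribution), and the strength values are simulated placeholders.

  # Minimal sketch: a one-sided (99% coverage / 95% confidence) lower tolerance
  # bound under a normality assumption. Data are hypothetical, not the tubing data.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(7)
  strength = rng.normal(loc=15.0, scale=1.2, size=60)  # hypothetical tensile strengths (lbs)

  n = strength.size
  coverage = 0.99      # fraction of the population the bound must cover
  confidence = 0.95    # confidence level

  # Normal-theory one-sided tolerance factor via the noncentral t distribution:
  # k = t'_{confidence}(df = n-1, nc = z_coverage * sqrt(n)) / sqrt(n)
  z_p = stats.norm.ppf(coverage)
  k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

  lower_bound = strength.mean() - k * strength.std(ddof=1)
  print(f"95%/99% lower tolerance bound: {lower_bound:.2f} lbs")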

Percentiles vs. tolerance intervals

Consider using confidence intervals on percentiles or tolerance intervals the next time you are interested in interval estimation involving the tails of a distribution:
• Percentiles estimate the value that a given percentage of the population falls below.
• Tolerance intervals estimate the range of measurements a given percentage of the population will fall within.

For Process Improvement, Call Six Sigma

Process improvement is vital for any business in any industry. Since change is a constant, we want to continually improve the quality and standards of not only our products, but also how we run the day-to-day tasks to create those products or services.

Six Sigma methodologies using the DMAIC framework (Define, Measure, Analyze, Improve, Control) have certain tools that are easy to use and can aid tremendously in process improvement.

In the Define Phase: A great tool is a Project Charter. This is a sort of tell-all about the ensuing project. It includes the main goal, the project scope and everyone involved, including key decision makers, the project lead and team members. It also covers the cost of risks and poor quality, including the baseline metrics that show how things measure up before improvement.

Included in the Project Charter is a Process Map. This is a great visual tool for getting a clear picture of how the current process is flowing, or not flowing. Through the process map you can easily see where you would need to rework tasks to eliminate waste and save on time and production costs.

In the Measure Phase: A Cause and Effect Diagram can help identify probable causes and their effects. With this tool you can see which inputs are related to which outputs and quickly identify variables. The Cause and Effect Diagram is also known as an Ishikawa diagram. Benchmarking is another tool, used to introduce better process-driven practices based on customer requirements. It is usually used in conjunction with Voice of the Customer (VOC); complete the VOC first so that pertinent information about customer needs is obtained.

In the Analyze Phase: A great method is running a Root Cause Analysis (RCA), which includes the 5 Whys. When using the 5 Whys, you start from an effect and work backward by asking “why” until a satisfactory answer is reached.

In the Improve Phase: Tools here include Brainstorming solutions that could work and Pilot Testing the most promising ones to confirm that previously identified risks have been eliminated.

In the Control Phase: This is where the new and improved process you’ve come up with is locked in, and the main tool is a Control Plan, which is monitoring at its best. It will include a well-thought-out Control Chart and a Monitoring and Response Plan.
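
As one small illustration of the monitoring a Control Plan might rest on, here is a minimal Python sketch that computes limits for an individuals control chart. The measurements and limits are hypothetical, not taken from any real project.

  # Minimal sketch: control limits for an individuals (I) control chart,
  # one of the simplest charts a Control Plan might include.
  import numpy as np

  measurements = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3, 9.7, 10.2])

  center = measurements.mean()
  moving_range = np.abs(np.diff(measurements))
  mr_bar = moving_range.mean()

  # For an individuals chart, 3-sigma limits use the average moving range
  # scaled by 2.66 (3 / d2, with d2 = 1.128 for subgroups of size 2).
  ucl = center + 2.66 * mr_bar
  lcl = center - 2.66 * mr_bar

  print(f"Center line: {center:.2f}")
  print(f"UCL: {ucl:.2f}, LCL: {lcl:.2f}")

  out_of_control = measurements[(measurements > ucl) | (measurements < lcl)]
  print("Points outside the limits:", out_of_control)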

Six Sigma: It’s About Value to the Customer

One of Six Sigma’s core beliefs is to do away with waste: if something doesn’t bring value to your product or service, it is waste. So let us shed light on non-value-added processing in Six Sigma.

Depending on the industry your business serves, non-value-added processing can be anything from the use of special packaging that makes a product hard to open (in the belief that this type of protection is important to your customer), to an extra step in the day-to-day process that can easily be removed without harming the quality of your product or service.

There are two very important Six Sigma tools that shed light on non-value-added processing and help clear up the “waste” in question. The first is Voice of the Customer (VOC).

Voice of the Customer is both the tool and the term used to find out what your customers’ requirements are so that your product or service meets their needs. This information can be obtained in several different ways:

  • One-on-One Interviews
  • Surveys
  • Focus Groups
  • Customer Suggestions
  • Call Center/Complaint logs

The more specific your customers are, the better your outcome will be. This information alone could reduce production costs without affecting the actual quality of your product or service.

The next Six Sigma tool is the Value Stream Map (VSM). How you create it can vary depending on your plan of attack or the approach taken. Whether you use pencil and paper or software, the intention is to map out every step of the process used to produce the final product or service.

The goal is to produce a process that gives your customer the best product or service that meets their requirements. If any of the steps in the process do not add value, get rid of them. Now you can see why Voice of the Customer (VOC) works hand in hand with Value Stream Mapping (VSM). Ultimately, it is all about meeting your customer’s requirements.

Root Cause Analysis, Ishikawa Diagrams and the 5 Whys

Root cause analysis (RCA) is a way of identifying the underlying source of a process or product failure so that the right solution can be identified. RCA can progress more quickly and effectively by pairing an Ishikawa diagram with the scientific method in the form of the well-known plan-do-check-act (PDCA) cycle to empirically investigate the failure. Often, failure investigations begin with brainstorming possible causes and listing them in an Ishikawa diagram. This is not necessarily wrong, but often the ideas listed do not clearly contribute to the failure under investigation.

Write a Problem Statement

Once a problem-solving team has been formed, the first step in an RCA is to create a problem statement. Although critical for starting an RCA, the problem statement is often overlooked, too simple or not well thought out. The problem statement should include all of the factual details available at the start of the investigation including:

  • What product failed
  • The failure observations
  • The number of failed units
  • The customer’s description of the failure

The customer’s description does not need to be correct; it should reflect the customer’s words and be clear that it is a quote and not an observation. For example, a problem statement may start as, “Customer X reports Product A does not work.” The rest of the problem statement would then clarify what “does not work” means in technical terms based upon the available data or evidence. A good problem statement would be: “Customer X reports 2 shafts with part numbers 54635v4 found in customer’s assembly department with length 14.5 +/-2 mm measuring 14.12 mm and 14.11 mm.”

Create an Ishikawa Diagram

An Ishikawa (or fishbone) diagram should be created once the problem statement is written and data has been collected. An Ishikawa diagram should be viewed as a graphical depiction of hypotheses that could explain the failure under investigation. It serves to quickly communicate these hypotheses to team members, customers and management. Hypotheses that have been investigated can also be marked on the Ishikawa diagram to quickly show that they are not the cause of the failure (Figure 1).

Figure 1: Ishikawa Diagram

How Did the Failure Happen?

Elements in the Ishikawa diagram should be able to explain how the failure happened. For example, “lighting” is a typical entry under “environment”; however, it is seldom clear how lighting could lead to the failure. Instead, the result of bad lighting should be listed and then empirically investigated. In this example, lighting could cause an employee to make a mistake, resulting in a part not being properly installed. Therefore, “part not properly installed” would be listed in the Ishikawa diagram. Simply investigating the lighting could take time and resources away from the investigation, so the first step would be to check whether the part was installed.

Causes of a part not being installed can be listed as sub-branches, but the priority should be on determining if the part was installed or not. If a part is not correctly installed, then use the 5 Whys on that part of the Ishikawa diagram for investigation. The lighting may be a contributing cause, but it should not be the first one investigated. The Ishikawa diagram should be expanded each time 5 Whys is used. For example, the branch may end up as: material → part not installed → employee skipped operation → work environment too dark → poor lighting → light bulbs burned out.

In this example, the use of 5 Whys led to the true cause of the failure – the light bulbs burned out. Had the 5 Whys not been used, then the employee may have been retrained, but the same employee or somebody else may have made the same or a different mistake due to the poor lighting. Each time a cause is identified, the 5 Whys should be used to dig deeper to find the true underlying cause of the failure. Failing to use the 5 Whys risks a recurrence of the failure – the corrective action may only address symptoms of the failure.
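
A 5 Whys chain like the branch above is easy to capture in a simple data structure. The Python sketch below records the example branch from the text; the structure and field names are purely illustrative.

  # Minimal sketch: recording a 5 Whys chain for one Ishikawa branch as a list,
  # using the example branch from the text. Field names are illustrative only.
  five_whys = [
      {"why": 1, "question": "Why was the part out of specification?",
       "answer": "Part not installed"},
      {"why": 2, "question": "Why was the part not installed?",
       "answer": "Employee skipped the operation"},
      {"why": 3, "question": "Why was the operation skipped?",
       "answer": "Work environment too dark"},
      {"why": 4, "question": "Why was the work environment too dark?",
       "answer": "Poor lighting"},
      {"why": 5, "question": "Why was the lighting poor?",
       "answer": "Light bulbs burned out"},
  ]

  print("Branch:", " -> ".join(step["answer"] for step in five_whys))
  print("Candidate root cause:", five_whys[-1]["answer"])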

Other Potential Causes

Potential causes that do not directly explain the failure, but theoretically could have caused it, can be listed in the Ishikawa diagram. This ensures they will not be forgotten; however, better explanations should be prioritized for investigation. Tracking and monitoring of investigation-related actions can be facilitated by copying the Ishikawa items into a spreadsheet such as the one shown in Figure 2.

Figure 2: Tracking List for Ishikawa Diagram Action Items

Here, each hypothesis from the Ishikawa diagram is prioritized and the highest priority hypotheses are assigned actions, a person to carry them out and a due date. This makes it easier for the team leader to track actions and see the results of completed actions. Such a tracking list can also be used to communicate the team’s progress to management and customers. New insights may be gained as the investigation progresses. For example, somebody checking the length of a part may have observed damage. This damage could then be entered into an updated Ishikawa diagram and then transferred to the tracking list.
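
For teams that prefer a script to a spreadsheet, the tracking list can be kept as a small set of records sorted by priority. The Python sketch below shows the idea; the hypotheses, owners and dates are hypothetical, not the article’s actual tracking sheet.

  # Minimal sketch: an Ishikawa action-tracking list, like the spreadsheet in
  # Figure 2, kept as records sorted by priority. All entries are illustrative.
  from datetime import date

  actions = [
      {"hypothesis": "Part not installed", "priority": 1,
       "action": "Inspect returned units for the missing part",
       "owner": "QA tech", "due": date(2024, 5, 10), "result": None},
      {"hypothesis": "Measurement deviation", "priority": 2,
       "action": "Re-measure shafts with a calibrated gauge",
       "owner": "Metrology", "due": date(2024, 5, 12), "result": None},
      {"hypothesis": "Poor lighting", "priority": 3,
       "action": "Check light levels at the workstation",
       "owner": "Facilities", "due": date(2024, 5, 15), "result": None},
  ]

  # Work the highest-priority open items first; record results as they come in.
  for item in sorted(actions, key=lambda a: a["priority"]):
      status = item["result"] or "open"
      print(f'{item["priority"]}. {item["hypothesis"]}: {item["action"]} '
            f'({item["owner"]}, due {item["due"]}, {status})')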

The Scientific Method

The scientific method should be used when investigating the failure. According to biophysicist John R. Platt’s Strong Inference, the scientific method consists of:

  1. Devising alternative hypotheses
  2. Devising a crucial experiment (or several of them) with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses
  3. Carrying out the experiment so as to get a clean result
  4. Recycling the procedure, making sub-hypotheses or sequential hypotheses to refine the possibilities that remain and so on

Each item in the Ishikawa diagrams should be viewed as a hypothesis that could explain the cause of the failure under investigation. A good hypothesis should be simple, general, avoid making too many assumptions and should be able to make refutable predictions. A simpler hypothesis is more likely to be correct. In general, it is best to look for the cause closest to the problem and then work back from there using the 5 Whys. The ability to make predictions is essential for testing the hypothesis; a hypothesis that can’t be tested should not be trusted as there is no way to be sure that it is correct. As Dutch psychologist and chess master Adriaan de Groot said, “Where prediction is impossible, there is no knowledge.”

Integrate the Scientific Method

The scientific method can be integrated into RCA by using cycles of PDCA. The planning phases consist of describing the problem, collecting data and forming a hypothesis.

  • P: Whether freshly formed or taken from an Ishikawa diagram, the hypothesis should make some form of prediction (or plan), such as “measurement deviation” predicting “parts will be measured out of specification.”
  • D: The next step is do – where the hypothesis is evaluated. This could be as simple as measuring a part or as elaborate as designing a new type of test method.
  • C: The check phase is where the results are evaluated and conclusions are formed.
  • A: Act is where the conclusions are acted upon. A hypothesis may be rejected or modified based on new evidence or the results of the testing, or a plan may be created to confirm a supported hypothesis.

If the hypothesis is not supported, then the next one in the prioritized tracking list should be selected and evaluated.
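
A minimal sketch of one such PDCA cycle in Python appears below, using the “measurement deviation” hypothesis mentioned earlier; the specification limits and measurements are hypothetical.

  # Minimal sketch of one PDCA cycle for a single hypothesis. The hypothesis,
  # specification limits and measurements below are hypothetical.
  SPEC_LOW, SPEC_HIGH = 9.8, 10.2   # hypothetical length specification (mm)

  # Plan: "measurement deviation" predicts parts will measure out of specification.
  hypothesis = "Measurement deviation"

  # Do: re-measure the suspect parts (made-up readings from a calibrated gauge).
  measurements = [10.02, 9.97, 10.05, 10.01]

  # Check: evaluate whether the prediction held.
  out_of_spec = [m for m in measurements if not (SPEC_LOW <= m <= SPEC_HIGH)]
  supported = len(out_of_spec) > 0

  # Act: keep, refine or reject the hypothesis and move on.
  if supported:
      print(f"{hypothesis}: supported - plan a confirming experiment")
  else:
      print(f"{hypothesis}: not supported - evaluate the next hypothesis in the tracking list")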

Summary

Using Ishikawa diagrams and the scientific method can serve as a standalone methodology for RCA or be used as part of any RCA process that uses Ishikawa diagrams. This approach is completely compatible with methodologies such as 8D and A3 reports.

COMPANY CULTURE, EMPLOYEE TRAINING KEY ISSUES AS USE OF AGILE EXPANDS

Agile, a process improvement strategy developed in the software industry to speed up product development, has now become a key component in making operations more efficient across many industries.

Businesses that put Agile practices into use are also getting the results they hoped to achieve, according to the 2018 State of Agile report from CollabNet Version One.

A higher percentage of those surveyed than in past years indicated they have adopted Agile practices. The reasons for doing so include:

  • Accelerating software delivery
  • Managing changing priorities
  • Increasing productivity
  • Improving alignment between business and IT
  • Increasing software quality
  • Enhancing product delivery predictability

The report also found that expectations for Agile are matching reality – the reasons above also appeared on the list of benefits most often seen after adopting Agile.

What is Agile?

Agile started in the software industry to speed up delivery of products while retaining quality. The Agile Manifesto, created in 2001 and still available online, outlines the main tenets of the methodology.

They are:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The manifesto also notes that while the items on the right all have value, the items on the left are of higher value.

Much like Lean and Six Sigma, Agile is not something people “do.” Rather, it’s a changed mindset and new approach supported by a variety of tools and techniques. Agile has been used at Microsoft and many other tech giants, including Google (although it’s not always called Agile).

Agile revolves around small, self-organizing and autonomous groups. They work in short “sprints” to quickly achieve tasks. They have frequent, short “scrum” meetings to evaluate progress. They also have short, daily stand-up meetings. The stand-up meetings rank among the most popular tools used by those who use Agile, according to the State of Agile report.

Essentially, much like other process improvement approaches, Agile is designed to overcome cumbersome business processes, cut waste and streamline operations to make an organization more efficient and effective in producing quality products.

Importance of Culture

While Agile adoption is spreading to businesses outside the software industry, organizational culture remains a roadblock. Although Agile emphasizes a bottom-up approach, support and buy-in from all levels of the organization are critical.

The State of Agile report listed culture that is at odds with Agile values as one of the main challenges to successfully implementing Agile. Others include resistance to change and inadequate support from management.

Having Agile coaches on staff and instituting consistent practices and processes across all teams ranked among the key factors for successful implementation of Agile.

The Importance of Agile

In the modern business world, practices from the last century don’t work. For example, many organizations were designed around “siloed” departments. Work was done based on commercial patterns that were predictable.

That no longer works as well in an era of “unpredictability and disruption,” according to the Deloitte Global Human Capital Trends report.

The report states that hierarchical management structures no longer prove as effective in the 21st century. Work now is accomplished faster, and with better quality, when done by teams. The report found that only 14% of executives surveyed believe that traditional organizational models make a business more effective.

More are moving to the team-centric models used in Agile, the report stated. The focus now is more on flexibility, fast adaptation and collaboration.

Operating at Scale

While Agile has been proven to work for smaller startups and tech companies, will it work for larger organizations?

Writing for the Harvard Business Review, three experienced businessmen explain that it does work, no matter the size of the company. They write that scaling up Agile to work in larger organizations – even those with thousands of employees – “creates substantial benefits.”

The focus, as with Lean methodologies, should always be on the consumer. That was the area listed as most important by those surveyed for the State of Agile report. Streamlining processes leads not only to savings in time and dollars for the organization, but also to better products for consumers.

The Harvard Business Review article also lists large companies that have implemented some form of Agile. They include – in addition to Microsoft and Google – Amazon, Bosch, ING Bank, Netflix, Saab, Salesforce, SAP, Spotify, Tesla, SpaceX and USAA.

Agile has emerged as a useful process improvement tool. Much like Six Sigma, it started in a specific industry (manufacturing, in the case of Six Sigma). But success has been the best teacher, and organizations across all industries have learned the value in training employees in process improvement and delivering consistent, high-quality results to consumers.

Design for Six Sigma – IDOV Methodology

Design for Six Sigma (DFSS) can be accomplished using any one of many methodologies. IDOV is one popular methodology for designing products and services to meet six sigma standards.

IDOV is a four-phase process that consists of Identify, Design, Optimize and Verify. These four phases parallel the four phases of the traditional Six Sigma improvement methodology, MAIC – Measure, Analyze, Improve and Control. The similarities can be seen below.

Identify Phase

The Identify phase begins the process with a formal tie of design to voice of the customer (VOC). This phase involves developing a team and team charter, gathering VOC, performing competitive analysis, and developing CTQs.

Crucial Steps:

  • Identify customer and product requirements
  • Establish the business case
  • Identify technical requirements (CTQ variables and specification limits)
  • Roles and responsibilities
  • Milestones

Key Tools:

  • QFD (Quality Function Deployment)
  • FMEA (Failure Mode and Effects Analysis)
  • SIPOC (supplier, input, process, output, customer) map
  • IPDS (Integrated Product Delivery System)
  • Target costing
  • Benchmarking

Design Phase

The Design phase emphasizes CTQs and consists of identifying functional requirements, developing alternative concepts, evaluating alternatives and selecting a best-fit concept, deploying CTQs and predicting sigma capability.

Crucial Steps:

  • Formulate concept design
  • Identify potential risks using FMEA
  • For each technical requirement, identify design parameters (CTQs) using engineering analysis such as simulation
  • Raw materials and procurement plan
  • Manufacturing plan
  • Use DOE (design of experiments) and other analysis tools to determine CTQs and their influence on the technical requirements (transfer functions)

Key Tools:

  • Smart simple design
  • Risk assessment
  • FMEA
  • Engineering analysis
  • Materials selection software
  • Simulation
  • DOE (design of experiments)
  • Systems engineering
  • Analysis tools

Optimize Phase

The Optimize phase requires the use of process capability information and a statistical approach to tolerancing. Developing detailed design elements, predicting performance and optimizing the design take place within this phase.

Crucial Steps:

  • Assess process capabilities to achieve critical design parameters and meet CTQ limits
  • Optimize design to minimize sensitivity of CTQs to process parameters
  • Design for robust performance and reliability
  • Error proofing
  • Establish statistical tolerancing
  • Optimize sigma and cost
  • Commission and startup

Key Tools:

  • Manufacturing database and flowback tools
  • Design for manufacturability
  • Process capability models
  • Robust design
  • Monte Carlo methods
  • Tolerancing
  • Six Sigma tools

Verify Phase

The Verify phase consists of testing and validating the design. As increased testing using formal tools occurs, feedback on requirements should be shared with manufacturing and sourcing, and future manufacturing and design improvements should be noted.

Crucial Steps:

  • Prototype test and validation
  • Assess performance, failure modes, reliability and risks
  • Design iteration
  • Final phase review

Key Tools:

  • Accelerated testing
  • Reliability engineering
  • FMEA
  • Disciplined new product introduction (NPI)

The Importance of Understanding Conditional Probability

A lot of people in my classes struggle with conditional probability. Don’t feel alone, though. A lot of people get this (and simple probability, for that matter) wrong. If you read Innumeracy by John Allen Paulos (Hill and Wang, 1989), or The Power of Logical Thinking by Marilyn vos Savant (St. Martin’s Griffin, 1997), you’ll see examples of how a misunderstanding or misuse of this has put innocent people in prison and ruined many careers. It’s one of the reasons I’m passionate about statistics, but it’s hard for me, too, because it’s not easy to work out in your head. I always have to build a table.

The best thing to do is to be completely process-driven; identify what’s given, then follow the process and the formulas religiously. After a while, you can start to see it intuitively, but it does take a while.

In my MBA stats class, one of the ones that always stumped the students was a conditional problem:

“Pregnancy tests, like almost all health tests, do not yield results that are 100-percent accurate. In clinical trials of a blood test for pregnancy, the results shown in the accompanying table were obtained for the Abbot blood test (based on data from ‘Specificity and Detection Limit of Ten Pregnancy Tests’ by Tiitinen and Stenman, Scandinavian Journal of Clinical Laboratory Investigation, 53, Supplement 216). Other tests are more reliable than the test with results given [in figure 1].

                              Positive Result    Negative Result
  Subject is pregnant                80                  5
  Subject is not pregnant             3                 11

Figure 1

“1. Based on the results in the table, what is the probability of a woman being pregnant if the test indicates a negative result?”

“2. Based on the results in the table, what is the probability of a false positive; that is, what is the probability of getting a positive result if the woman is not actually pregnant?”

Everyone would just try to look at it as though there were no conditions… they would say, 5/80 for question 1, and 3/80 for question 2.

The first question, though, is asking, “What is the chance of being pregnant, given a negative result?” There were 16 negative results, and of those, five were pregnant. So the answer is 5/16, or 31.25 percent. For the second question, it’s, “What is the probability of a positive, given that the woman is not pregnant?” In this case, there are 14 nonpregnant women, and three of those got a positive result. So that’s 3/14, or about 21.43 percent.
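
The same two conditional probabilities can be computed directly from the counts in Figure 1. Here is a short Python sketch of the arithmetic.

  # Minimal sketch: the two conditional probabilities above, computed from the
  # counts in Figure 1.
  counts = {
      ("pregnant", "positive"): 80,
      ("pregnant", "negative"): 5,
      ("not pregnant", "positive"): 3,
      ("not pregnant", "negative"): 11,
  }

  negatives = counts[("pregnant", "negative")] + counts[("not pregnant", "negative")]
  p_pregnant_given_negative = counts[("pregnant", "negative")] / negatives

  not_pregnant = counts[("not pregnant", "positive")] + counts[("not pregnant", "negative")]
  p_positive_given_not_pregnant = counts[("not pregnant", "positive")] / not_pregnant

  print(f"P(pregnant | negative) = 5/{negatives} = {p_pregnant_given_negative:.4f}")            # 0.3125
  print(f"P(positive | not pregnant) = 3/{not_pregnant} = {p_positive_given_not_pregnant:.4f}")  # about 0.2143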

These numbers, and this idea, are really important. Some statisticians make their living explaining these concepts to juries. People get fired or arrested because of false positives on urinalysis and other tests, because there is a general impression that they are far more reliable than they actually are.

It’s all about what you are given, and how you define things. Let’s look at a different example. In the military, people are given random drug screenings. The test is “certified 99-percent accurate.” I was always told that this means that if you do drugs, and you’re tested, it will catch you 99 percent of the time.

We think, “logically,” that this means there is only a 1-percent false negative rate… that because someone who does drugs escapes detection only 1 percent of the time, the false negative rate must be 1 percent. Worse, we assume that if the “false negative rate” is only 1 percent, the false positive rate must also be 1 percent… it’s just common sense, right?

But “common sense” isn’t… it’s neither common nor truly sensical. Look at it this way… suppose we test 100,000 service members. Suppose further that 0.1 percent, or one in a thousand, of service members actually do drugs. We might get the table shown in figure 2.

                     Do Drugs    Don’t Do Drugs
  Test Positive          99              999
  Test Negative           1           98,901

Figure 2

Tables like this are informative, but they don’t tell the whole story. You can see from this that the company is technically correct… at least in this case, of 100 people who did drugs, 99 were caught and one was not. But the false positive and false negative rates involve more than this. To get the whole story, it’s also good to compute the marginals, or row and column totals, as shown in figure 3.

                     Do Drugs    Don’t Do Drugs     Totals
  Test Positive          99              999         1,098
  Test Negative           1           98,901        98,902
  Totals                100           99,900       100,000

Figure 3

Numbers like these, the numbers of people tested, are very important. They help us figure out our givens. The false negative rate is not the number of people who did drugs and tested negative. It’s the proportion of all the people who tested negative who actually did drugs. In this case, the false negative rate is much better than advertised… it’s 1/98,902, or about 0.00001: only about one in 100,000 of those who test negative actually did drugs and got away with it.

The consequences, though, are on the false positive side… this is where people get turned away for employment or get fired. In the case of the military, a lot of people end up in a lot of trouble with the random urinalysis program. While we want to be cautious, and we don’t want a lot of druggies flying or controlling aircraft or tanks or other deadly weapons, we should also be concerned that we might be ruining careers unnecessarily. If we look at the table, the “common sense” interpretation of the false positive rate would be 999/100000, or 0.999 percent, very close to the 1 percent that we assumed initially. But, as astounding as it may seem, considering the number of people that are convicted each year because of this assumption, this is entirely incorrect!

The actual false positive rate consists of the number of people incorrectly identified as drug users, or the number of nondrug users out of the total number of positives. In this case, that’s 999 out of 1,098, or 90.98 percent! In other words, your chance of actually being a drug user, given a positive result on this “99-percent accurate” test, is only 9.02 percent!
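
The whole table, and the probability that matters, can be rebuilt in a few lines from the stated assumptions: 100,000 people tested, 0.1 percent actual users, and “99-percent accurate” taken to apply in both directions, as the article does. A short Python sketch:

  # Minimal sketch: rebuilding the Figure 3 table from the stated assumptions
  # and computing P(actually a drug user | positive test).
  population = 100_000
  prevalence = 0.001        # 1 in 1,000 actually use drugs
  sensitivity = 0.99        # P(positive | user)
  specificity = 0.99        # P(negative | non-user), the "common sense" assumption

  users = population * prevalence                      # 100
  non_users = population - users                       # 99,900

  true_positives = users * sensitivity                 # 99
  false_negatives = users - true_positives             # 1
  false_positives = non_users * (1 - specificity)      # 999
  true_negatives = non_users - false_positives         # 98,901

  total_positive = true_positives + false_positives    # 1,098
  p_user_given_positive = true_positives / total_positive

  print(f"Positive tests: {total_positive:.0f}")
  print(f"P(drug user | positive) = {p_user_given_positive:.4f}")                     # about 0.0902
  print(f"Share of positives that are false: {false_positives / total_positive:.4f}")  # about 0.9098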

Yes, it’s tricky. No, it’s not intuitive. But it’s important. It touches lives. Juries, lab technicians, doctors and nurses, lawyers, employers, employees, and patients who don’t understand this put either themselves or others in peril every day.

SPURRED BY ESSA, SCHOOL DISTRICTS TURN TO PROCESS IMPROVEMENT FOR BETTER STUDENT OUTCOMES

Process improvement helps businesses make better products and healthcare operations achieve better patient outcomes. Education systems now hope to do the same for students.

Improving student outcomes and streamlining often cumbersome school system operations are the focus of new initiatives across the nation.

Part of the push behind the move is the Every Student Succeeds Act (ESSA). Signed by President Obama in 2015, the act replaces the federal No Child Left Behind Act enacted in 2002. Unlike the previous law, the ESSA gives much more leeway to local school districts to decide how to best spend educational dollars.

It’s also made continuous process improvement a priority in school districts nationwide.

 

The Need For Process Improvement in Education

The goal of every school district is to provide a quality education to every student regardless of age, neighborhood, race or economic status. However, that requires the best possible management of resources, smart financial strategies and understanding the issues behind low-performing schools.

These issues often are referred to in phrases such as equity, closing achievement gaps, improved quality of instruction and increasing outcomes for students (in other words, preparing them for success in careers or college).

Some states see ESSA as an opportunity. Many have started to revamp entire education systems, according to Education Week. This includes turning to two important components of Lean and Six Sigma: evaluating data and constant, consistent review of operations.

For example:

Tennessee has developed a School Improvement Support Network that supports districts in improving low-performing schools. Part of the effort is on training school district and state officials on the needs of low-performing schools and the root causes of their problems.

New York has already established a five-year plan that sets goals for student achievement and graduation rates. The plan also includes continuous evaluation of those goals and adjusting them based on student outcome data.

New Mexico has developed a real-time data system that tracks issues such as how many students in each grade are behind in terms of earning credits and how many students have transferred schools.

Examples of Six Sigma in Education

The use of process improvement methodology in schools is nothing new. In some cases, Lean, Six Sigma and Lean Six Sigma are directly involved with quality improvement efforts at school.

The Des Moines School District in Iowa has created a Department of Continuous Improvement that has reduced paper timesheet submissions by 97% and saved $80,000 in textbook inventory costs.

At Temple University, Six Sigma Green Belt Nichole Humbrecht, a senior in engineering, has applied the methodology to the school’s charity fundraising efforts. She works with HootaThon, an annual dance event that raises money for the Child Life Services department of the Children’s Hospital of Philadelphia.

A host of universities have also applied Lean and Six Sigma to reduce waste and save costs in a variety of areas, including recycling efforts and improving patient satisfaction at university-affiliated hospitals.

Clearly, education provides as many opportunities for applying Lean and Six Sigma as business does. Training employees and earning certification in Six Sigma can be the first step toward making education systems more effective and efficient.