How to Use Analysis of Variance (ANOVA)

If you take a Six Sigma Green Belt or Black Belt training class, Analysis of Variance (ANOVA) is a core analysis tool that is taught. It is used to split the variability in a data set into two key groupings: random variation (noise) and systematic factors (significant effects).

The ANOVA test is a useful tool that helps you establish what impact independent variables (inputs) have on dependent variables (outputs) within a regression model, experimental design or multi-variable study. For instance, ANOVA can be used to determine differences in the average Intelligence Quotient (IQ) scores of people from different countries (e.g. Spain vs. US vs. Italy vs. Canada).

In this example, the IQ scores would be considered the dependent variable, and countries would be an independent variable.

ANOVA provides a statistical test of whether the averages of several groups are equal; it generalizes the traditional t-test to more than two groups and is carried out as an F-test. If there were statistical differences between the average IQ scores of the countries, then we would conclude that country is a systematic (significant) factor in explaining variation in IQ scores.

Many statistical packages can perform ANOVA analysis and help you determine which of your independent variables are significant, which makes the calculations much easier these days.
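
If you prefer to work in code rather than a packaged tool, a one-way ANOVA can also be run in a few lines. The sketch below uses Python with SciPy; the group data are made up purely for illustration and are not from any real study.

```python
# Minimal one-way ANOVA sketch using SciPy (illustrative data only).
from scipy import stats

# Hypothetical IQ-style scores for three groups (e.g., three countries).
group_a = [98, 105, 110, 102, 99, 107]
group_b = [95, 101, 97, 100, 96, 103]
group_c = [104, 108, 111, 99, 106, 109]

# f_oneway returns the F-statistic and the p-value for the null hypothesis
# that all group means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the factor (country) is a
# significant, systematic source of variation in the scores.
```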

The History and Purpose of ANOVA

T-tests were the primary comparison tools available to analysts until 1918, the year when Ronald Fisher created ANOVA.

However, the term only became a buzzword in 1925, after it appeared in his book, ‘Statistical Methods for Research Workers.’ Initially, the method found application in experimental psychology, but it was later extended to wider applications such as farming and manufacturing. As a nod to its creator, the test is also known as the Fisher Analysis of Variance.

The ANOVA test is the first step when analyzing the factors that affect a data set, after its assumptions have been validated. Once the test has been completed, you can perform further tests on the factors that contribute to the variability, or discover that important factors are missing from your data and therefore from your analysis.

From the ANOVA analysis, a percentage of explained variation can be calculated (called an R-squared value), which is a number between 0% and 100%. If your analysis shows a percentage of only 33%, you are likely missing some important variables from your data set and should find ways to gather additional data and re-run your analysis.

Types of ANOVA

Analysis of variance comes in two distinct forms: one-way and multiple. In a one-way ANOVA, the evaluation carried out is of the impact of a single factor on a single dependent variable. This analysis helps to determine whether all categories or groups studied within that factor (such as each country) are the same. The purpose of the one-way ANOVA is to establish whether there are statistically significant differences in the average of the dependent variable across two or more unrelated groups.

The multiple ANOVA extends the one-way ANOVA to two or more factors (independent variables). An example of a multiple ANOVA is where a company seeks to compare the productivity of its workers on the basis of four independent variables…

Dependent: Productivity (average number of quality documents produced per hour)

Independent:

  1. Age (Under 30, 30-50 years old, over 50)
  2. Job experience in company (less than 5 years, 5-10 years, over 10 years)
  3. Previous related work experience or education (no or yes)
  4. Education Level (no high school degree, high school educated, college educated)

In addition to determining which of the four variables influence productivity, the analysis can also identify whether any of the variables interact with each other, creating a more complicated relationship. An interaction in this example might be where previous related work experience does not matter for workers with over 10 years’ experience in the company, but makes a big difference for workers who are under 30 years old and have been with the company less than 5 years. The impact on productivity changes when you look at the groups of another variable (it’s not consistent across the board).

How Is ANOVA Used?

You will find ANOVA tables displayed in these three popular Six Sigma tools: Regression Analysis, Gage Repeatability and Reproducibility (R&R) studies, and Design of Experiments (DOE).

For instance, a researcher could test students from different colleges in order to find out if the students attending one college are consistently outperforming those from the rest of the colleges. Another example of the applications of the ANOVA test is a researcher testing two different manufacturing processes to find out if one process used to create a product is more cost effective than the other.

You could even compare the beer consumption between regions of the world to see if they are similar or different.

Here is an example of an ANOVA analysis. The bottom section represents the ANOVA table, showing the Region, Error and Total terms. We will not go into the details of this calculation in this article.
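
As an illustration of what such a table contains, the following Python snippet, using made-up regional data, computes the Region (between-group), Error (within-group) and Total sums of squares, along with the F-statistic and the R-squared value discussed earlier.

```python
# Sketch of the sums of squares behind a one-way ANOVA table (hypothetical data).
import numpy as np

# Hypothetical measurements for three regions.
regions = {
    "North": np.array([12.1, 11.8, 12.4, 12.0]),
    "South": np.array([13.0, 13.4, 12.9, 13.2]),
    "West":  np.array([11.5, 11.9, 11.7, 11.4]),
}

all_values = np.concatenate(list(regions.values()))
grand_mean = all_values.mean()

# Between-group ("Region") and within-group ("Error") sums of squares.
ss_region = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in regions.values())
ss_error = sum(((v - v.mean()) ** 2).sum() for v in regions.values())
ss_total = ss_region + ss_error

df_region = len(regions) - 1
df_error = len(all_values) - len(regions)

f_stat = (ss_region / df_region) / (ss_error / df_error)
r_squared = ss_region / ss_total  # share of variation explained by Region

print(f"SS Region={ss_region:.2f}, SS Error={ss_error:.2f}, SS Total={ss_total:.2f}")
print(f"F={f_stat:.2f}, R-squared={r_squared:.1%}")
```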

Conclusion

If you are familiar with the traditional t-test, you will be excited to learn that the ANOVA test can replace the t-test, as it can handle more complex analyses that are difficult or impossible to perform with the t-test alone. Due to the increase in computing speed over the last few decades, ANOVA has become one of the most popular techniques used to compare group averages, which is needed to understand many research reports and conduct successful Six Sigma projects.

The Fastest Way to Achieve Zero Defects

In the aerospace industry, zero defects has been the driver for customers and competition since Phil Crosby and his team at the Martin Company provided the Army with a zero defect missile in the 1960s. This is not a new concept but the first question that comes to mind is: how much do we need to invest before we achieve zero defects? Once we embrace the fact that zero defects is not a final destination but a journey, it becomes easier to start asking the right question: how do we get started?

This article will attempt to provide a low-cost way of deploying zero defects with little or no investment. First, we will review a generic zero-defect deployment roadmap for years one, two and beyond, based on my experiences from three different industries – chemical, medical and aerospace. Then we will look at the most recent deployment of zero defects at my present place of employment. Finally, I will reflect on some guiding principles that are key to the speed and sustainment of the zero-defect culture.

Generic Deployment Roadmap and the Elbit Systems Journey

The goal is to arrive at a state where it is part of the culture to continuously address small problems and prevent them from becoming projects – or worse, defects. Figures 1, 2 and 3 present a generic roadmap for years one, two and beyond; the last of these shows what a stable state looks like at a high level. The example of Elbit’s journey follows the generic roadmaps.

Figure 1: Year 1 of starting a zero defect or any continuous improvement program. The new addition indicates a key success factor that is commonly missed during preparation.

Year 1 needs to focus on two things:

  1. A pilot that will exhibit the potential of using the tools
  2. Setup for a long continuous improvement journey

Selecting the pilot is typically not difficult if you are working with management. The latter requires a great deal of information gathering, training at a high level and putting mechanisms in place that will serve as drivers for continuous improvement.

Figure 2: Year 2 of starting a zero defect or any continuous improvement program. This is the year of standards, maximizing communication and coordination between functions and teams.

Year 2 needs to focus on building speed of continuous improvement – training and structure. The trainers need to expect to be teaching, coaching and following up continuously.

Figure 3: Year 3 and beyond of any continuous improvement program. This is the year when we start sharing best practices and sustaining the speed of improvement by building it into procedures, budget planning and continuous training of the organization.

Year 3 is the first year where the continuous improvement journey should follow a standard recurring set of activities.

Example from the Journey at Elbit Systems (Supplier of Vision Systems to Aerospace)

The zero defects journey at Elbit Systems follows the policy of “don’t ship a defect” to the customer, “don’t make a defect” during assembly and “don’t buy a defect” from the supplier (Figure 4).

Figure 4: Journey of Elbit Systems from plug and pray (i.e., low yields at test) to plug and play (100 percent first-pass yields). We sustained a zero-defect state at final test in March 2017. We reduced defects at assembly by 30 percent by June 2018. The challenge now is “don’t buy a defect,” which entails getting our suppliers to embrace a zero-defect journey.

Zero defects during year one was all about getting the final test yield to 100 percent. The second year was about putting process controls (Figure 5) in place, educating the suppliers and performing failure mode and effects analysis (FMEA). After that, during weekly zero defect reviews with management, we used FMEA to show the top risks and activities to reduce the defects.

Figure 5: Process control deployment is critical to standardizing and sustaining gains in zero defects.

The important thing to note about the use of the FMEA is that we not only addressed ongoing defects but also ensured that potential risks were not realized during production.

Guiding Elements

Here are some elements that are critical to both speed and quality of deployment:

  1. How can we achieve zero defects faster than anyone else? By learning faster than anyone else. I recommend using just-in-time training and a structured approach. It may be better to train only when there is assurance of immediate implementation, such as just before a Kaizen or a scheduled activity. The structure of DMAIC (Define, Measure, Analyze, Improve, Control) or PDCA (plan, do, check, act) can easily be driven by requiring A3s. Note: Be aggressive about the use of visuals and data in A3s; remind everyone that A3s are stories, not mere record-keeping.
  2. How can we drive zero defects through the organization and external parties? Through the application of tools. Asking, “which tools did you use,” should start and end every discussion.
  3. Remember that Lean is about creating wealth by eliminating waste and creating value in its place. Six Sigma is about making changes based on data (not on opinions or tweaking). Data is the lifeblood of any continuous improvement program.
  4. Figure 5 shows how different tools work together to control process performance. Note: Simply making team members aware, with numbers, of how the process is behaving can change their behavior and drive them to make the right decisions. Eighty percent of the benefits from implementing process controls comes from simply charting failures immediately after they happen.
  5. Figure 6 is a representation of the change curve that is extremely useful when preparing for a Kaizen event or managing a process improvement project.

Figure 6: The Kubler Ross Change Curve and how to handle every change situation.

Conclusion

The roadmaps, together with the guiding elements, can provide any organization with a starting point for zero defects. Today more than 10,000 suppliers to the aerospace industry are being asked to start zero-defect programs if they want to stay in business, and it will not be long before every industry does the same. The key for anyone who gets such a request is to remember that deploying zero defects is not difficult, requires little or no investment, and that you already have all it takes. Just customize the roadmaps provided here to fit your situation and keep to the guiding elements.

‘Where Else?’ symptom analysis

It is tempting to attack symptoms without addressing underlying causes, but to do so is to commit yourself to endless firefighting. Yet the symptoms are telling us something: they are an indication that something has occurred, either good or bad. In quality improvement we are encouraged to quickly find the root cause of the symptom rather than spend time analyzing the symptom itself; while superficially troubling, the symptom is not the problem so much as a red flag we should be grateful we can see. Put off solving a root cause, and you’ll only spread the damage further – and it might be fatal to the organization as a whole.

From experience I have learned that a thorough analysis of a symptom that is causing problems often leads to other areas that may be experiencing similar issues and this expands the potential solution space. Once a symptom is identified, you need to start asking: where else does this occur?

A problem I encountered in the past as a hospital administrator in an orthopedic hospital was patient falls. The first analysis of the symptom indicated that falls occurred in the hospital room, but as we asked the question ‘Where else could it occur?’ we uncovered a number of additional areas that needed to be addressed.

Symptom: Patient Falls in the hospital – below are a few examples of Where Else:

• Where else? In their hospital room – getting out of bed, going to the bathroom, or getting into a chair.

• Where else? In the corridor during walking exercises (while ambulating)

• Where else? In the physical therapy room during therapy

• Where else? Radiology – getting in position for a scan

• Where else? At the hospital entrance being discharged to their ride home

• Where else? At home after surgery

This expands the solution space, since a fix in one area alone would still leave patients at risk of falls and injury in the other areas if they are not addressed. As each one of these ‘Where Else’ areas was analyzed, it led to improvement and cost-reduction opportunities.

Once the ‘Where Else’ areas are identified, you want to analyze each one by asking the following questions and capturing the answers in a simple matrix (a sketch of such a matrix follows the list):

• When does it happen? – specific time of day

• Why does it happen? – causes

• How to correct it? – quick solutions

• What is the cost of the correction? – investment required

• What is the priority? – (High, Medium, or Low)
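
As a simple illustration, such a matrix can be captured as one record per ‘Where Else’ area and sorted by priority. The entries below are hypothetical and only sketch the idea in Python.

```python
# Hypothetical sketch of a "Where Else" analysis matrix as simple records.
where_else_matrix = [
    {
        "where_else": "Hospital room (getting out of bed)",
        "when": "Night shift",
        "why": "Equipment out of reach",
        "correction": "Label and stage equipment in each room",
        "cost": "Low",
        "priority": "High",
    },
    {
        "where_else": "Corridor while ambulating",
        "when": "Morning therapy walks",
        "why": "No handrails on busiest corridor",
        "correction": "Install handrails; assess capability first",
        "cost": "Medium",
        "priority": "High",
    },
]

# Review the highest-priority areas first.
for row in sorted(where_else_matrix, key=lambda r: r["priority"] != "High"):
    print(row["where_else"], "->", row["correction"], f"({row['priority']})")
```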

The fixes that were put in place in the Where Else areas:

• Hospital room – all equipment a patient would need to use when they got home (reachers, sock aid, long-handle shoe horn, elastic laces and long-handle sponge) was labeled with the room number so it would not migrate to other rooms or be taken home by the patient. Prospective patients were given a list of items they needed to buy beforehand to use in the hospital and when they got home. The lobby gift shop had packages prepared with all the needed items, as well as a list of other stores where they could be purchased. This had a positive revenue impact on the gift shop of $25,000 per year.

A policy was also put in place to make sure that, depending on the physical size of the patient, the nurse or physical therapist could get the person up or that additional help was available, to minimize the risk of a fall.

In addition, once patients were scheduled for surgery, they were given education about the procedure along with what items they would need to bring for their recovery therapy and what they would need at the time of discharge.

• Corridor – before allowing a patient to walk in the corridor, with assistance or alone, the floor nurse would assess the person’s capability to do so. Handrails were also installed in the corridors used most by patients. The improvement team developed a comprehensive floor safety plan.

• Discharge – a large curb cut was made at the discharge area so the wheelchair could roll up closer to the car, making the transition to the car safer. It was also noted that many patients took home the pillows used in the wheelchairs because they needed extra padding. It was estimated that 10 pillows a day were taken home, costing the hospital approximately $100 per day. Significant cost savings were achieved by letting patients know before discharge to bring their own pillows, as none would be provided at the time of discharge.

• At home – since this was out of the hospital’s control, an orthopedic vendor was engaged to go to the patient’s home and assess what would be needed to reduce the risk of falls in the house, such as grab bars, a toilet safety frame, a tub chair or a transfer chair. If the patient purchased the needed items, the vendor would install them and train the patient in their use. In many cases the cost was covered by insurance.


(C x 8) + FMT = A Formula for Better Process Change

What if I told you there was a secret formula that, if followed, would help your next process change go smoother and faster? ‘Secret’ because it involves components brought together from different disciplines, fine-tuned behind the secured gates of a few Fortune 500 companies, and not necessarily available at your local library. The best news? It is free, quick to learn, and can be applied immediately. The bad news? It’s deceptively simple. Like golf, it’s learned quickly but can take a lifetime to master…

Our rallying cry with this formula comes from an unlikely source: Mr. Rogers. To paraphrase one of his quotes: ‘simple and deep is more essential than complex and shallow.’ Thus, we avoid partial derivatives and a 14-step calculation for our formula. We only have two variables in our equation, but robustly addressing each will give you quite an advantage over the typical process change.

So what is this secret formula?

C = [Communications] and FMT = [Failure Mode Thinking]

Let’s break it down:

(C x 8)

Of those process change efforts that fail, many fail because of a severe underestimation of the need to communicate. The resulting confusion, unmet expectations and frustration have derailed many changes. Why 8? One of my mentors always told me that if you want to get your message heard in a busy world, you need to communicate it eight times and in eight different ways. That carefully worded 600-word email you broadcast to everyone describing your change may be just one of 154 emails each person receives that day. With the tsunami of information washing over workers each day, it’s a wonder they remember anything. How can you make your communications stand out? Employ multiple communication channels: print, video, electronic, in person. Also, your audience will focus on different aspects of your communications: for example, some are visual, some want details and others want just the bottom line. You also get to control when things are communicated, their length, and who does the communicating. Be creative – there are many choices open to you for communicating.

FMT 
Failure Mode Thinking is all about proactively pondering what could possibly go wrong with your process change, then developing countermeasures to either prevent those failures or address them if they occur. Engineers and quality experts use FMEA (Failure Mode and Effects Analysis) all the time in designs of products, structures, and processes. We’ll use it for a process change rollout. Nothing is worse than assuming everything will go smoothly and then being surprised by something that disrupts your rollout, especially if that ‘something’ could easily have been anticipated in advance and avoided. While not true in every case, often every minute you spend proactively error-proofing your change saves you an hour of reactive damage control during rollout. Look at your to-do list or rollout plan and ask yourself, ‘What could possibly go wrong with this?’ Start this as early as the diagnosis and design stages for the process you are improving – well in advance of rollout.
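
As a minimal sketch of failure mode thinking applied to a rollout, the risks below are scored FMEA-style with a risk priority number (severity x occurrence x detection); the failure modes, scales and scores are invented for illustration.

```python
# Hypothetical FMEA-style scoring for a process-change rollout (1-10 scales).
rollout_risks = [
    # (failure mode, severity, occurrence, detection, countermeasure)
    ("Key users never see the announcement", 7, 6, 5, "Use multiple channels; confirm receipt"),
    ("New form rejected by legacy system",    9, 3, 4, "Dry-run with test transactions"),
    ("Trainer unavailable on go-live day",    6, 2, 2, "Name a trained backup"),
]

def rpn(severity, occurrence, detection):
    """Risk priority number: higher means address it earlier."""
    return severity * occurrence * detection

# Rank the risks so countermeasures are prepared for the biggest ones first.
ranked = sorted(rollout_risks, key=lambda r: rpn(r[1], r[2], r[3]), reverse=True)
for mode, sev, occ, det, action in ranked:
    print(f"RPN {rpn(sev, occ, det):3d}  {mode}  ->  {action}")
```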

Bringing both of these together can be very powerful. Following this simple formula can save you a lot of headaches. Now, if you’ll excuse me, I have another formula to follow that I call: 2B + 1H = GA (Two beers plus one hammock = a great afternoon). Happy change!

Five Costly Mistakes Applying SPC

I have daily conversations with plant managers, quality managers, engineers, supervisors, and production workers at manufacturers about challenges when using statistical process control (SPC). Of the mistakes I witness in the application of SPC, I’d like to share the five most prevalent; they can be costly.

No. 1: Capability before stability

Capability is a critical metric, and capability statistics are often an important part of your supply chain conversation. Your customers want assurance that your processes are capable of meeting their requirements. These requirements are usually communicated as tolerances or specifications.

Customers frequently specify a process capability index (Cpk) or process performance index (Ppk) value that you must meet. Because they put such importance on this value, capability statistics may become your primary concern in quality improvement efforts. They may be important, but sole reliance on Cpk values is premature.

The first issue to be addressed is getting to a stable, predictable process. Building control charts into your analytical process on the front end can prevent costly mistakes such as producing scrap, shipping unacceptable product, or even setting the stage for a dreaded recall.

No. 2:  Misuse of control limits

Producing control charts doesn’t guarantee accurate process feedback. There are many subtleties with the application of control limits that are easy to get wrong. Here are a few common errors.

Computing wrong limit values with a home-grown tool. Time and time again I have seen examples where the numbers are just wrong, often resulting in audit failures. If you use a home-grown tool for SPC, proceed with caution.

Never computing static control limits. The decision to compute control limits should be a deliberate one, even if your SPC software automatically computes limits for you.

Never re-computing control limits. If you reduce variation over the course of a year, then the control limits you computed in January will not reflect how the process is running the following December. A deliberate re-computing of the control limits to establish the “new normal” is in order.

Waiting to have enough data to compute control limits. There are many guidelines, such as waiting until you have at least 25 subgroups gathered over a normal course of production. However, whether you have a small amount of data or a great deal, computing “baseline” control limits will almost always provide benefits; reasonable control limits can be computed even from a handful of points.

Confusing specification limits with control limits. Specifications, aka tolerances, tell you what your customer requires. Control limits reflect how your process behaves. I often see line charts with horizontal specification lines at the upper and lower specification values. This type of chart might provide value in some situations, but it should never be confused with a control chart.
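
The distinction can be made concrete with a short sketch. Assuming an individuals (XmR) chart, whose limits are based on the average moving range, the Python snippet below computes control limits from hypothetical process data and prints them next to hypothetical specification limits to show that the two come from different places.

```python
# Sketch: control limits (voice of the process) vs. specification limits
# (voice of the customer). Data and specs are hypothetical.
import numpy as np

measurements = np.array([10.2, 10.4, 10.1, 10.5, 10.3, 10.6, 10.2, 10.4, 10.3, 10.5])
lsl, usl = 9.5, 11.5  # specification (tolerance) limits from the customer

center = measurements.mean()
moving_range = np.abs(np.diff(measurements))
mr_bar = moving_range.mean()

# Individuals-chart limits: center line +/- 2.66 times the average moving range.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

print(f"Control limits (process): LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}")
print(f"Specification limits (customer): LSL={lsl:.2f}, USL={usl:.2f}")
```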

No. 3: Measurement system issues

If you are applying SPC, you’re measuring things. Do you know how well you are measuring? This is a critical factor that is easily overlooked when you are focused on SPC. Even the best application of SPC tools can be undermined when the ability to measure things is uncertain.

In addition to having measurement systems analysis tools, you need to properly manage your measuring devices. How well do you manage your measurement equipment? What is the calibration interval? What steps are checked during a calibration? What’s the history of calibration for a given device? What master gauges are used for the calibration, and have those devices been calibrated?

Software applications designed for this purpose, such as PQ Systems’ GAGEpack, can help to assess and manage measurement systems.

No. 4: Delegating SPC work to one employee or a small group of employees

In many organizations, SPC is not yet internalized and normalized as a part of doing business. This becomes a problem when the person tasked with SPC leaves. The system they put in place may get less attention, and charts on key quality metrics may not get refreshed.

No. 5: Not leveraging technology to scale your SPC efforts

Technology has made it easier to create and deploy SPC charts on anything and everything. While advantages to this abound, the amount of time spent by valuable employees doing nonvalue-adding, repetitive, SPC-related work can be costly.

If you need to monitor dozens or even hundreds of SPC charts, you need to seek methods of scaling your SPC application. Consider the time it might take to do these steps:
1. Find the chart of interest.
2. Display the chart.
3. Analyze the chart.
4. Decide whether action is needed.

Why invest an employee’s time and attention in looking at hundreds of charts—most of which are stable or in control? Utilizing an automated approach can amplify your ability to pay attention to key metrics without dragging quality workers away from more important activities.
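
A minimal sketch of that kind of automation: compute individuals-chart limits from a baseline for each metric, then flag only the charts whose recent points fall outside their limits so that a person reviews just those. The metric names and data below are hypothetical.

```python
# Sketch: scan many metrics and surface only those showing an out-of-control point.
import numpy as np

# (baseline used to set limits, most recent points to check) per metric
metrics = {
    "line_1_diameter": (np.array([5.01, 5.02, 4.99, 5.00, 5.03, 5.01]),
                        np.array([5.02, 5.00])),
    "line_2_diameter": (np.array([5.00, 5.01, 5.02, 5.00, 5.01, 5.02]),
                        np.array([5.28, 5.01])),  # contains a spike
}

def needs_attention(baseline, recent):
    """Flag a chart whose recent points fall outside individuals-chart limits."""
    center = baseline.mean()
    mr_bar = np.abs(np.diff(baseline)).mean()
    ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
    return bool(np.any((recent > ucl) | (recent < lcl)))

flagged = [name for name, (base, recent) in metrics.items() if needs_attention(base, recent)]
print("Charts needing human attention:", flagged)
```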

Often, when I see the five mistakes listed in this article, the root cause is too much focus on the tools of SPC and not enough focus on the SPC way of thinking. The common thread among these mistakes is an underlying need for more education. Through continuing education, this SPC way of thinking can become embedded in the manufacturing culture.

Combining Quality Tools for Effective Problem Solving

Quality tools can serve many purposes in problem solving. They may be used to assist in decision making, selecting quality improvement projects, and in performing root cause analysis. They provide useful structure to brainstorming sessions, for communicating information, and for sharing ideas with a team. They also help with identifying the optimal option when more than one potential solution is available. Quality tools can also provide assistance in managing a problem-solving or quality improvement project.

Seven classic quality tools

The Classic Seven Quality Tools were compiled by Kaoru Ishikawa in his book, Guide to Quality Control (Asian Productivity Organization, 1991). Also known as “The Seven Tools” and “The Seven Quality Tools,” these basic tools should be understood by every quality professional. The Classic Seven Tools were first presented as tools for production employees to use in analyzing their own problems; they are simple enough for everybody to use, yet powerful enough to tackle complex problems.

The seven tools are:
1. Cause and effect diagrams
2. Scatter diagrams
3. Control charts
4. Histograms
5. Check sheets
6. Pareto charts
7. Flow charts

A cause-and-effect diagram is used to list potential causes of a problem. It is also known as an Ishikawa diagram or fishbone diagram. Typically, the main branches are the “6Ms”: man, material, methods, milieu (environment), machine, and measurement. Sub-branches are listed under the main branches, with “twigs” containing the potential problem causes. A cause-and-effect diagram can be used to assist when the team is brainstorming, and it can also be used to quickly communicate all potential causes under consideration.


Figure 1: Cause-and-effect diagram.

A scatter diagram graphically depicts paired data points along an X and Y axis. The scatter diagram can be used to quickly identify potential relationships between paired data points. Figure 2 depicts various potential correlations, ranging from no correlation to strong negative and strong positive correlation. It is important to remember that a strong correlation does not necessarily mean there is a direct relationship between the paired data points; both may be following a third, unstudied factor.

Figure 2: Scatter diagram.

Control charts are used to evaluate and monitor the performance of a process (Wheeler 1995). There are many types of control charts available for statistical process control (SPC), and different charts are used depending on the sample size and the type of data. An individuals chart is used when the sample size is one. The formulas for an individuals chart are shown in table 1, and an example of an individuals chart for a shaft diameter is shown in figure 3. The data are in a state of statistical control when all values are within the control limits, which contain roughly 99.7 percent of all values for a stable process.


Table 1: Formulas for center line and control limits when sample size is one


Figure 3: Control chart
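
As a sketch of those formulas: the center line is the average of the individual readings, and the control limits are that average plus or minus 2.66 times the average moving range. The snippet below applies them to invented shaft-diameter readings.

```python
# Sketch of the commonly used individuals-chart formulas (data are hypothetical).
import numpy as np

diameters = np.array([25.03, 25.01, 24.98, 25.02, 25.00, 25.04, 24.99, 25.02])

x_bar = diameters.mean()                      # center line
mr_bar = np.abs(np.diff(diameters)).mean()    # average moving range

ucl = x_bar + 2.66 * mr_bar                   # upper control limit
lcl = x_bar - 2.66 * mr_bar                   # lower control limit
print(f"CL={x_bar:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
```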

Histograms are used to visualize the distribution of data (McClave and Sincich 2009). The y-axis shows the frequency of occurrences, and the x-axis shows the actual measurements. Each bar on a histogram is a bin, and the number of bins can be determined by taking the square root of the number of items being analyzed. Using a histogram can quickly show if the data are skewed in one direction or another. Figure 4 shows a histogram for data that fit a normal distribution, with roughly half of all values above the mean and half below it.


Figure 4: Histogram
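
A quick sketch of the square-root rule for choosing the number of bins, using randomly generated data for illustration:

```python
# Sketch: histogram with the square-root rule for the number of bins.
import numpy as np

rng = np.random.default_rng(0)
measurements = rng.normal(loc=50.0, scale=2.0, size=100)  # hypothetical data

bins = int(np.sqrt(len(measurements)))  # square-root rule: 100 items -> 10 bins
counts, edges = np.histogram(measurements, bins=bins)

# Print a simple text histogram to check the shape of the distribution.
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    print(f"{left:5.1f} - {right:5.1f}: {'#' * int(count)}")
```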

Check sheets are used for the collection of data (Borror 2009), such as when parts are being inspected. The various failure categories or problems are listed, and a hash mark is placed next to the label when the failure or problem is observed (see figure 5). The data collected in a check sheet can be evaluated using a Pareto chart.


Figure 5: Check sheet

A Pareto chart is used for prioritization by identifying the 20 percent of problems that result in 80 percent of costs (Juran 2005). This can be useful when searching for improvement projects that will deliver the most impact with the least effort. Figure 6 shows a Pareto chart with three out of seven problems accounting for 80 percent of all problems. Those three would be the priority for improvement projects.


Figure 6: Pareto chart
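
The prioritization behind a Pareto chart is easy to sketch: sort the categories by count and accumulate percentages until roughly 80 percent is reached. The failure counts below are hypothetical.

```python
# Sketch: find the "vital few" categories that account for ~80% of occurrences.
failure_counts = {  # hypothetical check-sheet tallies
    "Scratches": 120, "Dents": 95, "Mislabeled": 60,
    "Wrong color": 15, "Missing screw": 10, "Loose fit": 7, "Other": 5,
}

total = sum(failure_counts.values())
cumulative = 0.0
vital_few = []
for category, count in sorted(failure_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    vital_few.append(category)
    print(f"{category:<15}{count:>5}  cumulative {cumulative:5.1f}%")
    if cumulative >= 80.0:
        break

print("Priority for improvement projects:", vital_few)
```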

A flowchart is used to gain a better understanding of a process (Brassard 1996). A flowchart may provide a high-level view of a process, such as the one shown in figure 7, or it may be used to detail every individual step in the process. It may be necessary to create a high-level flowchart to identify potential problem areas and then chart the identified areas in detail to identify steps that need further investigation.


Figure 7: Flowchart

Seven new management and planning tools

The seven new management and planning tools are based on operations research and were created between 1972 and 1979 by the Japanese Society for Quality Control. They were first translated into English by GOAL/QPC in 1983 (Brassard 1996).

These seven tools are:
1. Affinity diagram
2. Interrelationship diagram
3. Tree diagram
4. Arrow diagram
5. Matrix diagram
6. Prioritization matrix
7. Process decision program chart (PDPC)

An affinity diagram identifies points by logically grouping concepts (ReVelle 2004). Members of a team write down items that they believe are associated with the problem under consideration, and these ideas are then grouped into categories or related points.


Figure 8: Affinity diagram

The interrelationship diagram depicts cause-and-effect relationships between concepts and is created by listing problems on cards (Westcott 2014). These cards are then laid out, and influences are identified with arrows pointing at the items that are being influenced. One item with many arrows originating from it is a cause that has many influences, and much can be achieved by correcting or preventing this problem.


Figure 9: Interrelationship diagram

A tree diagram assists in moving from generalities to the specifics of an issue (Tague 2005). Each level is broken down into more specific components as one moves from left to right in the diagram.


Figure 10: Tree diagram

An arrow diagram is used to identify the order in which steps need to be completed to finish an operation or project on time (Brassard 1996). The individual steps are listed, together with the duration, in the order that they occur. Using an arrow diagram such as the one in figure 11 can show steps that must start on time to prevent a delay in the entire project or operation.


Figure 11: Arrow diagram

The matrix diagram is used to show relations between groups of data (Westcott 2014). The matrix diagram in Figure 12 depicts three suppliers as well as their fulfillment of the three characteristics listed on the left side of the table. In this example, only two suppliers share the characteristic “ISO certification.”


Figure 12: Matrix diagram

The prioritization matrix is used to select the optimal option by assigning weighted values to the characteristics that must be fulfilled, and then assessing the degree to which each option fulfills the requirement (ReVelle 2004). The prioritization matrix in figure 13 is being used to select the best option for a staffing problem.


Figure 13: Prioritization matrix

Process decision program charts (PDPC) map out potential problems in a plan and their solutions (Tague 2005). The example in figure 14 shows the potential problems that could be encountered when conducting employee training, as well as solutions to these problems.


Figure 14: Process decision program chart

 

Example of combining quality tools

Multiple quality tools can be used in succession to address a problem (Barsalou 2015). The tools should be selected based on the intended use, and information from one tool can be used to support a later tool. The first step is to create a detailed problem description that fully describes the problem. In this hypothetical example, the problem description is “coffee in second-floor break room tastes bad to the majority of coffee drinkers; this was first noticed in February 2017.” The hypothetical problem-solving team then creates the flowchart shown in figure 15 to better understand the process.


Figure 15: Flowchart for coffee-making process

The team then brainstorms potential causes of the problem. These ideas come from the team members’ experience with comparable, previous issues as well as technical knowledge and understanding of the process. The ideas are written on note cards, which are grouped into related categories to create an affinity diagram based around the 6Ms that are used for a cause-and-effect diagram (see figure 16).


Figure 16: Affinity diagram for bad-tasting coffee

The affinity diagram is then turned into the cause-and-effect diagram depicted in figure 17. The team can then expand the cause-and-effect diagram if necessary. The cause-and-effect diagram provides a graphical method of communicating the many root-cause hypotheses. This makes it easy to communicate the hypotheses, but it’s not ideal for tracking the evaluation and results.


Figure 17: Cause-and-effect diagram for coffee taste

Cause-and-effect diagram items are then transferred to a worksheet like the one shown in figure 18. The hypotheses are then prioritized so that the most probable causes are the first ones to be investigated. A method of evaluation is determined, a team member is assigned the evaluation action item, and a target completion date is listed. A summary of evaluation results is then listed, and the conclusions are color-coded to indicate whether they are OK, unclear, or potentially the root cause. Unclear items as well as potential root causes should then be investigated further, and OK items are removed from consideration.


Figure 18: Cause-and-effect diagram worksheet for coffee taste. 

Figure 19 shows a close-up view of the cause-and-effect worksheet. Often, a cause-and-effect diagram item is not clear about how it relates to the problem. In such a situation, it can be expanded in the worksheet to turn it into a clearer hypothesis. For example, “Water” in the cause-and-effect diagram can be changed to “Water from the city water system containing chemicals leading to coffee tasting bad” in the worksheet.


Figure 19: Close-up of a cause-and-effect diagram worksheet. 

A prioritization matrix can be used to evaluate multiple potential solutions to the problem. In this example, the team has identified three potential solutions: The team can clean and repair the old machine, buy a new machine, or buy an expensive new machine. They want to avoid high costs, but do not want to spend too much time on implementing the solution, and they want something with long-term value. Therefore the prioritization matrix shown in figure 20 is used to find the ideal solution.


Figure 20: Prioritization matrix for improvement options
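
A weighted-scoring sketch of how such a matrix works, using the three options and criteria from this example; the weights and the 1-to-5 scores are invented for illustration.

```python
# Sketch: prioritization matrix with weighted criteria (weights and scores are hypothetical).
weights = {"low cost": 0.5, "fast implementation": 0.2, "long-term value": 0.3}

# How well each option satisfies each criterion, scored 1 (poor) to 5 (excellent).
options = {
    "Clean and repair old machine": {"low cost": 5, "fast implementation": 3, "long-term value": 2},
    "Buy a new machine":            {"low cost": 3, "fast implementation": 4, "long-term value": 4},
    "Buy an expensive new machine": {"low cost": 1, "fast implementation": 4, "long-term value": 5},
}

scores = {
    name: sum(weights[criterion] * score for criterion, score in ratings.items())
    for name, ratings in options.items()
}

# Highest weighted score is the preferred option.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.1f}  {name}")
```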

Conclusion

There is no one right quality tool for every job, so quality tools should be selected based on what must be accomplished. Information from one tool can be transferred to a different tool to continue the problem-solving process. Action items resulting from a cause-and-effect diagram should be entered into a tracking list. This assists the team leader in tracking the status of items, makes it easier to ensure action items are completed, and is also useful for reporting the results of action items.

The value of process in RPA

 

As with many disruptive technologies, theories about robotic process automation (RPA) abound. Some experts believe that by 2020, as much as 25 percent of workflows across all industries will be performed by RPA. Others state that ROI in RPA systems can range from 30 to 200 percent in the first 12 months alone.

It should come as no surprise, then, that businesses across the globe are excited about the potential RPA holds for increasing efficiency and reducing costs.

Organizations would do well to remember one key fact. No matter how powerful the technology is, it’s based on human design and programming — so it has limitations. And the only way to overcome these limitations is to first focus on Process Excellence.

A closer look at RPA

At its most basic, RPA uses automated systems that are governed by business logic to streamline processes. These systems are often referred to as ‘bots’, and they’re geared towards repeatable, rules-based tasks. From a business perspective, the interesting thing about RPA is that it’s faster and more accurate than any human. For example, research shows that a bot can complete a task that would take a human 15 minutes in a mere 60 seconds. Moreover, a bot isn’t susceptible to factors such as fatigue and loss of engagement — issues that are especially important when it comes to high-volume tasks.

The vital difference between RPA and AI

While RPA is often mentioned under the umbrella term of artificial intelligence (AI), it’s definitely a different beast. True AI utilizes powerful algorithms to process vast amounts of data in order to recognize patterns, perform high-level decision making and even learn from new input. In essence, it’s created to mimic human thought processes. For example, AI is increasingly being used in oncology to recognize patterns in large datasets or help scientists determine which patients are good candidates for clinical trials.

In contrast, RPA isn’t designed to think. It’s designed to perform specific tasks more efficiently than humans can complete them. RPA bots can’t learn based on data input, nor can they make any decisions unless they’ve specifically been programmed to do so. Their correct functioning depends entirely on a well-structured environment; they are not suited to dynamic, evolving environments with highly variable input. That is why, for instance, more and more large financial and insurance institutions are using bots for well-defined back-office processes such as loan and claims processing.

The value of process in RPA

Knowing this difference, it’s essential to have a clear understanding of the capabilities of RPA when introducing it into your organization. Clearly, since it can’t think or learn, the processes you want to automate with RPA need to be optimized before implementation.

If this isn’t the case, you might achieve cost-savings by eliminating some of the human input — but you won’t improve the process itself by simply automating it. And inefficient or ineffective processes can leave your company vulnerable to a whole host of problems even, or especially, when they’re automated. Issues can range from cost overrun due to waste, to mistakes that adversely impact your services or products.

The importance of good business process management

To establish effective RPA, your processes need to be well-defined. Here are three things you can focus on:

1. Clear, comprehensive end-to-end processes: Every process should be designed so it’s clear what it accomplishes, who performs each step and how every step fits into the overall process. All of this should be supported by easy-to-comprehend documentation.

2. Strong team engagement: Your team needs to be encouraged to take ownership of processes and contribute to their continuous improvement. This is crucial, as it will also reduce the fear of automation and make your people feel more empowered. In addition, it can be helpful to use dynamic BPM tools that enable employees to see their role in processes so they can pinpoint areas for improvement.

3. Executive buy-in: The top-down championing of process excellence is a cornerstone of effective business process management. When leadership believes in process improvement as a condition of efficiency and innovation, it will become embedded in the organization’s DNA. Executives also need to promote conversations about the role of automation in their process improvement efforts in order to reduce apprehension among employees and help them see the value.

Effective RPA hinges on good BPM

RPA holds a lot of potential for businesses, but it needs a foundation of good BPM to be effective. That’s why, before implementing RPA in your organization, it’s advisable to engage your teams in taking an inventory of your current processes. By doing so, you can achieve the cost savings and effectiveness you expect from your RPA investment.

Manufacturing Engineers Use Lean and Six Sigma to Achieve Success

As more industries implement Lean and Six Sigma to improve efficiency and effectiveness, another trend has emerged. Young workers are learning the methodologies in bigger numbers.

Manufacturing Engineering’s 2018 Class of 30 Under 30 provides the latest example. The list contains many young professionals who have learned Lean and Six Sigma skills and put them to use to make a name for themselves in their respective industries.

It’s not the first and is likely not the last such example. Recently, the “30 Under 30 Supply Chain Stars” awards recognized Denver Water Department employee Rhianna Galen as a rising star in supply chain.

In just two years, Galen has applied Lean Six Sigma methodology to save the department more than $400,000.

30 Under 30 Stars

In the new awards from Manufacturing Engineering, a number of winners mentioned Lean and Six Sigma use as key to success early in their careers. They include the following.

Women in Manufacturing

Women have increasingly created an impact in the world of Lean and Six Sigma. This ranges from innovators in healthcare to higher education. In the Manufacturing Engineering awards, the list included these two notable women.

Melinda Dean

Dean is a technical manager who oversees manufacturing engineers at Pratt & Whitney. She manages $15 million in capital expenditures. A graduate of Loyola University in Baltimore, she went on to earn a master’s degree from Rensselaer Polytechnic Institute.

She said one of the main reasons she returned for graduate school was to “gain a greater knowledge of Lean principles and apply them. It was important for me to be able to achieve better product flow, faster cycle times and improved production quality.”

Maeve Guilfoyle

Guilfoyle is a former music major who decided to switch to engineering after taking a job at Takumi Precision Engineering in Limerick, Ireland. She now works while attending college at the Limerick Institute of Technology.

Takumi has grown 25% since she joined the company. Her boss said Guilfoyle is part of that growth because of her focus on efficiency.

Guilfoyle said at school – where she is at the top of her class and also the only woman in her class – she most enjoys learning about Lean, Six Sigma and statistical process control because “I love anything that makes a system flow better, work cleaner and makes processes as close to poka yoke as possible, which I feel is important for any small company.”

Other Award Winners

Parth N. Khimsaria, an immigrant from India and graduate of Oregon State University, works at Lam Research as an operations analyst in the continuous improvement department. As part of his job, he coaches other employees in Lean Six Sigma methodologies.

William McCall is a manufacturing engineer with AKG North American Operations. He said a trip to a BMW plant and seeing thousands of people working together for one goal opened his eyes to the complexities and possibilities of manufacturing. His supervisor said he is “always focused on continuous improvement.”

Jeremy Miller has led or participated in more than 40 Lean improvement projects with his employer, Olympic Steel.

Ragava Reddy Sama, who has a master’s from California State University, also is a Master Black Belt in Lean Six Sigma. At his job with Master Power Transmission in Indiana, he’s already reduced inspection time by 50% and reduced tool inventory.

James Strausbaugh holds a Green Belt in Six Sigma, which he got after working on a project to reduce cycle time in an emergency room while in college. He now works for Global Shop Solutions in The Woodlands, Texas and is known for finding and fixing problem areas that affect costs.

This is yet another list of achievers who have made a name for themselves, partially because of a focus on process improvement and an education in Lean and Six Sigma tools and techniques.

Case Study: Building a Business Case for Software Defect Reduction

One of the many challenges faced when attempting to build a business case for software process improvement is the relative lack of credible measurement data. Without the data, a company cannot build the business case; without an approved business case, it cannot run the improvement project that would generate the data. It is the classic chicken-and-egg dilemma. But there is a solution.

An example case study – actually a composite of several similar situations – illustrates some of the challenges and how to overcome them when attempting to create a realistic, defensible evaluation of potential and actual benefits. As the example indicates, this is often a multistage process that unfolds over a period of months or even years. One of the keys to success is to candidly acknowledge what is and is not known at a particular point in time. All of the numbers here have been changed to protect the innocent (and the guilty), but the overall story and learning are faithful to the real projects.

Beginning with the Situation

A 100-person software development team is responsible for a major software product containing a total of about 4,000,000 statements. The product has been built in a series of releases, with each release typically adding 800,000 to 1,200,000 statements. The development cycle for each release is approximately one year (which includes design, coding and all testing prior to release). The development team is responsible for all support and defect repair during the first year after release. Hence, the team is concurrently responsible for maintenance of the previous release and development of the next. After the first year, support and defect repair is handled by a separate maintenance organization.

In order to build a business case that will lead to approval for a pilot improvement project, available baseline data on the most recent release is collected. This data, together with industry data when local data does not exist, is the basis for the initial business case.

The Initial Business Case

The basic premise of the initial business case is that the introduction of formal peer reviews, initially applied to code only, will reduce the cost to find and fix defects relative to find-and-fix costs associated with existing test practices. In addition, the new process is expected to deliver a higher quality product, as measured by “total containment effectiveness,” or TCE (TCE = defects discovered before release divided by [defects discovered before release plus defects discovered in the first year after release]).
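
A minimal sketch of the TCE calculation, using the as-is baseline counts that appear later in Table 1:

```python
# Sketch: total containment effectiveness (TCE).
def total_containment_effectiveness(found_before_release, found_first_year_after):
    """Share of Year-1 defects that were caught before release."""
    return found_before_release / (found_before_release + found_first_year_after)

# As-is baseline from Table 1: 4,188 + 5,927 defects found in testing before
# release, 6,395 found by customers in the first year after release.
tce = total_containment_effectiveness(found_before_release=10_115,
                                      found_first_year_after=6_395)
print(f"Year 1 TCE = {tce:.1%}")   # ~61.3%, matching the as-is figure in Table 1
```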

What needs to be known – “as-is” (baseline) and “to-be:”

  • The number of defects found in each phase (code, integration test, system/acceptance test and release Year 1)
  • The effort required to find and fix a defect in each phase
  • Average labor rate for the team

What is known:

  • The number of customer calls related to the prior release. These included requests for consulting assistance, various types of questions, as well as reports of actual defects.
  • The number of internally reported defects. It was widely recognized that many defects were not reported.
  • The approximate start and end date of each phase (although there was some overlap), and the percent of total effort devoted to maintenance of the prior release and development of the current release during those time periods. From this information, the approximate find-and-fix time for each phase can be calculated. This leads to 12 hours per defect during integration test, 18 hours during system/acceptance test, and 42 hours for defects delivered to customers.
  • The average labor rate

What is not known…and what is assumed:

  • It was recognized that the actual number of defects was less than the number of messages from customers, but there was no way to determine the actual number of true defects. Hence, the number of calls was used because that was the best information available. In any event, that would not have a major impact on the business case as it is assumed that any distortion is uniform across phases. During the pilot deployment, the defect/non-defect ratio will be measured and the appropriate adjustment will be retroactively applied.
  • While an approximate estimate of total labor devoted to test phases is known, there was no way to distinguish “find” time from “fix” time, so the total of the two was estimated, with an intent to measure the distinction during the pilot process.
  • The internal defect count was known to be under-reported. Therefore it was decided to check the code management system to approximate the defect count by looking at the versions created during testing – this turned out to be about 50 percent greater than the number of defects recorded during testing. This data was not believed to be completely accurate either, but closer to the actual facts.
  • Since inspections had not been done in the previous release, the defect removal effectiveness rate was not known. Industry experience indicated that inspections can typically remove around 60 percent of the defects present in work products inspected. Industry data also shows that in many instances defect “clustering” occurs (e.g., perhaps 60 percent of all defects are found in 20 percent of the work products); hence, selecting the right items to inspect was critical to success. Since as a practical matter it would not be possible to inspect everything, it was decided to inspect about 20 percent of the total code. In the ideal case, where the high defect 20 percent was selected, this could theoretically lead to removal of 36 percent of the total defects by inspection (i.e., 60 percent of 60 percent). That was deemed unlikely, so it was decided to base the business case on 18 percent of total defects removed by inspections.
  • Cost to find and fix defects by inspection also was not known. Based on industry experience, it was decided to use four hours as the initial estimate, to be confirmed or changed by experience during the pilot.

The above leads to the following initial business case as outlined in Table 1.

Table 1: Initial Business Case

Labor rate: $80.00 per hour. Total defects: 16,510 (as-is) and 16,510 (to-be).

Phase                  | % Found | As-Is Defects | As-Is F&F Hours | As-Is Dollars | To-Be Defects | To-Be F&F Hours | To-Be Dollars | F&F Hours per Defect
Code Inspections       | 18.0%   | 0             |                 |               | 2,972         | 11,887          | $950,976      | 4
Integration Test       | 25.4%   | 4,188         | 70,615          | $5,649,224    | 3,434         | 41,213          | $3,297,056    | 12
System/Acceptance Test | 48.1%   | 5,927         | 106,680         | $8,534,400    | 4,860         | 87,478          | $6,998,208    | 18
Customer (One Year)    | 38.7%   | 6,395         | 268,590         | $21,487,200   | 5,244         | 220,244         | $11,639,885   | 42
Totals                 |         |               | 445,885         | $35,670,824   |               | 360,822         | $28,865,744   |

Year 1 TCE: 61.3% (as-is) vs. 68.2% (to-be). Savings: $6,805,080 (19.1%).
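
The arithmetic behind the savings line is straightforward; the sketch below rolls up the find-and-fix hours from Table 1 at the $80 labor rate (small differences from the table are due to rounding).

```python
# Sketch: roll up find-and-fix hours into dollars and compare scenarios (Table 1 figures).
LABOR_RATE = 80.00  # dollars per hour

as_is_hours = {"Integration Test": 70_615, "System/Acceptance Test": 106_680,
               "Customer (One Year)": 268_590}
to_be_hours = {"Code Inspections": 11_887, "Integration Test": 41_213,
               "System/Acceptance Test": 87_478, "Customer (One Year)": 220_244}

as_is_cost = sum(as_is_hours.values()) * LABOR_RATE
to_be_cost = sum(to_be_hours.values()) * LABOR_RATE
savings = as_is_cost - to_be_cost

print(f"As-is:   ${as_is_cost:,.0f}")
print(f"To-be:   ${to_be_cost:,.0f}")
print(f"Savings: ${savings:,.0f} ({savings / as_is_cost:.1%})")
```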

Results of the Pilot Project

Based on the initial business case, the improvement initiative was approved and the DMAIC (Define, Measure, Analyze, Improve, Control) roadmap was followed. Two component development teams, each with about 10 people, were selected for the pilot to demonstrate “proof of concept.” Each team inspected roughly 20 percent of the code they developed, performing a total of about 100 inspections. In addition to inspections, the pilot teams used improved tools and processes for defect tracking and time accounting throughout the development and testing cycle. Hence at the end of the one-year pilot phase, there was much more accurate data on defect containment rates in each phase, as well as accurate data on find and fix costs.

Data from the pilots was used to revisit the initial business case. The more accurate containment rates derived from the pilot were retroactively applied to the baseline, based on the fact that the testing processes had not changed, the application and team composition were the same – hence, this approach gives a more accurate comparison of as-is and to-be. Also investigated was the relationship between the number of calls and the number of actual defects – roughly 60 percent of calls were actually defects.

The results of the pilot were scaled up to the same scale as the original business case (i.e., the pilot represented about 20 percent of the total), so results were multiplied by five. Note that at this point there are no post-release results, so it had to be assumed that the total number of statements developed and the number of defects will remain unchanged from the baseline.

The pilot results were actually somewhat better than the initial business case. The percent of defects discovered by inspections was higher than forecast (25 percent versus 18 percent). This is based on the assumption that the total number of statements and the number of defects “inserted” remain the same compared to the previous release. Since no other process changes were made during this time and the staff is the same, this was a reasonable assumption – it will be checked against actual results at the end of Year 2.

The defect find-and-fix costs are a bit different than in the initial business case, but this has no effect on the benefits, since these rates are applied to both as-is and to-be.

The scenario presented in Table 2 assumes that the primary goal is to reduce cost; hence, test effort is reduced significantly while a product with a slightly higher TCE is delivered.

Table 2: Initial Business Case, Assuming Goal of Cost Reduction

Labor rate: $80.00 per hour. Total defects: 9,906 (as-is) and 9,906 (to-be).

Phase                  | % Found | As-Is Defects | As-Is F&F Hours | As-Is Dollars | To-Be Defects | To-Be F&F Hours | To-Be Dollars | F&F Hours per Defect
Code Inspections       | 18.0%   | 0             |                 |               | 2,477         | 9,906           | $792,480      | 4
Integration Test       | 25.4%   | 2,513         | 37,343          | $2,987,454    | 1,885         | 18,848          | $1,507,800    | 10
System/Acceptance Test | 48.1%   | 3,556         | 90,678          | $7,254,240    | 2,667         | 68,009          | $5,440,680    | 26
Customer (One Year)    | 38.7%   | 3,837         | 161,154         | $12,892,320   | 2,878         | 120,866         | $6,387,742    | 42
Totals                 |         |               | 289,175         | $23,134,014   |               | 217,628         | $17,410,200   |

Year 1 TCE: 61.3% (as-is) vs. 70.9% (to-be). Savings: $5,723,814 (24.7%).

Alternatively, management might prefer to hold test effort constant, deliver higher quality, and realize cost savings in post-release maintenance. If that approach to harvesting to-be benefits is chosen, it must be assumed that testing will be somewhat less effective because fewer defects “enter” testing since they were removed by inspections. The business case in Table 3 assumes the hours devoted to testing will be unchanged compared to the baseline, but testing will be 10 percent less effective (i.e., the defects found in each test phase will be 10 percent less than in the baseline). Savings are only slightly less, but delivered quality is much higher as measured by TCE – 80.1 percent rather than 70.9 percent. About half as many defects are delivered – 1,967 rather than 3,837.

Table 3: Initial Business Case, Assuming Goal of Higher Quality

Labor rate: $80.00 per hour. Total defects: 9,906 (as-is) and 9,906 (to-be).

Phase                  | % Found | As-Is Defects | As-Is F&F Hours | As-Is Dollars | To-Be Defects | To-Be F&F Hours | To-Be Dollars | F&F Hours per Defect
Code Inspections       | 18.0%   | 0             |                 |               | 2,477         | 9,906           | $792,480      | 4
Integration Test       | 25.4%   | 2,513         | 37,343          | $2,987,454    | 2,262         | 37,343          | $2,987,454    | 10
System/Acceptance Test | 48.1%   | 3,556         | 90,678          | $7,254,240    | 3,200         | 90,678          | $7,254,240    | 26
Customer (One Year)    | 38.7%   | 3,837         | 161,154         | $12,892,320   | 1,967         | 82,631          | $4,367,038    | 42
Totals                 |         |               | 289,175         | $23,134,014   |               | 220,558         | $17,644,638   |

Year 1 TCE: 61.3% (as-is) vs. 80.1% (to-be). Savings: $5,489,376 (23.7%).

Conclusion: Next Steps and Take-Aways

Based on the results so far, management has agreed to apply inspections to the complete product during the next release development cycle. They have also agreed to have the complete team use the new data collection tools and processes so that in the future accurate data will be available for the entire life cycle, including customer use. That data can be used to prepare much more accurate business cases for future improvement proposals, such as improvements to the test processes.

After another year, data will be available to confirm or revise estimates of total defects and cost to find and fix defects delivered to the customer. The business cases can then be revisited and be restated using the results at that time.

This example case study offers a number of takeaways:

  • No one ever knows everything they would like to know at the start.
  • Nothing can be found out if you do not start.
  • Make the best assumptions, use available data and get better as you go.
