Process Improvement in the Age of Smart Manufacturing

Process improvement has typically been a labor-intensive and imprecise undertaking. Labor-intensive in that capturing the as-designed vs. the actual current-state process required facilitated meetings, interviews, surveys, and analysis of operational data over an extended period. Imprecise in that workers typically act differently when they know they are being watched and measured. The Hawthorne effect, first described more than 50 years ago, predicts that workers will typically improve a process while being observed as part of a process improvement project but will revert to their pre-project behavior once the project has ended and the observers have departed.

In my experience of running dozens of process improvement projects over a 30-year period, sustaining improvements is always the most daunting challenge.

  • Six Sigma practitioners will admit that the final Control phase of the five-step DMAIC (Define, Measure, Analyze, Improve, Control) process is its weakest link. The reason is pretty obvious: Six Sigma projects have a beginning, middle, and end. Even though the Control phase is designed to include the business owner taking responsibility for sustaining the project, little is in place to monitor the sustainability of the improvements.
  • Lean advocates a continuous improvement process designed to overcome the problems with a project-based approach, but the Hawthorne effect is very much in play, limiting the sustainability of improvements.
  • A common problem with all continuous improvement initiatives is the very dynamic nature of today’s business environment, with ever-shrinking product life cycles, rapid developments in automation, and mergers and acquisitions. The result is that the improved process may become obsolete in a matter of months.

Now consider the new age of process improvement with “smart manufacturing.” Much has been written about the industrial Internet of Things (IIoT) creating significant opportunities to capture operational data from machines and equipment. While this will assist in improving processes, it is limited to reading machine metrics, with few insights into how people interact with machines and products. What is less well understood is that people are a key element of smart manufacturing: empowering them with more robust operational information helps them eliminate bottlenecks and solve tough quality issues.

Adding passive, non-obtrusive, sensor technology to continuously monitor operations – people, machines, and products – provides a much greater opportunity than merely making machines smarter.

Process and Value Stream Maps

  • The Past: Labor-intensive process and value-stream charts capture only a qualitative, subjective snapshot in time that can vary from day to day and from person to person.
  • The Future: Continuous hard-data capture over extended periods, using unobtrusive sensors that watch the interaction of people with machines and products. The Hawthorne effect is defeated by the subtlety and permanence of the observation tools.

Gage R&R (Repeatability and Reproducibility)

  • The Past: Gage R&R has been the most difficult challenge in every process improvement project I’ve tackled because of the major discrepancies from the as-designed process when comparing one person to another and one day to another. The variation typically grows with the complexity of the process and the skill level of the people involved.
  • The Future: With continuous monitoring of several people over several days and weeks, all variations are captured for analysis. Best practices, bottlenecks and training opportunities are much more easily discovered.

Sustaining Process Improvements

  • The Past: Because of the labor-intensive, qualitative/subjective, and snapshot nature of process improvement efforts, a majority of them fail, according to a Wall Street Journal article.
  • The Future: Because monitoring is on-going and not obvious to people, variations from the improved process are easily identified in real-time via alerts and dashboards. There is no need for complex reports or expensive consultants to interpret them.

Sustaining Predictable and Economic Operation: What Does It Take?

In theory, a production process is always predictable. In practice, however, predictable operation is an achievement that has to be sustained, which is easier said than done. Predictable operation means that the process is doing the best that it can currently do—that it is operating with maximum consistency. Maintaining this level of process performance over the long haul can be a challenge. Effective ways of meeting this challenge are discussed below.

Some elements of economic operation

As argued in “What Is the Zone of Economic Production?”, to speak of the economic operation of a manufacturing process, all of the following elements are required:
Element 1: Predictable operation
Element 2: On-target operation
Element 3: Process capability achieved (Cp and Cpk ≥ 1.5)

The notions of on-target operation and process capability are inextricably linked to predictable operation—i.e., demonstrable process stability and consistency over time. Without stability and consistency over time it is impossible to meaningfully talk about either capability or on-target operation.

First example of a predictable process

Our first example uses 128 successive sample measurements for product characteristic 17 in product 73S. The time period covered by these data was sufficient to meaningfully address the question of process predictability. Because these are one-value-per-time-period data, a process behavior chart for individual values is appropriate, as seen in figure 1. A process behavior chart has traditionally been called a control chart, the principal technique of statistical process control (SPC). This chart provides a proven operational definition of a predictable, or “in control,” process.

Figure 1: Process behavior chart for product characteristic 17

Is the process predictable? Figure 1 allows us to characterize process behavior as predictable and therefore to think of one voice speaking on behalf of the process. The natural process limits of 37.41 to 40.83 shown in figure 1 define this “voice of the process.” They also tell us what to expect from this process in the future. Thus, by being operated predictably, this process meets the first requirement for economic operation.
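For readers who want to reproduce this kind of calculation, here is a minimal sketch of the natural-process-limit computation for an individuals (XmR) chart, using the standard scaling constant of 2.66 applied to the average moving range; the data values below are placeholders, not the article's 128 measurements.

```python
def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart: centre line
    at the mean, limits at the mean +/- 2.66 times the average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Placeholder data; with the article's 128 measurements this would return
# approximately (37.41, 39.12, 40.83).
lnpl, centre, unpl = xmr_limits([39.1, 38.7, 39.4, 39.0, 38.9, 39.6])
print(lnpl, centre, unpl)
```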

Is the process on target? Although the green line of figure 1 shows the average of 39.12 to be very close to the target value of 39.0, it doesn’t give us a standardized means of answering the question. A traditional 99-percent confidence interval for the mean is 38.99 to 39.25. Since this interval estimate includes 39.0, we can conclude that the process is effectively on target, and that the process meets the second requirement for economic operation.
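The article does not say which interval formula was used; a conventional t-based interval, sketched below, is one standard choice and reproduces the quoted numbers from the stated summary statistics.

```python
from math import sqrt
from statistics import mean, stdev

from scipy import stats

def ci_for_mean(values, confidence=0.99):
    """Two-sided t-based confidence interval for the mean."""
    n = len(values)
    m, s = mean(values), stdev(values)
    t = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return m - t * s / sqrt(n), m + t * s / sqrt(n)

# With the article's n = 128 and summary statistics, this style of interval
# gives roughly (38.99, 39.25).
```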

Is the process capable of meeting the specifications? Product characteristic 17 has specifications of 36.5 to 41.5. These specifications define the “voice of the customer.” By combining the histogram, the specifications (USL and LSL), and the natural process limits (UNPL and LNPL), figure 2 gives a graphic way to compare the voice of the customer with the voice of the process and thereby answer the question above. Numerical quantities that complement figure 2 are the capability ratios of Cp = 1.45 and Cpk = 1.39. Since the confidence intervals for both of these ratios include 1.50, it is safe to say they are in the ballpark required for economic operation. Thus, the third requirement of economic operation is met.

Figure 2: Comparison of the voice of the customer with the voice of the process
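The capability ratios can be approximated from the numbers already quoted. In the sketch below, sigma is backed out of the natural process limits, which is why Cp comes out near 1.46 rather than the article's 1.45 (the article presumably used an unrounded sigma estimate).

```python
def capability_ratios(process_mean, sigma, lsl, usl):
    """Cp compares the specification width to the process spread;
    Cpk also accounts for how centred the process is."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - process_mean, process_mean - lsl) / (3 * sigma)
    return cp, cpk

# Sigma backed out of the natural process limits: (40.83 - 37.41) / 6
cp, cpk = capability_ratios(39.12, (40.83 - 37.41) / 6, lsl=36.5, usl=41.5)
print(round(cp, 2), round(cpk, 2))  # approximately 1.46 and 1.39
```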

Further to the capability question, as long as the process continues to be operated predictably and, therefore, within the natural process limits, should we expect fully conforming product for characteristic 17? Figure 2 tells us to answer yes with a high degree of belief. (A high degree of belief has to mean that no substantial change to the process or its operation is expected.)

Hence, the data for product characteristic 17 satisfy the requirements for economic operation listed earlier. With predictability, on-target operation, and capability in place, the process is in the ideal state. (See “Two Definitions of Trouble” to learn of the four possible states for any production process.) So long as process predictability is maintained, the process will continue to operate on target and capably.

But what happens in the future? New production runs will bring new data, inviting the user to update her computations. Would this computational effort be beneficial? There are three cases to be considered when answering this very important question.

First, as long as the process continues to display predictable behavior, there is no need to recompute the limits for the chart or the capability ratios. They would simply be repeat estimates of the same quantities.

Second, if the process shows evidence of unpredictable behavior, there is also no need to recompute the limits for the chart or the capability ratios. Although the process may have changed, the computations do nothing to fix the problem. The real need is to identify the assignable cause of the change and take action to remove its effect from the process. Making the identified assignable cause part of the set of control factors for the process will be more profitable than anything else that can be done when a process is unpredictable.

Third, only when the process displays a different kind of behavior than observed previously, and the behavior is both desired and expected to continue (e.g., the result of a planned change), is there any benefit to be obtained from a recomputation of the limits for the chart and the capability ratios.

Looking forward, figure 3 asks, for our first example, what it is going to take to sustain predictable operation. The natural process limits define what this process is capable of delivering, so how can we avoid settling for less? How can we get our process to continue to operate up to its full potential? It turns out that sustaining predictable operation requires continued attention to the process and the willingness to take action when and where it is needed. The reason that we can’t simply fix the process and then forget it is known as entropy.

Figure 3: Operation of a process up to its full potential

The deteriorative force of entropy

Entropy acts against all manufacturing processes, meaning it is far from easy to operate a process with maximum consistency, that is, predictably. Entropy is a force of deterioration. It forces a manufacturer to maintain and look after all aspects of a production operation. Without action to counter the effects of entropy, it wouldn’t take long until product measurements for characteristic 17 would be found outside the range of predictable operation found in figure 3.

Predictability is not a natural state for a production process. Signals of process change that are made visible by process behavior charts provide clues about when and where to act against the forces of entropy to regain a state of predictability.

Sustaining predictable operation

How should a manufacturer act against the forces of entropy in such a way that a production process has a chance of sustaining predictability in the long-term? Some fundamental points are discussed below.

To start, the operator of a production process is not solely responsible for predictable operation. While some assignable causes of unpredictability will be traced to production operators, there is much more to it than that.

Operating standards and training
Without an operating standard, one can argue that there is no “process.” Even with standards in place, operators subject to inadequate or incorrect training may operate a process unpredictably. One example is adjustments that make things worse, such as reacting to process output found outside of specification when there is actually no signal of change in the process.

Operating standards define how to operate the process. They provide the foundation for consistent and effective process operation across the workforce, such as aligning the ways of working among different shifts. High-quality operating standards provide the basis for effective training and supervision, as well as the means of operating the process predictably. Although operators follow and execute such standards, they don’t own them.

Data collection and use (rational sampling in SPC)
Of critical importance is that data are collected and used in such a way that the behavior of the process—predictable or unpredictable—can be judged effectively. While operator input may be critical in determining an effective data collection plan, a process specialist, or “SPC lead,” is more likely to take responsibility for the data collection plan and choice and use of SPC chart.

For example, collecting data at too high a frequency can make a predictable process appear unpredictable. Too low a frequency of data collection can mean that some signals of unpredictability pass undetected, meaning the opportunity to learn more about the process, and take appropriate action to potentially improve it, is lost. (See “Rational Sampling” for more details.) Since data provide the basis for action on the process, it is important that process data are collected, organized and used in a way that will provide the needed insight.

If, for example, a purchasing department buys on price tag alone, poor-quality materials may leave an operator helpless to achieve predictable operation (garbage in, garbage out). Raw material suppliers may need SPC as much as, or more than, the manufacturer transforming the supplied raw materials into finished goods (e.g., via assembly operations).

Process design and the possibility to control causes of variation
Natural raw materials may exhibit inconsistencies in quality over time whose causes cannot be directly controlled at source (e.g., seasonal variations in milk or differences between suppliers from different geographical locations when two or more sources of supply are needed to obtain sufficient quantity of raw materials to meet production volumes). The process needs to allow for control actions such as in-tank adjustments that make possible the removal of the effect of these potential assignable causes during actual processing. In-tank adjustments, moreover, need to be executed smartly, implying the need for a well-defined dead band. (The article “The Secret of Process Adjustment” explains how unnecessary adjustments without a well-defined dead band can only increase process variation.)
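The cost of adjusting without a dead band can be illustrated with a small simulation; the target, noise level, and dead-band width below are hypothetical, and the simulation is a sketch of the principle rather than of any particular process.

```python
import random

random.seed(1)
TARGET, SIGMA, DEAD_BAND = 90.0, 0.5, 1.0  # hypothetical values

def run(use_dead_band):
    """Simulate an on-target process, compensating either after every
    observation or only when the deviation exceeds the dead band."""
    offset, values = 0.0, []
    for _ in range(10_000):
        x = TARGET + offset + random.gauss(0, SIGMA)
        values.append(x)
        deviation = x - TARGET
        if not use_dead_band or abs(deviation) > DEAD_BAND:
            offset -= deviation  # adjust back toward the target
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

print("std dev, adjusting every point:", round(run(False), 3))  # near SIGMA * sqrt(2)
print("std dev, with a dead band:     ", round(run(True), 3))   # much closer to SIGMA
```

Adjusting after every observation roughly doubles the process variance, which is exactly the effect a well-defined dead band is there to prevent.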

Maintenance and engineering
Maintenance plans need to be well-defined and executed in a timely manner. A failure to maintain and repair the process line, including process equipment such as flow meters and pressure and temperature sensors, can again leave operators helpless to achieve sustained predictable operation.

Many process operations allow for automatic process control, which includes the continual execution of automatic adjustments to keep the process on, or close to, target. The means of tuning and monitoring automatic PID loops can be complex and very likely fall outside the list of responsibilities assigned to an operator. Such PID loops need to make predictable operation possible. PID adjustment loops that react too quickly, or too slowly, or that have no direct impact on important control factors, will very likely do nothing to resolve issues of unpredictable process operation. In some cases, they can even increase process variation, making things worse. (See “Process Monitor Charts” for further discussion of these points.)
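For reference, the computation inside a discrete PID loop looks roughly like the sketch below. The gains kp, ki, and kd are tuning parameters invented here for illustration; as noted above, poorly chosen gains can make the loop itself a source of variation.

```python
class PID:
    """Minimal discrete PID controller: the correction is a weighted sum of
    the current error, its running integral, and its rate of change."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical gains and setpoint; real values come from loop tuning.
controller = PID(kp=0.6, ki=0.1, kd=0.05, setpoint=90.0)
adjustment = controller.update(measurement=90.4)
```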

Measurement data
Measurement systems can be the source of assignable causes on process behavior charts, but only when they are operated unpredictably themselves. As long as they are operated predictably, the variation attributable to measurement will always be part of the process’s routine, common cause variation. The greater the level of routine measurement variation, the further apart the process limits will be on the process behavior chart. Even if you don’t know how much of the total routine variation is due to measurement variation, it is always there, like it or not.

Assignable causes related to measurement can come from inconsistent sampling and sample-handling practices, as well as poorly calibrated, monitored and maintained equipment. Different analysts may also use measurement equipment differently, which can show up as signals of assignable-cause variation on a process behavior chart. If different laboratories are used, there will be a need for consistency both within and between the laboratories (no bias between locations and a comparable level of consistent precision).

Standards and training are also needed for the use of measurement equipment, since a measurement process is a process in its own right.

Management can foster unpredictable operation by failing to support efforts to fix problems that are identified in the course of production. The workforce needs time to keep, discuss, and respond to process behavior charts, necessitating support from management in these efforts. Responding to the charts means 1) identifying the causes of process changes; and then 2) taking action on them, as also described later in the Tokai Rika example (these two steps are shown in figures 4 and 7).

Everybody connected with the process is needed
To achieve and sustain predictable operation, there is a need for everybody connected with the process to do his part. Operators of the process can only ever be one piece of the jigsaw puzzle. Predictable operation requires a supportive environment, and a key role for management is to establish and maintain this environment, which includes not only the use of process behavior charts, but also the ability and willingness to respond to them. Signals of process change presented by process behavior charts are indicators that predictable operation has broken down. The way to regain predictable operation, and with it minimum achievable variation in output for the current process, is to identify the causes of unpredictability and take action on them. This is illustrated schematically in figure 4, which is drawn circularly to depict its continuous, ongoing nature.

Figure 4: Schematic of a strategy aimed at sustaining predictable operation through process behavior charts

Second example of a predictable process: Tokai Rika

Operating predictably all the time is not a viable aim. To expect continued, uninterrupted predictable operation can only be described as wishful thinking. A predictable process in the mid- to long-term should be regarded as one that is subject to occasional, or only very occasional, signals of unpredictability. When entropy intrudes it will bring assignable causes with it, increasing the variation in process outcomes above and beyond the level of common cause variation routinely present.

The example of Tokai Rika, described in “How Do You Get the Most Out of Any Process?” reveals not only what predictable and economic operation mean in routine production but also how to approach this challenge so that it is practically sustainable in the long-term.

The average (upper) chart shown in figure 5 is reproduced from “How Do You Get the Most Out of Any Process?” The chart finds evidence of a process change on days 35 and 36. Looking back to the last time the process crossed the central line, Tokai Rika’s production workers decided that this problem could have begun as early as day 29. Upon investigation, they found that the positioning collar had worn down and needed to be replaced. Recognizing this as a problem of tool wear, they did two things: They ordered a new positioning collar, and they turned the old collar over to get back on target while waiting for the new collar to arrive. This is indicative of a desire to operate right at the target value whenever possible. The new collar was installed on day 39, and they wrote Intervention Report No. 1 detailing what was found and what was done.

Figure 5: Tokai Rika example, days 1 to 60.

Following this intervention, they decided to compute new limits for the process. They ran without limits for days 39 to 49 and used this period as their new baseline. With a grand average of 90.18 and an average range of 0.91, the new limits were considerably tighter than the previous limits. As they used these limits to track the process, they soon found evidence of another process change.
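Limits for an average-and-range chart follow from the grand average and average range via tabulated constants that depend on the subgroup size. The article does not restate the subgroup size here, so the n = 4 constants below are an assumption for illustration.

```python
# Control chart constants for subgroup size n = 4 (an assumption).
A2, D3, D4 = 0.729, 0.0, 2.282

grand_average, average_range = 90.18, 0.91

xbar_limits = (grand_average - A2 * average_range,
               grand_average + A2 * average_range)
range_limits = (D3 * average_range, D4 * average_range)

print(xbar_limits)   # approximately (89.52, 90.84)
print(range_limits)  # (0.0, ~2.08)
```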

The averages for days 57, 58, and 59 are all below the lower limit, and it’s fairly clear there was a shift in the process. (They did not make any notes about the ranges falling above their upper limit on days 53 and 54.) Moreover, unlike the excursion on days 29 to 36, in this case, there is no gradual change leading up to the first point outside the limit. Hence, they noted that this was a sudden change and began to look for something broken. They began their investigation with the rolling operation. When no problems were found at rolling, they turned to the blanking operation.

As shown in figure 6, after the better part of two weeks had passed, they finally discovered what was making the detent dimensions smaller: There was a very small wrinkle on the flange due to a defect in the die.


Figure 6: Tokai Rika example, days 39 to 85.

Thus, they scheduled a repair for the die on the weekend between days 70 and 71. At the same time they modified the bolt holding the pressure pad since they had found that it was coming loose. Following these changes they wrote up Intervention Report No. 2 and proceeded to collect data for a new set of limits. The process average went up to 90.88, probably because this positioning collar already had 32 days of wear prior to this new baseline period.

The record of the Tokai Rika process covers some 20 months and the full story is found in Understanding Statistical Process Control, Third Edition. Over this extended period, the Tokai Rika process was shown to be occasionally subject to the effect of assignable causes. When these assignable causes changed the process location or process variation, the process behavior chart detected these changes as shown in figures 5 and 6.

Returning to the three elements of economic operation, how did Tokai Rika’s process do? First, for the most part the process demonstrated a high degree of predictability, a fine achievement in its own right. Second, with a target of 90, the process was effectively on target over the course of the 20-month record. Lastly, with capabilities in excess of 2, there is no doubt about how to answer the question, “Is the process capable of meeting the specifications?” Hence, Tokai Rika met the requirements of economic operation for this production process.

Some important lessons from the use of process behavior charts at Tokai Rika are:

1. The detected assignable causes were worth knowing about, and action taken on these causes contributed to continual process improvement—the way data were collected and used provided the needed insight to make these improvements possible.

2. Signals on the process behavior chart were carefully interpreted to pinpoint when the process changes likely started, so that investigative effort was able to focus in on, and identify, the cause of each change.

3. The working environment at Tokai Rika enabled the discussion of, and response to, the signals of process change, allowing the charts to provide a basis for action on the process.

4. The successful identification of assignable causes was sometimes difficult, and on a couple of occasions during the 20-month record, the investigative trail ran dry; however, the working environment was able to accommodate these difficulties and disappointments.

5. Inherent to Tokai Rika’s approach to sustaining predictable and economic operation, and therefore fighting against the effects of entropy, is the three-step process found in figure 7:

Figure 7: The way Tokai Rika approached process predictability

6. Even though assignable causes are undesired, and Tokai Rika wanted to get rid of them, the company chose to operate the process without having identified and removed the effect of one such cause (see days 57 to 70 in figure 6). Some assignable causes will warrant shutting down a process, others will not, meaning that user judgment is critical.

Further to the first point on continual improvement, the Tokai Rika example reveals that successfully sustaining predictable operation also provides a means of reducing routine, common cause variation over time. At the start of the record the process’s common-cause standard deviation was 0.0148 mm, yet after the improvements, the process operated for the last 13 months with a standard deviation of 0.0100 mm. This means the process variance was reduced from 0.000219 down to 0.000100, a 54-percent reduction in process variance. This came about by following through on the signals of assignable cause variation found on the process behavior chart. Visually, this improvement appears as illustrated by the two histograms in figure 8. (The second histogram is about 68% as wide as the initial histogram because a 54% reduction in variance shows up as √(1−0.543) = 0.676.)

Figure 8: The effect of removing assignable causes upon process variation
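The arithmetic behind these percentages is easy to verify from the two standard deviations:

```python
sd_before, sd_after = 0.0148, 0.0100  # common-cause standard deviations (mm)

variance_reduction = 1 - (sd_after / sd_before) ** 2  # ~0.543, i.e., 54 percent
width_ratio = sd_after / sd_before                    # ~0.676, i.e., 68 percent
print(round(variance_reduction, 3), round(width_ratio, 3))
```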

Hence, while sustaining predictable operation, Tokai Rika learned more about its process, and by implementing the knowledge gained, it improved the process at the same time. Tokai Rika showed that a process behavior chart makes it possible to learn from the process and to improve the process while monitoring the process.


The difference between operating a process predictably and operating a process unpredictably is not a matter of having the right process, or having the right process settings, or even having the right process design. While all these have an impact upon the ability to operate a process predictably, the ultimate issue in operating a process predictably is an operational one. Does everyone involved with operating a process understand what it takes to operate it predictably? And do they have the operational discipline to do so over the long haul? The operating environment must be one that makes sustained, predictable operation possible.

Entropy is relentless. It will make each and every process unpredictable. Therefore, without a continuing program for identifying and removing the effects of assignable causes of unpredictability, there is little to no point in talking of mid- to long-term process predictability, sustained on-target operation, or process capability.

World-class quality has for years been defined as on-target operation with minimum variance. Operating on target means operating near the ideal process outcome. Operating with minimum variance means operating the process up to its current full potential; when this happens, a state of maximum consistency in operation has been achieved, and the process data will display predictable behavior (see figure 1).

So, if you get as far as meeting the requirements of economic operation outlined at the start of this article, what does it take to sustain this achievement? It takes continued predictable operation, which means a process that is operated at full potential. Any drop below operation at full potential means that predictable operation will have broken down, and assignable causes will be taking the process on walkabout. When this happens, the notions of being on-target and capable are lost, even if only temporarily. The means of regaining predictable operation, and hence also economic operation, for on-target and capable processes, is to identify the assignable causes and then to act on them, just as the Tokai Rika personnel did. This is why process behavior charts are the key to sustaining predictable and economic operation.

Combining Quality Tools for Effective Problem Solving

Quality tools can serve many purposes in problem solving. They may be used to assist in decision making, in selecting quality improvement projects, and in performing root cause analysis. They provide useful structure to brainstorming sessions, for communicating information, and for sharing ideas with a team. They also help with identifying the optimal option when more than one potential solution is available. Quality tools can also provide assistance in managing a problem-solving or quality improvement project.


Seven classic quality tools

The Classic Seven Quality Tools were compiled by Kaoru Ishikawa in his book, Guide to Quality Control (Asian Productivity Organization, 1991). Also known as “The Seven Tools” and “The Seven Quality Tools,” these basic tools should be understood by every quality professional. The Classic Seven Tools were first presented as tools for production employees to use in analyzing their own problems; they are simple enough for everybody to use, yet powerful enough to tackle complex problems.

The seven tools are:
1. Cause and effect diagrams
2. Scatter diagrams
3. Control charts
4. Histograms
5. Check sheets
6. Pareto charts
7. Flow charts

A cause-and-effect diagram is used to list potential causes of a problem. It is also known as an Ishikawa diagram or fishbone diagram. Typically, the main branches are the “6Ms”: man, material, methods, milieu (environment), machine, and measurement. Sub-branches are listed under the main branches, with “twigs” containing the potential problem causes. A cause-and-effect diagram can be used to assist when the team is brainstorming, and it can also be used to quickly communicate all potential causes under consideration.

Figure 1: Cause-and-effect diagram

A scatter diagram graphically depicts paired data points along the x and y axes. The scatter diagram can be used to quickly identify potential relationships between paired data points. Figure 2 depicts various potential correlations, ranging from no correlation to strong negative and strong positive correlations. It is important to remember that a strong correlation does not necessarily mean there is a direct relationship between the paired data points; both may be following a third, unstudied factor.

Figure 2: Scatter diagram

Control charts are used to evaluate and monitor the performance of a process (Wheeler 1995). There are many types of control charts available for statistical process control (SPC), and different charts are used depending on the sample size and the type of data used. An individuals chart is used when the sample size is one. The formulas for an individuals chart are shown in table 1, and an example of an individuals chart for a shaft diameter is shown in figure 3. The data are in a state of statistical control when all values are within the control limits, which contain approximately 99.7 percent of all values.

Table 1: Formulas for center line and control limits when sample size is one

Figure 3: Control chart
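As a sketch of the operational definition above, the check below flags values that fall outside the limits of an individuals chart; the diameters and limits are hypothetical, and the limits would in practice come from the table 1 formulas (centre line at the mean, limits at the mean ± 2.66 times the average moving range).

```python
def points_outside_limits(values, lcl, ucl):
    """Return (index, value) pairs falling outside the control limits."""
    return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical shaft diameters and limits; an empty result is consistent
# with a process in a state of statistical control.
out_of_control = points_outside_limits(
    [25.02, 24.98, 25.05, 24.96, 25.11], lcl=24.90, ucl=25.10)
print(out_of_control)  # [(4, 25.11)]
```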

Histograms are used to visualize the distribution of data (McClave and Sincich 2009). The y-axis shows the frequency of occurrences, and the x-axis shows the actual measurements. Each bar on a histogram is a bin, and the number of bins can be estimated by taking the square root of the number of values being analyzed. A histogram can quickly show whether the data are skewed in one direction or another. Figure 4 shows a histogram for data that fit a normal distribution, with half of all values above the mean and half below.

Figure 4: Histogram
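A minimal sketch of the square-root binning rule described above, using hypothetical measurements:

```python
from math import ceil, sqrt

import matplotlib.pyplot as plt

data = [24.9, 25.0, 25.1, 25.0, 24.8, 25.2, 25.0, 25.1, 24.9, 25.0]
bins = ceil(sqrt(len(data)))  # square-root rule: about sqrt(n) bins

plt.hist(data, bins=bins)
plt.xlabel("measurement")
plt.ylabel("frequency")
plt.show()
```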

Check sheets are used for the collection of data (Borror 2009), such as when parts are being inspected. The various failure categories or problems are listed, and a hash mark is placed next to the label when the failure or problem is observed (see figure 5). The data collected in a check sheet can be evaluated using a Pareto chart.

Figure 5: Check sheet

A Pareto chart is used for prioritization by identifying the 20 percent of problems that result in 80 percent of costs (Juran 2005). This can be useful when searching for improvement projects that will deliver the most impact with the least effort. Figure 6 shows a Pareto chart with three out of seven problems accounting for 80 percent of all problems. Those three would be the priority for improvement projects.

Figure 6: Pareto chart
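The prioritization logic can be sketched in a few lines; the defect counts below are hypothetical, chosen so that, as in figure 6, three of seven categories account for 80 percent of the total.

```python
# Hypothetical defect counts for seven problem categories.
counts = {"scratches": 120, "dents": 90, "misalignment": 70, "burrs": 25,
          "discoloration": 20, "cracks": 15, "other": 10}

total = sum(counts.values())
cumulative, vital_few = 0, []
for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    vital_few.append(cause)
    if cumulative / total >= 0.80:
        break

print(vital_few)  # the "vital few" categories to address first
```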

A flowchart is used to gain a better understanding of a process (Brassard 1996). A flowchart may provide a high-level view of a process, such as the one shown in figure 7, or it may be used to detail every individual step in the process. It may be necessary to create a high-level flowchart to identify potential problem areas and then chart the identified areas in detail to identify steps that need further investigation.

Figure 7: Flowchart

Seven new management and planning tools

The seven new management and planning tools are based on operations research and were created between 1972 and 1979 by the Japanese Society for Quality Control. They were first translated into English by GOAL/QPC in 1983 (Brassard 1996).

These seven tools are:
1. Affinity diagram
2. Interrelationship diagram
3. Tree diagram
4. Arrow diagram
5. Matrix diagram
6. Prioritization matrix
7. Process decision program chart (PDPC)

An affinity diagram organizes ideas by logically grouping related concepts (ReVelle 2004). Members of a team write down items that they believe are associated with the problem under consideration, and these ideas are then grouped into categories or related points.

Figure 8: Affinity diagram

The interrelationship diagram depicts cause-and-effect relationships between concepts and is created by listing problems on cards (Westcott 2014). These cards are then laid out, and influences are identified with arrows pointing at the items that are being influenced. One item with many arrows originating from it is a cause that has many influences, and much can be achieved by correcting or preventing this problem.

Figure 9: Interrelationship diagram

A tree diagram assists in moving from generalities to the specifics of an issue (Tague 2005). Each level is broken down into more specific components as one moves from left to right in the diagram.

Figure 10: Tree diagram

An arrow diagram is used to identify the order in which steps need to be completed to finish an operation or project on time (Brassard 1996). The individual steps are listed, together with the duration, in the order that they occur. Using an arrow diagram such as the one in figure 11 can show steps that must start on time to prevent a delay in the entire project or operation.

Figure 11: Arrow diagram

The matrix diagram is used to show relations between groups of data (Westcott 2014). The matrix diagram in Figure 12 depicts three suppliers as well as their fulfillment of the three characteristics listed on the left side of the table. In this example, only two suppliers share the characteristic “ISO certification.”

Figure 12: Matrix diagram

The prioritization matrix is used to select the optimal option by assigning weighted values to the characteristics that must be fulfilled, and then assessing the degree to which each option fulfills the requirement (ReVelle 2004). The prioritization matrix in figure 13 is being used to select the best option for a staffing problem.

Figure 13: Prioritization matrix

Process decision program charts (PDPC) map out potential problems in a plan and their solutions (Tague 2005). The example in figure 14 shows the potential problems that could be encountered when conducting employee training, as well as solutions to these problems.

Figure 14: Process decision program chart


Example of combining quality tools

Multiple quality tools can be used in succession to address a problem (Barsalou 2015). The tools should be selected based on the intended use, and information from one tool can be used to support a later tool. The first step is to create a detailed problem description that fully describes the problem. In this hypothetical example, the problem description is “coffee in second-floor break room tastes bad to the majority of coffee drinkers; this was first noticed in February 2017.” The hypothetical problem-solving team then creates the flowchart shown in figure 15 to better understand the process.

Figure 15: Flowchart for coffee-making process

The team then brainstorms potential causes of the problem. These ideas come from the team members’ experience with comparable, previous issues as well as technical knowledge and understanding of the process. The ideas are written on note cards, which are grouped into related categories to create an affinity diagram based around the 6Ms that are used for a cause-and-effect diagram (see figure 16).

Figure 16: Affinity diagram for bad-tasting coffee

The affinity diagram is then turned into the cause-and-effect diagram depicted in figure 17. The team can then expand the cause-and-effect diagram if necessary. The cause-and-effect diagram provides a graphical method of communicating the many root-cause hypotheses. This makes it easy to communicate the hypotheses, but it’s not ideal for tracking the evaluation and results.

Figure 17: Cause-and-effect diagram for coffee taste

Cause-and-effect diagram items are then transferred to a worksheet like the one shown in figure 18. The hypotheses are then prioritized so that the most probable causes are the first ones to be investigated. A method of evaluation is determined, a team member is assigned the evaluation action item, and a target completion date is listed. A summary of evaluation results is then listed, and the conclusions are color-coded to indicate whether they are OK, unclear, or potentially the root cause. Unclear items as well as potential root causes should then be investigated further, and OK items are removed from consideration.

Figure 18: Cause-and-effect diagram worksheet for coffee taste

Figure 19 shows a close-up view of the cause-and-effect worksheet. Often, a cause-and-effect diagram item does not clearly state how it relates to the problem. In such a situation, it can be expanded in the worksheet to turn it into a clearer hypothesis. For example, “Water” in the cause-and-effect diagram can be changed to “Water from the city water system containing chemicals leading to coffee tasting bad” in the worksheet.

Figure 19: Close-up of a cause-and-effect diagram worksheet.

A prioritization matrix can be used to evaluate multiple potential solutions to the problem. In this example, the team has identified three potential solutions: clean and repair the old machine, buy a new machine, or buy an expensive new machine. The team wants to avoid high costs, does not want to spend too much time implementing the solution, and wants something with long-term value. Therefore, the prioritization matrix shown in figure 20 is used to find the ideal solution.

Figure 20: Prioritization matrix for improvement options


There is no one right quality tool for every job, so quality tools should be selected based on what must be accomplished. Information from one tool can be transferred to a different tool to continue the problem-solving process. Action items resulting from a cause-and-effect diagram should be entered into a tracking list. This assists the team leader in tracking the status of items, makes it easier to ensure action items are completed, and is also useful for reporting the results of action items.

Lean Six Sigma Black Belt Job: Assistant Manager/Manager (Technical)

VWR (NASDAQ: VWR), headquartered in Radnor, Pennsylvania, is a leading, independent provider of laboratory products, services and solutions with worldwide sales in excess of $4.5 billion in 2016. VWR enables science in laboratory and production facilities in the pharmaceutical, biotechnology, industrial, education, government and healthcare industries. With more than 160 years of experience, VWR offers a well-established network that reaches thousands of specialized laboratories and facilities spanning the globe. VWR has more than 8,500 associates working to streamline the way scientists, medical professionals and production engineers stock and manage their businesses. In addition, VWR supports its customers by providing value-added service offerings, research support, laboratory services and operations services.
Designation Assistant Manager – Quality and Business Process Re-engineering – 1 Opening(s)
Job Description Key Tasks: 

  • Responsible for end-to-end transition efforts to ensure properly stabilized operations post-transition, and implement operational governance
  • Design and implement KPI measures/service levels
  • Client/Stakeholder expectation management through NPS
  • Drive organizational compliance to ISO 9001:2015
  • Drive continuous improvement culture through training, co-ordination and implementation of principles of Lean/Six Sigma in day-to-day operations in VWR Global Business Center
  • Work closely with operation teams to obtain input of diverse views, facilitate generation of ideas, analyze operational risks, extend support in managing stakeholders/client escalations (RCA/CAPA)
  • Guide operations to conduct process capability study, prepare contingency plan for all levels and develop FSS to staff for holidays based on volume and process capability study
  • Prepare Dashboard/Reports by collecting, analyzing, and summarizing Operations data; making recommendations
  • Support Team to establish statistical confidence by identifying Significant sample size and acceptable error; determining levels of confidence
  • Conduct Process Audit to ensure processes are compliant with ISO requirements.
Desired Profile Skills, knowledge & experience:

  • Minimum 5 years of work experience in managing quality and driving continual improvement projects, which should be mid/large-sized and cross-functional
  • Experience in handling a team
  • Graduation/post-graduation
  • Professional certification such as ISO Auditor, Six Sigma, Kaizen, or Project Management will be an added advantage.
  • Hands-on experience in MS applications (such as Excel, PowerPoint, Visio)
  • Ability to work with minimal supervision and manage multiple tasks/projects simultaneously
  • Strong writing and presentation skills, with an ability to produce high-quality deliverables created through collaboration.
  • Experience in handling change-related aspects of business processes, including driving continuous improvement in a day-to-day service delivery environment
  • Good analytical skills – applied knowledge of basic QC tools such as root cause analysis, fishbone diagrams, Pareto charts, run charts, etc.
  • Ability to quickly adapt to change and to work in a high-energy, fast-paced environment working against deadlines
Experience 5 – 8 Years
Industry Type BPO / Call Centre / ITES
Role Assistant Manager/Manager-(Technical)
Functional Area ITES, BPO, KPO, LPO, Customer Service, Operations
Employment Type Full Time , Permanent Job
Education UG – Any Graduate – Any Specialization

PG –

Doctorate –

Compensation:  Not disclosed
Location Coimbatore
Keywords lean six sigma training coordination operations iso 9001 quality management process audit iso auditor six sigma kaizen project management root cause analyze fish bone diagram pareto run charts

Six Sigma Black Belt Job: Sr. Manager/Chief Manager – Business Excellence – QA

Job Description

1. Identify improvement areas in the process for all departments and functions.
2. Develop and manage the Six Sigma drive for plant-wide quality.
3. Analyze quality assurance processes and make improvement plans.
4. Support all functions in resolving their chronic/strategic and operational problems.
5. Organize and facilitate implementation of strategic execution initiatives as per plan.
6. Coordinate various surveys and analyze the survey outcomes.
7. Execute training and awareness initiatives for TPM, TQM, or any other execution model, and other good practices, as per plan/need.
8. Train the team on TPM, TQM, and quality tools and techniques to help them enhance the strategic projects.

Salary: Not Disclosed by Recruiter
Industry: Automobile / Auto Ancillary / Auto Components
Functional Area: Production, Manufacturing, Maintenance
Role Category: Production/Manufacturing/Maintenance
Role: Quality Assurance/Quality Control Manager
Employment Type: Permanent Job, Full Time
Keyskills: Quality Assurance Plant, Quality Six Sigma, TQM, Quality Tools, Business Excellence, QA, TPM, Quality Techniques, Kaizen, QC Tools.

Software Quality Assurance

Job Description



  1. Drive process standardization and support the sustenance of the ISO 9001 and 27001 certifications.
  2. Ideally, the candidate should have supported process facilitation for testing-oriented projects, and should be good in advanced statistics/metrics; Six Sigma would be an added advantage.
  3. Good presentation and communication skills and the ability to lead the team toward achievement of goals.
  4. Facilitate and monitor the project/product/function teams in complying with the QMS, and advise them on process implementation and compliance with standards.
  5. Conduct internal quality audits by obtaining objective evidence of implementation and effectiveness of the quality system.
  6. Identify process improvement training needs and organize course development and training.
  7. Guide projects and functions in metrics collection and analysis. Report metrics and customer complaints.

Salary: Not Disclosed by Recruiter
Industry: IT-Software / Software Services
Functional Area: Other
Employment Type: Permanent Job, Full Time
Keyskills: Software Quality Assurance, Quality Audit, QMS, ISO 9001, Internal Quality Auditor, Six Sigma, Process Improvement, Metrics, Customer Complaints, Process Standardization.

Make Happen in 2018 with Six Sigma!

Not everyone is a great chef, but if you can follow a recipe, you can produce a delicious meal.

Not everyone is a great businessperson, but if you follow Six Sigma’s methodologies and tools, you will make your business a success.

Our Six Sigma courses provide you with the knowledge as well as the tools to make you an expert at solving issues at your business or company. These issues include eliminating waste, reducing process variation and improving process capability.

Make 2018 the year you start turning issues into solutions!

Take one of our premier Six Sigma courses. We offer online, classroom, onsite, and blended courses at your choosing. We are currently offering a multitude of Six Sigma courses right in your town or city.

Make it your New Year’s resolution and sign up now! Go to, pick the course and make it happen.

Happy New Year from all of us at!

Deputy Manager/Senior Manager – Quality & Knowledge Management

Job Description

ISO 9001 process documentation, process audits and audit closures at HO, CBO and Branches
Designing and setting up Management Information System for Key Metrics
Identifying and Implementing Process improvement Initiatives using Lean & Six Sigma Methodology
Working closely with functional quality representatives (SPOCs) to achieve process standardization; documentation and updating of standardized processes; implementation of standardized processes; preparation of departmental dashboards and review with respective HoDs; MIS and data collection on a need basis; root cause analysis for escalated/important complaints on a need basis; and improvement initiatives
Working closely with internal/ external agencies to schedule and conduct Process Audits, Quality related Trainings, ISO Certification, Six Sigma/ Lean implementation etc.
Designing Dashboards for CEOs Dashboard reviews
Facilitate CSAT surveys and action plan implementation
Manage Idea Express Initiative for the organization


No of projects/ process improvements completed
No of audits and effective verification of closures of findings shared by SPOCs
No of ideas implemented
Revenue realised/ cost saved
External customer satisfaction survey score;
Implementation (with automation) of Standardized Processes & Internal Measurement Systems

Salary: INR 4,00,000 – 8,00,000 P.A
Industry: Banking / Financial Services / Broking
Functional Area: Financial Services, Banking, Investments, Insurance
Role Category: Marketing Manager
Role: Marketing Manager
Keyskills: Lean Six Sigma ,Quality, ISO 9001 ,CSAT ,Metrics ,Process Improvement Initiatives ,Lean Implementation ,Root Cause Analysis, Process Audit ,Process Standardization.


Six Sigma Black Belt Job: Quality Assurance/Quality Control Manager

Aon is looking for a Black Belt.


The Ops & Quality Black Belt supports aligned BU business partners in effectively managing and improving operational performance & in meeting their productivity goals through a culture of continuous improvement.

Your Impact as Black Belt

Identify, initiate, facilitate, and mentor continuous improvement projects based on DMAIC, Lean, BPMS, and other quality methodologies for the aligned cluster/organization
Provide analytics support to business leaders both onshore and offshore
Drive a continuous improvement culture
Manage the innovation/idea generation platform
Conduct audits for processes to check conformance to quality management system standards
Facilitate Six Sigma, Lean, BPQMS, and other trainings
Support the business with quality/operational excellence initiatives


Graduate in any stream

You bring knowledge and expertise

Required Experience

Total work experience of 3-8+ years in Quality / Process Improvement Role.
Green Belt certified/Trained
Experience in Process Improvement

Preferred Experience:

Exposure to ISO, TQM and other Quality methodologies/systems
Knowledge of Minitab
Knowledge of Access

Work Conditions:

Shift timings as per process / business need
Willingness to work India day/night shifts
Mobility between India locations based on team alignment/meetings
Participation in training sessions, business management routines in office/ offsite

We offer you:

Attractive Reward and recognition program celebrating successes and achievements
Frequent training and development opportunities to acquire and enhance new skills
Fun workplace where people bond and stay motivated
Career opportunities that progress quickly and offer varied exposure

Salary: Not Disclosed by Recruiter
Industry: BPO / Call Centre / ITES
Functional Area: ITES, BPO, KPO, LPO, Customer Service, Operations
Role Category: Quality Assurance/Quality Control Manager
Role: Quality Assurance/Quality Control Manager
Keyskills: green belt ,process excellence ,Quality Tools ,Quality Improvement ,lean six sigma ,process improvement ,business excellence ,business improvement process, quality ,green belt certified, Quality Management ,DMAIC ,lean ,lean improvement ,projects.


Improving Debt Collection Rate, or How to Gain $865,000

A mid-sized debt collection agency was in trouble with one of its largest clients. The client was unhappy with the agency’s debt collection rate and was threatening to take its business elsewhere if things did not improve. The manager of the agency division involved had just become a Green Belt, and he thought this would be a good test of the Lean Six Sigma methodology.

Understanding the urgency to find a permanent fix to this low-collection-rate problem, corporate management agreed and chartered a team to be led by the Green Belt.

The Problem Solving Begins

Like many financial services sectors, the collection industry is data rich but information poor. That is, there is all types of data that can be measured on a daily or even a per-call basis, but decisions are often made by “gut feel” and “this is how we’ve always done business.”

The team’s first challenges, therefore, were to define the process and the data that it needed to gather, which would be the “process voice” for the client’s needs (Figure 1).

Figure 1: High Level Process Map


The team began looking at translating client needs into process metrics, or measures. The team recognized several peculiarities in the data that it would have to deal with:

  • There is an annual cycle to the amount of recovered debts. For instance, there is a peak in debt payment during tax refund season and a valley right before the Christmas holidays. If the metrics the team used did not level out that effect, it would be difficult to compare performance across the year.
  • The amount of any particular debt collected increased over time, as the account “aged.” That was in part because the agency had more time to work the account, and in part because this particular agency worked hard to negotiate payment plans. The agency reasoned that allowing debtors to pay regular, small amounts of money was often more successful than demanding a larger lump sum. Therefore, the age of accounts had to be reflected in the metric as well.
  • There was a strong correlation between the percentage of accounts that had been worked and the gross debt collected. Thus, the company had confidence that its approach was basically sound. Had the approach been ineffective, there would have been no such correlation.

As a consequence of these considerations, the project team decided to use a six-month “cumulative recovery” rate as its metric. So, for example, the team would look at how much of the debt added to the rolls in January was recovered between January and June. The team thought that using a six-month window would even out the cyclic effect and reflect the impact of account age.
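A sketch of this metric, assuming debts are grouped into monthly cohorts by the month they were placed; the amounts are hypothetical.

```python
def cumulative_recovery_rate(placed_amount, payments_by_month, window=6):
    """Share of a monthly cohort's placed debt recovered within the window.

    payments_by_month maps months since placement (0 = placement month)
    to amounts recovered from this cohort.
    """
    recovered = sum(amount for month, amount in payments_by_month.items()
                    if month < window)
    return recovered / placed_amount

# Debt placed in January, payments tracked January through June (hypothetical).
rate = cumulative_recovery_rate(
    placed_amount=250_000,
    payments_by_month={0: 2_000, 1: 1_500, 3: 2_200, 5: 2_050})
print(f"{rate:.1%}")  # 3.1% for these made-up figures
```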

Baselining: Where Do Things Stand Now?

To establish a baseline against which the team would measure its success, historical data was pulled and the six-month cumulative recovery rates were calculated (Figure 2). The average six-month recovery rate turned out to be 3.1 percent.

Figure 2: Baseline Six Month Cumulative Collection Recovery Rate


Where Does the Agency Want to Be?

The issue of “Where does the agency want to be?” has a couple of components:

  • Knowing what the best performers are capable of achieving
  • Figuring out how much improvement is reasonable given where the agency is today

The team addressed the first of these issues by benchmarking another communication industry client where recovery rates were higher, close to 5 percent. The team decided to aim for a 60 percent improvement in the gap between where the process was and the benchmark, establishing a goal of a 4.3 percent cumulative recovery rate.
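The goal follows from closing 60 percent of the gap between baseline and benchmark. The benchmark is described only as “close to 5 percent”; a value slightly above 5 reproduces the stated 4.3 percent exactly.

```python
baseline = 3.1    # percent, the historical six-month recovery rate
benchmark = 5.1   # percent; assumed value for the "close to 5 percent" benchmark
goal = baseline + 0.60 * (benchmark - baseline)
print(round(goal, 1))  # 4.3
```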

Process Input Variables and Analysis

The two most useful approaches used by this team to understand and solve the process problems were cause-and-effect analysis and application of the Pareto principle (in which a few causes contribute most of the observed problems). Team members ended up narrowing their focus to three factors:

  • Letters that were sent to the debtor
  • Number of times the debtor was called
  • Personal contact with the debtor

With the factors reduced to three, the team was able to perform a sophisticated statistical analysis called binary logistic regression to see what was different between debtors who paid and those who did not. This analysis showed that each additional personal contact with the debtor increased the likelihood of payment by a factor of 2.6, whereas neither more letters sent nor more phone calls to the debtor were effective (Figure 3).

Figure 3: What Makes a Difference in Likelihood of Payment?

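A sketch of this kind of analysis using scikit-learn; the account records below are fabricated placeholders, so the fitted odds ratios will not reproduce the 2.6 reported above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: letters sent, calls made, personal contacts.
# Outcome: paid (1) or did not pay (0). Hypothetical accounts.
X = np.array([[2, 5, 0], [3, 8, 1], [1, 4, 2], [2, 6, 0],
              [4, 9, 3], [2, 3, 1], [3, 7, 2], [1, 5, 0]])
y = np.array([0, 1, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # exp(coefficient) = odds multiplier per unit
print(dict(zip(["letters", "calls", "contacts"], odds_ratios.round(2))))
```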

Project Team Selects a Solution

Knowing that personal contact with a debtor was critical, the team had to look at how the process was run and what it would take to improve contact with debtors. Team members generated many solution ideas, but discounted most of them after analysis of the risks involved, the lack of control that the organization had over the potential process, or the unrealistic nature of the ideas.

Finally, though, the team hit upon one solution that satisfied all team members: Making a protocol change in how the automatic dialing program decided which numbers to dial. Previously, all phone numbers were treated equally; now the auto dialer gave priority to accounts where there had not been any personal contact. This solution worked well because it was entirely within the control of the agency, it was transparent to the client and the debtors, and no capital expenditure was necessary.

The Results: Short and Long Term

The solution was piloted in a small segment of the client’s accounts. This resulted in an immediate jump in the recovery rate, worth $54,000 annually in gross collections. Figure 4 shows the increase in the average six-month cumulative recovery rate for the segment of client accounts affected by the pilot improvement. Figure 5 shows graphically the improvement to the recovery percentage made during the pilot program.

Figure 4: Individual/Moving Range Charts of Recovery Percentage


The results were verified to be significant both practically and statistically, so the improvement was rolled into the entire client portfolio. The collection agency hit its target of a 4.3 percent recovery rate, and the client realized an annual increase of $865,000 in gross collected debt.

Figure 5: Box-Plot of After and Before


The agency’s corporate management was extremely pleased with the outcome of this project. The agency not only retained the complaining client’s business but used the capability improvements to save another account. The fact that the agency is now using Lean Six Sigma and having tangible results is a selling point it emphasizes with both current clients and prospects.