“I don’t work in manufacturing.”
Those five words have been used, again and again, to deny the usefulness and applicability of the Six Sigma methodology.
Sure, Six Sigma was created and perfected in a manufacturing environment. But it’s a process improvement methodology. We’ve looked around, and we haven’t yet found an industry that isn’t absolutely reliant on processes.
Everything is a process. The way paperwork is filed. The way teams are assembled. The way inventory is shipped.
That’s why Six Sigma is industry agnostic. It doesn’t matter where you work or what you do – if there’s a need for management, then that means there’s a process. And if there’s a process, then Six Sigma can help make it more efficient.
So why don’t more managers – or executives, or entrepreneurs – pursue a Six Sigma certification? It might be because they see their jobs as soft skills-oriented instead of process-oriented.
But why not excel at both?
Six Sigma and Soft Skills
This isn’t a secret: The best project managers are those with exceptional soft skills and technical skills. According to the Project Management Institute’s Pulse of the Profession Report, almost one-third of employees surveyed felt that great project managers need both.
But, like anything, developing soft skills is a process. That’s what Six Sigma does best.
DMEDI (Define, Measure, Explore, Develop, Implement) is a process improvement tool that you can use, right now, to start down the path toward better communication, conflict resolution, and more.
- You first have to Define your goal. Let’s say your goal is “be friendlier.”
- How can you Measure that goal? If you’re trying to be friendlier, you can measure something like the number of casual conversations you engage in daily, or how often people laugh when they’re around you.
- Then you Explore all the options that might help you realize your goal – smiling more, asking people how their day is going, offering compliments, etc.
- Next up, Develop a plan. Set up a system to help you follow through with your new behavior.
- Finally, Implement the plan. Test it to make sure it’s working, by using the measurement you created earlier.
It sounds rote and robotic – but it works. Almost everything in life is a process, and with some careful analysis, every process can be tweaked and improved to get the results you’re looking for.
Even soft skills.
The Benefits of Six Sigma Certification
The pros of Six Sigma certification far outweigh the cons. It’s a shift in mindset, and managers of all types can benefit tremendously from any level of Six Sigma training.
The Green Belt is the most basic form of Six Sigma certification, and it is perfect for those managers or individual contributors who spend a lot of their time gathering and analyzing data.
The Black Belt is the ideal certification for individuals who consistently lead project teams and act as mentors to others. It builds on the data-focused management of the Green Belt and incorporates leadership skills like time management and decision-making.
The Master Black Belt is the highest Six Sigma certification, and it’s best suited for managers who fill an executive role, or act as a liaison with upper management. These practitioners are able to teach Six Sigma principles to both project teams and corporate executives.
If you manage others, you can’t go wrong with any of them.
Process improvement projects have traditionally been labor-intensive and imprecise. Labor-intensive in that capturing the as-designed vs. the actual current-state process required facilitated meetings, interviews, surveys, and analysis of operational data over an extended period. Imprecise in that workers typically act differently when they know they are being watched and measured. The Hawthorne effect, first described more than 50 years ago, predicts that workers will typically improve a process while being observed as part of a process improvement project, but will revert to their pre-project behavior once the project has ended and the observers have departed.
In my experience of running dozens of process improvement projects over a 30-year period, sustaining improvements is always the most daunting challenge.
- Six Sigma practitioners will admit that the final Control phase of the five-step DMAIC (Define, Measure, Analyze, Improve, Control) process is its weakest link. The reason is fairly obvious: Six Sigma projects have a beginning, middle, and end. Even though the Control phase is designed to include the business owner taking responsibility for sustaining the project, little is in place to monitor the sustainability of the improvements.
- Lean advocates a continuous improvement process designed to overcome the problems with a project-based approach, but the Hawthorne effect is very much in play limiting the sustainability of improvements.
- A common problem with all continuous improvement initiatives is the very dynamic nature of today’s business environment, with ever-shrinking product life cycles and rapid developments in automation, mergers, and acquisitions. The result is that the improved process may become obsolete in a matter of months.
Now consider the new age of process improvement with “smart manufacturing.” Much has been written about the industrial Internet of Things (IIoT) creating significant opportunities to capture operational data from machines and equipment. While this will assist in improving processes, it is limited to reading machine metrics, with few insights into how people interact with machines and products. What is less well understood is that people are a key element of smart manufacturing: Empowering them with more robust operational information helps them eliminate bottlenecks and solve tough quality issues.
Adding passive, non-obtrusive, sensor technology to continuously monitor operations – people, machines, and products – provides a much greater opportunity than merely making machines smarter.
Process and Value Stream Maps
- The Past: Labor-intensive process and value-stream charts capture only a qualitative, subjective snapshot in time that typically varies from day to day and from person to person.
- The Future: Unobtrusive sensors continuously capture hard data over extended periods, watching the interaction of people with machines and products. The Hawthorne effect is defeated by the subtle and permanent nature of the observation tools.
Gage R&R (Repeatability and Reproducibility)
- The Past: Gage R&R has been the most difficult challenge in every process improvement project I’ve tackled because of the major discrepancies from the as-designed process when comparing one person to another and one day to another. The variation typically grows with the complexity of the process and the skill level of the people involved.
- The Future: With continuous monitoring of several people over several days and weeks, all variations are captured for analysis. Best practices, bottlenecks and training opportunities are much more easily discovered.
Sustaining Process Improvements
- The Past: Because of the labor-intensive, qualitative/subjective, and snapshot nature of process improvement efforts, a majority of them fail, according to a Wall Street Journal article.
- The Future: Because monitoring is on-going and not obvious to people, variations from the improved process are easily identified in real-time via alerts and dashboards. There is no need for complex reports or expensive consultants to interpret them.
In theory, a production process is always predictable. In practice, however, predictable operation is an achievement that has to be sustained, which is easier said than done. Predictable operation means that the process is doing the best that it can currently do—that it is operating with maximum consistency. Maintaining this level of process performance over the long haul can be a challenge. Effective ways of meeting this challenge are discussed below.
Some elements of economic operation
As argued in “What Is the Zone of Economic Production?”, to speak of the economic operation of a manufacturing process, all of the following elements are required:
Element 1: Predictable operation
Element 2: On-target operation
Element 3: Process capability achieved (Cp and Cpk ≥ 1.5)
The notions of on-target operation and process capability are inextricably linked to predictable operation—i.e., demonstrable process stability and consistency over time. Without stability and consistency over time it is impossible to meaningfully talk about either capability or on-target operation.
First example of a predictable process
Our first example uses 128 successive sample measurements for product characteristic 17 in product 73S. The time period covered by these data was sufficient to meaningfully address the question of process predictability. Since these are one-value-per-time-period data, a process behavior chart for individual values is appropriate, as seen in figure 1. A process behavior chart has traditionally been called a control chart, the principal technique of statistical process control (SPC). This chart provides a proven operational definition of a predictable, or “in control,” process.
Is the process predictable? Figure 1 allows us to characterize process behavior as predictable and therefore to think of one voice speaking on behalf of the process. The natural process limits of 37.41 to 40.83 shown in figure 1 define this “voice of the process.” They also tell us what to expect from this process in the future. Thus, by being operated predictably, this process meets the first requirement for economic operation.
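The calculation behind natural process limits is simple enough to sketch in a few lines. The Python sketch below computes the limits for an individual-values chart from the average moving range; because the article’s 128 individual measurements are not reproduced here, the data used are purely illustrative.

```python
def natural_process_limits(values):
    """X-chart limits: mean(X) +/- 2.66 * mean(moving range).

    The 2.66 scaling constant is the standard 3/d2 value, with
    d2 = 1.128 for moving ranges of two successive values.
    """
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar

# Illustrative data only (not the article's 128 measurements):
data = [39.1, 38.8, 39.4, 39.0, 38.7, 39.3, 39.2, 38.9, 39.5, 39.0]
lnpl, center, unpl = natural_process_limits(data)
```

Run against the real characteristic-17 data, this same calculation would yield the published limits of 37.41 and 40.83.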
Is the process on target? Although the green line of figure 1 shows the average of 39.12 to be very close to the target value of 39.0, it doesn’t give us a standardized means of answering the question. A traditional 99-percent confidence interval for the mean is 38.99 to 39.25. Since this interval estimate includes 39.0, we can conclude that the process is effectively on target, and that the process meets the second requirement for economic operation.
Is the process capable of meeting the specifications? Product characteristic 17 has specifications of 36.5 to 41.5. These specifications define the “voice of the customer.” By combining the histogram, the specifications (USL and LSL), and the natural process limits (UNPL and LNPL), figure 2 gives a graphic way to compare the voice of the customer with the voice of the process and thereby answer the question above. Numerical quantities that complement figure 2 are the capability ratios of Cp = 1.45 and Cpk = 1.39. Since the confidence intervals for both of these ratios include 1.50, it is safe to say they are in the ballpark required for economic operation. Thus, the third requirement of economic operation is met.
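As a rough check on these figures, the capability ratios can be computed directly from the published summary numbers. The sketch below reconstructs a sigma estimate from the natural process limits (a six-sigma span, by definition) and applies the standard Cp and Cpk formulas; small differences from the published Cp of 1.45 come from rounding in the reported limits.

```python
def capability_ratios(mean, sigma, lsl, usl):
    """Standard capability ratios from a within-process sigma:

    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
    """
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Sigma reconstructed from the natural process limits in figure 1
# (37.41 to 40.83 spans six sigma):
sigma_hat = (40.83 - 37.41) / 6
cp, cpk = capability_ratios(39.12, sigma_hat, 36.5, 41.5)
```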
Further to the capability question, as long as the process continues to be operated predictably and, therefore, within the natural process limits, should we expect fully conforming product for characteristic 17? Figure 2 tells us to answer yes with a high degree of belief. (A high degree of belief has to mean that no substantial change to the process or its operation is expected.)
Hence, the data for product characteristic 17 satisfy the requirements for economic operation listed earlier. With predictability, on-target operation, and capability in place, the process is in the ideal state. (See “Two Definitions of Trouble” to learn of the four possible states for any production process.) So long as process predictability is maintained, the process will continue to operate on target and capably.
But what happens in the future? New production runs will bring new data, inviting the user to update her computations. Would this computational effort be beneficial? There are three cases to be considered when answering this very important question.
First, as long as the process continues to display predictable behavior, there is no need to recompute the limits for the chart or the capability ratios. They would simply be repeat estimates of the same quantities.
Second, if the process shows evidence of unpredictable behavior, there is also no need to recompute the limits for the chart or the capability ratios. Although the process may have changed, the computations do nothing to fix the problem. The real need is to identify the assignable cause of the change and take action to remove its effect from the process. Making the identified assignable cause part of the set of control factors for the process will be more profitable than anything else that can be done when a process is unpredictable.
Third, only when the process displays a different kind of behavior than observed previously, and the behavior is both desired and expected to continue (e.g., the result of a planned change), is there any benefit to be obtained from a recomputation of the limits for the chart and the capability ratios.
Looking forward, figure 3 asks, for our first example: What is it going to take to sustain predictable operation? The natural process limits define what this process is capable of delivering, so how can we avoid settling for less? How can we get the process to continue to operate up to its full potential? It turns out that sustaining predictable operation requires continued attention to the process and the willingness to take action when and where it is needed. The reason we can’t simply fix the process and then forget it is known as entropy.
The deteriorative force of entropy
Entropy acts against all manufacturing processes, meaning it is far from easy to operate a process with maximum consistency, that is, predictably. Entropy is a force of deterioration. It forces a manufacturer to maintain and look after all aspects of a production operation. Without action to counter the effects of entropy, it wouldn’t take long until product measurements for characteristic 17 would be found outside the range of predictable operation found in figure 3.
Predictability is not a natural state for a production process. Signals of process change that are made visible by process behavior charts provide clues about when and where to act against the forces of entropy to regain a state of predictability.
Sustaining predictable operation
How should a manufacturer act against the forces of entropy in such a way that a production process has a chance of sustaining predictability in the long-term? Some fundamental points are discussed below.
To start, the operator of a production process is not solely responsible for predictable operation. While some assignable causes of unpredictability will be sourced to production operators, there is much more to it than that.
Operating standards and training
Without an operating standard, one can argue that there is no “process.” Even with standards in place, operators subject to inadequate or incorrect training may operate a process unpredictably. One example is adjustments that make things worse, such as reacting to process output found outside of specification when there is actually no signal of change in the process.
Operating standards define how to operate the process. They provide the foundation for consistent and effective process operation across the workforce, such as aligning the ways of working among different shifts. High-quality operating standards provide the basis for effective training and supervision, and also the means of operation to enable predictable operation. Although operators follow and execute such standards, they don’t own them.
Data collection and use (rational sampling in SPC)
Of critical importance is that data are collected and used in such a way that the behavior of the process—predictable or unpredictable—can be judged effectively. While operator input may be critical in determining an effective data collection plan, a process specialist, or “SPC lead,” is more likely to take responsibility for the data collection plan and choice and use of SPC chart.
For example, collecting data at too high a frequency can make a predictable process appear unpredictable. Too low a frequency of data collection can mean that some signals of unpredictability pass undetected, meaning the opportunity to learn more about the process, and take appropriate action to potentially improve it, is lost. (See “Rational Sampling” for more details.) Since data provide the basis for action on the process, it is important that process data are collected, organized and used in a way that will provide the needed insight.
If, for example, a purchasing department buys on price tag alone, poor-quality materials may leave an operator helpless to achieve predictable operation (garbage in, garbage out). Raw material suppliers may need SPC as much as, or more than, the manufacturer transforming the supplied raw materials into finished goods (e.g., via assembly operations).
Process design and the possibility to control causes of variation
Natural raw materials may exhibit inconsistencies in quality over time whose causes cannot be directly controlled at source (e.g., seasonal variations in milk or differences between suppliers from different geographical locations when two or more sources of supply are needed to obtain sufficient quantity of raw materials to meet production volumes). The process needs to allow for control actions such as in-tank adjustments that make possible the removal of the effect of these potential assignable causes during actual processing. In-tank adjustments, moreover, need to be executed smartly, implying the need for a well-defined dead band. (The article “The Secret of Process Adjustment” explains how unnecessary adjustments without a well-defined dead band can only increase process variation.)
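The dead-band idea described above can be sketched as a simple adjustment rule: Leave deviations inside the dead band alone, and compensate only for larger excursions. The function and the fat-content numbers below are hypothetical, a minimal illustration rather than a real control loop.

```python
def adjustment(measured, target, dead_band):
    """Return the adjustment to apply, or 0.0 inside the dead band.

    Reacting to every deviation (a dead band of zero) is tampering
    and can only increase variation; the dead band filters out
    routine, common cause noise.
    """
    deviation = measured - target
    if abs(deviation) <= dead_band:
        return 0.0             # leave the process alone
    return -deviation          # compensate only for real excursions

# Hypothetical in-tank fat-content values (percent), dead band 0.05:
no_action = adjustment(3.52, 3.50, 0.05)   # inside the dead band
correction = adjustment(3.62, 3.50, 0.05)  # outside: adjust toward target
```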
Maintenance and engineering
Maintenance plans need to be well-defined and executed in a timely manner. A failure to maintain and repair the process line, including process equipment such as flow meters and pressure and temperature sensors, can again leave operators helpless to achieve sustained predictable operation.
Many process operations allow for automatic process control, which includes the continual execution of automatic adjustments to keep the process on, or close to, target. The means of tuning and monitoring automatic PID loops can be complex and very likely fall outside the list of responsibilities assigned to an operator. Such PID loops need to make predictable operation possible. PID adjustment loops that react too quickly, or too slowly, or that have no direct impact on important control factors, will very likely do nothing to resolve issues of unpredictable process operation. In some cases, they can even increase process variation, making things worse. (See “Process Monitor Charts” for further discussion of these points.)
Measurement systems can be the source of assignable causes on process behavior charts, but only when they are operated unpredictably themselves. As long as they are operated predictably, the variation attributable to measurement will always be part of the process’s routine, common cause variation. The greater the level of routine measurement variation, the further apart the process limits will be on the process behavior chart. Even if you don’t know how much of the total routine variation is due to measurement variation, it is always there, like it or not.
Assignable causes related to measurement can come from inconsistent sampling and sample-handling practices, as well as poorly calibrated, monitored and maintained equipment. Different analysts may also use measurement equipment differently, which can show up as signals of assignable-cause variation on a process behavior chart. If different laboratories are used, there will be a need for consistency both within and between the laboratories (no bias between locations and a comparable level of consistent precision).
Standards and training are also needed for the use of measurement equipment, since a measurement process is a process in its own right.
Management can foster unpredictable operation by failing to support efforts to fix problems that are identified in the course of production. The workforce needs time to keep, discuss, and respond to process behavior charts, necessitating support from management in these efforts. Responding to the charts means 1) identifying the causes of process changes; and then 2) taking action on them, as also described later in the Tokai Rika example (these two steps are shown in figures 4 and 7).
Everybody connected with the process is needed
To achieve and sustain predictable operation, there is a need for everybody connected with the process to do his part. Operators of the process can only ever be one piece of the jigsaw puzzle. Predictable operation requires a supportive environment, and a key role for management is to establish and maintain this environment, which includes not only the use of process behavior charts, but also the ability and willingness to respond to them. Signals of process change presented by process behavior charts are indicators that predictable operation has broken down. The way to regain predictable operation, and with it minimum achievable variation in output for the current process, is to identify the causes of unpredictability and take action on them. This is illustrated schematically in figure 4, which is drawn circularly to depict its continuous, ongoing nature.
Second example of a predictable process: Tokai Rika
Operating predictably all the time is not a viable aim. To expect continued, uninterrupted predictable operation can only be described as wishful thinking. A predictable process in the mid- to long-term should be regarded as one that is subject to occasional, or only very occasional, signals of unpredictability. When entropy intrudes it will bring assignable causes with it, increasing the variation in process outcomes above and beyond the level of common cause variation routinely present.
The example of Tokai Rika, described in “How Do You Get the Most Out of Any Process?” reveals not only what predictable and economic operation mean in routine production but also how to approach this challenge so that it is practically sustainable in the long-term.
The average (upper) chart shown in figure 5 is reproduced from “How Do You Get the Most Out of Any Process?” The chart finds evidence of a process change on days 35 and 36. Looking back to the last time the process crossed the central line, Tokai Rika’s production workers decided that this problem could have begun as early as day 29. Upon investigation, they found that the positioning collar had worn down and needed to be replaced. Recognizing this as a problem of tool wear, they did two things: They ordered a new positioning collar, and they turned the old collar over to get back on target while waiting for the new collar to arrive. This is indicative of a desire to operate right at the target value whenever possible. The new collar was installed on day 39, and they wrote Intervention Report No. 1 detailing what was found and what was done.
Following this intervention, they decided to compute new limits for the process. They ran without limits for days 39 to 49 and used this period as their new baseline. With a grand average of 90.18 and an average range of 0.91, the new limits were considerably tighter than the previous limits. As they used these limits to track the process, they soon found evidence of another process change.
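For readers who want to reproduce the limit calculation, the sketch below applies the standard Xbar-R formulas to the new baseline figures. The subgroup size is not stated in this excerpt, so the control chart constants for subgroups of size four are an assumption.

```python
# Standard control chart constants, assuming subgroups of size n = 4
# (the subgroup size is not given in the excerpt above):
A2, D3, D4 = 0.729, 0.0, 2.282

def xbar_r_limits(grand_average, average_range):
    """Average-chart and range-chart limits for an Xbar-R chart."""
    x_lcl = grand_average - A2 * average_range
    x_ucl = grand_average + A2 * average_range
    r_lcl = D3 * average_range
    r_ucl = D4 * average_range
    return (x_lcl, x_ucl), (r_lcl, r_ucl)

# New baseline from days 39 to 49: grand average 90.18, average range 0.91
(x_lcl, x_ucl), (r_lcl, r_ucl) = xbar_r_limits(90.18, 0.91)
```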
The averages for days 57, 58, and 59 are all below the lower limit, and it’s fairly clear there was a shift in the process. (They did not make any notes about the ranges falling above their upper limit on days 53 and 54.) Moreover, unlike the excursion on days 29 to 36, in this case, there is no gradual change leading up to the first point outside the limit. Hence, they noted that this was a sudden change and began to look for something broken. They began their investigation with the rolling operation. When no problems were found at rolling, they turned to the blanking operation.
As shown in figure 6, after the better part of two weeks had passed, they finally discovered what was making the detent dimensions smaller: There was a very small wrinkle on the flange due to a defect in the die.
Thus, they scheduled a repair for the die on the weekend between days 70 and 71. At the same time they modified the bolt holding the pressure pad since they had found that it was coming loose. Following these changes they wrote up Intervention Report No. 2 and proceeded to collect data for a new set of limits. The process average went up to 90.88, which is probably due to the fact that this positioning collar already had 32 days of wear prior to this new baseline period.
The record of the Tokai Rika process covers some 20 months and the full story is found in Understanding Statistical Process Control, Third Edition. Over this extended period, the Tokai Rika process was shown to be occasionally subject to the effect of assignable causes. When these assignable causes changed the process location or process variation, the process behavior chart detected these changes as shown in figures 5 and 6.
Returning to the three elements of economic operation, how did Tokai Rika’s process do? First, for the most part the process demonstrated a high degree of predictability, a fine achievement in its own right. Second, with a target of 90 the process was effectively on target over the course of the 20-month record. Lastly, with capabilities in excess of 2, there is no doubt about how to answer the question, “Is the process capable of meeting the specifications?” Hence, Tokai Rika met the requirements of economic operation for this production process.
Some important lessons from the use of process behavior charts at Tokai Rika are:
1. The detected assignable causes were worth knowing about, and action taken on these causes contributed to continual process improvement—the way data were collected and used provided the needed insight to make these improvements possible.
2. Signals on the process behavior chart were carefully interpreted to pinpoint when the process changes likely started so that investigative effort was able to focus in, and identify, the cause of each change.
3. The working environment at Tokai Rika enabled the discussion of, and response to, the signals of process change, allowing the charts to provide a basis for action on the process.
4. The successful identification of assignable causes was sometimes difficult, and on a couple of occasions during the 20-month record, the investigative trail ran dry; however, the working environment was able to accommodate these difficulties and disappointments.
5. Inherent to Tokai Rika’s approach to sustaining predictable and economic operation, and therefore fighting against the effects of entropy, is the three-step process found in figure 7.
6. Even though assignable causes are undesired, and Tokai Rika wanted to get rid of them, the company chose to operate the process without having identified and removed the effect of one such cause (see days 57 to 70 in figure 6). Some assignable causes will warrant shut down of a process, others not, meaning that user judgment is critical.
Further to the first point on continual improvement, the Tokai Rika example reveals that successfully sustaining predictable operation also provides a means of reducing routine, common cause variation over time. At the start of the record, the process’s common-cause standard deviation was 0.0148 mm, yet after the improvements, the process operated for the last 13 months with a standard deviation of 0.0100 mm. This means the process variance was reduced from 0.000219 down to 0.000100, a 54-percent reduction in process variance. This came about by following through on the signals of assignable cause variation found on the process behavior chart. Visually, this improvement appears as illustrated by the two histograms in figure 8. (The second histogram is about 68 percent as wide as the initial histogram because a 54-percent reduction in variance corresponds to a standard-deviation ratio of √(1 − 0.543) = 0.676.)
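The arithmetic behind these percentages is worth making explicit. The short sketch below recovers both the variance reduction and the histogram-width ratio from the two published standard deviations.

```python
import math

sigma_before = 0.0148   # mm, common-cause sigma at the start of the record
sigma_after = 0.0100    # mm, common-cause sigma over the last 13 months

# Variance falls by 1 - (sigma_after / sigma_before)^2, about 54 percent:
var_reduction = 1 - (sigma_after / sigma_before) ** 2

# Histogram width scales with sigma, so the improved histogram is
# sqrt(1 - var_reduction), about 68 percent, as wide as the original:
width_ratio = math.sqrt(1 - var_reduction)
```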
Hence, while sustaining predictable operation, Tokai Rika learned more about its process, and by implementing the knowledge gained, it improved the process at the same time. Tokai Rika showed that a process behavior chart makes it possible to learn from the process and to improve the process while monitoring the process.
The difference between operating a process predictably and operating a process unpredictably is not a matter of having the right process, or having the right process settings, or even having the right process design. While all these have an impact upon the ability to operate a process predictably, the ultimate issue in operating a process predictably is an operational one. Does everyone involved with operating a process understand what it takes to operate it predictably? And do they have the operational discipline to do so over the long haul? The operating environment must be one that makes sustained, predictable operation possible.
Entropy is relentless. It will make each and every process unpredictable. Therefore, without a continuing program for identifying and removing the effects of assignable causes of unpredictability, there is little to no point in talking of mid- to long-term process predictability, sustained on-target operation, or process capability.
World-class quality has for years been defined as on-target operation with minimum variance. Operating on target means operating near the ideal process outcome. Operating with minimum variance means operating the process up to its current full potential; when this happens, a state of maximum consistency in operation has been achieved, and the process data will display predictable behavior (see figure 1).
So, if you get as far as meeting the requirements of economic operation outlined at the start of this article, what does it take to sustain this achievement? It takes continued predictable operation, which means a process that is operated at full potential. Any drop below operation at full potential means that predictable operation will have broken down, and assignable causes will be taking the process on walkabout. When this happens, the notions of being on-target and capable are lost, even if only temporarily. The means of regaining predictable operation, and hence also economic operation, for on-target and capable processes, is to identify the assignable causes and then to act on them, just as the Tokai Rika personnel did. This is why process behavior charts are the key to sustaining predictable and economic operation.
Quality tools can serve many purposes in problem solving. They may be used to assist in decision making, selecting quality improvement projects, and in performing root cause analysis. They provide useful structure to brainstorming sessions, for communicating information, and for sharing ideas with a team. They also help with identifying the optimal option when more than one potential solution is available. Quality tools can also provide assistance in managing a problem-solving or quality improvement project.
Seven classic quality tools
The Classic Seven Quality tools were compiled by Kaoru Ishikawa in his book, Guide to Quality Control (Asian Productivity Organization, 1991). Also known as “The Seven Tools” and “The Seven Quality Tools,” these basic tools should be understood by every quality professional. The Classic Seven Tools were first presented as tools for production employees to use in analyzing their own problems; they are simple enough for everybody to use, yet powerful enough to tackle complex problems.
The seven tools are:
1. Cause-and-effect diagrams
2. Scatter diagrams
3. Control charts
4. Histograms
5. Check sheets
6. Pareto charts
7. Flow charts
A cause-and-effect diagram is used to list potential causes of a problem. It is also known as an Ishikawa diagram or fishbone diagram. Typically, the main branches are the “6Ms”: man, material, methods, milieu (environment), machine, and measurement. Sub-branches are listed under the main branches, with “twigs” containing the potential problem causes. A cause-and-effect diagram can be used to assist when the team is brainstorming, and it can also be used to quickly communicate all potential causes under consideration.
A scatter diagram graphically depicts paired data points along an X and Y axis. The scatter diagram can be used to quickly identify potential relationships between paired data points. Figure 2 depicts various potential correlations, ranging from no correlation to strong negative and strong positive correlation. It is important to remember that a strong correlation does not necessarily mean there is a direct relationship between the paired data points; both variables may be following a third, unstudied factor.
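As a numeric companion to a scatter diagram, the strength of a linear relationship can be quantified with the Pearson correlation coefficient. The sketch below is a minimal pure-Python implementation; the temperature and hardness data are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired data points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical paired data: oven temperature vs. part hardness
temps = [150, 160, 170, 180, 190]
hardness = [52, 55, 59, 61, 66]
print(round(pearson_r(temps, hardness), 3))  # → 0.993, a strong positive correlation
```

A value near +1 or -1 flags a strong correlation worth plotting; as the article notes, it does not by itself prove a causal relationship.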
Control charts are used to evaluate and monitor the performance of a process (Wheeler 1995). There are many types of control charts available for statistical process control (SPC), and different charts are used depending on the sample size and the type of data used. An individuals chart is used when the sample size is one. The formulas for an individuals chart are shown in table 1, and an example of an individuals chart for a shaft diameter is shown in figure 3. The data are in a state of statistical control when all values are within the control limits, which contain 99.7 percent of all values.
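The individuals-chart limits described above can be computed directly: the centre line is the mean of the values, and the limits sit 2.66 average moving ranges on either side of it (2.66 = 3/d2, with d2 = 1.128 for moving ranges of size two). A minimal sketch, with hypothetical shaft-diameter data:

```python
def individuals_limits(values):
    """Control limits for an individuals (XmR) chart.

    Uses the standard constant 2.66 (= 3 / d2, d2 = 1.128 for n = 2)
    applied to the average moving range.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical shaft diameters in mm
diameters = [10.02, 10.05, 9.98, 10.01, 10.04, 9.99, 10.03]
lcl, centre, ucl = individuals_limits(diameters)
in_control = all(lcl <= d <= ucl for d in diameters)  # True for this data
```

Any point outside the computed limits would signal an assignable cause worth investigating.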
Histograms are used to visualize the distribution of data (McClave and Sincich 2009). The y-axis shows the frequency of occurrences, and the x-axis shows the actual measurements. Each bar on a histogram is a bin, and bin size can be determined by taking the square root of the number of items being analyzed. Using a histogram can quickly show if the data are skewed in one direction or another. Figure 4 shows a histogram for data that fit a normal distribution, with half of all values above and below the mean.
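The square-root bin rule mentioned above is easy to sketch. The following minimal example (a hypothetical helper using equal-width bins) groups data into roughly sqrt(n) bins and returns the frequency of each:

```python
import math

def histogram_bins(data):
    """Group data into sqrt(n) equal-width bins and return the frequencies."""
    n_bins = max(1, round(math.sqrt(len(data))))
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins or 1  # guard against all-equal data
    counts = [0] * n_bins
    for x in data:
        # clamp the maximum value into the last bin
        idx = min(int((x - lo) / width), n_bins - 1)
        counts[idx] += 1
    return counts

counts = histogram_bins(list(range(16)))  # 16 items -> 4 equal-width bins
print(counts)  # → [4, 4, 4, 4]
```

Plotting these counts as bars gives the histogram; skew shows up as counts piling toward one end of the list.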
Check sheets are used for the collection of data (Borror 2009), such as when parts are being inspected. The various failure categories or problems are listed, and a hash mark is placed next to the label when the failure or problem is observed (see figure 5). The data collected in a check sheet can be evaluated using a Pareto chart.
A Pareto chart is used for prioritization by identifying the 20 percent of problems that result in 80 percent of costs (Juran 2005). This can be useful when searching for improvement projects that will deliver the most impact with the least effort. Figure 6 shows a Pareto chart with three out of seven problems accounting for 80 percent of all problems. Those three would be the priority for improvement projects.
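The 80/20 prioritization behind a Pareto chart amounts to sorting problems by frequency and accumulating counts until roughly 80 percent is covered. A minimal sketch, with hypothetical defect counts:

```python
def pareto_priorities(problem_counts, threshold=0.8):
    """Return the problems that together account for ~80% of occurrences."""
    total = sum(problem_counts.values())
    ranked = sorted(problem_counts.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0
    for name, count in ranked:
        if cumulative / total >= threshold:
            break
        selected.append(name)
        cumulative += count
    return selected

# Hypothetical defect tallies, e.g. transferred from a check sheet
defects = {"scratches": 42, "dents": 31, "misalignment": 12,
           "discoloration": 7, "cracks": 4, "burrs": 3, "chips": 1}
print(pareto_priorities(defects))  # → ['scratches', 'dents', 'misalignment']
```

Here three of the seven problems cross the 80 percent mark, mirroring the situation in figure 6, so those three would be the priority for improvement projects.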
A flowchart is used to gain a better understanding of a process (Brassard 1996). A flowchart may provide a high-level view of a process, such as the one shown in figure 7, or it may be used to detail every individual step in the process. It may be necessary to create a high-level flowchart to identify potential problem areas and then chart the identified areas in detail to identify steps that need further investigation.
Seven new management and planning tools
The seven new management and planning tools are based on operations research and were created between 1972 and 1979 by the Japanese Society for Quality Control. They were first translated into English by GOAL/QPC in 1983 (Brassard 1996).
These seven tools are:
1. Affinity diagram
2. Interrelationship diagram
3. Tree diagram
4. Arrow diagram
5. Matrix diagram
6. Prioritization matrix
7. Process decision program chart (PDPC)
An affinity diagram identifies points by logically grouping concepts (ReVelle 2004). Members of a team write down items that they believe are associated with the problem under consideration, and these ideas are then grouped into categories or related points.
The interrelationship diagram depicts cause-and-effect relationships between concepts and is created by listing problems on cards (Westcott 2014). These cards are then laid out, and influences are identified with arrows pointing at the items that are being influenced. One item with many arrows originating from it is a cause that has many influences, and much can be achieved by correcting or preventing this problem.
A tree diagram assists in moving from generalities to the specifics of an issue (Tague 2005). Each level is broken down into more specific components as one moves from left to right in the diagram.
An arrow diagram is used to identify the order in which steps need to be completed to finish an operation or project on time (Brassard 1996). The individual steps are listed, together with the duration, in the order that they occur. Using an arrow diagram such as the one in figure 11 can show steps that must start on time to prevent a delay in the entire project or operation.
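The “steps that must start on time” reading of an arrow diagram corresponds to the longest path through the task network. A minimal sketch, assuming an acyclic set of hypothetical project steps with durations in days:

```python
def earliest_finish(tasks):
    """Earliest finish time for each task.

    tasks: {name: (duration, [predecessor names])} -- assumed acyclic.
    A task can finish only after its longest-running predecessor chain.
    """
    finish = {}

    def ef(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((ef(p) for p in preds), default=0)
        return finish[name]

    for name in tasks:
        ef(name)
    return finish

# Hypothetical project steps (duration in days, predecessors)
tasks = {
    "design": (3, []),
    "order parts": (2, ["design"]),
    "build": (4, ["order parts"]),
    "write manual": (2, ["design"]),
    "test": (1, ["build", "write manual"]),
}
print(earliest_finish(tasks)["test"])  # → 10, the minimum project duration
```

Tasks on the longest chain (design, order parts, build, test in this example) have no slack: a delay in any of them delays the whole project, which is exactly what the arrow diagram makes visible.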
The matrix diagram is used to show relations between groups of data (Westcott 2014). The matrix diagram in figure 12 depicts three suppliers as well as their fulfillment of the three characteristics listed on the left side of the table. In this example, only two suppliers share the characteristic “ISO certification.”
The prioritization matrix is used to select the optimal option by assigning weighted values to the characteristics that must be fulfilled, and then assessing the degree to which each option fulfills each requirement (ReVelle 2004). The prioritization matrix in figure 13 is being used to select the best option for a staffing problem.
Process decision program charts (PDPC) map out potential problems in a plan and their solutions (Tague 2005). The example in figure 14 shows the potential problems that could be encountered when conducting employee training, as well as solutions to these problems.
Example of combining quality tools
Multiple quality tools can be used in succession to address a problem (Barsalou 2015). The tools should be selected based on the intended use, and information from one tool can be used to support a later tool. The first step is to create a detailed problem description that fully describes the problem. In this hypothetical example, the problem description is “coffee in second-floor break room tastes bad to the majority of coffee drinkers; this was first noticed in February 2017.” The hypothetical problem-solving team then creates the flowchart shown in figure 15 to better understand the process.
The team then brainstorms potential causes of the problem. These ideas come from the team members’ experience with comparable, previous issues as well as technical knowledge and understanding of the process. The ideas are written on note cards, which are grouped into related categories to create an affinity diagram based around the 6Ms that are used for a cause-and-effect diagram (see figure 16).
The affinity diagram is then turned into the cause-and-effect diagram depicted in figure 17. The team can then expand the cause-and-effect diagram if necessary. The cause-and-effect diagram provides a graphical method of communicating the many root-cause hypotheses. This makes it easy to communicate the hypotheses, but it’s not ideal for tracking the evaluation and results.
Cause-and-effect diagram items are then transferred to a worksheet like the one shown in figure 18. The hypotheses are then prioritized so that the most probable causes are the first ones to be investigated. A method of evaluation is then determined, a team member is assigned the evaluation action item, and a target completion date is listed. A summary of evaluation results is then listed, and the conclusions are color-coded to indicate whether they are OK, unclear, or potentially the root cause. Unclear items as well as potential root causes should then be investigated further, and OK items are removed from consideration.
Figure 19 shows a close-up view of the cause-and-effect worksheet. Often, a cause-and-effect diagram item is not clear about how it relates to the problem. In such a situation, it can be expanded in the worksheet to turn it into a clearer hypothesis. For example, “Water” in the cause-and-effect diagram can be changed to “Water from the city water system containing chemicals leading to coffee tasting bad” in the worksheet.
A prioritization matrix can be used to evaluate multiple potential solutions to the problem. In this example, the team has identified three potential solutions: clean and repair the old machine, buy a new machine, or buy an expensive new machine. The team wants to avoid high costs, does not want to spend too much time implementing the solution, and wants something with long-term value. Therefore, the prioritization matrix shown in figure 20 is used to find the ideal solution.
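The arithmetic of a prioritization matrix is a weighted sum: each option’s rating on a criterion is multiplied by the criterion’s weight, and the products are summed. The sketch below uses hypothetical weights and 1-5 ratings loosely modeled on the coffee-machine example (the actual values in figure 20 are not reproduced here):

```python
def prioritization_matrix(weights, options):
    """Score each option as the sum of (criterion weight x fulfillment rating)."""
    scores = {name: sum(weights[c] * r for c, r in ratings.items())
              for name, ratings in options.items()}
    return max(scores, key=scores.get), scores

# Hypothetical criterion weights (importance) and 1-5 fulfillment ratings
weights = {"low cost": 0.5, "quick to implement": 0.2, "long-term value": 0.3}
options = {
    "repair old machine":    {"low cost": 4, "quick to implement": 5, "long-term value": 2},
    "buy new machine":       {"low cost": 3, "quick to implement": 3, "long-term value": 4},
    "buy expensive machine": {"low cost": 1, "quick to implement": 3, "long-term value": 5},
}
best, scores = prioritization_matrix(weights, options)
print(best)  # → repair old machine (score 3.6 vs. 3.3 and 2.6)
```

Making the weights explicit forces the team to agree on what matters most before the scores are tallied, which is the main value of the tool.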
There is no one right quality tool for every job, so quality tools should be selected based on what must be accomplished. Information from one tool can be transferred to a different tool to continue the problem-solving process. Action items resulting from a cause-and-effect diagram should be entered into a tracking list. This assists the team leader in tracking the status of items, makes it easier to ensure action items are completed, and is also useful for reporting the results of action items.