In theory, a production process is always predictable. In practice, however, predictable operation is an achievement that has to be sustained, which is easier said than done. Predictable operation means that the process is doing the best that it can currently do—that it is operating with maximum consistency. Maintaining this level of process performance over the long haul can be a challenge. Effective ways of meeting this challenge are discussed below.
Some elements of economic operation
As argued in “What Is the Zone of Economic Production?”, to speak of the economic operation of a manufacturing process, all of the following elements are required:
Element 1: Predictable operation
Element 2: On-target operation
Element 3: Process capability achieved (Cp and Cpk ≥ 1.5)
The notions of on-target operation and process capability are inextricably linked to predictable operation—i.e., demonstrable process stability and consistency over time. Without stability and consistency over time it is impossible to meaningfully talk about either capability or on-target operation.
First example of a predictable process
Our first example uses 128 successive sample measurements for product characteristic 17 in product 73S. The time period covered by these data was sufficient to meaningfully address the question of process predictability. Since these are one-value-per-time-period data, a process behavior chart for individual values is appropriate, as seen in figure 1. A process behavior chart has traditionally been called a control chart, the principal technique of statistical process control (SPC). This chart provides a proven operational definition of a predictable, or “in control,” process.
Figure 1: Process behavior chart for product characteristic 17
Is the process predictable? Figure 1 allows us to characterize process behavior as predictable and therefore to think of one voice speaking on behalf of the process. The natural process limits of 37.41 to 40.83 shown in figure 1 define this “voice of the process.” They also tell us what to expect from this process in the future. Thus, by being operated predictably, this process meets the first requirement for economic operation.
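To make the “voice of the process” concrete, here is a minimal sketch of how natural process limits for an individuals chart are computed from the average moving range. The data below are illustrative only, not the article’s 128 measurements for characteristic 17.

```python
def xmr_limits(values):
    """Return (LNPL, average, UNPL) for an individuals (XmR) chart."""
    n = len(values)
    avg = sum(values) / n
    # Average of the two-point moving ranges between successive values
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size two
    spread = 2.66 * mr_bar
    return avg - spread, avg, avg + spread

# Illustrative measurements, loosely in the range of characteristic 17
data = [39.1, 38.8, 39.4, 39.0, 38.6, 39.3, 39.2, 38.9, 39.5, 38.7]
lnpl, avg, unpl = xmr_limits(data)
```

With real production data, the points would then be plotted in time order against these limits, as in figure 1.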
Is the process on target? Although the green line of figure 1 shows the average of 39.12 to be very close to the target value of 39.0, it doesn’t give us a standardized means of answering the question. A traditional 99-percent confidence interval for the mean is 38.99 to 39.25. Since this interval estimate includes 39.0, we can conclude that the process is effectively on target, and that the process meets the second requirement for economic operation.
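The interval quoted above can be reconstructed approximately from the article’s published numbers. In this sketch, sigma is recovered from the natural process limits (6 sigma = 40.83 − 37.41), and the t-value of about 2.616 for 99% confidence with 127 degrees of freedom is hardcoded rather than taken from a statistics library.

```python
import math

n = 128
mean = 39.12
sigma_hat = (40.83 - 37.41) / 6   # about 0.57, recovered from the limits
t_99 = 2.616                      # approximate t(0.995, df = 127)

# 99% confidence interval for the mean: mean +/- t * sigma / sqrt(n)
margin = t_99 * sigma_hat / math.sqrt(n)
lo, hi = mean - margin, mean + margin
```

Since the resulting interval contains the target of 39.0, the on-target conclusion follows.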
Is the process capable of meeting the specifications? Product characteristic 17 has specifications of 36.5 to 41.5. These specifications define the “voice of the customer.” By combining the histogram, the specifications (USL and LSL), and the natural process limits (UNPL and LNPL), figure 2 gives a graphic way to compare the voice of the customer with the voice of the process and thereby answer the question above. Numerical quantities that complement figure 2 are the capability ratios of Cp = 1.45 and Cpk = 1.39. Since the confidence intervals for both of these ratios include 1.50, it is safe to say they are in the ballpark required for economic operation. Thus, the third requirement of economic operation is met.
Figure 2: Comparison of the voice of the customer with the voice of the process
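The capability ratios quoted with figure 2 follow from the standard definitions. In this sketch, sigma is again recovered from the published natural process limits, so the results only approximate the article’s Cp = 1.45 and Cpk = 1.39.

```python
def capability(mean, sigma, lsl, usl):
    """Standard capability ratios from the process mean and sigma."""
    cp = (usl - lsl) / (6 * sigma)                    # voice of customer vs. process spread
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # penalizes off-center operation
    return cp, cpk

sigma_hat = (40.83 - 37.41) / 6   # about 0.57, from the natural process limits
cp, cpk = capability(39.12, sigma_hat, 36.5, 41.5)
```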
Further to the capability question, as long as the process continues to be operated predictably and, therefore, within the natural process limits, should we expect fully conforming product for characteristic 17? Figure 2 tells us to answer yes with a high degree of belief. (A high degree of belief has to mean that no substantial change to the process or its operation is expected.)
Hence, the data for product characteristic 17 satisfy the requirements for economic operation listed earlier. With predictability, on-target operation, and capability in place, the process is in the ideal state. (See “Two Definitions of Trouble” to learn of the four possible states for any production process.) So long as process predictability is maintained, the process will continue to operate on target and capably.
But what happens in the future? New production runs will bring new data, inviting the user to update her computations. Would this computational effort be beneficial? There are three cases to be considered when answering this very important question.
First, as long as the process continues to display predictable behavior, there is no need to recompute the limits for the chart or the capability ratios. They would simply be repeat estimates of the same quantities.
Second, if the process shows evidence of unpredictable behavior, there is also no need to recompute the limits for the chart or the capability ratios. Although the process may have changed, the computations do nothing to fix the problem. The real need is to identify the assignable cause of the change and take action to remove its effect from the process. Making the identified assignable cause part of the set of control factors for the process will be more profitable than anything else that can be done when a process is unpredictable.
Third, only when the process displays a different kind of behavior than observed previously, and the behavior is both desired and expected to continue (e.g., the result of a planned change), is there any benefit to be obtained from a recomputation of the limits for the chart and the capability ratios.
Looking forward, figure 3 asks what it will take, for our first example, to sustain predictable operation. The natural process limits define what this process is capable of delivering, so how can we avoid settling for less? How can we get our process to continue to operate up to its full potential? It turns out that sustaining predictable operation requires continued attention to the process and the willingness to take action when and where it is needed. The reason that we can’t simply fix the process and then forget it is known as entropy.
Figure 3: Operation of a process up to its full potential
The deteriorative force of entropy
Entropy acts against all manufacturing processes, which is why it is far from easy to operate a process with maximum consistency, that is, predictably. Entropy is a force of deterioration. It forces a manufacturer to maintain and look after all aspects of a production operation. Without action to counter the effects of entropy, it wouldn’t take long before product measurements for characteristic 17 were found outside the range of predictable operation shown in figure 3.
Predictability is not a natural state for a production process. Signals of process change that are made visible by process behavior charts provide clues about when and where to act against the forces of entropy to regain a state of predictability.
Sustaining predictable operation
How should a manufacturer act against the forces of entropy in such a way that a production process has a chance of sustaining predictability over the long term? Some fundamental points are discussed below.
To start, the operator of a production process is not solely responsible for predictable operation. While some assignable causes of unpredictability will be traced back to production operators, there is much more to it than that.
Operating standards and training
Without an operating standard, one can argue that there is no “process.” Even with standards in place, operators subject to inadequate or incorrect training may operate a process unpredictably. One example is adjustments that make things worse, such as reacting to process output found outside of specification when there is actually no signal of change in the process.
Operating standards define how to operate the process. They provide the foundation for consistent and effective process operation across the workforce, such as aligning the ways of working among different shifts. High-quality operating standards provide the basis for effective training and supervision, as well as a means of operation that makes predictable operation possible. Although operators follow and execute such standards, they don’t own them.
Data collection and use (rational sampling in SPC)
Of critical importance is that data are collected and used in such a way that the behavior of the process—predictable or unpredictable—can be judged effectively. While operator input may be critical in determining an effective data collection plan, a process specialist, or “SPC lead,” is more likely to take responsibility for the data collection plan and for the choice and use of SPC charts.
For example, collecting data at too high a frequency can make a predictable process appear unpredictable. Too low a frequency of data collection can mean that some signals of unpredictability pass undetected, meaning the opportunity to learn more about the process, and take appropriate action to potentially improve it, is lost. (See “Rational Sampling” for more details.) Since data provide the basis for action on the process, it is important that process data are collected, organized and used in a way that will provide the needed insight.
Purchasing and suppliers
If, for example, a purchasing department buys on price tag alone, poor-quality materials may leave an operator helpless to achieve predictable operation (garbage in, garbage out). Raw material suppliers may need SPC as much as, or more than, the manufacturer transforming the supplied raw materials into finished goods (e.g., via assembly operations).
Process design and the possibility to control causes of variation
Natural raw materials may exhibit inconsistencies in quality over time whose causes cannot be directly controlled at source (e.g., seasonal variations in milk or differences between suppliers from different geographical locations when two or more sources of supply are needed to obtain sufficient quantity of raw materials to meet production volumes). The process needs to allow for control actions such as in-tank adjustments that make possible the removal of the effect of these potential assignable causes during actual processing. In-tank adjustments, moreover, need to be executed smartly, implying the need for a well-defined dead band. (The article “The Secret of Process Adjustment” explains how unnecessary adjustments without a well-defined dead band can only increase process variation.)
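A dead-band adjustment rule of the kind described above can be sketched as follows. This is an illustration of the principle only: act on a measured deviation from target only when it falls outside the dead band, and otherwise leave the process alone. The band width shown is a made-up value, not one from the article.

```python
def adjustment(measured, target, dead_band):
    """Return the adjustment to apply, or 0.0 when inside the dead band."""
    deviation = measured - target
    if abs(deviation) <= dead_band:
        # No signal of change: adjusting here would only add variation
        return 0.0
    # Outside the band: steer the process back toward target
    return -deviation

# Illustrative use: a small deviation is left alone, a large one is corrected
small = adjustment(39.2, 39.0, 0.5)
large = adjustment(40.0, 39.0, 0.5)
```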
Maintenance and engineering
Maintenance plans need to be well-defined and executed in a timely manner. A failure to maintain and repair the process line, including process equipment such as flow meters and pressure and temperature sensors, can again leave operators helpless to achieve sustained predictable operation.
Many process operations allow for automatic process control, which includes the continual execution of automatic adjustments to keep the process on, or close to, target. The means of tuning and monitoring automatic PID loops can be complex and very likely falls outside the responsibilities assigned to an operator. Such PID loops need to make predictable operation possible. PID adjustment loops that react too quickly, or too slowly, or that have no direct impact on important control factors, will very likely do nothing to resolve issues of unpredictable process operation. In some cases, they can even increase process variation, making things worse. (See “Process Monitor Charts” for further discussion of these points.)
Measurement systems
Measurement systems can be the source of assignable causes on process behavior charts, but only when they are operated unpredictably themselves. As long as they are operated predictably, the variation attributable to measurement will always be part of the process’s routine, common cause variation. The greater the level of routine measurement variation, the further apart the process limits will be on the process behavior chart. Even if you don’t know how much of the total routine variation is due to measurement variation, it is always there, like it or not.
Assignable causes related to measurement can come from inconsistent sampling and sample-handling practices, as well as poorly calibrated, monitored and maintained equipment. Different analysts may also use measurement equipment differently, which can show up as signals of assignable-cause variation on a process behavior chart. If different laboratories are used, there will be a need for consistency both within and between the laboratories (no bias between locations and a comparable level of consistent precision).
Standards and training are also needed for the use of measurement equipment, since a measurement process is a process in its own right.
Management support
Management can foster unpredictable operation by failing to support efforts to fix problems that are identified in the course of production. The workforce needs time to keep, discuss, and respond to process behavior charts, necessitating support from management in these efforts. Responding to the charts means 1) identifying the causes of process changes; and then 2) taking action on them, as also described later in the Tokai Rika example (these two steps are shown in figures 4 and 7).
Everybody connected with the process is needed
To achieve and sustain predictable operation, there is a need for everybody connected with the process to do his part. Operators of the process can only ever be one piece of the jigsaw puzzle. Predictable operation requires a supportive environment, and a key role for management is to establish and maintain this environment, which includes not only the use of process behavior charts, but also the ability and willingness to respond to them. Signals of process change presented by process behavior charts are indicators that predictable operation has broken down. The way to regain predictable operation, and with it minimum achievable variation in output for the current process, is to identify the causes of unpredictability and take action on them. This is illustrated schematically in figure 4, which is drawn circularly to depict its continuous, ongoing nature.
Figure 4: Schematic of a strategy aimed at sustaining predictable operation through process behavior charts
Second example of a predictable process: Tokai Rika
Operating predictably all the time is not a viable aim. To expect continued, uninterrupted predictable operation can only be described as wishful thinking. A predictable process in the mid- to long-term should be regarded as one that is subject to occasional, or only very occasional, signals of unpredictability. When entropy intrudes it will bring assignable causes with it, increasing the variation in process outcomes above and beyond the level of common cause variation routinely present.
The example of Tokai Rika, described in “How Do You Get the Most Out of Any Process?” reveals not only what predictable and economic operation mean in routine production but also how to approach this challenge so that it is practically sustainable over the long term.
The average (upper) chart shown in figure 5 is reproduced from “How Do You Get the Most Out of Any Process?” The chart finds evidence of a process change on days 35 and 36. Looking back to the last time the process crossed the central line, Tokai Rika’s production workers decided that this problem could have begun as early as day 29. Upon investigation, they found that the positioning collar had worn down and needed to be replaced. Recognizing this as a problem of tool wear, they did two things: They ordered a new positioning collar, and they turned the old collar over to get back on target while waiting for the new collar to arrive. This is indicative of a desire to operate right at the target value whenever possible. The new collar was installed on day 39, and they wrote Intervention Report No. 1 detailing what was found and what was done.
Figure 5: Tokai Rika example, days 1 to 60
Following this intervention, they decided to compute new limits for the process. They ran without limits for days 39 to 49 and used this period as their new baseline. With a grand average of 90.18 and an average range of 0.91, the new limits were considerably tighter than the previous limits. As they used these limits to track the process, they soon found evidence of another process change.
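The computation of new average-chart limits from a baseline period can be sketched as follows. The subgroup size of 4, and hence the A2 factor, is an assumption here: the article gives only the grand average of 90.18 and the average range of 0.91.

```python
# Standard average-chart scaling factors, indexed by subgroup size
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def average_chart_limits(grand_avg, avg_range, n):
    """Limits for the averages chart: grand average +/- A2 * average range."""
    half_width = A2[n] * avg_range
    return grand_avg - half_width, grand_avg + half_width

# Baseline from days 39 to 49, assuming subgroups of size 4
lcl, ucl = average_chart_limits(90.18, 0.91, 4)
```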
The averages for days 57, 58, and 59 are all below the lower limit, and it’s fairly clear there was a shift in the process. (They did not make any notes about the ranges falling above their upper limit on days 53 and 54.) Moreover, unlike the excursion on days 29 to 36, in this case, there is no gradual change leading up to the first point outside the limit. Hence, they noted that this was a sudden change and began to look for something broken. They began their investigation with the rolling operation. When no problems were found at rolling, they turned to the blanking operation.
As shown in figure 6, after the better part of two weeks had passed, they finally discovered what was making the detent dimensions smaller: There was a very small wrinkle on the flange due to a defect in the die.
Figure 6: Tokai Rika example, days 39 to 85
Thus, they scheduled a repair for the die on the weekend between days 70 and 71. At the same time they modified the bolt holding the pressure pad since they had found that it was coming loose. Following these changes they wrote up Intervention Report No. 2 and proceeded to collect data for a new set of limits. The process average went up to 90.88, probably because this positioning collar already had 32 days of wear prior to this new baseline period.
The record of the Tokai Rika process covers some 20 months and the full story is found in Understanding Statistical Process Control, Third Edition. Over this extended period, the Tokai Rika process was shown to be occasionally subject to the effect of assignable causes. When these assignable causes changed the process location or process variation, the process behavior chart detected these changes as shown in figures 5 and 6.
Returning to the three elements of economic operation, how did Tokai Rika’s process do? First, for the most part the process demonstrated a high degree of predictability, a fine achievement in its own right. Second, with a target of 90 the process was effectively on target over the course of the 20-month record. Lastly, with capabilities in excess of 2, there is no doubt about how to answer the question, “Is the process capable of meeting the specifications?” Hence, Tokai Rika met the requirements of economic operation for this production process.
Some important lessons from the use of process behavior charts at Tokai Rika are:
1. The detected assignable causes were worth knowing about, and action taken on these causes contributed to continual process improvement—the way data were collected and used provided the needed insight to make these improvements possible.
2. Signals on the process behavior chart were carefully interpreted to pinpoint when the process changes likely started, so that investigative effort was able to focus in on, and identify, the cause of each change.
3. The working environment at Tokai Rika enabled the discussion of, and response to, the signals of process change, allowing the charts to provide a basis for action on the process.
4. The successful identification of assignable causes was sometimes difficult, and on a couple of occasions during the 20-month record, the investigative trail ran dry; however, the working environment was able to accommodate these difficulties and disappointments.
5. Inherent to Tokai Rika’s approach to sustaining predictable and economic operation, and therefore fighting against the effects of entropy, is the three-step process found in figure 7:
Figure 7: The way Tokai Rika approached process predictability
6. Even though assignable causes are undesired, and Tokai Rika wanted to get rid of them, the company chose to operate the process without having identified and removed the effect of one such cause (see days 57 to 70 in figure 6). Some assignable causes will warrant shutting down a process, others will not, meaning that user judgment is critical.
Further to the first point on continual improvement, the Tokai Rika example reveals that successfully sustaining predictable operation also provides a means of reducing routine, common cause variation over time. At the start of the record the process’s common-cause standard deviation was 0.0148 mm, yet after the improvements, the process operated for the last 13 months with a standard deviation of 0.0100 mm. This means the process variance was reduced from 0.000219 down to 0.000100, a 54-percent reduction in process variance. This came about by following through on the signals of assignable cause variation found on the process behavior chart. Visually, this improvement appears as illustrated by the two histograms in figure 8. (The second histogram is 68% as wide as the initial histogram because a 54% reduction in variance corresponds to a width ratio of √(1 − 0.543) = 0.676.)
Figure 8: The effect of removing assignable causes upon process variation
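The arithmetic behind the variance reduction described above is simple enough to verify directly from the two standard deviations:

```python
# Common-cause standard deviation before and after the improvements (mm)
sd_before, sd_after = 0.0148, 0.0100

# Fraction of the process variance removed: 1 - (sigma_after / sigma_before)^2
var_reduction = 1 - (sd_after / sd_before) ** 2

# Ratio of histogram widths; equals sqrt(1 - var_reduction)
width_ratio = sd_after / sd_before
```

The variance reduction comes out at about 0.54 and the width ratio at about 0.68, matching the figures quoted with figure 8.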
Hence, while sustaining predictable operation, Tokai Rika learned more about its process, and by implementing the knowledge gained, it improved the process at the same time. Tokai Rika showed that a process behavior chart makes it possible to learn from the process and to improve the process while monitoring the process.
The difference between operating a process predictably and operating a process unpredictably is not a matter of having the right process, or having the right process settings, or even having the right process design. While all these have an impact upon the ability to operate a process predictably, the ultimate issue in operating a process predictably is an operational one. Does everyone involved with operating a process understand what it takes to operate it predictably? And do they have the operational discipline to do so over the long haul? The operating environment must be one that makes sustained, predictable operation possible.
Entropy is relentless. It will make each and every process unpredictable. Therefore, without a continuing program for identifying and removing the effects of assignable causes of unpredictability, there is little to no point in talking of mid- to long-term process predictability, sustained on-target operation, or process capability.
World-class quality has for years been defined as on-target operation with minimum variance. Operating on target means operating near the ideal process outcome. Operating with minimum variance means operating the process up to its current full potential; when this happens, a state of maximum consistency in operation has been achieved, and the process data will display predictable behavior (see figure 1).
So, if you get as far as meeting the requirements of economic operation outlined at the start of this article, what does it take to sustain this achievement? It takes continued predictable operation, which means a process that is operated at full potential. Any drop below operation at full potential means that predictable operation will have broken down, and assignable causes will be taking the process on walkabout. When this happens, the notions of being on-target and capable are lost, even if only temporarily. The means of regaining predictable operation, and hence also economic operation, for on-target and capable processes, is to identify the assignable causes and then to act on them, just as the Tokai Rika personnel did. This is why process behavior charts are the key to sustaining predictable and economic operation.