Inefficiency is expensive. Just ask the city of New York.

The city’s subway system isn’t known for a glowing on-time reputation, and after a study conducted by New York City’s Independent Budget Office (IBO), we know exactly how much money those transit delays cost every year.

In 2017 alone, transit issues have caused city workers to miss more than 17,000 hours of work (and they’re on their way to missing a whopping 26,000+ before year’s end). Multiply that yearly total by the median hourly rate for a New York City-based employee ($32.40), and what do you get?


Nearly a million dollars of productivity wasted, because the railways are occasionally 10 minutes behind schedule.
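The arithmetic behind that headline figure is easy to verify. The hours and wage figures below come from the article itself; nothing else is assumed:

```python
# Figures cited above: projected hours of missed work in 2017,
# and the median hourly rate for a New York City-based employee.
hours_lost = 26_000
median_hourly_rate = 32.40

annual_cost = hours_lost * median_hourly_rate
print(f"${annual_cost:,.2f}")  # → $842,400.00

# A 1% efficiency gain recovers roughly:
print(f"${annual_cost * 0.01:,.2f}")  # → $8,424.00
```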

But hey, it’s the New York subway – a complicated subterranean train system based in one of the most congested cities on the planet. How in the world would someone go about fine-tuning something so complex?

Public Transit Improvements

The answer is simple – Six Sigma. New York City transit is a network of rote systems and processes, and that’s exactly what Six Sigma’s data-driven methodology is designed to improve. It may take a while to completely overhaul a system as large as the New York railway, but consider this: if process improvement strategies can make these trains even 1% more efficient, it saves $8,400 worth of productivity.

$8,400! For a 1% improvement!

Earlier this year, an urban bus system used Six Sigma DMAIC principles to improve transportation outcomes by 20%, and the same methodology can be applied to the New York subway system.

How did they do it?

Step One: Identify Areas of Improvement – In the bus study, they systemized the actual selection of drivers. Which driver was best for which type of bus? Which driver was most suited for a specific route? Every decision they made was based in logic and reason.

Step Two: Identify Areas of Potential Risk – Every change creates risk. It’s up to you to determine if those risks are worthwhile, or if they’re severe enough to make you rethink your solutions. For example, in the bus study, matching the best drivers to the right routes might’ve uncovered a severe dearth of talent in urban bus drivers. It might frustrate or alienate the drivers, inspiring them to find jobs elsewhere. But they accepted these risks and followed through.

Step Three: Test – Once you’ve identified specific areas of improvement and risk, it’s time to gather some hard data. Tweak and adjust those two areas, run a few tests (the study specifically cites techniques such as ANOVA, logistic regression and k-means cluster analysis), and record the results.
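As a sketch of what Step Three’s number-crunching can look like, here is a one-way ANOVA computed by hand in Python. The route names and delay figures are invented for illustration; they are not data from the bus study:

```python
import random
from statistics import mean

random.seed(0)
# Illustrative data only: daily delay minutes for three hypothetical routes.
routes = {
    "A": [random.gauss(8.0, 2.0) for _ in range(30)],
    "B": [random.gauss(10.0, 2.0) for _ in range(30)],
    "C": [random.gauss(8.5, 2.0) for _ in range(30)],
}

groups = list(routes.values())
k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total observations
grand = mean(x for g in groups for x in g)

# Partition variability into between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = 0.0
for g in groups:
    m = mean(g)
    ss_within += sum((x - m) ** 2 for x in g)

# F = (between-group variance) / (within-group variance).
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f}")  # a large F suggests the route means genuinely differ
```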

Step Four: Adjust Your Approach Based on Your Results – There’s not a one-size-fits-all solution to something as big as New York City public transportation, so it’s important to adjust testing based on the results you get.

And that’s it. Four steps.

Remember, the goal isn’t perfection. Any improvement at all represents a significant gain for the city, so as long as the process keeps improving, you should continue testing and evaluating the results.

9 ways to secure team participation in process improvement

One of the top challenges process practitioners face is creating and retaining engagement in process improvement. Many process improvement leads struggle to get business teams to use processes and, even better, to suggest improvements to them.

In fact, in research conducted by Promapp, 44% of companies said that few to none of their processes are used by employees. This is no doubt linked to the fact that 41% of companies admitted that their processes are not clear and helpful, and only 61% of companies even had their processes documented.

Let’s assume you’ve passed the first hurdle – you’ve captured your processes, and made them as user friendly and as accessible as possible. How do you make sure process improvement efforts don’t stop there?

The end goal of process improvement shouldn’t be process documentation, it should be engaging everyone across the organization in an ongoing, collaborative effort to improve how they do what they do, every day. This requires processes to continually evolve and improve. 

Organization-wide participation and engagement is a critical ingredient in gaining the greatest return from your business process management efforts. So how do you get and keep teams engaged in process improvement?

We asked 300 process professionals to share the top tips and tricks they use to create and retain engagement within their teams. Here’s what they recommended:

  1. Communicate your process improvement initiatives
    Establish a plan to keep your process management efforts top-of-mind with staff. Use a variety of vehicles, from emails to newsletter articles to lunchroom posters, to maintain consistent communications. Some companies even create role plays (or have gone so far as to produce a video or animation) to demonstrate the benefits of easy-to-follow processes for both their staff and customers.

It’s also important to share the communications workload so that it’s easier to manage and maintain. One way to do this is by finding process improvement champions who are willing to take turns sharing a ‘Tip of the Week’ with users.

  2. Recognize the efforts of your process heroes
    To maintain interest among users, it’s important to give recognition where recognition is due. That can mean instituting easy-to-run acknowledgements like the user of the month, the most innovative improvement suggestion, or the process of the week.

    Some companies have set up a Heroes and Villains (top users versus infrequent users) leaderboard. Others do regular announcements of their new Certified Process Champions to the rest of the organization.

  3. Upskill your employees
    Ensure that workers have proper training, ongoing support, and the resources they need to get involved with continuous improvement initiatives.

Train staff right from the start as part of new hire induction so that your expectations around process management discipline, as well as their expectations, are clear. For ongoing support, some businesses hold drop-in sessions during which users can have their questions answered by a process champion.

  4. Promote having fun
    Recognize that staff engagement in process improvement can be difficult to maintain.  Make an effort to proactively address this challenge in a fun way.

Many companies have appealed to people’s competitive instincts by holding competitions, both within teams and across the entire organization. Some businesses have even used gamification – process sprints or virtual scavenger hunts, with clues hidden within processes – to make process improvement fun.

A small incentive can also drive motivation and participation. To encourage staff, some businesses have introduced process improvement incentives, such as pizza or ice cream parties, movie ticket giveaways or, in some cases, cash bonuses.

  5. Lead from the front
    There is a lot to be said for senior management buy-in, but you also need ‘process bulldogs’ on the ground to lead the charge for process improvement.

With that in mind, it is important not just to involve the organization’s leadership team in process improvement communications, but to make sure their support is visible to the entire operation. It also helps to build up a strong champion or super-user network so that momentum can be maintained in all areas of the business.

  6. Encourage collaboration
    Process improvement is a team effort so it is essential to let everyone know “we’re in this together”.

To demonstrate this, some businesses hold cross-functional process improvement brainstorming sessions to get teams thinking outside the box about process improvement. These sessions can also serve as an opportunity to work through process pain points together in order to jointly come up with the best improvement ideas.

  7. Integrate into business as usual
    Embed process information into daily activities and other business systems, like the company intranet, to drive employee engagement.

To drive process usage, some businesses host essential documents that everyone needs to access, exclusively in their business process management tool. Other organizations tie process into personal and team performance outcomes and expectations including KPIs, job descriptions, and personal development programs.

  8. Make staff accountable
    Give staff the autonomy and resources they need to map, review, and ultimately own their own processes and improvement ideas. This will have a major impact on process engagement.

To empower staff to be accountable, many organizations have set up a dedicated time slot for completing process related tasks. Some businesses also provide guidelines for dealing with feedback/improvement suggestions including suggested response times from process owners.

  9. Understand there will always be room for improvement
    To maintain engagement with process improvement initiatives, it is essential for organizations to recognize that the work will never be done.  It really is a journey.

    Be open to listening to users’ suggestions and concerns. And if no one is talking about process improvement, ask for their opinions. Businesses should consider conducting their own surveys, to hear what frontline teams experience and identify improvement opportunities. These conversations will prompt action plans that are likely to help focus process improvement efforts and produce valuable results for your organization.

The good news is that although it can be challenging at times, driving engagement with process improvement is not impossible. These examples from 300 process professionals – from simple communication to system integration – demonstrate that it is possible for teams to be not only engaged, but excited about process improvement.

Overview of Effective Survey Design

The survey is one of the most important data collection tools in the arsenal of a Six Sigma practitioner. There is no lack of research literature on the principles and design of effective surveys. While the surveys conducted by academics and certain research institutes often reflect impeccable design, there are innumerable cases in which the results of surveys conducted in haste are rejected due to poor design. Conducting a survey during a Six Sigma project can be a daunting task. Rigid timelines often lead to poorly designed surveys, which lead to rejection of the results.

This article provides a brief overview of the intricacies involved in a survey design – without getting into complex statistical theories.

Designing a survey is an iterative process as shown in Figure 1.

Figure 1: An Overview of Survey Design

Measuring the Construct

The critical aspects of any survey design are the underlying construct, framing of questions, validity and reliability, and the sampling methodology. A survey is done to measure a construct – an abstract concept. Before designing a survey, the construct must be clearly defined. Once the construct is clear, it can be broken down into different dimensions and each dimension can then be measured by a set of questions.

Consider an example of a human resources (HR) department that is trying to study the attitude of employees toward a newly launched appraisal process. Assume that with some research, the HR department finds that the major dimensions of the attitudes toward the appraisal policy are “transparency,” “evaluation criteria,” “workflow” and “growth potential.”

Framing the Questions

After determining the dimensions, a set of questions needs to be written to measure said dimensions. Questions can be categorized into two groups: classification and target. Classification questions cover the demographic details of the respondent, which can be used for grouping and identification of patterns during analysis. Target questions refer to the construct of the survey. Table 1 includes tips for avoiding common mistakes while wording the questions and selecting their order.

Table 1: Tips for Question Development and Ordering

Content
1. The question must be linked to a dimension of the construct.
2. The question should be necessary, in that it helps the decision maker make a decision.
3. The question should be precise and should not seek multiple responses.
4. The question should contain all the information needed to elicit an unbiased response.
5. The question should not force the participant to respond regardless of knowledge and experience.
6. The question should not lead the respondent to generalize or summarize something inappropriately.
7. The question should not ask something sensitive or personal that the respondent may not wish to reveal.

Wording
1. The question should not include jargon, technical words, abbreviations, symbols, etc. Use simple language with shared vocabulary.
2. The question should be worded from the respondent’s perspective, not the researcher’s.
3. The question should not assume prior knowledge or experience inappropriate for the given situation.
4. The question should not lead the respondent to provide a biased response.
5. The question should not provoke the respondent by using critical words.

Sequencing
1. Target questions should be asked at the beginning, followed by classification questions at the end.
2. Questions should be grouped logically. Within each group, the wording and scale should be similar.
3. Complex, sensitive and personal questions should not be asked at the beginning.

Response Format

Another important aspect of a survey questionnaire is the response format. With regard to response format, questions fall into two categories: structured and unstructured (Figure 2).

Figure 2: Question Response Formats

Structured questions provide closed-ended options for the respondent to choose from, while unstructured questions give the respondent a free choice of words. Structured questions are easy to analyze, but the choices provided must be mutually exclusive and collectively exhaustive. Unstructured questions, on the other hand, are difficult to analyze – limit their use in the questionnaire. Structured questions can be further classified by measurement scale. The choice of measurement scale depends upon the objective of the question and, in turn, influences the analysis and interpretation of the responses. Table 2 describes the different types of measurement scales, the characteristics of the data they generate, their purpose and their impact on analysis.

Table 2: Questions Defined by Measurement Scale

Nominal
- Characteristics of data: Discrete data with no sense of magnitude (can be binary or multinomial).
- When to use: Classification of a certain characteristic, event or object; can be dichotomous (only two choices) or multiple choice.
- Implications for analysis: Only the mode can be calculated as a measure of central tendency. Cross tabulation and chi-square tests can be used for analysis. Arithmetic operations are not possible on a nominal scale.

Ordinal
- Characteristics of data: Discrete data with a sense of order/rank.
- When to use: Rating or ranking a particular factor on a numeric or verbal scale where the distance between alternatives is immaterial.
- Implications for analysis: Median, quartiles, percentiles, etc. can be used for central tendency. Various nonparametric tests can be used for analysis. Arithmetic operations are not possible.

Interval
- Characteristics of data: Continuous data with a sense of order and distance.
- When to use: Rating a particular factor on a numeric or verbal scale where the distance between alternatives is important. A Likert scale is a type of interval scale with a neutral value in the middle and extreme values at both ends.
- Implications for analysis: Mean or median can be used as a measure of central tendency depending upon the skewness of the data. Parametric tests such as the t-test, F-test, ANOVA, etc. can be used. All arithmetic operations are possible except multiplication and division.

Ratio
- Characteristics of data: Continuous data with a sense of order, distance and origin.
- When to use: Questions pertaining to specific measurements, such as “number of incidents per month.”
- Implications for analysis: All statistical techniques are possible on data generated by a ratio scale. All arithmetic operations are possible.
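The practical consequence of Table 2 is that the scale dictates the summary statistic. A minimal illustration, using made-up survey responses:

```python
from statistics import mean, median, mode

# Hypothetical survey responses illustrating the scale types above.
nominal = ["bus", "subway", "bus", "walk", "subway", "bus"]  # categories
ordinal = [1, 2, 2, 3, 4, 2, 5, 3]                           # ranks on a 1-5 scale
ratio   = [3, 0, 2, 5, 1, 4]                                 # incidents per month

print(mode(nominal))    # nominal data: only the mode is meaningful → bus
print(median(ordinal))  # ordinal data: median, not mean → 2.5
print(mean(ratio))      # ratio data: all arithmetic is valid → 2.5
```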

Evaluating Validity and Reliability

After designing the questionnaire, the next step is to establish its validity and reliability. Validity is the degree to which a survey measures the chosen construct whereas reliability refers to the precision of the results obtained. Table 3 provides a brief description of the considerations of validity and reliability, and how they can be evaluated. The table also states when a particular evaluation technique can be applied.

Table 3: Validity and Reliability

Validity
- Representational validity: the degree to which the construct is adequately and clearly defined in terms of its underlying dimensions and their corresponding operational definitions. Consider again the attitudes toward the newly launched appraisal process: representational validity can be established by having the questionnaire objectively evaluated by HR experts. Face validity can be established by assessing the suitability of the questions, their wording, order and measurement scale. Content validity can be established by assessing the adequacy of the dimensions and corresponding questions in measuring the attitude. (Evaluated before administering the survey.)
- Criterion validity: the correlation between the result of a survey and a standard outcome of the same construct. In the example, criterion validity can be established by comparing an employee’s attitude score with his or her performance rating. If the scores are compared to a current performance rating, concurrent validity has been established. The other part of criterion validity is predictive validity, which can be established only in the long run by comparing the survey results with the long-term performance of employees. (Evaluated after the results are obtained.)
- Construct validity: the extent to which the questionnaire is consistent with existing ideas and hypotheses about the construct being studied. In the HR example, construct validity can be established in two ways: 1) correlate the scores obtained with those of another survey on attitudes toward appraisal (convergent validity) and 2) correlate the scores obtained with those of another survey on employees’ attitudes toward the earlier appraisal policy (discriminant validity). Factor analysis can also verify whether the results support the theory-based selection of dimensions and corresponding questions. (Evaluated after the results are obtained.)

Reliability
- Stability: the consistency of results obtained by administering the survey to the same respondents repeatedly. In practice, stability is difficult to establish. The main problem is the choice of interval before administering the survey again: too short and respondents may answer from memory, too long and the results may be confounded with actual change in the construct. Choose the interval with all the factors that influence the construct in mind. (Evaluated after the results are obtained.)
- Equivalence: the degree of consistency among observers or interviewers. This form of reliability is more appropriate for interviews and telephone surveys. (Evaluated after the results are obtained.)
- Internal consistency: the consistency among the questions used to measure the construct. Internal consistency can be measured by dividing the questionnaire into two equal halves and measuring the correlation between their scores. (Evaluated after the results are obtained.)
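The split-half check described for internal consistency is easy to sketch. The scores below are hypothetical, and the Spearman-Brown correction (a standard companion to the split-half correlation, not mentioned in the table) is included to estimate full-test reliability:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical data: each respondent's total score on the odd-numbered
# questions versus the even-numbered questions of the same survey.
odd_half  = [12, 18, 9, 15, 20, 11, 16]
even_half = [11, 17, 10, 14, 19, 12, 15]

r = pearson(odd_half, even_half)
# Spearman-Brown correction: estimates full-length reliability from half-test r.
reliability = 2 * r / (1 + r)
print(f"split-half r = {r:.2f}, corrected reliability = {reliability:.2f}")
```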

Sampling Methodology

After the questionnaire is designed, it is time to determine the appropriate sampling methodology. Sampling saves time and cost in situations where it is not feasible to reach the entire population, and it is part of both enumerative and analytical studies. An enumerative study measures the characteristics of the population under study, while an analytical study aims to reveal the root cause of an observed pattern. In other words, an enumerative study asks, “How many?” while an analytical study asks, “Why?” This distinction influences the sample design as well as the interpretation of the results. For an enumerative study, a random sample is taken repeatedly from the population; for an analytical study, a random sampling frame is selected repeatedly from the population and a sample is then selected randomly from that frame. During analysis and interpretation, the scores of analytical studies include standard errors while those of enumerative studies do not.

Sampling can be broadly classified into two categories: random probability and non-probability. Random probability sampling means that each element of the population or sampling frame has an equal, non-zero probability of being selected. Only within random probability sampling can the standard error (the measure of variation in the sample due to sampling error) be estimated, and only then can a confidence estimate be made for a sample statistic. Note that as sample size increases, the standard error shrinks, because its calculation uses the sample size in the denominator.

Non-probability sampling means that the sample is selected based upon judgment or convenience. Keep this difference in mind while selecting and implementing a survey methodology: a random probability methodology can inadvertently end up selecting elements on a non-probability basis. Suppose a marketing team conducts a survey in which team members select people at various locations and ask them to complete the questionnaire. Bias can result both from how the team members select respondents and from respondents choosing not to respond. These situations must be thought through while planning data collection, before it comes time to interpret the results.
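The claim that standard error shrinks with sample size follows directly from the formula SE = s / √n. A quick illustration with an assumed standard deviation of survey scores:

```python
import math

sample_sd = 4.0  # hypothetical standard deviation of survey scores

# Standard error of the mean: SE = s / sqrt(n), so quadrupling the
# sample size halves the standard error.
for n in (25, 100, 400):
    se = sample_sd / math.sqrt(n)
    print(f"n = {n:4d}  SE = {se:.2f}")
```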

Refining and Administering the Survey

After deciding on the sampling methodology, the survey is administered. It should also be reassessed for validity and reliability and refined further. A survey is often pre-tested before being rolled out on a large scale; this helps refine it before launch, though it depends upon the available time and budget. In general, pre-testing a survey is recommended. Once the results are received, the data is analyzed and interpreted using various descriptive and inferential statistical techniques.


It is highly recommended that practitioners understand the nuances of surveys before embarking upon any such assignment. Use this overview as a starting point for refining survey designs, even under tight timelines.



Practicing Due Diligence: Signs of a Legitimate Professional

When searching for your first, or next, Six Sigma course, it’s important to be on the lookout for illegitimate practitioners. These practitioners may talk you into paying more for a course than you should, falsely advertise their program, or fail to deliver on their certification promises. Whatever the reason, there will always be practitioners who will try to undercut you and your success. In our recent articles, we discussed the signs of a legitimate practitioner and what they should offer. Now, as a Six Sigma employee, it’s time to assess the signs of a legitimate professional. Here’s what you should expect from your next hire, manager, or other Six Sigma professional.

Legitimate Professionals have:

Confidence

Although some may use confidence to impair your judgment of their work, a confident professional is key to any project. As a Six Sigma professional, you are certified in the most innovative and advanced business process improvement methods available. It’s important not only to use those methods but to have confidence when you do. A legitimate professional will exhibit confidence when carrying out a task, managing a project, or making a tough decision. Additionally, this professional will be steadfast and sound-minded when presenting new ideas to team members and management.

Organization

Every legitimate professional practices organization. Whether managing a multi-variable process or just their desk, these employees should work in an organized manner. While organization comes naturally to some individuals, others have to learn it. Although this takes time, prioritizing organization is key to successfully leading a team, project, or other task. It also keeps the team’s goal clear and the path toward it visible.

Thoroughness

Somewhat similar to organization, every Six Sigma professional should practice thoroughness. Whether it’s reviewing project updates, going over past notes, or revising estimates, thoroughness is an important trait for legitimate professionals. When you make a habit of checking yourself and your data, you decrease the risk of errors and, thus, delays. Additionally, thoroughness helps ensure that every project and department follows its predetermined goals and stays on track.

Up to Date Certification

There’s nothing worse than underutilizing a professional. Sometimes that’s impossible to avoid, but organizations should be proactive about using employees where they fit best and are needed most. The same goes for Six Sigma professionals. If you or a colleague has been in the same position for multiple years, it might be time to update your certification: after months in a Green Belt role, it’s natural to pursue Black Belt certification. Likewise, it’s important that you and your employees maintain certifications and take on challenges regularly; this keeps knowledge fresh and skill sets in practice.


It’s easy to get bogged down with the negative aspects of illegitimate practitioners and unprofessional employees. However, don’t let that stop you from operating your organization, team, or corporation at its maximum potential. While on the lookout for hindrances to your organization, keep an eye open for these signs of legitimate employees.

How to Manage Difficult Team Members

Let’s face it: we all have to deal with difficult team members from time to time. Whether it’s little issues, such as not showing up to work on time, or bigger ones, like failing to complete tasks on time, it’s important to manage the situation effectively. Six Sigma focuses on providing innovative improvement methods for business processes within your organization. However, these methods can also be applied to difficult team members. If you’re having trouble managing your project team members, here’s what you can do to get them back on track.

When Facing Difficult Team Members, Use DMAIC

DMAIC can be considered the backbone of Six Sigma methodologies. Its principle is simple: define a problem and find ways to effectively resolve it. If you work on Six Sigma projects, chances are you have run into DMAIC multiple times – and rightfully so. Yet this method is not solely for business processes; it works for employees too. Every employee is different and thus has different work habits. If you have difficulty managing a team member, look to DMAIC for help. First, assess the situation: what exactly is going wrong? Once you have defined the problem, work with the employee to resolve it. It’s important to keep your focus on providing constructive feedback and assistance when needed.

Listen to Their Voices With DFSS

Design for Six Sigma (DFSS) is another excellent approach to managing difficult employees. This methodology answers two voices: that of the process and that of the customer. In this case, however, you can adapt it to answer the voice of the role and the voice of the employee. First, see what your employee’s role requires. Do they oversee a project? Do they collect and analyze data? Understanding this will help outline where the team member is falling short in the role. Then consult with your employee to see what might be causing the issues at hand. Sometimes simply speaking directly to a team member can help change their work habits and realign their focus with the tasks at hand.

Get to the Bottom with Root Cause Analysis

Sometimes, the problem you see is not the actual cause of the error. When managing a team, it’s important that information is relayed accurately and tasks are completed on time. However, things do not always go according to plan, and issues will arise. For example, if an employee fails to deliver a project status update on time, it’s natural to believe the fault lies with that employee. However, a closer look may reveal more: the employee may have sent the status report but typed the wrong email or postal address. In other words, being quick to blame is often unproductive. If you face similar issues, employ the methodology of Root Cause Analysis. This method assesses a problem within a process – in this case, within a team – and finds its direct cause. Once you know what is causing the issues at hand, it’s much easier to mitigate them.

Controlling Change in IT Departments Using DMAIC

During a Six Sigma project at an information technology (IT) department, there are many good reasons why a process or a system needs to change. There also are a few bad reasons – bad, but unavoidable. It’s up to the practitioners to decide how to transform bad reasons into good ones, and how to prevent good reasons from going bad, by implementing proper change control mechanisms.

Change control is a method used for requesting and managing changes to work processes that are created or maintained by the organization. Change control helps facilitate communication about requested process modifications among the team members, provides a common process for resolving requested changes and reported problems, and reduces the uncertainty around the existence, state and outcome of a change that has been requested.

Having such a system in place reduces the possibility that faults or unnecessary changes will be introduced to a process without forethought, and lessens the likelihood of undoing positive changes made by other users. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities and cost-effective utilization of resources involved in implementing change.

Applying the DMAIC Roadmap

One of the best ways to ensure full benefit of change control is to employ the power of DMAIC (Define, Measure, Analyze, Improve, Control). For IT departments, following a DMAIC roadmap for a process change request – such as the upgrading of databases and server hardware – can provide organizations with the necessary framework to ensure that all changes are made in the most efficient manner possible.

The following change control steps, which mirror the DMAIC path, should be used by IT departments while implementing any type of procedural changes.

1. Define need for change control – First off, the person responsible for effecting change in the IT department (the change owner) needs to identify the basic requirements of change control. These requirements can be either an improvement in performance or the replacement of existing processes. The change owner needs to identify the scope of change control in terms of practical problems in existing processes and document a proposed solution on a change request template.

To build a strong business case, this person also must enumerate the reasons for the upgrade of the server (the process change being used in this example), such as higher storage capacity, faster processing speeds and tighter security. This Define step is critical for the change owner to demonstrate the value of the project and convince the organization to provide its full support for the proposed change.

2. Request change control and measure performance – Next, the change owner should list these critical criteria while making the change request.
a. Provide a detailed description of the change – Specifications of the new server
b. Describe the type of change – Hardware upgrade
c. List the reasons for the change – Higher storage capacity, a faster application process, better security
d. Describe how the newly proposed changes will replace existing procedures – How will hardware be changed? How will data on current hardware be migrated to new hardware? What are the design and installation qualification protocols?
e. Technical evaluation – The head of the IT department should describe how this will be carried out in terms of a risk-to-benefit ratio. The operation qualification of the upgraded server also must be performed to check for capability.
f. List the impacts – What are the human manpower requirements for ensuring that this proposed change is made? What will the financial effects be as a result?
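The request criteria above (a–f) can be captured in a simple structured record. This sketch uses hypothetical field names, not any standard change-management schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    description: str                                 # (a) detailed description of the change
    change_type: str                                 # (b) e.g. "hardware upgrade"
    reasons: list = field(default_factory=list)      # (c) reasons for the change
    migration_plan: str = ""                         # (d) how existing procedures are replaced
    technical_evaluation: str = ""                   # (e) risk-to-benefit summary
    impacts: list = field(default_factory=list)      # (f) manpower and financial effects

# Hypothetical request for the server upgrade used as the running example
server_upgrade = ChangeRequest(
    description="Replace database server with higher-capacity hardware",
    change_type="hardware upgrade",
    reasons=["higher storage capacity", "faster processing", "tighter security"],
)
print(server_upgrade.change_type)
```

A structured record like this is what gets routed to management and quality personnel for approval, so every reviewer sees the same six criteria.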

This change request should be sent to both the organization’s management and its quality personnel for approval. The priority of the above criteria will be decided by management.

3. Analyze the proposed change control request – Once the proposed change control is approved by management, it also should be analyzed by all other departments that will be affected by the proposal. This impact assessment and risk analysis should be based on – but not limited to – the criteria above (a to f). The results of this impact assessment and risk analysis should be reported using the levels “low,” “medium” or “high” by operational qualification.

The analysis can be sent back to the change owner in case it is rejected or if more clarification is required for the criteria. All concerns and suggestions for rejection or clarification should be noted on the change request form. After these concerns are addressed, the change request should be sent back once more to management for approval.
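One simple way to roll the per-department "low/medium/high" ratings into an overall risk level is to take the most severe rating reported by any department. This aggregation rule is an assumption for illustration, not a prescribed part of the method:

```python
# Map the qualitative ratings to a severity order
LEVELS = {"low": 1, "medium": 2, "high": 3}

def overall_risk(department_ratings):
    """Overall risk is the most severe rating reported by any department."""
    return max(department_ratings.values(), key=LEVELS.get)

# Hypothetical impact assessment responses from three departments
ratings = {"IT": "medium", "Quality": "low", "Operations": "high"}
print(overall_risk(ratings))  # "high"
```

Taking the maximum is deliberately conservative: a single "high" rating anywhere is enough to send the request back for clarification.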

4. Improve, confirm and test the statistical solution – Upon final approval by authorized personnel, the change request should be subject to performance qualification. Before the proposed change can be implemented – in this case, an upgraded server – the data on the current server needs to be backed up. Only then can the change be implemented.

To verify the new upgrade against the current server system, practitioners should conduct a pilot run and analyze the process; this test also should include the migration of legacy data. The new change can then be evaluated by performing the technical tests specified in the protocol.

Statistical evaluation of all affected parameters should be performed against the outputs of the process. If the implemented change shows the expected improvements in performance, then an implementation plan must be developed and enacted. After completion of this procedure, a note should be made on the change request that impacts have been assessed and that the change is ready for implementation in routine practice.

5. Set up a control system with routine verification – Controlling and maintaining the change is essential to the project’s success. Newly implemented changes should be assessed at regular intervals to ensure that they are running effectively. In case of future personnel changes, it is important to provide proper training to all personnel for handling the new process.

DMAIC is a familiar method for effecting change in IT performance. To ensure that change is controlled effectively, however, practitioners should remember that the DMAIC roadmap also can be employed as an organizational resource. Six Sigma provides the tools to improve capability and reduce defects in any process, and change control can benefit from this structured approach.

Using Six Sigma to Reduce Pressure Ulcers at a Hospital

Since 2001, Thibodaux Regional Medical Center (TRMC) in Louisiana has applied Six Sigma and change management methods to a range of clinical and operational issues. One project that clearly aligned with the hospital's strategic plan was an initiative to reduce nosocomial, or hospital-acquired, pressure ulcers, because this is one of the key performance metrics indicating quality of care.

Although the pressure ulcer rate at the medical center was much better than the industry average, the continuous quality improvement data detected an increase between the last quarter of 2003 and the second quarter of 2004.

In October 2004, a Six Sigma project to address this issue was approved by the hospital’s senior executives. A team began to clarify the problem statement. Their vision was to be the “Skin Savers” by resolving issues leading to the development of nosocomial pressure ulcers. The project team included a Black Belt, enterostomal therapy registered nurse (ETRN), medical surgical RN, ICU RN, rehab RN and RN educator.

Scoping the Project

Through the scoping process, the team determined that inpatients with a length of stay longer than 72 hours would be included, while pediatric patients would be excluded. The project Y was defined as the nosocomial rate of Stage 2, 3 and 4 pressure ulcers calculated per 1,000 patient days. Targets were established to eliminate nosocomial Stage 3 and Stage 4 pressure ulcers and reduce Stage 2 pressure ulcers from 4.0 to less than 1.6 skin breaks per 1,000 patient days by the end of the second quarter of 2005.
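The project Y is a simple rate. As a worked example (the counts below are illustrative, not the hospital's actual data):

```python
def ulcer_rate_per_1000(ulcer_count, patient_days):
    """Nosocomial pressure ulcers per 1,000 patient days."""
    return ulcer_count * 1000 / patient_days

# Illustrative quarter: 10 Stage 2 ulcers over 2,500 qualifying patient days
print(ulcer_rate_per_1000(10, 2500))  # 4.0, the baseline rate cited above
```

Hitting the target of fewer than 1.6 skin breaks per 1,000 patient days would mean, on the same illustrative census, fewer than 4 Stage 2 ulcers in the quarter.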

The team developed a threats and opportunities matrix to help validate the need for change (Table 1). They encountered some initial resistance from staff, but were able to build acceptance as the project began to unfold.

Table 1: Threats and Opportunities Matrix

Short Term
  Threats: Increase length of stay; increase costs; increase medical complications to patient
  Opportunities: Improve quality of care; decrease medical complications to patient

Long Term
  Threats: Decrease patient satisfaction; increase morbidity rate; decrease physician satisfaction; increase number of lawsuits; decrease reimbursement; loss of accreditation
  Opportunities: Improve preventative care measures; improve hospital status/image; increase profitability; improve customer satisfaction

Measurement and Analysis

During the Measure phase, the team detailed the current process, including inputs and outputs. Using cause and effect tools, process steps having the greatest impact on the customer were identified as opportunities for improvement. The team also reviewed historical data and determined that overall process capability was acceptable, but that the sub-processes had a great deal of room for improvement. Improving these sub-processes would positively affect the overall process and further improve quality of care.

Measurement system analysis on the interpretation of the Braden Scale was performed to verify that results obtained by staff RNs were consistent with the results obtained by the enterostomal therapy RN, because this is the tool used to identify patients at risk of developing a pressure ulcer. This analysis indicated that the current process of individual interpretation was unreliable and would need to be standardized and re-evaluated during the course of the project.

A cause and effect matrix was constructed to rate the outputs of the process based on customer priorities and to rate the effect of the inputs on each output (Figure 1). The matrix identified areas in the process that have the most effect on the overall outcome, and consequently the areas that need to be focused on for improvement (Table 2).

The team identified several critical Xs affecting the process:

  • Frequency of the Braden Scale – The Braden Scale is an assessment tool used to identify patients at risk of developing pressure ulcers. Policy dictates how frequently this assessment is performed.
  • Heel protectors in use – Heel protectors are one of the basic preventative treatment measures taken to prevent pressure ulcers.
  • Incontinence protocol followed – Protocol must be followed to prevent against constant moisture on the patient’s skin that can lead to a pressure ulcer.
  • Proper bed – Special beds to relieve pressure on various parts of the body are used for high-risk patients as a preventative measure.
  • Q2H (every two hours) turning – Rotating the patient’s body position every two hours is done to prevent development of pressure ulcers.

Figure 1: Cause-and-Effect Matrix

Table 2: Data Analysis (% defective and Z score for the overall process and for the Braden Scale frequency, proper bed and Q2H turning sub-processes; the numeric values are not reproduced here)
Data analysis revealed that the bed type was not a critical factor in the process, but the use of heel protectors, incontinence protocol compliance, and Q2H turning were critical to the process of preventing nosocomial pressure ulcers. The impact of the Braden Scale frequency of performance was not identified until further analysis was performed (Figure 2).

Figure 2: One-Way Analysis of Means for Sub-Process Defects

Evaluating data specific to at-risk patients, the team separated the population that developed nosocomial pressure ulcers from the population that did not have skin breakdowns. The Braden Scale result at the time of inpatient admission for each population was analyzed for its effect on development of a nosocomial pressure ulcer. One unexpected finding was that the admit Braden Scale result was higher (suggesting lower risk) for patients who went on to develop nosocomial pressure ulcers than for those who did not, showing that at-risk patients were not being identified in a timely manner, which delayed the initiation of necessary preventative measures.

The team then looked at defects for Braden Scale frequency of performance for each population of patients using a chi square test. They found the frequency of Braden Scale performance did have an effect on the development of nosocomial pressure ulcers. This was confirmed with binary logistic regression analysis (Table 3).
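A chi-square test like the one the team used can be run on a 2×2 table of defect counts. The counts below are invented for illustration; the real analysis used the hospital's data:

```python
from scipy.stats import chi2_contingency

# Rows: patients who did / did not develop a nosocomial pressure ulcer
# Columns: Braden Scale performed on schedule vs. defect (missed or late)
observed = [
    [12, 18],   # ulcer group (hypothetical counts)
    [85, 25],   # no-ulcer group
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value supports the conclusion that Braden Scale
# frequency defects are associated with ulcer development.
```

Binary logistic regression, which the team used for confirmation, adds what the chi-square test cannot: an odds ratio quantifying how much a defect raises the risk.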

Table 3: Binary Logistic Regression Analysis (odds ratios for no defects, Braden Scale defects, bed defects and Q2 turn defects; the numeric values are not reproduced here)
The most significant X is the Braden Scale frequency of performance. This analysis confirmed the need to increase the frequency of Braden Scale performance to identify at-risk patients.

Recommendations for Improvement

During the Improve phase, recommended changes were identified for each cause of failure on the FMEA with a risk priority number of greater than 200. Some of the recommendations include:

  • Frequency of Braden Scale performance to be increased to every five days
  • Braden Scale assessment in hospital information system (HIS) to include descriptions for each response
  • Global competency test on interpretation of Braden Scale to be repeated annually
  • Prompts to be added in HIS to initiate prevention/treatment protocols
  • ET Accountability Tracking Tool to be issued for non-compliance with prevention and treatment protocols as needed

The Braden Scale R&R was repeated after improvements were made on the interpretation of results. The data revealed an exact match between RNs and the ETRN 40 percent of the time, and RNs were within the acceptable limits (+/– 2) 80 percent of the time. Standard deviation was 1.9, placing the results within the specification limits. The data indicated that the RNs tend to interpret results slightly lower than the ETRN, which is a better side to err on because lower Braden Scale results identify patients at risk of developing pressure ulcers.
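The agreement figures reported above can be computed directly from paired scores. The score pairs below are hypothetical, chosen only to show the arithmetic:

```python
def agreement_stats(rn_scores, et_scores, tolerance=2):
    """Percent exact matches and percent within +/- tolerance between two raters."""
    pairs = list(zip(rn_scores, et_scores))
    exact = 100 * sum(a == b for a, b in pairs) / len(pairs)
    within = 100 * sum(abs(a - b) <= tolerance for a, b in pairs) / len(pairs)
    return exact, within

# Hypothetical Braden Scale readings from a staff RN and the ETRN
rn = [14, 12, 18, 16, 11]
et = [14, 13, 21, 17, 11]
exact_pct, within_pct = agreement_stats(rn, et)
print(exact_pct, within_pct)  # 40.0 80.0
```

With these sample pairs the arithmetic reproduces the reported 40 percent exact-match and 80 percent within-tolerance figures.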

The Control Phase

Another round of data collection began during the Control phase to demonstrate the impact of the improvements that had been implemented. A formal control plan was developed to ensure that improvements would be sustained over time, and the project was turned over to the process owner with follow-up issues documented in the Project Transition Action Plan.

The team implemented multiple improvements, including compilation of a document concerning expectations for skin assessment with input from nursing and staff. They also gave a global competency test on interpretation of the Braden Scale, which will be repeated annually. The Braden Scale frequency was increased to five days, and they corrected the HIS calculation to trigger clinical alerts for repeat of the Braden Scale. Prompts were added for initiating the Braden Scale, and monthly chart audits were developed for documentation of Q2H turning. A turning schedule was posted in patient rooms to identify need and document results of Q2H turning of patient. Additional solutions included the following:

  • ETRN to attend RN orientation to discuss skin issues
  • Revise treatment protocol to be more detailed
  • Wound care products to be reorganized on units
  • Unit educators to address skin issues during annual competency testing
  • CNA and RN to report at shift change to identify patients with skin issues
  • Task list to be created for CNAs
  • ET accountability tracking tool to be issued for non-compliance with prevention and treatment protocols as needed

Results and Recognition

Since this was a quality-focused project, the benefits are measured in cost avoidance and an overall improved quality of care. A 60 percent reduction in the overall nosocomial pressure ulcer rate resulted in an annual cost avoidance of approximately $300,000.

To make sure their initiatives are producing a positive impact on the patient care environment, the hospital continuously measures patient and employee satisfaction through Press Ganey. Inpatient satisfaction is consistently ranked in the 99th percentile and employee satisfaction in the 97th percentile. TRMC also has received recognition in the industry for their achievements, including the Louisiana Performance Excellence Award for Quality Leadership (Baldrige criteria), Studer Firestarter Award and Press Ganey Excellence Award.

“This project is a perfect example of the need to verify underlying causes using valid data, rather than trusting your instincts alone,” said Sheri Eschete, Black Belt and leader of the pressure ulcer project at TRMC. “Six Sigma provided us with the tools to get to the real problem so that we could make the right improvements. There had been a perception that not turning the patients often enough was the issue, but the data revealed that it was really the frequency of the Braden Scale. Leveraging the data helped us to convince others and implement appropriate changes.”

The nosocomial pressure ulcer rate is monitored monthly as one of the patient-focused outcome indicators of quality care. The results are maintained on the performance improvement dashboard (Figures 3 and 4).

Figure 3: Stage 3 and 4 Nosocomial Ulcers

Figure 4: Stage 2 Nosocomial Ulcers

The Five Fundamental Assumptions of Six Sigma

Some people are angry and upset with Six Sigma as a problem-solving discipline. Individuals routinely call for the next new quality discipline, or argue why Six Sigma will not work in this case or that. Some of this is one-upmanship – this approach is better; some of it is sour grapes – this approach is simpler, easier or less intimidating. Most of these arguments, however, indicate a lack of understanding of just what Six Sigma is intended to do for an organization.

To be fair, much of this dialogue is healthy. Six Sigma will never solve all problems, and a smart leader knows he or she needs a complete set of quality approaches – not just one. Any continuous improvement process must be customized to the environment in which it is employed and must evolve as that environment changes. By far, however, the main reason for the clamor for new problem-solving methods is that Six Sigma has become more complex than necessary.

What follows in this article is a description of Six Sigma reduced to its fundamental assumptions, or theorems. If these simple concepts are understood, all the tools, all the tollgate deliverables, and all the statistics and jargon are put in their proper supporting roles. Rather than mastering all the tools and attempting to build the program from the bottom up, this approach depends on a top-down, theoretical foundation. Rather than a cookbook approach, Six Sigma should be seen as a mathematical proof.

Fundamental Assumption 1

Customers only pay for value.

It seems so simple. Of course the customer only pays for value, but most businesses define value incorrectly. The products and services being sold are not what confer value to the customer. Products and services are vehicles to deliver value. Value is only created when a specific need the customer has is fulfilled. If a customer need is not met, even if the product or service is perfect, no value is created. Quality is not a measure of perfection, but of effect.

Failing to understand customers and their needs is the biggest driver of cost in most businesses. Quality for many products is defined by whether engineering specifications are met rather than whether the product delivers to actual customer needs. Similarly, in many service environments, service quality is defined by what the customer wants or complains about rather than how the customer uses the service. This distinction is important because it is possible to deliver everything a customer asks for and do so perfectly while not satisfying his or her needs. When this happens, the costs to deliver go up but the revenue from delivering does not.

First Theorem of Six Sigma: Changes in critical to quality (CTQ) parameters – and only changes in CTQ parameters – alter the fiscal relationship an organization has with its customers.

Those product or service qualities that alter the way a customer behaves with regard to purchase decisions are CTQs. Changes in CTQs, good or bad, drive customer loyalty. When CTQs are altered, a company's ability to create customer value is altered, and improving the customer's perception of value ultimately affects the fiscal relationship with that customer, either in terms of price or cost to deliver.

The problem with managing to customer CTQs is not a question of intent. No business intentionally fails to deliver to CTQs. Companies fail to deliver to CTQs because of gaps in process knowledge. These gaps manifest themselves in process variation and poor process capabilities.

Fundamental Assumption 2

If a business has a profound and complete process knowledge, the products and services being delivered can be controlled so as to always create customer value. Gaps in process knowledge are the primary causes of failure and defect.

Process control focuses on ensuring that the process is managed and executed in a consistent manner. If there are gaps in the understanding of how processes work, or gaps in the understanding of how the customer ascribes value to the products and services being generated, process control is simply not possible. Six Sigma (and every continuous improvement process) is fundamentally focused on closing these knowledge gaps.

Second Theorem of Six Sigma: Process outputs are caused by process, system and environmental inputs. 

Y = ƒ(X1, …, Xn)

The “holy grail” of Six Sigma is the process transfer function. Once this is properly defined, managers have all the tools they need to make the process perform in any manner desired. The transfer function is never really perfect, but were a company to ever have complete transfer functions for all their processes, optimizing costs, production and customer value would be a simple matter of arithmetic.
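As a toy sketch of why a known transfer function makes management "a simple matter of arithmetic" (the function form and coefficients here are invented for illustration):

```python
# Toy transfer function: output quality Y as a function of two process inputs.
# The linear form and the coefficients are assumptions for this sketch.
def transfer(x1, x2):
    return 2.0 * x1 + 0.5 * x2

# With the function known, hitting a target output is just algebra:
# what x1 produces Y = 10.0 when x2 is fixed at 4.0?
x2 = 4.0
target_y = 10.0
x1 = (target_y - 0.5 * x2) / 2.0
print(x1, transfer(x1, x2))  # 4.0 10.0
```

Real transfer functions are rarely this clean or this complete, which is exactly the point of the paragraph above: the closer a company gets to one, the more process management reduces to calculation.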

The big myth is that Six Sigma is about quality control and statistics. It is all that – but a helluva lot more! Six Sigma ultimately drives leadership to be better by providing the tools to think through tough issues. —Jack Welch

Good leaders strive to make good decisions all of the time. If, however, there are gaps in understanding, incorrect assumptions or false assumptions, what looks like a good decision will result in bad outcomes. Six Sigma is about reducing the probability of these bad outcomes.

Most processes have subject matter experts and institutional knowledge. In most cases, these allow businesses to create transfer functions that are 90 percent to 95 percent correct, which gives the illusion of expert knowledge – and it is this illusion that makes the remaining gaps difficult to close. Since most of the rules for how to run processes are known, and since the knowledge gaps are subtle, it is assumed that issues are execution-related rather than understanding-related. This is a dangerous situation: these small process knowledge gaps, compounded over a multistep process, can result in significant losses even when people execute the process to the best of their abilities.

Fundamental Assumption 3

All variation is caused.

Often the only clue to gaps in process knowledge is the degree of variability in the process. Processes always have variation (remember – entropy increases!), but it does not spontaneously occur; it must be caused. Variation is just another output of systems: transfer functions can be written to describe it, and the more complete those functions are, the better variation can be controlled. There is no myth or magic. When a system does not perform in exactly the same manner for a static set of process inputs, it simply means there are additional factors, not yet understood, that influence the process.

Third Theorem of Six Sigma: Variation in process outputs is caused by process, system and environmental inputs.

∂Y/∂X = ƒ′(X1, …, Xn)

If Taguchi’s loss hypothesis (the cost of a system increases as it diverges from the performance expectations of its customer) is accepted, then process variation is the leading cause of customer dissatisfaction and operating costs. The factors that drive the process output and the process variation can be defined and controlled. Processes can then perform at the optimum balance between delivery and stability, thus driving the lowest possible cost and the highest possible customer satisfaction. In other words, if enough “profound knowledge” is added to the system, maximum value can be produced. The goal of Six Sigma, therefore, is always to increase process understanding. The goal is to populate transfer functions to the degree needed, and warranted, in order to best serve customers.
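The theorem can be made quantitative with the standard first-order (delta method) propagation of variance, which follows directly from the derivative form of the theorem, assuming the inputs vary independently:

```latex
\operatorname{Var}(Y) \;\approx\; \sum_{i=1}^{n}
  \left( \frac{\partial f}{\partial X_i} \right)^{2} \operatorname{Var}(X_i)
```

Each term shows how much of the output variation is attributable to one input, so shrinking the variance of the inputs with the largest sensitivities is the fastest route to a stable process.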

Fundamental Assumption 4

Given the right process knowledge and the ability to deliver products and services that satisfy the customers’ CTQs, management will always make the decision that most benefits the customer and achieves the highest possible return on investment.

The number one assumption in any type of continuous improvement is that if leadership is provided with the know-how – if the knowledge gaps are filled in – leadership will make (or allow others to make) decisions that are best for the long-term well-being of the business. Some people are afraid to test this assumption; it is a trust issue. If power is held because of the ability to solve a recurring problem, or if teams are rewarded for firefighting rather than preventing problems, then this process collapses. While it is tempting to let experience and tradition supersede structured problem solving, the business prospers in the long term when overall systems understanding improves and leadership employs that learning to the advantage of all parties in the supply chain.

Fundamental Assumption 5

Given a choice between long-term sustainable growth and short-term profit, long-term growth will always outperform the short-term gain.

This is the final and most critical assumption. It is the cornerstone of total quality management, the Toyota Production System (Lean) and Six Sigma. If people are helped to control their own destinies, and if they are provided with the wherewithal to achieve self-determination, they will naturally do what profits them the most. When educated about the long-term benefits of creating sustainable customer relationships, most people will choose to create value and maximize their payback on the relationship. This is a question of ethics.


There are three foundational theorems and a simple set of postulates or assumptions that these theorems are based upon. All the tools and processes of Six Sigma – both DMAIC (Define, Measure, Analyze, Improve, Control) and DFSS (Design for Six Sigma) – are grounded in these simple foundations. Get the assumptions correct and all else is commentary.