Executive / Senior Executive – Operational Excellence

Job Description

1. Monitoring, enabling and reporting progress on TPM and capacity utilization
2. Creating awareness of and implementing the e-learning initiative for Six Sigma at the site
3. Monitoring and supporting belts on Six Sigma projects through to completion
4. Maintaining all Kaizen fields and folders, course completions and reports
5. Leading the monthly review process and management report generation
6. Supporting QC in reducing lab incidents and invalid OOS results
7. Delivering training locally

Salary: Not Disclosed by Recruiter
Industry: Pharma / Biotech / Clinical Research
Functional Area: Production, Manufacturing, Maintenance
Role Category: Production/Manufacturing/Maintenance
Role: Industrial Engineer
Employment Type: Permanent Job, Full Time
Keyskills: Training Delivery, Operational Excellence, Operations, Process Management, Six Sigma, Report Generation, TPM, Kaizen.

Making the Most of Quality Data

Plant-floor quality issues tend to consume a company’s technical resources. When products fall out of spec, alarms sound and all hands are immediately on deck to fix things. Despite large technology investments to monitor and adjust production processes, manufacturers are still bedeviled by quality problems. The issue is not a lack of technology. It is a lack of quality intelligence.

When problems occur, manufacturers must obviously fix them. But the typical organization expends much more energy reacting to problems rather than preventing them. This is true despite our understanding that, “an ounce of prevention is worth a pound of cure.” We know that proactive measures can be immensely profitable, and yet our limited quality resources spend little time identifying strategic imperatives for avoiding problems. Instead, most of their time is spent responding to issues. Today’s quality professionals are too preoccupied with just fighting the fires that rage on shop floors.

Quality and the big picture

The most successful, forward-looking and competitive companies I work with focus on proactively preventing problems. How? By taking a holistic view of quality. They regularly step back to summarize and analyze large amounts of quality data. Stepping back gets them away from the fires, and out of the routine of fixing issues.

Imagine aggregating all of your quality data for the last month across all products and production lines. Doing so would allow you to see the nuanced quality differences between regions and plants. It would tell you where systemic issues need to be addressed and help prioritize improvement efforts. In other words, aggregating data allows you to see the big quality picture.

Today’s manufacturing plants make a dizzying variety of products, so you may be wondering how wholesale information can be extracted from vastly different parts, material types, and specification limits. The answer is data normalization: by re-expressing each measurement relative to its own specification limits, fair comparisons can be made even between disparate items. It is just a mathematical exercise, easy to perform with software, and it makes data unification and summarization a reality.
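As an illustration, here is a minimal Python sketch of one common normalization: each measurement is re-expressed as a fraction of its tolerance band, so parts with different targets and limits can be pooled and compared. The part names, values, and limits are hypothetical.

    # Normalize measurements from different parts onto a common scale:
    # 0.0 = target (midpoint of spec), +/-1.0 = at a specification limit.
    def normalize(value, lsl, usl):
        target = (lsl + usl) / 2.0
        half_tolerance = (usl - lsl) / 2.0
        return (value - target) / half_tolerance

    # Hypothetical measurements from two parts with very different specs.
    readings = [
        ("shaft_diameter_mm", 25.02, 24.95, 25.05),  # (name, value, LSL, USL)
        ("seal_thickness_mm", 1.48, 1.40, 1.60),
    ]

    for name, value, lsl, usl in readings:
        z = normalize(value, lsl, usl)
        print(f"{name}: normalized = {z:+.2f}")
    # Both values now sit on the same scale, so they can be aggregated
    # regardless of each part's original units or tolerances.

Once every measurement is on this common scale, rolling data up across products, lines, and plants becomes a simple summary over one column.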

Stop “storing and ignoring”

When critical features fall out-of-spec, alarms blare and support personnel descend on the shop floor and get the issues fixed. After completing their tasks, they quickly move on to the next daily priority or fire drill. In this case, at least the alarm data were used for solving the problem.

But what happens to data that triggers no alarms? What about data that meets specification limits? Most will say that if data is in-spec, then it is good enough. And that is the problem. When data is considered “good enough” it is just stored in a database, rarely to be seen again. The error here is assuming that since the data didn’t trigger an alarm, it contains no useful information. If data is not reviewed or analyzed, then expect to be blind to the information it contains. The truth is that value exists in any data you collect. Otherwise, it shouldn’t be collected.

When companies ignore in-spec data, they are throwing away enormously useful information. Many companies I have worked with have turned orphaned data into gold by extracting previously unknown information from it. It was unknown simply because they considered the data to be “good enough.” As a result, they were blind to the information the data contained. These experiences lead me to conclude that the greatest potential for modern quality improvement comes from aggregating and analyzing data that actually falls within specifications.

Seem odd? Not to me. Think about how often parts actually fail to meet specs. It’s rare. That means very few data values are ever viewed for problem-solving purposes. And if those few values receive the lion’s share of attention, what happens to the huge amount of data where no problems exist? It is stored and ignored.

And it’s getting worse. Because modern technologies support automated data collection, far more data is currently being gathered than in years past. This means that the amount of data being ignored is increasing. It’s staggering how much data is available and yet how little of it is ever viewed.

The reality is that companies rarely go back and look at data that is in spec. Yet, there is rich, valuable information hidden in those overlooked records. Imagine being an operations director who oversees 50 plants. If you could roll up all of your critical quality data across those locations, you would immediately have a holistic view of your manufacturing operations. You could identify which regions are the best performers. You could highlight the plants and production lines with the highest quality costs. You could pinpoint where defect levels could be reduced and which plants require attention to minimize the probability of recalls. And your company could become more competitive as a result.

Rather than simply reacting to quality problems, manufacturers need to direct their attention and time to proactively attacking quality. How? By regularly evaluating the massive amount of overlooked data that they already have.

Data aggregation through cloud technology

Traditional on-premises software solutions aren’t great for deploying across an enterprise. But cloud-based quality software platforms are. Since cloud-based solutions are securely hosted by vendors who monitor and maintain system infrastructure, the need for on-site IT support is minimized and capital costs are greatly reduced. The nature of cloud-based systems makes large-scale, multi-plant deployments fast, easy, and inexpensive, ensuring benefits are enjoyed sooner rather than later.

Plus, cloud-based systems connect manufacturing sites across the internet, support standardization, and store quality data from multiple plants in a centralized database. Because data is stored in one place, quality professionals, engineers, managers, and others can easily view the big picture of quality. A single data repository is ideal for supporting corporatewide quality strategies and initiatives.

Cloud-based quality systems should use simple web browsers, empowering quality professionals to break through geographical, cultural, and infrastructural barriers to connect facilities around the world—and provide data aggregation capabilities that can unlock critical information for driving quality improvements on a large scale.

The capability is here and the technology is inexpensive. So what keeps quality professionals from enjoying enterprise-wide cost and defect reduction? It’s those fires you keep fighting every day. Don’t just snuff them out—prevent them in the first place and use the time savings to re-imagine how quality can transform your organization’s performance.

 

Incorporate Lean Six Sigma in Automation Process

Robotic process automation (RPA) is the use of software robots to mimic the actions a human user would perform on a computer application in order to automate business processes that are highly repetitive and rule-based. RPA is emerging as a disruptive force in the service economy, where many middle- and back-office tasks are highly human intensive and fall into that category of repetitive and rule-based.

Organizations are increasingly embracing RPA to automate business processes to aggressively reduce costs and gain efficiency. The financial services and retail industries have taken the lead in adopting RPA, but there is evidence of increasing traction for RPA within the manufacturing and healthcare sectors as well. According to services research company Horses for Sources, the overall market of RPA grew by 64 percent to $200 million in 2016 and is expected to grow by 70 percent to 80 percent in 2018 due to the robust demand for this automation.

RPA Candidate Process Characteristics

Not every process is a good candidate for automation. Processes with the following characteristics are more suitable for automation than others.

  • Bottleneck: In the entire value chain, the bottleneck process is the prime candidate for automation because the throughput of the bottleneck process determines the efficiency of the overall value chain.
  • Stability: A stable and capable process with low variation is an appropriate candidate for automation because its behavior can be predicted and, therefore, its processing logic can be designed and coded for automation.
  • Robustness: A robust process where all failure modes are known is an appropriate candidate process for automation because all its exception flows and behavior can be designed and coded for automation.

Lean Six Sigma and RPA

What does the future hold for Lean Six Sigma (LSS) in this age of automation when organizations can significantly reduce their processing time (up to 80 percent) by adopting automation? Automating a broken process yields nothing more than a waste and at a rapid pace. As management expert Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.”

LSS improves process performance by systematically eliminating wastes and reducing variation from the process. In the automation journey, LSS can assist organizations with assessing the candidate process and making the process more suitable for automation.

LSS can help organizations in the following ways at the onset of the automation journey.

  • Identifying the right candidates: Constraint or bottleneck processes are prime candidates for RPA. LSS tools such as bottleneck analysis can identify the right candidate processes within the value chain by spotting constraints (see the sketch after this list).
  • Improving process stability: LSS can improve the stability and capability of the process by reducing variation in the process, thereby making the process more suitable for automation.
  • Improving process robustness: LSS tools such as failure mode and effects analysis (FMEA) can uncover the potential failure modes of the process and develop suitable mitigation plans for them, thereby making the process more suitable for automation.
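To make the bottleneck-analysis idea concrete, here is a minimal Python sketch that flags the constraint as the step with the longest average cycle time, since that step limits the throughput of the whole chain. The step names and timings are hypothetical, not drawn from any particular deployment.

    # Identify the bottleneck step in a value chain from observed cycle times.
    observed_cycle_times = {          # hypothetical timings, minutes per item
        "application":  [12, 14, 13, 12],
        "verification": [35, 41, 38, 36],
        "decision":     [9, 10, 8, 11],
        "fulfillment":  [15, 16, 14, 15],
    }

    averages = {step: sum(t) / len(t) for step, t in observed_cycle_times.items()}
    bottleneck = max(averages, key=averages.get)

    for step, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
        print(f"{step:>13}: {avg:5.1f} min/item")
    print(f"Bottleneck (prime RPA candidate): {bottleneck}")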

An optimized and standardized process post-LSS is a better candidate for RPA. Deployment of LSS followed by automation improves both the effectiveness and efficiency of the process and delivers better return on investment (ROI) for an automation initiative.

Apply LSS at the beginning of an automation journey to identify the right candidate process and make that process well-suited for automation.

Figure 1: Combining LSS and RPA

Case Study: Opening a Bank Account

Consider a case of automation as applied to the process of opening an account at a bank.

Process Overview

A simple account opening value chain at a bank consists of four process steps:

  1. Application
  2. Verification
  3. Decision
  4. Approval and fulfillment

The following are the operational parameters for a standard account opening process.

  • Total number of full-time employees (FTEs) = 14
  • Cycle time = 5 days
  • Lead time = 12 days

Figure 2: Basic Process of Opening a Bank Account

Automation Without LSS

The bank undertakes an initiative to deploy automation in the account opening process. It selects the Application process as the candidate for automation because it has the highest number of FTEs. The following is the operational performance of the account-opening process after automation:

  • Total number of FTEs = 10 (28.6 percent reduction in total FTE count)
  • Cycle time = 5 days (no reduction in cycle time)
  • Lead time = 11 days (8.3 percent reduction in lead time)

Since a non-bottleneck process, Application, is now automated, a large inventory of applications is accumulated at the Verification stage.

Figure 3: Opening a Bank Account with Automation, Without LSS

Automation with LSS

The bank leverages LSS tools to assess the candidate process for automation and to reduce variation in the candidate process. The bank selects the Verification process as the candidate process for automation because it is a bottleneck process with the longest cycle time in the entire value chain.

An FMEA is conducted for the Verification process, which helps the bank discover potential failure modes (exception flows) of the account opening process, such as a scenario where a customer has valid identity proof but invalid address proof. This enables the bank to handle all exception flows in the automation and avoid failures of the process automation due to exceptions.

The following are the operational performance metrics of the account opening process after automation with LSS applied:

  • Total number of FTEs = 11 (21.4 percent reduction in total FTE count)
  • Cycle time = 3 days (40 percent reduction in cycle time)
  • Lead time = 8 days (33.3 percent reduction in lead time)
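As a quick check on the arithmetic, here is a minimal Python sketch that recomputes the percentage reductions for both scenarios from the baseline figures given earlier (14 FTEs, 5-day cycle time, 12-day lead time):

    # Verify the reported percentage reductions against the baseline process.
    def pct_reduction(before, after):
        return 100.0 * (before - after) / before

    baseline = {"FTEs": 14, "cycle_days": 5, "lead_days": 12}
    scenarios = {
        "automation only":     {"FTEs": 10, "cycle_days": 5, "lead_days": 11},
        "automation with LSS": {"FTEs": 11, "cycle_days": 3, "lead_days": 8},
    }

    for name, after in scenarios.items():
        print(name)
        for metric, before in baseline.items():
            print(f"  {metric}: {pct_reduction(before, after[metric]):.1f}% reduction")
    # automation only:     FTEs 28.6%, cycle 0.0%, lead 8.3%
    # automation with LSS: FTEs 21.4%, cycle 40.0%, lead 33.3%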

Clearly, automation with LSS delivers better operational performance for the account-opening process, with a 40 percent reduction in cycle time and a 33.3 percent reduction in lead time.

Figure 4: Opening a Bank Account with Automation, with LSS

Conclusion

Process excellence continues to be relevant in this age of automation. Process optimization or redesign – combined with LSS – should be an integral part of any automation journey.

Network Quality Assurance – Virtela NTT Communications – Mumbai

Designation Network Quality Assurance – Virtela NTT Communications – Mumbai
Job Description Hi,

Greetings from Virtela NTT Communications.

This is regarding the job opportunity with Virtela for Network Quality Assurance role. Please find below the detailed job description for your reference.

Job Title: Quality Assurance
Location: Mumbai
Experience: 4-10 years

1. PRE-REQUISITE:

  • Strong customer orientation with high focus on Quality
  • Possess telecom/ISP domain knowledge
  • Strong analytical skills in implementation and administration of Quality Assurance metrics for NOC & Service Delivery function
  • Has developed and overseen a continuous quality development programme including a comprehensive mentoring strategy
  • Strong interpersonal and problem solving skills
  • Past experience in Operations support role is a major plus

2. RESPONSIBILITIES:

  • Provide administration and support as Quality Lead to achieve high customer satisfaction and continuous improvement in line with the business strategy for the NOC and Service Delivery functions
  • Lead internal quality engagement program with Global Operations team at various levels
  • Determining, negotiating and agreeing to quality procedures, standards and/or specifications for Global Operations team
  • Contributing to building processes and best practices, and identifying tools to improve quality
  • Ensuring that Operational processes comply with the high quality standards and all the KPIs are met
  • Understand and Monitor end to end process work flow
  • Network Operations Center (NOC) and Service Delivery functions
  • Understanding of Incident Tickets, Incident Management, Trouble Tickets Workflow
  • Service Delivery – Provisioning of Circuits and hardware
  • Service Delivery – Installations of new services (circuits), meeting customer delivery dates
  • End to End Order Delivery Management
  • Identify process gaps, execution errors and documentation issues
  • Performing Ticket Audit Analysis and taking corrective actions
  • Create enhanced workflow documents for clear understanding of process
  • Conduct training sessions with team members
  • Build customized training and improvement plan to remediate problems
  • Recording, analyzing and distributing statistical information
  • Build quality metrics and reporting dashboard
  • Track and report KPIs to senior management

3. TRAINING AND CERTIFICATION REQUIRED FOR JOB
ITIL, Lean Six Sigma

4. EDUCATION:
Bachelor’s degree in Electronics/Computers/IT preferred

5. TECHNICAL AND OTHER SKILLS:
MS Office proficiency; ITIL process awareness; knowledge of network-related technical skills such as LAN, WAN, routing, switching, troubleshooting, MPLS, IP, data NOC, BGP, OSPF, EIGRP, circuits, T1/E1, and Layer 1/Layer 2 issues.

Please rush your resume if the above requirements match your profile. You may reply to confirm your interest, along with the details below:

Total Experience:
Relevant Experience (Network Quality Audit):
Other skills:
Current CTC:
Expected CTC:
Notice Period:
Reason for change:
Current location:
Short write up about your preferences:

Thanks & Regards,
Swati Maharana
smaharana@virtela.net
Talent Acquisition Team
Virtela Technology Services Incorporated
Mumbai
Be at the Helm of your Career!
Desired Profile Please refer to the Job description above
Experience 4 – 9 Years
Industry Type Telecom/ISP
Role Quality Assurance/Quality Control Manager
Functional Area IT Software – Network Administration, Security
Employment Type Full Time , Permanent Job

Compensation:  Not disclosed
Location Mumbai
Keywords: networking, switching, quality procedures, routing, quality audit, quality assurance, network audit, network internal audit, network statutory audit, ITIL, Six Sigma, Six Sigma Lean, Lean 6 Sigma

Lean six sigma JOB-Assistant Manager/Manager

VWR (NASDAQ: VWR), headquartered in Radnor, Pennsylvania, is a leading, independent provider of laboratory products, services and solutions with worldwide sales in excess of $4.5 billion in 2016. VWR enables science in laboratory and production facilities in the pharmaceutical, biotechnology, industrial, education, government and healthcare industries. With more than 160 years of experience, VWR offers a well-established network that reaches thousands of specialized laboratories and facilities spanning the globe. VWR has more than 8,500 associates working to streamline the way scientists, medical professionals and production engineers stock and manage their businesses. In addition, VWR supports its customers by providing value-added service offerings, research support, laboratory services and operations services.
Designation Assistant Manager – Quality and Business Process Re-engineering – 1 Opening(s)
Job Description Key Tasks: 

  • Responsible for end-to-end transition efforts to ensure properly stabilized operations post-transition, and for implementing operational governance
  • Design and implement KPI measures/service levels
  • Client/Stakeholder expectation management through NPS
  • Drive organizational compliance to ISO 9001:2015
  • Drive continuous improvement culture through training, co-ordination and implementation of principles of Lean/Six Sigma in day-to-day operations in VWR Global Business Center
  • Work closely with operation teams to obtain input of diverse views, facilitate generation of ideas, analyze operational risks, extend support in managing stakeholders/client escalations (RCA/CAPA)
  • Guide operations to conduct process capability studies, prepare contingency plans for all levels and develop FSS for holiday staffing based on volume and the process capability study
  • Prepare Dashboard/Reports by collecting, analyzing, and summarizing Operations data; making recommendations
  • Support teams in establishing statistical confidence by identifying significant sample sizes and acceptable error, and determining levels of confidence
  • Conduct Process Audit to ensure processes are compliant with ISO requirements.
Desired Profile Skills, knowledge & experience:

  • Minimum 5 years of work experience managing quality and driving mid/large-sized, cross-functional continual improvement projects
  • Experience in handling a team
  • Graduation/Post Graduation
  • Professional certifications such as ISO Auditor, Six Sigma, Kaizen or Project Management will be an added advantage
  • Hands-on experience in MS applications (such as Excel, PowerPoint, Visio)
  • Ability to work with minimal supervision and manage multiple tasks/projects simultaneously
  • Strong writing and presentation skills, with an ability to produce high-quality deliverables created through collaboration
  • Experience in handling change-related aspects of business processes, including driving continuous improvement in a day-to-day service delivery environment
  • Good analytical skills – applied knowledge of basic QC tools such as root cause analysis, fishbone diagrams, Pareto charts and run charts
  • Ability to quickly adapt to change and to work in a high-energy, fast-paced environment against deadlines
Experience 5 – 8 Years
Industry Type BPO / Call Centre / ITES
Role Assistant Manager/Manager-(Technical)
Functional Area ITES, BPO, KPO, LPO, Customer Service, Operations
Employment Type Full Time , Permanent Job
Education UG – Any Graduate – Any Specialization

Compensation:  Not disclosed
Location Coimbatore
Keywords: lean six sigma, training coordination, operations, ISO 9001, quality management, process audit, ISO auditor, six sigma, kaizen, project management, root cause analysis, fishbone diagram, Pareto charts, run charts.

TULSA FORENSIC LAB INCREASES EFFICIENCY WITH LEAN SIX SIGMA

If crime dramas are to be believed, case backlogs are a common part of police work. That has certainly been true for a forensics lab in Tulsa, Oklahoma.

For the Tulsa Police Department (TPD), a case is relegated to the backlog when it’s more than 30 days beyond its initial request, and the forensic work hasn’t yet been completed. At the end of 2016, the TPD had more than 800 backlogged cases.

 

And now? A year later?

They’ve nearly cut that number in half.

“I just looked,” Tara Brians, the Lab Director for the TPD, told Tulsa’s News on 6. “We had, I believe, almost 800 cases at the end of 2016 and we have now less than 500.”

Soon, they hope to eradicate backlogged cases completely. What’s their secret?

Lean Six Sigma. And it came just in time.

Identifying Waste in the Backlog

The TPD lab is responsible for handling hundreds and hundreds of cases every year, and in some instances, certain slow-moving cases were taking up a lot of the lab’s time.

“There’s really no room for error and we really can’t afford to be wrong on our results,” Brians said. “Those cases are waiting and waiting and that’s where we would love to get to that request and almost start working on it immediately.”

They had to get more efficient, or the backlog was only going to grow larger. Lean Six Sigma was the solution.

“Budgets are not really being increased,” Brians said. “We are not really getting a lot of money so we had to figure out ways to increase efficiency and to do more with less.”

The Lean Six Sigma training uncovered unnecessary steps in their forensics process, which could be safely removed without sacrificing the precision of lab results.

Jon Wilson, the lab’s Operations Manager, said he has seen a significant turnaround in lab performance. “The wait times between the initial step and the process can be minimized or shortened,” he said, “to decrease the overall turnaround time.”

It’s been a good year for the TPD forensics lab. Along with Lean Six Sigma implementation, the lab also earned accreditation through new international standards – the first lab in the world to be honored with such a distinction.

It isn’t, however, the first law enforcement forensics lab to change its fortunes through Lean Six Sigma.

Lean Six Sigma and Law Enforcement

In 2016 and 2017, the District 5 Idaho State Police totally redesigned its headquarters to make work faster and more efficient. One of their strategic decisions involved streamlining their own lab, because they were struggling to meet high demands.

“About half to three-quarters of the toxicology analyses completed across the state take place at this facility,” Matthew Gamette, the Idaho State Police director of Forensic Services, told the Idaho State Journal. “About one-third of the state’s breath-alcohol instruments come here and about one-third of the drug chemistry analyses comes through this lab.”

Both the Tulsa Police Department and the District 5 Idaho State Police are reaping the benefits of Lean thinking – faster processing and a much, much lighter backlog to worry about.

CASE STUDY: PORTUGUESE TIRE MANUFACTURER SAVES THOUSANDS USING SIX SIGMA’S DMAIC METHODOLOGY

A tire manufacturing company in Portugal has provided an excellent study in implementing Six Sigma and how it can impact business performance.

Six Sigma already has proven its value in the automobile industry. Companies including Ford and Toyota have made the methodology a key component of their success.

A recent study of implementation of Six Sigma at Continental Mabor, a tire manufacturing company located in Famalicao, Portugal, provides a step-by-step look at putting Six Sigma’s DMAIC methodology into place.

The study, published at the 2017 Manufacturing Engineering Society International Conference, was written by F.J.G. Silva, a professor in the school of engineering at Polytechnic of Porto, Portugal. It reported that the use of Six Sigma focused on improving the rubber extrusion process of two tire products: the tread and the sidewall. The primary goal was reduction of wasted material in the process.

Continental Mabor instituted Six Sigma because tire manufacturing is an intensely competitive business around the globe and “continuous flexibility and adaptation” is necessary, Silva wrote, adding that to achieve success, “it is crucial to seek operational excellence.”

Here is an overview of how Continental Mabor approached implementing the Six Sigma methodology of DMAIC, which stands for define, measure, analyze, improve and control.

Doing The Research

Continental Mabor started its Six Sigma journey by researching books and published scientific articles on Six Sigma methodology.

The company focused on improvements in the rubber extrusion process, particularly the mixing, preparation and construction departments. The mixing department receives raw materials that are transformed into compound sheets that are used in the preparation department on seven extrusion lines which focus on tread and sidewall extrusion. The ultimate “customer” for the extrusion process is the construction department.

The amount of material generated in the process – which is later reused for other purposes – is one of the indicators for the company on how efficient the operation is running. The focus is to limit the amount of extra material generated during the tread and sidewall extrusion process, called “work off.”

Once they were set on the focus of the project and the research, they moved forward by implementing the DMAIC cycle. The following shows how they did so.

Define

To accurately define the problem areas in the process, the company drew up a project charter that identifies problems, establishes objectives and defines the scope of the project (including the employee teams involved). A project charter also:

  • Establishes the business case for how the project will impact overall organizational strategy
  • Clearly measures the impact on the business of the current problem and measures the gap between where things are and the desired state
  • Creates a clear scope for the project with identification of the areas where teams will focus to prevent “scope creep” – moving into areas outside the defined perimeter of the project

To create the charter, the company used a Gantt chart, a horizontal bar chart that maps out a project schedule. They also used a SIPOC diagram to plot the extrusion process in greater detail. SIPOC stands for supplier, inputs, process, outputs and customer. A SIPOC diagram is a way to see an entire process in one graph and to see the relationship between inputs and suppliers and the output for customers.

Measure

To get a handle on the current state of the extrusion process, Continental Mabor leaders then created a data collection plan. This included measuring the amount of rejected material during the extrusion process. Data was collected for 30 weeks, with 10 three-hour trials conducted each week. After this period of measurement, the company could determine the percentage of unused work off material generated in the tread and sidewall extrusion processes.
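As a simple illustration of the metric being collected here, a minimal Python sketch computing the work-off percentage per trial; the line names and figures are hypothetical, not the study’s actual data.

    # Work-off percentage for one extrusion trial: the share of processed
    # rubber that leaves the line as reusable-but-unplanned material.
    def work_off_pct(total_kg, work_off_kg):
        return 100.0 * work_off_kg / total_kg

    # Hypothetical trial figures (not the study's actual data).
    trials = [
        {"line": "tread-1",    "total_kg": 5200, "work_off_kg": 390},
        {"line": "sidewall-3", "total_kg": 4100, "work_off_kg": 451},
    ]

    for t in trials:
        pct = work_off_pct(t["total_kg"], t["work_off_kg"])
        print(f'{t["line"]}: {pct:.1f}% work off')
    # Aggregating these percentages over the 30-week collection period
    # gives the baseline that the Improve phase is measured against.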

Analyze

With the data collected, the focus then turned to finding the root causes of the defects that caused variation in the amount of material wasted. The company used an Ishikawa diagram to find the cause-and-effect relationship between various activities and inputs into the process and the problem of generating unused material. They then used a Pareto chart to prioritize which potential causes had the most unfavorable impact.
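Here is a minimal sketch of that Pareto step: rank cause categories by their contribution and flag the “vital few” that account for roughly 80 percent of the total. The categories and amounts are illustrative only, not taken from the study.

    # Pareto analysis: rank causes by impact and find the "vital few"
    # that account for ~80% of the total effect.
    causes = {                        # hypothetical waste (kg) per cause
        "machine wear":        820,
        "feeding method":      610,
        "compound changeover": 240,
        "operator handling":   150,
        "other":                90,
    }

    total = sum(causes.values())
    cumulative = 0.0
    print(f"{'cause':<22}{'kg':>6}{'cum %':>8}")
    for cause, kg in sorted(causes.items(), key=lambda kv: -kv[1]):
        cumulative += kg
        marker = "  <- vital few" if cumulative / total <= 0.80 else ""
        print(f"{cause:<22}{kg:>6}{100 * cumulative / total:>7.1f}%{marker}")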

They discovered that one machine in the sidewall extrusion process was not performing as well as others, leading to a significant increase in extra material. In the tread extrusion process, they discovered that the method for feeding the machines was creating problems with machine stoppage and jamming.

Improve

In this phase, a list was made of all the problems and root causes, along with the subsequent actions taken to address them. These included changes to the machinery itself and changes in the methods employees used to feed material into the machines.

Control

With the improvements in place, data was then collected on the changes in the process. In this case, the gains were very significant. The company reduced the amount of work off material by five tons per day. After factoring in the cost of improvements to the machinery, the positive impact on the company’s bottom line was €165,000 per year, which translates to a little more than $200,000 U.S.

In his conclusion on the process improvement at Continental Mabor, Silva wrote that “the use of Six Sigma methodology played a decisive role in the achievement of the proposed goal, ensuring that there was a systematic and disciplined approach to the issues at hand through the DMAIC cycle.”

It also provides an excellent step-by-step education in how to implement Six Sigma successfully.


Voice of the Customer (VOC): Because Your Customer Is Your Top Priority

 

Have you ever been on websites such as Yelp, Google, or HomeAdvisor? These websites all fall under the umbrella of VOC, or Voice of the Customer, which shows how important VOC has become to consumers. You don’t want to find out from Yelp or any of these sites that your business doesn’t measure up: once your business earns a bad name, it can be very difficult to win back your reputation, and future customers find a bad review hard to forgive.

More Conventional VOC

Here are a few more traditional versions of VOC you are probably familiar with.

Surveys: Send out a small survey to your existing customers or even potential new customers. This is very cost effective, but unfortunately many won’t take the survey.

Market Research: This can take the form of focus groups or one-on-one interviews done in person or on the phone. If you have ever been on the receiving end of a quick interview, you know that feeling of wanting to get away, and for that reason this approach has some weaknesses. Focus groups are excellent because participants are usually paid and therefore willing to answer questions, but their answers might be too general.

Direct Customer Comments: These could be in the form of emails, letters, calls, or customer ratings from websites like Yelp, HomeAdvisor or other third-party sites. These are usually the best because the ratings come from actual customers who have used your product or service.

Always remember, the very core of Six Sigma methodology is keeping your customers satisfied. They are why you are in business, and should always be a top priority to your business.

Why do you think you often hear this at the end of your flight? “We realize you have your choice of airlines, and so we thank you for flying XYZ Airlines.” Your customer is your business, without customers you don’t have a business.

A Bell-Shaped Distribution Does Not Imply Only Common Cause Variation

Some practitioners think that if data from a process have a “bell-shaped” histogram, then the system is experiencing only common cause variation (i.e., random variation). This is incorrect and reflects a fundamental misunderstanding of the relationship between distribution shape and the variation in a system. Yet even knowledgeable people sometimes make this mistake.

For example, paraphrasing from a popular Six Sigma textbook, when most values fall in the middle and tail off in either direction, we have statistical evidence of common cause variation.1,2 This is an invalid statement, and the misunderstanding probably stems from the fact that if we were sampling means from a stable process, the central limit theorem would assure us that the distribution of sample means would be approximately normally distributed. However, even though the histogram of the subgroup means is bell-shaped, the process itself may still be non-normal or be experiencing special or systematic causes of variation (i.e., it may be out of control). To determine the correct status of the process, we must look at the control chart of the individual observations, not the distribution of subgroup means.
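To make “look at the control chart of the individual observations” concrete, here is a minimal Python sketch of an individuals (I) chart using the standard moving-range limits, center ± 2.66 × average moving range (2.66 = 3/d2 with d2 = 1.128 for ranges of two). The data values are hypothetical.

    # Individuals (I) control chart limits from the moving range.
    # Points outside the limits (or systematic patterns) signal special
    # cause variation, however bell-shaped the histogram looks.
    data = [5.1, 4.8, 5.3, 5.0, 4.9, 5.4, 5.2, 4.7, 5.0, 5.1]  # hypothetical

    center = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    ucl = center + 2.66 * mr_bar   # standard I-chart constant for n=2 ranges
    lcl = center - 2.66 * mr_bar

    print(f"center = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
    for i, x in enumerate(data, 1):
        flag = " <- special cause signal" if not (lcl <= x <= ucl) else ""
        print(f"obs {i:2d}: {x:.2f}{flag}")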

The fact that a “normal” distribution shape does not imply process stability is known as the Quetelet Fallacy and is documented in The History of Statistics.3 You may be surprised to learn that many educated people, including statisticians and engineers, either have no knowledge of the fallacy or believe it to be true, and that belief in the fallacy has a long history. The first documented counterexample was Sir Francis Galton’s famous sweet pea experiment of 1875, which exposed the Quetelet conjecture as false.4

A proof is given below for the argument that a normal or bell-shaped histogram does not imply that the system is experiencing only common cause variation, and conversely a system experiencing only common cause variation will not necessarily have a normal distribution of observations.

Theorem: Normal does not imply Random, and Random does not imply Normal
Proof:
Part 1. The proof that “Random does not imply Normal” is straightforward, because you can generate random (i.e., common cause) distributions that are uniform, triangular, Weibull, Poisson, Cauchy, etc., and yes, even normal (see JMP or Minitab for examples). Walter A. Shewhart’s figure 9 in his 1931 book, Economic Control of Quality of Manufactured Product, also contains an example: the histogram of the modulus of rupture for Sitka spruce trees. The histogram is skewed, but Shewhart observes that it is at least approximately in a state of statistical control.5
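A minimal simulation of this point: purely random (common cause only) data drawn from a skewed distribution yields a decidedly non-bell-shaped histogram even though the process is perfectly stable. The choice of an exponential distribution here is illustrative.

    import random

    random.seed(42)

    # Purely random (common cause only) process with an exponential
    # distribution: stable and in control, yet strongly skewed.
    data = [random.expovariate(1.0) for _ in range(500)]

    # Crude text histogram (bin width 0.5): the shape is skewed, not
    # bell-shaped, though every point comes from the same random process.
    bins = [0] * 8
    for x in data:
        bins[min(int(x * 2), 7)] += 1
    for i, count in enumerate(bins):
        print(f"{i / 2:3.1f}-{(i + 1) / 2:3.1f}: {'#' * (count // 5)}")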

Part 2. The proof that “Normal does not imply Random” is illustrated by the counterexample given below. In this example the histogram is bell-shaped, but the system is experiencing both special cause (in this case systematic) variation and common cause (i.e., random) variation. In the graph, the slope of the polynomial trend line characterizes the special cause (systematic) variation, and the common cause (random) variation is characterized by the spread of the points about the trend line.

Example: 
Clothing sales data for spring, summer, and fall (× 1,000 units)
{1, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8, 9}
Period 1: May, June (six weeks, new marketing dialog)
Period 2: July, August (seven weeks, old marketing dialog)
Period 3: September, October (six weeks, new marketing dialog)


Histogram of the sales data

The graph of the sales over time shows the effect of the marketing programs in the spring and fall. This change in performance was caused by systematic changes in the process (i.e., the marketing initiatives) and not just random variation.


Plot of sales performance over time
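The counterexample can be reproduced directly from the 19 values listed above. The following minimal Python sketch prints the histogram (bell-shaped and centered on 5) and then a run sequence whose swing reveals the systematic marketing effect. The article lists the values sorted, not in time order, so the week-by-week assignment below is a hypothetical ordering consistent with the three periods.

    from collections import Counter

    # The 19 weekly sales values from the example (x 1,000 units).
    sales = [1, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8, 9]

    # Histogram: symmetric and bell-shaped around 5.
    for value, count in sorted(Counter(sales).items()):
        print(f"{value}: {'#' * count}")

    # Hypothetical time ordering: higher sales in the new-marketing
    # periods 1 and 3, lower in period 2. The run plot shows the
    # systematic swing that the histogram alone completely hides.
    ordered = [6, 7, 5, 6, 8, 9,        # period 1: new marketing (6 weeks)
               4, 3, 2, 1, 3, 4, 4,     # period 2: old marketing (7 weeks)
               5, 5, 6, 7, 5, 5]        # period 3: new marketing (6 weeks)
    assert sorted(ordered) == sales     # same values, different story
    for week, value in enumerate(ordered, 1):
        print(f"week {week:2d}: {'*' * value}")

Any ordering with this seasonal structure makes the point: the histogram summarizes the values while discarding their sequence, which is exactly where the special cause signal lives.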

Irrespective of the shape of the distribution, a good way to arrive at the correct conclusion regarding process stability is by looking at a control chart of the behavior of the individual observations from the process, or for highly skewed distributions, by using the F* test6 [Cruthis, 1993] or the Dixon and Massey z-test7 where z ~ N(0, 1) and is given by: