SPC 101: The Complete Beginner's Guide to Statistical Process Control

Statistical Process Control 101

Learn all about SPC for manufacturing.

Specification and Control Limits

Specification limits are boundaries set by a customer, engineering, or management to designate where the product must perform. Specification limits are also referred to as the “voice of the customer” because they represent the results that the customer requires. If a product is out of specification, it is nonconforming and unacceptable to the customer.

Remember: The customer might be the next department or process within your production system.

Control limits are calculated from the process itself. Because control limits show how the process is performing, they are also referred to as the “voice of the process.” Control limits show how the process is expected to perform; they show the variation within the system or the range of the product that the process creates.

Control limits have no relationship to specification limits.

If a product is outside the control limits, it simply means that the process has changed; the product might be in or out of specification. The shift could be caused by a decrease or increase in variation but has no relation to the specification limits.

Control limits are typically set at ±3 standard deviations from the mean. For variable data, two control charts are used to evaluate the characteristic: one chart to show the stability of the process mean and another to describe the stability of the variation of individual data values.

Control limits must never be calculated based on specification limits.
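As a sketch of how control limits come from the process itself, the Python example below computes Xbar-R chart limits from made-up subgroup measurements, using the standard chart constants for subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114):

```python
# Sketch: Xbar-R control limits computed from the process data itself.
# Measurements are made up; A2, D3, D4 are the standard chart constants
# for subgroups of size n = 5.

subgroups = [
    [10.1, 9.8, 10.0, 10.2, 9.9],
    [10.0, 10.1, 9.7, 10.3, 10.0],
    [9.9, 10.2, 10.1, 9.8, 10.0],
]

A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(s) / len(s) for s in subgroups]   # subgroup means
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges

xbar_bar = sum(xbars) / len(xbars)  # grand mean: Xbar chart centerline
r_bar = sum(ranges) / len(ranges)   # average range: R chart centerline

ucl_x = xbar_bar + A2 * r_bar  # limits for the stability of the mean
lcl_x = xbar_bar - A2 * r_bar
ucl_r = D4 * r_bar             # limits for the stability of the variation
lcl_r = D3 * r_bar
```

In practice the limits are calculated from 20 or more subgroups; only three are shown here to keep the sketch short. Note that the specification limits appear nowhere in the calculation.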

Speak to a Manufacturing Industry Expert

What to Expect

  • Free 20-minute call with a product expert
  • Live demo tailored to your industry requirements
  • Discover what products best fit your needs
  • No games, gimmicks, or high-pressure sales pitch

Additional reference material

Statistical Process Control

Have you heard about statistical process control (SPC) but aren’t quite sure what it is or how it could improve your bottom line? We’ve put together this short guide to answer some of the most common SPC manufacturing questions.

Statistical Process Control (SPC) Definition

At its most basic, statistical process control (SPC) is a systematic approach to collecting and analyzing process data for prediction and improvement purposes. SPC is about understanding process behavior so that you can continuously improve results.

As you learn about SPC, you’ll encounter terms that describe central tendency:

  • Mean: the arithmetic average of a set of collected values
  • Mode: the value that occurs most often within a set of collected values
  • Median: the middle value of a set of collected values; half the values fall above it and half below

You will also come across terms that describe the width or spread of data:

  • Variation: a term used to describe the amount of dispersion in a set of data
  • Range: a measure of dispersion that is equal to the maximum value minus the minimum value from a given set of data
  • Standard deviation: a measure used to quantify a data set’s dispersion from its mean value
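These terms can be illustrated with Python's standard statistics module; the data set below is made up:

```python
# Sketch: the central-tendency and spread terms above, computed with
# Python's standard statistics module on made-up measurements.
import statistics

data = [4.9, 5.0, 5.1, 5.0, 5.2, 5.0, 4.8]

mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # half the values above, half below
mode = statistics.mode(data)      # the most frequent value

data_range = max(data) - min(data)  # dispersion: max minus min
stdev = statistics.stdev(data)      # dispersion about the mean
```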

SPC Origins: Shewhart Statistical Process Control (SPC) Charts

Dr. Walter A. Shewhart (1891–1967), a physicist at Bell Labs who specialized in the use of statistical methods for analyzing random behavior of small particles, was responsible for the application of statistical methods to process control. Up until Shewhart, quality control methods were focused on inspecting finished goods and sorting out the nonconforming product.

As an alternative to inspection, Shewhart introduced the concept of continuous inspection during production and plotting the results on a time-ordered graph that we now know as a control chart. By studying the plot point patterns, Shewhart realized some levels of variation are normal while others are anomalies.

Using known properties of the normal distribution, Shewhart established limits to be drawn on the charts that would separate expected levels of variation from the unexpected. He later coined the terms common cause and assignable cause variation.

Dr. Shewhart concluded that every process exhibits variation: either controlled variation (common cause) or uncontrolled variation (assignable cause). He defined a process as being controlled when “through the use of past experience, we can predict, at least within limits, how the process may be expected to vary in the future.”

He went on to develop descriptive statistics to aid manufacturing, including the Shewhart Statistical Process Control Chart—now known as the X-bar and Range (Xbar-R) chart. The purpose of the Shewhart Statistical Process Control Chart is to present distributions of data over time to allow processes to be improved during production. This chart changes the focus of quality control from detecting defects after production to preventing defects during production.

Distributions

To begin evaluating the type of variation in a process, one must evaluate distributions of data—as Deming plotted the drop results in his Funnel Experiment. The best way to visualize the distribution of results coming from a process is through histograms. A histogram is a frequency distribution that graphically shows the number of times each measured value occurs. Histograms show basic process output information, such as the central location, width, and shape of the data spread.

Location: Measure of Central Tendency

There are three measures of a histogram’s central location, or tendency:

  • Mean (the arithmetic average)
  • Median (the midpoint)
  • Mode (the most frequent)

When compared, these measures show how data are grouped around a center, thus describing the central tendency of the data. When a distribution is exactly symmetrical, the mean, mode and median are equal.

Formula for estimating population mean

To estimate a population mean, use the following equation:

x̄ = (x₁ + x₂ + … + xₙ) / n

where x₁ through xₙ are the sampled values and n is the sample size.

Dispersion: Spread of the Data

The two basic measures of spread are the range (the difference between the highest value and the lowest value in the sample) and the standard deviation (a measure of the typical distance individual values fall from the distribution’s mean). A large range or a high standard deviation indicates more dispersion, or variation of values, within the sample set.

Formula for estimating standard deviation

To estimate the standard deviation of a population from a sample, use the following equation:

s = √[ Σ(xᵢ − x̄)² / (n − 1) ]

where x̄ is the sample mean and n is the sample size.
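As a sketch, the n − 1 ("sample") standard deviation estimate can be written out term by term and cross-checked against Python's standard library; the data values are made up:

```python
# Sketch: the sample standard deviation written out term by term,
# then cross-checked against the standard library. Data are made up.
import math
import statistics

data = [10.2, 9.8, 10.1, 9.9, 10.0]

n = len(data)
xbar = sum(data) / n  # sample mean
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

# statistics.stdev uses the same n - 1 denominator.
assert math.isclose(s, statistics.stdev(data))
```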

Overcoming Obstacles to Effective SPC

Statistical process control can help manufacturers achieve continuous process improvement—when it is implemented properly. Watch out for the following obstacles, which can sideline your SPC efforts.

SPC Obstacle #1: “We’re Too Unique for SPC”

If management (or others within the company) believe that company circumstances are so unique that statistical process control cannot be applied to processes, they are likely to argue that even considering SPC would be a waste of time. This obstacle tends to crop up for manufacturers that experience the following:

  • Short runs (i.e., frequent process or product changes)
  • Lack of metrics
  • Fear of change
  • Proprietary or unique processes

To overcome this obstacle: Explain that if a process creates output, then SPC can be applied. The first step is to start collecting data to show how the process behaves. After metrics are defined and data are collected and plotted, it is easy to see that the process does have measurable characteristics. Educating employees in short-run process control methods is a great way to show them that they are not alone. While everyone likes to feel special, the truth is that most companies that feel too special for statistical process control are the ones that can benefit the most from using SPC.

SPC Obstacle #2: “SPC Will Fix Everything!”

SPC isn’t a cure-all. If no action is taken pursuant to the knowledge gained from SPC analysis, then implementing SPC software for manufacturing or setting up dozens of control charts is not going to improve anything. A control chart can’t eliminate variation and won’t solve all your quality problems.

SPC is the foundation of an effective process-improvement methodology, but there are numerous other tools that should be used. Management teams that expect to solve all their quality problems simply by implementing SPC but doing nothing with the data typically abandon the initiative when it doesn’t miraculously solve every problem.

To overcome this obstacle: SPC education must include an understanding of what SPC does. SPC brings to light common cause and special cause variations, but other tools are needed to reduce or eliminate variation. Train employees to use other process-improvement tools to help reduce variation and create a Corrective Action or Process Improvement team to work on projects.

SPC Obstacle #3: Misunderstanding Limits

Before SPC implementation, many manufacturers collect product data and compare them to specification limits. If the product is within the boundaries set by the customer, the manufacturer assumes that the process is performing fine and is in control. This use of data and limits is called product control, not process control.

When SPC is implemented, you use control limits that are based on process behavior to truly control the process. However, some companies keep specification limits on their control charts, base control limits on something other than true process variation, or set control limits to a standard other than ±3 sigma. If control limits do not accurately represent the process, they are useless and can cause more harm than good.

To overcome this obstacle: Ensure that employees understand that control limits are the voice of the process and show how the process is performing, whereas specification limits are the voice of the customer and are independent of process stability. Specification limits do not belong on a control chart. Control charts always use control limits, which are set at 3 sigma units on either side of the central line and are based on data. Drill into all employees that control limits are never based on any calculation using the specification limits. 

SPC Obstacle #4: Too Much Tampering

When a process is in a state of statistical control, with primarily common cause variation present, any adjustment to the process is tampering and will only increase the variation. Operators often adjust machines that don’t need adjustment; good operators have a natural tendency to tinker with a process to try to make it perform at its best. Management can aggravate tampering by insisting that operators adjust a process when process data aren’t where management wants them.

These impulsive reactions create uncontrollable gyrations in the process. When the process deteriorates, management tends to blame the operator, resulting in distrust and damaged morale that can ruin an SPC initiative—and do irreversible harm to employee/management relations.

To overcome this obstacle: All employees, especially management, must be trained to understand variation and the dangers of tampering. Each data point on a control chart is independent of the previous one. Processes must be allowed to operate in their natural state if you are to understand the common cause variation. There is a saying in the SPC community, “Don’t do something, just stand there.” Training must include how tampering creates bias and nullifies control charts.
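The damage tampering does can be shown with a small simulation in the spirit of Deming's Funnel Experiment: the sketch below (with made-up target and noise values) compares a hands-off process to one where the operator "corrects" the setpoint after every deviation from target.

```python
# Sketch of tampering (in the spirit of Deming's funnel, rule 2):
# after each part, adjust the setpoint by the negative of the last
# deviation from target. Target and noise values are illustrative.
import random
import statistics

random.seed(42)
TARGET = 10.0

def noise():
    return random.gauss(0, 0.1)  # common cause variation only

# Hands-off: leave the in-control process alone at its natural level.
hands_off = [TARGET + noise() for _ in range(5000)]

# Tampered: "correct" the setpoint after every single part.
tampered = []
setpoint = TARGET
for _ in range(5000):
    value = setpoint + noise()
    tampered.append(value)
    setpoint -= value - TARGET  # chasing common cause noise

# Tampering roughly doubles the variance (stdev grows by about sqrt(2)).
print(round(statistics.stdev(hands_off), 3), round(statistics.stdev(tampered), 3))
```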

SPC Obstacle #5: Lack of Management Support

Employees who are expected to implement SPC without adequate training and resources will undoubtedly cause the initiative to fail. In many cases, management attempts to save money by scrimping on training, but the money saved will be outweighed by the wasted cost of an unsuccessful SPC program.

In some cases, employees get adequate training, but supervisors and management do not—and so do not support the initiative. If management is uncomfortable with SPC concepts, they will either avoid necessary actions (because they are uneasy with the changes) or recommend process changes based on a misunderstanding of process control. Either way, the SPC initiative suffers.

To overcome this obstacle: Management must provide the necessary resources to conduct thorough training for every employee and every level of the organization—including all levels of management. This training must be repeated at regular intervals, as new employees must be trained, and experienced employees need refresher courses.

Management must be involved with the SPC initiative so that employees know that management believes in and understands SPC. Management must set realistic goals for process improvement and base their analysis on solid metrics. Executive management should also involve front-line management in the selection of the areas to which to apply SPC. Doing so will increase the likelihood that front-line management will take ownership of the system and help it to gain acceptance with employees.

All managers must understand how decision-making should change after SPC is implemented. Remember Shewhart’s Fourth Foundation of Control Charts: Control charts are effective only to the extent that the organization can use, in an effective manner, the knowledge gained. Management must empower employees to make decisions based on the knowledge gained from SPC analysis.

SPC Obstacle #6: Lack of Data Integrity

Data that lack integrity have a devastating effect on analysis and decision-making. Using “bad” data can be worse than having no data. Data can be biased in many ways: Operators might be “rounding off” values before recording data. Subgrouping might not be rational. A measuring instrument might not be suited for the task or might be damaged or out of calibration.

To overcome this obstacle: Before the SPC initiative, set rules for data collection and analysis. Criteria should include the least number of significant digits for the measurement system, how much error (including gauge Repeatability and Reproducibility, bias, and linearity studies) is acceptable, calibration frequency for measurement instruments, rules for determination of outliers, and which actions to take with outliers. Sampling practices must be evaluated to prove rationality, and the sampling frequency must be sufficient to detect shifts in the process.


Populations and Sampling

A population consists of all the possible elements or items associated with a situation; for example, all trout living in a lake. A sample refers to a portion of those elements or items. It is cost prohibitive to evaluate every member of a population and, in the case of destructive testing, may be impossible. For these reasons, manufacturers rely on sampling their data to cost-effectively make inferences about the population without measuring each piece.

  • Effective sampling plans must be representative of the population being studied.
  • In most cases, sampling plans need to be random and unbiased.
  • Sampling frequency and subgroup size are also crucial to a successful sampling plan.

Rational Sampling and Subgrouping

Rational samples are taken with regard to the way the process output is measured (i.e., what, where, how, and when it is measured). Samples must be taken frequently enough to monitor any changes in the process. Samples should be selected with the goal of keeping the process stream intact. That is, in the context of manufacturing, a stream consists of a single part, process, and feature combination. Mixing any one of these parameters introduces ambiguity into the analysis. Odd sample sizes (3 and 5 are very common) are recommended because they have a natural median.

The correct sampling frequency depends on how fast the process is changing. To be representative of the population, samples must be taken often enough to catch any expected changes in the process, but with sufficient time between samples to display variation. Frequencies are usually defined in measurements of time (e.g., every 30 minutes, hourly, daily) but may also be defined using counts (e.g., every 100th product).

After the data have been sampled rationally, they must be subgrouped rationally as well. A rational subgroup contains parts that can be produced without any process adjustments – typically consecutively produced parts. Such a subgroup has little possibility of assignable cause variation within the subgroup. If only common cause variation exists within the samples, then any abnormal differences within or between the subgroups are attributable to assignable cause variation. Process streams should not be mixed within a subgroup. If the subgroup includes output of two or more process streams and each stream cannot be identified, then the sampling is not rational.

The subgroup size determines the sensitivity of a chart. As the sample size increases, the plotted statistic becomes more sensitive. That is, charts can detect smaller process shifts as the sample size increases.
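A short sketch of why sensitivity grows with subgroup size: the standard error of a subgroup mean is σ/√n, so the 3-sigma limits on the Xbar chart tighten as n increases and smaller shifts become detectable. The σ value below is illustrative.

```python
# Sketch: sensitivity of the Xbar chart vs. subgroup size. The standard
# error of a subgroup mean is sigma / sqrt(n), so the 3-sigma limits on
# the chart tighten as n grows. The sigma value is made up.
import math

sigma = 0.1  # standard deviation of individual values

for n in (1, 3, 5, 9):
    std_err = sigma / math.sqrt(n)
    print(f"subgroup size {n}: chart limits at +/-{3 * std_err:.4f} around the mean")
```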

Data must sometimes be grouped in subgroups of one. Subgroup size should be one when process adjustments or raw material changes must be made with each part or when only one value represents the monitored condition (e.g., daily yield, past week’s overtime). Subgroup size should also be one when sampling a known homogeneous batch.

In Advanced Topics in Statistical Process Control, Donald Wheeler suggests the following subgrouping principles:

  • Never knowingly subgroup unlike things together.
  • Minimize the variation within each subgroup.
  • Maximize the opportunity for variation between the subgroups.
  • Average across noise, not across signals.
  • Treat the chart in accordance with the use of the data.
  • Establish standard sampling procedures.

Random vs. Biased Sampling

The purpose of a sample is to accurately represent the population. Statistical formulas that are used to estimate populations are based on the premise that the samples are random. In a random sample, every item in the population has an equal chance of being selected. A sample has bias when some of the items in a population have a greater chance of being sampled than others.
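The difference can be sketched with Python's standard library; the lot of 1,000 numbered parts below is made up:

```python
# Sketch: random vs. biased sampling from a lot of 1,000 numbered parts.
# Part IDs are made up; only the standard library is used.
import random

random.seed(7)
lot = list(range(1000))  # the whole population of parts

# Random sample: every part has an equal chance of selection.
random_sample = random.sample(lot, 50)

# Biased sample: only the 100 parts at the top of the bin can be chosen,
# so most of the population has zero chance of selection.
biased_sample = random.sample(lot[:100], 50)
```

Statistics computed from the biased sample describe the top of the bin, not the lot, which is exactly the failure mode the pie example below illustrates.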

Example: Sampling pies

Suppose you are a taster in a pie factory. If a day’s production is one pie, then that pie is the population. To evaluate the population, you would need to eat the entire pie. However, you’d then be left with no pie to sell. A more effective option, assuming a uniform crust and homogeneous filling, would be to slice the pie into 12 equal sections and eat only one slice. By eating this sample slice, you can evaluate the quality of the entire pie and still be left with slices to sell.

If production increases to several pies per day, you may continue eating one slice from a pie and may not sample every pie. If you add a second shift or a second variety of pie, you would need to collect subgroups from these new sources of variation.

Imagine that you always take a sample slice from the same slice location for the pie samples. It may be possible that the location of that slice as the pie moves through the oven allows it to be perfectly cooked while the other side of the pie is slightly undercooked. This is another source of variation that needs to be considered with sampling. A true random sample would be one that is taken from different or random areas of each sampled pie.

5 Ws and 2 Hs of Sampling

Who will be collecting the data? Evaluate the abilities of the operator who collects the data. How much time does the operator have? Does the operator have adequate resources to collect the data?

What is to be measured? Focus on important characteristics. Remember that it costs money to sample, so you should focus on the characteristics that are critical to controlling the process or key features that measure product conformity.

Where or at what point in the process will the sample be taken? The sample should be taken early enough in the process to allow the data to be used for process control.

When will the process be sampled? Samples must be taken often enough to reflect shifts in the process. A good rule of thumb is to sample two subgroups between process shifts.

Why is this sample being taken? Will the data be used for product control or process control? What question(s) are you trying to answer with the data?

How will the data be collected? Will samples be measured or evaluated manually, or will the data be retrieved from an automated measurement source?

How many samples will be taken? The sample quantity should be adequate for control without being too large.

Attributes (Defects/Defectives)

The discussion so far has centered on the benefits of measuring variables data. But in many situations, there is no measurement value, only a pass/fail rating or a defect count. Even so, attribute data can also be plotted on control charts and be vital to understanding process control. There are two distinct types of attribute data: defects and defectives.

Defects

Defects data, also known as counts data, are used to describe data collection situations in which the number of occurrences within a given unit is counted. An occurrence may be a defect, an observation, or an event. A unit is the region in which defects can be found, sometimes called the area of opportunity. A unit may be a batch of parts, a given surface area or distance, a window of time, or any domain of observation.

For example, suppose the number of weave flaws is counted on a bolt of fabric. The bolt represents a unit, and the weave flaws represent occurrences. There might be an unlimited number of types of flaws on a given bolt of fabric. Some flaws might be more severe than others. A flaw might or might not cause the bolt to be scrapped. Consecutively produced bolts might or might not be of uniform size.

Defectives

Defectives data, also known as go/no-go or pass/fail data, are used to describe data collection situations in which the unit either does or does not conform.

For example, light bulbs are tested in lots of 100. If a bulb lights up, it conforms and is accepted. If the bulb does not light, it is nonconforming. Or consider a filling operation. If a container is filled below the minimum weight, it is defective. Anything over the minimum weight is accepted. Either the fill volume meets the minimum requirements, or it does not.
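These two data types lead to different control charts. As a sketch (the data and function names are hypothetical, not a standard API), the usual 3-sigma limits are c̄ ± 3√c̄ for defect counts (c chart) and p̄ ± 3√(p̄(1 − p̄)/n) for fraction defective (p chart):

```python
import math

def c_chart_limits(counts):
    """3-sigma limits for a c chart (defect counts per equal-size unit)."""
    c_bar = sum(counts) / len(counts)
    spread = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - spread), c_bar, c_bar + spread

def p_chart_limits(defectives, n):
    """3-sigma limits for a p chart (fraction defective, constant lot size n)."""
    p_bar = sum(defectives) / (len(defectives) * n)
    spread = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - spread), p_bar, p_bar + spread

# Hypothetical data: weave flaws per bolt, and failed bulbs per lot of 100
flaw_counts = [3, 5, 2, 4, 6, 3, 4]
failed_bulbs = [2, 1, 3, 0, 2, 4, 2]

print(c_chart_limits(flaw_counts))
print(p_chart_limits(failed_bulbs, n=100))
```

Note that the lower limit is floored at zero, since a negative count or fraction is meaningless.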

Process Behavior and Control

The terms in control and out of control are typically used when referring to a stable or unstable process. A process is in control (stable) when its average and standard deviation are known and predictable. A process is out of control (unstable) when either the average or the standard deviation is changing or unpredictable.

  • In control: Stable, predictable, consistent, unchanging
  • Out of control: Unstable, unpredictable, inconsistent, changing

In Control

An in-control process has many benefits:

  • Scrap and rework estimates can be made prior to production.
  • Machine settings can be adjusted to optimize throughput.
  • Engineers can incorporate statistical tolerance into their drawings, increasing component tolerances without compromising assembly performance.
  • Product designs can be statistically modeled to accurately predict fit and performance yields prior to prototype assembly.
  • Machine utilization can be optimized (e.g., high-precision machines and resources will not be wasted on manufacturing low-precision dimensions).
  • Process-improvement resources will be better spent.

Remember, being in control does not mean that the process is within specification. A process can be extremely stable while consistently producing bad product.

Out of Control

A process is usually judged to be out of control based on five commonly used control chart rules. These rules signal a change in either the process average or the variation.

  1. One or more points are beyond the 3-sigma control limits.
  2. Eight or more consecutive points fall on the same side of the centerline.
  3. Four out of five consecutive points fall beyond the 1-sigma line on the same side of the centerline (in zone B or beyond).
  4. Six or more points in a row steadily increase or decrease.
  5. Two out of three consecutive points fall beyond the 2-sigma line on the same side of the centerline (in zone A or beyond).
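A few of these rules can be sketched in code. The helper below is illustrative only (the function name, the sample data, and the choice of three rules are assumptions, not a standard API):

```python
def rule_violations(points, center, sigma):
    """Flag three common control chart rules on a series of points.
    Returns (rule_name, index) pairs; the index marks the last point of a run."""
    out = []
    for i, x in enumerate(points):
        # Rule 1: a point beyond the 3-sigma control limits
        if abs(x - center) > 3 * sigma:
            out.append(("beyond_limits", i))
        # Rule 2: eight or more consecutive points on one side of the centerline
        if i >= 7:
            run = points[i - 7:i + 1]
            if all(p > center for p in run) or all(p < center for p in run):
                out.append(("run_of_8", i))
        # Rule 4: six or more points in a row steadily increasing or decreasing
        if i >= 5:
            run = points[i - 5:i + 1]
            diffs = [b - a for a, b in zip(run, run[1:])]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                out.append(("trend_of_6", i))
    return out

# Hypothetical series: one spike beyond the limits, then a steady upward drift
data = [0.1, -0.2, 0.3, 3.6, 0.2, 0.5, 0.9, 1.3, 1.8, 2.1, 2.4]
print(rule_violations(data, center=0.0, sigma=1.0))
```

In a real system the centerline and sigma would come from the chart's own data, not be passed in by hand.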

Even an out-of-control process can reveal useful information. By using SPC to measure out-of-control processes, you can do the following:

  • Detect both unwanted and desirable process changes.
  • Prove whether a process change resulted in an improvement.
  • Determine when to make a process change.
  • Verify measurement system improvements.

Control charts, sometimes called process behavior charts, are tools to determine whether a process is stable or unstable.

Understanding Process Variation

William Edwards Deming (1900-1993) was an important contributor to statistical process control and its use in manufacturing. According to the American Society for Quality (ASQ), his 14 key points on quality management are a core part of modern quality management programs.

Understanding process variation is an integral aspect of using Statistical Process Control (SPC) to improve your manufacturing processes. Dr. Deming’s first principle states, “The central problem in lack of quality is the failure of management to understand variation.” Only after management understands variation can a manufacturer succeed in implementing Dr. Deming’s second principle: “It is management’s responsibility to know whether the problems are in the system or in the behavior of the people.”

Types of Process Variation

There are two types of process variation:

  • Common cause variation is inherent to the system. This variation can be changed only by improving the equipment or changing the work procedures; the operator has little influence over it.
  • Assignable cause variation comes from sources outside of the system. This variation can occur because of operator error, use of improper tooling, equipment malfunction, raw material problems, or any other abnormal disruptive inputs.

The goal of SPC is to understand the difference between these two types of process variation—and to react only to assignable cause variation. Processes that show primarily common cause variation are, by definition, in control and running as well as possible.

Control versus capability

Note that keeping a process in control doesn’t mean that the product is acceptable; the system must also be capable of making acceptable products. Control and capability are different concepts.

SPC uses statistical tools—such as control charts—to identify process variations. Special cause variations—those outside the standard or expected variation—are identified and their causes need to be eliminated or at least understood.

Example of special cause variation

Suppose you drive to work each day. Your path has inherent or common variations, such as traffic lights. But suppose there is a railroad crossing that causes you to be 30 minutes late for work. That day’s commute would be special variation, and the railroad crossing would be the assignable cause.

As a result of understanding and reducing or eliminating assignable cause variations (perhaps there is a route with no railroad crossings), processes can be kept in control and continually improved. Adjusting an in-control process when there is no identified need is called tampering and only increases the variation of the system.

SPC Charting Examples

Predictable Mean and Predictable Variation

Unpredictable Mean and Predictable Variation

Predictable Mean and Unpredictable Variation

Chaos

SPC Control Charts

All control charts have three common elements:

  • Plot points: Plot points usually represent individual measurements, averages, standard deviations, or ranges.
  • Centerline: The centerline is usually (but not always) the average of the points plotted on the chart.
  • Control limits: Control limits represent the amount of variability in the process.

The Four Foundations of Shewhart’s Control Charts

There are four foundational guidelines to Shewhart statistical process control charts:

  1. Shewhart statistical process control charts always use control limits that are set to 3 sigma units on either side of the central line. These 3-sigma limits are always based on the chart’s data and define when action should be taken on the process. Control limits are never based on any calculation using the specification limits. Specification limits are the customer requirements and define how to treat the product, not the process.
  2. Always use an average dispersion statistic or a median dispersion statistic when computing 3-sigma control limits. Using the average or median of several dispersion statistics increases the robustness of the chart.
  3. The conceptual foundation of Shewhart’s charts is the notion of rational sampling and subgrouping.
  4. Control charts are effective only to the extent that the organization can effectively use the knowledge gained to take action.
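Foundations 1 and 2 can be illustrated with a short sketch: Xbar-R control limits computed from the average range, using the widely published Shewhart constants for subgroups of five. The data and function name are hypothetical:

```python
# Shewhart chart constants for subgroup size n = 5 (from standard SPC tables)
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Centerlines and 3-sigma control limits for Xbar and R charts,
    computed from equal-size subgroups via the average range (foundation 2)."""
    means = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbar_bar = sum(means) / len(means)
    r_bar = sum(ranges) / len(ranges)
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar),
        "range": (D3 * r_bar, r_bar, D4 * r_bar),
    }

# Hypothetical measurements: 4 subgroups of 5 parts each
groups = [
    [10.1, 9.8, 10.0, 10.2, 9.9],
    [10.0, 10.1, 9.7, 10.3, 10.0],
    [9.9, 10.2, 10.1, 9.8, 10.0],
    [10.2, 10.0, 9.9, 10.1, 9.8],
]
print(xbar_r_limits(groups))
```

Notice that the specification limits appear nowhere in this calculation; the limits come entirely from the chart's own data, as foundation 1 requires.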

Why Use SPC in Manufacturing?

Today’s consumers expect the best quality products at the lowest price. Why do manufacturers use SPC? Because statistical process control can help you meet both of these demands.

By using statistical process control, manufacturers can move from a detection approach to a prevention approach, reducing or eliminating the need to rely on sorting or inspection. SPC can increase productivity, reduce waste, and reduce the risk of shipping nonconforming products.

Statistical Process Control Reduces Scrap

For many years, the term quality control meant inspecting to remove nonconforming products. Products are produced, then inspected to determine whether they are fit to be shipped to the customer. Products that aren’t acceptable are either scrapped or reworked. Sorting products is not only expensive—you’re basically paying one employee to make the product and another to make sure that the product is right—it’s also not very accurate. Studies have shown that 100% inspection is approximately 80% effective.

Statistical process control helps manufacturers escape this inefficient cycle. SPC leads to a system of preventing nonconforming product during the production process instead of waiting until products are complete to determine whether they are acceptable. This reduces waste, increases productivity, makes product quality more consistent, and reduces the risk of shipping nonconforming products.

When statistical process control is properly implemented, manufacturers foster an environment in which operators are empowered to make decisions about processes. In this way, processes—and product quality—can be continuously improved.

SPC is a powerful tool—but success depends on regular and proper application. Management must support its implementation through trust and education of employees and a commitment to supply the necessary resources.

Statistical Process Control FAQs

Still have questions about statistical process control (SPC)? Click the links below to locate information about popular topics.

What is SPC (statistical process control)?

Statistical Process Control (SPC) is a scientific, data-driven methodology for quality analysis and improvement. In manufacturing, SPC is an industry-standard methodology for measuring and controlling quality during the manufacturing process.

LEARN MORE: WHAT IS SPC?

What are the origins of SPC?

Dr. Walter A. Shewhart (1891–1967), a specialist in the use of statistical methods, was responsible for the application of statistical methods to process control. Up until Shewhart, quality control methods were focused on inspecting finished goods and sorting out the nonconforming product. As an alternative to inspection, Shewhart introduced the concept of continuous inspection during production and plotting the results on a time-ordered graph that we now know as a control chart.

LEARN MORE: SPC 101

How does SPC relate to quality control in manufacturing?

By using statistical process control, manufacturers can move from a detection approach to a prevention approach, reducing or eliminating the need to rely on sorting or inspection. SPC can increase productivity, reduce waste, and reduce the risk of shipping nonconforming products.

LEARN MORE: WHY USE SPC IN MANUFACTURING?

How can I implement an SPC measurement system?

Control charts are used to determine whether a process is stable or unstable. However, using statistical process control just to “put out fires”—finding an out-of-control point on a control chart and then determining and removing the assignable cause—is not the same as creating continuous improvement. SPC can be fully realized only when you use it to improve processes and reduce variation.

LEARN MORE: STATISTICAL PROCESS CONTROL IMPLEMENTATION

What are statistical process control limits?

Control limits are calculated from the process itself. Because control limits show how the process is performing, they are also referred to as the “voice of the process.” Control limits show how the process is expected to perform; they show the variation within the system or the range of the product that the process creates.

LEARN MORE: SPECIFICATION AND CONTROL LIMITS

What are specification limits?

Specification limits are boundaries set by a customer, engineering, or management to designate where the product must perform. Specification limits are also referred to as the “voice of the customer” because they represent the results that the customer requires. If a product is out of specification, it is nonconforming and unacceptable to the customer.

LEARN MORE: SPECIFICATION AND CONTROL LIMITS

What are Shewhart statistical process control charts?

All control charts have three common elements:

  • Plot points: Plot points usually represent individual measurements, averages, standard deviations, or ranges.
  • Centerline: The centerline is usually (but not always) the average of the points plotted on the chart.
  • Control limits: Control limits represent the amount of variability in the process.

There are four foundational guidelines to Shewhart statistical process control charts.

LEARN MORE: SPC CONTROL CHARTS

What’s the best use of SPC control charts?

Control charts are used to determine whether a process is stable or unstable. There are many types of control charts that can be used to fit the nature of different types of data streams and sampling methods.

LEARN MORE: STATISTICAL PROCESS CONTROL (SPC) IMPLEMENTATION

How should I use SPC charts to determine process capability?

Capability is calculated from existing data but can be used as a prediction of future performance. However, the capability results must come from an in-control process if the results are to be used to predict the process’s behavior in the future. The most commonly used measures of capability are Cp, Cpk, Pp, and Ppk.

LEARN MORE: PROCESS CAPABILITY
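As a rough sketch of the standard formulas Cp = (USL − LSL) / 6σ and Cpk = min(USL − μ, μ − LSL) / 3σ (the data here are hypothetical; a production system would estimate sigma from an in-control chart rather than a raw sample):

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Cp and Cpk from the sample mean and standard deviation.
    Valid as a prediction only when the process is in control."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical fill weights with specification limits 9.0 to 11.0
weights = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
print(cp_cpk(weights, lsl=9.0, usl=11.0))
```

For a perfectly centered process, as here, Cp and Cpk coincide; Cpk drops below Cp as the mean drifts toward either specification limit.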

Why do SPC initiatives fail, and how can I help ours succeed?

Statistical process control can help manufacturers achieve continuous process improvement—when it is implemented properly. Watch out for obstacles that can sideline your SPC efforts.

LEARN MORE: OVERCOMING OBSTACLES TO EFFECTIVE SPC

Statistical Process Control (SPC) Implementation

Statistical process control is certainly not the only technique used to improve processes. But for our purposes here, we will focus on two of the tools used most in SPC:

  • Histograms
  • Control charts

LEARN MORE: SPC TOOLS FOR ANALYSIS

Histogram

Histograms can provide a quick view of process variation and are used to plot frequency distributions.

Control chart

Control charts are the best-known tools associated with SPC.

Control charts are used to determine whether a process is stable or unstable. There are many types of control charts that can be used to fit the nature of different types of data streams and sampling methods.

Below are examples of the most commonly used control charts:

  • Variable data
    • Xbar-S: Xbar and standard deviation
    • Xbar-R: Xbar and range chart
    • IX-MR: Individual X and moving range chart
  • Attribute data
    • c: Defect count
    • u: Defect count, normalized to sample size
    • p: Proportion defective
    • np: Proportion defective multiplied by sample size

Control charts are discussed further in the Process Behavior and SPC Control Charts section as well as in our Definitive Guide to SPC Charts.

LEARN MORE: DEFINITIVE GUIDE TO SPC CHARTS

Although SPC charts are revealing, today’s manufacturers increasingly recognize the benefits of moving away from manual SPC—conducted by recording data on paper and then running analysis via offline spreadsheets or statistical software—and instead using real-time SPC software.

SPC Software

Quality control software for manufacturing offers multiple benefits:

  • Surface relevant information more quickly
  • Filter data according to role (e.g., operator, quality manager) and location (e.g., the lines being worked on that day)
  • Faster, focused, and more detailed analysis
  • Additional means of evaluating data (e.g., grading and stream summary)
  • Directed alerts and notifications
  • Mobile, enterprise-wide visibility into operations

InfinityQS is the leading provider of SPC software and services for manufacturers, providing quality intelligence solutions that work in the cloud or on-premises, across the globe.

LEARN MORE: SPC SOFTWARE FOR MANUFACTURING

Remember: Using statistical process control just to “put out fires”—finding an out-of-control point on a control chart and then determining and removing the assignable cause—is not the same as creating continuous improvement. SPC can be fully realized only when you use it to improve processes and reduce variation.

The Problem with Tampering

When a process is centered on target and in a state of statistical control, any adjustment to the process only increases variation. Adjusting a process that is in control is referred to as tampering.

The Funnel Experiment and Deming’s Four Rules

The classic analysis of the effects of tampering is Deming’s Funnel Experiment. In this experiment, participants drop marbles through a funnel suspended over a target. The funnel represents the process, the marble drop location is the feature being produced, and the target is the customer specification.

Deming described four approaches—also referred to as rules—that encompass the typical ways in which the experiment participants tamper with the funnel (Out of Crisis, 1986, p. 328).

Rule 1: No adjustment

The optimal approach is to leave the funnel fixed and aimed at the target, without making any adjustments. When a process is stable, centered, and shows only the inherent variation, there is no reason to make an adjustment.

The takeaway: Before attempting any process adjustment, you must gather enough data to make sure you understand the normal behavior of the process. Use a control chart to track variations, and then adjust the process only when special variations occur.

Rule 2: Adjustment from last position

Sometimes referred to as the “human nature” approach: some participants move the funnel after each drop to try to compensate for the previous drop’s variation. In this approach, the funnel is moved the exact negative distance of the drop’s error. Compensating for the “error” of each drop might keep the average on target, but it doubles the variation.

The takeaway: When participants compensate for error, the variation doubles—and remember, variation is the true issue. This problem is prevalent in gauge calibration when manufacturers adjust a gauge after taking one standard measurement.

Rule 3: Adjustment from target

Participants trying to take a “logical” approach also move the funnel to try to compensate for the previous drop. But in this instance, the funnel is moved not based on its last location, but on its distance from the target. For example, if the measurement of the previous drop was 5 units above the target, participants move the funnel 5 units below the target.

The takeaway: Although this approach seems logical, it results in an oscillating process.

Rule 4: Adjustment from last drop

In this approach, participants move the funnel to point at the previous drop rather than the target. In other words, at drop n, they set the funnel over the location of the n-1 drop. As you might expect, this approach creates a pattern that moves steadily away from the target.

The takeaway: Believe it or not, this approach occurs in calibration scenarios when one product is used to set up for the next production. This issue is typical in workplaces where on-the-job training is prevalent.
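A small Monte Carlo sketch (assumed drop counts and noise level) shows why Rule 2 is costly: compensating for each drop's error roughly doubles the variance relative to leaving the funnel alone:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate(rule2, drops=100_000, sigma=1.0, target=0.0):
    """Simulate funnel drops. Rule 1 leaves the funnel on target;
    Rule 2 moves it by the negative of the last drop's error."""
    funnel = target
    results = []
    for _ in range(drops):
        x = funnel + random.gauss(0, sigma)  # drop lands with random error
        results.append(x)
        if rule2:
            funnel -= (x - target)  # compensate for the last drop's error
    return results

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

v1 = variance(simulate(rule2=False))  # Rule 1: no adjustment
v2 = variance(simulate(rule2=True))   # Rule 2: adjust from last position
print(v2 / v1)  # close to 2: compensating doubles the variance
```

Under Rule 2 each result is the sum of two independent errors (the new one and the negated previous one), which is exactly where the doubled variance comes from.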

The Funnel Experiment and Deming’s Four Rules

The classic analysis of the effects of tampering is Deming’s Funnel Experiment. In this experiment, participants drop marbles through a funnel suspended over a target. The funnel represents the process, the marble drop location is the feature being produced, and the target is the customer specification.

Deming described four approaches—also referred to as rules—that encompass the typical ways in which the experiment participants tamper with the funnel (Out of Crisis, 1986, p. 328).

Rule 1: No adjustment

The optimal approach is to leave the funnel fixed and aimed at the target, without making any adjustments. When a process is stable, centered, and shows only the inherent variation, there is no reason to make an adjustment.

The takeaway: Before attempting any process adjustment, you must gather enough data to make sure you understand the normal behavior of the process. Use a control chart to track variations, and then adjust the process only when special variations occur.
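The idea of "adjust only on special-cause signals" can be sketched in a few lines. The snippet below is illustrative, not from this guide: it assumes an individuals chart with limits at the mean ± 3 standard deviations (as described earlier), computes those limits from in-control baseline data, and flags only the new readings that fall outside them. The function names and sample values are hypothetical.

```python
import statistics

def control_limits(data):
    """Return (lcl, mean, ucl) using mean +/- 3 standard deviations,
    calculated from the process data itself (the 'voice of the process')."""
    mean = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(points, lcl, ucl):
    """Return only the readings outside the control limits --
    the candidates for investigation and adjustment."""
    return [x for x in points if x < lcl or x > ucl]

# Hypothetical in-control baseline measurements
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0,
            9.9, 10.2, 10.0, 10.1, 9.8, 10.0, 10.1, 9.9, 10.0, 10.2]
lcl, mean, ucl = control_limits(baseline)
print(out_of_control([10.0, 10.1, 13.5, 9.9], lcl, ucl))
```

Note that the limits come from baseline data known to be in control; points within the limits are left alone, exactly as Rule 1 prescribes.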

Rule 2: Adjustment from last position

Sometimes referred to as the “human nature” approach: some participants move the funnel after each drop to try to compensate for the previous drop’s variation. In this approach, the funnel is moved the exact negative distance of the drop. Compensating for the “error” of the drop might improve the on-target average, but it doubles the variation.

The takeaway: When participants compensate for error, the variation doubles—and remember, variation is the true issue. This problem is prevalent in gauge calibration when manufacturers adjust a gauge after a single measurement of a calibration standard.

Rule 3: Adjustment from target

Participants trying to take a “logical” approach also move the funnel to try to compensate for the previous drop. But in this instance, the funnel is moved not based on its last location, but on its distance from the target. For example, if the measurement of the previous drop was 5 units above the target, participants move the funnel 5 units below the target.

The takeaway: Although this approach seems logical, it results in an oscillating process whose swings about the target grow steadily wider.

Rule 4: Adjustment from last drop

In this approach, participants move the funnel to point at the previous drop rather than the target. In other words, at drop n, they set the funnel over the location of drop n−1. As you might expect, this approach creates a pattern that drifts steadily away from the target.

The takeaway: Believe it or not, this approach occurs in calibration scenarios when one product is used to set up the next production run. This issue is typical in workplaces where on-the-job training is prevalent—each worker teaches the next based on their own last result.
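The four rules can be made concrete with a short Monte Carlo sketch. The simulation below is illustrative, not from this guide: it assumes a target of 0 and unit-variance noise, encodes each rule as a different update of the funnel position, and compares the spread of the resulting drops. The function name and parameters are hypothetical.

```python
import random
import statistics

def simulate(rule, n=20000, seed=42):
    """Drop n marbles through a funnel at position p over target 0,
    updating p after each drop according to one of Deming's four rules."""
    rng = random.Random(seed)
    p = 0.0
    drops = []
    for _ in range(n):
        z = p + rng.gauss(0, 1)   # where this marble lands
        drops.append(z)
        if rule == 1:             # Rule 1: leave the funnel alone
            pass
        elif rule == 2:           # Rule 2: shift by -(drop) from last position
            p = p - z
        elif rule == 3:           # Rule 3: set funnel opposite the drop, measured from target
            p = -z
        elif rule == 4:           # Rule 4: set funnel over the previous drop
            p = z
    return drops

for rule in (1, 2, 3, 4):
    print(f"Rule {rule}: variance = {statistics.pvariance(simulate(rule)):10.1f}")
```

Running this reproduces the takeaways above: Rule 1 keeps the variance at its inherent level, Rule 2 roughly doubles it, and Rules 3 and 4 blow it up—oscillating and drifting away from the target, respectively.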
