The goal of robustness studies is to demonstrate that our processes will be successful upon implementation in the field when they are exposed to anticipated noise factors. There are several assumptions and underlying concepts that need to be understood when setting out to conduct a robustness study. Carefully considering these principles, three distinct types of designs emerge that address robustness:

**I.** Having settled on process settings, we desire to demonstrate the system is insensitive to external noise-factor variation, i.e., robust against Z factor influence.

**II.** Given we may have variation between our selected process settings and the actual factor conditions that may be seen in the field, we wish to find settings that are insensitive to this variation. In other words, we may set our controlled X factors, but these factors wander from their set points and cause variation in our results. Our goal is to achieve our desired Y values while minimizing the variation stemming from imperfect process factor control. The impact of external noise factors (Z’s) is not explored in this type of study.

**III.** Given a system having controllable factors (X’s) and non-controllable noise factors (Z’s) impacting one or more desirable properties (Y’s), we wish to find the ideal settings for the controllable factors that simultaneously maximize the properties of interest, while minimizing the impact of variation from both types of noise factors.

This blog entry discusses the second type of design. Read on and stay tuned for the third part!

In this type of analysis, we aim to find the process factor settings that satisfy our requirements and are the most insensitive to expected variation in those settings. For example, we may decide baking temperature and baking time impact the rise height of bread, per the results of a lab-scale DOE. But we anticipate that on an industrial scale, changes in conveyor speed could affect baking time, and large ovens may cycle in temperature, giving rise to variation.

We can use propagation of error (POE) in our time-temperature DOE to find the sweet spot where such fluctuations in process factors yield the smallest amount of variation in results, i.e., the most robust settings for success in the field.
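To make the POE idea concrete, here is a minimal Python sketch. The model coefficients, the assumed factor standard deviations, and the residual standard deviation are all invented for illustration; in practice they would come from the fitted lab-scale DOE.

```python
import math

# Hypothetical fitted model for bread rise (coded units), standing in for
# what a lab-scale time-temperature DOE would provide:
#   rise = 50 + 3*T + 2*t + 1.5*T*t - 4*T**2
def rise(T, t):
    return 50 + 3*T + 2*t + 1.5*T*t - 4*T**2

def poe_sd(T, t, sd_T=0.1, sd_t=0.1, sd_resid=1.0):
    """Transmitted standard deviation of the response via first-order POE:
    var(Y) ~= (dY/dT)^2 var(T) + (dY/dt)^2 var(t) + var(residual)."""
    dT = 3 + 1.5*t - 8*T      # partial derivative w.r.t. temperature
    dt = 2 + 1.5*T            # partial derivative w.r.t. time
    return math.sqrt(dT**2 * sd_T**2 + dt**2 * sd_t**2 + sd_resid**2)

# Grid-search the coded factor space for the flattest (most robust) settings
grid = [i / 20 - 1 for i in range(41)]   # -1.00 to +1.00 in steps of 0.05
best = min((poe_sd(T, t), T, t) for T in grid for t in grid)
print(f"Most robust settings: T={best[1]:+.2f}, t={best[2]:+.2f}, "
      f"transmitted sd={best[0]:.3f}")
```

Note that the minimum-POE point may sit on a boundary of the explored region (here the time factor runs to its low end), so the search should respect the same factor ranges studied in the DOE.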

For additional detail on using POE as a tool for robust design see Pat Whitcomb’s 2020 Overview of Robust Design, Propagation of Error and Tolerance Analysis.

The third design type will be discussed in the next blog post.

Before selecting a design, the assumptions and underlying concepts behind robustness studies need to be understood.

First, it is necessary that the noise factors be identified and can be controlled during the robustness study itself. Noise factors that cannot be controlled cannot be evaluated. They will merely serve to increase variation within the study and exacerbate the unexplained variation encountered, i.e., they inflate the residual (error) term.

Second, it is necessary to consider the type of noise variation being studied. Our modeling will be built upon X’s – the factors we control within our process – and Z’s – noise factors external to our system that we will not be able to control in the field. Variation around the chosen set points for X factors creates one source of noise that influences our responses (Y’s). Variation of identified external noise factors (Z’s) creates another source of influence. These Z factors, however, do not have a “set point.” In the field, they appear at random and cause variation in our process responses.

Some DOE experts differentiate the two sources of variation (variation in X’s versus variation in Z’s) by using the term robustness for stability against X-factor variation and ruggedness for stability against Z-factor variation. That differentiation is not universal, and nowadays the term ruggedness is less common, so I will refer to both concepts under the umbrella term “robust design.”

Carefully considering these principles, the three distinct types of robustness designs enumerated above emerge.

DOE tools for the first type of robustness study are discussed below, with more to come in future blog posts.

Here, we aim to prove that our process is robust against expected noise factors due to field conditions. For example, we may settle on baking conditions for making bread, but we now wish to demonstrate that anticipated changes in ambient temperature, ambient humidity, and flour brand do not impact the rise height of the bread.

The first thing we need to establish is our threshold of acceptance. Do we care about 0.1 mm of rise height? Or perhaps anything under 10 mm (1 cm) of change from baseline is inconsequential. Whatever value we settle on is our response change delta (ΔY) of interest – the amount of noise-factor impact deemed to be alarming. If a noise factor causes a change smaller than that value, we accept that we may not detect it in this DOE.

We also need some assessment of the natural variation in the system, i.e., how much variation in results is seen when repeatedly running the same conditions. This is our standard deviation, sigma (σ). Note that this value would be reported in the fit statistics if a prior DOE had been run on this system (generally the case prior to doing a robustness study).

For this type of robustness study, a resolution III two-level factorial design suffices, provided we have sufficient power (>80%) in the design, which is driven by the delta-to-sigma ratio (ΔY/σ). Power is the probability of detecting an effect of the chosen ΔY value, if indeed it exists. (Stat-Ease software conveniently calculates power when you construct factorial experiments using its design wizard.) If we do not have sufficient power, then more runs must be done, the best option being an upgrade to a resolution IV design. Due to their greater flexibility in number of runs, Plackett-Burman designs are another commonly used resolution III template for these types of robustness studies.
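As a rough illustration of how the ΔY/σ ratio and the run count drive power, here is a normal-approximation sketch in Python. DOE software uses exact noncentral-t calculations, so treat these numbers as ballpark figures only.

```python
import math
from statistics import NormalDist

def factorial_power(delta, sigma, n_runs, alpha=0.05):
    """Approximate power to detect an effect of size delta in a two-level
    factorial with n_runs total runs. The standard error of a factorial
    effect (difference of the high-level and low-level averages) is
    2*sigma/sqrt(n_runs). Normal approximation only - real DOE software
    uses the noncentral t distribution."""
    nd = NormalDist()
    se_effect = 2 * sigma / math.sqrt(n_runs)
    z_crit = nd.inv_cdf(1 - alpha / 2)          # two-sided significance cutoff
    return nd.cdf(delta / se_effect - z_crit)

# Example: a delta-to-sigma ratio of 2 in a 12-run template
print(round(factorial_power(delta=2.0, sigma=1.0, n_runs=12), 3))
```

Doubling the run count or widening the acceptable ΔY both raise power; if the approximation lands below the 80% rule of thumb, more runs are needed.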

The hope of course is that we find nothing of significance, i.e., none of the noise factors cause the response to change by more than our chosen ΔY. That would be the goal of this type of robustness DOE.

If we do see noise factors appearing as significant, we have a dilemma: a resolution III experiment won’t reveal whether the indicated factor is responsible or whether an interaction between two other factors is the real culprit. All we will know for sure is that the system is NOT robust against the noise factors we have evaluated, at least at the selected value for ΔY. Note, however, that if we ran a resolution IV DOE, we would have greater confidence that the indicated noise factor was the cause. But we really can’t be certain of that conclusion with anything less than a resolution V experiment.

It is for this reason that we do not recommend using a resolution III experiment for anything other than this type of robustness evaluation, where we merely wish to prove our process is insensitive to external noise factors.
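For illustration, the 12-run Plackett-Burman template mentioned above can be built from its published seed row by cyclic shifting. This sketch constructs the design and checks the balance and orthogonality that make it suitable for screening noise factors:

```python
def plackett_burman_12():
    """Build the classic 12-run Plackett-Burman design for up to 11 factors:
    11 cyclic shifts of the published seed row, plus a closing row of lows."""
    seed = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [seed[i:] + seed[:i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()

# Balance: every column has six high settings and six low settings
assert all(sum(row[j] for row in design) == 0 for j in range(11))

# Orthogonality: every pair of columns is uncorrelated, so each main
# effect is estimated independently of the others
assert all(sum(row[j] * row[k] for row in design) == 0
           for j in range(11) for k in range(j + 1, 11))
```

With fewer than 11 noise factors, the unused columns simply go unassigned (or serve as checks on the noise level of the estimates).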

For additional information, see Mark Anderson’s 2021 webinar on DOE for Ruggedness Testing.

Look forward to parts 2 and 3 of this series, coming out in April and covering the other two design scenarios.

Observing process improvement teams at Imperial Chemical Industries in the late 1940s, George Box, the prime mover for response surface methods (RSM), realized that, as a practical matter, statistical plans for experimentation must be very flexible and allow for a series of iterations. Box and other industrial statisticians continued to hone the strategy of experimentation to the point where it became standard practice for stats-savvy industrial researchers.

Via their Management and Technology Center (sadly, now defunct), Du Pont then trained legions of engineers, scientists, and quality professionals on a “Strategy of Experimentation” called “SCO” for its sequence of **s**creening, **c**haracterization and **o**ptimization. This now-proven SCO strategy of experimentation, illustrated in the flow chart below, begins with fractional two-level designs to screen for previously unknown factors. During this initial phase, experimenters seek to discover the vital few factors that create statistically significant effects of practical importance for the goal of process improvement.

The ideal DOE for screening resolves main effects free of any two-factor interactions (2FI’s) in a broad, shallow two-level factorial design. I recommend the “resolution IV” choices color-coded yellow on our “Regular Two-Level” builder (shown below). To get a handy (pun intended) primer on resolution, watch at least the first part of this Institute of Quality and Reliability YouTube video on Fractional Factorial Designs, Confounding and Resolution Codes.
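What resolution IV buys can be verified directly. In this sketch, a 2^(4-1) half-fraction with generator D = ABC keeps every main effect orthogonal to every two-factor interaction, while some 2FI pairs remain aliased with each other:

```python
from itertools import product, combinations

# 2^(4-1) resolution IV design: full 2^3 in A, B, C, with generator D = ABC
runs = [{"A": a, "B": b, "C": c, "D": a * b * c}
        for a, b, c in product((-1, 1), repeat=3)]

def column(effect):
    """Contrast column for an effect named by its factors, e.g. 'AB'."""
    col = []
    for run in runs:
        v = 1
        for factor in effect:
            v *= run[factor]
        col.append(v)
    return col

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

mains = ["A", "B", "C", "D"]
twofis = ["".join(pair) for pair in combinations(mains, 2)]

# Resolution IV: main effects are clear of all two-factor interactions...
assert all(dot(column(m), column(fi)) == 0 for m in mains for fi in twofis)

# ...but 2FIs alias each other in pairs (AB = CD, since I = ABCD)
assert dot(column("AB"), column("CD")) == 8
```

This is why a significant effect on the AB column in such a design could just as well be CD, and why characterization calls for resolution V or better.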

If you would like to screen more than 8 factors, choose one of our unique “Min-Run Screen” designs. However, I advise you to accept the program default of adding 2 runs, which makes the experiment less susceptible to botched runs.

Stat-Ease® 360 and Design-Expert® software conveniently color-code and label different designs.

After throwing the trivial many factors off to the side (preferably by holding them fixed or blocking them out), the experimental program enters the characterization phase (the “C”), where interactions become evident. This requires a higher resolution of V or better (green Regular Two-Level or Min-Run Characterization), or possibly full (white) two-level factorial designs. Also, add center points at this stage so curvature can be detected.
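The curvature check that center points enable can be sketched as a comparison of the factorial-point mean against the center-point mean, with pure error estimated from the center replicates. The rise-height data below are invented for illustration, and commercial software refines the error estimate, so take this as a conceptual sketch only:

```python
import math
from statistics import mean, stdev

# Hypothetical bread-rise data (mm): 8 factorial runs plus 4 center points.
# Center points rising well above the factorial average hint at curvature.
factorial_y = [38, 42, 35, 45, 40, 37, 44, 39]
center_y = [47, 48, 46, 47]

def curvature_t(yf, yc):
    """t statistic for the factorial-vs-center-point mean comparison,
    with the error estimated purely from the center-point replicates
    (degrees of freedom = number of center points minus 1)."""
    s = stdev(yc)                                    # pure-error sd
    se = s * math.sqrt(1 / len(yf) + 1 / len(yc))    # se of the mean difference
    return (mean(yc) - mean(yf)) / se

t = curvature_t(factorial_y, center_y)
print(f"curvature t = {t:.2f} on {len(center_y) - 1} df")
```

A large t relative to the critical value on the pure-error degrees of freedom signals that a purely linear (plus interaction) model is inadequate, prompting augmentation to a response surface design.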

If you encounter significant curvature (per the very informative test provided in our software), use our design tools to augment your factorial design into a central composite for response surface methods (RSM). You then enter the optimization phase (the “O”).

However, if curvature is of no concern, skip to ruggedness (the “R” that finalizes the “SCOR”) and, hopefully, confirm with a low resolution (red) two-level design or a Plackett-Burman design (found under “Miscellaneous” in the “Factorial” section). Ideally you then find that your improved process can withstand field conditions. If not, then you will need to go back up to the beginning for a do-over.

The SCOR strategy, with some modification due to the nature of mixture DOE, works equally well for developing product formulations as it does for process improvement. For background, see my October 2022 blog on Strategy of Experiments for Formulations: Try Screening First!

Stat-Ease provides all the tools and training needed to deploy the SCOR strategy of experiments. For more details, watch my January webinar on YouTube. Then to master it, attend our Modern DOE for Process Optimization workshop.

Know the SCOR for a winning strategy of experiments!