Design Evaluation - Graphs - FDS Graph

The Fraction of Design Space (FDS) graph calculates the volume of the design space having a prediction variance less than or equal to a specified value.

The ratio of this volume to the total volume of the design space is the fraction of design space.

The goal is to produce a single plot showing the cumulative fraction of the design space on the x-axis (from zero to one) versus the prediction variance on the y-axis.

Stat-Ease, Inc. recommends an FDS score of at least 0.8 or 80% for exploration and optimization, and 100% for stability and robustness testing such as demonstrating the design space for quality by design (QbD) work.
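As a sketch of the underlying calculation, the following Monte Carlo estimate samples a hypothetical two-factor design space uniformly, computes the scaled prediction variance at each sampled point, and reports the fraction of the space at or below an example variance cap. The 3^2 design, full quadratic model, and cap of 0.7 are illustrative assumptions, not Stat-Ease's implementation.

```python
import numpy as np

def quad_model(x1, x2):
    """Full quadratic model terms: 1, A, B, AB, A^2, B^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical 3^2 factorial design on the coded [-1, 1]^2 region
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
X = quad_model(pts[:, 0], pts[:, 1])
XtX_inv = np.linalg.inv(X.T @ X)

# Sample the design space uniformly and compute the scaled prediction
# variance v(x) = x' (X'X)^-1 x at each sampled point.
rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(20000, 2))
M = quad_model(samples[:, 0], samples[:, 1])
v = np.einsum("ij,jk,ik->i", M, XtX_inv, M)

# FDS curve: cumulative fraction of the space (x-axis) vs sorted
# prediction variance (y-axis).
v_sorted = np.sort(v)
fraction = np.arange(1, len(v_sorted) + 1) / len(v_sorted)

# FDS score at an illustrative variance cap of 0.7: the fraction of the
# design space whose prediction variance is at or below the cap.
fds_score = np.mean(v <= 0.7)
print(round(float(fds_score), 3))
```

Plotting `fraction` against `v_sorted` reproduces the shape of the FDS graph; a flatter curve indicates more uniform prediction variance across the space.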

The FDS Graph tool provides options for evaluating the FDS as a function of four error types:

Mean - Calculates FDS based on the half-width of the confidence interval. Use the Mean error type if the goal of the experiment is to find the optimized factor settings for specific response goals.

Pred - Calculates FDS based on the half-width of the prediction interval. The Pred error type is rarely used, but applies when more precision in predicting the average of future samples is desired.

Diff - Calculates FDS based on the size of the difference between pairs of observations. Use the Diff error type as a substitute for power on response surface and mixture designs.

Tolerance - Calculates FDS based on the half-width of the tolerance interval. Use the Tolerance setting if you need to prove individual outcomes will meet specifications.

One- or two-sided intervals can be specified on the pull-down menu. Specify the type and direction to match the goals for the most critical response. The default two-sided interval is good for initial estimates of the FDS score.
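The Mean and Pred error types rest on standard OLS interval half-widths. The sketch below compares them at a single point, assuming illustrative values for sigma, alpha, the residual degrees of freedom, and the point's leverage; the formulas are textbook OLS intervals, not Stat-Ease internals.

```python
import math
from scipy.stats import t

sigma, alpha = 1.0, 0.05
df = 9 - 6          # runs minus model terms in a hypothetical design
v = 0.4             # assumed leverage x'(X'X)^-1 x at the point of interest

t_crit = t.ppf(1 - alpha / 2, df)            # two-sided critical value
ci_half = t_crit * sigma * math.sqrt(v)      # Mean: confidence interval
pi_half = t_crit * sigma * math.sqrt(1 + v)  # Pred: prediction interval

# The prediction interval is always wider, so for a given delta the Pred
# FDS score can never exceed the Mean FDS score.
print(ci_half < pi_half)  # → True
```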

There are three parameters, “delta”, “sigma”, and “alpha”, for each error type, and a fourth, “Proportion”, for the Tolerance error type.

“delta” specifies the maximum acceptable half-width (margin of error) of the respective interval for the Mean, Pred, and Tolerance error types. The best way to determine the delta is to answer the question, “plus or minus how much is an acceptable estimate?”

When the Diff error type is used, delta is the minimum desired change in the response. The best way to determine this delta is to answer the question, “how much of a change in the response is important?”

In either case a larger delta produces a higher FDS score.

“sigma” is an estimate of the standard deviation that will appear on the ANOVA. It can be obtained from previous work with this system, work from a similar system, or outright guessing. If the unexplained nuisance variation can be minimized during the experiment, then a smaller sigma can be entered to improve the FDS.

“alpha” is the significance level used throughout statistical analysis. Our default is 0.05, or 5%. It is the acceptable risk of a Type I error. A larger alpha increases the FDS. Two-sided intervals use alpha/2 whereas one-sided intervals use alpha to compute the t critical value.
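The alpha/2 versus alpha distinction can be checked directly; the degrees of freedom below are an illustrative assumption.

```python
from scipy.stats import t

alpha, df = 0.05, 10
t_two_sided = t.ppf(1 - alpha / 2, df)  # ~2.228
t_one_sided = t.ppf(1 - alpha, df)      # ~1.812

# The two-sided critical value is larger, giving wider intervals; a larger
# alpha shrinks the critical value and therefore raises the FDS score.
print(t_two_sided > t_one_sided)  # → True
```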

“Proportion” is only used for the Tolerance error type. It is the proportion of individual outcomes required to be contained by the tolerance interval.

The FDS score can be increased by building a larger design, increasing the delta, reducing the sigma, increasing the alpha and/or decreasing the Proportion.

