Issue: Volume 7, Number 5
Date: May 2007
From: Mark J. Anderson, Stat-Ease, Inc., Statistics Made Easy® Blog

Dear Experimenter,

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed the previous DOE FAQ Alert, please click on the links at the bottom of this page. If you have a question that needs answering, click the Search tab and enter the key words. This finds not only answers from previous Alerts, but also other documents posted to the Stat-Ease web site.

Feel free to forward this newsletter to your colleagues. They can subscribe by going to our web site. If this newsletter prompts you to ask your own questions about DOE, please address them to me via e-mail.

For an assortment of appetizers to get this Alert off to a good start, see these new entries on the Statistics Made Easy blog:
— Cartoon quantifies commitment issue
— Black belt master of multitasking?
— De Moivre’s insight on standard deviation — "The Most Dangerous Equation"
— Expert’s judgments skewed by biases?

Topics in the body text of this DOE FAQ Alert are headlined below (the "Expert" ones, if any, delve into statistical details).

1. FAQ: Design evaluation — why replication reduces leverage
2. FAQ: Design evaluation (Part 2) — power calculations
3. Info Alert: Using Graphical Diagnostics to Deal With Bad Data — published in April issue of Quality Engineering magazine
4. Event Alert: Talk on "A Mixture Design Planning Process"
5. Workshop Alert: "DOE for DFSS: Variation by Design" (New!)

PS. Quote for the month: Advice on computer software from Box, Hunter and Hunter. (Scroll down to the end of this e-mail to enjoy the actual quote.)


1. FAQ: Design evaluation — why replication reduces leverage

-----Original Message-----
From: New Jersey
"I use Design-Expert® to design experiments. My question regards its tools for design evaluation which provide very useful details on leverage of particular points (experimental runs). The program Help offers this explanation:

Numerical value between 0 and 1 that indicates the potential for a design point to influence the model fit. A value of 1 means that the predicted value will be forced to be exactly equal to the actual value, with zero residual. The sum of the leverage values across all cases equals the number of parameters fit by the model.
The maximum leverage an experiment can have is 1/k, where k is the number of times the experiment is replicated. Values larger than 2 times the average leverage are flagged. A high leverage point is "bad" because if there is a problem with that data point (unexpected error) that error strongly influences the model.

How does replication of a data point reduce leverage? What is the mathematical and conceptual basis for this?"

Answer (from Stat-Ease Consultant Wayne Adams):
"Leverage is a measure of how influential a point is on the model. A point with a leverage of 1.0 will force the model to exactly fit this point. So in an unreplicated design the maximum leverage is 1.0 = 1 / 1 — think of "k" as the number of times the entire design is performed. The empirical models generated from the analysis of the design are based on the average effects found in the data. Replicating a point provides a better estimate of the true average at a given factor setting. Neither point will be fit by the model; the model fits the average of the two points. Thus the leverage for the
individual points decreases."

I would like to add this very simple illustration on the value of leverage for evaluating an experiment design. Let's say that Wayne and I face off for a bowling contest. We each bowl one game and the best man wins. However, our colleague Shari Kraber observes that both our games are totally leveraged (L = 1). If only we'd both bowled two games each, then the leverage of every result would have been halved (L=0.5). Of course the more replication, the better. For example, four games each would reduce leverage to 0.25, and so forth...
— Mark
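The bowling arithmetic is easy to verify numerically. Here is a minimal sketch (my own illustration, not a Stat-Ease routine) that computes each run's leverage as the diagonal of the hat matrix, with the contest coded as a two-parameter model — one mean per bowler:

```python
import numpy as np

def leverages(X):
    """Leverage of each run: diagonal of the hat matrix H = X (X'X)^-1 X'."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)

# One game each: every score totally determines its bowler's mean,
# so each leverage is 1.0.
one_each = np.array([[1., 0.],
                     [0., 1.]])
print(leverages(one_each))

# Two games each: the model fits each bowler's average, so the
# leverage of every individual game drops to 0.5.
two_each = np.array([[1., 0.],
                     [1., 0.],
                     [0., 1.],
                     [0., 1.]])
print(leverages(two_each))
```

Replicating the whole design k times divides every leverage by k, matching the 1/k rule quoted from the program Help.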

PS. For more detail on leverage, see the March 1998 Stat-Teaser. Also refer to FAQ 1 of the September 2005 DOE FAQ Alert, both posted on the Stat-Ease web site.

(Learn more about replication and leverage by attending the three-day computer-intensive workshop "Experiment Design Made Easy." See the Stat-Ease web site for a description of this class, then link from that page to the course outline and schedule. Then, if you like, enroll online.)


2. FAQ: Design evaluation (Part 2) — power calculations

-----Original Question-----
From: Philippines

"When I make use of your software's design evaluation feature to look at the power of my experiment, it benchmarks 1/2, 1, and 2 standard deviations. Is it necessary to look at these particular signal-to-noise ratios to determine the acceptable power of the design? Is there any rationale for 80% probability of detection as a cut-off for the power of the design?"

Version 7 of Stat-Ease software introduced options under Evaluation to change the number of standard deviations used for power assessment. Even better, V7.1 of Design-Expert®* mainstreamed power calculations into the design builder. The calculation is based on this ratio (numerator/denominator):
1. Numerator — the "signal": the minimum difference in response you hope to see (if the changes you make in the factors generate it).
2. Denominator — the "noise": the actual variation (standard deviation) you expect due to process, sampling and test, after any averaging (for example, from repeated measures).

For example, I once set up an experiment aimed at improving the yield of a valuable pharmaceutical — natural vitamin E. A 1.0 percent increase netted $200,000 in added profits per year. That was a difference (signal) worth detecting. The standard deviation (noise) of the lined-out process came to 0.62, so the signal-to-noise ratio was 1.6 (= 1.0/0.62). I performed a 16-run screening design (resolution IV) on seven factors that produced good results. This was the first in a series of experiments that ultimately resulted in millions of dollars in yield improvement. Back then we did all our calculations by hand! Today it would be extremely easy to set up with Design-Expert software, which reveals that the 16-run design provides 80.6 percent power to see main effects — the goal of screening.

For more background and details on power, see "Sizing Fixed Effects for Computing Power in Experimental Designs" by Oehlert and Whitcomb. They say that "The sweet spot for power is the range from about 0.8 [80 percent] to 0.95 [95 percent]. Power below this range means there is a substantial chance of missing effects of interest, and the additional resources spent to increase power above this range give little return on investment." In other words, do not overdo an experiment when runs are very costly in terms of machine time, technician labor, material expense and so forth.
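As a rough check on the power figure cited above, here is a sketch of the textbook noncentral-t power calculation for a main effect in a two-level factorial. This is my own illustration, not Design-Expert's internal routine: I assume a two-sided 5 percent significance level and residual degrees of freedom equal to runs minus model terms, so the result only approximately matches the figure reported by the software.

```python
from scipy.stats import t, nct

def factorial_power(signal, noise, runs, model_terms, alpha=0.05):
    """Approximate power to detect a main effect of size `signal` in a
    two-level factorial with `runs` runs and `model_terms` fitted terms.

    The effect estimate (high average minus low average) has standard
    error 2*noise/sqrt(runs), so the noncentrality parameter is
    (signal/noise) * sqrt(runs) / 2.
    """
    df = runs - model_terms                 # residual degrees of freedom
    ncp = (signal / noise) * runs**0.5 / 2  # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)       # two-sided critical value
    # Probability of rejecting under the alternative (noncentral t)
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

# Vitamin-E screening example: signal/noise = 1.0/0.62, 16 runs,
# 8 model terms (intercept plus 7 main effects) -> roughly 80 percent power
print(round(factorial_power(1.0, 0.62, 16, 8), 3))
```

Raising the signal-to-noise ratio or the number of runs pushes the power toward the 0.8-0.95 "sweet spot" that Oehlert and Whitcomb recommend.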

*A free fully-functional trial of Design-Expert V7.1 software can be downloaded from


3. Info Alert: Using Graphical Diagnostics to Deal With Bad Data — published in April issue of Quality Engineering magazine

In the latest issue of Quality Engineering (Volume 19, Issue 2, April 2007) you will find, on pages 111-118, an article by my colleague Pat Whitcomb and me, "Using Graphical Diagnostics to Deal With Bad Data." The manuscript for this publication is posted on the Stat-Ease web site.


4. Event Alert: Talk on "A Mixture Design Planning Process"

At the Quality & Productivity (Q&P) Research Conference in Santa Fe, New Mexico this June 4-6, Pat Whitcomb will present "A Mixture Design Planning Process" he developed with Professor Gary W. Oehlert of the University of Minnesota. The talk demonstrates why standard power calculations (used for factorial design) are not of much use due to the collinearity present in mixture designs. As an alternative, Pat will introduce a process to determine if a particular mixture design has adequate precision for DOE needs. For more details on the Q&P Conference, see


5. Workshop Alert: "DOE for DFSS: Variation by Design" (New!)

Stat-Ease premieres its new "DOE for DFSS: Variation by Design" workshop June 20-21 at our Minneapolis training center. For details, see the Stat-Ease web site, where you can enroll online via Internet e-commerce. Schedule and site information for all Stat-Ease workshops open to the public is posted there as well. To enroll, click the "register online" link on our web site or call Stat-Ease at 612.378.9449. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition. Or consider bringing in an expert from Stat-Ease to teach a private class at your site.*

*Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff. For a quote, e-mail


I hope you learned something from this issue. Address your general questions and comments to me at:



Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
2021 East Hennepin Avenue, Suite 480
Minneapolis, Minnesota 55413 USA

PS. Quote for the month: Advice on computer software.

"Seek computer programs that allow you to do the thinking."

— Box, Hunter and Hunter
(Stu Hunter stopped by our booth at the American Society for Quality's World Conference in Orlando last week. He lamented the low level of sophistication of a talk on statistical tools. It touted "TOA," a term neither Stu nor I had ever heard before; it stands for Taguchi orthogonal arrays.
— Mark)
Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:
— Students of Stat-Ease training and users of Stat-Ease software
— Stat-Ease consultants Pat Whitcomb, Shari Kraber and Wayne Adams (see for resumes)
— Statistical advisor to Stat-Ease: Dr. Gary Oehlert
— Stat-Ease programmers, especially Tryg Helseth and Neal Vaughn
— Heidi Hansel, Stat-Ease marketing director, and all the remaining staff


Interested in previous FAQ DOE Alert e-mail newsletters?
To view a past issue, choose it below.

#1 Mar 01, #2 Apr 01, #3 May 01, #4 Jun 01, #5 Jul 01, #6 Aug 01, #7 Sep 01, #8 Oct 01, #9 Nov 01, #10 Dec 01, #2-1 Jan 02, #2-2 Feb 02, #2-3 Mar 02, #2-4 Apr 02, #2-5 May 02, #2-6 Jun 02, #2-7 Jul 02, #2-8 Aug 02, #2-9 Sep 02, #2-10 Oct 02, #2-11 Nov 02, #2-12 Dec 02, #3-1 Jan 03, #3-2 Feb 03, #3-3 Mar 03, #3-4 Apr 03, #3-5 May 03, #3-6 Jun 03, #3-7 Jul 03, #3-8 Aug 03, #3-9 Sep 03, #3-10 Oct 03, #3-11 Nov 03, #3-12 Dec 03, #4-1 Jan 04, #4-2 Feb 04, #4-3 Mar 04, #4-4 Apr 04, #4-5 May 04, #4-6 Jun 04, #4-7 Jul 04, #4-8 Aug 04, #4-9 Sep 04, #4-10 Oct 04, #4-11 Nov 04, #4-12 Dec 04, #5-1 Jan 05, #5-2 Feb 05, #5-3 Mar 05, #5-4 Apr 05, #5-5 May 05, #5-6 Jun 05, #5-7 Jul 05, #5-8 Aug 05, #5-9 Sep 05, #5-10 Oct 05, #5-11 Nov 05, #5-12 Dec 05, #6-1 Jan 06, #6-2 Feb 06, #6-3 Mar 06, #6-4 Apr 06, #6-5 May 06, #6-6 Jun 06, #6-7 Jul 06, #6-8 Aug 06, #6-9 Sep 06, #6-10 Oct 06, #6-11 Nov 06, #6-12 Dec 06, #7-1 Jan 07, #7-2 Feb 07, #7-3 Mar 07, #7-4 Apr 07, #7-5 May 07 (see above)

Click here to add your name to the DOE FAQ Alert newsletter list server.

Statistics Made Easy™

DOE FAQ Alert ©2007 Stat-Ease, Inc.
All rights reserved.


