DOE FAQ Alert Electronic Newsletter

Issue: Volume 1, Number 9
November, 2001
Mark J. Anderson, Stat-Ease, Inc.

Dear Experimenter,

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. Feel free to forward this newsletter to your colleagues. They can subscribe by going to:

Before I get into the meat of this message, I offer the following link as an appetizer: It details experiments to cool beer more rapidly - my favorite topic while in college. One of my college professors, Dr. Ted Davis (now Dean of the Institute of Technology at the University of Minnesota), put us chemical engineering students to work on the heat transfer rate of cans put into the freezer versus under cold running water. Which works better is lost in the mists of time, or obliterated by the effects of alcohol, but it's now evident that a jet engine works better than anything. Be very careful trying this at home.

Here's what I cover in the body text of this DOE FAQ Alert:

1. FAQ: How to check several testers to see if they agree
2. FAQ: What if the center point is not at planned levels?
3. Reader feedback from last issue: How to treat replicates
4. Events alert: DOE success stories for non-manufacturing
5. Workshop alert: "Experiment Design Made Easy" heading south

PS. Statistics quotes for the month: Subject - Measurements, their necessity for science and the near impossibility of making them perfectly

1 - FAQ: How to check several testers to see if they agree

-----Original Question-----

From: Saint Paul (paraphrased from a phone call)

"We have 3 labs at various locations that routinely test samples for percent purity of a vital ingredient in parts that we produce for a medical device. I plan to split each of 25 samples into 3 pieces and get them analyzed at each of the labs, so we can make direct comparisons of their results. How should I set up this experiment?"


Inter-laboratory collaboratives like this should be set up as a blocked (paired), one-factor design. Design-Expert® software offers this option in its Factorial tab under the General Factorial choice. Leave the default choice of factors at 1 and enter 3 levels (categorical). Identify the 3 labs at this point by entering their names. Then enter 25 for the number of replicates and click the option to assign one block per replicate. When you analyze the data, be sure to establish the overall significance of the results (via the F-test in the analysis of variance (ANOVA)) before you look at the pairwise comparisons.
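For readers who like to see the arithmetic behind such an analysis, here is a minimal sketch of the blocked one-factor ANOVA in Python with numpy and scipy (not Design-Expert). The purity data, the 90% baseline, and the size of the lab bias are all invented for illustration:

```python
# Randomized complete block ANOVA for the inter-lab study: 3 labs
# (treatments) x 25 split samples (blocks).  Data are simulated;
# lab 3 is given an artificial positive bias of 0.5 SD.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n_labs, n_blocks = 3, 25
sample_effect = rng.normal(0.0, 2.0, size=n_blocks)    # sample-to-sample (block) variation
lab_bias = np.array([0.0, 0.0, 0.5])                   # pretend lab 3 reads high
noise = rng.normal(0.0, 1.0, size=(n_labs, n_blocks))  # measurement error, SD = 1
y = 90.0 + lab_bias[:, None] + sample_effect[None, :] + noise  # percent purity

grand = y.mean()
ss_labs = n_blocks * ((y.mean(axis=1) - grand) ** 2).sum()
ss_blocks = n_labs * ((y.mean(axis=0) - grand) ** 2).sum()
ss_total = ((y - grand) ** 2).sum()
ss_error = ss_total - ss_labs - ss_blocks              # residual after labs and blocks
df_labs, df_error = n_labs - 1, (n_labs - 1) * (n_blocks - 1)
F = (ss_labs / df_labs) / (ss_error / df_error)
p = f_dist.sf(F, df_labs, df_error)
print(f"labs: F({df_labs},{df_error}) = {F:.2f}, p = {p:.4f}")
```

Note how blocking on samples removes the sample-to-sample variation from the error term, leaving 48 error degrees of freedom; only if this overall F-test is significant would you move on to the pairwise lab comparisons.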

With so many samples, this comparison will be well powered: Design-Expert's design evaluation reveals that the power of your design is 87.6% for detecting a shift of 1 standard deviation (at a 5% alpha level). With power that high, even differences too small to matter may show up as statistically significant, so if you do see significant differences, consider their practical implications. Perhaps a small disagreement can be tolerated. Professor Gary Oehlert, statistical advisor to Stat-Ease and an expert on power calculations, offers this elaboration on my answer:

>Your write-up looks pretty good, but I have [a] comment [on]... the issue of "how big is an effect?" We size effects relative to the standard deviation of the errors, so a bias between labs of size 1 SD means different things depending on the size of the SD. Thus confidence intervals on the pairwise differences in means (relative biases) will be more interpretable than the outcome of a test. This thought is implicit in your blurb, but I would make it much more explicit.<
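The quoted 87.6% power figure can be checked from first principles with the noncentral F distribution. One assumption on my part, since the newsletter doesn't spell it out: "a shift of 1 standard deviation" is taken to mean the two most extreme labs differ by 1 SD (means at -0.5, 0, +0.5 SD), a common convention in power software:

```python
# Power of the overall F-test for the 3-lab, 25-block design,
# computed via the noncentral F distribution.
import numpy as np
from scipy.stats import f as f_dist, ncf

n_labs, n_blocks, alpha = 3, 25, 0.05
df1 = n_labs - 1                        # 2 df for labs
df2 = (n_labs - 1) * (n_blocks - 1)     # 48 error df after removing blocks
means = np.array([-0.5, 0.0, 0.5])      # assumed lab means, in SD units
nc = n_blocks * np.sum((means - means.mean()) ** 2)  # noncentrality = 12.5
f_crit = f_dist.ppf(1 - alpha, df1, df2)
power = 1.0 - ncf.cdf(f_crit, df1, df2, nc)
print(f"power = {power:.3f}")           # high 80s if the assumed convention matches
```

If the result lands near 0.876, the assumed convention matches Design-Expert's; either way, the calculation shows why 25 blocked samples give ample sensitivity.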

(Learn more about simple comparisons like this, plus many other very useful tools, in our new 2-day "Statistics for Technical Professionals" (STP) workshop. It shows engineers, scientists and quality professionals how to gear up their stats knowledge to achieve Six Sigma or other quality objectives. Call Stat-Ease at 1.612.378.9449 to arrange for a private STP class at your site. I just did two presentations of STP for clients who needed a warm-up for our "Experiment Design Made Easy" workshop; they rated the new STP workshop very highly for this purpose. Mark)

2 - FAQ: What if the center point is not at planned levels?

-----Original Question-----

From: Michigan (paraphrased from a telephone call for support)

"I designed a two-level factorial design that looked at several process factors and one key chemical ingredient. To get a test for curvature and an estimate of pure error, I incorporated several center points to run randomly at various points throughout the design. However, the actual mid-point levels varied more than I would have expected from what was planned. How should I analyze this data?"


First of all, I suggest you refer to the answer I gave to FAQ 1 in my April e-mail: "Botched runs in fractional two-level factorial designs - When to enter actual versus planned factor levels." In your case the botched runs happen to be center points, which adds a new wrinkle to the problem. From what you say, I assume that your mid-point levels deviated noticeably from plan, but not by enough to matter practically. In this case, I'd suggest you first leave the center points at their original (planned) levels. Then analyze your data and check the analysis of variance (ANOVA) to see whether curvature is significant. Design-Expert and Design-Ease® software won't do this test if they detect no values at the true center point. (Computers tend to be very picky about such things!)

After assessing curvature in the manner described above, I'd go back to the design layout, enter the actual values for what were planned to be center points, and re-analyze the data. You might see slight changes in your effects, but perhaps not large enough to materially affect the outcome of your experiment.
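The curvature check itself is a single-degree-of-freedom F-test comparing the mean of the factorial points to the mean of the center points, judged against pure error from the center-point replicates. Here is a minimal numpy sketch with invented data (a 2^2 factorial plus 5 center points):

```python
# Single-df test for curvature: does the center-point mean fall off the
# plane fitted through the factorial points?  All responses invented.
import numpy as np
from scipy.stats import f as f_dist

y_fact = np.array([76.0, 81.0, 79.0, 84.0])        # 2^2 factorial responses
y_ctr = np.array([83.1, 82.6, 83.4, 82.9, 83.0])   # center-point replicates

nF, nC = len(y_fact), len(y_ctr)
ss_curv = nF * nC * (y_fact.mean() - y_ctr.mean()) ** 2 / (nF + nC)
ms_pe = y_ctr.var(ddof=1)                # pure error from the replicates
F = ss_curv / ms_pe                      # 1 df for curvature, nC-1 for pure error
p = f_dist.sf(F, 1, nC - 1)
print(f"curvature F(1,{nC - 1}) = {F:.1f}, p = {p:.4f}")
```

In this made-up example the center points sit well above the factorial-point average, so curvature comes out strongly significant, which is exactly the situation where the augmentation discussed next becomes necessary.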

What should be done if there's significant curvature? You didn't ask this question, so I assume you know what to do. In case some of you readers aren't sure, the answer is to augment the factorial with additional runs that allow estimation of the individual squared terms, which then fit the curvature. Design-Expert offers this feature under its "Design Tools" option. Select "Augment Design" and go with the default choice to create a central composite design (CCD). The software then adds the runs needed to apply response surface methods (RSM). After performing these runs and entering the resulting responses, you can then get a better picture of how your response depends on the input factors.
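To make the augmentation concrete, here is a sketch of what gets added to a 2^2 factorial to form a CCD: axial ("star") runs at plus or minus alpha on each factor, plus extra center points. I use the textbook rotatable choice alpha = (number of factorial runs)^(1/4); Design-Expert's default may differ, so treat that as an assumption:

```python
# Augmenting a 2^2 factorial into a central composite design (CCD).
import numpy as np

factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
k = factorial.shape[1]                   # number of factors
alpha = len(factorial) ** 0.25           # rotatable axial distance: sqrt(2) here

axial = []                               # one +alpha and one -alpha run per factor
for i in range(k):
    for sign in (-1.0, 1.0):
        run = np.zeros(k)
        run[i] = sign * alpha
        axial.append(run)
axial = np.array(axial)

center = np.zeros((3, k))                # a few added center points (count is arbitrary here)
ccd = np.vstack([factorial, axial, center])
print(ccd)                               # 4 factorial + 4 axial + 3 center = 11 runs
```

The axial runs put each factor at five distinct levels, which is what makes the individual squared terms of the quadratic model estimable.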

(Learn more about two-level factorial design, including the role of center points, by attending the 3-day computer-intensive workshop "Experiment Design Made Easy." Go to for a description and links to the course outline and schedule. Our next classes will be presented in Anaheim, California on December 4-6 and Charlotte, North Carolina on January 8-10.)

3 - Reader feedback: How to treat replicates

From: Reader in Puerto Rico

Re: October DOE FAQ Alert, FAQ 2 - How to treat replicates

You said:

>I am subscribed to your DOE FAQ; even though I do not have Stat-Ease software, I find it very interesting. Regarding the 1st FAQ of this letter [the October DOE FAQ Alert], I have always thought that the only way to compute MS (pure error) is by having more than one observation per FLC (factor level combination). Otherwise you can compute MS (error) but not MS (pure error). Another drawback of averaging the observations is that the average will count as only 1 degree of freedom. That is not a problem in simple designs, but in more complex designs it could increase the total sampling cost dramatically. Am I right?<

You're correct if the replication truly reflects all the errors involved from start to finish. For example, if you're studying a chemical reaction, a true replicate requires re-charging the reaction vessel, bringing it back up to temperature, etc. It would not be correct to simply re-test samples and treat them as replicates, or even to re-sample a lined-out reaction.
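The reader's point about pure error is easy to demonstrate numerically. In this invented example, four factor-level combinations (FLCs) are each run twice from scratch, so pure error can be pooled across the within-FLC spreads:

```python
# MS(pure error) pooled over replicated factor-level combinations (FLCs).
# 4 FLCs, each genuinely replicated twice; responses are invented.
import numpy as np

groups = [[10.2, 9.8], [12.1, 12.5], [11.0, 10.6], [14.3, 13.9]]

ss_pe = sum(((np.array(g) - np.mean(g)) ** 2).sum() for g in groups)
df_pe = sum(len(g) - 1 for g in groups)   # N - (number of FLCs) = 8 - 4 = 4
ms_pe = ss_pe / df_pe
print(f"MS(pure error) = {ms_pe:.3f} on {df_pe} df")
```

Had each pair been averaged into one number before analysis, those 4 pure-error degrees of freedom would vanish, which is exactly the cost the reader describes. And, as noted above, this estimate is only meaningful if each repeat truly re-runs the whole process rather than re-testing the same sample.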

4 - Events alert: DOE success stories for non-manufacturing

Do you have a success story showing how DOE improved an administrative process, a marketing approach, or something else business-related? Doug Montgomery, host for the Spring Research Conference of the Quality and Productivity (Q&P) section of the American Statistical Association (ASA), asked us to sponsor a two-paper session on "The Use of DOE in Non-Manufacturing Environments." If you'd like to give a talk, or just share a case study, please call us and ask for Shari Kraber, or e-mail her now. The conference will be held on June 5-7 in Phoenix, Arizona. It will be a great opportunity to hear how DOE and other statistical tools can be used to enhance productivity and improve the quality of products and services.

5 - Workshop alert: "Experiment Design Made Easy" heading south

Stat-Ease will be heading south to present "Experiment Design Made Easy" (EDME) workshops in Anaheim, CA on December 4-6 and Charlotte, NC on January 8-10. Pick which coast is best for you and get signed up. If you've already attended EDME or the like, consider expanding your knowledge of DOE by attending "Response Surface Methods for Process Optimization" in San Jose on January 15-17.

See for schedule and site information for all Stat-Ease workshops open to the public. To enroll, call Stat-Ease at 1.612.378.9449. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition, or consider bringing in an expert from Stat-Ease to teach a private class at your site. Call us to get a quote.

I hope you learned something from this issue. Address your questions and comments to me at:

Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
Minneapolis, Minnesota USA

PS. Statistics quotes for the month:

"Science depends on measurement, and things not measurable are therefore excluded."
- William H. George

"Measurement always has an element of error in it. If a perfect correspondence with observation does appear, it must be regarded as accidental, and 'should give rise to suspicion rather than satisfaction'."
- Abraham Kaplan (condensed from original quote)

Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:

- Students of Stat-Ease training and users of Stat-Ease software
- Fellow Stat-Ease consultants Pat Whitcomb and Shari Kraber
- Statistical advisor to Stat-Ease: Dr. Gary Oehlert
- Stat-Ease programmers, especially Tryg Helseth
- Heidi Hansel, Stat-Ease marketing director, and all the remaining staff

Interested in previous FAQ DOE Alert e-mail newsletters? To view a past issue, choose it below.

#1 - Mar 01, #2 - Apr 01, #3 - May 01, #4 - Jun 01, #5 - Jul 01 , #6 - Aug 01, #7 - Sep 01, #8 - Oct 01, #9 - Nov 01 (See above)


Statistics Made Easy

©2001 Stat-Ease, Inc. All rights reserved.