DOE FAQ Alert Electronic Newsletter

Issue: Volume 1, Number 8
Date: October 2001
From: Mark J. Anderson, Stat-Ease, Inc.

Dear Experimenter,

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. Feel free to forward this newsletter to your colleagues. They can subscribe by going to: http://www.statease.com/doealertreg.html.

As usual, I am prefacing my e-mail Alert with a link to a fun and/or educational website. I figure you all could use some relief from all the bad news this past month. It seems like another age, one of great innocence: prior to the tragic events of September 11th, over 1 million children in the United Kingdom participated in the "Giant Jump", a massive seismic experiment. See http://www.scienceyear.com/home.html for background on the sponsors, who offer lots of fun stuff for children aged 10 to 19 and the adults around them, especially their teachers. Click on the link labeled "Giant Jump" to see whether the kids in the UK made their island rock. Hopefully they won't be doing the same thing on the Millennium Bridge in London (see why at http://www.arup.com/millenniumbridge/).

Here's what I cover in the body text of this DOE FAQ Alert:

1. FAQ: Averaging response measurements to improve precision
2. FAQ: How to treat replicate data - as unique runs, or by averaging
3. X-FAQ*: Analyzing variance as a response
4. Workshop alert: Mixture class in Philadelphia, RSM in Atlanta

PS. Statistics quote for the month: Subject - Only God knows
PPS. Photos of the month: Fall colors (and the chemistry behind them)

*(Reminder: topics that delve into statistical detail are rated "X" for eXpert. Read these only if you dare to eXpand the eXtent of your knowledge of statistics for eXperimenters.)


1 - FAQ: Averaging response measurements to improve precision

-----Original Question-----
From: India

"I have corresponded with you earlier on various aspects of your Design-Expert [® software] and every time I get a nice solution. I have one new problem: In one of our experiments the response involves measurement of a property which can be determined only as whole numbers and not in fractions or decimals, for example: either 50 or 60. When replications are carried out, the result is that the values are almost the same, causing the error mean square to be very, very low. In that case the lack of fit is extremely high since the denominator (error mean square is negligibly small). How [do I] get over this problem?"

Answer:

I recommend you average the replicates and then enter each average as one run (row), rather than entering each individual data value as a separate run. The averaged values will be more precise than the individual results. In any case, this is probably the way you should be handling replicates, assuming they come from samples being re-tested. (See FAQ 2 below for more discussion on averaging results like this.)

Pat Whitcomb, consultant and principal of Stat-Ease, adds: "I don't see a problem [even if you don't replicate the measurements]. Since you know the cause of the significant lack of fit, it can safely be ignored. In fact, nonsignificant lack of fit may signal a problem (perhaps a change in the measurement process) and be an interesting diagnostic."
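For those who like to see the arithmetic, here is a minimal Python sketch (my illustration, not part of the original correspondence, with invented data) of why an average is more precise than any single reading: the standard error of the mean of n replicates shrinks by a factor of the square root of n.

    import math
    import statistics

    # Five replicate measurements from a single experimental run (invented data).
    replicates = [50, 60, 50, 60, 50]

    mean = statistics.mean(replicates)        # enter this value as the run's response
    sd = statistics.stdev(replicates)         # scatter of the individual readings
    sem = sd / math.sqrt(len(replicates))     # precision of the average

    print("run average:", round(mean, 1))
    print("std dev of individual readings:", round(sd, 2))
    print("std error of the average:", round(sem, 2))  # smaller by sqrt(5)

With five replicates, the standard error of the average is less than half the standard deviation of the individual readings, which is exactly the gain in precision recommended above.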

(Learn more about the power of averaging in our new 2-day "Statistics for Technical Professionals" workshop, which debuts October 30-31 in Minneapolis. It shows engineers, scientists and quality professionals how to gear up their stats knowledge to achieve Six Sigma objectives or other quality improvement initiatives. See http://www.statease.com/clas_stp.html for all the details. To enroll, just hit the "register online" link on the page.)


2 - FAQ: How to treat replicate data - as unique runs, or by averaging

-----Original Question-----
From: California

"It has been awhile since I've done a DOE, and even longer since I've taken the Stat-Ease class. I have a rather rudimentary question regarding how the software functions. Quite simply, when data is entered into the design data sheet for the response, is it an individual data point that should be input or an average? It has always been my understanding that an average is input for the response. Thus, for a given run condition, if I had a sample size of 5, I would average the results for these five samples (all generated at the same time) and input this average into Design-Ease [® software] data sheet. A co-worker of mine is indicating that this is not correct and that each individual data point needs to be entered into the data sheet, and that the software will take care of averaging for the respective groups. Thus, in order for each individual data point to be inputted, for a given run condition with a sample size of five, this would translate into 5 separate runs (at the same, specified condition) with a sample size of n=1 at each run. For example: with a 2x2 factorial = 4 runs my worksheet would show the four respective runs, though I would generate 5 samples at each run condition. My co-worker's run sheet would show a total of 20 runs and only one sample would be generated at each run condition. This question seems to come up again and again. Will you please set me straight, one way or the other?"

Answer:

Let's overlook for the moment that you've chosen as an example a meager 2x2 factorial (I will get back to that). You are absolutely correct that averages should be entered when re-testing and/or re-sampling from a given run setup. Otherwise you will get an unrealistically low error estimate, reflecting only the variability of testing and sampling, not the overall error from setup to finish. This causes an increase in false-positive outcomes.

By the way, if you do collect a number of samples per setup from a process, why not compute the standard deviation and enter this as a second response? You might see significant effects on variability as well as the average of your response(s). It's worth a try! However, be sure you analyze the standard deviation (or variance) in the log scale. (See FAQ 3 below for details on why this should be.)
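Here is a small Python sketch of how that bookkeeping might look (the run labels and sample values are hypothetical, and base-10 logs are an arbitrary choice; natural logs work just as well):

    import math
    import statistics

    # Five samples per experimental run (hypothetical data).
    runs = {
        "run 1": [50, 60, 50, 60, 50],
        "run 2": [70, 70, 80, 70, 80],
    }

    for name, samples in runs.items():
        mean = statistics.mean(samples)    # response 1: the run average
        sd = statistics.stdev(samples)     # variability within the run
        log_sd = math.log10(sd)            # response 2: analyze in log scale
        print(name, "mean =", round(mean, 1), "log10(sd) =", round(log_sd, 3))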

Now, getting back to your example of a 2x2 factorial: This design should be truly replicated to get sufficient data for estimating the main effects (A and B) as well as the interaction (AB) - four runs won't get this job done. A true replicate involves re-setting all factors, lining out the process, collecting the multiple samples and doing the tests. The complete replication of a 2x2 produces 8 runs. These should be run at random to avoid bias from lurking time-related variables.
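To make that layout concrete, here is a short Python sketch (purely illustrative) that builds the eight runs of a fully replicated 2x2 factorial in coded units and shuffles them into a random run order:

    import itertools
    import random

    # The four factor combinations of a 2x2 factorial, coded -1/+1.
    combinations = list(itertools.product([-1, +1], repeat=2))

    # A true replicate repeats every combination with a fresh setup: 8 runs total.
    runs = combinations * 2
    random.shuffle(runs)  # randomize run order to guard against lurking time trends

    for order, (a, b) in enumerate(runs, start=1):
        print("run", order, ": A =", a, ", B =", b)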

(Learn more about two-level factorial design by attending the 3-day computer-intensive workshop "Experiment Design Made Easy." Go to http://www.statease.com/clasedme.html for a description and links to the course outline and schedule. Our next class will be in San Jose on October 23-25th. Only two seats remain as of this writing, so call us immediately at 1.612.378.9449 if you want to sign up. Otherwise, enroll online from the site noted above for the next "Experiment Design Made Easy" workshop, which will be presented in Anaheim on December 4-6.)


3 - X-FAQ: Analyzing variance as a response

-----Original Question-----
From: State of Washington

"We are trying to reduce variability in oil pressure in a swim roll on a paper machine. Is it possible to run the statistics using variance? I'm just not sure if it would be statistically correct."

Answer:

Variance is fair game for any sort of DOE. Just be sure to analyze it in log scale. The log stabilizes the variability of variance (or standard deviation). (Technically speaking, the variability associated with the estimate of variance increases as the variance itself increases. Taking the log of the estimated variance removes this dependence.) This is noted by Box, Hunter and Hunter (BHH) in their classic text on DOE, "Statistics for Experimenters," in Table 7.13 on page 234. (The BHH book, and others on DOE, can be purchased via the Stat-Ease site at: http://www.statease.com/prodbook.html.)

I've found that if variance changes quite a bit, say more than 3-fold, over the course of your experiment, the log produces a more normal plot of residuals and a more random pattern on the residuals versus predicted response plot, both of which are desirable to meet statistical assumptions underpinning the probability values reported in the ANOVA. The bottom line is that with the log you're likely to get a more accurate, precise and perhaps simpler predictive model.
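To see the stabilizing effect for yourself, here is a rough Python simulation (my own illustration, not from BHH) that draws repeated small samples at two very different true variances. The scatter of the raw variance estimates grows with the variance itself, while the scatter of their logs stays roughly constant:

    import math
    import random
    import statistics

    random.seed(1)  # reproducible illustration

    def variance_scatter(true_sd, n=5, trials=2000):
        """Scatter of sample variances, raw and in log10 scale."""
        raw, logged = [], []
        for _ in range(trials):
            sample = [random.gauss(0, true_sd) for _ in range(n)]
            v = statistics.variance(sample)
            raw.append(v)
            logged.append(math.log10(v))
        return statistics.stdev(raw), statistics.stdev(logged)

    for true_sd in (1, 10):
        raw_spread, log_spread = variance_scatter(true_sd)
        print("true sd =", true_sd,
              "| spread of raw variances =", round(raw_spread, 2),
              "| spread of log10 variances =", round(log_spread, 3))

The raw spread jumps roughly a hundredfold between the two cases, while the log-scale spread barely moves; that is the dependence the log transform removes.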

(For more options on how to reduce variability, consider attending the "Robust Design, DOE Tools for Reducing Variability" workshop, which provides a plethora of tools for achieving Six Sigma objectives. See http://www.statease.com/clasrdrv.html for course content. This computer-intensive class requires proficiency in RSM, which can be gained by attending Stat-Ease's "Response Surface Methods for Process Optimization" workshop; see item 4 below.)


4 - Workshop alert: Mixture class in Philadelphia, RSM in Atlanta

Stat-Ease heads south next month (anyone who's been to Minnesota knows the reason why!) to present workshops in:
- Philadelphia on November 6-8 for "Mixture Design for Optimal Formulation" (see http://www.statease.com/clas_mix.html for a description and links to the course outline and schedule)
- Atlanta on November 13-15 for "RSM for Process Optimization" (see http://www.statease.com/clas_rsm.html).
To enroll, just hit the "register online" link on the web pages referenced above, or call Stat-Ease at 1.612.378.9449.

See http://www.statease.com/clas_pub.html for schedule and site information for all Stat-Ease workshops open to the public. Bring along several colleagues and take advantage of quantity discounts in tuition (seats are limited, so register early!). If you have 10 or more people on your staff who need training, consider bringing in an expert from Stat-Ease to teach a private class at your site. Call us to get a quote and details on setup, etc.


I hope you learned something from this issue. Address your questions and comments to me at:

[email protected]

Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc. (http://www.statease.com)
Minneapolis, Minnesota USA

PS. Statistics quote for the month:

"What is too sublime for you, seek not, into things beyond your strength, search not."

- Sirach, Old Testament, 3:20/126

PPS. Photos of the month: Fall colors (and the chemistry behind them): See http://scifun.chem.wisc.edu/chemweek/fallcolr/fallcolr.html.

Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:

- Students of Stat-Ease training and users of Stat-Ease software
- Fellow Stat-Ease consultants Pat Whitcomb and Shari Kraber (see http://www.statease.com/consult.html for resumes)
- Statistical advisor to Stat-Ease: Dr. Gary Oehlert (http://www.statease.com/garyoehl.html)
- Stat-Ease programmers, especially Tryg Helseth (http://www.statease.com/pgmstaff.html)
- Heidi Hansel, Stat-Ease marketing director, and all the remaining staff



Statistics Made Easy

©2001 Stat-Ease, Inc. All rights reserved.