DOE FAQ Alert Electronic Newsletter

Issue: Volume 2, Number 10
Date: October 2002
From: Mark J. Anderson, Stat-Ease, Inc.

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed previous DOE FAQ Alerts, click on the links below. Feel free to forward this newsletter to your colleagues. They can subscribe by going to http://www.statease.com/doealertreg.html.

I offer the following science-related trivia as an appetizer. Last month, September 7th, marked the 75th anniversary of the invention of television (TV). On that date in 1927 a 21-year-old, self-taught genius transmitted an image of a horizontal line to a receiver in the next room. Afterwards he triumphantly wired his backers: "THE DAMNED THING WORKS!" This successful trial fulfilled a quest that began when the inventor, then a 14-year-old farm boy, looked over a newly plowed field and realized that an image could be scanned onto a picture tube the same way: row by row. Who is this inventor? Get the whole (mostly sad) story at http://www.farnovision.com/. Coincidentally, this is the time of year when the major USA networks unveil their new TV shows. At times like this one wonders whether the invention of television really was an advance for humanity. But then a show such as the one noted in item 4 below comes along to provide a ray (cathode?) of hope.

Here's what I cover in the body text of this DOE FAQ Alert (topics that delve into statistical detail are rated "X"):

1. Alert: Free update available for V6.07 of Design-Expert® or Design-Ease® software (if you are not currently a user, get the free 30-day trial version)
2. FAQ: In a factorial experiment, is it acceptable to use logs? (A question on the transformation of factors and/or responses)
3. FAQ: How much replication is needed for DOE? (Addressed as an issue of what's needed: pure error or power.)
4. Simulation alert: Try Galileo's thought experiments
5. Reader feedback: Thoughts on the Stat-Teaser newsletter article detailing a flying disk science project
A. Effect of color on plastic's physical properties
B. Effect of the learning curve
C. Effect of a thrower's level of expertise

6. Events alert: A heads-up on DOE talks and demos: A new class of designs unveiled at Fall Technical Conference
7. Workshop alert: Latest listing of classes, plus a word on prerequisites

PS. Quote for the month - Galileo's views on God and reason


1 - Alert: Free update available for V6.07 of Design-Expert or Design-Ease software (if you are not currently a user, get the free 30-day trial version)

If you (as an individual user) own a permanently licensed copy of version 6 of Design-Ease (DE6) or Design-Expert (DX6), go to http://www.statease.com/soft_ftp.html#dx6updt for downloads that will patch your software with the latest enhancements. We recommend you do this even though the changes may affect only a few users. To see what got added or fixed, click the "Read Me" link for either DE or DX (whichever program you will be updating).

If you own networked software that needs updating, you must call Stat-Ease customer service at 1.612.378.9449. We do not post patches for networked software on the Web. Be prepared to provide your serial number. We will then send you a replacement CD-ROM to re-install on your network.

Before updating or buying version 6 of Design-Expert, feel free to download a fully-functional, but 30-day limited, version of the software from http://www.statease.com/dx6trial.html. Users of Design-Ease or earlier versions of Design-Expert (V5 or before) should consider upgrading their software to DX6. See why you should do it at http://www.statease.com/dx6descr.html. Then call Stat-Ease for a quote. After validating your current serial number, we will give you a special upgrade price.


2 - FAQ: In a factorial experiment, is it acceptable to use logs? (A question on the transformation of factors and/or responses)

-----Original Question-----
From: California

"I enjoy your FAQ list, so here's a question of my own. In a factorial experiment, is it acceptable to use logarithms? My concern is about the effect on the centerpoint, as it will no longer be in the center of the space. Thank you."

Answer:
I will address your FAQ in two parts:
A. Transformations of response(s) with log (or logit)
B. Log transformation of factor(s)
The second type of transformation relates to your concern about centerpoints, not the first.

A. Transformations of response(s) with log (or logit)
Taking the log of your response poses no problems. In fact, a transformation like this often proves to be the key to getting a good statistical analysis. In our software we offer the log and many other transformations for users to apply programmatically, so you need not do it yourself - just leave the response in its original units. In your case, since the response is concentration, I wonder if the "logit" with lower and upper bounds of 0 and 100 might be best. The logit is recommended for bounded responses such as yield. For more details on this transformation see FAQ 2 at http://www.statease.com/news/faqalert1.txt (DOE FAQ Alert, Volume 1, Number 1, March 1, 2001). For a discussion of the plain old log and other more common response transformations, refer to Chapter 4 ("Dealing with Non-Normality via Response Transformations") in the book "DOE Simplified" (see http://www.statease.com/doe_simp.html).
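For readers who want to see the arithmetic behind the bounded transformation, here is a minimal Python sketch of the logit with lower and upper bounds of 0 and 100 (an illustration only; in practice the software applies the transform for you, and you leave the response in its original units):

```python
import math

def logit(y, low=0.0, high=100.0):
    """Logit transform for a response bounded between low and high
    (bounds of 0 and 100 suit a percentage such as concentration or yield)."""
    p = (y - low) / (high - low)        # rescale to the open interval (0, 1)
    return math.log(p / (1.0 - p))

def inverse_logit(z, low=0.0, high=100.0):
    """Map a logit-scale value back to original response units."""
    p = 1.0 / (1.0 + math.exp(-z))
    return low + p * (high - low)

# A response of 50 sits at the midpoint of the 0-100 range, so its logit is 0:
print(logit(50.0))                      # 0.0
# Round-trip check back to original units:
print(inverse_logit(logit(88.0)))       # 88.0 (within floating-point error)
```

Note how the transform stretches values near the 0 and 100 bounds, which is exactly why it helps with bounded responses.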

B. Log transformation of factor(s)
Transformation of factors (symbolized mathematically as "X") may be beneficial for analysis, but it generally does not help nearly as much as a response (Y) transformation. Also, as you point out, if you add centerpoints to your design, they must be centered in the transformed space, not in the original metric. For example, let's say you apply a log(base 10) transform to a factor ranging from 10 to 1000, with a centerpoint set at 505. The transformed range becomes 1 to 3 with the centerpoint at 2 (not the same as log(505)!). You then must take the antilog of this centerpoint value to get it back in actual units. This value is 100, which differs greatly from 505, but creates a more sensible design in the log metric. In my opinion, it's really not worth the bother to transform input factors (X) in two-level designs. However, we do have a few clients who experiment on factors varying over several orders of magnitude (such as my example) and routinely apply X-transformations.
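The centerpoint arithmetic in the hypothetical 10-to-1000 example can be checked in a few lines of Python:

```python
import math

# Factor range from the example above: 10 to 1000, log(base 10) transformed.
low, high = 10.0, 1000.0
t_low, t_high = math.log10(low), math.log10(high)   # 1.0 and 3.0

# Center the point in the *transformed* space, then antilog back:
t_center = (t_low + t_high) / 2.0       # 2.0, not log10(505) ~= 2.70
center_actual = 10.0 ** t_center        # 100, far from the midpoint of 505

print(t_center, center_actual)          # 2.0 100.0
```

Centering in the log metric amounts to taking the geometric mean of the factor range, which is why 100 (not 505) is the sensible centerpoint here.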

Response to answer:
"Mark, Thanks for your prompt reply. I made an error in my question, in that the FACTOR is concentration, not the RESPONSE. Nonetheless, you either surmised that, or are incredibly thorough, because part B answers my question. Thank you!"

Reply to response:
I inferred what you really meant, but just to be safe, gave a thorough answer (hopefully credible!) covering the two options for transformation - factor versus response.

(Learn more about fractional two-level factorial designs by attending the 3-day computer-intensive workshop "Experiment Design Made Easy." See http://www.statease.com/clasedme.html for a description. Link from this page to the course outline and schedule. Then, if you like, enroll online.)


3 - FAQ: How much replication is needed for DOE? (Addressed as an issue of what's needed: pure error or power.)

-----Original Question-----
From: New York

"When doing DOE how much replication is needed? Do we need to replicate the entire DOE 2,3,4 times? Or do some selective runs need to be repeated?"

Answer (from Pat Whitcomb, Stat-Ease statistical consultant):
"There are two main reasons to replicate points in a DOE:
A. To estimate pure error
B. To increase power.

A. Replication for pure error
Estimation of pure error is usually a concern with response surface designs. Pure error is a necessary component in testing lack of fit. To estimate pure error about 5 degrees of freedom are needed. This is one reason for the 5 or 6 center points present in central composite and Box-Behnken response surface designs.
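To illustrate Pat's point, here is a rough Python sketch (with made-up data, not from any real experiment) showing how replicated center points supply the pure-error degrees of freedom for a lack-of-fit test. A 2^2 factorial plus 5 center points is used; the factorial points fit a first-order model exactly, while the center points sit well above the fitted plane, so lack of fit should be flagged:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a 2^2 factorial plus 5 replicated center points.
# Factorial points follow y = 10 + 2*x1 + 3*x2 exactly; the centers
# average 15 (curvature the first-order model cannot explain).
x1 = np.array([-1.0,  1.0, -1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
x2 = np.array([-1.0, -1.0,  1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y  = np.array([ 5.0,  9.0, 11.0, 15.0, 14.8, 15.2, 15.0, 14.9, 15.1])

# Fit the first-order model y = b0 + b1*x1 + b2*x2 by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2])
b, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
ss_resid = float(resid @ resid)
df_resid = len(y) - X.shape[1]          # 9 runs - 3 model terms = 6

# Pure error: scatter of the center-point replicates about their own mean.
centers = y[4:]
ss_pe = float(np.sum((centers - centers.mean()) ** 2))
df_pe = len(centers) - 1                # 4 df, close to the 5 recommended

# Lack of fit is the residual variation that pure error cannot explain.
ss_lof = ss_resid - ss_pe
df_lof = df_resid - df_pe
F = (ss_lof / df_lof) / (ss_pe / df_pe)
p_value = stats.f.sf(F, df_lof, df_pe)
print(f"F = {F:.1f}, p = {p_value:.4g}")  # huge F: the model misses the curvature
```

Without the replicated centers there would be no pure-error mean square to serve as the denominator, and the lack-of-fit F test could not be run at all.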

B. Replicating to increase power
Replicating a factorial is usually done to increase power, i.e., to increase the likelihood of detecting an effect of a given size. (See "Sizing Fixed Effects for Computing Power in Experimental Designs" at http://www.statease.com/pubs/power.pdf.) Here the amount of replication depends on the size of effect you wish to detect with high power. For example, if you look at the main effects model for a 2^3 full factorial, the power of finding an effect of 1 sigma is 19.5%; of 2 sigma, 57.2%. Normally we would like a high probability (>80%) of detecting an effect of a size that is practically important. Replicating the design (for a total of 16 runs) increases the power for a 1 sigma effect to 45.2% and for a 2 sigma effect to 95.6%. If you are only interested in effects of 2 sigma or larger, the single replicate (8 runs) is enough. If you are interested in 1 sigma effects, further replication is required. Design-Ease version 6 calculates power as part of the design evaluation. (For details on how to estimate power using Design-Expert, see "Power Calculations," pages 11-26 of the Design-Expert User Guide, at http://www.statease.info/dx6files/manual/DX11-Details-Design.pdf.)"
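As a rough illustration of these power figures, the Python sketch below approximates two-sided power from the noncentral t distribution. This is my own approximation of the calculation, not the actual Stat-Ease code, so the numbers it prints may differ somewhat from those quoted above; the point is the trend with effect size and replication:

```python
from scipy import stats

def factorial_power(effect_sigma, runs, model_terms=4, alpha=0.05):
    """Approximate power to detect a two-level factorial effect of size
    `effect_sigma` (in standard deviations) with `runs` total runs.
    model_terms=4 covers the 2^3 main-effects model (intercept + 3 effects).
    The model coefficient is half the effect; its standard error is
    sigma/sqrt(runs), giving the noncentrality parameter below."""
    df = runs - model_terms                     # residual degrees of freedom
    ncp = (effect_sigma / 2.0) * runs ** 0.5
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    # Two-sided power from the noncentral t distribution:
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Unreplicated 2^3 (8 runs) versus one full replicate (16 runs):
for runs in (8, 16):
    for eff in (1.0, 2.0):
        print(f"{runs} runs, {eff} sigma: power = {factorial_power(eff, runs):.3f}")
```

Running it shows the same pattern Pat describes: replication roughly doubles the power for a 1 sigma effect and pushes a 2 sigma effect well past the 80% target.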

(Learn more about developing good RSM designs by attending the "Response Surface Methods for Process Optimization" workshop. For a description, see http://www.statease.com/clas_rsm.html. Link from this page to the course outline and schedule. You can enroll on-line by linking to the Stat-Ease e-commerce page for workshops.)


4 - Simulation alert: Try Galileo's thought experiments

On October 29, NOVA will air a two-hour television special celebrating the story of the father of modern science and his struggle to get Church authorities to accept the truth of his astonishing discoveries. The program is based on Dava Sobel's best-selling book, "Galileo's Daughter," which reveals a new side to the famously stubborn scientist -- that his closest confidante was his illegitimate daughter, Sister Maria Celeste, a cloistered nun. I've read this book and recommend it for anyone interested in the history of science. To conduct virtual versions of Galileo's thought experiments (inclined plane, pendulum, cannon balls), see http://www.pbs.org/wgbh/nova/galileo/experiments.html.


5 - Reader feedback: Thoughts on the Stat-Teaser newsletter article detailing a flying disk science project [it can be seen at http://www.statease.com/newsltr.html ]
A. Effect of color on plastic's physical properties
B. Effect of the learning curve
C. Effect of thrower's level of expertise

A. Effect of color on plastic's physical properties
-----Original Message-----
From: Ronald Miller, Senior Staff Engineer-Materials Development,
The Hoover Company, North Canton, Ohio

"Mark, I just finished reading your experiment with the plastic disks and would like to comment on the P.S. It is true that the color of a plastic will affect its physical properties (such as tensile strength, modulus or impact strength). However, it would have little effect on the aerodynamic properties outside of a slight difference in weight due to the specific gravity differences in pigments. I doubt that the thrower could perceive such a small difference, which is less than 1%. Even if a difference in modulus (stiffness) affected the aerodynamics or the ability of the thrower, the pigment would change the modulus by much less than 1% and would be insignificant."

Answer:
I figured as much. I never really took very seriously my daughter's hypothesis that color would not matter, but when someone told me that it wasn't completely far-fetched, I felt bad. It's good to remember that saying color did not create a significant effect on performance in a given experiment does not mean that it has no effect. All one can say in such a case is that an effect could not be detected, given the number of runs (which affects statistical power) and the variability in the process (young girls throwing disks in an outdoor environment) and in the measurement (children doing the measuring).

B. Effect of learning curve
-----Original Message-----
From: Mark R Walter, Eaton Aerospace, Bethel, Connecticut

"Mark, Excellent article: well written and a good example. One comment which came to my mind is the issue of "Learning Curves". One can hypothesize that the performance (accuracy) improves over time as the operators become more skilled. It would be interesting to repeat some of the experiment to see if a learning curve effect occurs."

Answer:
I appreciate the compliments on my article. The "learning curve" aspect should be considered in any DOE that's operator-dependent. Certainly this lends weight to the need for randomization of the run order. However, it might also be very beneficial to do a fair amount of pre-experimentation (trial runs). Hopefully this would bring the operators well up the learning curve to a point where further practice would have a negligible effect.

C. Effect of thrower's level of expertise
----- Original Message -----
From: Dr. Steve L. Aprahamian, Senior Development Chemist
Bayer Corp., Reaction Injection Molding (RIM) Technology
Pittsburgh, Pennsylvania

"Mark, I wanted to write to tell you how much I enjoy your experiments in the Stat-Teaser newsletter. They are well thought out, use real-life problems, and for the most part get me to think about the results and what they mean. On occasion, I even predict the outcome in advance and see how my predictions align with the results. In the September 2002 edition of Stat-Teaser, you wrote about your daughter's experiment with flying disks. I had some thoughts and comments.

The one item I would question in the design is the use of Thrower as a factor. I do not know how you factored it in, but I would tend to treat it as an external factor, noise factor, or whatever term you prefer for factors that you can't control in practice (though you can in the experiment).

I would imagine that there are interactions between the thrower and the results (for both accuracy and distance; the same discussion applies to both). In your example, with your daughter and her cousin, I would NOT expect to see a big difference, but I thought it should be commented on.

In a thought experiment, imagine that the two throwers are an "expert" and a "novice" with the disks. I will also keep the discussion to distance, though the same argument applies to accuracy. You need to examine the data for each thrower separately, so that differences in the other factors are not "swamped" by the variability between the throwers!

You could notice several different outcomes on, for example, the effect of design (solid vs ring):
1) No effect on either thrower.
2) One of the designs improves BOTH throwers (could be an "absolute" increase or even a "percentage" increase).
3) One of the designs improves one thrower and NOT the other. Either the expert's technique is so good that his ability "masks" the variability in the performance, or the novice's technique is so poor that nothing could help it.
4) One design could INCREASE one thrower's distance and actually DECREASE the other thrower's distance. Again, you are talking about how the design interacts with the thrower's technique.

These kinds of results have "real-world" applications and problems. Weather (humidity, temperature, etc.) can change a process. You can study its effects, but it is a "non-controllable" factor. You might find a formulation or process conditions that run great in the summer but terrible in the winter, and need to know whether that is better than one that runs "better than average" all of the time. If you must have only one, the one with the overall lower variability is usually better; but if you can change, you might want a "summer formulation" and a "winter formulation."

In medicine, you might have a drug treatment that cures a particular disease with no side effects for 99% of the population, but for the other 1% the side effect is a nasty death. There might be an alternate treatment that has a few minor side effects, nothing serious, but is only 75% effective. You would want to start with the less effective treatment and only go to the more severe one if the risk of death is warranted. You might also want to study the "1%" in more detail to understand what makes them susceptible to the first treatment, so you could predict up front which people should NOT get it, while everyone else could get it right away rather than settling for the less effective treatment.

Just some thoughts. I am amazed sometimes, how simple newsletter experiments can get me to think about other work and results. I really appreciate the intellectual workout and the opportunity to share some thoughts. Thank you."

Answer:
Steve, thanks for the in-depth reply to my article on flying disks. You can't imagine how happy this makes me, because my intention for these "Mark's Experiment" articles is to provoke thought about how to do DOE better. Keeping it simple and making it fun (KISMIF) hopefully overcomes the natural fear of statistics on the part of even the most highly educated technical professionals who could greatly benefit from these powerful tools.

Your lack of fear about statistics is exceptional! You bring up some really good issues about experiments involving people. In this case, I knew from playing around with my daughter Katie and her cousin that they were very similar in their ring tossing abilities. However, perhaps I am showing my natural parenting bias, but I expected Katie to perform somewhat better. Therefore, I blocked the experiment by thrower, thus removing any difference. As you can see below from the Design-Expert ANOVA report on the distance response, I was right:

Term: Coefficient Estimate
Intercept: 52.16
Block (Katie): 6.09
Block (Cousin): -6.09
A-Disk type: 12.20

It turned out that both Katie and her cousin threw the rings significantly farther than the solid disks, but they also differed in ability by a relatively large margin (compare the block coefficients with the factor (A-Disk type) coefficient).
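For the curious, coefficients in a blocked model like this can be recovered by ordinary least squares. The sketch below generates noiseless distances from the reported coefficients (the +1/-1 coding of throwers and disk types is my own assumption for illustration) and then recovers them:

```python
import numpy as np

# Coefficients from the ANOVA report; block (thrower) coded +1 for Katie,
# -1 for her cousin; factor A (disk type) coded +1 for ring, -1 for solid.
b0, b_block, bA = 52.16, 6.09, 12.20
block = np.array([ 1.0,  1.0, -1.0, -1.0])
A     = np.array([ 1.0, -1.0,  1.0, -1.0])
y = b0 + b_block * block + bA * A       # noiseless distances, one per cell

# Least squares recovers the intercept, block, and factor coefficients:
X = np.column_stack([np.ones(4), block, A])
est, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(est, 2))                 # [52.16  6.09 12.2 ]
```

Because the block column is orthogonal to the factor column, the thrower-to-thrower difference is absorbed by the block term without biasing the disk-type effect, which is the whole point of blocking.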

As you point out, for blocking to be effective, the blocked variable must not interact with the other factors. As you say, blocking would not have been appropriate if I'd done the flying disk experiment with a "novice" versus an "expert," such as my son Ben, who plays disk golf regularly. In that situation, the variable should be entered as a factor. This then provides an opportunity to "segment the market," so to speak, by tailoring your findings to the demographics (in this case, or in the drug example you brought up).

Another option for categorical variables is to do separate experiments. For example, as a young engineer, I investigated an alternative continuous reaction process to what had always been done in batch mode. (The objective was to hydrogenate a vegetable oil to clear it of yellowish color bodies.) It would not make sense to set up a DOE with the variable of continuous versus batch reactor either as a block or as a factor, because the two processes differed so completely in the effects of time, temperature, pressure, etc. (The continuous process required an incredibly active noble-metal catalyst that literally cost its weight in gold, while the batch process used cheap nickel as a catalyst, which could be dumped into the kettle in high concentrations.) On the medical front, I've always thought that someone studying interactions of drugs would want to do separate studies for male versus female and/or the very young versus the very old. However, this falls way outside my area of expertise, scientifically or statistically.


6 - Events alert: A heads-up on DOE talks and demos: New class of designs unveiled at Fall Technical Conference

SECOND NOTICE: On Friday, October 18 Pat Whitcomb and Gary Oehlert will unveil a new class of designs that offer good value for industrial experimenters who cannot afford to do more runs than necessary for adequate response modeling. Their talk, entitled "Central Composite Designs from Small, Optimal, Equireplicated, Resolution V Fractions of 2^k Designs," will be presented in the afternoon track of "Topics in Designed Experiments" at the Fall Technical Conference (FTC) in Valley Forge, Pennsylvania. For details, see http://www.cpid.net/46FTC_Information.htm (no longer available).

Click http://www.statease.com/events.html for a listing of where Stat-Ease consultants will be giving talks and doing DOE demos. We hope to see you sometime in the near future!


7 - Workshop alert: Latest listing of classes, plus a word on prerequisites

See http://www.statease.com/clas_pub.html for schedule and site information on all Stat-Ease workshops open to the public. To enroll, click the "register online" link at our web site or call Stat-Ease at 1.612.378.9449. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition, or consider bringing in an expert from Stat-Ease to teach a private class at your site. Call us to get a quote.

Most of our workshops involve intensive computer use. We assume that in today's age everyone knows how to use a mouse and navigate the Windows graphical user interface, so that should not pose any problems. The big factor for student success is knowledge of statistics. If you're completely ignorant of this subject (or afraid to admit you really do know something about it), don't worry: Stat-Ease offers workshops to get you up to speed. On the other hand, even the most expert statistician will benefit from our advanced classes on DOE. Just be sure you read the class descriptions posted on our web site very carefully, especially the information on prerequisites. We've mapped out the flow of Stat-Ease classes at http://www.statease.com/training.html. If you're not sure about your qualifications, call the number noted above and ask for help from a Stat-Ease statistical consultant. In the end, you must choose which class you want to attend. We hope it will be a good fit - for your sake and to minimize potential disruption of the class due to outliers in the normal distribution of students.


I hope you learned something from this issue. Address your questions and comments to me at:

Mark@StatEase.com

Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc. (http://www.statease.com)
Minneapolis, Minnesota USA

PS. Quote for the month - Galileo's views on God and reason:

"I do not feel obliged to believe that the same God who has endowed us with sense, reason, and intellect has intended us to forgo their use."

- Galileo Galilei

Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:

- Students of Stat-Ease training and users of Stat-Ease software
- Fellow Stat-Ease consultants Pat Whitcomb and Shari Kraber (see http://www.statease.com/consult.html for resumes)
- Statistical advisor to Stat-Ease: Dr. Gary Oehlert (http://www.statease.com/garyoehl.html)
- Stat-Ease programmers, especially Tryg Helseth (http://www.statease.com/pgmstaff.html)
- Heidi Hansel, Stat-Ease marketing director, and all the remaining staff.


Interested in previous DOE FAQ Alert e-mail newsletters? To view a past issue, choose it below.

#1 - Mar 01, #2 - Apr 01, #3 - May 01, #4 - Jun 01, #5 - Jul 01 , #6 - Aug 01, #7 - Sep 01, #8 - Oct 01, #9 - Nov 01, #10 - Dec 01, #2-1 Jan 02, #2-2 Feb 02, #2-3 Mar 02, #2-4 Apr 02, #2-5 May 02, #2-6 Jun 02, #2-7 Jul 02, #2-8 Aug 02, #2-9 Sep 02, #2-10 Oct 02 (see above)

Statistics Made Easy

©2002 Stat-Ease, Inc. All rights reserved.