Vol: 11 | No: 5 | Sep/Oct'11
The DOE FAQ Alert

Heads-up (below!)
New strategy of experimentation flowchart

Dear Experimenter,

Here’s another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed the previous DOE FAQ Alert, click here.

TIP: Get immediate answers to questions about DOE via the Search feature on the main menu of the Stat-Ease® web site. This not only pores over previous alerts, but also the wealth of technical publications posted throughout the site.

Feel free to forward this newsletter to your colleagues. They can subscribe by going to this registration page.

Also, Stat-Ease offers an interactive website—The Support Forum for Experiment Design.  Anyone (after gaining approval for registration) can post questions and answers to the forum, which is open for all to see (with moderation).  Furthermore, the forum provides program help for Design-Ease® and Design-Expert® software.  Check it out and search for answers.  If you come up empty, do not be shy: Ask your question!  Also, this being a forum, we encourage you to weigh in with answers!  The following Support Forum topic provides a sample of threads that developed since my last Alert:

  • Area: Analysis, Topic: “Equation Only”, Question: “I am trying to use the coded equation from ANOVA to create a new response (equation only)…"

To open yet another avenue of communication with fellow DOE aficionados, sign up for The Stat-Ease Professional Network on LinkedIn and start or participate in discussions with other software users.  Check out the thread “on lighter side” and weigh in with your best anecdote about statisticians versus engineers or the like.  Have some fun!

Stats Made Easy Blog

StatsMadeEasy offers wry comments weekly from an engineer with a bent for experimentation and statistics. Simply enter your e-mail in the forwarding field at  and get new StatsMadeEasy entries delivered directly to your inbox. Or, click this link to:

Subscribe with Feedburner

“Your StatsMadeEasy Blog brightens up a dreary work day...”
—Applied Statistician, Florida

Topics discussed since the last issue of the DOE FAQ Alert (latest one first):

Also see the new comments on my 6/25/11 blog-alert on “Fun summer-time experiment: Super-cool beer so it instantly freezes solid” and other recent posts. Please do not be shy about adding your take about any news or views you see in StatsMadeEasy.  Thanks for paying attention.

  If this newsletter prompts you to ask your own questions about DOE, please address them via e-mail to:


Topics in the body text of this DOE FAQ Alert are headlined below (the expert ones, if any, delve into statistical details):

1:  FAQ: How to experiment on a multi-step process
2:  FAQ: Ignoring a discrepant response on a run that otherwise succeeded
3:  FAQ: Where to compare model coefficients for multiple responses
4:  FAQ: What to make of multiple confirmation runs
5:  Info alert: Interaction revealed by factorial design leads to 65% yield increase; response surface methods (RSM) leveraged by Monte Carlo simulation; updated “DOE it Yourself” list of fun projects to do at home or school
6:  Reader response: Selecting effects via the half-normal versus a backwards regression
7:  Webinar alert: (Encore) Basics of Response Surface Methodology (RSM) for Process Optimization, Part 1
8:  Events alert: Learn about “Managing Uncertainty in Design Space”
9:  Workshop alert: “Designed Experiments for Industry” India (last chance!); (New!) “Designed Experiments for Assay Optimization” (DEAO)
PS. Quote for the month: The maternal line to invention.
(Page down to the end of this e-zine to enjoy the actual quote.)

- Back to top -

1: FAQ: How to experiment on a multi-step process

Original Question:

From a Research Engineer:
“I am a user of Design-Expert v8 software.  The product that my company fabricates undergoes a two-step process—injection molding followed by thermal bonding.

The injection molding is affected by process parameters such as injection speed, mold temperature, back pressure and melt temperature.  The measured responses after this step are warpage, dimension and birefringence.

The second step involves thermal bonding of the injected parts, which is affected by temperature, speed and pressure; plus other factors.  The primary responses at this stage are overall warpage and bond strength.

In this two-step case how should I conduct the DOE?  Ultimately I just want a good part from the thermal bonding process.  However the response from the preceding injection molding process affects this final result.
Your advice is very much appreciated.”

Answer from me:

This is a hard question, but one that I can relate to, being a chemical engineer who worked for years on manufacturing process improvement.  In cases such as this that involve a series of unit operations, I would first work on the one that stands out as a bottleneck or as problematic due to quality and/or yield.  Another approach is to go to the unit operation furthest upstream and work your way down.  In any case, you will do well to focus on only one unit operation at a time, if at all possible.  The goal of the DOE will be to develop a predictive model that helps you control the outputs.

Since you do not indicate that thermal bonding is particularly problematic, I suggest taking the latter approach—starting with the first unit operation: injection molding.  Set up a two-level design on the four factors listed plus, perhaps, a number of others.  Proceed according to the strategy of experimentation outlined below.

Strategy of experimentation flowchart

After developing profound knowledge on the injection molding, then turn your attention to the thermal bonding and finally the entire process.
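As a minimal sketch of the first step, the layout of a 2⁴ full factorial on the four molding factors named above can be generated in a few lines of Python (the coded ±1 levels and factor names are illustrative; the actual low/high settings must come from process knowledge):

```python
from itertools import product
import random

# Illustrative factor names with coded low (-1) and high (+1) levels.
factors = {
    "injection_speed":  (-1, +1),
    "mold_temperature": (-1, +1),
    "back_pressure":    (-1, +1),
    "melt_temperature": (-1, +1),
}

# Full two-level factorial: every combination of low and high settings.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Randomize the run order to guard against lurking time-related trends.
random.shuffle(runs)

print(len(runs))  # 2^4 = 16 runs
```

In practice the software builds (and randomizes) this design for you, but the sketch shows what the 16-run worksheet amounts to.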

Consultant Pat Whitcomb adds:

“If the best settings for thermal bonding depend on the settings used in injection molding; i.e. the two steps cannot be optimized independently, then you may need to run a split plot design.  In a split plot a number of parts would be made during one injection molding run and then these parts would be used for a factorial on thermal bonding.  Then on to the next injection molding run to make parts for another factorial on thermal bonding; and so on.  For more on split plots see:”

(Learn more about the strategy of experimentation by attending the two-day computer-intensive workshop Experiment Design Made Easy.  Click on the title for a description of this class and link from this page to the course outline and schedule.  Then, if you like, enroll online.)
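Pat's split-plot idea can also be sketched in Python.  The whole-plot and subplot factor counts below are assumptions for illustration: each injection-molding run (whole plot) produces a batch of parts that is then split across a full factorial on the thermal-bonding (subplot) factors:

```python
from itertools import product

# Hypothetical split-plot layout (factor counts are assumptions):
# 2 hard-to-change molding factors and 3 easy-to-change bonding factors.
molding_runs = list(product([-1, +1], repeat=2))  # e.g., mold temp x injection speed
bonding_runs = list(product([-1, +1], repeat=3))  # e.g., temp x speed x pressure

layout = []
for wp, molding in enumerate(molding_runs, start=1):
    # All parts in one whole plot share the same molding settings;
    # the bonding factorial is run within that batch of parts.
    for bonding in bonding_runs:
        layout.append({"whole_plot": wp, "molding": molding, "bonding": bonding})

print(len(layout))  # 4 whole plots x 8 subplot runs = 32 parts
```

Note that the analysis must then account for the two error structures (whole-plot versus subplot variation), which is exactly why a dedicated split-plot design and analysis are needed.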

- Back to top -

2: FAQ: Ignoring a discrepant response on a run that otherwise succeeded

Original Question:

From a Lean Six Sigma Consultant:
“Generally, Design-Expert does not allow Y2 to use all the data if Y1’s model needed some omissions.
In other words, why does Design-Expert force Y2 to depend on Y1’s data, which has ignored points?”


From Stat-Ease Consultant Wayne Adams:
“Try right-clicking in the cell you want to remove and setting the cell status to ignore.  [See the screen shot below.]  This way the data will be available to the other responses and just removed from the single response.   Make sure you set the row status to normal or highlight first.”

Screen shot of Stat-Ease software showing how to ignore a single response value (row highlighted)

- Back to top -

3: FAQ: Where to compare model coefficients for multiple responses

Original Question:

From a Technical Consultant on Energy and Nuclear Power:
“I have a historical design with 5 independent variables (not categorical) and 74 responses.  The data seem good—fits to approximations are excellent.  My question is: Is there a simple graph showing which factors and interactions affect each of the 74 responses?  It is basically the data in the ANOVA summary, but presented in a way that is clear at a glance.  The idea is that when doing trade-offs, etc., it is clear which main effects and which interactions (if any) are significant.  Thanks.”


From Stat-Ease Consultant Brooks Henderson:
“There is one tool in Design-Expert version 8 that may help you out.  Go to the “Summary” node in the software and click on the “coefficients table” button on the floating “summary tool” palette.  You will see something like the image below.  Notice the three responses down the side in rows (Burst, Push, and Track).  Then observe the list of all factors across the top in columns.  For each term kept in a response’s model, the table displays the coefficient and p-value, color-coded by the size of the p-value (see the legend at the bottom).  This gives a clear picture of which terms/interactions affect each response.”

Coefficients table
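The at-a-glance idea behind the coefficients table can be mimicked in a few lines of Python.  All numbers below are invented for illustration; the software derives the real coefficients and p-values from the fitted models:

```python
# Hypothetical coefficients and p-values (coef, p) for three responses,
# mimicking the rows-by-columns coefficients table described above.
table = {
    "Burst": {"A": (12.4, 0.001), "B": (3.1, 0.040), "AB": (0.2, 0.630)},
    "Push":  {"A": (5.0, 0.002),  "B": (0.4, 0.550), "AB": (2.2, 0.010)},
    "Track": {"A": (1.1, 0.200),  "B": (7.8, 0.001), "AB": (0.9, 0.300)},
}

terms = ["A", "B", "AB"]
print("Response " + "".join(f"{t:>12}" for t in terms))
for response, coefs in table.items():
    cells = []
    for t in terms:
        coef, p = coefs[t]
        flag = "*" if p < 0.05 else " "  # flag terms significant at the 5% level
        cells.append(f"{coef:>10.1f}{flag}")
    print(f"{response:<9}" + "".join(cells))
```

Where the software color-codes by p-value, this sketch simply stars the significant terms, making it easy to scan which factors drive which responses.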

(Learn more about modeling historical data by attending the two-day computer-intensive workshop Response Surface Methods for Process Optimization. Click on the title for a complete description.  Link from this page to the course outline and schedule.  Then, if you like, enroll online.)

- Back to top -

4: FAQ: What to make of multiple confirmation runs

Original Question:

From a Life Prediction Engineer:
“I completed a successful experiment that led us to a new and improved formulation that now might meet all customer specifications.  A dozen (12) follow-up blends exhibited average responses that fell within the prediction intervals (PI) presented by the new Confirmation node that came out with Design-Expert version 8.0.4.*  However, should we also worry whether each of the individual blends falls within the PI shown under the Point Prediction screen?  Perhaps this creates a ‘double jeopardy,’ that is, being overly harsh in prosecuting the confirmation results.”


From Stat-Ease Consultant Wayne Adams:
“Your instincts are correct: Focus on the average of the n runs you complete for the confirmation—not the individual results.  Statistical models predict only the average behavior of the system.  If the average confirmation response is within the Confirmation node’s prediction interval, then the model is confirmed.

Do not worry about whether each of the individual blends falls within the original PI shown under the Point Prediction screen.  That requires another statistical interval, one that contains the next outcome, then the next, and so on.  The formula for such an interval can be found in Hahn and Meeker, Statistical Intervals, Wiley, 1991, pp. 62-64, Section 4.8, “Prediction Interval to Contain All of m Observations.”  The prediction interval that contains m future outcomes is quite a bit wider than the prediction interval for 1 future outcome.
Even with this “all of m” corrected interval, the conclusion drawn depends on how many observations fall outside the limits, and by how far.  Take a look at the general distribution of the confirmation observations.  One of the requirements for confirmation work is that it be done at the same conditions as the original block of experimental runs.  If there is a consistent bias toward one side of the interval, then something different—random or unaccounted-for fixed effect(s)—probably crept in between the original design runs and the confirmation runs.  Unfortunately (but being realistic), a whole host of things can cause a confirmation to fail, not the least of which being that the model is wrong.”
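Wayne's point that the interval for the average of n confirmation runs is narrower than the interval for a single future run can be sketched with a simplified formula (this ignores the regression leverage of the prediction point, and the numbers and t critical value are invented for illustration):

```python
from math import sqrt

def prediction_interval(ybar, s, N, n, t_crit):
    """Simplified prediction interval for the mean of n future observations,
    given an estimate from N original runs (leverage term omitted; sketch only)."""
    half_width = t_crit * s * sqrt(1.0 / N + 1.0 / n)
    return (ybar - half_width, ybar + half_width)

# Invented example: 20-run design, predicted mean 100, standard deviation 2,
# t critical value ~2.093 for 95% confidence with 19 df (from a t-table).
single = prediction_interval(100, 2, 20, 1, 2.093)   # one future run
mean12 = prediction_interval(100, 2, 20, 12, 2.093)  # average of 12 confirmation runs

# The interval for the average of 12 runs is much tighter than for a single run.
print(single, mean12)
```

The sqrt(1/N + 1/n) term is what shrinks as n grows, which is why judging a dozen individual blends against the single-run PI would indeed be “double jeopardy.”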

Confirmation report (Confirmation node)

*Design-Expert is now at version 8.0.5. If you have version 8, download the latest update here.

- Back to top -

5: Info alert: Interaction revealed by factorial design leads to 65% yield increase; response surface methods (RSM) leveraged by Monte Carlo simulation; updated “DOE it Yourself” list of fun projects to do at home or school

A Willamette Valley Company (WVCO) chemist designed a two-level factorial experiment that revealed substantial interactions in their polyurethane process.  Knowing this, WVCO implemented changes that increased first-pass yields by 65 percent and overall plant yields by 20 percent.  For details and the inspirational story, see this case study published in the August issue of Adhesives & Sealants Industry.

As detailed in this Desktop Engineering story on how “Two-step optimization for product design takes manufacturing variability into account”, Chad Johnson and his TRW team used a combination of response surface methodology (RSM) and Monte Carlo analysis to optimize a braking system.

“DOE It Yourself”, a list of fun science projects compiled by me, has been updated (with new links mainly)—see it posted here.  Enjoy!

- Back to top -

6: Reader response: Selecting effects via the half-normal versus a backwards regression

Original Question:

From Chad Johnson, TRW Certified 6-Sigma Master Black Belt Manager:
(Re: Jul/Aug DOE FAQ Alert #4. Expert FAQ: “Selecting effects via the half-normal versus a backwards regression: How do you explain discrepancies between these two approaches?”)

“Mark, I've learned something here about the caution required in de-selecting model terms using the backward selection algorithm.  Ok, so....lesson learned when I have a factorial model implemented and I can see the half-normal plot.  What do you suggest when using an RSM model?  No half-normal plots.”

Answer from me:

Yes, Wayne and Shari have provided some good food for thought here on the advantage of using graphical versus numerical selection of effects.  This is especially apropos to screening designs where one simply wants to separate the vital few factors from the trivial many.  The purpose of RSM differs—this is intended to provide a mapping that is adequate for moving the process into a more desirable and/or robust region.  Taking out insignificant model terms serves little purpose other than parsimony (no reason not to keep things simple, if possible!).
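For readers unfamiliar with the graphical approach, here is a minimal sketch (with invented effect estimates) of how the coordinates of a half-normal plot are computed; in practice the software draws the plot, and effects that break away from the near-zero line are the candidates to keep:

```python
from statistics import NormalDist

# Hypothetical absolute effect estimates from a two-level factorial.
effects = {"A": 21.6, "B": 3.1, "C": 1.2, "AB": 9.4,
           "AC": 0.8, "BC": 0.5, "ABC": 0.3}

# Order the |effects|, then pair each with a half-normal quantile.
ordered = sorted(effects.items(), key=lambda kv: kv[1])
m = len(ordered)
points = []
for i, (name, eff) in enumerate(ordered, start=1):
    p = (i - 0.5) / m                       # plotting position
    q = NormalDist().inv_cdf(0.5 + p / 2)   # half-normal quantile
    points.append((q, eff, name))

for q, eff, name in points:
    print(f"{name:>4}  quantile={q:5.2f}  |effect|={eff}")
```

Plotting |effect| against these quantiles puts trivial effects on a straight line through the origin; here the A and AB effects would stand well off that line, which is the visual judgment a purely numerical backward regression cannot offer.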

- Back to top -

7: Webinar alert: (Encore) Basics of Response Surface Methodology (RSM) for Process Optimization, Part 1

Response Surface Methods (RSM) can lead you to the peak of process performance.  In this intermediate-level webinar presented on Tuesday, October 18 at 10:30 AM CDT,* Stat-Ease Consultant Shari Kraber will introduce the fundamental concepts of response surface methods (RSM).

If you are new to RSM, this webinar is for you!  Stat-Ease webinars vary somewhat in length depending on the presenter and the particular session—mainly due to breaks for questions: Plan for 45 minutes to 1.5 hours, with 1 hour being the target median.  When developing these one-hour educational sessions, our presenters often draw valuable material from Stat-Ease DOE workshops.  Attendance may be limited, so sign up soon by contacting our Communications Specialist, Karen Dulski, via  If you can be accommodated, she will provide immediate confirmation and, in timely fashion, the link with instructions from our web-conferencing vendor GotoWebinar.

*(To determine the time in your zone of the world, try using this link.  We are based in Minneapolis, which appears on the city list that you must manipulate to calculate the time correctly.  Evidently, correlating the clock on international communications is even more complicated than statistics!  Good luck!)

- Back to top -

8: Events alert: Learn about “Managing Uncertainty in Design Space”

In back-to-back conferences, Consultant Pat Whitcomb will talk about “Managing Uncertainty in Design Space.”  He will do so at the gathering of industrial statisticians and the like for their Fall Technical Conference in Kansas City on October 13-14.  He follows up with the same presentation for chemical engineers and others attending AIChE's Annual Meeting in Minneapolis on October 17-19.  For details on the talk, see this abstract.  We hope you can attend one presentation or the other.

Those of you who work on development of medical devices should look up Stat-Ease at its exhibit (Booth #729) at the MD&M Minneapolis expo on November 2-3.

Click here for a list of upcoming appearances by Stat-Ease professionals.  We hope to see you sometime in the near future!

- Back to top -

9: Workshop alert: “Designed Experiments for Industry” in India (last chance); (New!) “Designed Experiments for Assay Optimization”

Seats are filling fast for the following DOE classes.  If possible, enroll at least 4 weeks prior to the date so your place can be assured.  However, do not hesitate to ask whether seats remain on classes that are fast approaching!  Also, take advantage of a $395 discount when you take two complementary workshops that are offered on consecutive days.

All classes listed below will be held at the Stat-Ease training center in Minneapolis unless otherwise noted.

* Attend both SDOE and EDME to save $295 in overall cost.

    • February 2-3, 2012**

** Take both EDME and RSM in February to earn $395 off the combined tuition!

*** Take both SDOE and DELS in February to earn $295 off the combined tuition!

**** Take both MIX and MIX2 to earn $395 off the combined tuition!

See this web page for complete schedule and site information on all Stat-Ease workshops open to the public.  To enroll, click the "register online" link on our web site or call Elicia at 612-746-2038.  If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition.  Or, consider bringing in an expert from Stat-Ease to teach a private class at your site.****

****Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff.  For a quote, e-mail

- Back to top -


Please do not send me requests to subscribe or unsubscribe—follow the instructions at the very end of this message.
I hope you learned something from this issue. Address your general questions and comments to me at:



Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
2021 East Hennepin Avenue, Suite 480
Minneapolis, Minnesota 55413 USA

PS. Quote for the month—the maternal line to invention:

Possibility is the mother of invention and the daughter of vision.

—John Dubuc, Lean 6 Sigma Consultant

Stat-Ease, Design-Ease, Design-Expert and Statistics Made Easy are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:
—Students of Stat-Ease training and users of Stat-Ease software
Stat-Ease consultants Pat Whitcomb, Shari Kraber, Wayne Adams and Brooks Henderson
—Statistical advisor to Stat-Ease: Dr. Gary Oehlert
Stat-Ease programmers led by Neal Vaughn
—Heidi Hansel Wolfe, Stat-Ease marketing director, Karen Dulski, and all the remaining staff that provide such supreme support!

For breaking news from Stat-Ease, go to this Twitter site.

DOE FAQ Alert ©2011 Stat-Ease, Inc.
Circulation: Over 5500 worldwide
All rights reserved.