In many rubber and plastics processes, powerful interactions between factors affect final performance. You will not discover interactions by changing only one factor at a time. Properly designed experiments (DOE) reveal interactions that can help you achieve breakthrough improvements in process efficiency and product quality. The biggest gains come from a simple form of DOE called two-level factorial design. This approach has proven helpful in controlling part shrinkage, but it can be applied to any measurable response. This article presents the primary details from an engineering perspective.
A somewhat different version of this article appeared in Modern Paint and Coatings.
What would you do if confronted with an "opportunity" to make a major change, involving many factors, that must be done quickly? The traditional approach to experimentation requires changing only one factor at a time (OFAT). However, OFAT provides no data on interactions between factors, a likely occurrence in chemical processes. An alternative approach called "two-level factorial design" can uncover critical interactions. This statistically based method adjusts the experimental factors simultaneously, each at only two levels, offering a parallel testing scheme that is much more efficient than the serial OFAT approach.
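To make the contrast with OFAT concrete, here is a minimal sketch (not from the article) of a full two-level factorial in three factors. The factor names A, B, C and the simulated response are hypothetical; the point is that the AB interaction effect falls straight out of the design, whereas OFAT would never vary A and B together.

```python
# Sketch of a 2^3 two-level factorial: 8 runs cover every combination
# of three hypothetical factors A, B, C at coded levels -1 (low), +1 (high).
from itertools import product

runs = list(product([-1, 1], repeat=3))

# Hypothetical response: A and B matter and interact; C is inert.
def response(a, b, c):
    return 50 + 5 * a + 3 * b + 4 * a * b

y = [response(a, b, c) for a, b, c in runs]

def effect(contrast):
    """Average response at +1 minus average response at -1."""
    hi = [yi for ci, yi in zip(contrast, y) if ci > 0]
    lo = [yi for ci, yi in zip(contrast, y) if ci < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

A = [r[0] for r in runs]
B = [r[1] for r in runs]
AB = [a * b for a, b, _ in runs]  # interaction column: elementwise product

# OFAT would estimate A and B but could never see the AB interaction.
print(effect(A), effect(B), effect(AB))
```

Because every run contributes to every effect estimate, the factorial also averages out noise far better than OFAT does with the same number of runs.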
G.C. Derringer provides an easy-to-read explanation of the commonly used optimization function called desirability. When used as the final step in DOE, this function allows simultaneous optimization of multiple responses, resulting in the discovery of a group of optimal factor settings.
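The gist of the desirability approach can be sketched in a few lines. The ramp limits and response values below are made-up illustration numbers, not taken from Derringer's article; the key ideas are the per-response scaling to [0, 1] and the geometric-mean combination.

```python
# Hedged sketch of Derringer-style desirability: each response is mapped
# onto a 0-to-1 scale, then combined by a geometric mean so that any
# fully undesirable response drives the overall desirability to zero.

def d_maximize(y, low, high):
    """Desirability ramp for a response we want to maximize."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def d_minimize(y, low, high):
    """Desirability ramp for a response we want to minimize."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return (high - y) / (high - low)

def overall(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical example: maximize yield (ramp 60..90) while
# minimizing shrinkage (ramp 0.5..2.0).
D = overall([d_maximize(78, 60, 90), d_minimize(1.1, 0.5, 2.0)])
```

An optimizer then searches the factor space for settings that maximize D, which is how the "group of optimal factor settings" emerges.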
Design of experiments identifies which factors matter and which ones don't when microwaving popcorn, as well as helping find optimal settings.
This presentation details and demonstrates a procedure that allows the use of user-friendly normal-probability plots for selecting effects from two-level factorials even when some data are missing.
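The idea behind probability-plot effect selection can be sketched as follows. The effect values are hypothetical, not from the presentation: the absolute effects are ranked against half-normal quantiles, and the active effects are the ones that break away from the straight line formed by the near-zero noise effects.

```python
# Hedged sketch of the half-normal-plot construction used for
# two-level-factorial effect selection (hypothetical effect values).
from statistics import NormalDist

effects = {"A": 10.0, "B": 6.0, "C": 0.4,
           "AB": 8.0, "AC": -0.3, "BC": 0.5, "ABC": -0.2}

# Rank effects by absolute size, smallest first.
ranked = sorted(effects.items(), key=lambda kv: abs(kv[1]))

m = len(ranked)
nd = NormalDist()
for i, (name, eff) in enumerate(ranked, start=1):
    # Half-normal plotting position for the i-th smallest |effect|.
    q = nd.inv_cdf(0.5 + 0.5 * (i - 0.5) / m)
    print(f"{name:>4}  |effect| = {abs(eff):5.2f}  half-normal quantile = {q:.2f}")
```

Plotting |effect| against the quantile column gives the familiar picture: inert effects (ABC, AC, C, BC here) lie on a line through the origin, while A, AB, and B stand well off it.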
A look at augmenting the usual probability plot of effects with points representing pure error.
Details and demonstrates a fun experiment to do at home or in class to build understanding of variation and how it can be handled with simple comparative designs. For teaching purposes it works best if each student breaks two brands of clips, thus providing data for a paired t-test, which blocks out variability due to the tester.
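The paired t-test described above can be sketched with a few lines of stdlib Python. The breaking-cycle counts for the two clip brands are made-up illustration data; the key point is that differencing within each tester removes tester-to-tester variability before the t-statistic is computed.

```python
# Hedged sketch of the paired t-test: each tester breaks one clip of
# each brand, and we analyze the within-tester differences.
# Data values are hypothetical bend-until-break counts.
import math

brand_a = [11, 9, 12, 10, 8, 11, 10, 9]
brand_b = [8, 7, 9, 8, 7, 9, 8, 7]

# Pairing: subtract within each tester, which blocks out tester variation.
diffs = [a - b for a, b in zip(brand_a, brand_b)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t = mean_d / math.sqrt(var_d / n)  # compare against t with n-1 df

print(round(t, 2))
```

An unpaired comparison of the same numbers would fold the tester-to-tester spread into the error term and give a much weaker test, which is exactly the lesson of the classroom exercise.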
An updated version of the paper-clip experiment appears in the June 2009 Stat-Teaser, posted at https://cdnm.statease.com/news/news0906.pdf.