Stat-Ease is hosting an amazing group of speakers for the 2021 Online DOE Summit. Taking place on September 28 & 29, this conference will be accessible to technical professionals around the globe. Register to attend the live sessions via GoToWebinar, or plan to watch the on-demand recordings after the conference. Registration is free of charge.

Here is the line-up:

**Day 1: September 28**

Hour 1: (Keynote) Martin Bezener (President & Chief Technical Officer, Stat-Ease)

**Stat-Ease 360 Reveal**

Hour 2: Gregory Hutto (Wing Operations Analyst, US Air Force)

**The Tooth Fairy in Experimental Design: White Lies We Tell Our Software**

Hour 3: Drew Landman (Professor & Associate Chair, Dept of Mech & Aero Engr, Old Dominion University)

**An I-Optimal Split-Plot Design for eVTOL Tilt-Rotor Performance Characterization**

Hour 4: Jason Pandolfo (Research Scientist, Quaker Chemical)

**Using Logistic Regression to Predict the Stability of Metalworking Fluid Emulsions in Varying Water Quality Conditions**

**Day 2: September 29**

Hour 1: (Keynote) Hank Anderson (VP Software Development, Stat-Ease)

**Python Integration with Stat-Ease 360 - A Tutorial**

Hour 2: Oliver Thunich (Statistics Consultant, STATCON GmbH)

**Adjusting a DOE to Unpredictable Circumstances**

Hour 3: Steven Mullen (Senior Scientist, Cook Medical)

**Improving Process Understanding of an IVF Cell Culture Incubator via Response Surface Methodology**

Hour 4: Gregory Perrine (Research Scientist, Georgia-Pacific)

**Debonder Formulation Optimization Using a KCV Mixture/Process Design in Paper Handsheets**

*We look forward to seeing you!*

Please send any questions to Shari@statease.com.

One of the greatest tools developed by Cuthbert Daniel (1976) was the half-normal plot, used to visually select effects for two-level factorials. Since these designs generally contain no replicates, there is no pure error to use as the basis for statistical F-tests. The half-normal plot allows us to visually distinguish between the effects that are small (and normally distributed) and those that are large (and likely to be statistically significant). The subsequent ANOVA is built on this decision to split the effects into the few that are likely “signal” versus the majority that are likely “noise”.

Often the split between the groups is obvious, with a clear gap between them (see Figure 1), but sometimes it is more ambiguous and harder to decide where to “draw the line” (see Figure 2).

Stat-Ease consultants recommend staying conservative when deciding which effects to designate as the “signal” and being cautious about over-selecting effects. In Figure 2, the A effect is clearly different from the other effects and should definitely be selected. The C effect is also separated from the other effects by a “gap” and is probably different, so it should also be included in the potential model terms. The next grouping consists of four three-factor interactions (3FIs). Extreme caution should be exercised here – 3FI terms are very rare in most production and research settings. Also, they fall “on the line”, which indicates that they are most likely within the normal probability curve that contains the insignificant effects. These terms should be pooled together to estimate the error of the system. The conservative approach says that choosing A and C for the model is best. Adding any other terms is most likely just chasing noise.

Hints for choosing effects:

- Split the effects into two groups, distinguishing between the “big” and “small” ones, right versus left, respectively
- Start from the right side of the graph – that is where the biggest effects are
- Look for gaps that separate big effects from the rest of the group
- STOP if you select a 3FI term – these are very unlikely to be real effects (throw them back into the error pool)
- Don’t skip a term – if a smaller effect looks like it could be significant, then all larger effects also must be included
- Effects need to “jump off” the line – otherwise they are just part of the normal distribution
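
The mechanics behind the plot are straightforward to sketch. Below is a minimal Python illustration for a hypothetical unreplicated 2^3 factorial; the factor names, response values, and quantile formula are illustrative assumptions, not Stat-Ease's internal implementation.

```python
import numpy as np
from statistics import NormalDist

# Hypothetical unreplicated 2^3 factorial in coded units with a made-up response.
levels = np.array([(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
A, B, C = levels.T
X = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])  # all 7 effect contrasts
names = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
y = np.array([50, 51, 49, 50, 70, 71, 69, 71])  # made-up data: factor A dominates

# Effect estimate for a two-level factorial: contrast divided by half the run count.
effects = X.T @ y / (len(y) / 2)
largest = names[int(np.argmax(np.abs(effects)))]

# Half-normal plotting positions for the ordered |effects| (i = 1..m):
# quantile_i = Phi^-1(0.5 + (i - 0.5) / (2m)) of the standard normal.
abs_sorted = np.sort(np.abs(effects))
m = len(abs_sorted)
quantiles = [NormalDist().inv_cdf(0.5 + (i - 0.5) / (2 * m)) for i in range(1, m + 1)]
# Plotting abs_sorted against quantiles gives the half-normal plot; effects that
# fall far to the right of the line through the small cluster are the likely "signal".
```

With these made-up responses, only the A effect jumps off the line; the other six effects cluster near zero and serve as the error estimate.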

Sometimes you have to simply accept that the changes in the factor levels did not trigger a change in the response that was larger than the normal process variation (Figure 3). Note in Figure 3 that the far right points are straddling the straight line. These terms have virtually the same size effect – don’t select the lower one just because it is below the line.

When you are lucky enough to have replicates, the pure error is then used to help position green triangles on the half-normal plot. The triangles span the amount of error in the system. If they go out farther than the biggest effect, that is a clear indication that there are no effects that are larger than the normal process variation. No effects are significant in this case (see Figure 4).

**Conclusion**

The half-normal plot of effects gives us a visual tool to split our effects into two groups. However, the use of the tool is a bit of an art, rather than an exact science. Combine this visual tool with both the ANOVA p-values and, most importantly, your own subject matter knowledge, to determine which effects you want to put into the final prediction model.

Version 13 of Design-Expert® software (DX13) provides a substantial step up in ease of use and statistical power for design of experiments (DOE). As detailed below, it lays out an array of valuable upgrades for experimenters and industrial statisticians. See DX13’s amazing features for yourself via our free, fully functional, trial download at www.statease.com/trial/.

Quite often an experiment leads to promising results that lie just beyond its boundaries. DX13 paves the way via its new wizard for modifying your design space. Press the Augment Design button, select “Modify design space” and off you go. Run through the “Modify Design Space – Reactive Extrusion” tutorial, available via program Help, to see how wonderfully this new wizard works. As diagrammed on its initial screen, the modify-design-space tool facilitates shrinking and moving your space, not just expanding it. And it works on mixture as well as process space.

For assessing measures that come as counts, Poisson regression models fit with greater precision than ordinary methods. Demonstrate this via the “Poisson Regression – Antiseptic” tutorial, where Poisson regression proves to be just the right tool for modeling colony-forming units (CFU) in a cell culture. This new modeling tool, along with logistic regression for binary responses (introduced in version 12), puts Design-Expert at a very high level among DOE-dedicated programs.
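
For the curious, a Poisson GLM with a log link is typically fit by iteratively reweighted least squares (IRLS). The NumPy-only sketch below uses made-up colony counts and a single coded factor; it is a generic illustration of the technique, not Design-Expert's implementation.

```python
import numpy as np

def fit_poisson(X, y, iters=10):
    """Fit a Poisson GLM (log link) by iteratively reweighted least squares."""
    # Warm start: ordinary least squares on log(y) (valid here since y > 0).
    beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)              # fitted means under the log link
        W = mu                        # IRLS weights for the Poisson family
        z = eta + (y - mu) / mu       # working response
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Made-up CFU counts that roughly double as coded factor x goes from -1 to +1.
x = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])
y = np.array([5.0, 7.0, 11.0, 9.0, 21.0, 19.0])
X = np.column_stack([np.ones_like(x), x])
b0, b1 = fit_poisson(X, y)
# exp(b1) estimates the multiplicative change in the expected count per unit of x.
```

Because the log link models multiplicative effects on the mean count, the fitted slope recovers the doubling pattern built into the fake data.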

Easily model any response in various ways to readily compare them. Then choose the model best suited to your optimization goals. Simply press the plus **[+]** button on the Analysis branch. The Antiseptic tutorial demonstrates the utility of trying several modeling alternatives, none of which can do better than Poisson regression (but worth a try!).

Optimal (custom) designs work wonderfully well for laying out statistically ideal experiments. However, the numerical levels they produce often extend to an inconvenient number of decimal places. No worries: DX13 provides a new “Round Columns” button—very convenient for central composite and optimal designs. As demonstrated in the Antiseptic tutorial, this works especially well for mixture components—maintaining their proper total while making the recipe far easier for the experimenter to accomplish. Do so either on the basis of significant digits (as shown) or by decimal places.
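
One simple way to round a recipe while preserving its total, assuming nothing about DX13's actual algorithm, is to round each component and then absorb the leftover difference into the largest component. A minimal Python sketch with a hypothetical three-component recipe:

```python
def round_mixture(components, total, decimals=2):
    """Round mixture components while preserving their fixed total.

    Each value is rounded to `decimals` places, then the leftover difference
    is absorbed by the largest component so the recipe still sums to `total`.
    (A simple illustration; not DX13's actual algorithm.)
    """
    rounded = [round(v, decimals) for v in components]
    residual = round(total - sum(rounded), decimals + 2)
    i = max(range(len(rounded)), key=lambda k: rounded[k])
    rounded[i] = round(rounded[i] + residual, decimals)
    return rounded

# A hypothetical three-component recipe that must total 100%.
recipe = round_mixture([33.3333, 33.3333, 33.3334], 100.0)
```

Naively rounding each component here would give 99.99%; pushing the 0.01% remainder into one component keeps the recipe valid while staying easy to weigh out.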

DX13 makes it far easier to bring in existing data. Simply paste in your data from a spreadsheet (or another statistical program) and identify each column as an input or output. If you paste in headers, right-click the rows to identify names and units of measure. For example, DX13 enables entry of the well-known Longley data (see the “Historical Data – Unemployment” tutorial for background) directly from an Excel spreadsheet. Easy! Once in Design-Expert, its advanced tools for design evaluation, modeling and graphics can be put to good use.

**Designs:**

- The Constraints node now allows you to modify existing limits: Second thoughts? No problem!
- New ribbon with easy access to versatile design-layout features such as Change View, and Hide/Show Columns
- Runs outside the constraints are flagged, but remain usable for analysis; they can also be moved back into the valid space via the right-click menu
- Adding verification runs after an analysis no longer invalidates it
- Continuous and discrete numeric factors now indicated in the Design Summary

**Analysis:**

- Response name now included when copying equations to Excel
- Pearson, Deviance, and Hosmer-Lemeshow goodness-of-fit tests added for logistic regression

**Diagnostics:**

- New preference for the default layout of the Diagnostics tabs

**Graphs:**

- Box (and whiskers) Plot for Graph Columns: Another very useful tool for data exploration prior to analysis.
- Control multiple graphs at the same time with the factors tool: Side-by-side interactive views—enlightening!
- Perturbation and trace plots now colored by factor
- New All-Factor graphs option shows only factors selected for the model
- When the number of tick-marks becomes large, only a subset is shown
- For large designs, the Leverage graph scales to maximum value, rather than 1
- When FDS-graph crosshair goes above 80% it changes to black, rather than red

As a Stat-Ease statistical consultant, I am often asked, **“What are the green triangles (Christmas trees) on my half-normal plot of effects?”**

Factorial design analysis utilizes a half-normal probability plot to identify the largest effects to model, leaving the remaining small effects to provide an error estimate. Green triangles appear when you have included replicates in the design, often at the center point. Unlike the orange and blue squares, which are factor-effect estimates, the green triangles are noise-effect estimates, or “pure error”. The green triangles represent the amount of variation in the replicates, with the number of triangles corresponding to the degrees of freedom (df) from the replicates. For example, five center points have four df, hence four triangles appear. The triangles are positioned within the factor effects to reflect the relative size of the noise effect. Ideally, the green triangles will land in the lower left corner, near zero (see Figure 1). In this position, they are combined with the smallest (insignificant) effects and help position the red line. Factor effects that jump off that line to the right are most likely significant. Consider the triangles as an extra piece of information that increases your ability to find significant effects.
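
The arithmetic behind the triangles is simple to illustrate. Assuming five hypothetical center-point responses (the numbers below are made up), the pure-error degrees of freedom and variance estimate work out as follows:

```python
import numpy as np

# Five hypothetical center-point responses (replicates of the same run).
center = np.array([48.2, 47.5, 49.1, 48.8, 47.9])

df_pure = len(center) - 1                      # 5 replicates -> 4 df, hence 4 triangles
ss_pure = np.sum((center - center.mean())**2)  # pure-error sum of squares
ms_pure = ss_pure / df_pure                    # pure-error mean square (variance estimate)
```

The pure-error mean square is the variance estimate that positions the triangles; each degree of freedom contributes one triangle to the plot.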

Once in a while we encounter an effects plot that looks like Figure 2. **“What does it mean when the green triangles are out of place - on the upper right side instead of the lower left?”**

This indicates that the variation between the replicates is greater than the largest factor effects! Since this error is part of the normal process variation, you cannot say that any of the factor effects are statistically significant. At this point you should first check the replicate data to make sure it was both measured and recorded correctly. Then, carefully consider the sources of process variation to determine how the variation could be reduced. For a situation like this, either reduce the noise or increase the factor ranges. This generates larger signals that allow you to discover the significant effects.

*- Shari Kraber*

*For statistical details, read “Use of Replication in Almost Unreplicated Factorials” by Larntz and Whitcomb.*

*For more frequently asked questions, sign up for Mark’s bi-monthly e-mail, The DOE FAQ Alert.*

Design of experiments (DOE), being such an effective combination of multifactor testing with statistical tools, hits the spot for engineers and scientists doing industrial R&D. However, as documented in my white paper on Achieving Breakthroughs in Non-Manufacturing Processes via Design of Experiments (DOE), this statistical methodology works equally well for business processes. Yet non-manufacturing experimenters rarely make it beyond simple one-factor-at-a-time (OFAT) comparisons known as A/B splits—most recently embraced, to my great disappointment, by the *Harvard Business Review*.\* But to give HBR some credit, this feature on experimentation at least mentions “multivariate” (I prefer “multifactor”) testing as a better alternative.

To see an illuminating example of multifactor testing applied to marketing, see my April 21 StatsMadeEasy blog: Business community discovers that “Experimentation Works”.

Another great case for applying multifactor DOE came from Kontsevaia and Berger in a study published by the *International Journal of Business, Economics and Management*.\*\* To maximize impressions per social-media post, they applied a fractional two-level design on six factors in 16 runs, varying:

A. Type of Day/Day of the week: Weekend (Sat, Sun) vs Workday (Thu, Fri)

B. Social Media Channel: LinkedIn vs Twitter

C. Image present: No vs Yes

D. Time of Day: Afternoon (3-6pm) vs Morning (7-10am)

E. Length of Message: Long (at least 70 characters) vs Short (under 70 characters)

F. Hashtag present: No vs Yes

The multifactor marketing test revealed the choice of channel for maximum impressions to be highly dependent on posts going out on weekends versus workdays. This valuable insight on a two-factor interaction (AB) would never have been revealed by a simple OFAT split.
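
To see the structure of such a design, here is a generic Python sketch of a 16-run, six-factor, two-level fractional factorial (a 2^(6-2) layout). The generators E = ABC and F = BCD are one common resolution IV choice; the study's actual generators are not stated here, so treat this as illustrative.

```python
import itertools
import numpy as np

# Full 2^4 factorial in base factors A, B, C, D (16 runs, coded -1/+1).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Generators (one common resolution IV choice): E = ABC, F = BCD.
E = A * B * C
F = B * C * D

design = np.column_stack([A, B, C, D, E, F])  # 16-run 2^(6-2) design
# Columns are mutually orthogonal, so each main effect is estimated
# independently of the others (though aliased with some higher-order terms).
```

Orthogonality is what lets a 16-run test untangle six factors, and resolution IV keeps main effects clear of two-factor interactions, which is why the AB interaction above could be detected at all.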

Design-Expert® software makes multifactor business experiments like this very easy for non-statisticians to design, analyze and optimize for greatly increased returns. Aided by Stat-Ease you can put DOE to work for your enterprise and make a big hit career-wise.

*“Building a Culture of Experimentation”, Stefan Thomke, March-April, 2020.

**“Analyzing Factors Affecting the Success of Social Media Posts for B2B Networks: A Fractional-Factorial Design Approach”, August, 2020.