Stat-Ease Blog


ASQ/ASA Fall Technical Conference Attendees - Sharpen those DOE Skills!

posted by Greg & Shari on Aug. 19, 2019

Expand your design of experiments (DOE) expertise at the 2019 Fall Technical Conference.

Martin Bezener will be teaching a pre-conference short course on Practical DOE and giving a session talk on binary data (abstracts are below). Shari Kraber will be hosting our exhibit booth and can discuss your DOE needs.

Short Course Abstract (Wed, Sep 25): Practical DOE: ‘Tricks of the Trade’

In this dynamic short course, Stat-Ease consultant Martin Bezener reveals DOE tricks of the trade that help you get the most from statistical design and analysis of experiments. Come and learn many secrets for design of experiments (DOE) success, such as:

  • How to build irregularly-shaped DOE spaces that cover your region of interest
  • Using logistic regression to get the most from binomial data such as a pass/fail response
  • Clever tweaks to numerical optimization
  • Cool tools for augmenting a completed experiment to capture a peak beyond its reach
  • Other valuable tips and tricks, as time allows and interest is expressed

Session 6A Abstract (Fri, Sep 27, 1:30pm): Practical Considerations in the Design of Experiments for Binary Data

Binary data is very common in experimental work. In some situations, a continuous response is not possible to measure. While the analysis of binary data is a well-developed field with an abundance of tools, design of experiments (DOE) for binary data has received little attention, especially the practical aspects that are most useful to experimenters. Most of the work, in our experience, has been too theoretical to put into practice. Many of the well-established designs that assume a continuous response don’t work well for binary data, yet are often used for teaching and consulting purposes. In this talk, I will briefly motivate the problem with some real-life examples we’ve seen in our consulting work. I will then provide a review of the work that has been done up to this point. Then I will explain some outstanding open problems and propose some solutions. Some simulation results and a case study will conclude the talk.

Be sure to mark these as must-attend events on your FTC schedule!

Hurry – Early Bird Registration ends Aug 25. Hotel block ends Aug 27. Final Registration ends September 13.


Links for more info:

www.falltechnicalconference.org


How Can I Convince Colleagues Working on Formulations to Use Mixture Design Rather than Factorials or Response Surface Methods as They Would Do for Process Studies?

posted by Martin Bezener, PhD, on Aug. 12, 2019

We recently published the July-August edition of The DOE FAQ Alert. One of the items in that publication was the question below, and it's too interesting not to share here as well.

Original question from a Research Scientist:

"Empowered by the Stat-Ease class on mixture DOE and the use of Design-Expert, I have put these tools to good use for the past couple of years. However, I am having to more and more defend why a mixture design is more appropriate than factorials or response surface methods when experimenting on formulations. Do you have any resources, blogs posts, or real-world data that would better articulate why trying to use a full factorial or central composite design on mixture components is not the most effective option?"

Answer from Stat-Ease Consultant Martin Bezener:

“First, I assume you are talking about factorials or response surface method (RSM) designs involving the proportions of the components. It makes no sense to use a factorial or RSM if you are dealing with amounts, since doubling the amount of everything should not affect the response, but it will in a factorial or response-surface model.

"There are some major issues with factorial designs. For one thing, the upper bounds of all the components need to sum to less than 1. For example, let’s say you experimented on three components with the following ranges:

A. X1: 10 - 20%
B. X2: 5 - 6%
C. X3: 10 - 90%

then the full-factorial design would lay out a run at all-maximum levels, which makes no sense as that gives a total of 116% (20+6+90). Oftentimes people get away with this because there is a filler component (like water) that takes the formulation to a fixed total of 100%, but this doesn't always happen.

"Also, a factorial design will only consider the extreme combinations (lows/highs) of the mixture. So, you'll get tons of vertices but no points in the interior of the space. This is a waste of resources, since a factorial design doesn't allow fitting anything beyond an interaction model.

"An RSM design can be ‘crammed’ into mixture space to allow curvature fits, but this is generally a very poor design choice. Using ratios of components provides a work-around, but that has its own problems.

"Whenever you try to make the problem fit the design (rather than the other way around), you lose valuable information. A very nice illustration of this was provided in the by Mark Anderson in his article on the “Peril of Parts & the Failure of Fillers as Excuses to Dodge Mixture Design” in the May 2013 Stat-Teaser.”

An addendum from Mark Anderson, Principal of Stat-Ease and author of The DOE FAQ Alert:

"The 'problems' Martin refers to for using ratios (tedious math!) are detailed in RSM Simplified Chapter 11: 'Applying RSM to Mixtures'. You can learn more about this book and the others in the Simplified series ('DOE' and 'Formulation') on our website."




Greg’s DOE Adventure - Simple Comparisons

posted by Greg on July 26, 2019

Disclaimer: I’m not a statistician. Nor do I want you to think that I am. I am a marketing guy (with a few years of biochemistry lab experience) learning the basics of statistics, design of experiments (DOE) in particular. This series of blog posts is meant to be a light-hearted chronicle of my travels in the land of DOE, not to be a textbook for statistics. So please, take it as it is meant to be taken. Thanks!

As I learn about design of experiments, it’s natural to start with simple concepts, such as an experiment where one of the inputs is changed to see if the output changes. That seems simple enough.

For example, let’s say you know from historical data that if 100 children brush their teeth with a certain toothpaste for six months, 10 will have cavities. What happens when you change the toothpaste? Does the number with cavities go up, down, or stay the same? That is a simple comparative experiment.

“Well then,” I say, “if you change the toothpaste and 6 months later 9 children have cavities, then that’s an improvement.”

Not so fast, I’m told. I’ve already forgotten about that thing called variability that I defined in my last post. Great.

In that first example, 10 kids got cavities. That result comes from that particular sample of 100 kids. A different sample of 100 kids may produce an outcome of 9; another, 11. There is some variability in there. It’s not 10 every time.

[Note: You can and should remove as much variability as you can. Make sure the children brush their teeth twice a day. Make sure it’s for exactly 2 minutes each time. But there is still going to be some variation in the experiment. Some kids are just more prone to cavities than others.]
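
Here’s a tiny simulation of that wobble (my own toy example; the 10% cavity rate is just the historical figure from above):

```python
# Draw repeated samples of 100 kids, each kid with a 10% chance of a cavity,
# and count the cavities in each sample.
import numpy as np

rng = np.random.default_rng(seed=1)
counts = rng.binomial(n=100, p=0.10, size=10)  # 10 samples of 100 kids each
print(counts)  # e.g. [ 9 12 10  8 ...] -- not exactly 10 every time
```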

How do you know when your observed differences are due to the changes to the inputs, and not from the variation?

It’s called the F-Test.

I’ve seen it written as:

F = \frac{n \, s_{\bar{y}}^{2}}{s^{2}}

Where:

s = standard deviation

s² = variance (so s²_ȳ is the variance of the sample averages)

n = sample size

y = response

ȳ (“y bar”) = average response

In essence, this is the variance among the sample averages (multiplied by the sample size) divided by the variance of the individual observations in the experiment.

Now that, by itself, does not mean much to me (see disclaimer above!). But I was told to think of it as the ratio of signal to noise. The top part of that equation is the amount of signal you are getting from the new condition; it’s the amount of change you are seeing from the established mean with the change you made (new toothpaste). The bottom part is the total variation you see in all your data. So, the way I’m interpreting this F-Test is (again, see disclaimer above): measuring the amount of change you see versus the amount of change that is naturally there.

If that ratio is close to 1, more than likely there is no real difference between the two. In our example, changing the toothpaste probably makes no difference in the number of cavities.

As the F-value goes up, then we start to see differences that can likely be credited to the new toothpaste. The higher the value of the F-test, the less likely it is that we are seeing that difference by chance and the more likely it is due to the change in the input (toothpaste).
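
If you want to poke at this yourself, here is a minimal sketch using scipy’s one-way ANOVA, which computes this kind of F ratio (the cavity counts below are made up purely for illustration):

```python
# Compare cavity counts from repeated 100-kid samples under two toothpastes.
import numpy as np
from scipy import stats

old_paste = np.array([10, 9, 11, 10, 12, 8, 10, 11])  # cavities per 100 kids
new_paste = np.array([9, 8, 10, 9, 9, 7, 8, 10])

f_value, p_value = stats.f_oneway(old_paste, new_paste)
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
# An F near 1 says the difference looks like ordinary noise; a large F
# (with a small p-value) says the new toothpaste likely made a difference.
```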

Trivia

Question: Why is this thing called an F-Test?

Answer: It is named after Sir Ronald Fisher. He was a geneticist who developed this test while working on some agricultural experiments. “Gee Greg, what kind of experiments?” He was looking at how different kinds of manure affected the growth of potatoes. Yup. How “Peculiar Poop Promotes Potato Plants”. At least that would have been my title for the research.


2019 Analytics Solutions Conference Wrap Up

posted by Greg on July 12, 2019

We recently wrapped up the 2019 Analytics Solutions Conference in Minneapolis, MN. This was the first time we organized a US conference on analytics with our Norwegian partners, Camo Analytics. It could not have gone better! A BIG congratulations on a job well done to Shari Kraber, who spearheaded the organization of this conference.

The goal of the conference was to help attendees transform their business, from R&D to production, using data-driven tools. If I may toot our own horn here, we did that. There was a lot of information to digest at this meeting, but the camaraderie of attendees made it easy. We overheard many conversations in the halls as attendees discussed what they had just learned or brainstormed solutions to business problems they were facing.

The conference started with a short course day. Two all-day seminars were conducted in parallel, one titled “Practical DOE: ‘Tricks of the Trade’” and the other “Realizing Industry 4.0 through Industrial Analytics”. We saw a lot of people experienced with design of experiments (DOE) go to the Industry 4.0 course, and a lot of the analytics pros head to learn about DOE. What a great way to start!

Day one wrapped up with an opening reception for all attendees and speakers, fostering networking and building relationships.

The conference days were filled with session talks that wrapped around keynote presentations (a fantastic trio of speakers; find out more at the link below). The result was a great mix of academic-style overviews and down-to-earth, real-world case studies. It struck a perfect balance.


If you couldn’t make it to the conference, details on each presentation (including PDFs) are posted on the ASC speakers page.

To add some fun to the mix, the evening’s social event was a dinner cruise on the Mississippi River for all attendees. It was quite the treat, especially for those of us who live here! We rarely take the opportunity to enjoy activities like this in our own backyard. We traveled through a lock, spotted a bald eagle in the trees, and enjoyed the casual atmosphere.


The next Stat-Ease hosted conference will be our 8th European DOE Users Conference, to be held in Groningen, Netherlands, June 17-19. We hope to see you there!

If you would like to be kept in the loop about the conference, sign up for our mailing list. We will be sending out information regarding a call for speakers and registration later in the fall. Sign up now, before you forget!

See you in Groningen!


Four Tips for Graduate Students' Research Projects

posted by Shari on May 22, 2019

Graduate students are frequently expected to use design of experiments (DOE) in their thesis project, often without much DOE background or support. This results in some classic mistakes.

  1. Designs that were popular in the 1970’s-1990’s (before computers were widely available) have been replaced with more sophisticated alternatives. A common mistake – using a Plackett-Burman (PB) design either for screening purposes or to gain process understanding of a system that is highly likely to have interactions. PB designs are badly aliased resolution III designs, so any interactions present in the system will bias many of the main effect estimates. This increases the internal noise of the design and can easily lead to misleading and inaccurate results (see the sketch after this list). Better designs for screening are regular two-level factorials at resolution IV or minimum-run (MR) designs. For details on PB, regular, and MR designs, read DOE Simplified.
  2. Reducing the number of replicated points will likely result in losing important information. A common mistake – reducing the number of center points in a response surface design down to one. The replicated center points provide an estimate of pure error, which is necessary to calculate the lack of fit statistic. Perhaps even more importantly, they reduce the standard error of prediction in the middle of the design space. Eliminating the replication may mean that results in the middle of the design space (where the optimum is likely to be) have more prediction error than results at the edges of the design space!
  3. If you plan to use DOE software to analyze the results, then use the same software at the start to create the design. A common mistake – designing the experiment based on traditional engineering practices, rather than on statistical best practices. The software very likely has recommended defaults that will produce a better design than what you can plan on your own.
  4. Plan your experimentation budget to include confirmation runs after the DOE has been run and analyzed. A common mistake – assuming that the DOE results will be perfectly correct! In the real world, a process is not improved unless the results can be proven. It is necessary to return to the process and test the optimum settings to verify the results.
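
For tip 1, here is a small demonstration of that partial aliasing, sketched in Python with the third-party pyDOE2 package (an assumption on our part; any DOE package would show the same pattern). It builds a 12-run PB design and checks how an interaction column lines up with the main-effect columns:

```python
# Build a 12-run Plackett-Burman design for 8 factors and show how the A*B
# interaction contrast correlates with the main-effect columns.
import numpy as np
from pyDOE2 import pbdesign  # assumes the pyDOE2 package is installed

design = pbdesign(8)               # 12 runs, columns A..H coded -1/+1
ab = design[:, 0] * design[:, 1]   # the A*B interaction column

for i, name in enumerate("ABCDEFGH"):
    r = np.corrcoef(ab, design[:, i])[0, 1]
    print(f"corr(AB, {name}) = {r:+.2f}")

# The nonzero correlations (magnitude 1/3 in a 12-run PB) mean an active AB
# interaction leaks into several main-effect estimates -- the resolution III
# aliasing warned about in tip 1.
```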

The number one thing to remember is this: using previous students’ theses as a basis for yours means that you may be repeating their mistakes and propagating poor practices! Don’t be afraid to forge a new path and showcase your talent for using state-of-the-art statistical designs and best practices.