
Inside the Lean House

And Homer says… D.O.E.
Intelligent Experimentation
by: Michael Coyne, SQA, Faurecia Automotive Seating Business Group

(This month I am switching to a subject more in line with the other side of my training, Six Sigma.) I have heard it so many times. A supplier explains to me that their staff has undertaken a DOE experiment to determine the root cause of a quality or productivity problem. When I enquire about the details of the experiment, I am told the team has shut down the production line to start experimentation immediately. They will introduce small incremental changes in one variable at a time while holding all other variables steady. In an effort to maintain stability, similar trials will be run one after another. Due to the large number of variables to be trialed, only a single trial will be undertaken for each variable. All outputs of the process will be recorded for future reference. This is a great deal of work to fix the problem, so I should be happy, shouldn't I? OFAT, I say.

So what exactly is this mysterious DOE (pronounced D-O-E, not Doh!)? One of the best characterizations I've heard is, "If SPC listens to the process, DOE interrogates it." Design of Experiments is a methodology that employs matrix algebra and statistics to ensure clear and tangible results come from experimentation. The methodology started with the great statistician Sir Ronald Fisher, who refined it while conducting genetics studies at the Rothamsted Experimental Station in England and subsequently published his work in the book The Design of Experiments. In it he details the six criteria used to create efficient experimental designs: comparison, randomization, replication, blocking, orthogonality and factorial experimentation.

Comparison refers to the tactic of comparing treatments one against the other. The best-known example is the use of a placebo in medical drug trials, which establishes a baseline response. Randomization refers to completing trials in random order, so that no unknown factor skews our test results.
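The randomization tactic is easy to sketch in code. The snippet below builds every combination of three hypothetical two-level factors (the factor names and levels are invented for illustration) and then shuffles the run order, so that drifting conditions such as tool wear cannot line up with any one factor:

```python
import itertools
import random

# Hypothetical example: three process factors, each at a low and a high level.
factors = {"temperature": [180, 220], "pressure": [5, 10], "speed": [1.0, 1.5]}

# Full factorial: every combination of levels (2 x 2 x 2 = 8 treatments).
treatments = [dict(zip(factors, levels))
              for levels in itertools.product(*factors.values())]

# Randomize the run order so unknown time-varying factors
# (tool wear, ambient conditions) cannot skew the results.
random.seed(42)  # fixed seed only so the sketch is repeatable
random.shuffle(treatments)

for i, t in enumerate(treatments, 1):
    print(f"run {i}: {t}")
```

Note that the full set of treatments is decided up front; only the order in which they are run is left to chance.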
Replication refers to carrying out multiple trials for a particular treatment, or set of parameter settings. This helps us drive out error in the estimate of the process response and quantify the amount of variation in that response. Blocking involves grouping sets of trials into blocks that can be compared against each other.

This helps us account for irrelevant sources of variation in the experiment. An example would be grouping the participants in a medical drug trial into male and female blocks. Orthogonality is a matrix algebra term meaning that the factors are independent of each other in determining the response of the process. In simple terms, I can tell that a change in the process is caused by this variable and not another one. The last criterion, the use of factorial experiments instead of One-Factor-At-a-Time (OFAT) experiments, provides two key benefits: it reduces the number of trials and it provides information on interactions. This last point can be the key to understanding a process. Some effects are significant only when two or more variables act together, and OFAT methods will never reveal them.

Quickly, the typical process for conducting a DOE experiment is to first identify all possible factors (variables) that could affect the output of the process. These factors can be categorized as constants, noise factors and experimental factors. A constant factor is something that does not vary significantly; a noise factor varies, but we have no control over it. Once the experimental factors are identified, we choose those we wish to experiment on, based on our expert knowledge of the process. The number of factors chosen is typically a function of how many trials we can tolerate and afford. Once the factors are chosen, an appropriate design can be selected based on whether we want to model the process or just understand which factors have an impact on it. Some common designs are full factorial, fractional factorial, Taguchi, D-Optimal and Box-Wilson. With our design in hand, we determine the number of trials per treatment, randomize the order in which they are carried out, run our variables at their extreme maximum/minimum values and start to collect the response data.
With the response data in hand, we apply statistical tools such as ANOVA and confidence intervals to validate our results and understand them. Finally, we run validation trials to confirm our models and optimize the process. Well, I have barely touched on this subject; entire books have been written on it. But take my word for it: it pays to follow a low-OFAT diet.
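To make the interaction point concrete, here is a sketch of the basic effect arithmetic on a 2-factor factorial. The response values are invented for illustration; an effect is simply the average response at the high setting minus the average at the low setting, and the A-by-B interaction uses the product of the two coded columns, something an OFAT experiment can never estimate:

```python
import itertools

# Hypothetical responses from a 2^2 full factorial on factors A and B
# (coded -1/+1). The numbers are invented for illustration only.
response = {(-1, -1): 20.0, (+1, -1): 30.0,
            (-1, +1): 22.0, (+1, +1): 44.0}

def effect(column):
    """Average response where the column is +1, minus where it is -1."""
    hi = [r for s, r in response.items() if column(s) == +1]
    lo = [r for s, r in response.items() if column(s) == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

main_A = effect(lambda s: s[0])            # main effect of A
main_B = effect(lambda s: s[1])            # main effect of B
interaction_AB = effect(lambda s: s[0] * s[1])  # invisible to OFAT

print(f"A: {main_A}, B: {main_B}, AxB interaction: {interaction_AB}")
```

With these made-up numbers, the interaction effect is nonzero: how much factor A helps depends on the setting of factor B, which is exactly the kind of finding a one-factor-at-a-time sweep would miss.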
