DMAIC:

ANALYZE PHASE

Marianne Erika A. Axalan
Mikha Noreen I. Calixtro
Ma. Pamela D. Gerez
Lois Gabrielle I. Silvestre
5IEC

The purpose of this step is to identify, validate, and select the root cause of the project problem, which is identified via root cause analysis.
I. IDENTIFY ROOT CAUSES
1. Cause and Effect Diagram (Fishbone Diagram)

A. Developed by Dr. Kaoru Ishikawa in 1943

B. Identifies many possible causes for an effect or problem
Advantages of the Cause and Effect Diagram

1. Analyzing actual conditions for the improvement of product or service quality

2. Elimination of conditions causing nonconforming products or services

3. Standardization of existing and proposed operations

4. Education and training in decision-making

2. Cause and Effect Matrix (XY Matrix)
A. Can be used to prioritize projects based on
quantitative evaluation
B. A way of mapping out how value is transmitted from
the input factors (Xs) to the product outputs (Ys)
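As an illustration of the scoring logic, below is a minimal Python sketch of a Cause and Effect (XY) Matrix: each input (X) is scored against each output (Y), the scores are weighted by the output's importance, and the inputs are ranked by their weighted totals. The output weights, input factors, and scores are invented for illustration, not taken from the project.

# Cause-and-Effect (XY) Matrix sketch: rank inputs by importance-weighted score.
outputs = {"Serving time": 9, "Order accuracy": 7}      # Y : importance weight
inputs = {
    #  X factor            [score vs. Y1, score vs. Y2]  (1 = weak ... 9 = strong)
    "Number of servers":   [9, 3],
    "Number of helpers":   [7, 5],
    "Buffer stock level":  [5, 1],
}

weights = list(outputs.values())
ranking = sorted(
    ((name, sum(s * w for s, w in zip(scores, weights)))
     for name, scores in inputs.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, total in ranking:
    print(f"{name:20s} weighted score = {total}")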
II. VERIFY ROOT CAUSES
1. Hypothesis Testing
Hypothesis testing is a statistical analysis where a
hypothesis is stated, sample data are collected, and a
decision is made based on the sample data and related
probability value (p-value).

It is designed to help us make an inference about the true population value at a desired confidence level.
1. Hypothesis Testing
Probability Value (p-value)

How likely it would be to get data like the sample value observed

Confidence Interval

Indicates a range of values that is likely to encompass the true value

Confidence Level

The probability that the confidence interval encompasses the true value
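To make these terms concrete, here is a small Python sketch (using numpy and scipy) that computes a two-sided 95% confidence interval for a mean; the sample values are assumed for illustration, not project data.

# Two-sided confidence interval for a mean from sample data.
import numpy as np
from scipy import stats

sample = np.array([29.1, 30.4, 28.7, 31.0, 29.5, 30.2, 28.9, 29.8])
confidence_level = 0.95

mean = sample.mean()
sem = stats.sem(sample)                                  # standard error of the mean
df = len(sample) - 1
t_crit = stats.t.ppf(1 - (1 - confidence_level) / 2, df)

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"{confidence_level:.0%} CI for the mean: ({low:.2f}, {high:.2f})")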
1. Hypothesis Testing
The type of hypothesis test that could be conducted is based on the data type (discrete or continuous) of the y data.

Continuous Data - conduct tests on: Mean (μ or x̄), Median, Variance (σ² or s²)

Discrete Data - conduct tests on: Proportions
1. Hypothesis Testing
Two types of error associated with hypothesis testing:

1. Type I Error (α error)

The probability that a hypothesis that is actually true will be rejected.

Alpha (α) is typically set to 5%.

If p-value < α, reject H0.
1. Hypothesis Testing
Two types of error associated with hypothesis testing:

2. Type II Error (β error)

The probability that a hypothesis that is actually false will be accepted.

Power is the ability of the test to detect a difference if one exists.

Power = 1 - β

Goal: a power value closer to 1
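As a rough illustration of power, the Python sketch below estimates 1 - β by simulation: it repeatedly samples from an assumed true distribution and counts how often a one-sample t-test rejects H0. The true mean, sigma, and sample size are assumed values chosen only for the example.

# Simulation-based estimate of power (1 - beta) for a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, true_mu, sigma = 30.0, 28.0, 6.0       # hypothesized vs. (assumed) true mean
n, alpha, n_sim = 25, 0.05, 10_000

rejections = 0
for _ in range(n_sim):
    sample = rng.normal(true_mu, sigma, n)
    _, p_value = stats.ttest_1samp(sample, mu0)
    rejections += p_value < alpha

print(f"Estimated power = {rejections / n_sim:.2f}")    # goal: close to 1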
1. Hypothesis Testing
Steps of Conducting a Hypothesis Test:

1. State the Null Hypothesis (H0) - a statement of what is to be disproved

2. State the Alternative Hypothesis (H1) - a statement of what is to be proved

3. Set Alpha (α) - the significance level of the test

4. Collect a sample of observations from the population

5. Construct a critical region

6. Calculate the test statistic based on the sample

7. Draw a conclusion about H0
1. Hypothesis Testing
Example: Hypothesis Test of a Sample Mean

The nominal specification for filling a bottle with a test chemical is 30 cc. The plan is to draw a sample of n = 25 units from a stable process and, using the sample mean and standard deviation, construct a two-sided confidence interval that has a 95% probability of including the true population mean.
1. Hypothesis Testing
Solution:

1. H0: μ = 30 cc

2. H1: μ ≠ 30 cc

3. Alpha (α) = 0.05

4. A sample of 25 bottles was measured and the following statistics were computed:

x̄ = 28 cc, s = 6 cc

5. Computing the test statistic t:

t0 = (x̄ - μ0) / (s / √n) = (28 - 30) / (6 / √25) = -1.67
1. Hypothesis Testing
6. Critical Region: reject H0 if t0 < -2.064 or t0 > 2.064

df = n - 1 = 25 - 1 = 24

α/2 = 0.025; t0.025,24 = 2.064 (based on the t distribution table)
1. Hypothesis Testing
7. Conclusion: Since |t0| = 1.67 < 2.064, we fail to reject H0 and retain the hypothesis that the lot mean is 30 cc for the data at hand.
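The same example can be checked in Python from the summary statistics alone. The sketch below recomputes t0, the critical value, and the p-value with scipy and reaches the same "fail to reject H0" conclusion.

# One-sample t-test of the bottle-fill example from summary statistics.
import math
from scipy import stats

mu0, xbar, s, n, alpha = 30.0, 28.0, 6.0, 25, 0.05

t0 = (xbar - mu0) / (s / math.sqrt(n))          # test statistic
df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)         # ≈ 2.064 for df = 24
p_value = 2 * stats.t.sf(abs(t0), df)

print(f"t0 = {t0:.2f}, critical value = ±{t_crit:.3f}, p-value = {p_value:.3f}")
print("Reject H0" if abs(t0) > t_crit else "Fail to reject H0")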
2. Measurement System Analysis (MSA)
An experimental and mathematical method of determining how much
the variation within the measurement process contributes to
overall process variability.

An effective MSA process can help assure that the data being
collected is accurate and the system of collecting the data is
appropriate to the process. Good reliable data can prevent wasted
time, labor and scrap in a manufacturing process.

An ineffective measurement system can allow bad parts to be accepted and good parts to be rejected, resulting in dissatisfied customers and excessive scrap. MSA could have prevented such problems by assuring that accurate, useful data was being collected.
There are 6 parameters to investigate in an MSA.

1. Discrimination - sometimes called resolution, refers to the ability of the measurement system to divide measurements into data categories. All parts within a particular data category will measure the same.

A measurement system's discrimination should enable it to divide the region of interest into many data categories. The goal is to have at least 5 distinct values or categories of readings. In Six Sigma, the region of interest is the smaller of the tolerance or six standard deviations.

When unacceptable discrimination exists, the range chart shows discrete jumps or steps.
2. Stability - the change in bias over time when using a measurement
system to measure a given master part or standard.

Statistical stability is a broader term that refers to the overall consistency of measurements over time.

A system's statistical stability is determined through the use of control charts. Control charts are constructed and evaluated; a (statistically) stable system will show no out-of-control readings.

SPC charts use a variety of tests to determine stability.


3. Bias - also referred to as Accuracy, is a measure of the distance between the average value of the measurements and the "true" or "actual" value of the sample or part.

a. Subtract the reference value from x̄ to yield the bias:

Bias = x̄ - Reference Value

Process Variation = 6 Standard Deviations (sigma)

b. Calculate the bias percentage (a numeric sketch of these steps follows below):

Bias Percentage = Bias / Process Variation

c. Analyze the results. If the bias percentage is relatively high, examine the following potential root causes:

Appraisers not following the measurement procedure

An error in measuring the Reference Value

Instability in the measurement. If the SPC chart shows a trend, the measurement device could be wearing or calibration could be drifting.
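A small numeric sketch of steps a and b in Python; the reference value, the repeated measurements, and the process sigma are assumed values used only to show the arithmetic.

# Bias and bias-percentage calculation for a measured master part.
import numpy as np

reference_value = 30.00            # "true" value of the master part (assumed)
measurements = np.array([30.12, 30.08, 30.15, 30.10, 30.05, 30.11])
process_sigma = 0.25               # process standard deviation (assumed)

xbar = measurements.mean()
bias = xbar - reference_value                     # Bias = x-bar - Reference Value
process_variation = 6 * process_sigma             # 6 standard deviations
bias_percentage = bias / process_variation        # Bias % = Bias / Process Variation

print(f"Bias = {bias:.3f}, Bias percentage = {bias_percentage:.1%}")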
Accuracy vs. Precision

Efforts to improve measurement system quality are aimed at improving both accuracy
and precision.

For best accuracy of the data:

1. Accept all data as it is collected. Assigning special causes and scrutinizing data comes later.

2. Record the data at the time it occurs.

3. Avoid rounding off the data (rounding can create resolution problems).

4. On the data collection plan, record as many details around the data as possible. Record legibly and carefully.
4. Repeatability - A measurement system is repeatable if its
variability is consistent. Consistent variability is operationalized by
constructing a range or sigma chart based on repeated measurements of
parts that cover a significant portion of the process variation or the
tolerance, whichever is greater.

If the range or sigma chart is out of control, then special causes are making the measurement system inconsistent. If the range or sigma chart is in control, then repeatability can be estimated by finding the standard deviation.

5. Reproducibility - A measurement system is reproducible when different appraisers produce consistent results.

The standard deviation of reproducibility (σo) is estimated by finding the range between appraisers (Ro) and dividing it by the constant d2. Reproducibility is then computed as 5.15σo.
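A minimal Python sketch of this range-method estimate, assuming three appraisers with illustrative appraiser averages (d2 ≈ 1.693 is the control-chart constant for three values).

# Range-method reproducibility: sigma_o = R_o / d2, reproducibility = 5.15 * sigma_o.
appraiser_averages = {"Appraiser A": 30.08, "Appraiser B": 30.14, "Appraiser C": 30.02}
d2 = 1.693                                    # constant for a subgroup of 3

R_o = max(appraiser_averages.values()) - min(appraiser_averages.values())
sigma_o = R_o / d2
reproducibility = 5.15 * sigma_o              # 5.15-sigma spread convention used above

print(f"R_o = {R_o:.3f}, sigma_o = {sigma_o:.4f}, reproducibility = {reproducibility:.4f}")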
6. Linearity - This is a test to examine the performance of the measurement system throughout the range of measurements.

Linearity represents the change in accuracy through the expected operating range of a measurement device; it examines how accurate your measurements are throughout the expected range of the measurements.
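A rough sketch of a linearity check in Python: compute the bias at several reference values across the operating range and fit a line of bias versus reference. A slope near zero suggests accuracy does not drift across the range. All numbers are illustrative.

# Linearity study: bias versus reference value across the operating range.
import numpy as np

reference_values = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
measured_averages = np.array([10.05, 20.02, 30.00, 39.93, 49.88])

bias = measured_averages - reference_values
slope, intercept = np.polyfit(reference_values, bias, 1)   # fit bias = slope*ref + intercept

print(f"Bias per reference point: {np.round(bias, 3)}")
print(f"Linearity fit: bias = {slope:.4f} * reference + {intercept:.4f}")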
According to AIAG (2002), a general rule of thumb for measurement system acceptability is:

1. Under 10 percent error is acceptable.

2. 10 percent to 30 percent error suggests that the system may be acceptable depending on the importance of the application, cost of the measurement device, cost of repair, and other factors.

3. Over 30 percent error is considered unacceptable, and you should improve the measurement system.

AIAG also states that the number of distinct categories the measurement system divides a process into should be greater than or equal to 5.

All measurement systems have error. If you don't know how much of the variation you observe is contributed by your measurement system, you cannot make confident decisions.
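A small Python helper, written here only to illustrate the AIAG rule of thumb above applied to a measurement-error percentage (the example inputs are assumed).

# Classify a measurement system by its error percentage per the AIAG guideline.
def classify_measurement_system(error_percent: float) -> str:
    if error_percent < 10:
        return "Acceptable"
    if error_percent <= 30:
        return "Conditionally acceptable (depends on application, cost, risk)"
    return "Unacceptable - improve the measurement system"

print(classify_measurement_system(8.2))    # Acceptable
print(classify_measurement_system(22.5))   # Conditionally acceptable
print(classify_measurement_system(41.0))   # Unacceptable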
3. Design of Experiments
DOE is used when the x factors can be changed to
understand the impact that each factor has on the
response, the y.

It is used to determine the important factors, to compare a variety of options, and to find the optimal settings for the important factors.


3. Design of Experiments
Example: Faster Serving Time During Peak Hours

Factors studied: Number of Servers (5 servers), Number of Buffers (50 pcs), Number of Helpers (4 helpers)

Depending on the objective of the experiment, the experimental design will change. The experimental design is the list of all experimental runs.

3. Design of Experiments
Two-level Factorial Experiment

Each x factor is varied over two levels. For a continuous factor, these are low and high values.

For a discrete factor, the levels may be conditions of interest, such as Supplier A and Supplier B, which are arbitrarily assigned as the low and high levels.

The minimum number of runs in this kind of design, known as a full factorial, is 2^k, where k is the number of factors studied in the experiment.
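A minimal Python sketch that enumerates a 2^k full factorial with itertools. The factor names and high levels reuse the serving-time example; the specific low levels are assumed for illustration.

# Enumerate all runs of a 2^3 full-factorial design.
from itertools import product

factors = {
    "Servers": (3, 5),        # (low, high) - low value assumed
    "Buffers": (30, 50),
    "Helpers": (2, 4),
}

runs = list(product(*factors.values()))        # 2^3 = 8 runs
print(f"{len(runs)} runs for k = {len(factors)} factors")
for i, run in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={level}" for name, level in zip(factors, run))
    print(f"Run {i}: {settings}")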
3. Design of Experiments
Fractional Factorial Experiment

If more than four x factors are considered in a design, a fractional factorial design is usually run.

A fractional factorial design means that only a subset of the full factorial will be run. The reason for using a fraction of the design is to minimize the number of runs made.
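A small sketch of one common half-fraction, a 2^(4-1) design: build the full 2^3 design in coded -1/+1 units for factors A, B, and C and generate D from the defining relation D = ABC, giving 8 runs instead of the 16 a full 2^4 factorial would need. The choice of generator here is illustrative.

# Half-fraction 2^(4-1) design in coded units, generator D = ABC.
from itertools import product

half_fraction = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                     # defining relation D = ABC
    half_fraction.append((a, b, c, d))

print(" A   B   C   D")
for run in half_fraction:
    print("  ".join(f"{level:+d}" for level in run))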
3. Design of Experiments
Design Experiment Outcome

The outcome of a designed experiment is an understanding of the factors that are important to the y, or response.

A hypothesis test is used for each factor to determine if that factor has a significant effect on the response.
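As a sketch of how such per-factor tests are typically run in Python (assuming pandas and statsmodels are available), the code below fits a model of the response on two factors and reads each factor's p-value from the ANOVA table. The data set is made up for illustration.

# Per-factor significance tests on a designed experiment via ANOVA.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "servers":    [3, 3, 3, 3, 5, 5, 5, 5],
    "helpers":    [2, 2, 4, 4, 2, 2, 4, 4],
    "serve_time": [8.1, 8.4, 7.2, 7.5, 6.0, 6.3, 5.1, 5.4],   # minutes (assumed)
})

model = smf.ols("serve_time ~ C(servers) + C(helpers)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)     # factors with p-value (PR(>F)) < alpha are significant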
ANALYZE PHASE TOLLGATE QUESTIONS
1. Have there been any revisions to the charter? Has the scope changed?

2. What was the approach to analyzing the data? Why were these tools chosen? What worked well / did not work well about these tools?

3. What are the root causes of the problems? How were these conclusions drawn?

4. How did the team analyze the data to identify the factors that account for variation in the process?

5. Once the analysis is completed, what is the opportunity represented by addressing the problem? What is the impact on customer satisfaction, retention, and loyalty?
