
Given a Weibull (3.5, 6500) as the distribution for the HPT nozzles of a fleet (fleet size 150),
create a tool that helps you decide when to do the next inspection based on the
risk you are willing to take (based on the inventory in the shop) and answer the following
questions:
1. How many nozzles do you expect to need to replace if you do an inspection at 4500
cycles?
2. You have 40 in your inventory and you want to be 95% confident that you have the
nozzles that you need to replace. What would be your recommendation if you want
to be 99% confident you have the inventory that you need?
3. How would your decision change if your β < 3.5, for example 2?
4. What about β > 3.5, for example 5?
Please elaborate.

INTRODUCTION
In life data analysis (also called "Weibull analysis"), the practitioner attempts to make
predictions about the life of all products in the population by fitting a statistical distribution
to life data from a representative sample of units. The parameterized distribution for the
data set can then be used to estimate important life characteristics of the product such as
reliability or probability of failure at a specific time, the mean life and the failure rate.
The term "life data" refers to measurements of product life. Product life can be measured in
hours, miles, cycles or any other metric that applies to the period of successful operation of a
particular product. Since time is a common measure of life, life data points are often called
"times-to-failure"
Lifetime Distributions (Life Data Models)
Statistical distributions have been formulated by statisticians, mathematicians and
engineers to mathematically model or represent certain behavior.


The probability density function (pdf) is a mathematical function that describes the
distribution. The pdf can be represented mathematically or on a plot where the x-axis
represents time, as shown in the pdf plot (figure omitted here).

Some distributions, such as the Weibull and lognormal, tend to better represent life data and
are commonly called "lifetime distributions" or "life distributions." In fact, life data analysis is
sometimes called "Weibull analysis" because the Weibull distribution, formulated by
Professor Waloddi Weibull, is a popular distribution for analyzing life data. The Weibull model
can be applied in a variety of forms (including 1-parameter, 2-parameter, 3-parameter or
mixed Weibull). Other commonly used life distributions include the exponential, lognormal
and normal distributions. The analyst chooses the life distribution that is most appropriate to
model each particular data set based on past experience and goodness-of-fit tests.

β (Beta): the Weibull Shape or Slope Parameter

θ (Theta): the Weibull Scale Parameter


The Weibull Shape Parameter (β)

The last and most important parameter of the Weibull distribution is the shape parameter, β.
This parameter is also called the slope parameter because its value is equal to the slope of
your data when plotted on a Weibull probability plot.
Below is an example of what happens to the Weibull distribution as the shape parameter varies.

When β < 1, the Weibull distribution represents a system with a decreasing failure rate,
like the early-life failures on the bathtub curve.

When β = 1, the Weibull distribution is equal to the exponential distribution and the
failure rate is constant.

When β > 1, the Weibull distribution represents a system with an increasing failure rate,
like the end-of-life failures on the bathtub curve.

When β = 3.5, another important beta value, the Weibull distribution is
approximately equal to the normal distribution and the failure rate is increasing.
What Beta values tell us
Beta values are extremely important because they tell us the failure behavior of the
component. Knowledge of the failure behavior will lead us down a certain path when
trying to improve overall reliability and availability. This will aid in decisions as to whether
to apply preventive or predictive maintenance techniques to the equipment component.

The Weibull Scale Parameter (θ)

A change in the scale parameter, θ, has the same effect on the distribution as a change of
the abscissa scale. Increasing the value of θ while holding β constant has the effect of
stretching out the pdf. Since the area under a pdf curve is a constant value of one, the "peak"
of the pdf curve will also decrease with the increase of θ.


The following tool is built in EXCEL.

[Screenshot of the Excel tool. The user inputs are plugged in to the highlighted cells;
the numbered callouts 1-4 mark the sections described below.]

Key points from the tool

1. The tool is interactive enough to take the shape factor, scale factor, number of nozzles in
the fleet, percentage confidence and cycle number from the user
2. The output section displays the percentage failure of the components at the corresponding
cycle number, and the shape and scale factor of the Weibull distribution
3. Failures in the lot (of the size specified by the user) are displayed here
4. Total failures with the indicated confidence interval are shown here


Task 1: We are expected to find the total number of nozzles required to be replaced at
4500 cycles. The life of the HPT nozzles is modeled by fitting a Weibull distribution with
shape factor 3.5 and scale factor 6500, with 95% and 99% CI.

We can observe from the sheet above that at 4500 cycles the failure fraction of the nozzles in
the fleet is 24.1254% (i.e. 36 out of 150 nozzles have failed), and with 95% confidence we can
say that the total failures are 45 out of 150.


Repeating the same for 99% confidence:

We can observe from the sheet above that at 4500 cycles the failure fraction of the nozzles in
the fleet is 24.1254% (i.e. 36 out of 150 nozzles have failed), and with 99% confidence we can
say that the total failures are 49 out of 150.

In the inventory we have 40 in stock, and when inspecting at 4500 cycles we expect 45
failures at 95% confidence and 49 failures at 99% confidence, so the cycle count at which
the inspection should happen is less than 4500 cycles.
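As a cross-check of the Excel tool's logic, here is a minimal Python sketch. It assumes, as the tool appears to, that each of the 150 nozzles fails independently with probability F(t), so the number of failures found at an inspection is Binomial(150, F(t)), and the 95%/99% figures are binomial percentiles:

```python
from scipy.stats import weibull_min, binom

beta, theta = 3.5, 6500   # Weibull shape and scale
fleet, t = 150, 4500      # fleet size and inspection cycle

# Probability a nozzle has failed by t cycles: F(t) = 1 - exp(-(t/theta)^beta)
p_fail = weibull_min.cdf(t, beta, scale=theta)
print(f"F({t}) = {p_fail:.4%}")                      # ~24.13%
print(f"Expected failures = {fleet * p_fail:.1f}")   # ~36.2 -> 36 nozzles

# Inventory needed to cover the failures with 95% / 99% confidence
for conf in (0.95, 0.99):
    need = binom.ppf(conf, fleet, p_fail)
    print(f"{conf:.0%} confidence: stock {need:.0f} nozzles")  # 45 and 49, per the tool
```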


To decide on the optimum inspection cycle, many parameters come into play, for example
the cost function. A cost function (CF) has to be developed to identify the cost
per unit of time associated with different FFI (failure finding inspection) intervals, for the
proposed extended period of life. Moreover, a mathematical model has to be defined to
obtain the optimal FFI interval during the extended period of the replacement life. Following
this methodology, the optimum FFI interval, the one that generates the lowest cost per unit of
time, can be obtained.
The following cost parameters are considered for cost modelling of FFI in the postponement
scenario (a simple sketch follows the list):
Direct cost of inspection task
Cost of possible repair due to a finding
Cost of an accident, due to multiple failures
Opportunity cost of the system's lost production
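The report does not give the cost model itself, so the sketch below is only an illustration of the idea, with made-up cost figures (C_insp, C_repair, C_accident and C_downtime are all hypothetical placeholders): for each candidate interval tau it estimates the expected cost per cycle and picks the cheapest tau.

```python
import numpy as np
from scipy.stats import weibull_min, binom

beta, theta, fleet = 3.5, 6500, 150

# Hypothetical per-event cost figures; placeholders, not from the report
C_insp, C_repair, C_accident, C_downtime = 5e3, 40e3, 2e6, 1e3

def cost_per_cycle(tau):
    """Expected cost per cycle for a failure-finding inspection every tau cycles."""
    p = weibull_min.cdf(tau, beta, scale=theta)   # P(a nozzle has failed by tau)
    exp_repairs = fleet * p                       # expected findings per inspection
    p_accident = 1 - binom.cdf(2, fleet, p)       # say: more than 2 simultaneous failures
    total = (C_insp + exp_repairs * (C_repair + C_downtime)
             + p_accident * C_accident)
    return total / tau

taus = np.arange(1000, 6500, 50)
best = min(taus, key=cost_per_cycle)
print(f"Lowest-cost FFI interval (under these assumptions): {best} cycles")
```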


Task 2: We are expected to find the total number of nozzles required to be replaced at
4500 cycles when the life of the HPT nozzles is fitted with shape factors 2 and 5 (scale
factor 6500), with 95% and 99% CI, and to state our observations and recommendations.
Before we proceed with Task 2, we need to validate the tool created. This was:
Done with the help of Minitab
Done in Excel by feeding in the basic equations of the Weibull distribution

Given parameters for the Weibull distribution: β = 3.5, θ = 6500. Data for the analysis was
generated in Minitab using these parameters; the simulated times-to-failure (in cycles) are
listed below.

9833.305, 9383.167, 9291.802, 9124.458, 9112.325,
9022.399, 8994.941, 8982.733, 8634.911, 8439.03,
8428.263, 8397.177, 8307.408, 8146.506, 8129.065,
7918.475, 7841.313, 7785.636, 7785.401, 7767.542,
7720.399, 7643.515, 7456.225, 7312.293, 7244.027,
6878.939, 6826.684, 6802.72, 6780.891, 6723.731,
6723.08, 6721.732, 6657.563, 6644.625, 6626.675,
6617.9, 6596.207, 6563.141, 6560.831, 6510.815,
6486.209, 6483.403, 6480.049, 6408.631, 6388.088,
6310.966, 6294.009, 6279.5, 6277.577, 6265.86,
6074.172, 5970.669, 5963.616, 5958.922, 5946.198,
5939.931, 5833.803, 5822.978, 5788.534, 5786.72,
5784.67, 5775.643, 5768.662, 5763.021, 5715.195,
5703.998, 5588.151, 5576.749, 5543.308, 5497.408,
5408.377, 5401.849, 5346.023, 5325.056, 5319.236
The following equations (from the formula table) were used to plot the curves:

pdf: f(t) = (β/θ)·(t/θ)^(β−1)·exp(−(t/θ)^β)
failure (unreliability) function: F(t) = 1 − exp(−(t/θ)^β)
reliability function or survival function: R(t) = 1 − F(t) = exp(−(t/θ)^β)

Simulated data (continued):

4805.425, 4747.139, 4730.129, 4728.441, 4668.974,
4621.238, 4568.386, 4536.978, 4522.842, 4466.214,
4435.34, 4405.153, 4401.109, 4355.005, 4334.654,
4266.081, 4140.754, 4132.864, 4104.71, 4082.011,
4081.696, 4049.025, 3848.135, 3819.657, 3804.84,
3388.692, 3381.724, 3357.32, 3296.385, 3282.428,
3254.453, 3201.586, 3149.031, 2982.042, 2925.943,
2865.366, 2860.932, 2819.871, 2664.729, 2565.902,
2460.917, 2164.833, 1773.561, 1507.623, 1275.233,
1076.443, 3621.386, 3619.341, 3598.867, 3410.4,
6241.483, 6239.925, 6193.713, 6192.278, 6112.539,
6101.168, 6077.79, 5237.979, 5231.374, 5199.642,
5033.145, 5000.277, 4953.404, 4950.304, 3765.599,
3703.961, 3702.768, 7122.491, 7094.337, 7043.888,
6923.381, 6910.946, 6909.115, 6879.079

Plots obtained from Minitab (survival plot omitted here):

At 4500 cycles the survival is nearly 75%, which shows the failure is nearly 25%
(validating the answer from the tool made).
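The same validation can be done by simulation; a minimal sketch, assuming the Weibull(3.5, 6500) model, that plays the role of the Minitab sample:

```python
import numpy as np

rng = np.random.default_rng(42)
beta, theta = 3.5, 6500

# Draw a large Weibull(3.5, 6500) sample; numpy's weibull() is the unit-scale form
sample = theta * rng.weibull(beta, size=100_000)

surv_4500 = np.mean(sample > 4500)
print(f"Simulated survival at 4500 cycles: {surv_4500:.1%}")  # ~75.9%
```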


[Plot: probability density function P(t) vs. cycles (0-12000). The probability density
function (pdf) is a mathematical function that describes the distribution. Equations used
are from the formula table above.]

[Plot: failure function F(t) vs. cycles (0-12000). The failure function is a measure of
unreliability.]

[Plot: reliability function R(t) vs. cycles (0-12000). The reliability plot for the Weibull
distribution returns the reliability of the system with time/cycles.]

Plots made in EXCEL

Failure is nearly 24.1 % (validating the answer from the tool made)


Task 2 (continued): With the tool validated, we now fit the HPT nozzle life with shape
factors 2 and 5 (scale factor 6500), with 95% and 99% CI, and state our observations and
recommendations.

At β = 2 and scale factor θ = 6500 we get:

At 95% CI: [tool screenshot]

At 99% CI: [tool screenshot]


At β = 5 and scale factor θ = 6500 we get:

At 95% CI: [tool screenshot]

At 99% CI: [tool screenshot]


We observe that, at 4500 cycles, the percentage of failures increased as the shape factor
decreased (this is something we should think about!). This trend is also observed in the
graphs below.
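A quick check of this trend, using the same binomial logic as the tool (a sketch; the exact screenshot values are not reproduced here):

```python
from scipy.stats import weibull_min, binom

fleet, t, theta = 150, 4500, 6500
for beta in (2, 3.5, 5):
    p = weibull_min.cdf(t, beta, scale=theta)
    print(f"beta={beta}: F({t})={p:.1%}, expected={fleet*p:.0f}, "
          f"95%={binom.ppf(0.95, fleet, p):.0f}, 99%={binom.ppf(0.99, fleet, p):.0f}")
# Below t = theta, a smaller beta gives a larger failure fraction.
```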


A very interesting point:

The nozzles are said to follow a Weibull-distributed life, where the shape factor physically
signifies its proportionality with the failure rate, so we might expect the probability of
failure to increase with the shape factor, but it does not! It does not until the cycle count
reaches a particular value equal to the scale factor of the distribution: all Weibull CDFs
with the same scale factor cross at t = θ, where F(θ) = 1 − e^(−1) ≈ 63.2% regardless of β.
At 6500 cycles we observe this from the tool created.

At scale factor 6500 and shape factor 3.5 we observe that at 6500 cycles the total failures
are 104 at 95% CI.

If the shape factor really were simply proportional to the failure rate, then on changing
it to 1000 we would expect the failure fraction at 6500 cycles to be essentially 100%, but
we observe otherwise: [tool screenshot]


So this is a limitation of the tool, and the shape-factor intuition only starts making
physical sense after the number of cycles is greater than the scale factor of the distribution.

Beyond the scale factor, the observations made above abide by the physics: as the
failure-rate-proportional value (shape factor) increases, the number of failures increases.


Conclusions:
At β = 3.5, θ = 6500, at 95% CI we observe that 45 nozzles out of 150 failed, and with
99% CI 49 nozzles failed; but our inventory has 40 and the inventory size is fixed, so we
can go ahead doing the inspection somewhere below roughly 4150 cycles (where the
99% CI total failures reach 40; see the sketch after this list).
Cost parameters are to be considered while coming up with the optimum number of
cycles.
The tool built on the Weibull distribution starts making physical sense only
after the number of cycles is greater than the scale factor (see the explanation above).
Refer to Appendix A for more info on the hazard plot.
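The ~4150-cycle recommendation can be reproduced with a small search; a sketch under the same binomial assumption as before (failures ~ Binomial(150, F(t))):

```python
import numpy as np
from scipy.stats import weibull_min, binom

beta, theta, fleet, stock = 3.5, 6500, 150, 40

# Latest inspection cycle at which the 99th-percentile failure count still fits stock
t = np.arange(3000, 5000)
need_99 = binom.ppf(0.99, fleet, weibull_min.cdf(t, beta, scale=theta))
latest = t[need_99 <= stock].max()
print(f"Latest inspection cycle covered by {stock} nozzles at 99%: {latest}")
# prints a value near the ~4150 cycles quoted above
```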


APPENDIX A
Discussion on Hazard plot

The hazard function is the instantaneous rate of failure at a given time. Characteristics of a
hazard function are frequently associated with certain products and applications. Different
hazard functions are modeled with different distribution models. You can also model hazard
functions non-parametrically.
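For the Weibull model used throughout this report, the hazard function has the standard closed form (with β the shape and θ the scale parameter):

h(t) = f(t) / R(t) = (β/θ)·(t/θ)^(β−1)

so β < 1 gives a decreasing hazard, β = 1 a constant hazard, and β > 1 an increasing hazard, matching the bathtub-curve behavior described for the shape parameter earlier.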
Increasing hazard function: indicates that items are more likely to fail with time. For
example, many mechanical items that are prone to stress or fatigue have an increased risk
of failure over the lifetime of the product. Engineers might use a test to simulate wear-out
stress; for example, engineers could simulate extended usage of a light bulb over time and
then record the time until a failure occurs.


Problem 3 (Stack-Up Analyzer)

Purpose: Create a stack-up analyzer given a set of measured dimensions.
The minimum capability is to analyze normal and uniform input distributions with a given Cp
and provide a probability of exceedance for different tolerance dimensions.
Three example problems are provided to be analyzed; also include a simulation to confirm.
A discussion about the capability, and limitations, of your tool is required.

INTRODUCTION
Typically, any exposition on tolerancing will include the two cornerstones, arithmetic and
statistical tolerancing. We will make no exception, since these two methods provide
conservative and optimistic benchmarks, respectively. Under arithmetic tolerancing it is
assumed that the detail part dimensions can have any value within the tolerance range and
the arithmetically stacked tolerances describe the range of all possible variations for the
assembly criterion of interest. In the basic statistical tolerancing scheme it is assumed that
detail part dimensions vary randomly according to a normal distribution, centered at the
midpoint of the tolerance range and with its ±3σ spread covering the tolerance interval. For
given part dimension tolerances this kind of statistical analysis typically leads to much
tighter assembly tolerances, or for given assembly tolerance it requires considerably less
stringent tolerances for detail part dimensions, resulting in significant savings in cost or even
making the difference between feasibility and infeasibility of a proposed design.
The methods covered are: worst case or arithmetic tolerancing, simple statistical tolerancing
or the RSS method, RSS methods with inflation factors which account for nonnormal
distributions, tolerancing with mean shifts, where the latter are stacked arithmetically or
statistically in different ways, depending on how one views the tradeoff between part to part
variation and mean shifts.


Worst case

T = T1 + T2 + T3 + T4 + …

RSS

T = √(T1² + T2² + T3² + T4²)

RSS WITH DISTR/INFLATION

T = √(T1²c1² + T2²c2² + T3²c3² + T4²c4²) ****

RSS WITH INFLATION/MEAN SHIFT

T = √(T1²c1²(1 − m1)² + T2²c2²(1 − m2)² + T3²c3²(1 − m3)² + T4²c4²(1 − m4)²) **

**** c = inflation factor for the part's distribution, used in RSS WITH DISTR/Inflation

** m = mean-shift factor, derived from Cp (given), the Potential Process Capability
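A minimal Python sketch of these four stacking rules, applied to the Example 1 rows tabulated in the next section. The uniform-distribution inflation factor c ≈ 1.72 and the use of (Cp − 1)² as the mean-shift multiplier for Cp > 1 are inferred from the spreadsheet's numbers, so treat both as assumptions:

```python
import math

# (name, Cp, dist, tol) -- rows transcribed from the Example 1 sheet below
parts = [("Bearing", 1.00, "uniform", 0.0025), ("Housing", 1.00, "uniform", 0.0025),
         ("Casing",  1.00, "uniform", 0.0025), ("Gasket",  1.33, "normal",  0.0050),
         ("Casing",  1.00, "normal",  0.0025), ("Housing", 1.00, "normal",  0.0025),
         ("Bearing", 1.33, "uniform", 0.0025), ("Gear",    1.33, "uniform", 0.0025)]

C = {"uniform": 1.72, "normal": 1.0}   # inflation factors inferred from the sheet

worst = sum(t for *_, t in parts)
rss = math.sqrt(sum(t**2 for *_, t in parts))
rss_inf = math.sqrt(sum((C[d] * t)**2 for _, cp, d, t in parts))
# Mean-shift multiplier (Cp-1)^2 for Cp > 1 is inferred from the sheet's numbers
rss_shift = math.sqrt(sum((C[d] * t)**2 * ((cp - 1)**2 if cp > 1 else 1.0)
                          for _, cp, d, t in parts))
print(worst, rss, rss_inf, rss_shift)   # ~0.0225, 0.0083, 0.0114, 0.0086
```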


Example 1: snapshot of the tool created in EXCEL, reconstructed as a table below (the
positive/negative side assignment of each nominal is not cleanly recoverable from this
extraction; the subtotals are as shown in the sheet).

DESCRIPTION | Cp | Distribution | NOMINAL | TOLERANCE +/- | TOLERANCE SQUARED | SQUARED with distr | SQUARED with distr*cp
Bearing | 1 | uniform | 0.5093 | 0.0025 | 0.00000625 | 0.00001849 | 0.00001849
Housing | 1 | uniform | 0.5125 | 0.0025 | 0.00000625 | 0.00001849 | 0.00001849
Casing | 1 | uniform | 0.9525 | 0.0025 | 0.00000625 | 0.00001849 | 0.00001849
Gasket | 1.33 | normal | 0.1 | 0.005 | 0.000025 | 0.000025 | 2.7225E-06
Casing | 1 | normal | 0.8725 | 0.0025 | 0.00000625 | 0.00000625 | 0.00000625
Housing | 1 | normal | 0.5125 | 0.0025 | 0.00000625 | 0.00000625 | 0.00000625
Bearing | 1.33 | uniform | 0.5093 | 0.0025 | 0.00000625 | 0.00001849 | 2.01356E-06
Gear | 1.33 | uniform | 1.8975 | 0.0025 | 0.00000625 | 0.00001849 | 2.01356E-06
SUB TOTALS | | | 2.9161 | 0.0225 | RSS 0.008 | RSS WITH INFLATION 0.01140 | RSS with inf and mean shift 0.00864

FROM/TO dimension: 2.95; NOMINAL ANSWER: 2.95 - 2.9161 = 0.0339

SOLUTION | Tol | MIN | MAX
Worst case | 0.0225 | 0.0114 | 0.0564
Tol statistical | 0.008 | 0.026 | 0.0422
Tol statistical with Dist | 0.01140 | 0.02250044 | 0.04530
Tol statistical with Dist and Cp | 0.00864 | 0.02525595 | 0.04254


VERIFIED FROM TEMPLATE PROVIDED: the template gives the same solution values as the tool.

SOLUTION | Tol | MIN | MAX
Worst case | 0.0225 | 0.0114 | 0.0564
Tol statistical | 0.008 | 0.026 | 0.0422
Tol statistical with Dist | 0.01140 | 0.02250044 | 0.04530
Tol statistical with Dist and Cp | 0.00864 | 0.02525595 | 0.04254

Example 2: snapshot of the tool created in EXCEL. The extraction scrambled the component
table, so the rows are listed here as far as they are cleanly recoverable (Cp, distribution,
tolerance +/-). Four case/frame runout and looseness rows carry tolerances 0.004 (1, uniform),
0.0025 (1, uniform), 0.006 (1.33, Normal) and 0.002 (1, Normal); the exact description-to-row
mapping for these four is not recoverable. The remaining rows are:

DESCRIPTION | Cp | Distribution | TOLERANCE +/-
Frame looseness | 1 | Normal | 0.0005
Bearing housing | 1 | Normal | 0.001
BH looseness | 1 | uniform | 0.0013
BH Radial | 1 | uniform | 0.001
Shaft | 1 | uniform | 0.006
Disk to CL | 1 | uniform | 0.004
Blade to tip | 1 | uniform | 0.005
blade looseness | 1 | uniform | 0.0037
Face Runout | 1 | uniform | 0.0066

Nominal chain values include 22.75, 22.626, 11.41 and 11.216; NOMINAL ANSWER: 0.124.

SOLUTION | Tol | MIN | MAX
Worst case | 0.0436 | 0.0804 | 0.1676
Tol statistical | 0.014 | 0.110 | 0.1382
Tol statistical with Dist | 0.02259 | 0.10141152 | 0.14659
Tol statistical with Dist and Cp | 0.02185 | 0.10215352 | 0.14585
(MRSS: 0.021)


(TOOL IN EXCEL)


Example 3:
The diameters of rotor shafts have a mean of 0.249 in and a limit of +/-0.009. The inner
diameters of bearings have a mean of 0.255 in and a limit of +/-0.006.
A - Given the two-part stackup, what is the clearance?
B - Given the attached data, what is the actual clearance tolerance?

BEARING: 0.255 +/- 0.006 in
Maximum material limit (smallest bore): 0.255 - 0.006 = 0.249 in
Least material limit (largest bore): 0.255 + 0.006 = 0.261 in

SHAFT: 0.249 +/- 0.009 in
Maximum material limit (largest diameter): 0.249 + 0.009 = 0.258 in
Least material limit (smallest diameter): 0.249 - 0.009 = 0.240 in

23 | P a g e

Clearances (bearing ID minus shaft OD):

                    SHAFT MMC (0.258) | SHAFT LMC (0.240)
BEARING MMC (0.249):     -0.009       |      0.009
BEARING LMC (0.261):      0.003       |      0.021

Worst case calculations: the nominal clearance is 0.255 - 0.249 = 0.006 with a worst-case
tolerance of +/-0.015, so the clearance ranges from -0.009 (interference) to 0.021.

Snapshot of the tool for Case (A): [screenshot omitted]

From a design engineer's perspective (Case A), it is expected that the process of
manufacturing the shaft and the bearing is uniform.

CASE B
From a manufacturing point of view (Case B) we observe that the shaft dimensions follow a
normal distribution.
From the data given in the Excel sheet we observe that the maximum diameter of the shaft
reported is 0.279965 (MMC) and the minimum diameter reported is 0.240759 (LMC).

From the histogram we observe that the standard deviation (σ) is 0.005002.

From the I-chart we observe LCL = 0.24499 and UCL = 0.27501, used to calculate the Cp of the process.


Cp = (UCL - LCL) / 6σ = (0.27501 - 0.24499) / (6 × 0.005002) ≈ 1.00

So the Cp for the process is 1.

Now we build a spreadsheet similar to Examples 1 and 2 for the present process.


DESCRIPTION | Cp | Distribution | NOMINAL | TOLERANCE +/- | TOLERANCE SQUARED | SQUARED with distr | SQUARED with distr*cp
Bearing | 1.33 | uniform | 0.255 | 0.006 | 0.000036 | 0.000106502 | 1.15981E-05
shaft | 1 | normal | 0.249 | 0.005002 | 2.502E-05 | 2.502E-05 | 2.502E-05
SUB TOTALS | | | 0.006 (nominal answer) | 0.011002 | RSS 0.008 | RSS WITH INFLATION 0.01147 | RSS with inf and mean shift 0.00605

SOLUTION | Tol | MIN | MAX
Worst case | 0.011002 | -0.005002 | 0.017002
Tol statistical | 0.008 | -0.002 | 0.0138
Tol statistical with Dist | 0.01147 | -0.00546832 | 0.01747
Tol statistical with Dist and Cp | 0.00605 | -0.00005129 | 0.01205
(MRSS: 0.012)
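The assignment also asks for a simulation to confirm. A sketch, under the stated assumptions (bearing ID uniform over 0.249-0.261, shaft OD normal with mean 0.249 and the measured σ = 0.005002; the actual measured shaft mean may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

bearing = rng.uniform(0.249, 0.261, n)     # bearing ID, uniform over its limits
shaft = rng.normal(0.249, 0.005002, n)     # shaft OD, normal per the histogram
clearance = bearing - shaft

lo, hi = np.percentile(clearance, [0.135, 99.865])  # 99.73% coverage band
print(f"clearance 99.73% band: [{lo:.5f}, {hi:.5f}]")
print(f"P(interference) = {np.mean(clearance < 0):.2%}")
```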

POINTS OF DISCUSSION
What would happen if the dimensions were correlated? Demonstrate mathematically or
through simulation.
If the dimensions are correlated, the variances no longer simply add:
Var(X1 + X2) = Var(X1) + Var(X2) + 2·Cov(X1, X2), so positive correlation inflates the
statistical (RSS) stack toward the worst case, while negative correlation shrinks it.
A simulation sketch is given below.
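A hedged simulation sketch (two normally distributed dimensions with equal tolerances; the correlation value 0.8 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 0.0025 / 3, 1_000_000     # a +/-0.0025 tolerance treated as +/-3 sigma

for rho in (0.0, 0.8, -0.8):
    cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
    x = rng.multivariate_normal([0, 0], cov, n)
    stack = x.sum(axis=1)
    print(f"rho={rho:+.1f}: stack std = {stack.std():.6f}")
# rho=0 gives sqrt(2)*sigma; rho=+0.8 is ~34% wider; rho=-0.8 is much tighter.
```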


What is a more appropriate way to handle one-sided distributions? For example, the
flatness is distributed lognormal. A drawing limit indicates 99.73%.
One-sided distribution or non-normal distribution
Although the normal distribution takes center stage in statistics, many processes follow
a non-normal distribution. This can be due to the data naturally following a specific type of
non-normal distribution (for example, bacteria growth naturally follows an exponential
distribution).
You have several options for handling your non-normal data. Several tests, including the
one-sample Z test, t test and ANOVA, assume normality. You may still be able to run these
tests if your sample size is large enough (usually over 20 items). You can also choose to
transform the data with a function, forcing it to fit a normal model. However, if you have a
very small sample, a sample that is skewed or one that naturally fits another distribution
type, you may want to run a non-parametric test. A non-parametric test is one that doesn't
assume the data fits a specific distribution type. Non-parametric tests include the Wilcoxon
signed rank test, the Mann-Whitney U test and the Kruskal-Wallis test.
Imagine that you are watching a race and that you are located close to the finish line. When
the first and fastest runners complete the race, the differences in times between them will
probably be quite small.
Now wait until the last runners arrive and consider their finishing times. For these slowest
runners, the differences in completion times will be extremely large. This is due to the fact
that for longer racing times a small difference in speed will have a significant impact on
completion times, whereas for the fastest runners, small differences in speed will have a
small (but decisive) impact on arrival times.
This phenomenon is called heteroscedasticity (non-constant variance). In this example, the
amount of variation depends on the average value (small variations for shorter completion
times, large variations for longer times).


This distribution of running times data will probably not follow the familiar bell-shaped curve
(a.k.a. the normal distribution). The resulting distribution will be asymmetrical with a longer
tail on the right side. This is because there's small variability on the left side with a short tail
for smaller running times, and larger variability for longer running times on the right side,
hence the longer tail.

Why does this matter?

Model bias and spurious interactions: if you are performing a regression or a design
of experiments (any statistical modelling), this asymmetrical behavior may lead to a
bias in the model. If a factor has a significant effect on the average speed, then because
the variability is much larger for a larger average running time, many factors will
seem to have a stronger effect when the mean is larger. This is not due, however, to a
true factor effect but rather to an increased amount of variability that affects all
factor effect estimates when the mean gets larger. This will probably generate
spurious interactions due to non-constant variation, resulting in a very complex
model with many (spurious and unrealistic) interactions.


If you are performing a standard capability analysis, this analysis is based on the
normality assumption. A substantial departure from normality will bias your
capability estimates.

The Box-Cox Transformation

One solution to this is to transform your data into normality using a Box-Cox transformation.
Minitab will select the best mathematical function for this data transformation. The objective
is to obtain a normal distribution of the transformed data (after transformation) and a
constant variance.
Consider the asymmetrical distribution in the diagram (omitted here); it illustrates how,
thanks to a Box-Cox transformation, such a skewed distribution can be mapped to an
approximately normal one.
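A minimal sketch of the same idea using SciPy (not the report's Minitab workflow; the skewed sample here is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # hypothetical skewed sample

# Box-Cox picks the lambda that makes the transformed data most normal
transformed, lam = stats.boxcox(data)
print(f"selected lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(data):.2f}, after = {stats.skew(transformed):.2f}")
```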


DATA:
- 16 blades in a set
- 24 blades in a lot
- Nominal dimension: 60.87 mils +/- 0.47 mils
- CMM GRR: 10% of tolerance
- Measurement data attached here:
- Blades are kept in a box, randomly sorted into sets.
- Blades are not life-tracked parts.

Aero will want to know the worst-case blade per set, both high and low, and also the
average twist per set of 16 blades.
Quality will want to understand defective rates and the expected number of defective
blades per lot of 24 blades. What is the probability that there will be more than 2 in a
lot?
What is the probability that a set of 16 will have at least one defective blade?
How can the sample measurement data help provide information about the process?
Given: tracked sample measurement data.
Task 1: Try to fit a distribution to the data.


From the plots we can clearly observe that the data fits well to the largest extreme value
distribution.

ML estimates of distribution parameters:
Distribution: Largest Extreme Value, Location = 60.76378, Scale = 0.16592
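The same maximum-likelihood fit can be reproduced outside Minitab; a sketch (the largest extreme value distribution is the Gumbel distribution for maxima, `gumbel_r` in SciPy; `twist` stands in for the attached measurement data):

```python
from scipy import stats

# twist = np.loadtxt("blade_twist.csv")  # hypothetical file with the 349 measurements
twist = stats.gumbel_r.rvs(loc=60.76378, scale=0.16592, size=349,
                           random_state=3)   # stand-in sample for illustration

loc, scale = stats.gumbel_r.fit(twist)
print(f"Location = {loc:.5f}, Scale = {scale:.5f}")   # ~60.76, ~0.17
```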


When the distribution is plotted, we find that all the defective blades fall in the area
outside the ±3σ (99.73%) band of the total population.

Of the total of 349 values, 16 are defective (from the graph we can observe that values
below 60.28 and above 61.436 are defective).


Task 2: Aero will want to know the worst-case blade per set, both high and low, and also
the average twist per set of 16 blades. So we generate sets of 16 randomly. [The full
10 x 16 table of simulated values is scrambled in this extraction and is summarized here
by its per-set statistics.]

Set | Mean | Std dev
Set_1 | 60.818 | 0.108
Set_2 | 60.7976 | 0.2686
Set_3 | 60.85324 | 0.20202
Set_4 | 60.75107 | 0.14306
Set_5 | 60.7031 | 0.07416
Set_6 | 60.69052 | 0.1108
Set_7 | 60.76295 | 0.14563
Set_8 | 60.738 | 0.0933
Set_9 | 60.78149 | 0.126613
Set_10 | 60.63585 | 0.014198


For simplicity we initially solve for 10 sets (16 blades in a set).

For Set_1: values below 60.61 and above 61.53 are the worst cases, and the standard
deviation, 0.108, is taken as the average twist of the set (similarly it can be repeated
for all the sets). A sketch of the same calculation in code follows.
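A sketch of how such sets can be generated and summarized, using the fitted largest-extreme-value parameters (the random draws will not match the report's table):

```python
from scipy import stats

loc, scale = 60.76378, 0.16592

# 10 random sets of 16 blades each, drawn from the fitted distribution
sets = stats.gumbel_r.rvs(loc=loc, scale=scale, size=(10, 16), random_state=5)
for i, s in enumerate(sets, 1):
    print(f"Set_{i}: min={s.min():.3f}, max={s.max():.3f}, "
          f"mean={s.mean():.4f}, std={s.std(ddof=1):.4f}")
```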

Task 3: Quality will want to understand defective rates and the expected number of
defective blades per lot of 24 blades. What is the probability that there will be more
than 2 in a lot?

Method used: the hypergeometric distribution, with population N = 349, K = 16 defective,
and a draw of n = 22.

Probability of more than 2 defects in a lot:
P(X > 2) = 1 - P(X=0) - P(X=1) - P(X=2)
         = 1 - 0.3445 - 0.388 - 0.19557
         = 0.0711

Task 4: What is the probability that a set of 16 will have at least one defective blade?

Here N = 349, n = 16, K = 16.

Probability that at least one is defective = 1 - P(X=0)
= 1 - 0.463398
= 0.5367
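Both probabilities can be checked with SciPy's hypergeometric distribution (note the report draws n = 22 for the 24-blade lot; that choice is kept as-is here):

```python
from scipy.stats import hypergeom

N, K = 349, 16          # population size and number of defective blades

# Task 3: lot draw (n = 22 per the report)
lot = hypergeom(N, K, 22)
print(f"P(more than 2 defective in a lot) = {1 - lot.cdf(2):.4f}")       # ~0.071

# Task 4: set of 16
blade_set = hypergeom(N, K, 16)
print(f"P(at least one defective in a set) = {1 - blade_set.pmf(0):.4f}")  # ~0.537
```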

Task 5 : How can the sample measurement data help provide information about the
process?

From a design perspective when an engineer comes up with tolerance Data ,it is
expected that the process is uniform ,But this can never be achieved as no machining
operation or tool can have very high repeatability, precision and accuracy.
There is always a chance where we can be close enough to the dimension_/+
tolerances mentioned, The word Close enough comes with addition of cost .
The sample measurement data is nothing but the representation of how repeatable,
accurate and precise the data is.
As per the requirement if the sample data shows very low deviation from the mean
value we can say that Variance of the production process is very low
If the sample data is in-between the control limits we can say that the process is well
under control
If we observe something like this in the process capability chart (figure omitted here):

Except for the circled part, the rest of the distribution is more or less under control. The
deviation in the circled part may be due to many reasons, like a change of operator, a
change of tool, a change of machine tool, a change in machining method and parameters
(tool feed, tool speed), or any failure that might have led to this defect.


This is what we have already discussed: there is a clear indication in the control chart
that 16 observations are out of the spec limits mentioned, as we can observe from the three
zones in the chart. The discrepancies are somewhat periodic, so they may be due purely
to wear of the tool or to a shift change of the operator.