
Australian Journal of Basic and Applied Sciences, 4(2): 286-301, 2010

ISSN 1991-8178

Goodness of fit Tests for Generalized Frechet Distribution


1Abd-Elfattah, A.M., 2Hala A. Fergany, 3Omima A.M.

1Institute of Statistical Studies & Research, Cairo University, Egypt
2Department of Mathematics, Faculty of Science, Tanta University, Egypt
3Researcher, Faculty of Science, Tanta University, Egypt

Abstract: An important problem in statistics is to obtain information about the form of the population
from which a sample is drawn. A goodness of fit test is employed to determine how well the observed
sample data "fit" some proposed model. The standard goodness of fit statistics are inappropriate when
the parameters of the hypothesized distribution are estimated from the data used for the test. In this
paper, we obtain tables of critical values of the modified Kolmogorov-Smirnov (KS) test, Cramer-von
Mises (CVM) test, Anderson-Darling (AD) test and Watson test for the generalized Frechet distribution with
unknown parameters. The sampling distributions of these test statistics are investigated. This article
uses Monte Carlo and Pearson system techniques to create tables of critical values for such situations.
Furthermore, we present a power comparison between the KS, CVM, AD and Watson tests.
The software package MathCAD (2001) was used for all programs.

Key words: Anderson-Darling test, Cramer-von Mises test, Kolmogorov-Smirnov test, Watson
statistic, Critical values, Generalized Frechet distribution, Power function.

INTRODUCTION

In many situations, our interest is focused on the nature of the distribution of one or more populations from
which a sample has been drawn. A goodness of fit test is employed to determine how well the observed
sample data "fit" some proposed model. These tests are designed for a null hypothesis which is an assertion
about the form of the parent population from which the sample is drawn. The main problem is that of testing
a hypothesis about the distribution function F(x) of the form H₀: F(x) = F₀(x), where F₀(x) = P(X ≤ x) is a
specified family of cumulative distribution functions. When F₀(x) is completely specified (i.e. does not contain
unknown parameters) and the data are uncensored, the tests are all distribution free and percentage points for
the various test statistics are generally known.
The complete-sample procedures of goodness of fit tests are inappropriate for use with censored samples,
and the critical values obtained from published tables of the complete-sample test statistics are necessarily
conservative. Goodness of fit tests for censored data are likewise inappropriate when the parameters of the
hypothesized distribution are estimated from the data.

The KS, CVM and AD test statistics are all distribution free, and percentage points for these statistics are
generally known when F₀(x) is completely specified and the data are uncensored. This is no longer the case
when the data are censored or F₀(x) involves unknown parameters. If the hypothesized distribution contains
unknown parameters that must be estimated from the sample data, then the standard tables for these
statistics are no longer valid and the resulting tests would be extremely conservative. In this case, the statistics
can be modified by inserting estimates of the parameters in F₀(x). It is not possible to obtain an analytic expression
for the distribution of the modified test statistics; hence Monte Carlo methods are often used to obtain critical values for the
modified goodness of fit tests with estimated parameters.
Several works in the literature address the problem of finding critical values for goodness of
fit tests when unknown parameters must be estimated from the sample data. Lilliefors (1967, 1969) used the
Monte Carlo method to construct tables of critical values for the Kolmogorov-Smirnov test when the mean and
variance of the normal distribution are estimated and when the mean of an exponential distribution is estimated.
Stephens (1970, 1974), Durbin (1975), and Green and Hegazy (1976) considered several tests of goodness of fit for
normal and exponential distributions with unknown parameters. Goodness of fit tests for the two-parameter
Weibull distribution have been discussed by Smith and Bain (1976), Stephens (1977), Littell et al. (1979) and

Corresponding Author: Abd-Elfattah, A.M., Institute of Statistical Studies & Research, Cairo University, Egypt


Wozniak and Li (1990). Many authors have extended this idea to include tests for the gamma, logistic, Laplace,
Rayleigh, lognormal, two-parameter exponential, inverse Gaussian, Burr type XII, Burr type III, generalized
gamma, Lomax and generalized exponential distributions; see, for example, Ashour and Abd-Elfattah (1994),
Abd-Elfattah and Dahlan (2003), Abd-Elfattah and Alharby (2004) and Hassan (2005). Recently, Shawky and
Bakoban (2009) studied goodness of fit tests for the generalized gamma distribution.
In recent years, several standard lifetime distributions have been generalized via exponentiation. Examples
of such exponentiated distributions are the exponentiated Weibull family, the exponentiated exponential, the
exponentiated Rayleigh, the exponentiated (generalized) Frechet and the exponentiated Pareto families of
distributions.
Among the authors who have considered exponentiated distributions are Mudholkar and Hutson (1996),
Gupta and Kundu (2001), Surles and Padgett (2001), Nadarajah and Kotz (2003) and Kundu and Gupta (2007, 2008).
A common feature of these families of exponentiated distributions is that the distribution function is
obtained from the distribution function G(·) of a corresponding non-generalized distribution by raising G(·),
or its survival counterpart 1 − G(·), to a power α > 0, where α denotes the generalization parameter. The
generalized Frechet distribution is obtained by such a generalization of the Frechet distribution and is one of
the most important families of this kind. A new three-parameter distribution, named the exponentiated Frechet
(EF) distribution, was introduced by Nadarajah and Kotz (2003) as a generalization of the standard Frechet
distribution. The Frechet distribution has over fifty applications, ranging from accelerated life testing through
earthquakes, floods, horse racing, rainfall, queues in supermarkets, sea currents and wind speeds to track race
records; see Kotz and Nadarajah (2000).
The distribution function is

F(x; α, λ, σ) = 1 − [ 1 − exp(−(σ/x)^λ) ]^α ,   x > 0,        (1.1)

where α > 0 and λ > 0 are the shape parameters and σ > 0 is the scale parameter.

The probability density function is

f(x; α, λ, σ) = α λ σ^λ x^{−(λ+1)} exp(−(σ/x)^λ) [ 1 − exp(−(σ/x)^λ) ]^{α−1} ,   x > 0.        (1.2)
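As a computational illustration only (the paper's own calculations were carried out in MathCAD), the following minimal sketch implements the distribution function (1.1) and the density (1.2) in Python with NumPy; the function and argument names are ours, not the paper's.

```python
import numpy as np

def ef_cdf(x, alpha, lam, sigma):
    """Distribution function of the exponentiated (generalized) Frechet, eq. (1.1)."""
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 - np.exp(-(sigma / x) ** lam)) ** alpha

def ef_pdf(x, alpha, lam, sigma):
    """Density of the exponentiated Frechet distribution, eq. (1.2)."""
    x = np.asarray(x, dtype=float)
    t = (sigma / x) ** lam                         # the Frechet exponent term (sigma/x)^lambda
    return (alpha * lam * sigma ** lam * x ** (-(lam + 1))
            * np.exp(-t) * (1.0 - np.exp(-t)) ** (alpha - 1))
```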

We organize this paper as follows. Section 2 deals with the estimation of the unknown parameters. Section
3 describes the modified test statistics. Section 4 discusses the problem of obtaining critical values for the
test statistics when the underlying distribution is generalized Frechet, using two different methods. Section 5 gives
power comparisons among the KS, CVM, AD, Watson and L_n test statistics.

2. Estimation of the Unknown Parameters:


This section is concerned with the maximum likelihood estimation of the unknown parameters α, λ and σ
of the EF(α, λ, σ) distribution.

Let x₁, x₂, …, x_n be a random sample from a generalized Frechet distribution with unknown
parameters α, λ and σ. The maximum likelihood estimators of the three parameters are obtained by solving
the following likelihood equations simultaneously, where ln L denotes the log-likelihood; see Abd-Elfattah and Omima (2009):

∂ ln L/∂α = n/α + Σ_{i=1}^{n} ln[ 1 − exp(−(σ/x_i)^λ) ] = 0        (2.1)


∂ ln L/∂λ = n/λ + n ln σ − Σ_{i=1}^{n} ln x_i − Σ_{i=1}^{n} (σ/x_i)^λ ln(σ/x_i) + (α − 1) Σ_{i=1}^{n} (σ/x_i)^λ ln(σ/x_i) exp(−(σ/x_i)^λ) / [ 1 − exp(−(σ/x_i)^λ) ] = 0        (2.2)

and

∂ ln L/∂σ = nλ/σ − (λ/σ) Σ_{i=1}^{n} (σ/x_i)^λ + (α − 1)(λ/σ) Σ_{i=1}^{n} (σ/x_i)^λ exp(−(σ/x_i)^λ) / [ 1 − exp(−(σ/x_i)^λ) ] = 0        (2.3)

We apply the Newton-Raphson iterative procedure to find the solution of this system of nonlinear
equations.
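The paper solves the likelihood equations (2.1)-(2.3) by Newton-Raphson in MathCAD. As a hedged alternative sketch of the same estimation step, the code below simply maximizes the EF log-likelihood numerically with SciPy; the derivative-free optimizer and the starting values are our own assumptions and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def ef_neg_loglik(params, x):
    """Negative log-likelihood of EF(alpha, lambda, sigma) for the sample x."""
    alpha, lam, sigma = params
    if alpha <= 0 or lam <= 0 or sigma <= 0:       # keep the search inside the parameter space
        return np.inf
    t = (sigma / x) ** lam
    loglik = (len(x) * (np.log(alpha) + np.log(lam) + lam * np.log(sigma))
              - (lam + 1) * np.sum(np.log(x))
              - np.sum(t)
              + (alpha - 1) * np.sum(np.log1p(-np.exp(-t))))
    return -loglik

def ef_mle(x, start=(1.0, 1.0, 1.0)):
    """Return (alpha_hat, lambda_hat, sigma_hat) by direct numerical maximization."""
    res = minimize(ef_neg_loglik, start, args=(np.asarray(x, dtype=float),),
                   method="Nelder-Mead")
    return res.x
```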

3. Goodness of Fit Tests:


Goodness of fit (GOF) tests measure the degree of agreement between the distribution of an observed data
sample and a theoretical statistical distribution. A goodness of fit test based on the empirical distribution
function (EDF), in which the parameters are estimated, is called a modified goodness of fit test. In this section,
the Kolmogorov-Smirnov statistic D_n, the Cramer-von Mises statistic W_n², the Anderson-Darling statistic A_n²,
the Watson statistic U_n², and the L_n statistic introduced by Liao and Shimokawa (1999) are
described. The AD statistic is a modification of the CVM statistic that gives more weight to observations in the tails
of the distribution, which is useful in detecting outliers (see Anderson and Darling (1954), Stephens (1977)).
The Watson statistic is another modification of the CVM statistic; it also measures the discrepancy between
the empirical distribution function and the hypothesized distribution function. The L_n statistic measures the
average of all weighted distances over the entire range of x and combines the characteristics of the KS,
CVM and AD statistics (see Liao and Shimokawa (1999)).

The KS test statistic D_n is:

D_n = max(D_n⁺, D_n⁻),   where D_n⁺ = max_{1≤i≤n} [ i/n − Z_(i) ] and D_n⁻ = max_{1≤i≤n} [ Z_(i) − (i−1)/n ]        (3.1)

where Z_(i) = F₀(x_(i)) denotes the hypothesized cumulative distribution function of the EF(α, λ, σ) distribution
evaluated at the i-th order statistic x_(i), with the maximum likelihood estimates of α, λ and σ inserted for the
unknown parameters.


The CVM statistic W_n² is:

W_n² = Σ_{i=1}^{n} [ Z_(i) − (2i − 1)/(2n) ]² + 1/(12n)        (3.2)

The AD statistic A_n² is:

A_n² = −n − (1/n) Σ_{i=1}^{n} (2i − 1) [ ln Z_(i) + ln(1 − Z_(n+1−i)) ]        (3.3)

The Watson statistic U_n² is:

U_n² = W_n² − n ( Z̄ − 1/2 )² ,   where Z̄ = (1/n) Σ_{i=1}^{n} Z_(i)        (3.4)

The Liao and Shimokawa L_n statistic is:

L_n = (1/√n) Σ_{i=1}^{n} max[ i/n − Z_(i) , Z_(i) − (i−1)/n ] / √( Z_(i) (1 − Z_(i)) )        (3.5)
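The five modified statistics can be computed directly from the fitted values Z_(i). The sketch below assumes the standard EDF forms (3.1)-(3.5) above and reuses the ef_cdf helper from the earlier sketch; it is an illustration, not the paper's MathCAD code.

```python
import numpy as np

def edf_statistics(x, cdf):
    """Compute the modified D_n, W_n^2, A_n^2, U_n^2 and L_n statistics (eqs. 3.1-3.5)."""
    z = np.sort(cdf(np.sort(np.asarray(x, dtype=float))))   # fitted CDF at the order statistics
    z = np.clip(z, 1e-12, 1.0 - 1e-12)                      # guard the logarithms and square roots
    n = len(z)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)
    d_minus = np.max(z - (i - 1) / n)
    D = max(d_plus, d_minus)                                            # Kolmogorov-Smirnov, eq. (3.1)
    W2 = np.sum((z - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)        # Cramer-von Mises, eq. (3.2)
    A2 = -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1])))  # Anderson-Darling, eq. (3.3)
    U2 = W2 - n * (np.mean(z) - 0.5) ** 2                               # Watson, eq. (3.4)
    L = np.sum(np.maximum(i / n - z, z - (i - 1) / n)
               / np.sqrt(z * (1 - z))) / np.sqrt(n)                     # Liao-Shimokawa, eq. (3.5)
    return D, W2, A2, U2, L
```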

4. Critical Value Calculations:


Critical values for the modified goodness of fit tests are generated using a Monte Carlo procedure; Lilliefors
(1967) first used this approach to find tables of critical values for the modified KS test.
Now suppose the null hypothesis is

H₀: the sample comes from the EF(α, λ, σ) distribution F₀(x) with unknown parameters α, λ and σ.
The aim is to obtain tables of critical values for all the test statistics using two different methods. The first
method uses Monte Carlo simulation; the second obtains the sampling distributions of the proposed test
statistics using the Pearson system technique, and the critical values are then obtained from the resulting
sampling distributions. Both methods are carried out in MathCAD (2001).

4.1 Method A:
Monte Carlo simulation is used to create critical values for the proposed test statistics for a generalized
Frechet distribution with unknown parameters. The following steps are used in calculating the critical values of the
test statistics.

Step (1): A random sample from the exponentiated Frechet distribution is generated. First, a sample
U_(1), U_(2), …, U_(n) of n order statistics from the uniform(0,1) distribution is generated; the
i-th order statistic from EF(α, λ, σ) with α = 1, λ = 1 and σ = 1 is then obtained as follows


x_(i) = σ { −ln[ 1 − (1 − U_(i))^{1/α} ] }^{−1/λ} ,   i = 1, 2, …, n        (4.1)

Step (2): This random sample is used to estimate the unknown parameters by the maximum likelihood method.
Step (3): The resulting maximum likelihood estimates of the unknown parameters are used to determine the
hypothesized cumulative distribution function of the generalized Frechet distribution.
Step (4): Sample sizes n = 10(5)50, 60, 70 and 80 are selected, and the modified KS, CVM, AD, Watson and L_n
test statistics are calculated for each value of n.
Step (5): This procedure is repeated 10,000 times, generating 10,000 independent values of each test statistic.
These 10,000 values are then ranked, and the values of the test statistics at eight significance levels,
γ = 0.01, 0.025, 0.05, 0.10, 0.15, 0.20, 0.25 and 0.90, are determined. These provide the critical values
for each test under each sample size used.
Table 1 lists the critical values of the statistics D_n, W_n², A_n², U_n² and L_n when α, λ and σ are
unknown, obtained by the Monte Carlo method.
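A compact sketch of Method A is given below, assuming the ef_cdf, ef_mle and edf_statistics helpers from the earlier sketches; the seed and the vectorized inversion of (1.1) used for sampling are our own choices, and the run time grows quickly with the number of replications.

```python
import numpy as np

def ef_sample(n, alpha=1.0, lam=1.0, sigma=1.0, rng=None):
    """Step (1): generate an EF sample by inverting the CDF, as in eq. (4.1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return sigma * (-np.log(1.0 - (1.0 - u) ** (1.0 / alpha))) ** (-1.0 / lam)

def critical_values(n, reps=10000,
                    levels=(0.01, 0.025, 0.05, 0.10, 0.15, 0.20, 0.25, 0.90)):
    """Steps (2)-(5): simulate the null distributions of the five statistics with estimated parameters."""
    rng = np.random.default_rng(2010)
    stats = np.empty((reps, 5))
    for r in range(reps):
        x = ef_sample(n, rng=rng)
        a, l, s = ef_mle(x)                                         # step (2): estimate the parameters
        stats[r] = edf_statistics(x, lambda t: ef_cdf(t, a, l, s))  # steps (3)-(4): fitted CDF and statistics
    # step (5): upper-tail percentage points at each significance level
    return {g: np.quantile(stats, 1.0 - g, axis=0) for g in levels}
```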

4.2 Method B: Critical Values Using the Pearson System:
Karl Pearson (1895) devised a system, or family, of distributions that includes a unique distribution
corresponding to every valid combination of mean, standard deviation, skewness and kurtosis. If sample
values of these four moments are computed from the data, it is easy to find the distribution in the Pearson
system that matches them. The Pearson system embeds seven basic types of distribution in a single parametric
framework; it includes common distributions such as the normal and t distributions, as well as simple
transformations of standard distributions such as a shifted and scaled beta distribution and the inverse gamma
distribution. The criterion for fixing the distribution family is

K = β₁ (β₂ + 3)² / [ 4 (4β₂ − 3β₁)(2β₂ − 3β₁ − 6) ]        (4.2)

where β₁ and β₂ are the measures of skewness and kurtosis, respectively. For different values of K there exist
different types of distributions. The following steps are used in calculating the critical values of the
test statistics using Pearson's technique:
Step (1): Repeat steps 1-3 of Method A; then the mean, variance, skewness, kurtosis and Pearson
coefficient are calculated for each test statistic and sample size.
Step (2): The resulting values of equation (4.2) yield the types of Pearson distribution.
Step (3): For each particular distribution, the constants and the parameters of the distribution are calculated. These
provide the critical values for the above test statistics at significance levels γ = 0.01, 0.025, 0.05, 0.10, 0.15,
0.20 and 0.25 for the different sample sizes.
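As a small illustration of the moment step behind Method B, the sketch below evaluates Pearson's criterion (4.2) from simulated values of a test statistic and applies the usual Pearson-system classification (K < 0 gives type I, 0 < K < 1 type IV, K > 1 type VI); the boundary handling and the SciPy moment estimators are our own simplifications.

```python
from scipy.stats import skew, kurtosis

def pearson_kappa(values):
    """Pearson's criterion K of eq. (4.2), computed from sample moments."""
    b1 = skew(values) ** 2                      # beta_1: squared skewness
    b2 = kurtosis(values, fisher=False)         # beta_2: ordinary (non-excess) kurtosis
    return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

def pearson_type(values):
    """Main-type classification as used in Table 2 (types I, IV and VI)."""
    k = pearson_kappa(values)
    if k < 0:
        return "I"
    return "IV" if k < 1 else "VI"
```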
As a result of the computer simulation, the following functions are obtained, each defining a
specific type of Pearson curve. In particular, the type I Pearson curve has the density function

f(x) = k₁ (1 + x/l₁)^{m₁} (1 − x/l₂)^{m₂} ,   −l₁ < x < l₂ ,        (4.3)

with m₁ > −1, where l₁, l₂, m₁ and m₂ are the parameters of the family of distributions and k₁ is a constant. The type
IV Pearson curve has the density

f(x) = k₂ [ 1 + (x/a)² ]^{−d} exp[ −φ arctan(x/a) ] ,

where k₂ is a constant and d, a and φ are the parameters of the distribution. The last type of Pearson curve that is
fitted to the test statistics is type VI, which has the density function

f(x) = k₃ (x − e₁)^{h₁} x^{−p} ,   x > e₁ ,

where k₃ is a constant and e₁, h₁ and p are the parameters of the distribution.

5. Power Study:
The power of a goodness of fit test is defined as the probability that the test statistic will lead to the rejection
of the null hypothesis H₀ when it is false. The power of a goodness of fit test at significance level γ is
denoted by 1 − β, where β is the probability of committing a type II error, that is, of failing to reject a false null
hypothesis. A power comparison was made among the KS, CVM, AD, Watson and L_n test statistics
for the generalized Frechet distribution with unknown shape parameters α, λ and scale parameter σ.
The power was determined by generating 10,000 random samples of size n = 10(5)30. The null hypothesis H₀
is that the complete sample comes from the EF(α, λ, σ) distribution; the alternative hypothesis H_a is that the
sample follows some other distribution. The following alternative distributions are considered:
1. The exponentiated exponential distribution.
2. The Cauchy distribution.
3. The Weibull distribution with three parameters.
4. The logistic distribution.
5. The Weibull distribution with two parameters.

For each test, the appropriate test statistic was calculated and compared with its respective critical value, and
the number of rejections of the null hypothesis was counted. The power results for the tests at significance levels
γ = 0.01, 0.025, 0.05, 0.10, 0.15 and 0.20 are presented in Tables 3 to 7.
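As an illustrative sketch of this power computation, the code below estimates the rejection rates of the five tests against a single alternative, reusing ef_cdf, ef_mle, edf_statistics and critical_values from the earlier sketches; the two-parameter Weibull shape and scale values are illustrative assumptions, since the parameter settings of the alternatives are not listed here.

```python
import numpy as np

def power_against(alt_sampler, n, crit, reps=10000):
    """Estimate power: the fraction of alternative samples whose statistics exceed the EF critical values."""
    rng = np.random.default_rng(286)
    rejections = np.zeros(5)
    for _ in range(reps):
        x = alt_sampler(n, rng)
        a, l, s = ef_mle(x)                                  # fit the null EF model to the alternative data
        t = edf_statistics(x, lambda v: ef_cdf(v, a, l, s))
        rejections += np.asarray(t) > crit                   # crit: the five critical values at one level
    return rejections / reps

# Example alternative: a two-parameter Weibull with illustrative shape 1.5 and scale 2.0.
weibull_alt = lambda n, rng: 2.0 * rng.weibull(1.5, size=n)
# power = power_against(weibull_alt, n=20, crit=critical_values(20)[0.05])
```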

Table 1: Critical points of the test statistics using Method A. Within each sample size, the five rows give the critical values of D_n, W_n², A_n², U_n² and L_n, in that order.

Sample size   Test statistic   γ=0.01   γ=0.025   γ=0.05   γ=0.10   γ=0.15   γ=0.20   γ=0.25   γ=0.90

10 0.537 0.508 0.482 0.449 0.426 0.407 0.393 0.268

1.259 1.14 1.028 0.912 0.833 0.766 0.717 0.238

7.843 7.259 6.768 6.245 5.882 5.615 5.38 2.955

3.04 2.933 2.837 2.736 2.666 2.612 2.57 2.189

15.427 9.83 7.295 5.573 4.989 4.672 4.515 3.942

15 0.522 0.503 0.478 0.449 0.43 0.415 0.402 0.252

1.776 1.616 1.479 1.313 1.202 1.123 1.059 0.41

10.864 10.129 9.486 8.753 8.288 7.956 7.646 4.509

4.784 4.64 4.509 4.361 4.263 4.191 4.133 3.576

17.519 9.628 7.359 5.872 5.465 5.275 5.137 4.348


20 0.512 0.497 0.473 0.446 0.428 0.414 0.402 0.269

2.287 2.056 1.88 1.691 1.551 1.454 1.374 0.592

13.919 12.856 12.081 11.216 10.683 10.235 9.878 6.031

6.53 6.324 6.155 5.978 5.852 5.761 5.689 4.726

14.192 9.289 6.987 6.763 6.248 5.953 5.731 4.725

25 0.508 0.484 0.462 0.441 0.425 0.412 0.402 0.282

2.695 2.473 2.272 2.048 1.897 1.786 1.693 0.797

16.447 15.443 14.558 13.589 12.951 12.464 12.064 7.729

8.192 7.982 7.792 7.582 7.44 7.339 7.252 6.432

14.776 10.375 9.125 7.894 7.178 6.675 6.337 5.078

30 0.502 0.48 0.46 0.437 0.422 0.411 0.402 0.287

3.142 2.886 2.655 2.401 2.247 2.126 2.021 0.993

19.16 18.072 17.002 15.934 15.235 14.698 14.235 9.326

9.889 9.637 9.42 9.18 9.034 8.923 8.827 7.868

17.667 14.319 11.806 9.498 8.178 7.461 6.984 5.411

35 0.492 0.472 0.454 0.433 0.419 0.408 0.399 0.295

3.576 3.281 3.062 2.78 2.604 2.461 2.344 1.24

21.755 20.609 19.488 18.326 17.557 16.945 16.44 11.19

11.568 11.282 11.077 10.807 10.64 10.504 10.393 9.355

26.154 20.474 15.797 11.443 9.548 8.321 7.645 5.727

40 0.487 0.466 0.449 0.43 0.417 0.406 0.397 0.295

4.059 3.692 3.405 3.116 2.937 2.79 2.668 1.451

24.643 23.123 21.883 20.625 19.823 19.178 18.622 12.846

13.304 12.944 12.668 12.391 12.22 12.082 11.963 10.812

42.058 29.153 20.615 13.572 10.576 9.159 8.255 6.032

45 0.482 0.463 0.447 0.428 0.415 0.405 0.396 0.302

4.48 4.11 3.82 3.494 3.281 3.114 2.977 1.672


27.207 25.624 24.42 23.06 22.05 21.354 20.78 14.633

14.968 14.609 14.331 14.017 13.813 13.651 13.52 12.277

77.613 48.435 29.9968 16.41 12.126 10.011 8.912 6.309

50 0.48 0.458 0.442 0.424 0.412 0.402 0.395 0.303

4.885 4.487 4.198 3.815 3.596 3.433 3.289 1.904

29.654 28.058 26.738 25.273 24.296 23.527 22.904 16.354

16.631 16.235 15.961 15.589 15.376 15.222 15.082 13.757

134.654 75.069 39.738 20.802 14.065 11.088 9.737 6.581

60 0.466 0.45 0.436 0.42 0.408 0.399 0.392 0.308

5.574 5.196 4.869 4.501 4.268 4.089 3.931 2.354

34.56 32.905 31.353 29.718 28.716 27.912 27.169 19.887

19.82 19.45 19.132 18.772 18.55 18.373 18.221 16.701

348.545 176.87 18.878 31.399 18.644 13.747 11.336 7.104

70 0.461 0.446 0.433 0.416 0.407 0.398 0.91 0.312

6.444 6.049 5.651 5.235 4.943 4.75 4.574 2.849

39.793 37.861 36.254 34.456 33.221 32.329 31.553 23.547

23.188 22.803 22.412 22.005 21.723 21.528 21.364 19.695

1.45×10³ 530.078 181.527 56.572 27.585 18.054 13.741 7.595

80 0.455 0.422 0.43 0.413 0.403 0.395 0.389 0.314

7.203 6.735 6.347 5.891 5.579 5.365 5.17 3.305

44.505 42.541 40.78 38.779 37.547 36.554 35.756 27

26.44 25.992 25.612 25.161 24.862 24.65 24.461 22.645

4.07×10³ 1.2×10³ 360.597 93.256 37.304 22.443 16.425 8.059

Table 1 shows that all critical values increase as the significance level γ decreases. The critical values of the
KS statistic D_n decrease as n increases, whereas the critical values of the CVM (W_n²), AD (A_n²), Watson (U_n²)
and L_n statistics generally increase as n increases. We also note from Table 1 that, for a given sample size, the
L_n statistic has the largest critical values at all significance levels, the critical values of the AD statistic are
larger than those of the Watson statistic, the critical values of the Watson statistic are larger than those of the
CVM statistic, and the KS statistic has the smallest critical values.


Table 2: Sampling distributions of the five test statistics using the Pearson system. Within each sample size, the five rows correspond to the D_n, W_n², A_n², U_n² and L_n statistics, in that order.
Sample size n   Test statistic   Distribution type
-----------------------------------------------------------------------------------------------------------------------
γ=0.01   γ=0.025   γ=0.05   γ=0.10   γ=0.15   γ=0.20   γ=0.25

10 I IV IV IV VI I I

I I I I IV IV IV

I I IV IV I I I

I I I I I IV IV

IV IV IV I I I I

15 IV IV I I I I I

I I I I IV IV IV

I I IV VI I I I

I I I I IV IV IV

IV IV IV I I I I

20 I IV VI I I I I

I I I IV IV IV IV

I I IV IV I I I

I I I I IV IV IV

IV IV I I IV IV I

25 IV IV I I I I I

I I I IV IV IV VI

I I IV VI I I I

I I I IV IV IV IV

IV VI I I I I IV

30 IV IV VI I I I I

I I I IV IV IV IV

I I IV IV I I I

I I I IV IV IV IV

IV IV VI I I I I

35 IV IV I I I I I

I I I IV IV VI I


I I IV VI I I I

I I I IV IV VI I

IV IV IV VI I I I

40 IV IV I I I I I

I I I IV IV VI I

I IV IV VI I I I

I I I IV IV VI I

IV IV IV VI I I I

45 IV IV VI I I I I

I I I IV IV VI I

I I I VI I I I

I I I IV IV VI I

IV IV IV IV I I I

50 IV IV I I I I I

I I I IV IV I I

I IV IV I I I I

I I I IV IV VI I

IV IV IV IV VI I I

60 IV IV I I I I I

I I IV IV VI I I

I IV IV I I I I

I I IV IV IV I I

IV IV IV IV VI I I

70 IV IV VI I I I I

I I I IV VI I I

I I IV VI I I I

I I I IV IV I I

IV IV IV IV IV VI I


80 IV IV I I I I I

I I IV IV VI I I

I IV IV I I I I

I I IV IV VI I I

IV IV IV IV IV VI I

Table 3: Power function when the alternative distribution is the exponentiated exponential distribution. Within each sample size, the three unlabelled rows between D_n and L_n give the powers of W_n², A_n² and U_n², in that order.
Sample size   Test statistic   Power of the test
----------------------------------------------------------------------------------------------------------------------
γ = 0.20   γ = 0.15   γ = 0.10   γ = 0.05   γ = 0.025   γ = 0.01
10 Dn 0.9079 0.9176 0.9297 0.9446 0.9547 0.9644

0.9478 0.9581 0.9653 0.9733 0.9811 0.9870

0.6856 0.7136 0.7430 0.7800 0.8076 0.8321

0.9157 0.9372 0.9562 0.9303 0.9774 0.9852

Ln 0.0007 0.0026 0.0159 0.2116 0.247 0.4350


15 Dn 0.9615 0.9749 0.9805 0.9886 0.9894 0.9936

0.9822 0.9856 0.9878 0.9908 0.9933 0.9945

0.8009 0.8227 0.8480 0.8794 0.9003 0.916

0.9781 0.9819 0.9860 0.9898 0.9924 0.9943

Ln 0.0005 0.0011 0.0045 0.0487 0.1639 0.4077


20 Dn 0.9907 0.9925 0.9941 0.9957 0.9967 0.9972

0.9946 0.9956 0.9968 0.998 0.9985 0.9989

0.8677 0.8894 0.9100 0.9318 0.9461 0.9591

0.9936 0.9951 0.9961 0.9976 0.998 0.9988

Ln 0.0005 0.0025 0.0074 0.0106 0.0917 0.2755


25 Dn 0.9965 0.997 0.9977 0.9981 0.9988 0.9993

0.9978 0.998 0.9988 0.999 0.9993 0.9996

0.9073 0.9210 0.9361 0.9514 0.9632 0.9738

0.9975 0.998 0.9981 0.999 0.9994 1

Ln 0.0005 0.0049 0.0170 0.0553 0.1039 0.2660


30 Dn 0.9976 0.998 0.9989 0.9994 0.9995 0.9996

0.9987 0.9991 0.9995 0.9998 0.9998 1

0.8158 0.8314 0.8447 0.9602 0.9729 0.9792

0.9989 0.999 0.9994 0.9997 0.9999 1

Ln 0.0005 0.0049 0.0261 0.0910 0.1676 0.2510

From Table 3, we note that the modified CVM test is the most powerful for all significance levels and all
sample sizes n.


Table 4: Power function when the alternative distribution is the Cauchy distribution. Within each sample size, the three unlabelled rows between D_n and L_n give the powers of W_n², A_n² and U_n², in that order.

Sample size   Test statistic   Power of the test
---------------------------------------------------------------------------------------------------------------------
γ = 0.20   γ = 0.15   γ = 0.10   γ = 0.05   γ = 0.025   γ = 0.01
10 Dn 0.502 0.503 0.5036 0.5040 0.5048 0.5055

0.6280 0.781 0.8516 0.9088 0.9150 0.9162

0.8248 0.8295 0.8366 0.911 0.9200 0.9318

0.614 0.6174 0.6256 0.6530 0.8808 0.9056

Ln 0.9612 0.9659 0.9726 0.9842 0.9927 0.9972


15 Dn 0.607 0.6084 0.6102 0.612 0.7224 0.7249

0.7154 0.7182 0.9293 0.9334 0.9386 0.952

0.8444 0.8544 0.8654 0.8828 0.9276 0.9410

0.6571 0.6945 0.7119 0.7196 0.9314 0.9363

Ln 0.9567 0.9603 0.9643 0.9782 0.9884 0.9973


20 Dn 0.4919 0.4926 0.4936 0.4955 0.4963 0.6635

0.7517 0.7569 0.9363 0.9650 0.9703 0.9730

0.8724 0.8837 0.8923 0.9097 0.9196 0.9516

0.7462 0.7494 0.7532 0.7694 0.9619 0.9702

Ln 0.9608 0.9649 0.9712 0.9737 0.9869 0.9950


25 Dn 0.4529 0.4534 0.4556 0.4573 0.4598 0.5931

0.7961 0.8258 0.8365 0.9721 0.9765 0.9839

0.8899 0.8977 0.9076 0.9201 0.9302 0.9420

0.77 0.7833 0.8181 0.8363 0.9718 0.9768

Ln 0.9663 0.9712 0.9762 0.9837 0.9880 0.9955


30 Dn 0.4148 0.4161 0.4172 0.419 0.420 0.5247

0.860 0.863 0.866 0.9795 0.9884 0.9894

0.9012 0.9086 0.9184 0.9322 0.9429 0.9528

0.85 0.8578 0.8622 0.8670 0.9821 0.9886

Ln 0.9747 0.9788 0.9860 0.9920 0.9957 0.9975

From Table 4, we note that the modified L_n test is the most powerful for all significance levels and for all sample sizes.

Table 5: Power function when the alternative distribution is the three-parameter Weibull distribution. Within each sample size, the three unlabelled rows between D_n and L_n give the powers of W_n², A_n² and U_n², in that order.
Sample size   Test statistic   Power of the test
---------------------------------------------------------------------------------------------------------------------
γ = 0.20   γ = 0.15   γ = 0.10   γ = 0.05   γ = 0.025   γ = 0.01
10 Dn 0.5435 0.5830 0.6266 0.680 0.7140 0.7405

0.7697 0.7697 0.7698 0.7698 0.7699 0.7699

0.7682 0.7696 0.7692 0.7697 0.7697 0.7697

0.6984 0.7349 0.7661 0.7695 0.7695 0.7697

Ln 0.9902 0.9902 0.9904 0.9994 0.9994 0.9994

15 Dn 0.7254 0.7655 0.801 0.8311 0.8389 0.8405

0.8394 0.8394 0.8405 0.8407 0.8407 0.8409

0.8394 0.8401 0.8403 0.8405 0.8405 0.8405

0.8146 0.8315 0.8404 0.8406 0.8406 0.8408

Ln 0.9989 0.9989 0.991 0.991 1 1


20 Dn 0.8156 0.844 0.8656 0.8802 0.8816 0.8846

0.8843 0.8847 0.8847 0.8849 0.8849 0.885

0.884 0.8842 0.8845 0.8847 0.8847 0.8847

0.8712 0.88 0.8845 0.8847 0.8847 0.8847

Ln 0.9937 0.9937 0.994 0.994 0.999 1


25 Dn 0.9086 0.9262 0.9380 0.944 0.9463 0.9470

0.9086 0.9362 0.9380 0.944 0.9463 0.9470

0.9444 0.9444 0.9448 0.9469 0.9469 0.9472

0.9404 0.9450 0.9469 0.9469 0.9469 0.947

Ln 0.999 0.999 0.999 0.9994 1 1


30 Dn 0.9222 0.9332 0.9425 0.9474 0.9484 0.9487

0.9085 0.9275 0.9285 0.9385 0.9385 0.95

0.9460 0.9465 0.9485 0.9485 0.9488 0.949

0.9428 0.9470 0.9485 0.9485 0.9485 0.9485

Ln 0.999 0.9995 0.9995 1 1 1

From Table 5, we note that the modified L_n test is the most powerful for all significance levels and for all
sample sizes.

Table 6: Power function when the alternative distribution is the logistic distribution. Within each sample size, the three unlabelled rows between D_n and L_n give the powers of W_n², A_n² and U_n², in that order.

Sample size   Test statistic   Power of the test
----------------------------------------------------------------------------------------------------------------------
γ = 0.20   γ = 0.15   γ = 0.10   γ = 0.05   γ = 0.025   γ = 0.01
10 Dn 0.4766 0.4809 0.4868 0.4951 0.5 0.5071

0.5640 0.5707 0.5804 0.5981 0.6058 0.6096

0.5902 0.5933 0.5960 0.6007 0.6032 0.6063

0.5331 0.5434 0.5541 0.5675 0.5795 0.5944

Ln 0.9980 0.9989 0.9991 0.9995 1 1


15 Dn 0.4538 0.4630 0.428 0.4892 0.5 0.5044

0.6454 0.6524 0.6597 0.6670 0.6709 0.6729

0.6546 0.6571 0.6599 0.6651 0.6685 0.6702

0.6114 0.6259 0.6415 0.6550 0.6619 0.668

Ln 0.9976 0.998 0.9987 0.9994 0.9997 1

20 Dn 0.4242 0.4349 0.4470 0.4658 0.4789 0.4885

0.6948 0.6994 0.7042 0.7083 0.711 0.7124

0.6997 0.7023 0.7043 0.7075 0.7096 0.7111

0.6789 0.6874 0.6950 0.7022 0.7070 0.7106

Ln 0.9989 0.9990 0.9993 0.9994 0.9996 1


25 Dn 0.3828 0.3943 0.4104 0.4249 0.4406 0.4604

0.7338 0.7379 0.7405 0.7437 0.7458 0.7467

0.7377 0.7389 0.740 0.7416 0.7436 0.7446

0.7230 0.7289 0.7351 0.7404 0.7435 0.7453

Ln 0.9980 0.9985 0.9987 0.9992 0.9995 0.9998


30 Dn 0.3251 0.3347 0.3473 0.3659 0.3825 0.4005

0.758 0.761 0.7639 0.7658 0.7664 0.7667

0.76 0.761 0.7625 0.7640 0.7651 0.7659

0.7516 0.7557 0.7593 0.7640 0.7657 0.7666

Ln 0.9985 0.9987 0.999 0.9997 0.9997 0.9998

From Table 6, we note that the modified L_n test is the most powerful for all significance levels and for all
sample sizes.

Table 7: Power function when the alternative distribution is the two-parameter Weibull distribution. Within each sample size, the three unlabelled rows between D_n and L_n give the powers of W_n², A_n² and U_n², in that order.
Sample size   Test statistic   Power of the test
----------------------------------------------------------------------------------------------------------------------
γ = 0.20   γ = 0.15   γ = 0.10   γ = 0.05   γ = 0.025   γ = 0.01
10 Dn 0.915 0.9155 0.9159 0.9159 0.9159 0.9159

0.915 0.9155 0.9159 0.9159 0.9159 0.9159

0.915 0.9155 0.9159 0.9159 0.9159 0.9159

0.915 0.9155 0.9159 0.9159 0.9159 0.9159

Ln 0.8872 0.888 0.8894 0.9974 0.9983 0.9993


15 Dn 0.9598 0.9598 0.9598 0.9598 0.9598 0.9598

0.9598 0.9598 0.9598 0.9598 0.9598 0.9598

0.9598 0.9598 0.9598 0.9598 0.9598 0.9598

0.9598 0.9598 0.9598 0.9598 0.9598 0.9598

Ln 0.9321 0.9325 0.9335 0.9971 0.9984 0.9995


20 Dn 0.9819 0.9819 0.9819 0.9819 0.9819 0.9819

0.9819 0.9819 0.9819 0.9819 0.9819 0.9819

0.9819 0.9819 0.9819 0.9819 0.9819 0.9819

0.9819 0.9819 0.9819 0.9819 0.9819 0.9819

Ln 0.96 0.9612 0.9619 0.9624 0.9997 0.9999

25 Dn 0.9893 0.9893 0.9893 0.9893 0.9893 0.9893

0.9893 0.9893 0.9893 0.9893 0.9893 0.9893

0.9893 0.9893 0.9893 0.9893 0.9893 0.9893

0.9893 0.9893 0.9893 0.9893 0.9893 0.9893

Ln 0.9875 0.9878 0.9879 0.9985 0.9999 0.9999


30 Dn 0.9942 0.9942 0.9942 0.9942 0.9942 0.9942

0.9942 0.9942 0.9942 0.9942 0.9942 0.9942

0.9942 0.9942 0.9942 0.9942 0.9942 0.9942

0.9942 0.9942 0.9942 0.9942 0.9942 0.9942

Ln 0.9932 0.9934 0.999 0.9995 1 1

We conclude from Tables 3 to 7 that the modified L_n test is the most powerful against all alternative distributions
except the exponentiated exponential distribution, for which the modified CVM test is the most powerful.

REFERENCES

Abd-Elfattah, A.M. and A.H. Alharby, 2004. "A study on Lomax distribution as a life testing model", The Scientific
Journal of Economics and Commerce, Faculty of Commerce, Ain Shams University, 1: 1-8.
Abd-Elfattah, A.M. and M.A. Dahlan, 2003. "Goodness of fit tests for the Burr distribution type III based
on censored and complete samples", The Scientific Journal, Faculty of Commerce, Al-Azhar University, 28:
1-16.
Abd-Elfattah, A.M. and A.M. Omima, 2009. "Estimation of the Unknown Parameters of the Generalized
Frechet Distribution", Journal of Applied Sciences Research, 5(10): 1398-1408.
Anderson, T.W. and D.A. Darling, 1954. "A test of goodness-of-fit", J. Amer. Statist. Assoc., 49: 765-769.
Ashour, S.K. and A.M. Abd-Elfattah, 1994. "Modified goodness of fit tests for the Burr distribution with
two unknown shape parameters" ,The Egyptian Statistical Journal, Institute of Statistical Studies &Research,
Cairo University, 38(1): 115-128.
Durbin, J., 1975. "Kolmogorov-Smirnov tests when parameters are estimated with applications to tests of
exponentiality and tests on spacings", Biometrika, 62(1): 5-22.
Green, J.R. and Y.A.S. Hegazy, 1976. "Powerful modified EDF goodness-of-fit tests". J.A.S.A., 71: 204-
209.
Gupta, R.D. and D. Kundu, 2001. "Generalized exponential distributions: different methods of estimation",
Journal of Statistical Computation and Simulation, 69: 315-338.
Gupta, R.D. and D. Kundu, 2007. "Generalized exponential distribution: existing results and some recent
developments", Journal of Statistical Planning and Inference, doi: 10.1016/j.jspi.2007.03.030.
Kotz, S. and S. Nadarajah, 2000. "Extreme Value Distributions: Theory and Applications", London: Imperial
College Press.
Kundu, D. and R.D. Gupta, 2008. "Generalized exponential distribution: Bayesian inference",
Computational Statistics and Data Analysis, 52: 1873-1883.
Hassan, A.S., 2005. "Goodness-of-fit for the generalized exponential distribution", Interstat Electronic
Journal, July 2005, 1-15.
Liao, M. and T. Shimokawa, 1999. "A new goodness-of-fit test for Type-I extreme-value and 2-parameter
Weibull distributions with estimated parameters", Journal of Statistical Computation and Simulation, 64: 23-48.
Lilliefors, H.W., 1967. "On the Kolmogorov-Smirnov test for normality with mean and variance unknown",
J.A.S.A., 62: 399-402.
Lilliefors, H.W., 1969. "On the Kolmogorov-Smirnov test for the exponential distribution with mean
unknown", J.A.S.A., 64: 387-389.
Littell, R.D., J.T. McClave and W.W. Offen, 1979. "Goodness-of-fit tests for the two-parameter Weibull
distribution", Commun. Statist.-Simul. Comput., 8(3): 257-269.

300
Aust. J. Basic & Appl. Sci., 4(2): 286-301, 2010

Mudholkar, G.S. and A.D. Hutson, 1996. "The exponentiated Weibull family: some properties and a flood
data application", Communications in Statistics-Theory and Methods, 25: 3059-3083.
Nadarajah, S. and S. Kotz, 2003. "The Exponentiated Frechet Distribution", Interstat Electronic Journal.
Pearson, K., 1895. "Contributions to the mathematical theory of evolution. II. Skew variation in
homogeneous material", Philosophical Transactions of the Royal Society of London, Series A, 186: 343-414.
Shawky, A.I. and R.A. Bakoban, 2009. "Modified goodness-of-fit tests for the exponentiated gamma
distribution with unknown shape parameter", Interstat Electronic Journal, July 2009.
Smith, R.M. and L.J. Bain, 1976. "Correlation type goodness-of-fit statistics with censored sampling",
Communications in Statistics, Part A-Theory and Methods, 5: 119-132.
Stephens, M.A., 1970. "Use of the Kolmogorov-Smirnov, Cramer-von Mises and related statistics without
extensive tables", J. Roy. Statist. Soc., B, 32: 115-122.
Stephens, M.A., 1974. "EDF statistics for goodness-of-fit and some comparisons", J. Amer. Statist. Assoc.,
69: 703-737.
Stephens, M.A., 1977. "Goodness-of-fit for the extreme-value distribution", Biometrika, 64: 583-588.
Surles, J.G. and W.J. Padgett, 2001. "Inference for reliability and stress-strength for a scaled Burr Type
X distribution", Lifetime Data Analysis, 7: 187-200.
Wozniak, P.J. and X. Li, 1990. "Goodness of fit tests for the two-parameter Weibull distribution with
estimated parameters", J. Statist. Comp. and Simul., 34: 131-143.

