Research Article
Estimating the Reliability Function for a Family of
Exponentiated Distributions
Received 4 June 2013; Revised 17 December 2013; Accepted 22 March 2014; Published 29 April 2014
Copyright © 2014 A. Chaturvedi and A. Pathak. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A family of exponentiated distributions is proposed. The problems of estimating the reliability function are considered. Uniformly minimum variance unbiased estimators and maximum likelihood estimators are derived. A comparative study of the two methods of estimation is carried out, and a simulation study is performed.
exponentiated Pareto distribution. They also considered five other estimation procedures and compared their performances through numerical simulations.

The purpose of the present paper is manifold. We propose a family of exponentiated distributions, which covers as many as ten exponentiated distributions as specific cases. We consider the problems of estimating 𝑅(𝑡) and “𝑃.” Uniformly minimum variance unbiased estimators (UMVUES) and maximum likelihood estimators (MLES) are derived. In order to obtain these estimators, the major role is played by the estimator of the probability density function (pdf) at a specified point, and the functional forms of the parametric functions to be estimated are not needed. It is worth mentioning here that, in order to estimate “𝑃,” the authors in the literature have considered the case when 𝑋 and 𝑌 follow the same distribution, possibly with different parameters. We have generalized the result to the case when 𝑋 and 𝑌 may follow any distribution from the proposed exponentiated family of distributions. A simulation study is carried out to investigate the performance of the estimators.

In Section 2, we introduce the family of exponentiated distributions. In Section 3, we derive the UMVUES of 𝑅(𝑡) and “𝑃.” In Section 4, we obtain the MLES of 𝑅(𝑡) and “𝑃.” In Section 5, an analysis of simulated data is presented for illustrative purposes. Finally, in Section 6, conclusions are provided.

(vi) For 𝑔(𝑥; 𝑎, 𝜃) = log(1 + (𝑥/𝜉)), 𝜉 > 0 and 𝑎 = 0, it gives us the exponentiated Lomax distribution [see [30]].

(vii) For 𝑔(𝑥; 𝑎, 𝜃) = log(1 + (𝑥^𝜉/𝜈)), 𝜉 > 0, 𝜈 > 0 and 𝑎 = 0, it turns out to be the exponentiated Burr distribution with scale parameter 𝜈 [see [31]].

(viii) For 𝑔(𝑥; 𝑎, 𝜃) = 𝑥^𝛾 exp(𝜈𝑥), 𝛾 > 0, 𝜈 > 0 and 𝑎 = 0, it gives the exponentiated form of the modified Weibull distribution of Lai et al. [32].

(ix) For 𝑔(𝑥; 𝑎, 𝜃) = 𝛾[exp{(𝑥/𝛾)^𝜈} − 1], 𝛾 > 0, 𝜈 > 0 and 𝑎 = 0, we get the exponentiated form of a modification of the Weibull distribution [see [33]].

(x) For 𝑔(𝑥; 𝑎, 𝜃) = (𝑥 − 𝑎) + (𝜈/𝜆) log((𝑥 + 𝜈)/(𝑎 + 𝜈)), 𝜈 > 0, it gives the exponentiated form of the generalized Pareto distribution [see [34]].

3. UMVUES of 𝑅(𝑡) and “𝑃”

Throughout this section, we assume that 𝛼 is unknown, but “𝑎,” 𝜆, and 𝜃 are known. Let 𝑋₁, 𝑋₂, . . . , 𝑋ₙ be a random sample of size 𝑛 from (1).

Lemma 1. Let 𝑇 = ∑ᵢ₌₁ⁿ log(1 − 𝑒^{−𝜆𝑔(𝑋ᵢ;𝑎,𝜃)}). Then, 𝑇 is complete and sufficient for the family of distributions given in (1). Moreover, the pdf of 𝑇 is

$$h(t;\alpha,\lambda,\theta)=\frac{\alpha^{n}}{\Gamma(n)}\,(-t)^{n-1}e^{\alpha t};\quad -\infty<t<0.\tag{4}$$
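Lemma 1 can be checked by simulation for the simplest member of the family, 𝑔(𝑥; 𝑎, 𝜃) = 𝑥 (the exponentiated exponential case): there −𝑇 is a sum of 𝑛 independent Exp(𝛼) variables, i.e., Gamma(𝑛, 𝛼), in agreement with the pdf of 𝑇. A minimal Python sketch (the parameter values are illustrative choices, not from the paper):

```python
import numpy as np

# Special case g(x; a, theta) = x (exponentiated exponential member).
# alpha, lam, n, reps are illustrative, not values used by the authors.
rng = np.random.default_rng(0)
alpha, lam, n, reps = 2.5, 1.3, 20, 20000

# Draw X by inversion: F(x) = (1 - exp(-lam*x))**alpha, so
# X = -log(1 - U**(1/alpha)) / lam for U ~ Uniform(0, 1).
u = rng.uniform(size=(reps, n))
x = -np.log(1.0 - u ** (1.0 / alpha)) / lam

# T = sum_i log(1 - exp(-lam * X_i)); per Lemma 1, -T ~ Gamma(n, rate alpha),
# so E[-T] = n/alpha and Var[-T] = n/alpha**2.
t = np.log(1.0 - np.exp(-lam * x)).sum(axis=1)
print((-t).mean(), n / alpha)       # sample mean of -T vs n/alpha
print((-t).var(), n / alpha**2)     # sample variance of -T vs n/alpha^2
```

The two printed pairs should agree up to Monte Carlo error, which supports the Gamma(𝑛, 𝛼) form of −𝑇 for this member.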
Lemma 2. For 𝑞 < 𝑛, the UMVUE of 𝛼^𝑞 is $\widehat{\alpha^{q}}=\dfrac{\Gamma(n)}{\Gamma(n-q)}\,(-T)^{-q}$; in particular, $\hat\alpha=(n-1)(-T)^{-1}$.

Proof. The result follows from Lemma 1, the Lehmann–Scheffé theorem [see [35, p. 357]], and the fact that

$$E\left[(-2\alpha T)^{-q}\right]=\frac{\Gamma(n-q)}{2^{q}\,\Gamma(n)};\quad q<n.\tag{5}$$

In the following lemma, we provide the UMVUE of the sampled pdf (1) at a specified point “𝑥.”

Lemma 3. The UMVUE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) at a specified point “𝑥” is

$$\hat f(x;a,\alpha,\lambda,\theta)=\begin{cases}-(n-1)\lambda T^{-1}\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\\[2pt]\quad\times\left[1-T^{-1}\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{n-2};& T<\log\left\{1-e^{-\lambda g(x;a,\theta)}\right\},\\[6pt]0;&\text{otherwise.}\end{cases}\tag{6}$$

Proof. Since 𝑇 is complete and sufficient for the family of distributions 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃), any function 𝐻(𝑇) of 𝑇 satisfying 𝐸[𝐻(𝑇)] = 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) will be the UMVUE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃). To this end, from (1) and Lemma 1,

$$\frac{\alpha^{n}}{\Gamma(n)}\int_{-\infty}^{0}H(t)\,(-t)^{n-1}e^{\alpha t}\,dt=\alpha\lambda\,g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\left(1-e^{-\lambda g(x;a,\theta)}\right)^{\alpha-1},\tag{7}$$

or

$$-\frac{\alpha^{n-1}}{\Gamma(n)}\int_{\log\left(1-e^{-\lambda g(x;a,\theta)}\right)}^{\infty}H\left[-u+\log\left\{1-e^{-\lambda g(x;a,\theta)}\right\}\right]\left[-u+\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{n-1}e^{-\alpha u}\,du=\lambda\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}.\tag{8}$$

Equation (8) is satisfied if we choose

$$H\left[-u+\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]=\begin{cases}\dfrac{-(n-1)\lambda\,g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\,(-u)^{n-2}}{\left(1-e^{-\lambda g(x;a,\theta)}\right)\left[-u+\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{n-1}};& u>0,\\[8pt]0;&\text{otherwise,}\end{cases}\tag{9}$$

and the lemma holds.

Remark 4. We can write (1) as

$$f(x;a,\alpha,\lambda,\theta)=\lambda\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\sum_{i=0}^{\infty}\frac{\left[\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{i}}{i!}\,\alpha^{i+1}.\tag{10}$$

Using Lemma 1 of Chaturvedi and Tomer [37] and Lemma 2, the UMVUE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) at a specified point “𝑥” is

$$\begin{aligned}\hat f(x;a,\alpha,\lambda,\theta)&=\lambda\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\sum_{i=0}^{\infty}\frac{\left[\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{i}}{i!}\,\widehat{\alpha^{i+1}}\\&=\lambda\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\sum_{i=0}^{n-2}\frac{\Gamma(n)}{\Gamma(n-i-1)}\,(-T)^{-(i+1)}\,\frac{\left[\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{i}}{i!},\end{aligned}\tag{11}$$

or

$$\hat f(x;a,\alpha,\lambda,\theta)=\begin{cases}-(n-1)\lambda T^{-1}\left(1-e^{-\lambda g(x;a,\theta)}\right)^{-1}g'(x;a,\theta)\,e^{-\lambda g(x;a,\theta)}\\[2pt]\quad\times\left[1-T^{-1}\log\left(1-e^{-\lambda g(x;a,\theta)}\right)\right]^{n-2};& T<\log\left\{1-e^{-\lambda g(x;a,\theta)}\right\},\\[6pt]0;&\text{otherwise,}\end{cases}\tag{12}$$

which coincides with Lemma 3. Thus, the UMVUE of the power of 𝛼 can be used to derive the UMVUE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) at a specified point.

In the following theorem, we obtain the UMVUE of 𝑅(𝑡).

Theorem 5. The UMVUE of 𝑅(𝑡) is given by

$$\hat R(t)=\begin{cases}1-\left[1-T^{-1}\log\left\{1-e^{-\lambda g(t;a,\theta)}\right\}\right]^{n-1};& T<\log\left(1-e^{-\lambda g(t;a,\theta)}\right),\\[4pt]0;&\text{otherwise.}\end{cases}\tag{13}$$

Proof. Let us consider the expected value of the integral $\int_{t}^{\infty}\hat f(x;a,\alpha,\lambda,\theta)\,dx$ with respect to 𝑇; that is,

$$\int_{-\infty}^{0}\left\{\int_{t}^{\infty}\hat f(x;a,\alpha,\lambda,\theta)\,dx\right\}h(w;\alpha,\lambda,\theta)\,dw=\int_{t}^{\infty}E_{T}\left[\hat f(x;a,\alpha,\lambda,\theta)\right]dx=\int_{t}^{\infty}f(x;a,\alpha,\lambda,\theta)\,dx=R(t).\tag{14}$$
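Theorem 5 can be checked by Monte Carlo for the member 𝑔(𝑥; 𝑎, 𝜃) = 𝑥: averaging 𝑅̂(𝑡) of (13) over many samples should recover 𝑅(𝑡) = 1 − (1 − 𝑒^{−𝜆𝑡})^𝛼. A Python sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

# Monte Carlo check of the unbiasedness of the UMVUE R_hat(t), eq. (13),
# for g(x; a, theta) = x.  alpha, lam, n, t0 are illustrative choices.
rng = np.random.default_rng(1)
alpha, lam, n, reps, t0 = 2.0, 1.0, 25, 40000, 0.8

u = rng.uniform(size=(reps, n))
x = -np.log(1.0 - u ** (1.0 / alpha)) / lam          # inversion sampling
T = np.log(1.0 - np.exp(-lam * x)).sum(axis=1)       # complete sufficient statistic

L = np.log(1.0 - np.exp(-lam * t0))                  # log(1 - e^{-lam*g(t)})
r_hat = np.where(T < L, 1.0 - (1.0 - L / T) ** (n - 1), 0.0)   # eq. (13)
r_true = 1.0 - (1.0 - np.exp(-lam * t0)) ** alpha
print(r_hat.mean(), r_true)   # the two values should be close
```

The agreement of the averaged estimate with the true reliability illustrates the unbiasedness asserted by the theorem.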
Journal of Probability and Statistics
We conclude from (14) that the UMVUE of 𝑅(𝑡) can be obtained simply by integrating 𝑓̂(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) from 𝑡 to ∞.

In the following theorem, let 𝑌₁, 𝑌₂, . . . , 𝑌ₘ be an independent random sample from a second member of the family with pdf 𝑓₂ based on ℎ(𝑦; 𝑎₂, 𝜃₂), let 𝑆 = ∑ⱼ₌₁ᵐ log(1 − 𝑒^{−𝜆₂ℎ(𝑌ⱼ;𝑎₂,𝜃₂)}), and denote by 𝑔^{−1}(⋅) and ℎ^{−1}(⋅) the inverse functions of 𝑔(𝑥; 𝑎, 𝜃) and ℎ(𝑦; 𝑎, 𝜃), respectively.

Theorem 6. The UMVUE of “𝑃” is given by

$$\hat P=\begin{cases}1+(m-1)\displaystyle\int_{1}^{0}\left[1-T^{-1}\log\left\{1-e^{-\lambda_{1}g\left(h^{-1}\left(-(1/\lambda_{2})\log\left(1-e^{Sv}\right)\right)\right)}\right\}\right]^{n-1}(1-v)^{m-2}\,dv;\\
\qquad\qquad h^{-1}\left\{-\lambda_{2}^{-1}\log\left(1-e^{S}\right)\right\}>g^{-1}\left\{-\lambda_{1}^{-1}\log\left(1-e^{T}\right)\right\},\\[8pt]
1+(m-1)\displaystyle\int_{S^{-1}\log\left\{1-e^{-\lambda_{2}h\left(g^{-1}\left(-(1/\lambda_{1})\log\left(1-e^{T}\right)\right)\right)}\right\}}^{0}\left[1-T^{-1}\log\left\{1-e^{-\lambda_{1}g\left(h^{-1}\left(-(1/\lambda_{2})\log\left(1-e^{Sv}\right)\right)\right)}\right\}\right]^{n-1}(1-v)^{m-2}\,dv;\\
\qquad\qquad h^{-1}\left\{-\lambda_{2}^{-1}\log\left(1-e^{S}\right)\right\}<g^{-1}\left\{-\lambda_{1}^{-1}\log\left(1-e^{T}\right)\right\}.\end{cases}\tag{16}$$

Proof. From arguments similar to those adopted in proving Theorem 5, it can be shown that the UMVUE of “𝑃” is given by

$$\hat P=\int_{y=a_{2}}^{\infty}\int_{x=y}^{\infty}\hat f_{1}(x;a_{1},\alpha_{1},\lambda_{1},\theta_{1})\,\hat f_{2}(y;a_{2},\alpha_{2},\lambda_{2},\theta_{2})\,dx\,dy.\tag{17}$$

Using Lemma 3, we obtain

$$\begin{aligned}\hat P={}&-\int_{y=a_{2}}^{\infty}\left[\int_{x=y}^{\infty}(n-1)\lambda_{1}T^{-1}\left(1-e^{-\lambda_{1}g(x;a_{1},\theta_{1})}\right)^{-1}g'(x;a_{1},\theta_{1})\,e^{-\lambda_{1}g(x;a_{1},\theta_{1})}\right.\\
&\qquad\left.\times\left\{1-T^{-1}\log\left(1-e^{-\lambda_{1}g(x;a_{1},\theta_{1})}\right)\right\}^{n-2}dx\right]\hat f_{2}(y;a_{2},\alpha_{2},\lambda_{2},\theta_{2})\,dy\\
={}&1+(m-1)\lambda_{2}S^{-1}\int_{y=\max\left[h^{-1}\left\{-\lambda_{2}^{-1}\log\left(1-e^{S}\right)\right\},\;g^{-1}\left\{-\lambda_{1}^{-1}\log\left(1-e^{T}\right)\right\}\right]}^{\infty}\left\{1-T^{-1}\log\left(1-e^{-\lambda_{1}g(y;a_{1},\theta_{1})}\right)\right\}^{n-1}\\
&\qquad\times\left(1-e^{-\lambda_{2}h(y;a_{2},\theta_{2})}\right)^{-1}h'(y;a_{2},\theta_{2})\,e^{-\lambda_{2}h(y;a_{2},\theta_{2})}\left[1-S^{-1}\log\left(1-e^{-\lambda_{2}h(y;a_{2},\theta_{2})}\right)\right]^{m-2}dy.\end{aligned}\tag{18}$$

Substituting 𝑣 = 𝑆^{−1} log(1 − 𝑒^{−𝜆₂ℎ(𝑦;𝑎₂,𝜃₂)}), when ℎ^{−1}{−𝜆₂^{−1} log(1 − 𝑒^𝑆)} > 𝑔^{−1}{−𝜆₁^{−1} log(1 − 𝑒^𝑇)} we obtain

$$\hat P=1+(m-1)\int_{1}^{0}\left[1-T^{-1}\log\left\{1-e^{-\lambda_{1}g\left(h^{-1}\left(-(1/\lambda_{2})\log\left(1-e^{Sv}\right)\right)\right)}\right\}\right]^{n-1}(1-v)^{m-2}\,dv,\tag{19}$$

and, when ℎ^{−1}{−𝜆₂^{−1} log(1 − 𝑒^𝑆)} < 𝑔^{−1}{−𝜆₁^{−1} log(1 − 𝑒^𝑇)},

$$\hat P=1+(m-1)\int_{S^{-1}\log\left\{1-e^{-\lambda_{2}h\left(g^{-1}\left(-(1/\lambda_{1})\log\left(1-e^{T}\right)\right)\right)}\right\}}^{0}\left[1-T^{-1}\log\left\{1-e^{-\lambda_{1}g\left(h^{-1}\left(-(1/\lambda_{2})\log\left(1-e^{Sv}\right)\right)\right)}\right\}\right]^{n-1}(1-v)^{m-2}\,dv.\tag{20}$$
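In the equal-parameter setting of Corollary 7 (same 𝑎, 𝜆, 𝜃 and 𝑔 = ℎ, only the 𝛼's differing), the two distribution functions are powers of a common base, 𝐹_𝑋 = 𝐾^{𝛼₁} and 𝐹_𝑌 = 𝐾^{𝛼₂}, so “𝑃” = 𝑃(𝑋 > 𝑌) reduces to 𝛼₁/(𝛼₁ + 𝛼₂); this matches the value 0.5384615 reported in Section 5 for 𝛼₁ = 3.5, 𝛼₂ = 3. A Monte Carlo sketch (the member 𝑔(𝑥) = 𝑥 and the parameter values are illustrative assumptions):

```python
import numpy as np

# For two members of the family with common a, lam, theta and g = h,
# F_X = K**a1 and F_Y = K**a2 for a common base K, so
# P = P(X > Y) = a1 / (a1 + a2).  Checked here for g(x) = x.
rng = np.random.default_rng(2)
a1, a2, lam, reps = 3.5, 3.0, 1.0, 200000

ux, uy = rng.uniform(size=(2, reps))
x = -np.log(1.0 - ux ** (1.0 / a1)) / lam   # inversion sampling for X
y = -np.log(1.0 - uy ** (1.0 / a2)) / lam   # inversion sampling for Y

p_mc = (x > y).mean()
p_exact = a1 / (a1 + a2)                    # = 0.5384615..., cf. Section 5
print(p_mc, p_exact)
```

The empirical frequency of {𝑋 > 𝑌} should agree with 𝛼₁/(𝛼₁ + 𝛼₂) up to Monte Carlo error.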
The theorem now follows on combining (19) and (20).

Corollary 7. For the case when 𝑎₁ = 𝑎₂ = 𝑎, say, 𝜆₁ = 𝜆₂ = 𝜆, say, 𝜃₁ = 𝜃₂ = 𝜃, say, and 𝑔(𝑥; 𝑎, 𝜃) = ℎ(𝑦; 𝑎, 𝜃),

$$\hat P=\begin{cases}1-\displaystyle\sum_{i=0}^{n-1}(-1)^{i}\,\frac{(n-1)!\,(m-1)!}{(n-i-1)!\,(m+i-1)!}\left(\frac{S}{T}\right)^{i};& T>S,\\[10pt]1-\displaystyle\sum_{i=0}^{m-2}(-1)^{i}\,\frac{(n-1)!\,(m-1)!}{(n+i)!\,(m-i-2)!}\left(\frac{T}{S}\right)^{i+1};& T<S.\end{cases}\tag{21}$$

Remarks 1.

(i) In the literature, the researchers have dealt with the estimation of 𝑅(𝑡) and “𝑃” separately. If we look at the proofs of Theorems 5 and 6, we observe that the UMVUE of the sampled pdf is used to obtain the UMVUES of 𝑅(𝑡) and “𝑃.” Thus, we have established an interrelationship between the two estimation problems.

(ii) In the literature, the researchers have derived the UMVUES of “𝑃” for the case when 𝑋 and 𝑌 follow the same distribution (possibly with different parameters). We have obtained UMVUES of “𝑃” for all three situations: when 𝑋 and 𝑌 follow the same distribution having all the parameters the same other than the 𝛼's, when 𝑋 and 𝑌 have the same distribution with different parameters, and when 𝑋 and 𝑌 have different distributions.

(iii) It follows from Lemma 2 that Var(𝛼̂) = 𝛼²/(𝑛 − 2) → 0 as 𝑛 → ∞. Thus, 𝛼̂ is a consistent estimator of 𝛼. Since 𝑓̂(𝑥; 𝑎, 𝛼, 𝜆, 𝜃), 𝑅̂(𝑡), and 𝑃̂ are continuous functions of consistent estimators, they are also consistent estimators of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃), 𝑅(𝑡), and “𝑃,” respectively.

4. MLES of 𝑅(𝑡) and “𝑃” When All the Parameters Are Unknown

Looking at the distributions belonging to the family (1), we notice that, except for the exponentiated Pareto and generalized exponentiated Pareto distributions [(v) and (x)], 𝑎 = 0; that is, “𝑎” is known. For these two distributions, 𝜃 contains “𝑎.” Irrespective of the distribution, the MLE of 𝑎 is 𝑎̃ = 𝑋₍₁₎ = min₁≤ᵢ≤ₙ 𝑋ᵢ. From (3), the log-likelihood function is

$$L(a,\alpha,\lambda,\theta\mid x)=n\log\alpha+n\log\lambda+\sum_{i=1}^{n}\log g'(x_{i};a,\theta)-\lambda\sum_{i=1}^{n}g(x_{i};a,\theta)+(\alpha-1)\sum_{i=1}^{n}\log\left(1-e^{-\lambda g(x_{i};a,\theta)}\right).\tag{22}$$

First of all, we replace “𝑎” by 𝑋₍₁₎ in (22); then we differentiate (22) with respect to all the unknown parameters and equate these derivatives to zero. The MLES of the unknown parameters are obtained on solving these equations simultaneously. Let 𝛼̃, 𝜆̃, and 𝜃̃ be the maximum likelihood estimators of 𝛼, 𝜆, and 𝜃, respectively.

The following lemma provides the MLE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) at a specified point “𝑥.”

Lemma 8. The MLE of 𝑓(𝑥; 𝑎, 𝛼, 𝜆, 𝜃) at a specified point “𝑥” is

$$\tilde f(x;a,\alpha,\lambda,\theta)=\tilde\alpha\tilde\lambda\,g'(x;\tilde a,\tilde\theta)\,e^{-\tilde\lambda g(x;\tilde a,\tilde\theta)}\left(1-e^{-\tilde\lambda g(x;\tilde a,\tilde\theta)}\right)^{\tilde\alpha-1}.\tag{23}$$

Proof. The proof follows from (1) and the one-to-one property of the MLE.

In the following theorem, we derive the MLE of 𝑅(𝑡).

Theorem 9. The MLE of 𝑅(𝑡) is given by

$$\tilde R(t)=1-\left(1-e^{-\tilde\lambda g(t;\tilde a,\tilde\theta)}\right)^{\tilde\alpha}.\tag{24}$$

Proof. Using Lemma 8 and the invariance property of the MLES,

$$\tilde R(t)=\int_{t}^{\infty}\tilde f(x;a,\alpha,\lambda,\theta)\,dx=\tilde\alpha\tilde\lambda\int_{t}^{\infty}g'(x;\tilde a,\tilde\theta)\,e^{-\tilde\lambda g(x;\tilde a,\tilde\theta)}\left(1-e^{-\tilde\lambda g(x;\tilde a,\tilde\theta)}\right)^{\tilde\alpha-1}dx=\tilde\alpha\int_{\left(1-e^{-\tilde\lambda g(t;\tilde a,\tilde\theta)}\right)}^{1}u^{\tilde\alpha-1}\,du,\tag{25}$$

and the theorem follows.

In the following theorem, we obtain the maximum likelihood estimator of “𝑃.”
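The two approaches can be compared numerically for the member 𝑔(𝑥; 𝑎, 𝜃) = 𝑥 with 𝜆 treated as known for simplicity, so that 𝛼 is the only unknown and the likelihood equation gives 𝛼̃ = −𝑛/𝑇. The sketch below computes the UMVUE 𝑅̂(𝑡) of (13) and the MLE 𝑅̃(𝑡) of (24) from one simulated sample; all parameter values are illustrative assumptions:

```python
import numpy as np

# One simulated sample from the g(x) = x member; lam is treated as known,
# so the likelihood equation for alpha alone gives alpha_tilde = -n/T.
# alpha, lam, n, t0 are illustrative choices, not values from the paper.
rng = np.random.default_rng(3)
alpha, lam, n, t0 = 3.5, 1.0, 30, 0.35

u = rng.uniform(size=n)
x = -np.log(1.0 - u ** (1.0 / alpha)) / lam
T = np.log(1.0 - np.exp(-lam * x)).sum()        # sufficient statistic

L = np.log(1.0 - np.exp(-lam * t0))
alpha_tilde = -n / T
r_tilde = 1.0 - (1.0 - np.exp(-lam * t0)) ** alpha_tilde   # MLE, eq. (24)
r_hat = 1.0 - (1.0 - L / T) ** (n - 1) if T < L else 0.0   # UMVUE, eq. (13)
r_true = 1.0 - (1.0 - np.exp(-lam * t0)) ** alpha
print(r_hat, r_tilde, r_true)
```

Both estimates should lie near the true reliability for a sample of this size, mirroring the comparison carried out in Section 5.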
Let the rv 𝑋 follow the exponentiated exponential distribution with pdf

$$f(x;\alpha,\lambda,\delta)=\alpha\lambda\delta\,x^{\delta-1}e^{-\lambda x^{\delta}}\left(1-e^{-\lambda x^{\delta}}\right)^{\alpha-1};\quad x,\alpha,\lambda,\delta>0,\tag{31}$$

where 𝛼, 𝜆, and 𝛿 are unknown. Denoting by 𝐿₁(𝛼, 𝜆, 𝛿 | 𝑥) the likelihood, the log-likelihood is given by

$$\log L_{1}(\alpha,\lambda,\delta\mid x)=n\log\alpha+n\log\lambda+n\log\delta+(\delta-1)\sum_{i=1}^{n}\log x_{i}-\lambda\sum_{i=1}^{n}x_{i}^{\delta}+(\alpha-1)\sum_{i=1}^{n}\log\left(1-e^{-\lambda x_{i}^{\delta}}\right).\tag{32}$$

Considering the negative log-likelihood, differentiating it with respect to all the unknown parameters, and equating the derivatives to zero, we obtain from (32)

$$\begin{gathered}-\frac{n}{\alpha}-\sum_{i=1}^{n}\log\left(1-e^{-\lambda x_{i}^{\delta}}\right)=0,\\
-\frac{n}{\lambda}+\sum_{i=1}^{n}x_{i}^{\delta}-(\alpha-1)\sum_{i=1}^{n}\frac{x_{i}^{\delta}e^{-\lambda x_{i}^{\delta}}}{1-e^{-\lambda x_{i}^{\delta}}}=0,\\
-\frac{n}{\delta}-\sum_{i=1}^{n}\log x_{i}+\lambda\sum_{i=1}^{n}x_{i}^{\delta}\log x_{i}-(\alpha-1)\lambda\sum_{i=1}^{n}\frac{x_{i}^{\delta}\left(\log x_{i}\right)e^{-\lambda x_{i}^{\delta}}}{1-e^{-\lambda x_{i}^{\delta}}}=0.\end{gathered}\tag{33}$$

The following sample of size 50 is generated from (31), for 𝛼 = 3.5, 𝜆 = 1, and 𝛿 = 1, as follows.

Sample 1. 1.4904, 0.5808, 2.2326, 0.9756, 2.3062, 1.4692, 1.0433, 2.9527, 0.9241, 2.5901, 1.2021, 1.3022, 3.6532, 0.8146, 1.7083, 0.3126, 2.4925, 0.9462, 0.4557, 2.0120, 1.6554, 1.1651, 0.9703, 1.0305, 0.4618, 0.7465, 1.9249, 1.5763, 1.9127, 0.9742, 2.7401, 1.3686, 1.8546, 2.3296, 1.4526, 1.1719, 4.4671, 1.1218, 1.1901, 0.8248, 3.0091, 0.9511, 1.3308, 2.2145, 2.0216, 1.6740, 1.5620, 2.0339, 0.6882, 3.1335.

Assuming that the data represent life spans of items in hours, 𝑅(0.35) = 0.986, and solving (33) simultaneously, we get 𝛼̃ = 3.711821, 𝜆̃ = 1.170381, 𝛿̃ = 1.098677, and 𝑅̃(0.35) = 0.9872.

In order to obtain the maximum likelihood estimator of “𝑃,” we have generated one more sample of size 50 from (31), for 𝛼 = 3, 𝜆 = 1, and 𝛿 = 1, as follows.

Sample 2. 1.2271, 0.9467, 1.9800, 0.6613, 0.4526, 1.5701, 1.0414, 1.9008, 1.2378, 3.7137, 1.4012, 0.5640, 1.6704, 2.0615, 1.4360, 1.6696, 0.7443, 0.9568, 1.6306, 0.2327, 1.0024, 0.8178, 0.7704, 2.1587, 0.8460, 2.3833, 1.0702, 1.1584, 3.1261, 3.7417, 3.0401, 2.4465, 0.6874, 3.2289, 2.4881, 1.6636, 0.4313, 2.4856, 1.2428, 1.5750, 1.1564, 1.4370, 3.1483, 1.1472, 1.4889, 0.8209, 0.6693, 1.2684, 3.2650, 3.2285.

For this sample, we have 𝛼̃ = 2.932394, 𝜆̃ = 1.044041, and 𝛿̃ = 1.106363. Using these results (obtained from Samples 1 and 2) and Theorem 10, we get 𝑃 = 0.5384615 and 𝑃̃ = 0.5586546.

6. Conclusions

We propose a class of distributions, which covers as many as ten exponentiated distributions as special cases. The problems of estimating 𝑅(𝑡) and “𝑃” are considered. UMVUES and MLES are derived. A comparative study of the two methods of estimation is carried out. The estimators of “𝑃” are derived for the cases when 𝑋 and 𝑌 may follow the same, as well as different, distributions. By estimating the sampled pdf to obtain the estimators of 𝑅(𝑡) and “𝑃,” an interrelationship between the two estimation problems is established. A simulation study is performed, and a real-data example is presented.