Chapter 3

Discrete Lifetime Models

Abstract

This chapter is concerned with the current and prospective tools that enable the representation of lifetime data through some model. A major component of such a formulation is to identify the probability distribution of the underlying lifetime. We discuss some possible lifetime distributions and their properties in this chapter. Often, a family of distributions is initially chosen and then a member that adequately describes the features of the data is assumed to be the model. We begin with the Ord family comprising the binomial, Poisson, negative binomial, hypergeometric, negative hypergeometric and beta-Pascal distributions. The power series family and its subfamily, the Lerch distributions, are considered next. Among the distributions discussed in this class are the Hurwitz zeta, Zipf, Zipf-Mandelbrot, Good, geometric, uniform, discrete Pareto, Estoup, Lotka, logarithmic and the zeta models. This is followed by considering the Abel series family consisting of the generalized Poisson, quasi-binomial I, quasi-negative binomial, quasi-binomial II, and quasi-logarithmic series distributions. Further, the Lagrangian family, with special reference to the Geeta and generalized geometric models, is also discussed. The reliability characteristics of each of the above families, such as hazard rate, mean residual life, characterizations based on relationships between reliability functions, recurrence relations, etc., are discussed. Of special interest in reliability modelling in discrete time is the development of discretized versions of acclaimed continuous life distributions. In this category, we review the works on discrete Weibull I, half-logistic, geometric, inverse-Weibull, generalized exponential, gamma and Lindley models and their reliability aspects. We conclude the discussions by presenting other models that do not belong to the above classifications, such as the discrete Weibull II and III and the S-distributions.

Keywords

Ord family; Power series distributions; Abel series; Discrete versions of continuous models; Discrete Weibull reliability properties

3.1 Introduction

The main emphasis of this chapter is to present a discussion of the current and prospective tools that enable the representation of lifetime data through some model. A major component in such a formulation is the distribution of the underlying lifetime. Probability distributions facilitate characterization of the uncertainty prevailing in the data set by identification of the patterns of variation. By summarizing the observations into a mathematical form that contains a few unknown parameters, distributions provide a means to the best possible understanding of the basic data generating mechanism. Lifetime distributions, being probabilistic descriptions of the behaviour of the length of life, depend to a marked extent on the mode of failure of the device under study. The choice of an appropriate distribution for the data set depends largely on our knowledge about the physical characteristics of the process that gives rise to the observations. In some cases, a relationship between the failure mechanism and various concepts encountered in Chapter 2 can be forged. However, in many situations, unlike the case of the normal distribution where the central limit theorem affords a bridge between theory and practice, the discrete distributions we are confronted with are difficult to justify on practical grounds. The real position is that, within the range of observations we have in hand, more than one distribution may pass well-established goodness-of-fit tests in many cases. Characterization properties, being unique to a specified distribution, are the only exact tools that can identify the correct model. When the sample observations provide evidence, with high accuracy, of characteristic properties exhibited by reliability functions such as the hazard rate, mean residual life, etc., pertaining to a particular model, there is reasonable justification to choose that model.
If such a model passes the goodness-of-fit test as well, then we have a match between the observations and the chosen distribution both in terms of comparable physical properties and statistical validity. In Chapter 2, several examples of this nature were illustrated for modelling real data with some specified distributions. The subsequent sections of this chapter present various discrete distributions, along with their properties, that can be used for modelling and analysis of lifetime data. For a detailed discussion of various aspects of reliability modelling, we refer to Blischke and Murthy (2000).

3.2 Families of Distributions

When understanding of the data generating mechanism is lacking due to insufficient details about the physical characteristics, the investigator has to be content with finding the best approximating distribution from a collection of candidate models. Based on a preliminary assessment of the features of the observations at disposal, a mathematical formulation of a suitable model is initially made. A family of distributions that contains enough members with different shapes and varying characteristics can be a useful aid in the process of empirical modelling described above.

3.2.1 Ord Family

The family of distributions has its origin in 1895 when Pearson introduced a system bearing his name in the continuous case. A discrete version of the Pearson system, defined by the difference equation

\[
f(x) - f(x-1) = \frac{(a - x)\, f(x-1)}{b_0 + b_1 x + b_2 x(x-1)},
\tag{3.1}
\]

was studied by Ord (1967a, 1967b), which is referred to as the Ord family in the sequel, where x takes values in a subset T of integers.

The members of the system with non-negative integers as the values of X are:

(i) binomial

\[
f(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \ldots, n;\ 0 < p < 1,\ q = 1 - p;
\tag{3.2}
\]

(ii) negative binomial

\[
f(x) = \binom{k+x-1}{x} p^k (1-p)^x, \quad x = 0, 1, 2, \ldots;\ k > 0,\ 0 < p < 1;
\tag{3.3}
\]

(iii) Poisson

\[
f(x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad x = 0, 1, 2, \ldots,\ \lambda > 0;
\tag{3.4}
\]

(iv) hypergeometric

\[
f(x) = \binom{M}{x} \binom{N-M}{n-x} \Big/ \binom{N}{n}, \quad x = 0, 1, 2, \ldots, n;
\tag{3.5}
\]

(v) negative hypergeometric (beta-binomial)

\[
f(x) = \binom{k+x-1}{x} \binom{N-k-x}{M-x} \Big/ \binom{N}{M}, \quad x = 0, 1, \ldots, M;
\tag{3.6}
\]

(vi) beta-Pascal

\[
f(x) = \frac{A}{k+A} \binom{k+x-1}{x} \binom{A+B-1}{A} \Big/ \binom{k+A+B+x-1}{k+A},
\tag{3.7}
\]

with x = 0, 1, 2, …; A = M − 1; B = N − M.

The first four moments of the family are given by

\[
\begin{aligned}
\mu &= \frac{b_1 + a - 1}{1 - 2 b_2}, \\
\mu_2 &= \frac{(b_1 + a - 1)\left\{ a b_2 + (1 - b_2)(b_1 - 2 b_2) \right\}}{(1 - 2 b_2)^2 (1 - 3 b_2)}, \\
\mu_3 &= \frac{\mu_2 \left\{ 4 b_2 (a + b_1 - 1) + (2 b_1 - 1)(1 - 2 b_2) \right\}}{(1 - 2 b_2)(1 - 4 b_2)},
\end{aligned}
\]

and

\[
\mu_4 = \frac{3 \mu_2^2 (1 - 3 b_2) + \mu_2 (1 + b_2 - 3 b_1 - 6 b_2 \mu) + 3 \mu_3 (b_1 - 2 b_2 + 2 b_2 \mu)}{1 - 5 b_2}.
\]

These will be required for analysing the descriptive characteristics of the distribution, estimating parameters by the method of moments, and in computing standard errors of statistics. All distributions of the system are either J-shaped or unimodal. A crucial issue of interest when families are considered for model selection is the criterion to distinguish their members. Ord (1972) considered the quantities I = μ₂/μ, the variance-to-mean ratio called the index of dispersion, and S = μ₃/μ₂, and proposed a diagram of (S, I). In this diagram, the line S = 2I − 1 defines the binomial, Poisson or negative binomial distributions according as I <, =, > 1. The other distributions are obtained by analysing T = S − 2I + 1. The region S < 2I − 1 gives the beta-binomial, while S > 2I − 1 corresponds to the hypergeometric or beta-Pascal according to whether, in addition, S < 1 or S > 1. Since the parameters of the family are completely determined by the first three moments, the (S, I) diagram can be used to locate a suitable distribution for given data, by replacing the population moments with the corresponding sample moments. Ord (1972) also proposed the plot of u_x = x p_x/p_{x−1} against x, where p_x is the sample frequency. Since successive u_x values are dependent, a smoothing v_x = (u_x + u_{x+1})/2 has also been suggested. For example, the binomial distribution satisfies

\[
u_x = \frac{(n+1)p}{q} - \frac{p}{q}\, x
\]

in the population, so that the plot of (x, x p_x/p_{x−1}) should be approximately linear. An interesting property of the Ord family is that the truncated version of X at either end also belongs to this family.
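Ord's plotting criterion just described can be illustrated with a short numerical sketch (not from the text; the parameter values are arbitrary). For exact binomial probabilities, the points u_x = x p_x/p_{x−1} fall exactly on the line (n+1)p/q − (p/q)x, so a near-linear empirical plot suggests a binomial model:

```python
from math import comb

# Binomial pmf for illustrative parameters n, p
n, p = 10, 0.3
q = 1 - p
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

# Ord's plotting quantity u_x = x * p_x / p_{x-1}, x = 1, ..., n
u = [x * pmf[x] / pmf[x - 1] for x in range(1, n + 1)]

# The line (n+1)p/q - (p/q)x that u_x traces for the binomial law
line = [(n + 1) * p / q - (p / q) * x for x in range(1, n + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(u, line))
```

With observed frequencies in place of the exact pmf, the same computation gives the empirical u_x values to be plotted against x.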

Various reliability functions, like the hazard rate, for the Ord family cannot be expressed in simple forms. However, the members of the family admit a simple relationship between the hazard rate and the mean residual life. Ahmed (1991) has shown that X has the binomial distribution in (3.2) if and only if

\[
E(X \mid X \geq x) = np + qx\, h(x).
\tag{3.8}
\]

Since the mean residual life is m(x) = E(X | X ≥ x) − x, the relationship between m(x) and h(x) characterizing the binomial distribution is obvious. On similar lines, Osaki and Li (1988) have shown that X follows the negative binomial distribution

\[
P(X = n) = \binom{n-1}{x-1} p^x q^{n-x}, \quad n = x, x+1, \ldots,
\]

if and only if

\[
E(X \mid X > m) = \mu + (m + 1 - x)\, h(m+1)/p
\]

for all integers m ≥ x − 1. They have also shown that X has the Poisson distribution in (3.4) if and only if

\[
m(x) = \mu + (1 + x)\, h(x+1).
\]

A more general result for the Ord family in (3.1) rewritten as

\[
f(x+1) - f(x) = -\frac{(x + d)\, f(x)}{b_0 + b_1 x + b_2 x^2},
\]

was established by Nair and Sankaran (1991) through the identity

\[
m(x) = \mu + (a_0 + a_1 x + a_2 x^2)\, h(x+1),
\tag{3.9}
\]

where

\[
b_i = \frac{a_i}{2 a_2 + 1}, \quad i = 0, 1, 2, \qquad \text{and} \qquad d = \frac{a_1 - a_2 - \mu}{2 a_2 + 1}.
\]

The values of a₀, a₁ and a₂ for the distributions in (3.2)–(3.7) are listed in Table 3.1.

Table 3.1

Values of a₀, a₁ and a₂ in (3.9) for the Ord family

| Distribution | a₀ | a₁ | a₂ |
| binomial | q | q | 0 |
| Poisson | 1 | 1 | 0 |
| negative binomial | 1/p | 1/p | 0 |
| hypergeometric | (N − M − n + 1)/N | (N − M − n + 2)/N | 1/N |
| negative hypergeometric | (N − k)/(N − M + 1) | (N − k − 1)/(N − M + 1) | −1/(N − M + 1) |
| beta-Pascal | (k + A + B)/(A − 1) | (k + A + B + 1)/(A − 1) | 1/(A − 1) |
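The identity (3.9) together with the Table 3.1 entries can be checked numerically. The sketch below (illustrative, not part of the text) does so for the beta-Pascal member, with arbitrary parameter values k, A, B, using the conventions m(x) = E(X | X > x) and h(x) = f(x)/P(X ≥ x):

```python
from math import comb

# Beta-Pascal pmf (3.7), truncated far into the negligible tail
k, A, B = 1, 5, 1
f = [A / (k + A) * comb(k + x - 1, x) * comb(A + B - 1, A) / comb(k + A + B + x - 1, k + A)
     for x in range(4000)]
mu = sum(x * fx for x, fx in enumerate(f))

# Table 3.1 entries for the beta-Pascal row
a0, a1, a2 = (k + A + B) / (A - 1), (k + A + B + 1) / (A - 1), 1 / (A - 1)

for x in range(5):
    tail = sum(f[x + 1:])                                    # P(X > x)
    m = sum(t * f[t] for t in range(x + 1, 4000)) / tail     # E(X | X > x)
    h = f[x + 1] / tail                                      # h(x+1)
    assert abs(m - (mu + (a0 + a1 * x + a2 * x**2) * h)) < 1e-6
```

Replacing the pmf and the (a₀, a₁, a₂) row gives the analogous check for the other members of the family.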

Figs 3.1A, 3.1B and 3.1C show the hazard rates of negative binomial, hypergeometric and negative hypergeometric distributions for various values of the parameters.

Figure 3.1 Hazard rate functions for (A) negative binomial distribution, (B) hypergeometric distribution, and (C) negative hypergeometric distribution.

Various distributional properties of the individual models along with results on inference and applications have been discussed in Johnson et al. (1992). Further, characterization problems of the family in terms of truncated moments have been addressed by Glanzel et al. (1984) and Glanzel (1987, 1991). Of particular interest is the result

\[
E(X^2 \mid X \geq x) = P(x)\, E(X \mid X \geq x) + Q(x)
\tag{3.10}
\]

in Glanzel (1991), where P(x) and Q(x) are polynomials of degree at most one, with real coefficients. Obviously, (3.10) leads to a property of the variance residual life in relation to the mean residual life.

There are two related families in connection with (3.1) that deserve special mention. One is the Katz (1965) system defined by

\[
\frac{f(x+1)}{f(x)} = \frac{\alpha + \beta x}{1 + x}, \quad x = 0, 1, 2, \ldots,\ \alpha > 0,\ \beta < 1.
\tag{3.11}
\]

It is easy to see that (3.11) is a special case of (3.1) when b₀ = b₁, b₂ = 0. This restricted model contains the binomial (β < 0), Poisson (β = 0) and negative binomial (0 < β < 1) distributions as particular members. These members can be discriminated by performing a test of the hypothesis H₀: μ₂ = μ against H₁: μ₂ > μ (μ₂ < μ); acceptance of H₀ indicates the Poisson model, while μ₂ > μ (μ₂ < μ) indicates the negative binomial (binomial) model. The test is based on the statistic

\[
Z = \frac{S^2 - \bar{x}}{\bar{x}},
\]

which is closely approximated by a normal distribution N(β/(1 − β), 2/n), where n is the sample size, and S² and x̄ are the variance and mean of the sample. Following (3.9), for the Katz family, we see that

\[
m(x) = \mu + (1 - \beta)^{-1} (1 + x)\, h(x+1)
\]

is satisfied for all x, and conversely. Sudheesh and Nair (2010) established the following characteristic properties for the Ord and Katz families, respectively:

\[
\sigma^2(x) = E(c_0 + c_1 X + c_2 X^2 \mid X > x) + (\mu - m(x))(m(x) - x - 1),
\tag{3.12}
\]

where

\[
c_0 = \mu + \frac{b_0 - b_1 + b_2}{1 - 2 b_2}, \quad c_1 = \frac{b_1 - 1}{1 - 2 b_2}, \quad c_2 = \frac{b_2}{1 - 2 b_2},
\]

with b₀, b₁, b₂ as in the rewritten form of (3.1) given above,

and

\[
\sigma^2(x) = \frac{\alpha + \beta m(x)}{1 - \beta} + (\mu - m(x))(m(x) - x - 1).
\tag{3.13}
\]

For example, in the Poisson case, we have

\[
\sigma^2(x) = \lambda + (\lambda - m(x))(m(x) - x - 1),
\]

or alternatively in terms of the variance and the hazard rate as

\[
\sigma^2(x) = \lambda + (x+1)(x + 1 - \lambda)\, h(x+1) - (x+1)^2 h^2(x+1).
\]

Let τ be the class of real-valued functions C(X) of X. Then, for the Ord family, we have

\[
\inf_{C(X) \in \tau} \frac{V(C(X))}{E^2\left[(d_0 + d_1 X + d_2 X^2)\, \Delta C(X)\right]} = 1,
\]

where d_i = c_i σ^{−1}, i = 0, 1, 2, and Δ is the forward difference operator. In particular, for the Katz family,

\[
\inf_{C(X) \in \tau} \frac{\sigma^2 (1 - \beta)^2\, V[C(X)]}{E^2\left((\alpha + \beta X)\, \Delta C(X)\right)} = 1
\]

provides the lower bound to the variance of a random function C(X). Under some regularity conditions, Nair and Sudheesh (2008) showed that these bounds reduce to the Cramér-Rao and Chapman-Robbins inequalities. The moments of the Katz family are

\[
\mu = \frac{\alpha}{1 - \beta}, \quad \mu_2 = \frac{\alpha}{(1 - \beta)^2}, \quad \mu_3 = \frac{\mu_2 (1 + \beta)}{1 - \beta} \quad \text{and} \quad \mu_4 = 3 \mu_2^2 + \frac{\mu_2 \left\{(\beta + 2)^2 - 3\right\}}{(1 - \beta)^2}.
\]

It has a probability generating function

\[
P(t) = \left( \frac{1 - \beta}{1 - \beta t} \right)^{\alpha / \beta}.
\]
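The Katz moment expressions above are easy to confirm numerically. The following sketch (illustrative parameter values, not from the text) builds the pmf directly from the defining recursion (3.11) and compares the resulting mean and variance with μ = α/(1 − β) and μ₂ = α/(1 − β)²:

```python
# Build a Katz pmf from f(x+1)/f(x) = (alpha + beta*x)/(1 + x), then check
# the stated mean and variance. 0 < beta < 1 gives a negative binomial member.
alpha, beta = 1.2, 0.4
f, probs = 1.0, []
for x in range(500):                 # tail is negligible beyond this range
    probs.append(f)
    f *= (alpha + beta * x) / (1 + x)
total = sum(probs)
probs = [p / total for p in probs]   # f(0) is fixed by normalization

mean = sum(x * p for x, p in enumerate(probs))
var = sum((x - mean) ** 2 * p for x, p in enumerate(probs))
assert abs(mean - alpha / (1 - beta)) < 1e-9
assert abs(var - alpha / (1 - beta) ** 2) < 1e-9
```

Setting β < 0 or letting β → 0 reproduces the binomial and Poisson members in the same way.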

An extension of the Ord family has been provided by Sindhu (2002) wherein the difference equation in (3.1) has the modified form

\[
\frac{f(x+1) - f(x)}{f(x)} = \frac{d_0 + d_1 x + d_2 x^2}{k_0 + k_1 x + k_2 x^2}
\tag{3.14}
\]

with d0,d1,d2,k0,k1Image and k2Image as real constants. Besides containing all members of the Ord family, the system in (3.14) contains several other distributions such as the confluent hypergeometric distribution of Bhattacharya (1966), Borel-Tanner model (Tanner, 1953), and the Haight distribution (Haight, 1961).

Theorem 3.1

A necessary and sufficient condition for the distribution of X to belong to the family in (3.14) is that

\[
d_2 E(X^2 \mid X > x) + (d_1 + 2 k_2) E(X \mid X > x) + d_0 + k_1 - k_2 + (k_0 + k_1 x + k_2 x^2)\, h(x+1) = 0.
\]

Remark 3.1

An extension of Glanzel's result in (3.10) is also possible from the above theorem. The relevant result is that X follows (3.14) if and only if

\[
d_2 E(X^3 \mid X > x) = A(x)\, E(X^2 \mid X > x) + B(x)\, E(X \mid X > x) + C(x),
\]

where A(x) = d₂x + q with q a real constant, and B(x) and C(x) are polynomials of degree at most one with real coefficients.

The approach for relationships between reliability functions in reversed time is the same, and very similar results emerge. Corresponding to (3.9), the identity between r(x) and λ(x) for the Ord family is

\[
r(x+1) = \mu - (a_0 + a_1 x + a_2 x^2)\, \lambda(x)
\]

with a₀ = μ + (b₀ − b₁ + b₂)/(1 − 2b₂), a₁ = (b₁ − 1)/(1 − 2b₂) and a₂ = b₂/(1 − 2b₂).

Gupta et al. (1997) have given a formula for computing the hazard rate as

\[
\frac{1}{h(x)} = 1 + \sum_{i=0}^{\infty} \prod_{u=x}^{x+i} \frac{f(u+1)}{f(u)}
\tag{3.15}
\]

when the ratio of two successive probabilities is known. Utilizing this, for the Katz family, we have

\[
h(x) = \left[ 1 + \frac{x!\, (k-1)!}{(k+x-1)!} \left\{ \binom{k+x}{1+x} \beta + \cdots + \binom{k+x+j-1}{x+j} \beta^j + \cdots \right\} \right]^{-1},
\]

where k = α/β. An alternative expression is

\[
h(x) = \left[ {}_2F_1\!\left(1, \tfrac{\alpha}{\beta} + x;\ 1 + x;\ \beta\right) \right]^{-1},
\tag{3.16}
\]

where F12Image is the hypergeometric function defined as

\[
{}_2F_1(a, b; c; x) = 1 + \frac{ab}{c}\, x + \frac{a(a+1) b(b+1)}{c(c+1)\, 2!}\, x^2 + \cdots,
\tag{3.17}
\]

with c ≠ 0, −1, −2, ….
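The ratio form of (3.15) is convenient computationally, since no hypergeometric routine is needed. The sketch below (illustrative parameters, not from the text) evaluates the Katz hazard rate from successive probability ratios and checks it against the direct definition h(x) = f(x)/P(X ≥ x):

```python
# Hazard rate of a Katz member via 1/h(x) = 1 + sum_i prod_{u=x}^{x+i} f(u+1)/f(u),
# compared with the direct computation from the pmf.
alpha, beta = 1.5, 0.3
ratio = lambda u: (alpha + beta * u) / (1 + u)    # f(u+1)/f(u) from (3.11)

def hazard_from_ratios(x, terms=400):
    s, prod = 1.0, 1.0
    for i in range(terms):                        # partial products accumulate S(x)/f(x)
        prod *= ratio(x + i)
        s += prod
    return 1.0 / s

# Direct pmf from the same recursion, normalized
f, probs = 1.0, []
for u in range(600):
    probs.append(f)
    f *= ratio(u)
total = sum(probs)
probs = [p / total for p in probs]

for x in range(5):
    direct = probs[x] / sum(probs[x:])
    assert abs(hazard_from_ratios(x) - direct) < 1e-9
```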

For an elaborate study on the orthogonal polynomials in the cumulative Ord family and their application to variance bounds, one may refer to the recent work of Afendras et al. (2017).

3.2.2 Power Series Family

The power series family of distributions was initially studied by Kosambi (1949) and Noack (1950). Any distribution that can be represented as

\[
f(x) = \frac{a(x)\, \theta^x}{A(\theta)}, \quad x = 0, 1, \ldots,\ \theta > 0,
\tag{3.18}
\]

where a(x) > 0 and A(θ) = Σ_x a(x)θ^x, is said to be of power series form. In (3.18), A(θ) is called the series function. Sometimes, (3.18) is also referred to as the discrete linear exponential family in view of the specification of (3.18) as

\[
f(x) = \frac{a(x)\, e^{\alpha x}}{b(\alpha)}.
\]

Initially, Noack (1950) considered f(x) with support as the whole set of non-negative integers. Patil (1962) extended the support to any arbitrary non-empty subset T of non-negative integers such that

\[
A(\theta) = \sum_{x \in T} a(x)\, \theta^x,
\tag{3.19}
\]

with a(x) > 0, θ ≥ 0, so that A(θ) is positive, finite and differentiable. Then, (3.18) for x ∈ T is called a generalized power series distribution (GPSD). The system in (3.19) includes the binomial, negative binomial, Poisson and logarithmic series distributions. A truncated GPSD is also a GPSD.

A further extension of the GPSD has been introduced by Gupta (1974) who defined the modified power series distributions (MPSD), specified by the probability mass function of the form

\[
f(x) = \frac{a(x)\, (g(\theta))^x}{B(\theta)}, \quad x \in T,
\tag{3.20}
\]

where a(x) > 0, and g(θ) and B(θ) are positive, finite and differentiable. When g(θ) is invertible, the MPSD reduces to the GPSD. Apart from all distributions belonging to the GPSD family, the modified version contains more distributions, such as the generalized binomial distribution of Jain and Consul (1971). Being the more general form, the properties of (3.20) will be mentioned, so that the results for the GPSD can be deduced from them. The moments of (3.20) are

\[
\mu = E(X) = \frac{B'(\theta)\, g(\theta)}{B(\theta)\, g'(\theta)} = \frac{g(\theta)}{g'(\theta)} \frac{d}{d\theta}\left[\log B(\theta)\right], \qquad \mu_2 = \frac{g(\theta)}{g'(\theta)} \frac{d\mu}{d\theta},
\]

and

\[
\mu_{r+1} = \frac{g(\theta)}{g'(\theta)} \frac{d\mu_r}{d\theta} + r \mu_2 \mu_{r-1}, \quad r = 1, 2, \ldots
\]

The factorial moments can be calculated from the recurrence relation

\[
\mu_{(r+1)} = \frac{g(\theta)}{g'(\theta)} \frac{d\mu_{(r)}}{d\theta} + \mu_{(r)} \mu - r \mu_{(r)}.
\]
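For the Poisson case (g(θ) = θ, B(θ) = e^θ), the central-moment recurrence above can be traced exactly, since each μ_r is a polynomial in θ. The following sketch (not from the text) represents each moment as a coefficient list and recovers the familiar values μ₃ = θ and μ₄ = θ + 3θ²:

```python
# Check mu_{r+1} = (g/g') d(mu_r)/d(theta) + r*mu_2*mu_{r-1} for the Poisson
# member, where g/g' = theta. Polynomials in theta are coefficient lists.
def dpoly(p):                        # derivative of sum_k p[k] * theta^k
    return [k * c for k, c in enumerate(p)][1:] or [0]

def times_theta(p):                  # multiplication by theta (= g/g')
    return [0] + p

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

mu = {1: [0], 2: [0, 1]}             # mu_1 = 0, mu_2 = theta
for r in range(2, 4):
    mu[r + 1] = padd(times_theta(dpoly(mu[r])), pmul([r], pmul(mu[2], mu[r - 1])))

assert mu[3] == [0, 1]               # mu_3 = theta
assert mu[4] == [0, 1, 3]            # mu_4 = theta + 3*theta^2
```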

Consul (1995) has developed characterizations of the exponential family of distributions

\[
f(x) = \exp\left[ x\, p(\alpha) + q(\alpha) + s(x) \right],
\tag{3.21}
\]

which contains numerous discrete probability mass functions and probability density functions of continuous random variables for various choices of p(α), q(α) and s(x). In the discrete case, the translations a(x) = exp[s(x)], B(θ) = exp[−q(α)] and g(θ) = exp[p(α)] reduce (3.21) to the form in (3.20). His main result can be stated in our notation as

\[
m(x) = \mu + \frac{g(\theta)}{g'(\theta)} \frac{\partial}{\partial \theta} \left( \log S(x+1) \right).
\tag{3.22}
\]

As examples, the Lagrangian Poisson distribution

\[
f(x) = \frac{(1 + \alpha x)^{x-1}}{x!}\, \theta^x \exp\left[-\theta (1 + \alpha x)\right], \quad x = 0, 1, \ldots,\ \theta > 0,\ 0 \leq \alpha \theta < 1,
\tag{3.23}
\]

satisfies the property

\[
m(x) = \frac{\theta}{1 - \alpha \theta} + \left( \frac{1}{\theta} - \alpha \right)^{-1} \frac{\partial}{\partial \theta} \log S(x+1);
\]

the Lagrangian negative binomial with probability function

\[
f(x) = \frac{n}{n + m x} \binom{n + m x}{x} \theta^x (1 - \theta)^{n + m x - x}, \quad x = 0, 1, 2, \ldots,\ 0 < \theta < 1,\ n > 0,\ m = 0, 1, \ldots,\ m\theta < 1,
\tag{3.24}
\]

is characterized by

\[
m(x) = \frac{n \theta}{1 - m \theta} + \frac{\theta (1 - \theta)}{1 - m \theta} \frac{\partial}{\partial \theta} \log S(x+1), \quad 0 < \theta < 1.
\tag{3.25}
\]

Notice that (3.24) contains, as special cases, the binomial distribution (m = 0), the negative binomial (m = 1), and the Geeta distribution

\[
f(x) = \frac{1}{\alpha x - 1} \binom{\alpha x - 1}{x} \theta^{x-1} (1 - \theta)^{\alpha x - x}, \quad x = 1, 2, 3, \ldots,
\]

obtained when n = α − 1, m = α, and x is replaced by x − 1, so that the identity in (3.25) is true for x = 1, 2, …, n (binomial), for all x (negative binomial), and for x = 2, 3, … (Geeta). The identity in (3.22) can be used to obtain the lower bound for the variance as

\[
V(C(X)) \geq E^2\left[ g(X)\, \Delta C(X) \right],
\]

where C(x) is any real-valued function of x (Nair and Sudheesh, 2008) and

\[
g(x) = \frac{g(\theta)}{g'(\theta)} \frac{1}{\sigma f(x)} \frac{\partial S(x+1)}{\partial \theta}.
\]
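The reduction of (3.24) to the Geeta distribution stated above can be confirmed directly. The sketch below (illustrative values of θ and α, not from the text) compares the Lagrangian negative binomial pmf at n = α − 1, m = α, evaluated at x − 1, with the Geeta pmf at x:

```python
from math import comb

# Geeta as a special case of the Lagrangian negative binomial (3.24):
# n = alpha - 1, m = alpha, with x replaced by x - 1.
theta, alpha = 0.2, 3          # alpha taken as an integer so comb() applies

gnb = lambda n, m, x: (n / (n + m * x)) * comb(n + m * x, x) \
    * theta**x * (1 - theta) ** (n + m * x - x)

geeta = lambda x: comb(alpha * x - 1, x) / (alpha * x - 1) \
    * theta ** (x - 1) * (1 - theta) ** (alpha * x - x)

for x in range(1, 8):
    assert abs(gnb(alpha - 1, alpha, x - 1) - geeta(x)) < 1e-12
```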

3.2.3 Lerch Family

An important sub-family of the MPSD is the class of Hurwitz-Lerch zeta distributions (HLZD). Various properties of this class, along with their applications to reliability, have been studied by Gupta et al. (2008). The HLZD arises from the general Hurwitz-Lerch zeta function defined by

\[
\phi(z, s, a) = \sum_{k=0}^{\infty} \frac{z^k}{(k+a)^s}, \quad s \in \mathbb{C} \text{ when } |z| < 1,\ a \in \mathbb{C} \setminus \{0, -1, -2, \ldots\},
\tag{3.26}
\]

and Re(s) > 1 when |z| = 1, where ℂ denotes the set of complex numbers. Some known special cases of the function in (3.26) are the Riemann zeta function

\[
\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s} = \sum_{k=0}^{\infty} \frac{1}{(1+k)^s} = \phi(1, s, 1),
\tag{3.27}
\]

Hurwitz zeta function

\[
\zeta(s, a) = \sum_{k=0}^{\infty} \frac{1}{(k+a)^s} = \phi(1, s, a),
\tag{3.28}
\]

and the polylogarithmic function

\[
A(z, s) = \sum_{k=1}^{\infty} \frac{z^k}{k^s} = z \sum_{k=0}^{\infty} \frac{z^k}{(1+k)^s} = z\, \phi(z, s, 1).
\tag{3.29}
\]

For detailed properties of these functions, we refer to Erdelyi et al. (1953). Zornig and Altmann (1995) introduced the three parameter HLZD as a generalization of the Zipf, Zipf-Mandelbrot and Good distributions mentioned below. Subsequently, the family has found some applications as a model in ecology, linguistics, information sciences, statistical physics and survival analysis.

Aksenov and Savageau (2005) defined the Lerch family of distributions through the probability mass function

\[
f(x) = \frac{c\, z^x}{(a+x)^s}, \quad x = 0, 1, 2, \ldots,
\tag{3.30}
\]

where c^{−1} = φ(z, s, a), the Lerch transcendent (Lerch, 1887), also called the Hurwitz-Lerch zeta function, defined in (3.26) for z > 0, a > 0. By virtue of the relationship

\[
\phi(z, s, a) = z^m \phi(z, s, m+a) + \sum_{k=0}^{m-1} \frac{z^k}{(k+a)^s},
\tag{3.31}
\]

we can arrive at the distribution function

\[
F(x) = 1 - z^{x+1} \phi(z, s, a+x+1) / \phi(z, s, a).
\]

The probability generating function and the moment generating function are, respectively,

\[
P(t) = \frac{\phi(tz, s, a)}{\phi(z, s, a)}, \quad |t| \leq 1,
\]

and

\[
M(t) = \frac{\phi(z e^t, s, a)}{\phi(z, s, a)}.
\]

The family is unimodal if s < 0 and a ≥ 1, with mode at

\[
m_0 = 1 + \left[ \frac{1}{z^{1/s} - 1} - a \right],
\]

where [w] is the integer part of w. Furthermore,

\[
\mu = \frac{\phi(z, s-1, a)}{\phi(z, s, a)} - a
\]

and

\[
\mu_2 = (a + \mu)^2 + \frac{\phi(z, s-2, a) - 2 (a + \mu)\, \phi(z, s-1, a)}{\phi(z, s, a)}.
\]
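These moment expressions can be verified by direct summation of the series. The sketch below (illustrative parameter values, not from the text) evaluates φ by truncating its defining series, which is valid for |z| < 1:

```python
# Numerical check of the Lerch mean and variance expressions.
z, s, a = 0.4, 1.2, 0.9
phi = lambda z, s, a: sum(z**k / (k + a) ** s for k in range(2000))
c = 1.0 / phi(z, s, a)
f = [c * z**x / (a + x) ** s for x in range(1500)]     # pmf (3.30)

mean = sum(x * fx for x, fx in enumerate(f))
var = sum((x - mean) ** 2 * fx for x, fx in enumerate(f))

mu = phi(z, s - 1, a) / phi(z, s, a) - a
mu2 = (a + mu) ** 2 + (phi(z, s - 2, a) - 2 * (a + mu) * phi(z, s - 1, a)) / phi(z, s, a)
assert abs(mean - mu) < 1e-9
assert abs(var - mu2) < 1e-9
```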

Regarding the reliability functions, we obtain directly from the definitions that

\[
h(x) = \frac{1}{(a+x)^s\, \phi(z, s, a+x)}
\tag{3.32}
\]

and

\[
\lambda(x) = \frac{z^x (a+x)^{-s}}{\phi(z, s, a) - z^{x+1} \phi(z, s, a+x+1)}.
\tag{3.33}
\]

We observe that the expressions in (3.32) and (3.33) lead to some interesting new recurrence relationships and formulae. Since

\[
\phi(z, s, a+x) = \left[ h(x)\, (a+x)^s \right]^{-1}
\]

and

\[
\phi(z, s, a+x+1) = \frac{1}{z} \left[ \phi(z, s, a+x) - (a+x)^{-s} \right] = \left[ h(x+1)\, (a+x+1)^s \right]^{-1}
\]

from (3.31), we have on eliminating ϕ between the last two equations,

\[
h(x+1) = z \left( \frac{a+x}{a+x+1} \right)^{s} \frac{h(x)}{1 - h(x)},
\tag{3.34}
\]

a recurrence relation for evaluating h(x). One may use the initial value h(0) = c a^{−s} in (3.34) to initiate the recurrence.
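The recurrence (3.34), started from h(0) = c a^{−s}, can be checked against the direct hazard computed from the pmf. The following sketch does so for illustrative parameter values (φ is again evaluated by direct summation, valid for |z| < 1):

```python
# Hazard recurrence (3.34) for the Lerch family versus h(x) = f(x)/P(X >= x).
z, s, a = 0.6, 1.5, 1.0
phi = lambda z, s, a: sum(z**k / (k + a) ** s for k in range(2000))
c = 1.0 / phi(z, s, a)
f = [c * z**x / (a + x) ** s for x in range(1000)]     # pmf (3.30)

h = c * a ** (-s)                                      # initial value h(0)
for x in range(6):
    direct = f[x] / sum(f[x:])                         # f(x)/P(X >= x)
    assert abs(h - direct) < 1e-8
    h = z * ((a + x) / (a + x + 1)) ** s * h / (1 - h) # step (3.34)
```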

The probability mass function and the hazard rate function of the family are presented in Figs 3.2 and 3.3 respectively.

Figure 3.2 Probability mass functions of the Hurwitz-Lerch distribution.
Figure 3.3 Hazard functions of the Hurwitz-Lerch distribution.

To simplify (3.33) further, we apply (Magnus et al., 1966)

\[
\phi(z, s, a) - z^{x+1} \phi(z, s, a+x+1) = \sum_{k=0}^{x} \frac{z^k}{(a+k)^s}
\]

to write a closed-form expression

\[
\lambda(x) = \frac{z^x}{(a+x)^s \sum_{k=0}^{x} z^k (a+k)^{-s}} = \left[ \sum_{k=0}^{x} \left( \frac{a+x}{a+x-k} \right)^{s} \frac{1}{z^k} \right]^{-1}.
\tag{3.35}
\]

The mean residual life is given by

\[
m(x) = \frac{1}{z^{x+1} \phi(z, s, a+x+1)} \sum_{t=x+1}^{\infty} z^t \phi(z, s, a+t).
\]

It now follows that

\[
m(x)\, \phi(z, s, a+x+1) - z\, m(x+1)\, \phi(z, s, a+x+2) = \phi(z, s, a+x+1),
\]

or

\[
m(x+1) - m(x) = \frac{m(x+1)\, (a+x+1)^{-s}}{\phi(z, s, a+x+1)} - 1.
\tag{3.36}
\]

The Lerch family can be written in the form (3.20) with

\[
B(\theta) = \phi(z, s, a), \quad a(x) = (a+x)^{-s} \quad \text{and} \quad g(\theta) = z,
\]

that is, as a modified power series family with parameter z in place of g(θ) = θ. Hence, the moments satisfy the recurrence formula

\[
\mu_{r+1} = z \frac{d\mu_r}{dz} + r \mu_2 \mu_{r-1}, \quad r = 1, 2, \ldots,
\]

for the central moments, and

\[
\mu_{(r+1)} = z \frac{d\mu_{(r)}}{dz} + \mu_{(r)} \mu - r \mu_{(r)}
\]

connecting the factorial moments. With the use of the expressions for μ and μ₂ given above, the central and factorial moments can be computed. In the context of reliability modelling, the following characterizations in terms of reliability concepts are useful.

Theorem 3.2

A discrete random variable X with support {0, 1, 2, …} belongs to the Lerch family in (3.30) if and only if

\[
m(x) = \mu + z \frac{\partial \log S(x+1)}{\partial z}.
\tag{3.37}
\]

Proof

We have

\[
S(x+1) = \sum_{t=x+1}^{\infty} \frac{z^t}{(a+t)^s\, \phi(z, s, a)}.
\]

Differentiating with respect to z, we get

\[
\frac{\partial S(x+1)}{\partial z} = \sum_{t=x+1}^{\infty} \frac{t\, z^{t-1}}{(a+t)^s\, \phi(z, s, a)} - \sum_{t=x+1}^{\infty} \frac{z^t}{(a+t)^s} \frac{\phi'(z, s, a)}{\phi^2(z, s, a)},
\]

where φ′ denotes ∂φ/∂z,

and so

\[
\frac{1}{S(x+1)} \frac{\partial S(x+1)}{\partial z} = \frac{1}{z} E(X \mid X > x) - \frac{\phi'(z, s, a)}{\phi(z, s, a)}.
\tag{3.38}
\]

By virtue of the identity

\[
\phi(z, s-1, a) = \left( a + z \frac{\partial}{\partial z} \right) \phi(z, s, a)
\]

and the fact that

\[
\mu = \frac{\phi(z, s-1, a)}{\phi(z, s, a)} - a,
\]

(3.38) simplifies to (3.37). To prove the sufficiency part, condition (3.37) means that

\[
\sum_{t=x+1}^{\infty} t\, f(t) = \mu \sum_{t=x+1}^{\infty} f(t) + z \sum_{t=x+1}^{\infty} \frac{\partial f(t)}{\partial z}.
\tag{3.39}
\]

Changing x + 1 to x and subtracting (3.39) from the resulting equation yields

\[
x f(x) = \mu f(x) + z \frac{\partial f(x)}{\partial z},
\]

or

\[
z \frac{\partial \log f(x)}{\partial z} = x - \mu = x - \frac{z}{\phi(z, s, a)} \frac{\partial \phi(z, s, a)}{\partial z}.
\]

Upon integrating, we get

\[
f(x) = \frac{z^x A(x)}{\phi(z, s, a)}.
\]

Since Σ_{x=0}^∞ f(x) = 1 and A(x) = (a+x)^{−s}, we have the proof. □

Theorem 3.3

A necessary and sufficient condition for X to have Lerch distribution is that

\[
E\left[ (a+X)^s \mid X > x \right] = \frac{(a+x+1)^s\, h(x+1)}{1 - z}, \quad |z| < 1,\ a > 0,
\tag{3.40}
\]

for all x.

Proof

We have

\[
f(x)\, (a+x)^s = \frac{z^x}{\phi(z, s, a)},
\]

or

\[
\sum_{t=x+1}^{\infty} f(t)\, (a+t)^s = \frac{z^{x+1}}{(1-z)\, \phi(z, s, a)}.
\]

Thus,

\[
E\left[ (a+X)^s \mid X > x \right] = \frac{z^{x+1}}{(1-z)\, S(x+1)\, \phi(z, s, a)} = \frac{(a+x+1)^s\, h(x+1)}{1 - z},
\]

so that the condition is necessary. The sufficiency part follows by retracing the above steps. □
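The necessity part of Theorem 3.3 is easy to confirm numerically. The sketch below (illustrative parameter values, not from the text) compares both sides of (3.40), again evaluating φ by direct summation for |z| < 1:

```python
# Numerical illustration of (3.40):
# E[(a+X)^s | X > x] = (a+x+1)^s h(x+1)/(1-z) for the Lerch family.
z, s, a = 0.5, 2.0, 0.7
phi = lambda z, s, a: sum(z**k / (k + a) ** s for k in range(2000))
c = 1.0 / phi(z, s, a)
f = [c * z**x / (a + x) ** s for x in range(1500)]

for x in range(5):
    tail = sum(f[x + 1:])                                  # P(X > x)
    lhs = sum((a + t) ** s * f[t] for t in range(x + 1, 1500)) / tail
    h_next = f[x + 1] / tail                               # h(x+1) = f(x+1)/P(X >= x+1)
    rhs = (a + x + 1) ** s * h_next / (1 - z)
    assert abs(lhs - rhs) < 1e-6
```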

Theorem 3.4

The random variable X has Lerch distribution if and only if

\[
E(X \mid X \leq x) = \mu + z \frac{\partial \log F(x)}{\partial z}.
\tag{3.41}
\]

Proof

From Theorem 3.2 and the relationship

\[
F(x)\, E(X \mid X \leq x) + S(x+1)\, E(X \mid X > x) = \mu,
\]

we obtain the characterization in (3.41). □

Theorem 3.5

The random variable X has Lerch distribution if and only if the variance residual life is given by

\[
\sigma^2(x) = \mu_2 + z \frac{\partial \log S(x+1)}{\partial z} + z^2 \frac{\partial^2 \log S(x+1)}{\partial z^2}.
\tag{3.42}
\]

Proof

Sudheesh and Nair (2010) have characterized the modified power series family by

\[
\sigma^2(x) = \frac{g(\theta)}{g'(\theta)} \frac{\partial m(x, \theta)}{\partial \theta}.
\tag{3.43}
\]

Since the Lerch family is a special case with g(θ) = θ = z, we have

\[
\sigma^2(x) = z \frac{\partial m(x, z)}{\partial z}.
\tag{3.44}
\]

Substituting (3.37) into (3.44), we obtain (3.42). □

The estimation of parameters of the Lerch family has been addressed by Aksenov and Savageau (2005), who proposed the method of moments and the method of maximum likelihood. Denoting the sample moments by

\[
p_r = \frac{1}{n} \sum_{i=1}^{n} x_i^r
\]

from a random sample (x₁, …, x_n) from the distribution, the equations to be solved in the method of moments are as follows:

\[
\begin{aligned}
&\frac{\phi(\bar{z}, \bar{s}-1, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} - \bar{a} = p_1, \\
&\bar{a}^2 - 2 \bar{a}\, \frac{\phi(\bar{z}, \bar{s}-1, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} + \frac{\phi(\bar{z}, \bar{s}-2, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} = p_2, \\
&-\bar{a}^3 + 3 \bar{a}^2\, \frac{\phi(\bar{z}, \bar{s}-1, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} - 3 \bar{a}\, \frac{\phi(\bar{z}, \bar{s}-2, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} + \frac{\phi(\bar{z}, \bar{s}-3, \bar{a})}{\phi(\bar{z}, \bar{s}, \bar{a})} = p_3,
\end{aligned}
\]

to get the estimates (z̄, s̄, ā) of (z, s, a). The above equations have to be solved numerically. Aksenov and Savageau (2005) also presented formulas for the asymptotic variances and covariances of the estimates obtained by the method of moments and the method of maximum likelihood.
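A quick consistency sketch (not from the text; parameter values are illustrative): when the population moments of a Lerch(z, s, a) law are substituted for p₁, p₂, p₃, the true parameters solve the three equations exactly, which serves as a check on the equations themselves:

```python
# Method-of-moments equations for the Lerch family, checked at the true
# parameters with population moments in place of the sample moments.
z, s, a = 0.5, 2.0, 1.0
phi = lambda z, s, a: sum(z**k / (k + a) ** s for k in range(3000))
c = 1.0 / phi(z, s, a)
f = [c * z**x / (a + x) ** s for x in range(2000)]

# population raw moments p1, p2, p3
p1, p2, p3 = (sum(x**r * fx for x, fx in enumerate(f)) for r in (1, 2, 3))

P = lambda j: phi(z, s - j, a) / phi(z, s, a)          # phi-ratio shorthand
assert abs(P(1) - a - p1) < 1e-8
assert abs(a**2 - 2 * a * P(1) + P(2) - p2) < 1e-8
assert abs(-a**3 + 3 * a**2 * P(1) - 3 * a * P(2) + P(3) - p3) < 1e-8
```

In practice p₁, p₂, p₃ are the sample moments and the system is solved numerically for (z̄, s̄, ā).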

Remark 3.2

(1) The population moments given above are deduced from the general expression for the rth moment about the origin given by

\[
\mu_r' = \frac{1}{\phi(z, s, a)} \sum_{j=0}^{r} \binom{r}{j} (-a)^{r-j} \phi(z, s-j, a);
\]

(2) For the calculation of the coefficients of skewness and kurtosis, the formula for the central moments given by

\[
\mu_r = \frac{1}{\phi(z, s, a)} \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} \phi(z, s-j, a) \left( \frac{\phi(z, s-1, a)}{\phi(z, s, a)} \right)^{r-j}
\]

will be useful.

Gupta et al. (2008) have studied the Lerch family with positive integers as support. They considered the probability mass function of the form

\[
f(x) = \frac{z^x}{T(z, s, a)\, (a+x)^{s+1}}, \quad x = 1, 2, 3, \ldots,\ s \geq 0,\ 0 \leq a \leq 1,\ 0 < z \leq 1,
\]

where

\[
T(z, s, a) = \sum_{x=1}^{\infty} \frac{z^x}{(a+x)^{s+1}} = z\, \phi(z, s+1, a+1).
\]

The basic characteristics of this distribution are evaluated from

\[
\mu = \frac{T(z, s-1, a)}{T(z, s, a)} - a, \qquad \sigma^2 = \frac{T(z, s-2, a)}{T(z, s, a)} - \left( \frac{T(z, s-1, a)}{T(z, s, a)} \right)^2,
\]

and

\[
\mu_{r+1} = z \frac{d\mu_r}{dz} + r \sigma^2 \mu_{r-1}, \quad r = 1, 2, \ldots.
\]

Other functions of interest are the probability generating function given by

\[
P(t) = \frac{T(zt, s, a)}{T(z, s, a)},
\]

the distribution function given by

\[
F(x) = 1 - \frac{z^x\, T(z, s, a+x)}{T(z, s, a)},
\]

the survival function given by

\[
S(x) = \frac{z^{x-1}\, T(z, s, a+x-1)}{T(z, s, a)},
\]

the hazard rate given by

\[
h(x) = \frac{z}{(a+x)^{s+1}\, T(z, s, a+x-1)},
\]

and the reversed hazard rate given by

\[
\lambda(x) = \frac{z^x}{\left[ T(z, s, a) - z^x\, T(z, s, a+x) \right] (a+x)^{s+1}}.
\]

The maximum likelihood method is proposed to estimate the parameters with the estimators solved from

\[
E(X) = \bar{x}, \qquad \frac{1}{n} \sum_{i=1}^{n} \log(a + x_i) = E\left( \log(a + X) \right)
\]

and

\[
\frac{1}{n} \sum_{i=1}^{n} \frac{1}{a + x_i} = E\left( \frac{1}{a + X} \right).
\]

They further observed that the likelihood equations are the same as the method of moments equations.

Zornig and Altmann (1995) and Kemp (2010) provided a list of members of the Lerch family. We now present some of these distributions and their properties. It may be noted that the expressions for the probability generating functions differ when the supports are different. For instance,

\[
\begin{aligned}
P(t) &= \phi(zt, s, a) / \phi(z, s, a), && x = 0, 1, 2, \ldots, \\
     &= t\, \phi(zt, s, a+1) / \phi(z, s, a+1), && x = 1, 2, \ldots, \\
     &= \frac{t \left[ \phi(zt, s, a+1) - (zt)^b \phi(zt, s, a+b+1) \right]}{\phi(z, s, a+1) - z^b \phi(z, s, a+b+1)}, && x = 1, 2, \ldots, b.
\end{aligned}
\]

Geometric Distribution

The geometric distribution is a member of all the families discussed so far, and hence enjoys the properties of all families. In addition to some of the characteristic properties already discussed in the preceding chapter, we present a few more results here that are relevant to reliability studies. The foremost among them is the no-ageing (lack of memory) property of the geometric lifetimes. It states that X is geometric with probability mass function

\[
f(x) = q^x p, \quad x = 0, 1, 2, \ldots,
\tag{3.45}
\]

if and only if

\[
P(X \geq s + t \mid X \geq s) = P(X \geq t)
\tag{3.46}
\]

for all t, s = 0, 1, 2, …, or equivalently,

\[
P(X = s + t \mid X \geq s) = P(X = t).
\]

Writing (3.46) as

\[
\frac{P(X \geq s + t)}{P(X \geq s)} = P(X \geq t),
\]

we can recognize the left-hand side as the distribution of the residual life (X − s | X ≥ s). Thus, the residual life of a geometric lifetime at any age is the same as the original lifetime, thereby justifying the name 'no-ageing' property.
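The no-ageing property (3.46) is immediate from P(X ≥ t) = q^t, as the following short sketch (arbitrary choice of p) confirms numerically:

```python
# No-ageing (memoryless) check for the geometric law f(x) = q^x p:
# P(X >= s+t | X >= s) equals P(X >= t) for all non-negative integers s, t.
p = 0.3
q = 1 - p
surv = lambda t: q**t            # P(X >= t) for the geometric distribution

for s in range(6):
    for t in range(6):
        assert abs(surv(s + t) / surv(s) - surv(t)) < 1e-12
```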

Being a characteristic property, the geometric law provides a suitable model for devices that do not age with time. This is consistent with our earlier observation that the hazard rate, mean residual life and variance residual life are constants independent of the age of the device. Further, all these properties are equivalent to one another. The notion of no-ageing, also defined as

\[
S(x+y)=S(x)S(y) \tag{3.47}
\]

for all $x,y=0,1,2,\dots$, forms the basis of many ageing concepts discussed in the next chapter. Also, (3.47) suitably extended to higher dimensions is employed to generate various forms of multivariate geometric distributions; see Chapter 7 for details. A simple extension of the geometric law is possible by assuming $X$ to take values $a, a+d, a+2d,\dots$, with

\[
P(X=a+nd)=q^{n}p,\qquad n=0,1,2,\dots, \tag{3.48}
\]

in which case also the no-ageing property applies when $X$ is replaced by $a+Xd$ in (3.46). More interesting results on (3.48), in connection with the ageing behaviour of $X$, can be seen in Chapter 4. Some additional properties of the geometric distribution are as follows:

(i) If $X_1,\dots,X_n$ are independent geometric random variables with parameter $p$, then $X_1+X_2+\cdots+X_n$ is distributed as negative binomial with parameters $(n,p)$;
(ii) If $X_1,\dots,X_n$ are independent geometric random variables with parameters $p_1,p_2,\dots,p_n$ respectively, then $X_{(1)}=\min(X_1,\dots,X_n)$ is a geometric random variable with parameter $p=1-\prod_{i=1}^{n}(1-p_i)$;
(iii) If $X_1$ and $X_2$ are independent random variables with the same parameter $p$, then $X_{(2)}=\max(X_1,X_2)$ has distribution

\[
\begin{aligned}
P(X_{(2)}=x)&=P(X_1=x,X_2<x)+P(X_2=x,X_1<x)+P(X_1=X_2=x)\\
&=2q^{x}p(1-q^{x})+q^{2x}p^{2}\\
&=2q^{x}p-q^{2x}p-q^{2x+1}p.
\end{aligned}
\]

The method of proof can be extended readily to the case of $n$ variables.
(iv) If $X$ is negative binomial with

\[
f(x\mid K=k)=\binom{k+x-1}{k-1}q^{x}p^{k}
\]

and $K$ has geometric distribution

\[
P(K=y)=p_{1}q_{1}^{\,y-1},\qquad y=1,2,\dots,
\]

then the mixture (unconditional distribution of $X$) is geometric $\big(\frac{pp_{1}}{1-pq_{1}}\big)$. Thus, the geometric distribution arises as a mixture.
(v) Let $Y$ be a continuous random variable with exponential distribution

\[
f_{1}(y)=\lambda e^{-\lambda y},\qquad y>0,\ \lambda>0.
\]

If we denote by $X$ and $U$, the integer part (the largest integer not exceeding $Y$) and the fractional part of $Y$, respectively, Steutel and Thiemann (1989) have shown that $X$ and $U$ are independent and, further, that $X$ has the geometric distribution in (3.45) with $p=1-e^{-\lambda}$; see also Deng and Chhikara (1990). Arising from the above result, it is also shown that the order statistics $X_{k:n}$, $k=1,2,\dots,n$, from the geometric distribution satisfy

\[
X_{k:n}\stackrel{d}{=}\sum_{j=0}^{k-1}X_{1:n-j}+\left[\sum_{j=0}^{k-1}\frac{U_{n-j}}{n-j}\right],
\]

where $\stackrel{d}{=}$ denotes equality in distribution, all variables on the right-hand side are independent, and $[Y]=X$ as stated above. Also,

\[
\sum_{j=0}^{k-1}X_{1:n-j}\le X_{k:n}\le\sum_{j=0}^{k-1}X_{1:n-j}+k-1,
\]

in which the inequalities denote the usual stochastic orderings discussed in Chapter 4.

It is more informative to pursue the above results in a more general framework. Let $Y$ be a continuous random variable with probability density function $f_{1}(y)$ satisfying $P(Y\in D)=1$, where $D$ is an open subset of $\mathbb{R}$, the one-dimensional Euclidean space. Suppose $D$ can be divided into mutually exclusive regions $R_{1},\dots,R_{k}$ with corresponding regions $S_{1},\dots,S_{k}$ such that the maps $T_{i}(Y):R_{i}\to S_{i}$, $i=1,2,\dots,k$, are injective and continuously differentiable. Then $U=T(Y)$ has probability density function

\[
g(u)=\begin{cases}\displaystyle\sum_{i=1}^{k}f_{1}\big(T_{i}^{-1}(u)\big)\,J\big(T_{i}^{-1}(u)\big), & u\in T(R_{i}),\\[6pt] 0, & u\notin T(R_{i}).\end{cases}
\]

Here, $J$ stands for the absolute Jacobian of the transformation and $T^{-1}:T(D)\to D$ is the inverse of $T$ (see Hoffmann-Jørgensen, 1994). Applying the above transformation theorem with $D=\mathbb{R}$, $T(Y)=U$ and $T(D)=[0,1)$, we have $J=1$ on each region $R_{x}$, $x=1,2,\dots$. When $Y$ is a continuous random variable with $X$ as integer part and $U$ as fractional part, we have the following result.

Theorem 3.6

The distribution of U is continuous with probability density function

\[
g(u)=\sum_{x=-\infty}^{\infty}f_{1}(x+u),\qquad 0<u<1.
\]

Example

Starting with the exponential case

\[
g(u)=\sum_{x=0}^{\infty}\lambda e^{-\lambda(x+u)}=\frac{\lambda e^{-\lambda u}}{1-e^{-\lambda}},\qquad 0<u<1,
\]

which is again the exponential distribution truncated at unity.

The application of the above result in reliability analysis is quite clear. On the one hand, we can convert exponential data to geometric data and vice versa. Also, by simulating exponential observations, their integer parts will be geometric. Some of the reliability functions of one distribution can be expressed in terms of the other. For example, in the exponential case, the mean residual life function is

\[
\begin{aligned}
m(y)&=E(Y-y\mid Y>y)=E(Y)=E(X+U)=E(X)+E(U)\\
&=E(X-x\mid X>x)+E(U)-1\\
&=\frac{1}{p}+\left(\frac{1}{\lambda}-\frac{e^{-\lambda}}{1-e^{-\lambda}}\right)-1.
\end{aligned}
\]

Thus, the difference between the mean residual lives of the exponential and geometric lifetimes is $\frac{1}{\lambda}-\frac{e^{-\lambda}}{1-e^{-\lambda}}$. Even if failure times are observed at integer values, a very good approximation to continuous measurement can be made, by virtue of the above result, from the estimated value of $p$, employing the equation $p=1-e^{-\lambda}$.
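The decomposition above can be checked by simulation. The following Python sketch (illustrative only, not part of the original development; the rate, sample size and seed are arbitrary choices) draws exponential observations and compares the sample means of the integer and fractional parts with the values implied by the formulas above.

```python
import math, random

# Sketch: simulate the Steutel-Thiemann decomposition. For Y ~ exponential(lam),
# X = [Y] should be geometric with p = 1 - e^{-lam}, and U = Y - X should have
# mean 1/lam - e^{-lam}/(1 - e^{-lam}).
random.seed(1)
lam, n = 0.7, 200_000
ys = [random.expovariate(lam) for _ in range(n)]
xs = [int(y) for y in ys]              # integer parts
us = [y - int(y) for y in ys]          # fractional parts

p = 1 - math.exp(-lam)
mean_x = sum(xs) / n
mean_u = sum(us) / n
assert abs(mean_x - (1 - p) / p) < 0.05            # geometric mean q/p
assert abs(mean_u - (1 / lam - math.exp(-lam) / p)) < 0.01
print("integer/fractional parts behave as predicted")
```

The same simulation can, conversely, be used to generate geometric observations from exponential ones, as noted in the text.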

(vi) The other distributional properties are:

\[
\text{probability generating function }P(t)=p(1-qt)^{-1},\qquad\text{mean }\mu=\frac{q}{p},\qquad\text{variance }\mu_{2}=\frac{q}{p^{2}},
\]

and the $r$th factorial moment is

\[
\mu_{(r)}=r!\left(\frac{q}{p}\right)^{r}.
\]

Theorem 3.7

For any discrete random variable X, we have

\[
E\big(Xh(X)\big)\ge\frac{2\mu^{2}}{E(X^{2})+\mu}
\]

with equality holding if and only if X is geometric.

Proof

We have

\[
E\big(Xh(X)\big)=\sum_{x=0}^{\infty}\frac{x\,f^{2}(x)}{S(x)}.
\]

From the Cauchy–Schwarz inequality, we find

\[
\left(\sum_{x=0}^{\infty}\frac{x\,f^{2}(x)}{S(x)}\right)\left(\sum_{x=0}^{\infty}xS(x)\right)\ge\left(\sum_{x=0}^{\infty}x\,f(x)\right)^{2}=\mu^{2}. \tag{3.49}
\]

By some straightforward calculations, we then get

\[
\sum_{x=0}^{\infty}xS(x)=\frac{E(X^{2})+\mu}{2}.
\]

Hence, from (3.49), we have the inequality stated in the theorem. The equality sign in (3.49) holds if and only if

\[
\big[xS(x)\big]^{\frac{1}{2}}=K\left(\frac{x}{S(x)}\right)^{\frac{1}{2}}f(x),
\]

which gives $h(x)=K$, a constant, characterizing the geometric law. □
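The inequality of Theorem 3.7 can be confirmed numerically. The sketch below (an illustration, not part of the original text) evaluates both sides for a geometric law, where equality should hold, and for a Poisson law, where the inequality should be strict; both supports are truncated at a point where the remaining mass is negligible.

```python
import math

# Sketch: check E(X h(X)) >= 2*mu^2/(E(X^2)+mu), with equality for the
# geometric law (constant hazard) and strict inequality for the Poisson law.
def lhs_rhs(f):
    """f: dict x -> P(X = x) on a finite truncated support."""
    xs = sorted(f)
    S = {x: sum(f[t] for t in xs if t >= x) for x in xs}   # S(x) = P(X >= x)
    lhs = sum(x * f[x] ** 2 / S[x] for x in xs)            # E(X h(X))
    mu = sum(x * f[x] for x in xs)
    ex2 = sum(x * x * f[x] for x in xs)
    return lhs, 2 * mu ** 2 / (ex2 + mu)

p, N = 0.4, 60
geom = {x: (1 - p) ** x * p for x in range(N)}
lam = 2.0
pois = {x: math.exp(-lam) * lam ** x / math.factorial(x) for x in range(N)}

l_g, r_g = lhs_rhs(geom)
l_p, r_p = lhs_rhs(pois)
assert abs(l_g - r_g) < 1e-6      # equality for the geometric law
assert l_p > r_p + 1e-3           # strict inequality for the Poisson law
print("Theorem 3.7 verified numerically")
```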

(viii) The geometric distribution is a member of the Lerch family with $s=0$ and $a=1$. The reliability properties follow from the general formulas for this family and also from our discussions in Chapter 2.

Discrete Uniform Distribution

The discrete uniform distribution arises from (3.30) when $z=1$, $s=0$ and $a=1$, with probability mass function

\[
f(x)=\frac{1}{b},\qquad x=1,\dots,b. \tag{3.50}
\]

It has distribution function $F(x)=\frac{x}{b}$ and survival function $S(x)=\frac{b-x+1}{b}$. Various distributional characteristics are as follows:

\[
\mu=\frac{b+1}{2},\qquad\mu_{2}=\frac{b^{2}-1}{12},\qquad P(t)=\frac{t(1-t^{b})}{b(1-t)},\quad t\neq1.
\]

If $X_{1},\dots,X_{n}$ are independent random variables with distribution in (3.50), then $X_{(1)}=\min X_{i}$ and $X_{(n)}=\max X_{i}$, $i=1,2,\dots,n$, have respective probability mass functions

\[
P(X_{(1)}=x)=\frac{(b-x+1)^{n}-(b-x)^{n}}{b^{n}}
\]

and

\[
P(X_{(n)}=x)=\frac{x^{n}-(x-1)^{n}}{b^{n}}.
\]

Among the reliability functions, the hazard rate is $h(x)=\frac{1}{b-x+1}$ and the mean residual life function is $m(x)=\frac{b-x+1}{2}$, giving

\[
h(x)\,m(x)=\frac{1}{2},
\]

a constant, characterizing the uniform distribution as a special case of the negative hypergeometric distribution (see Chapter 1). The variance residual life is found to be

\[
\sigma^{2}(x)=\frac{(b-x+1)(b-x-1)}{12}=\frac{m(x)\big(m(x)-1\big)}{3}.
\]

Also, the functions in reversed time turn out to be

\[
\lambda(x)=\frac{1}{x},\ \text{a reciprocal linear function},\qquad r(x)=\frac{x}{2},\ \text{a linear function of }x,
\]

with their product

\[
\lambda(x)\,r(x)=\frac{1}{2},
\]

a constant. Further, the reversed variance residual life becomes

\[
v(x)=\frac{x(x-2)}{12}=\frac{r(x)\big(r(x)-1\big)}{3},
\]

which is also a characterization of the discrete uniform distribution.
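The constant-product identity above is easy to confirm with exact arithmetic. The following Python sketch (illustrative, not part of the original text) does so for $b=10$, under the conventions $S(x)=P(X\ge x)$, $h(x)=f(x)/S(x)$ and $m(x)=E(X-x\mid X>x)$ assumed here.

```python
from fractions import Fraction as F

# Sketch: exact-arithmetic check of h(x) m(x) = 1/2 for the discrete
# uniform law (3.50), f(x) = 1/b on {1,...,b}.
b = 10
f = {x: F(1, b) for x in range(1, b + 1)}
S = lambda x: sum(f[t] for t in range(x, b + 1))   # S(x) = P(X >= x)

for x in range(1, b):                  # m(x) is undefined at x = b
    h = f[x] / S(x)
    m = sum((t - x) * f[t] for t in range(x + 1, b + 1)) / S(x + 1)
    assert h == F(1, b - x + 1)
    assert m == F(b - x + 1, 2)
    assert h * m == F(1, 2)
print("h(x) m(x) = 1/2 on the whole support")
```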

Discrete Pareto

The discrete Pareto distribution, also known as the Zipf distribution or the Riemann zeta distribution, is specified by the probability mass function (Fig. 3.4)

\[
f(x)=Cx^{-(s+1)},\qquad x=1,2,\dots,\ s>0, \tag{3.51}
\]

where

\[
C=\left[\sum_{x=1}^{\infty}x^{-(s+1)}\right]^{-1}=\frac{1}{\zeta(s+1)}=\frac{1}{\phi(1,s+1,1)},
\]

and $\zeta(\cdot)$ is the Riemann zeta function defined earlier in (3.27). As a model of random phenomena, the distribution in (3.51) has been used in the literature in different contexts. It is used to model the sizes or ranks of objects chosen randomly from certain types of populations; for example, the frequency of words in long sequences of text approximately obeys the discrete Pareto law. Zipf's (1949) law states that, out of a population of $N$ elements, the frequency of elements of rank $x$ is given by (3.51), and hence the name Zipf distribution when used in linguistics. Seal (1947) used it to model the number of insurance policies per individual, and Fox and Lasker (1983) used it as the distribution of surnames; see also Good (1953) and Haight (1966). Various distributional characteristics are obtained as special cases of the corresponding results of the Lerch family. In particular, moments of (3.51) are given by

\[
\mu_{r}=\frac{\zeta(s-r+1)}{\zeta(s+1)},\qquad r<s,
\]

and the moments are infinite for $r\ge s$. The maximum likelihood estimator $\hat{s}$ is obtained by solving

\[
\frac{1}{n}\sum_{i=1}^{n}\log x_{i}=-\frac{\zeta'(\hat{s}+1)}{\zeta(\hat{s}+1)}.
\]
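The moment formula above lends itself to a quick numerical check. In the Python sketch below (illustrative, not part of the original text), the zeta function is approximated by a long partial sum, an assumption made only for this sketch.

```python
# Sketch: check mu_r = zeta(s-r+1)/zeta(s+1) for the discrete Pareto (Zipf)
# law (3.51), for r < s, using long partial sums in place of zeta.
N = 200_000
zeta = lambda t: sum(k ** -t for k in range(1, N))

s, r = 3.5, 1
C = 1.0 / zeta(s + 1)                      # normalizing constant of (3.51)
moment = sum(x ** r * C * x ** -(s + 1) for x in range(1, N))
assert abs(moment - zeta(s - r + 1) / zeta(s + 1)) < 1e-9
print(round(moment, 6))
```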

Some special cases of the discrete Pareto are the Estoup model, given by

\[
f(x)=\frac{c}{x},\qquad x=1,2,\dots,b, \tag{3.52}
\]

(who noted that the rank $r$ and frequency $f$ in a French text were related by a hyperbolic law, which translates into the form (3.52)) and the Lotka law given by

\[
f(x)=\frac{c}{x^{2}},
\]

where $c=\frac{6}{\pi^{2}}$. Lotka's law results from the finding that the number of authors making $n$ contributions is about $n^{-a}$ of those making one contribution, in which, empirically, $a$ is nearly equal to two. When the value of $x$ in (3.51) is truncated at $b$, we obtain

\[
f(x)=\frac{1}{H(b,s)}\,x^{-s},\qquad x=1,2,\dots,b, \tag{3.53}
\]

which is a proper distribution with $H(b,s)=\sum_{x=1}^{b}x^{-s}$. In this case,

\[
F(x)=\frac{H(x,s)}{H(b,s)}\quad\text{and}\quad S(x)=\frac{H(b,s)-H(x-1,s)}{H(b,s)},
\]

and accordingly

\[
\lambda(x)=\frac{1}{x^{s}H(x,s)}\quad\text{and}\quad h(x)=\frac{1}{x^{s}\big[H(b,s)-H(x-1,s)\big]}.
\]

The mean and variance are given by

\[
\mu=\frac{H(b,s-1)}{H(b,s)}\quad\text{and}\quad\mu_{2}=\frac{H(b,s-2)}{H(b,s)}-\frac{H^{2}(b,s-1)}{H^{2}(b,s)},
\]

and the moment generating function is given by

\[
M(t)=\frac{1}{H(b,s)}\sum_{j=1}^{b}e^{jt}j^{-s}.
\]

A distribution that is unrelated to the above, but bears the name 'zeta distribution', was introduced by Haight (1969) with probability mass function

\[
f(x)=(2x-1)^{-\sigma}-(2x+1)^{-\sigma},\qquad x=1,2,\dots,\ \sigma>0.
\]

The distribution and survival functions are, respectively,

\[
F(x)=1-(2x+1)^{-\sigma}
\]

and

\[
S(x)=(2x-1)^{-\sigma}.
\]

Thus, the hazard and reversed hazard functions have simple closed form expressions as

\[
h(x)=1-\left(\frac{2x-1}{2x+1}\right)^{\sigma}
\]

and

\[
\lambda(x)=\left[\left(\frac{2x+1}{2x-1}\right)^{\sigma}-1\right]\Big/\Big[(2x+1)^{\sigma}-1\Big].
\]

Further, the mean residual life and reversed mean residual life are found to be

\[
m(x)=\mu+\frac{g(x)\,h(x)}{1-h(x)}-x
\]

and

\[
r(x)=x-\mu+g(x)\,\lambda(x),
\]

where

\[
g(x)=\frac{\mu\big(1-(2x+1)^{-\sigma}\big)-\zeta_{x}(\sigma)+x(2x+1)^{-\sigma}}{(2x-1)^{-\sigma}-(2x+1)^{-\sigma}},
\]

$\zeta_{x}(\sigma)=\sum_{t=1}^{x}(2t-1)^{-\sigma}$ and $\mu=(1-2^{-\sigma})\zeta(\sigma)$ is the mean.

Figure 3.4 Hazard functions of discrete Pareto distribution.

The variance of the distribution is

\[
\mu_{2}=(1-2^{1-\sigma})\zeta(\sigma-1)-\mu^{2}.
\]

Notice that the mean is infinite for $\sigma\le1$ and the variance is infinite for $\sigma\le2$. Some further literature on discrete Pareto versions can be seen in Lin and Hu (2001) and Shan (2005).
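Because the survival function of the Haight zeta model has a closed form, its hazard rate is easily validated numerically. The Python sketch below (illustrative, not part of the original text) does so for an arbitrary $\sigma$.

```python
# Sketch: the Haight zeta survival function S(x) = (2x-1)^(-sigma) yields
# the closed-form hazard h(x) = 1 - ((2x-1)/(2x+1))**sigma; quick check.
sigma = 1.5
f = lambda x: (2 * x - 1) ** -sigma - (2 * x + 1) ** -sigma
S = lambda x: (2 * x - 1) ** -sigma

for x in range(1, 50):
    assert abs(f(x) / S(x) - (1 - ((2 * x - 1) / (2 * x + 1)) ** sigma)) < 1e-12
# the pmf telescopes, so the total mass approaches S(1) = 1
assert abs(sum(f(x) for x in range(1, 10_000)) - 1) < 1e-4
print("Haight zeta hazard confirmed")
```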

Figs. 3.4–3.7 show hazard rate functions of the discrete Pareto, Estoup, Lotka and truncated discrete Pareto distributions.

Figure 3.5 Hazard functions of Estoup distribution.
Figure 3.6 Hazard functions of Lotka distribution.
Figure 3.7 Hazard functions of truncated discrete Pareto.

Hurwitz-Zeta Distribution

A second major distribution belonging to the Lerch family is the Hurwitz-zeta distribution defined by

\[
f(x)=c\,(a-1+x)^{-s},\qquad x=1,2,\dots,\ s>1, \tag{3.54}
\]

with $c^{-1}=\zeta(s,a)=\phi(1,s,a)$ being the Hurwitz-zeta function defined in (3.28). This distribution is mainly used in linguistics in connection with ranking problems. We note that

\[
\zeta(s,a)-\zeta(s,a+1)=a^{-s},
\]

so that

\[
F(x)=1-\frac{\zeta(s,a+x)}{\zeta(s,a)}\quad\text{and}\quad S(x)=\frac{\zeta(s,a+x-1)}{\zeta(s,a)}.
\]

Accordingly, the reliability functions have the expressions

\[
h(x)=\big[(a-1+x)^{s}\,\zeta(s,a+x-1)\big]^{-1},\qquad\lambda(x)=\big[(a-1+x)^{s}\big(\zeta(s,a)-\zeta(s,a+x)\big)\big]^{-1}.
\]

Also,

\[
\zeta(s,a+x-1)=\big[(a+x-1)^{s}h(x)\big]^{-1},\qquad\zeta(s,a+x)=\big[(a+x)^{s}h(x+1)\big]^{-1},
\]

along with

\[
\zeta(s,a+x-1)-\zeta(s,a+x)=(a+x-1)^{-s},
\]

leads to

\[
h(x+1)=\left(\frac{a+x-1}{a+x}\right)^{s}\frac{h(x)}{1-h(x)},\qquad x=1,2,\dots \tag{3.55}
\]

Formula (3.55) enables the evaluation of successive hazard rates starting from $h(1)=\frac{1}{a^{s}\zeta(s,a)}$. The expression for $\lambda(x)$ given above admits further simplification on applying the relationship

\[
\zeta(s,a)-\zeta(s,a+k)=\sum_{n=0}^{k-1}(a+n)^{-s},
\]

which leads to a compact expression of the form

\[
\lambda(x)=\frac{(a+x-1)^{-s}}{\sum_{n=0}^{x-1}(a+n)^{-s}}.
\]

Consequently, we have the recurrence relation

\[
\lambda(x+1)=\lambda(x)\left[\lambda(x)+\left(\frac{a+x}{a+x-1}\right)^{s}\right]^{-1},
\]

which can be determined iteratively by setting the initial value as $\lambda(1)=1$. We can write the mean residual life as

\[
m(x)=\frac{1}{\zeta(s,a+x)}\sum_{t=x+1}^{\infty}\zeta(s,a+t-1)=\frac{1}{\zeta(s,a+x)}\sum_{t=x}^{\infty}\zeta(s,a+t).
\]

Hence,

\[
m(x)\,\zeta(s,a+x)-m(x+1)\,\zeta(s,a+x+1)=\zeta(s,a+x).
\]

By virtue of

\[
\zeta(s,a+x+1)=\zeta(s,a+x)-(a+x)^{-s}, \tag{3.56}
\]

we have a recurrence relation

\[
m(x+1)=\frac{\zeta(s,a+x)\big(m(x)-1\big)}{\zeta(s,a+x)-(a+x)^{-s}}.
\]

The reversed mean residual life is given by

\[
r(x)=\frac{1}{\zeta(s,a)-\zeta(s,a+x)}\sum_{t=1}^{x-1}\big[\zeta(s,a)-\zeta(s,a+t)\big].
\]

Further simplification yields

\[
r(x+1)=\big(1+r(x)\big)\,\frac{\sum_{t=1}^{x}(a+t-1)^{-s}}{\sum_{t=1}^{x+1}(a+t-1)^{-s}}.
\]

The probability mass function and hazard rate function of Hurwitz-zeta distribution are presented in Figs 3.8 and 3.9, respectively.

Figure 3.8 Probability mass functions of Hurwitz-zeta distribution.
Figure 3.9 Hazard functions of Hurwitz-zeta distribution.

A special case of the Hurwitz-zeta model that is in use is the Zipf-Mandelbrot law (Mandelbrot, 1959, 1966) specified by

\[
f(x)=\frac{c}{a-1+x},\qquad x=1,2,\dots,n,
\]

obtained when $s=1$ in (3.54) and the support is truncated at $n$. See Zornig and Altmann (1995) for a discussion of the distribution.

A third category of distributions in the Lerch family makes use of the general term in the polylogarithmic function $A(z,s)$ defined in (3.29).

Accordingly, the general form of the probability mass function in such cases is the Good distribution (Good, 1953) given by

\[
f(x)=C\,\frac{z^{x}}{x^{s}},\qquad x=1,2,\dots,\ 0<z<1, \tag{3.57}
\]

with $C^{-1}=A(z,s)=z\,\phi(z,s,1)$. An important member of the above form, repeatedly used in the sequel, is the logarithmic series distribution (Fig. 3.8).

Setting s=1Image in (3.57), we have the probability mass function of the distribution as

\[
f(x)=C\,\frac{z^{x}}{x},\qquad x=1,2,\dots,\ 0<z<1, \tag{3.58}
\]

where $C=-\big[\log(1-z)\big]^{-1}$. It is also a member of the GPSD discussed above. It is easily seen that

\[
f(x)=\frac{z(x-1)}{x}\,f(x-1),\qquad x=2,3,\dots,\ 0<z<1. \tag{3.59}
\]

Historical remarks and the origin of the logarithmic series distribution and its properties can be found in Johnson et al. (1992). Regarding distributional properties, it may be noted that the probability generating function has the form

\[
P(t)=\frac{\log(1-zt)}{\log(1-z)}.
\]

The factorial moment of order r is

\[
\mu_{(r)}=\frac{-1}{\log(1-z)}\cdot\frac{(r-1)!\,z^{r}}{(1-z)^{r}},
\]

from which the mean and variance are obtained as

\[
\mu=\frac{-z}{(1-z)\log(1-z)},\qquad\mu_{2}=\frac{-z}{(1-z)^{2}\log(1-z)}\left(1+\frac{z}{\log(1-z)}\right).
\]

Various central moments can be found from the above value of $\mu_{2}$ and the recurrence relation

\[
\mu_{r+1}=z\,\frac{d\mu_{r}}{dz}+r\,\mu_{2}\,\mu_{r-1}.
\]

From the recurrence relation for $f(x)$ mentioned in (3.59), the ratio $\frac{f(x+1)}{f(x)}$ is always less than unity, so that $f(x)$ is always a decreasing function. The distribution has a long tail, and it resembles the geometric distribution for large values of $x$. We can identify the logarithmic series distribution by plotting $\frac{xf_{x}}{(x-1)f_{x-1}}$ against $x$, where $f_{x}$ is the observed frequency of $x$ (see Eq. (3.59)). The plot is expected to give a straight line parallel to the $x$-axis with intercept $z$. In problems of estimation of $z$, the maximum likelihood estimator $\hat{z}$ is obtained by solving the equation

\[
\bar{x}=\frac{-\hat{z}}{(1-\hat{z})\log(1-\hat{z})},
\]

with $\bar{x}$ being the mean of a random sample drawn from the distribution. The reversed hazard rate function is

\[
\lambda(x)=\frac{z^{x}/x}{\sum_{t=1}^{x}z^{t}/t},
\]

giving

\[
\lambda(x+1)=\frac{zx\,\lambda(x)}{zx\,\lambda(x)+x+1}. \tag{3.60}
\]

On the other hand, the hazard rate function is the reciprocal of an infinite series, viz.,

\[
h(x)=\left[\sum_{t=0}^{\infty}\frac{x\,z^{t}}{x+t}\right]^{-1}.
\]

Hence,

\[
h(x+1)=\frac{zx\,h(x)}{(x+1)\big(1-h(x)\big)}.
\]
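Both recursions can be verified numerically. The Python sketch below (illustrative, not part of the original text) truncates the infinite support at a large point, an approximation made only for this check.

```python
import math

# Sketch: check the reversed-hazard recursion (3.60) and the hazard
# recursion for the logarithmic series law, truncating the support at N.
z = 0.6
C = -1.0 / math.log(1 - z)                    # normalizing constant
N = 400
f = {x: C * z ** x / x for x in range(1, N + 1)}
S = lambda x: sum(f[t] for t in range(x, N + 1))   # S(x) = P(X >= x)

# reversed hazard lambda(x) = f(x)/F(x) against (3.60)
F, lam_prev = 0.0, None
for x in range(1, 25):
    F += f[x]
    lam = f[x] / F
    if lam_prev is not None:
        rec = z * (x - 1) * lam_prev / (z * (x - 1) * lam_prev + x)
        assert abs(lam - rec) < 1e-12
    lam_prev = lam

# hazard h(x) = f(x)/S(x) against its recursion
h = lambda x: f[x] / S(x)
for x in range(1, 24):
    rec = z * x * h(x) / ((x + 1) * (1 - h(x)))
    assert abs(h(x + 1) - rec) < 1e-10
print("logarithmic series recursions verified")
```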

Fig. 3.10 gives hazard rate functions of Good distribution.

Figure 3.10 Hazard functions of Good distribution.

The rest of the reliability functions characterizing the logarithmic distribution are given in the following theorem.

Theorem 3.8

A discrete random variable $X$ with support $\{1,2,\dots\}$ follows the logarithmic series distribution if and only if, for all $x$,

\[
m(x)=\frac{(1+x)\,h(x+1)}{1-z}-x. \tag{3.61}
\]

Proof

Assume that the relationship (3.61) is true for the random variable X. Then,

\[
\frac{1}{S(x+1)}\sum_{t=x+1}^{\infty}t\,f(t)=\frac{1+x}{1-z}\cdot\frac{f(x+1)}{S(x+1)},
\]

or

\[
\sum_{t=x+1}^{\infty}t\,f(t)=\frac{1+x}{1-z}\,f(x+1).
\]

This gives

\[
x\,f(x)=\frac{x}{1-z}\,f(x)-\frac{1+x}{1-z}\,f(x+1)
\]

and

\[
\frac{f(x+1)}{f(x)}=\frac{zx}{1+x},\qquad x=1,2,\dots \tag{3.62}
\]

Consequently, from (3.59), $X$ has the logarithmic series distribution. The converse is proved by retracing the steps from (3.62).  □

From (3.61), the mean residual life function of X is determined as

\[
m(x)=\frac{(1+x)\,h(x+1)}{1-z}-x,
\]

with the expression for $h(x)$ given above.

Some general aspects of various distributions and their applications, other than those connected with reliability modelling, are given in the works of Vilaplana (1987), Gut (2005), Doray and Luong (1995, 1997), Kulasekera and Tonkyn (1992) and Panaretos (1989). Figs 3.1A, B, C and Fig. 3.3 provide the shapes of the hazard rate functions of the modified power series and Lerch families.

3.2.4 Abel Series Distributions

The family of Abel series distributions is named after the Norwegian mathematician Niels Henrik Abel (1802–1829), who found the series whose general term specifies the probability mass function of the family. Abel polynomials (Comtet, 1994) are defined in terms of the sequence $p_{x}(\theta)$, where

\[
p_{x}(\theta)=\alpha_{x}(\theta,\lambda)=\frac{\theta(\theta+x\lambda)^{x-1}}{x!},\qquad x=0,1,2,\dots, \tag{3.63}
\]

with the first term $\alpha_{0}=1$ and the second term $\alpha_{1}=\theta$. Successive differentiation of (3.63) with respect to $\theta$ yields

\[
\frac{d^{r}}{d\theta^{r}}\,\alpha_{x}(\theta,\lambda)=\alpha_{x-r}(\theta+r\lambda,\lambda),\qquad r=0,1,2,\dots. \tag{3.64}
\]

For a fixed $\lambda$, not related to $\theta$, $\alpha_{x}(\theta,\lambda)$ forms the basis of a set of polynomials in $\theta$. Hence, for a finite, positive and successively differentiable function $P_{1}(\theta)=P_{1}(\theta,\lambda)$ in $\theta$, we have

\[
P_{1}(\theta)=\sum_{x=0}^{\infty}\alpha_{x}(\theta,\lambda)\,g(x\lambda), \tag{3.65}
\]

where

\[
g(x\lambda)=\left[\frac{d^{x}}{d\theta^{x}}P_{1}(\theta)\right]_{\theta=-x\lambda}.
\]

Accordingly, restricting the parameter space to $(\theta,\lambda)\in\Theta\times\Lambda$ for which the terms in the expansion (3.65) are non-negative, we conclude that

\[
f(x)=\frac{\alpha_{x}(\theta,\lambda)\,g(x\lambda)}{P_{1}(\theta)},\qquad x=0,1,2,\dots, \tag{3.66}
\]

qualifies to be a probability mass function of a discrete random variable $X$. The distribution in (3.66) will be designated as the Abel series distribution (ASD), with series function $P_{1}(\theta)$. It may be noted that the truncated version of the ASD is also an ASD.

Some special notation and functions are required to describe the properties of the ASD, as given in Charalambides (1990). They are the shift operator $E$ and the usual differential operator $D$, combined to give

\[
DE^{\lambda}a(u)=\frac{d\,a(u+\lambda)}{du}.
\]

The Bell partition polynomials

\[
B_{n}=B_{n}(a_{1},a_{2},\dots,a_{n};b_{1},b_{2},\dots,b_{n})
\]

are defined by

\[
B_{n}=\sum\frac{n!\,b_{k}}{k_{1}!\cdots k_{n}!}\left(\frac{a_{1}}{1!}\right)^{k_{1}}\left(\frac{a_{2}}{2!}\right)^{k_{2}}\cdots\left(\frac{a_{n}}{n!}\right)^{k_{n}},\qquad n=1,2,\dots,\quad B_{0}=1,
\]

where the summation is extended over all partitions of $n$, that is, over all $k_{1},\dots,k_{n}\ge0$ satisfying $k_{1}+2k_{2}+\cdots+nk_{n}=n$, and $k=k_{1}+k_{2}+\cdots+k_{n}$ is the number of parts of the partition. The special cases $b_{k}=1$, $k=1,2,\dots$, and $b_{k}=(s)_{k}=s(s-1)\cdots(s-k+1)$, $k=1,2,\dots$, for real $s$, of the Bell polynomials are denoted by $B_{n}(a_{1},a_{2},\dots,a_{n})$ and $B_{n}^{(s)}(a_{1},a_{2},\dots,a_{n})$. Also, let

\[
a(t)=\sum_{k=0}^{\infty}a_{k}\,\frac{t^{k}}{k!}
\]

be the generating function of the sequence $(a_{k})$. With the above notations, Charalambides (1990) has then shown that

(i) the probability generating function of the ASD is

\[
P(t)=\big[P_{1}(\theta)\big]^{-1}\Big\{\exp\big[\theta\,q^{-1}(t\,DE^{\lambda})\big]P_{1}(u)\Big\}_{u=0},
\]

where $q(w)=we^{-\lambda w}$, $q^{-1}$ is the inverse of $q$ and $DE^{\lambda}a(u)=\frac{d\,a(u+\lambda)}{du}$;
(ii) the factorial moments are given by

\[
\mu_{(r)}(\theta)=\big[P_{1}(\theta)\big]^{-1}\Big[B_{r}(Q_{1},\dots,Q_{r})\,E^{\theta}P_{1}(u)\Big]_{u=0},\qquad r=0,1,2,\dots,
\]

with

\[
Q_{m}=(-1)^{m-1}\theta\lambda^{m-1}D^{m}(1-\lambda D)^{-m}B_{m-1}^{(m)}(C_{1},C_{2},\dots,C_{m-1}),\qquad m=1,2,\dots,
\]

where

\[
C_{k}=\frac{(k+1-\lambda D)(1-\lambda D)^{-1}}{k+1},\qquad k=1,2,\dots.
\]

Various basic members of the ASD are obtained by the Abel series expansions of the exponential, binomial and logarithmic functions. We will discuss these distributions in some detail.

Generalized Poisson Distribution

Take

\[
P_{1}(\theta)=e^{\theta},\qquad\frac{d^{r}P_{1}(\theta)}{d\theta^{r}}=e^{\theta},
\]

and consequently $g(x\lambda)=e^{-x\lambda}$. Thus, we have the probability mass function as

\[
f(x;\theta,\lambda)=\frac{\theta(\theta+x\lambda)^{x-1}}{x!}\,e^{-(\theta+x\lambda)},\qquad x=0,1,2,\dots,\ \theta>0,\ 0<\lambda<1, \tag{3.67}
\]

representing the generalized Poisson distribution of Consul and Jain (1973a, 1973b) who obtained it from the Lagrangian expansion. A comprehensive study of this distribution is available in Consul (1989). By way of properties of the distribution, we have:

\[
\text{(i)}\ \ f(x)=\frac{(\theta+x\lambda)^{x-1}e^{-\lambda}}{x\,(\theta+x\lambda-\lambda)^{x-2}}\,f(x-1);\qquad\text{(ii)}\ \ \mu=\frac{\theta}{1-\lambda},\quad\mu_{2}=\frac{\theta}{(1-\lambda)^{3}};
\]
\[
\text{(iii)}\ \ (1-\lambda)\mu_{r+1}=\theta\mu_{r}+\theta\frac{\partial\mu_{r}}{\partial\theta}+\lambda\frac{\partial\mu_{r}}{\partial\lambda},\qquad r=0,1,2,\dots, \tag{3.68}
\]

and

\[
\mu_{r+1}=\frac{r\theta}{(1-\lambda)^{3}}\,\mu_{r-1}+\frac{1}{1-\lambda}\left[\frac{\partial\mu_{r}(t)}{\partial t}\right]_{t=1},\qquad r=1,2,3,\dots,
\]

where $\mu_{r}(t)$ is $\mu_{r}$ with $\lambda$ and $\theta$ replaced by $\lambda t$ and $\theta t$, respectively.

The distribution in (3.67) is a modified power series distribution. We can estimate the parameters by the method of moments as

\[
\hat{\theta}=\left(\frac{m_{1}^{3}}{m_{2}}\right)^{1/2}
\]

and

\[
\hat{\alpha}=\left(\frac{m_{2}}{m_{1}^{3}}\right)^{1/2}-\frac{1}{m_{1}},
\]

where $\alpha=\frac{\lambda}{\theta}$ and $m_{i}$ is the $i$th sample moment, $i=1,2$.

Regarding reliability characteristics, which have not been studied so far, we first observe that the hazard rate function is

\[
h(x)=\frac{\dfrac{(\theta+x\lambda)^{x-1}e^{-\lambda x}}{x!}}{\displaystyle\sum_{t=x}^{\infty}\frac{(\theta+t\lambda)^{t-1}e^{-t\lambda}}{t!}}. \tag{3.69}
\]

Also, (3.69) satisfies the relationship

\[
h(x+1)=\frac{(\theta+\lambda+x\lambda)^{x}\,e^{-\lambda}}{(\theta+x\lambda)^{x-1}(x+1)}\cdot\frac{h(x)}{1-h(x)}, \tag{3.70}
\]

from which successive values of $h(x)$ can be calculated starting with $h(0)=e^{-\theta}$.
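This recursive computation can be validated against direct evaluation of (3.69). The Python sketch below (illustrative, not part of the original text) computes the pmf in log-space and truncates the support at a large point; both are approximations made only for this check.

```python
import math

# Sketch: check the hazard recursion (3.70) for the generalized Poisson law
# f(x) = theta*(theta + x*lam)^(x-1) * exp(-(theta + x*lam)) / x!.
theta, lam = 1.2, 0.3
N = 300
f = [math.exp(math.log(theta) + (x - 1) * math.log(theta + x * lam)
              - (theta + x * lam) - math.lgamma(x + 1)) for x in range(N)]

S = lambda x: sum(f[x:])                 # S(x) = P(X >= x)
h_direct = lambda x: f[x] / S(x)

hx = math.exp(-theta)                    # h(0) = f(0) = e^{-theta}
for x in range(0, 15):
    hx = ((theta + lam + x * lam) ** x * math.exp(-lam)
          / ((theta + x * lam) ** (x - 1) * (x + 1))) * hx / (1 - hx)
    assert abs(hx - h_direct(x + 1)) < 1e-9
print("generalized Poisson hazard recursion verified")
```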

Secondly, the mean residual life function is obtained from the theorem due to Consul (1995) given below, with parametrization $(\theta,\alpha)$.

Theorem 3.9

A discrete random variable X has generalized Poisson distribution if and only if

\[
m(x)=\mu\left[1+\frac{\partial\log S(x+1)}{\partial\theta}\right]-x,
\]

for $x=0,1,2,\dots$.

Reliability functions in reversed time obey similar properties. We have

\[
\lambda(x)=\frac{\dfrac{(\theta+\lambda x)^{x-1}e^{-\lambda x}}{x!}}{\displaystyle\sum_{t=0}^{x}\frac{(\theta+\lambda t)^{t-1}e^{-\lambda t}}{t!}}
\]

and therefrom

\[
\frac{1}{\lambda(x+1)}=1+\frac{1}{\lambda(x)}\cdot\frac{(x+1)\,e^{\lambda}(\theta+\lambda x)^{x-1}}{(\theta+\lambda+\lambda x)^{x}}.
\]

Differentiating the distribution function

\[
F(x;\theta,\alpha)=\sum_{t=0}^{x}\frac{(1+\alpha t)^{t-1}}{t!}\,\theta^{t}e^{-\theta(1+\alpha t)},\qquad 0\le\alpha<\frac{1}{\theta},\ \alpha=\frac{\lambda}{\theta},
\]

and simplifying the resulting expression, we obtain

\[
E(X\mid X<x+1)=\mu\left[1+\frac{\partial\log F}{\partial\theta}\right].
\]

Thus, the reversed mean residual life has the expression

\[
r(x+1)=x+1-\mu\left(1+\frac{\partial\log F}{\partial\theta}\right).
\]

When $\lambda=0$ $(\alpha=0)$, all the above results reduce to those of the Poisson distribution. In the Poisson process, the probability of occurrence of a single event remains constant, whereas in the generalized Poisson case the probability depends on the previous occurrences. On the basis of experimental evidence, the generalized case appears to model situations in which failures occur rarely in short periods of time but the frequency of such events is of interest over longer durations of time.

Quasi-Binomial Distribution I

Set $P_{1}(p)=(p+q)^{n}$,

\[
g(x\lambda)=n(n-1)\cdots(n-x+1)(1-p-x\lambda)^{n-x}
\]

and

\[
(p+q)^{n}=\sum_{x=0}^{n}\binom{n}{x}p(p+x\lambda)^{x-1}(q-x\lambda)^{n-x}.
\]

When $q=1-p$, we have the probability mass function

\[
f(x)=\binom{n}{x}p(p+x\lambda)^{x-1}(1-p-x\lambda)^{n-x},\qquad x=0,1,2,\dots,n,\ 0\le p\le1,\ -\frac{p}{n}<\lambda<\frac{1-p}{n}. \tag{3.71}
\]

A random variable $X$ having its probability mass function as in (3.71) is said to have a quasi-binomial distribution I (QBD-I). The probabilities of $X$ are computed from the recurrence relation

\[
f(x+1)=\frac{(n-x)(p+\lambda x)}{(x+1)(1-p-x\lambda)}\left(1+\frac{\lambda}{p+x\lambda}\right)^{x}\left(1-\frac{\lambda}{1-p-x\lambda}\right)^{n-x-1}f(x).
\]

QBD-I is unimodal, and the value of the parameter $\lambda$ substantially affects various characteristics of the distribution. Expressions for the mean and variance are

\[
\mu=E(X)=p\sum_{k=0}^{n-1}\frac{n!}{(n-k-1)!}\,\lambda^{k}
\]

and

\[
\mu_{2}=\sum_{k=0}^{n-2}(k+1)p^{2}\lambda^{k}\frac{n!}{(n-k-2)!}+\frac{1}{2}p\lambda\sum_{k=0}^{n-2}(k+1)(k+4)\lambda^{k}\frac{n!}{(n-k-2)!}+\mu-\mu^{2},
\]

respectively. When $\lambda>(<)\,0$, the mean of the QBD-I is larger (smaller) than the mean of the binomial. Further distributional aspects and estimation of the parameters have been discussed by Consul and Famoye (2006).
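The normalization of (3.71), which rests on Abel's binomial identity, and the closed-form mean and variance above can all be checked directly. The following Python sketch (illustrative only; the parameter values are arbitrary admissible choices) does so by brute-force summation over the finite support.

```python
import math

# Sketch: verify that the QBD-I probabilities (3.71) sum to one and that the
# closed-form mean and variance match direct computation.
n, p, lam = 6, 0.2, 0.05          # requires -p/n < lam < (1-p)/n

def f(x):
    return (math.comb(n, x) * p * (p + x * lam) ** (x - 1)
            * (1 - p - x * lam) ** (n - x))

assert abs(sum(f(x) for x in range(n + 1)) - 1) < 1e-12

mean = sum(x * f(x) for x in range(n + 1))
mu = p * sum(math.factorial(n) / math.factorial(n - k - 1) * lam ** k
             for k in range(n))
assert abs(mean - mu) < 1e-12

var = sum(x * x * f(x) for x in range(n + 1)) - mean ** 2
fall2 = lambda k: math.factorial(n) / math.factorial(n - k - 2)
mu2 = (sum((k + 1) * p ** 2 * lam ** k * fall2(k) for k in range(n - 1))
       + 0.5 * p * lam * sum((k + 1) * (k + 4) * lam ** k * fall2(k)
                             for k in range(n - 1))
       + mu - mu ** 2)
assert abs(var - mu2) < 1e-12
print("QBD-I identities verified")
```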

The hazard rate function of QBD-I is given by

\[
h(x)=\frac{\binom{n}{x}p(p+x\lambda)^{x-1}(1-p-x\lambda)^{n-x}}{\sum_{t=x}^{n}\binom{n}{t}p(p+t\lambda)^{t-1}(1-p-t\lambda)^{n-t}},
\]

and there exists a recurrence formula of the form

\[
\frac{1}{h(x)}=1+\frac{1}{h(x+1)}\left[\frac{n-x}{x+1}\left(\frac{p+x\lambda+\lambda}{1-p-x\lambda}\right)\left(1+\frac{\lambda}{p+x\lambda}\right)^{x-1}\left(1-\frac{\lambda}{1-p-x\lambda}\right)^{n-x-1}\right] \tag{3.72}
\]

with $h(0)=f(0)=(1-p)^{n}$. Likewise, the reversed hazard rate satisfies the recurrence formula

\[
\frac{1}{\lambda(x+1)}=1+\frac{1}{\lambda(x)}\left[\frac{x+1}{n-x}\left(\frac{1-p-x\lambda}{p+x\lambda+\lambda}\right)\left(1+\frac{\lambda}{p+x\lambda}\right)^{1-x}\left(1-\frac{\lambda}{1-p-x\lambda}\right)^{x+1-n}\right]
\]

with $\lambda(0)=1$.

The probability mass functions and hazard rate functions of quasi-binomial distribution are presented in Figs 3.11 and 3.12, respectively.

Figure 3.11 Probability mass functions of quasi-binomial distribution-I.
Figure 3.12 Hazard rate functions of quasi-binomial distribution-I.

Quasi-Negative Binomial Distribution

Taking $P_{1}(a)=(b-a)^{-n}$, we have

\[
P_{1}(a)=\sum_{x=0}^{\infty}\binom{n+x-1}{x}a(a+x\lambda)^{x-1}(b+x\lambda)^{-n-x}. \tag{3.73}
\]

Thus,

\[
f(x;a,\lambda)=(b-a)^{n}\binom{n+x-1}{x}a(a+x\lambda)^{x-1}(b+x\lambda)^{-n-x},
\]

or

\[
f(x)=\binom{n+x-1}{x}p(p+x\phi)^{x-1}(q+x\phi)^{-n-x},\qquad x=0,1,2,\dots,\ q-p=1, \tag{3.74}
\]

defines a probability distribution, which is called the quasi-negative binomial distribution (QNBD). In (3.74), $p=\frac{a}{b-a}$, $q=\frac{b}{b-a}$ and $\phi=\frac{\lambda}{b-a}$ in the original expansion. Some special cases of (3.73) are as follows:
(i) the negative binomial when $\phi=0$;
(ii) the quasi-geometric distribution (QGD)

\[
f(x)=p(p+x\phi)^{x-1}(q+x\phi)^{-1-x},\qquad x=0,1,2,\dots,
\]

when $n=1$; and
(iii) the geometric distribution when $n=1$ and $\phi=0$.

The probability mass function, for successive values of x, satisfies the recursive formula

\[
f(x+1)=\frac{n+x}{x+1}\left(\frac{p+x\phi+\phi}{q+x\phi+\phi}\right)\left(1+\frac{\phi}{p+x\phi}\right)^{x-1}\left(1+\frac{\phi}{q+x\phi}\right)^{-n-x}f(x),
\]

and the moments are evaluated from the generating function

\[
P(t)=\big[P_{1}(a)\big]^{-1}\Big\{\exp\big[a\,q^{-1}(t\,DE^{\lambda})\big]P_{1}(u)\Big\}_{u=0}.
\]

Eq. (3.74) yields the survival function as

\[
S(x)=\sum_{t=x}^{\infty}\binom{n+t-1}{t}p(p+t\phi)^{t-1}(q+t\phi)^{-n-t}. \tag{3.75}
\]

From (3.74) and (3.75), we find that the hazard rate function can be computed from the recursive formula

\[
\frac{1}{h(x)}=1+\frac{n+x}{x+1}\left(1+\frac{\phi}{p+x\phi}\right)^{x-1}\left(1+\frac{\phi}{1+p+x\phi}\right)^{-n-x}\left(1+\frac{1}{p+x\phi+\phi}\right)^{-1}\frac{1}{h(x+1)}.
\]

Similarly, the reversed hazard rate function satisfies the recursive formula

\[
\frac{1}{\lambda(x+1)}=1+\frac{1}{\lambda(x)}\cdot\frac{x+1}{n+x}\left(\frac{q+x\phi+\phi}{p+x\phi+\phi}\right)\left(1+\frac{\phi}{q+x\phi}\right)^{x+n}\left(1+\frac{\phi}{p+x\phi}\right)^{-(x-1)}.
\]

When $n=1$, we have the results for the quasi-geometric law. Models that lead to the QNBD, as well as various properties of the QNBD, have been discussed by Bilal and Hassan (2006) and Hassan and Bilal (2008). They have found the mean and variance to be

\[
\mu=np\,{}_{2}F_{0}[1,n+1;\,;\phi]
\]

and

\[
\mu_{2}=np\,{}_{2}F_{0}[1,n+1;\,;\phi]+p(p+2\phi)n(n+1)\,{}_{2}F_{0}[2,n+2;\,;\phi]+p\phi^{2}n(n+1)(n+2)\,{}_{2}F_{0}[3,n+3;\,;\phi]-\mu^{2},
\]

respectively, where

\[
{}_{2}F_{0}[1,n+1;\,;\phi]=\sum_{j=0}^{\infty}(1)_{(j)}(n+1)_{(j)}\frac{\phi^{j}}{j!}.
\]

Quasi-Logarithmic Series Distribution

In this case, an extension of the usual logarithmic series distribution is obtained by taking $P_{1}(a)=-\log(1-a)$. Then, the Abel series expansion yields

\[
-\log(1-a)=\sum_{x=1}^{\infty}\frac{a(a+x\lambda)^{x-1}}{x(1-x\lambda)^{x}},
\]

so that

\[
f(x)=\frac{\theta\,a(a+x\lambda)^{x-1}}{x(1-x\lambda)^{x}},\qquad 0<a<1,\ \theta^{-1}=-\log(1-a),
\]

$x=1,2,3,\dots$, is a probability mass function, representing the quasi-logarithmic distribution. The hazard rate function becomes

\[
h(x)=\frac{(a+x\lambda)^{x-1}}{x(1-x\lambda)^{x}}\Bigg/\sum_{t=x}^{\infty}\frac{(a+t\lambda)^{t-1}}{t(1-t\lambda)^{t}}.
\]

Recurrence relations for $h(x)$ and $\lambda(x)$ can be obtained in the usual manner. For instance, a recursive relation for the hazard function is given by

\[
\frac{1}{h(x)}=1+\frac{x}{x+1}\left(\frac{a+\lambda+x\lambda}{1-\lambda-x\lambda}\right)\left(1+\frac{\lambda}{a+x\lambda}\right)^{x-1}\left(1+\frac{\lambda}{1-\lambda-x\lambda}\right)^{x}\frac{1}{h(x+1)}.
\]

Quasi-Binomial Distribution II

The quasi-binomial distribution II (QBD-II) can be derived in different ways. We obtain it here as a property of Abel polynomials and the corresponding series expansion. A sequence $s_{n}(a)$ is of the binomial type if

\[
s_{n}(a+b)=\sum_{x=0}^{n}\binom{n}{x}s_{x}(a)\,s_{n-x}(b)
\]

for all b. With the choice of

\[
s_{n}(\theta)=\theta(\theta+n\lambda)^{n-1}=n!\,\alpha_{n}(\theta,\lambda), \tag{3.76}
\]

we have

\[
\frac{d^{x}}{d\theta^{x}}\,\alpha_{n}(\theta+b,\lambda)=\alpha_{n-x}(\theta+b+x\lambda,\lambda)
\]

from (3.64).

Hence,

\[
g(x\lambda)=\alpha_{n-x}(b,\lambda).
\]

Using the expansion in (3.66), we have

\[
\alpha_{n}(\theta+b,\lambda)=\sum_{x=0}^{n}\alpha_{x}(\theta,\lambda)\,\alpha_{n-x}(b,\lambda)
\]

so that

\[
\frac{1}{s_{n}(\theta+b,\lambda)}\sum_{x=0}^{n}\binom{n}{x}s_{x}(\theta,\lambda)\,s_{n-x}(b,\lambda)=1.
\]

Substituting for $s_{n}$ from (3.76), we see that

\[
f(x)=\binom{n}{x}\frac{\theta b}{\theta+b}\cdot\frac{(\theta+x\lambda)^{x-1}\big[b+(n-x)\lambda\big]^{n-x-1}}{(\theta+b+n\lambda)^{n-1}}, \tag{3.77}
\]

$x=0,1,2,\dots,n$, $\theta>0$, $b>0$ and $\lambda>-\frac{\theta}{n}$. The distribution defined by (3.77) is called QBD-II. When $\lambda=0$, it reduces to the binomial model. The reparametrization $p=\frac{\theta}{\theta+b+n\lambda}$ and $\alpha=\lambda(\theta+b+n\lambda)^{-1}$ provides another QBD-II version

\[
f(x)=\binom{n}{x}\frac{(1-p-n\alpha)\,p}{1-n\alpha}(p+\alpha x)^{x-1}(1-p-\alpha x)^{n-x-1},\qquad 0<p<1,\ -\frac{p}{n}<\alpha<\frac{1-p}{n},\ x=0,1,2,\dots,n. \tag{3.78}
\]

The mean and variance of the form in (3.77) are

\[
\mu=\frac{n\theta}{\theta+b}
\]

and

\[
\mu_{2}=\frac{n^{2}\theta b}{(\theta+b)^{2}}-n(n-1)\frac{\theta b}{\theta+b}\sum_{i=0}^{n-2}\frac{(n-2)_{(i)}\,\lambda^{i}}{(\theta+b+n\lambda)^{i+1}},
\]

respectively. From (3.78), we can write expressions for the hazard rate and reversed hazard rate functions, respectively, as

\[
h(x)=\frac{\binom{n}{x}(p+\alpha x)^{x-1}(1-p-\alpha x)^{n-x-1}}{\sum_{t=x}^{n}\binom{n}{t}(p+\alpha t)^{t-1}(1-p-\alpha t)^{n-t-1}}
\]

and

\[
\lambda(x)=\frac{\binom{n}{x}(p+\alpha x)^{x-1}(1-p-\alpha x)^{n-x-1}}{\sum_{t=0}^{x}\binom{n}{t}(p+\alpha t)^{t-1}(1-p-\alpha t)^{n-t-1}}.
\]

They satisfy the recursive relationships

\[
\frac{1}{h(x)}=1+\frac{1}{h(x+1)}\cdot\frac{n-x}{x+1}\left(\frac{p+\alpha+\alpha x}{1-p-\alpha x}\right)\left(1+\frac{\alpha}{p+\alpha x}\right)^{x-1}\left(1-\frac{\alpha}{1-p-\alpha x}\right)^{n-x-2}
\]

and

\[
\frac{1}{\lambda(x+1)}=1+\frac{1}{\lambda(x)}\cdot\frac{x+1}{n-x}\left(\frac{1-p-\alpha-\alpha x}{p+\alpha x}\right)\left(1+\frac{\alpha}{p+\alpha x}\right)^{-x}\left(1-\frac{\alpha}{1-p-\alpha x}\right)^{x+1-n}.
\]
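That (3.78) indeed defines a proper distribution follows from the Abel-type identity used in its derivation, and it can be confirmed by direct summation. The Python sketch below (illustrative only; the parameter values are arbitrary admissible choices) performs this check.

```python
import math

# Sketch: check that the QBD-II mass function (3.78) sums to one for an
# admissible parameter choice.
n, p, alpha = 5, 0.3, 0.05         # requires -p/n < alpha < (1-p)/n

def f(x):
    return (math.comb(n, x) * (1 - p - n * alpha) * p / (1 - n * alpha)
            * (p + alpha * x) ** (x - 1)
            * (1 - p - alpha * x) ** (n - x - 1))

assert abs(sum(f(x) for x in range(n + 1)) - 1) < 1e-12
print("QBD-II probabilities sum to 1")
```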

Figs 3.13 and 3.14 provide probability mass functions and hazard rate functions of quasi-binomial distribution II.

Figure 3.13 Probability mass functions of quasi-binomial distribution II.
Figure 3.14 Hazard rate functions of quasi-binomial distribution II.
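
As a quick numerical sanity check of (3.78), the sketch below (in Python, with hypothetical parameter values satisfying $-p/n < \alpha < (1-p)/n$) evaluates the QBD II probability mass function, verifies that it sums to one (a consequence of Abel's identity), and compares the mean against $np/(1-n\alpha)$, the form the mean of (3.77) takes under the reparametrization:

```python
from math import comb

def qbd2_pmf(x, n, p, alpha):
    # pmf of QBD II in the parametrization of (3.78)
    return (comb(n, x) * (1 - p - n * alpha) * p / (1 - n * alpha)
            * (p + alpha * x) ** (x - 1)
            * (1 - p - alpha * x) ** (n - x - 1))

n, p, alpha = 8, 0.3, 0.05              # hypothetical values
probs = [qbd2_pmf(x, n, p, alpha) for x in range(n + 1)]
total = sum(probs)                      # should equal 1
mean = sum(x * fx for x, fx in enumerate(probs))
```

With these symmetric choices ($\theta = b$ after back-transformation) the mean is exactly $n/2$, which agrees with $np/(1-n\alpha)$.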

The Abel series distributions presented above were obtained by Nandi and Das (1994). They proposed estimates of the parameters by equating the proportions of zeros and ones in the sample with $f(0)$ and $f(1)$ in the case of QBD I, QNBD and the generalized Poisson, and by equating the sample mean and proportion of zeros with the population mean and $f(0)$ for QBD II. With real data, it has also been shown that these distributions serve as satisfactory models in practice. Chapter 4 of Consul and Famoye (2006) and the references therein provide more information on distributional properties, characterizations and applications of the Abel series distributions in the general framework of Lagrangian expansions.

3.2.5 Lagrangian Family

Consider a successively differentiable function $g(z)$ satisfying $g(0) \neq 0$ and $g(1)=1$. Then, the numerically smallest root $z=l(t)$ of the transformation $z=t\,g(z)$ defines a probability generating function with Lagrangian expansion

$$z = P(t) = \sum_{x=1}^{\infty} \frac{t^x}{x!}\left[\frac{d^{x-1}}{dz^{x-1}}\,(g(z))^x\right]_{z=0} \qquad (3.79)$$

provided that the derivative inside the square brackets is non-negative for all $x \geq 1$. The corresponding probability mass function is

$$f(x) = \frac{1}{x!}\left[\frac{d^{x-1}}{dz^{x-1}}\,(g(z))^x\right]_{z=0}, \qquad (3.80)$$

which constitutes the basic Lagrangian distributions. Detailed studies of the properties, applications and estimation problems connected with this family are available in the works of Mohanty (1966), Consul and Shenton (1972, 1973, 1975), and the books by Consul (1989) and Consul and Famoye (2006). We, therefore, present here only those results that are most pertinent to reliability modelling and analysis. A large number of distributions belong to the Lagrangian family, of which we discuss only those for which the support is appropriate and for which some useful and simple results on various reliability functions can be presented.

Chronologically, the earliest distribution is that of Borel specified by

$$f(x) = \frac{(x\lambda)^{x-1}}{x!}\,e^{-x\lambda}, \quad x=1,2,\ldots. \qquad (3.81)$$

By transforming the random variable X in (3.81) to $Y=X-1$, we obtain the generalized Poisson distribution presented earlier in (3.67). Since the characteristics of the Borel distribution can be evaluated in terms of those of the generalized Poisson, we abstain from further discussion here.

Haight Distribution

The Haight (1961) distribution has probability mass function

$$f(x) = \frac{1}{2x-1}\binom{2x-1}{x}\, q^x p^{x-1}, \quad x=1,2,\ldots, \ 0<q=1-p<1. \qquad (3.82)$$

From the probability generating function

$$P(t) = qt\,\left[1 - p\,P(t)\right]^{-1},$$

the basic characteristics such as the mean, variance and higher moments can all be easily derived. The hazard rate function is

$$h(x) = \frac{\dfrac{1}{2x-1}\dbinom{2x-1}{x}\, q^x p^{x-1}}{\sum_{t=x}^{\infty} \dfrac{1}{2t-1}\dbinom{2t-1}{t}\, q^t p^{t-1}}, \quad x=1,2,\ldots.$$

As usual, $h(x)$ can be evaluated recursively as

$$\frac{1}{h(x)} = 1 + \frac{(4x-2)\,pq}{(x+1)\,h(x+1)}, \quad x=1,2,\ldots, \qquad (3.83)$$

with $h(1)=q$. Differentiating the survival function

$$S(x+1) = \sum_{t=x+1}^{\infty} \frac{1}{2t-1}\binom{2t-1}{t}\, q^t p^{t-1}$$

with respect to p and rearranging the terms, we get

$$E(X \mid X>x) = \frac{pq}{q-p}\left[\frac{1}{p} + \frac{\partial \log S(x+1)}{\partial p}\right]. \qquad (3.84)$$

Accordingly, the mean residual life function is

$$m(x) = \frac{pq}{q-p}\left[\frac{1}{p} + \frac{\partial \log S(x+1)}{\partial p}\right] - x, \quad x=1,2,\ldots. \qquad (3.85)$$

Differentiating $S(x+1)$ twice with respect to p, we get

$$\frac{\partial^2 S(x+1)}{\partial p^2} = \sum_{t=x+1}^{\infty} \frac{1}{2t-1}\binom{2t-1}{t}\left\{\frac{(t-1)(t-2)\,q^t p^{t-1}}{p^2} - t(t-1)\left[\frac{2}{pq} - \frac{1}{q^2}\right] q^t p^{t-1}\right\}$$

which leads to

$$\frac{1}{S(x+1)}\frac{\partial^2 S(x+1)}{\partial p^2} = \frac{2}{p^2} - \frac{2q^2+(q-p)^2}{p^2q^2}\,E(X \mid X>x) + \frac{(q-p)^2}{p^2q^2}\,E(X^2 \mid X>x)$$

and to

$$E(X^2 \mid X>x) = \frac{p^2q^2}{(q-p)^2}\left[\frac{1}{S(x+1)}\frac{\partial^2 S(x+1)}{\partial p^2} - \frac{2}{p^2} + \frac{2q^2+(q-p)^2}{p^2q^2}\,E(X \mid X>x)\right]. \qquad (3.86)$$

Now, the variance residual life can be computed from (3.84) and (3.86), as

$$\sigma^2(x) = E(X^2 \mid X>x) - E^2(X \mid X>x).$$

With the aid of the distribution function

$$F(x) = \sum_{t=1}^{x} \frac{1}{2t-1}\binom{2t-1}{t}\, q^t p^{t-1},$$

some similar manipulations yield reliability functions in reversed time. The expressions are

$$\frac{1}{\lambda(x+1)} = 1 + \frac{x+1}{2(2x-1)\,pq}\,\frac{1}{\lambda(x)}, \qquad r(x) = x - \frac{pq}{q-p}\left[\frac{1}{p} + \frac{\partial \log F(x)}{\partial p}\right], \quad x=1,2,\ldots$$

and

$$v(x) = E(X^2 \mid X \leq x) - E^2(X \mid X \leq x),$$

where

$$E(X \mid X \leq x) = \frac{pq}{q-p}\left[\frac{1}{p} + \frac{\partial \log F(x)}{\partial p}\right]$$

and

$$E(X^2 \mid X \leq x) = \frac{p^2q^2}{(q-p)^2}\left[\frac{1}{F(x)}\frac{\partial^2 F(x)}{\partial p^2} - \frac{2}{p^2} + \frac{2q^2+(q-p)^2}{p^2q^2}\,E(X \mid X \leq x)\right].$$
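
The recursion (3.83) is easy to verify numerically. The sketch below (hypothetical $p$ with $q > p$, so that (3.82) is a proper distribution; the infinite tail sum is truncated at a point where it is numerically negligible) computes the hazard rate directly from the probability mass function and checks both the starting value $h(1)=q$ and the recursion:

```python
from math import comb

def haight_pmf(x, p):
    # pmf (3.82); proper distribution when q = 1 - p > p
    q = 1 - p
    return comb(2 * x - 1, x) * q ** x * p ** (x - 1) / (2 * x - 1)

def haight_hazard(x, p, tail=300):
    # hazard rate from a truncated tail sum (terms decay like (4pq)^t)
    return haight_pmf(x, p) / sum(haight_pmf(t, p) for t in range(x, tail))

p, q = 0.3, 0.7   # hypothetical value with q > p
# recursion (3.83): 1/h(x) = 1 + (4x - 2) p q / ((x + 1) h(x + 1)), with h(1) = q
```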

Geeta Distribution

A discrete random variable X is said to have Geeta distribution with parameters θ and α if

$$f(x) = \frac{1}{\alpha x-1}\binom{\alpha x-1}{x}\,\theta^{x-1}(1-\theta)^{\alpha x-x}, \quad x=1,2,\ldots, \ 0<\theta<1, \ 1<\alpha<\theta^{-1}.$$

The distribution is L-shaped and unimodal with

$$\mu = (1-\theta)(1-\alpha\theta)^{-1}$$

and

$$\mu_2 = (\alpha-1)\,\theta(1-\theta)(1-\alpha\theta)^{-3}.$$

The central moments satisfy the recurrence relation

$$\mu_{r+1} = \theta\mu\,\frac{d\mu_r}{d\theta} + r\,\mu_2\,\mu_{r-1}, \quad r=1,2,\ldots.$$

Estimation of the parameters can be done by the method of moments or by the maximum likelihood method. Moment estimates are

$$\hat{\mu} = \bar{x} \quad\text{and}\quad \hat{\alpha} = \frac{s^2 - \bar{x}(\bar{x}-1)}{s^2 - \bar{x}^2(\bar{x}-1)},$$

with $\bar{x}$ and $s^2$ being the sample mean and variance. The maximum likelihood estimates are

$$\hat{\mu}_M = \bar{x}$$

and $\hat{\alpha}$ is obtained iteratively by solving the equation

$$\frac{(\alpha-1)\,\bar{x}}{\alpha\bar{x}-1} = \exp\left[-\frac{1}{n\bar{x}}\sum_{x=2}^{k}\sum_{i=2}^{x}\frac{x\, f_x}{\alpha x - i}\right]$$

with $n=\sum_{x=1}^{k} f_x$ the sample size and $f_x$ the frequency of $x=1,2,\ldots,k$. For details, see Consul (1990). As the Geeta model is a member of the MPSD, the reliability properties can be readily obtained from those of the MPSD discussed in Section 3.2.2.

Generalized Geometric Distribution II

Famoye (1997) obtained the model

$$f(x) = \frac{1}{x}\binom{mx}{x-1}\,\theta^{x-1}(1-\theta)^{mx-x+1}, \quad x=1,2,\ldots, \ 0<\theta<1, \ 1\leq m\leq\theta^{-1}, \qquad (3.87)$$

which is a generalization of the geometric distribution, as (3.87) becomes the geometric law when $m=1$. It is obtained from the Lagrangian expansion of the generating function of the geometric distribution. The mean and variance are

$$\mu = (1-\theta m)^{-1} \quad\text{and}\quad \mu_2 = m\theta(1-\theta)(1-\theta m)^{-3}.$$

Other distributional properties are derived from the central moments that satisfy the recurrence formula

$$\mu_{r+1} = \theta(1-\theta)(1-\theta m)^{-1}\,\frac{d\mu_r}{d\theta} + r\,\mu_{r-1}\,\mu_2, \quad r=1,2,\ldots,$$

using the value of $\mu_2$ given above. The hazard rate function satisfies the recursive relationship

$$\frac{1}{h(x)} = 1 + \frac{\theta(1-\theta)^{m-1}}{(x+1)\,h(x+1)}\,\frac{\Gamma(mx+m+1)\,\Gamma(mx-x+2)}{\Gamma(mx+1)\,\Gamma(mx+m-x+1)}, \quad x=1,2,\ldots,$$

with $h(1)=(1-\theta)^m$. Working as in the earlier cases, the mean residual life satisfies the relationship

$$m(x) = \mu\,\theta(1-\theta)\left[\frac{\partial \log S(x+1)}{\partial\theta} + \frac{1}{\theta(1-\theta)}\right] - x.$$

Also,

$$\frac{1}{S(x+1)}\frac{\partial^2 S(x+1)}{\partial\theta^2} = \frac{E[(X-1)(X-2)\mid X>x]}{\theta^2} - \frac{2\,E[(X-1)(mX-X+1)\mid X>x]}{\theta(1-\theta)} + \frac{E[(mX-X)(mX-X+1)\mid X>x]}{(1-\theta)^2}.$$

Simplifying, we obtain

$$E(X^2 \mid X>x) = \frac{\theta^2(1-\theta)^2}{(1-\theta m)^2}\left[\frac{1}{S(x+1)}\frac{\partial^2 S(x+1)}{\partial\theta^2} + \left(\frac{3}{\theta^2} + \frac{2(2-m)}{\theta(1-\theta)} - \frac{m-1}{(1-\theta)^2}\right)E(X \mid X>x) - \frac{2}{\theta^2(1-\theta)}\right].$$

This, along with

$$E(X \mid X>x) = \mu\,\theta(1-\theta)\left[\frac{\partial \log S(x+1)}{\partial\theta} + \frac{1}{\theta(1-\theta)}\right],$$

gives an expression for the variance residual life. The reversed hazard rate is derived from the recurrence relation

$$\frac{1}{\lambda(x+1)} = 1 + \frac{1}{\lambda(x)}\,\frac{(x+1)\,\Gamma(mx+m-x+1)\,\Gamma(mx+1)}{\Gamma(mx+m+1)\,\Gamma(mx-x+2)}\,\frac{1}{\theta(1-\theta)^{m-1}}.$$

By adopting calculations similar to those of the mean residual life, we have the reversed mean residual life as

$$r(x) = x - \mu\,\theta(1-\theta)\left[\frac{\partial \log F(x)}{\partial\theta} + \frac{1}{\theta(1-\theta)}\right].$$

Finally, the estimates of the parameters can be obtained by the moment method, by the maximum likelihood method, or by equating the sample mean and the frequency at $x=1$ with the mean and $n f(1)$, where n is the sample size.

There are several other families of discrete distributions discussed in the literature like the generalized hypergeometric family (Kemp, 1968a, 1968b), Gould series distributions (Charalambides, 1986), factorial series distributions (Berg, 1974) and their further extensions in various directions. Many of the members of these families overlap with those discussed here already. Furthermore, the reliability functions in other cases are of complicated forms, but can be obtained by some of the methods already detailed here.

3.3 Discrete Analogues of Continuous Distributions

Several discrete distributions that are members of various families have been presented in the preceding section. They can serve as black-box models in the sense that their derivations were not based on the physical properties of the failure mechanism producing the lifetimes. In a slightly different direction, in this section we consider various discrete models obtained from popular continuous life distributions. This can be accomplished in different ways. Let Y be a continuous random variable representing lifetime, with survival function $\bar{F}(x) = P(Y \geq x)$. If times are grouped into unit intervals, the discrete variable $X=[Y]$, where $[Y]$ is the largest integer less than or equal to Y, has probability mass function

$$f(x) = P(X=x) = P(x \leq Y < x+1) = \bar{F}(x) - \bar{F}(x+1), \quad x=0,1,2,\ldots \qquad (3.88)$$

The survival function $S(x)$ of X and $\bar{F}(x)$ of Y coincide at all integer points. As a simple example, when Y is exponential with $\bar{F}(x)=\exp(-\lambda x)$, we have

$$f(x) = \exp(-\lambda x) - \exp(-\lambda(x+1)) = q^x - q^{x+1} = p\,q^x, \quad p=1-q, \ q=e^{-\lambda},$$

which represents the geometric distribution.
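
The discretization rule (3.88) is mechanical to apply; as a minimal Python sketch (with an assumed rate $\lambda$), discretizing the exponential survival function reproduces the geometric probabilities $p\,q^x$:

```python
import math

def discretized_pmf(sf, x):
    # f(x) = F̄(x) - F̄(x+1), per (3.88)
    return sf(x) - sf(x + 1)

lam = 0.8                                   # assumed exponential rate
sf_exp = lambda x: math.exp(-lam * x)
q = math.exp(-lam)
p = 1 - q
geom = [p * q ** x for x in range(10)]      # geometric pmf for comparison
```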

Discrete Weibull Distribution  Taking $\bar{F}(x) = \exp[-(x/\alpha)^\beta]$ and applying (3.88), we have

$$S(x) = q^{x^\beta}, \quad q = \exp[-\alpha^{-\beta}], \ x=0,1,2,\ldots, \ 0<q<1. \qquad (3.89)$$

This will be referred to as the discrete Weibull distribution I, first proposed by Nakagawa and Osaki (1975). For the model in (3.89), we readily find

$$f(x) = q^{x^\beta} - q^{(x+1)^\beta}$$

and

$$h(x) = 1 - q^{(x+1)^\beta - x^\beta}, \quad x=0,1,2,\ldots.$$

The reversed hazard rate is

$$\lambda(x) = \frac{q^{x^\beta} - q^{(x+1)^\beta}}{1 - q^{(x+1)^\beta}} = 1 - \frac{1 - q^{x^\beta}}{1 - q^{(x+1)^\beta}}.$$

Two special cases of interest are the geometric when $\beta=1$ and the discrete Rayleigh when $\beta=2$. There are no closed-form expressions for the mean, variance and higher-order moments. Alikhan et al. (1989) noted that

$$P(X=0) = 1-q \quad\text{and}\quad P(X=1) = q - q^{2^\beta},$$

and then proposed estimating the parameters q and β by equating these theoretical probabilities with the corresponding empirical proportions of the observations $X=0$ and $X=1$. The resulting estimates are

$$\hat{q} = S_n(1) \quad\text{and}\quad \hat{\beta} = \frac{1}{\log 2}\,\log\frac{\log S_n(2)}{\log S_n(1)},$$

where $S_n(x)$ is the empirical survival function. One can use two other values of X instead of 0 and 1, depending on the number of observations in the sample equal to the chosen integer values; the larger the frequencies of the chosen values, the better the estimates. Bracquemond and Gaudoin (2003) proposed the method of maximum likelihood for the estimation of q and β and observed that the estimates are biased, but of good quality for large values of q. One can use the distribution as an approximation to its continuous counterpart. Further, a shock model interpretation is also available, with X denoting the number of shocks survived by a device and q the probability of surviving each shock. It can also be noted that (3.89) satisfies the following extension of the lack of memory property:

$$P\left(X \geq (x^\beta + y^\beta)^{1/\beta} \,\middle|\, X \geq x\right) = P(X \geq y).$$
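
A small sketch of the discrete Weibull I model (hypothetical q and β) confirms that the hazard rate formula above agrees with $f(x)/S(x)$ and that the probability mass function sums to one:

```python
def dw1_sf(x, q, beta):
    # S(x) = q^(x^beta), per (3.89)
    return q ** (x ** beta)

def dw1_pmf(x, q, beta):
    return dw1_sf(x, q, beta) - dw1_sf(x + 1, q, beta)

def dw1_hazard(x, q, beta):
    return 1 - q ** ((x + 1) ** beta - x ** beta)

q, beta = 0.8, 1.5    # hypothetical parameter values
```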

Discrete Half-Logistic Distribution  The half-logistic model in the continuous case has survival function (Balakrishnan, 1985)

$$\bar{F}(x) = \frac{2}{1+\exp(x/\sigma)}, \quad x>0, \ \sigma>0.$$

Accordingly, the discrete half-logistic distribution is specified by the survival function

$$S(x) = \frac{2}{1+\exp(x/\sigma)}, \quad x=0,1,2,\ldots, \qquad (3.90)$$

or by the probability mass function

$$f(x) = \frac{2\,(e^{1/\sigma}-1)\,e^{x/\sigma}}{\left(1+e^{x/\sigma}\right)\left(1+e^{(x+1)/\sigma}\right)}, \quad x=0,1,2,\ldots.$$

The mean and variance of the distribution in (3.90) are

$$\mu = 2\sum_{n=1}^{\infty}\left(1+e^{n/\sigma}\right)^{-1} \quad\text{and}\quad \mu_2 = 2\sum_{n=0}^{\infty}\frac{2n+1}{1+e^{(n+1)/\sigma}} - \mu^2.$$

The hazard rate and reversed hazard rate functions of the distribution in (3.90) can be derived as follows:

$$h(x) = \frac{(e^{1/\sigma}-1)\,e^{x/\sigma}}{1+e^{(x+1)/\sigma}}, \qquad \lambda(x) = \frac{2\,(e^{1/\sigma}-1)\,e^{x/\sigma}}{\left(1+e^{x/\sigma}\right)\left(e^{(x+1)/\sigma}-1\right)}, \quad x=0,1,\ldots.$$

There is only one parameter, σ, involved in (3.90), and it can be estimated by the method of maximum likelihood.

Geometric Weibull Distribution  This is a three-parameter model based on the continuous exponential Weibull distribution

$$\bar{F}(x) = \exp\left[-\lambda x - \{\lambda(x-a)_+\}^\beta\right]$$

introduced by Zacks (1984), where $(x-a)_+ = \max(0, x-a)$. Of the three parameters, $\lambda>0$ represents the scale, $\beta>0$ the shape, and $a>0$ the change-point. The model was proposed for the practical situation in which the hazard rate is constant up to time a and thereafter increases like a Weibull hazard rate. In the discrete analogue also, the same physical meaning prevails, with

$$S(x) = \exp\left[-\lambda x - \{\lambda(x-a)_+\}^\beta\right] \qquad (3.91)$$

and

$$f(x) = \exp\left[-\lambda x - \{\lambda(x-a)_+\}^\beta\right] - \exp\left[-\lambda(x+1) - \{\lambda(x+1-a)_+\}^\beta\right].$$

We then have

$$h(x) = 1 - \exp\left[-\lambda + \lambda^\beta\left(\{(x-a)_+\}^\beta - \{(x-a+1)_+\}^\beta\right)\right]$$

as the hazard rate.

Telescopic Distributions  A more general family of distributions results if we consider the discrete analogue of Y with survival function

$$\bar{F}(x) = \exp\left[-\alpha\, k_\theta(x)\right], \quad x>0,$$

where $k_\theta(x)$ is a strictly increasing function with $k_\theta(0)=0$ and $k_\theta(x)\to\infty$ as $x\to\infty$. It includes the exponential, Rayleigh, Weibull, linear-exponential and Gompertz distributions as special cases. Corresponding to $\bar{F}(x)$ above, we have the distribution of X as

$$f(x) = q^{k_\theta(x)} - q^{k_\theta(x+1)}, \quad x=0,1,2,\ldots, \ 0<q<1, \ q=e^{-\alpha}. \qquad (3.92)$$

For the model in (3.92), it follows (Roknabadi et al., 2009) that

$$S(x) = q^{k_\theta(x)}, \qquad h(x) = 1 - q^{k_\theta(x+1)-k_\theta(x)}, \qquad \lambda(x) = \frac{q^{k_\theta(x)} - q^{k_\theta(x+1)}}{1 - q^{k_\theta(x+1)}},$$
$$m(x) = \sum_{i=x}^{\infty} q^{k_\theta(i+1)-k_\theta(x)}, \qquad r(x) = \frac{\sum_{i=1}^{x}\left(1-q^{k_\theta(i)}\right)}{1-q^{k_\theta(x+1)}},$$

and the alternative hazard rate is

$$h_1(x) = \left(k_\theta(x-1) - k_\theta(x)\right)\log q.$$
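
The telescopic construction in (3.92) is generic in $k_\theta$; the sketch below (assuming $k(x)=x^2$, i.e. a discrete Rayleigh) verifies the hazard rate and alternative hazard rate formulas above:

```python
import math

q = 0.9                      # q = e^(-alpha), assumed value
k = lambda x: x ** 2         # k_theta(x) = x^2 gives the discrete Rayleigh

def tele_sf(x):
    return q ** k(x)

def tele_pmf(x):
    return q ** k(x) - q ** k(x + 1)

def tele_hazard(x):
    return 1 - q ** (k(x + 1) - k(x))

def tele_alt_hazard(x):
    # h1(x) = (k(x-1) - k(x)) * log q
    return (k(x - 1) - k(x)) * math.log(q)
```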

Discrete Inverse Weibull Distribution  Jazi et al. (2010) considered the inverse Weibull law

$$\bar{F}(x) = 1 - \exp\left[-a x^{-b}\right], \quad x>0,$$

to derive the discrete inverse Weibull distribution with survival function

$$S(x) = 1 - q^{x^{-b}}, \quad x=1,2,\ldots,$$

and probability mass function

$$f(x) = \begin{cases} q, & x=1, \\ q^{x^{-b}} - q^{(x-1)^{-b}}, & x=2,3,\ldots, \end{cases} \qquad 0<q<1, \ b>0. \qquad (3.93)$$

The distribution in (3.93) has decreasing $f(x)$ when $f(1) \geq f(2)$, that is, when $q^{2^{-b}-1} \leq 2$, and is unimodal with mode at 2 otherwise. When b becomes small, the tail becomes longer. The mean and variance do not have closed-form expressions. The hazard rate function and the alternative hazard rate function are, respectively,

$$h(x) = \frac{q^{x^{-b}} - q^{(x-1)^{-b}}}{1 - q^{(x-1)^{-b}}}$$

and

$$h_1(x) = \log\frac{1 - q^{(x-1)^{-b}}}{1 - q^{x^{-b}}}, \quad x=1,2,\ldots.$$

In order to estimate the parameters, Jazi et al. (2010) considered the method of proportions, the method of moments, a heuristic algorithm and inverse Weibull probability plots. The discrete version of the probability plot is based on

$$y = -b\,\log x + \log(-\log q),$$

which is a straight line in $\log x$; accordingly, b and q can be estimated from a simple linear regression fitted to the data.
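
A sketch of this probability-plot estimator (with hypothetical true values): plotting $y=\log(-\log F(x))$ against $\log x$ and fitting a least-squares line recovers b from the slope and q from the intercept:

```python
import math

q_true, b_true = 0.6, 1.4                     # hypothetical parameters
xs = list(range(1, 30))
F = lambda x: q_true ** (x ** -b_true)        # cdf of the discrete inverse Weibull
ts = [math.log(x) for x in xs]                # abscissa: log x
ys = [math.log(-math.log(F(x))) for x in xs]  # ordinate: log(-log F(x))

n = len(xs)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
intercept = ybar - slope * tbar
b_hat = -slope                                # slope = -b
q_hat = math.exp(-math.exp(intercept))        # intercept = log(-log q)
```

On theoretical (noise-free) values, the fit recovers the parameters exactly; with empirical survival probabilities, the same regression gives the plot-based estimates.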

Discrete Generalized Exponential Distribution  Recalling that the generalized exponential distribution has its survival function as

$$\bar{F}(x) = 1 - \left(1-e^{-\lambda x}\right)^\alpha, \quad x>0,$$

Nekoukhou et al. (2013) derived its discretized version as

$$f(x) = (1-p^{x+1})^\alpha - (1-p^x)^\alpha = \sum_{j=1}^{\infty}(-1)^{j+1}\binom{\alpha}{j}\,p^{jx}(1-p^j), \quad \alpha>0, \ 0<p<1, \qquad (3.94)$$

with support $\{0,1,2,\ldots\}$. For integer values of α, the sum in (3.94) terminates at α. The distribution is unimodal for all α and p. There is a closed-form expression for the moments, given by

$$E(X^r) = \sum_{j=1}^{\infty}(-1)^{j+1}\binom{\alpha}{j}\sum_{t=1}^{r} S(r,t)\,t!\left(\frac{p^j}{1-p^j}\right)^t, \quad r=1,2,\ldots,$$

where

$$S(r,t) = \frac{1}{t!}\sum_{j=0}^{t}(-1)^j\binom{t}{j}(t-j)^r$$

is the Stirling number of the second kind. In particular, we have

$$\mu = \sum_{j=1}^{\infty}(-1)^{j+1}\binom{\alpha}{j}\frac{p^j}{1-p^j}$$

and

$$\mu_2 = \sum_{j=1}^{\infty}(-1)^{j+1}\binom{\alpha}{j}\frac{p^j(1+p^j)}{(1-p^j)^2} - \mu^2.$$

The parameters are estimated by the method of maximum likelihood by solving the equations

$$\sum_{j=1}^{n} \frac{(1-p^{x_j+1})^{\alpha}\log(1-p^{x_j+1}) - (1-p^{x_j})^{\alpha}\log(1-p^{x_j})}{(1-p^{x_j+1})^{\alpha} - (1-p^{x_j})^{\alpha}} = 0$$

and

$$\alpha\sum_{j=1}^{n} \frac{x_j\,p^{x_j-1}(1-p^{x_j})^{\alpha-1} - (x_j+1)\,p^{x_j}(1-p^{x_j+1})^{\alpha-1}}{(1-p^{x_j+1})^{\alpha} - (1-p^{x_j})^{\alpha}} = 0.$$

Using the survival function

$$S(x) = 1 - (1-p^x)^\alpha,$$

we can write the hazard rate and reversed hazard rate functions as

$$h(x) = \frac{(1-p^{x+1})^\alpha - (1-p^x)^\alpha}{1 - (1-p^x)^\alpha}$$

and

$$\lambda(x) = \frac{(1-p^{x+1})^\alpha - (1-p^x)^\alpha}{(1-p^{x+1})^\alpha} = 1 - \left(\frac{1-p^x}{1-p^{x+1}}\right)^\alpha, \quad x=0,1,\ldots.$$

The alternative hazard rate becomes

$$h_1(x) = \log\left(\frac{1-(1-p^x)^\alpha}{1-(1-p^{x+1})^\alpha}\right), \quad x=0,1,\ldots.$$

Discrete Gamma Distribution  Chakraborty and Chakravarty (2012) considered the well-known two-parameter gamma distribution with survival function

$$\bar{F}(x) = \frac{1}{\theta^k\,\Gamma(k)}\int_x^{\infty} u^{k-1} e^{-u/\theta}\,du = \frac{\Gamma(k, x/\theta)}{\Gamma(k)},$$

where

$$\Gamma(k, x/\theta) = \int_{x/\theta}^{\infty} u^{k-1} e^{-u}\,du$$

is the upper incomplete gamma function, to derive the discrete gamma model with probability mass function

$$f(x) = \frac{\Gamma(k, x/\theta) - \Gamma(k, (x+1)/\theta)}{\Gamma(k)}, \quad k,\theta>0, \ x=0,1,\ldots. \qquad (3.95)$$

As particular cases, we have the one-parameter discrete gamma when $\theta=1$ and the geometric when $k=1$. The distributional characteristics can be ascertained from the raw moments

$$\mu_r' = \sum_{j=1}^{r}(-1)^{j+1}\binom{r}{j}\, g_{r-j}(k,\theta),$$

where

$$g_r(k,\theta) = \sum_{x=1}^{\infty} x^r\,\Gamma(k, x/\theta)\big/\Gamma(k).$$

In particular, we have

$$\mu = g_0(k,\theta)$$

and

$$\mu_2 = 2\,g_1(k,\theta) - g_0(k,\theta) - \left[g_0(k,\theta)\right]^2.$$

From a random sample of failure times (x1,,xn)Image, estimates of θ and k can be obtained by solving the likelihood equations

$$\sum_{j=1}^{n} \frac{\int_{x_j/\theta}^{(x_j+1)/\theta} u^{k-1} e^{-u}\log u\,du}{\Gamma(k, x_j/\theta) - \Gamma(k, (x_j+1)/\theta)} = n\,\psi(k)$$

and

$$\sum_{j=1}^{n} \frac{\Gamma(k+1, x_j/\theta) - \Gamma(k+1, (x_j+1)/\theta)}{\Gamma(k, x_j/\theta) - \Gamma(k, (x_j+1)/\theta)} = nk$$

with $\psi(k) = d\log\Gamma(k)/dk$ being the digamma function.

The reliability functions of interest are the survival function

$$S(x) = \frac{\Gamma(k, x/\theta)}{\Gamma(k)},$$

the hazard rate function

$$h(x) = 1 - \frac{\Gamma(k, (x+1)/\theta)}{\Gamma(k, x/\theta)},$$

the alternative hazard rate function

$$h_1(x) = \log\frac{\Gamma(k, x/\theta)}{\Gamma(k, (x+1)/\theta)},$$

and the reversed hazard rate function

$$\lambda(x) = \frac{\Gamma(k, x/\theta) - \Gamma(k, (x+1)/\theta)}{\Gamma(k) - \Gamma(k, (x+1)/\theta)}.$$

The hazard rate functions of discrete gamma distribution are presented in Fig. 3.15.

Figure 3.15 Hazard rate functions of discrete gamma distribution.
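
For integer shape k, the upper incomplete gamma function has a finite-sum form, which makes the pmf (3.95) easy to evaluate; a sketch with hypothetical k and θ follows (the $k=1$ case collapses to the geometric model, a convenient check):

```python
import math

def upper_gamma(k, z):
    # Γ(k, z) = (k-1)! e^{-z} Σ_{j=0}^{k-1} z^j / j!, valid for integer k ≥ 1
    return math.factorial(k - 1) * math.exp(-z) * sum(z ** j / math.factorial(j) for j in range(k))

def dgamma_pmf(x, k, theta):
    # pmf (3.95)
    return (upper_gamma(k, x / theta) - upper_gamma(k, (x + 1) / theta)) / math.factorial(k - 1)

def dgamma_hazard(x, k, theta):
    return 1 - upper_gamma(k, (x + 1) / theta) / upper_gamma(k, x / theta)

k, theta = 3, 2.0    # hypothetical parameter values
```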

Discrete Lindley Distribution  The Lindley distribution, in the continuous case, has its survival function as

$$\bar{F}(x) = \frac{e^{-\theta x}(1+\theta+\theta x)}{1+\theta}, \quad x>0, \ \theta>0.$$

Accordingly, the discrete version takes on the form

$$f(x) = \frac{\lambda^x}{1-\log\lambda}\left[\lambda\log\lambda + (1-\lambda)\left(1-(x+1)\log\lambda\right)\right], \quad x=0,1,2,\ldots, \qquad (3.96)$$

where $\lambda = e^{-\theta}$. From the probability generating function

$$P(t) = \frac{(\lambda-1)(t\lambda-1) - (1-2\lambda+t\lambda^2)\log\lambda}{(1-t\lambda)^2(1-\log\lambda)},$$

it is easy to obtain

$$\mu = \frac{\lambda\left[(\lambda-2)\log\lambda + 1-\lambda\right]}{(1-\lambda)^2(1-\log\lambda)}$$

and

$$\mu_2 = \frac{\lambda\left[(1-\lambda)^2 - (3-4\lambda+\lambda^2)\log\lambda + (2-3\lambda)\log^2\lambda\right]}{(1-\lambda)^4(1-\log\lambda)^2}.$$

We also have

$$S(x) = \frac{\lambda^x\left(1-(1+x)\log\lambda\right)}{1-\log\lambda}, \qquad (3.97)$$

$$F(x) = \frac{1 - \lambda^{x+1} + \left[(2+x)\lambda^{x+1}-1\right]\log\lambda}{1-\log\lambda}, \qquad (3.98)$$

so that

$$h(x) = \frac{\lambda\log\lambda + (1-\lambda)\left(1-(x+1)\log\lambda\right)}{1-(1+x)\log\lambda}, \qquad \lambda(x) = \frac{\lambda^x\left[\lambda\log\lambda + (1-\lambda)\left(1-(x+1)\log\lambda\right)\right]}{1-\lambda^{x+1} + \left[(2+x)\lambda^{x+1}-1\right]\log\lambda}, \qquad (3.99)$$

and

$$h_1(x) = \log\frac{1-(1+x)\log\lambda}{\lambda\left[1-(2+x)\log\lambda\right]}, \quad x=0,1,\ldots. \qquad (3.100)$$

Fig. 3.16 presents hazard rate functions of discrete Lindley distribution.

Figure 3.16 Hazard rate functions of discrete Lindley distribution.

The maximum likelihood estimate of λ can be obtained as the solution of the equation

$$n\bar{x} + \frac{n}{1-\log\lambda} + \sum_{j=1}^{n} \frac{(2+x_j)\,\lambda\log\lambda - (x_j+1)(1-\lambda)}{\lambda\log\lambda + (1-\lambda)\left(1-(x_j+1)\log\lambda\right)} = 0,$$

where $\bar{x}$ is the mean of a random sample of size n from the discrete Lindley distribution.
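
A numerical sketch (hypothetical θ) confirming that the pmf (3.96), the survival function (3.97) and the closed-form mean above are mutually consistent:

```python
import math

theta = 0.5                  # hypothetical parameter; lam = e^{-theta}
lam = math.exp(-theta)
L = math.log(lam)            # log(lam) = -theta

def dlindley_pmf(x):
    # pmf (3.96)
    return lam ** x * (lam * L + (1 - lam) * (1 - (x + 1) * L)) / (1 - L)

def dlindley_sf(x):
    # survival function (3.97)
    return lam ** x * (1 - (1 + x) * L) / (1 - L)

mean_closed = lam * ((lam - 2) * L + 1 - lam) / ((1 - lam) ** 2 * (1 - L))
mean_series = sum(x * dlindley_pmf(x) for x in range(400))
```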

Besides the above results on the discrete Lindley distribution described in Gomez-Deniz and Calderin-Ojeda (2011), there are some other results on the reliability characteristics. We note that

$$\sum_{t=x}^{\infty} S(t) = \frac{\lambda^x}{(1-\lambda)^2(1-\log\lambda)}\left[(1-\lambda)(1-x\log\lambda) - \log\lambda\right].$$

Hence, the mean residual life function can be expressed in a simple form as

$$m(x) = \frac{\lambda}{(1-\lambda)^2}\,\frac{(1-\lambda)\left(1-(x+1)\log\lambda\right) - \log\lambda}{1-(x+1)\log\lambda}, \qquad (3.101)$$

which is a homographic function of x. Moreover, as a corollary to Eq. (3.9) characterizing the Ord family, we have the following theorem.

Theorem 3.10

A random variable X, taking values in $\{0,1,\ldots\}$, is distributed as discrete Lindley if and only if

$$m(x) = \mu\left[\frac{\lambda\log\lambda + (1-\lambda)(1-\log\lambda) - x(1-\lambda)\log\lambda}{(1-\lambda)\log\lambda}\right] h(x+1)$$

for all x.

Unlike the case of most discrete models, the variance residual life also has a closed-form expression, given by

$$\sigma^2(x) = \frac{2(1-\log\lambda)}{(1-\lambda)^4}\,\frac{(1-\lambda)(1-\lambda-2\log\lambda) - x(1-\lambda)\log\lambda}{1-(x+2)\log\lambda} - m(x)\left(m(x)-1\right) \qquad (3.102)$$

with $m(x)$ as given in (3.101).

The reversed mean residual life has the expression

$$r(x) = x - \frac{\lambda(1-\lambda^x)}{1-\lambda} - \frac{\log\lambda}{(1-\lambda)^2}\left[(1-\lambda^x)\,\lambda(2-\lambda) + x(1-\lambda)\lambda^x\right].$$

A characterization of the distribution by relationship between r(x)Image and λ(x)Image can be derived along the lines of Theorem 3.3.

Theorem 3.11

The random variable X, taking values in $\{0,1,\ldots\}$, has the discrete Lindley distribution if and only if

$$r(x) = \mu\left[\frac{\lambda\log\lambda + (1-\lambda)(1-\log\lambda) - x(1-\lambda)\log\lambda}{(1-\lambda)\log\lambda}\right]\lambda(x)$$

for all $x=0,1,2,\ldots$.

Further, the odds functions are

$$w(x) = \frac{\lambda^{x+1}\left(1-(2+x)\log\lambda\right)}{1-\lambda^{x+1} + \left[(2+x)\lambda^{x+1}-1\right]\log\lambda}$$

and $\bar{w}(x) = [w(x)]^{-1}$.

A second approach to construct discrete analogues of continuous distributions is to use the formula

$$P(X=x) = \frac{g(x)}{\sum_{t=0}^{\infty} g(t)}, \qquad (3.103)$$

where $g(x)$ is the probability density function of a continuous random variable. When $g(x) = C e^{-x/\theta} x^{-\alpha}$, for example, (3.103) results in

$$f(x) = \frac{q^x x^{-\alpha}}{\sum_{t=1}^{\infty} q^t t^{-\alpha}}, \quad x=1,2,\ldots, \ q=e^{-1/\theta},$$

the Good distribution in (3.57). For a detailed study of this distribution, we refer to Kulasekera and Tonkyn (1992). Obviously, the geometric distribution can also be derived from the exponential density function in this manner. See Sato et al. (1999) for an application of such a model, and also for the convolution of their discrete exponential in the form of a negative binomial law. Lai and Wang (1995) obtained an analogue of the power distribution with density function

$$g(x) = \frac{\alpha x^{\alpha-1}}{\beta^\alpha}, \quad 0\leq x\leq\beta, \ \beta>0,$$

in the form

$$f(x) = \frac{x^\alpha}{\sum_{t=0}^{b} t^\alpha}, \quad x=0,1,\ldots,b,$$

with hazard rate

$$h(x) = \frac{x^\alpha}{\sum_{t=x}^{b} t^\alpha}$$

and reversed hazard rate

$$\lambda(x) = \frac{x^\alpha}{\sum_{t=0}^{x} t^\alpha}.$$

Lai (2013) has also suggested constructing new models by discretizing the continuous hazard rate function or the alternative hazard rate function. Further discussion of the above methods, and of the models derived from them, can be found in Kemp (2008) and Krishna and Pundir (2009).

3.4 Some Other Models

In this section, we present some other models arising from a variety of considerations.

Discrete Weibull Distribution II  Stein and Dattero (1984) introduced a second form of Weibull distribution by specifying its hazard rate function as

$$h(x) = \begin{cases} (x/m)^{\beta-1}, & x=1,2,\ldots,m, \\ 0, & x=0 \ \text{or} \ x>m. \end{cases}$$

The probability mass function and survival function are derived from $h(x)$ using the formulas in Chapter 2 to be

$$f(x) = \left(\frac{x}{m}\right)^{\beta-1}\prod_{i=1}^{x-1}\left[1-\left(\frac{i}{m}\right)^{\beta-1}\right], \quad x=1,2,\ldots,m,$$

and

$$S(x) = \prod_{i=1}^{\min(x,m)}\left[1-\left(\frac{i}{m}\right)^{\beta-1}\right]. \qquad (3.104)$$

By comparison, the discrete Weibull I has a survival function of the same form as its continuous counterpart, while the discrete Weibull II has the same form for the hazard rate function. Stein and Dattero (1984) have pointed out that a series system with two independent and identically distributed components has a lifetime distribution of the form in (3.104).

Discrete Weibull Distribution III  A third type of Weibull distribution proposed by Padgett and Spurrier (1985) is specified by

$$f(x) = \left(1-\exp[-c(x+1)^\alpha]\right)\exp\left[-c\sum_{j=0}^{x} j^\alpha\right], \quad x=0,1,2,\ldots, \ c>0, \ -\infty<\alpha<\infty, \qquad (3.105)$$

with

$$S(x) = \exp\left[-c\sum_{j=0}^{x} j^\alpha\right].$$

Accordingly, the hazard function is

$$h(x) = 1 - \exp\left[-c\,x^\alpha\right].$$

For $\alpha=0$, (3.105) reduces to the geometric case. An important advantage of this model is that the hazard rate is flexible, in the sense that it can assume different shapes. In view of the complex nature of the probability mass function, the maximum likelihood estimates become computationally tedious to obtain. Bracquemond and Gaudoin (2003) have pointed out that the quality of the maximum likelihood estimate of c, as regards bias, increases with c, while the bias for α is small except for very small samples.
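
A sketch of the discrete Weibull III model under assumed c and α (taking α > 0), using $S(x)=\exp[-c\sum_{j=0}^{x} j^\alpha]$ and checking that (3.105) telescopes as $f(x)=S(x)-S(x+1)$:

```python
import math

def dw3_sf(x, c, alpha):
    # S(x) = exp[-c * sum_{j=0}^{x} j^alpha]
    return math.exp(-c * sum(j ** alpha for j in range(x + 1)))

def dw3_pmf(x, c, alpha):
    # pmf (3.105)
    return (1 - math.exp(-c * (x + 1) ** alpha)) * dw3_sf(x, c, alpha)

c, alpha = 0.2, 0.7    # hypothetical parameter values
```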

“S” Distribution  Bracquemond and Gaudoin (2003) derived the “S” distribution based on some physical characteristics of the failure pattern through a shock-model interpretation. They assumed a system in which, on each demand, a shock can occur with probability p and not occur with probability $(1-p)$. The number of shocks $N_x$ by the xth demand is such that the hazard rate is an increasing function of $N_x$ satisfying

$$h_N(x) = 1 - \pi^{N_x}, \quad 0<\pi\leq 1.$$

Then, the survival function is

$$S(x) = P(X>x) = E\left[P\left(X>x \mid N_1,\ldots,N_x\right)\right] = E\left[\pi^{\sum_{i=1}^{x} N_i}\right].$$

Further, if $U_x = N_x - N_{x-1}$, the $U_x$'s are independent Bernoulli(p) random variables, so that

$$S(x) = \prod_{i=1}^{x} E\left(\pi^{(x-i+1)U_i}\right).$$

This leads to the “S” distribution specified by the probability mass function

$$f(x) = p\,(1-\pi^x)\prod_{i=1}^{x-1}\left(1-p+p\pi^i\right)$$

and the survival function

$$S(x) = \prod_{i=1}^{x}\left(1-p+p\pi^i\right).$$

The interpretation given to the parameters is that p is the probability of a shock and π is the probability of surviving such a shock. We then have

$$h(x) = p\,(1-\pi^x).$$

Various properties of the distribution as well as the estimation issues have not been studied yet.
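
The shock-model structure makes the “S” distribution straightforward to compute; a sketch with hypothetical p and π, checking that $h(x)=p(1-\pi^x)$ is indeed $f(x)/P(X \geq x)$:

```python
def s_sf(x, p, pi):
    # P(X > x) = prod_{i=1}^{x} (1 - p + p * pi^i)
    out = 1.0
    for i in range(1, x + 1):
        out *= 1 - p + p * pi ** i
    return out

def s_pmf(x, p, pi):
    return p * (1 - pi ** x) * s_sf(x - 1, p, pi)

def s_hazard(x, p, pi):
    return p * (1 - pi ** x)

p, pi = 0.4, 0.7    # hypothetical shock and shock-survival probabilities
```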

There are two other distributions proposed by Salvia and Bollinger (1982), and their generalizations by Padgett and Spurrier (1985), which are essentially particular cases of the models already discussed. Alzaatreh et al. (2012) provided a general method for deriving new distributions from continuous or discrete models. Let $G(x)$ be the distribution function of a random variable Y, which may be continuous or discrete, and let $a(x)$ be the probability density function of a continuous random variable T taking values in $[0,\infty)$. Then, a new distribution can be defined by the distribution function

$$F(x) = \int_0^{-\log(1-G(x))} a(t)\,dt = A\left(-\log(1-G(x))\right),$$

where A()Image is the distribution function of T. When Y is discrete, the new distribution has probability mass function

$$f(x) = A\left(-\log(1-G(x))\right) - A\left(-\log(1-G(x-1))\right).$$

For instance, when Y has a geometric (p) distribution, the resulting probability mass function is

$$f(x) = A(c(x+1)) - A(cx),$$

where $c = -\log p$. Another category of models arises when the models are required to satisfy certain specific properties for their reliability characteristics, such as bathtub-shaped hazard rate functions. Such distributions will be taken up later, in Chapter 5. Expressions for the hazard rate functions of some distributions are presented in Table 3.2 for easy reference.
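
The construction of Alzaatreh et al. (2012) is easy to code. In the sketch below (hypothetical choices), Y is geometric(p) and T is standard exponential, so $A(t)=1-e^{-t}$; in that case $A(-\log(1-G(x))) = G(x)$ and the transform returns the geometric law itself, which serves as a convenient sanity check:

```python
import math

def transformed_pmf(x, A, G):
    # f(x) = A(-log(1 - G(x))) - A(-log(1 - G(x - 1)))
    H = lambda t: A(-math.log(1 - G(t))) if t >= 0 else 0.0
    return H(x) - H(x - 1)

p = 0.35
G = lambda x: 1 - p ** (x + 1)        # geometric cdf on {0, 1, 2, ...}
A = lambda t: 1 - math.exp(-t)        # standard exponential cdf for T
```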

Table 3.2

Hazard rate functions
Distribution | $f(x)$ | $h(x)$
binomial | $\binom{n}{x}p^x q^{n-x}$ | $\left\{1+\frac{1}{\binom{n}{x}\theta^x}\left[(1+\theta)^n-\sum_{j=0}^{x}\binom{n}{j}\theta^j\right]\right\}^{-1}$, $\theta=p/q$, $q=1-p$
Poisson | $\frac{e^{-\lambda}\lambda^x}{x!}$ | $\left\{1+\frac{x!}{\lambda^x}\left(e^{\lambda}-1-\sum_{j=1}^{x}\frac{\lambda^j}{j!}\right)\right\}^{-1}$
negative binomial | $\binom{k+x-1}{x}p^k q^x$ | $\left\{1+\binom{k+x-1}{x}^{-1}q^{-x}p^{-k}\left[1-\sum_{j=0}^{x}\binom{k+j-1}{j}p^k q^j\right]\right\}^{-1}$
Haight zeta | $(2x-1)^{-\sigma}-(2x+1)^{-\sigma}$ | $\frac{(2x-1)^{-\sigma}-(2x+1)^{-\sigma}}{(2x-1)^{-\sigma}}$, $x=2,3,\ldots$
geometric | $q^x p$ | $p$
Waring | $\frac{(a-b)\,(b)_x}{(a)_{x+1}}$ | $\frac{a-b}{a+x}$
negative hypergeometric | $\binom{-1}{x}\binom{-k}{n-x}\big/\binom{-1-k}{n}$ | $\frac{k}{k+n-x}$
uniform | $f(x)=\frac{1}{b}$, $x=1,\ldots,b$ | $\frac{1}{b-x+1}$
arithmetic | $\frac{2x}{b(b+1)}$, $x=1,2,\ldots,b$ | $\frac{2x}{(b+x)(b-x+1)}$
S | $p(1-\pi^x)\prod_{i=1}^{x-1}(q+p\pi^i)$ | $p(1-\pi^x)$
Weibull I | $q^{x^\beta}-q^{(x+1)^\beta}$ | $1-q^{(x+1)^\beta-x^\beta}$
Weibull II | $\left(\frac{x}{m}\right)^{\beta-1}\prod_{i=1}^{x-1}\left[1-\left(\frac{i}{m}\right)^{\beta-1}\right]$ | $\left(\frac{x}{m}\right)^{\beta-1}$
Weibull III | $\left(1-e^{-c(x+1)^\alpha}\right)\exp\left[-c\sum_{j=0}^{x}j^\alpha\right]$ | $1-e^{-cx^\alpha}$
power | $x^\alpha\big/\sum_{t=0}^{b}t^\alpha$, $x=0,1,\ldots,b$ | $x^\alpha\big/\sum_{t=x}^{b}t^\alpha$
Lindley | $\frac{\lambda^x}{1-\log\lambda}\left[\lambda\log\lambda+(1-\lambda)(1-(x+1)\log\lambda)\right]$ | $\frac{\lambda\log\lambda+(\lambda-1)((x+1)\log\lambda-1)}{1-(x+1)\log\lambda}$
gamma | $\frac{\Gamma(k,x/\theta)-\Gamma(k,(x+1)/\theta)}{\Gamma(k)}$ | $1-\frac{\Gamma(k,(x+1)/\theta)}{\Gamma(k,x/\theta)}$
quasi-binomial II | $\binom{n}{x}\frac{(1-p-n\alpha)p}{1-n\alpha}(p+\alpha x)^{x-1}(q-\alpha x)^{n-x-1}$ | $\frac{\binom{n}{x}(p+\alpha x)^{x-1}(q-\alpha x)^{n-x-1}}{\sum_{t=x}^{n}\binom{n}{t}(p+\alpha t)^{t-1}(q-\alpha t)^{n-t-1}}$
quasi-logarithmic | $\frac{a\,\theta^x(a+x\lambda)^{x-1}}{x\,(1+x\lambda)^{x}}$ | $\frac{\theta^x(a+x\lambda)^{x-1}\big/[x(1+x\lambda)^x]}{\sum_{t=x}^{\infty}\theta^t(a+t\lambda)^{t-1}\big/[t(1+t\lambda)^t]}$
generalized Poisson | $\frac{\theta(\theta+x\lambda)^{x-1}}{x!}e^{-(\theta+x\lambda)}$ | $\frac{(\theta+x\lambda)^{x-1}e^{-\lambda x}/x!}{\sum_{t=x}^{\infty}(\theta+t\lambda)^{t-1}e^{-\lambda t}/t!}$
Hurwitz zeta | $C(a-1+x)^{-s}$ | $\left[(a-1+x)^{s}\,\zeta(s,a+x-1)\right]^{-1}$
Lerch | $C z^x (a+x)^{-s}$ | $\left[(a+x)^{s}\,\phi(z,s,a+x)\right]^{-1}$
linear hazard rate | $f(x)=\begin{cases} a, & x=0, \\ (a+bx)\prod_{i=0}^{x-1}(1-a-bi), & x=1,2,\ldots,n, \end{cases}$ with $a+bn=1$ | $a+bx$

References

G. Afendras, N. Balakrishnan, N. Papadatos, Orthogonal polynomials in the cumulative Ord family and its application to variance bounds, Statistics 2017, in press.

A.N. Ahmed, Characterization of beta, binomial and Poisson distributions, IEEE Transactions on Reliability 1991;40:290–293.

S.V. Aksenov, M.A. Savageau, Some properties of the Lerch family of discrete distributions, arXiv preprint arXiv:math/0504485; 2005.

M.S. Alikhan, A. Khalique, A.M. Abouammoh, On estimating parameters of a discrete Weibull distribution, IEEE Transactions on Reliability 1989;38:348–350.

A. Alzaatreh, C. Lee, F. Famoye, On the discrete analogues of continuous distributions, Statistical Methodology 2012;9:589–603.

N. Balakrishnan, Order statistics from the half logistic distribution, Journal of Statistical Computation and Simulation 1985;20:287–309.

S. Berg, Factorial series distributions with applications to capture-recapture problems, Scandinavian Journal of Statistics 1974;1:145–152.

S.K. Bhattacharya, Confluent hypergeometric distributions of discrete and continuous type with applications to accident proneness, Calcutta Statistical Association Bulletin 1966;60:1060–1066.

S. Bilal, A. Hassan, On some models leading to quasi-negative binomial distribution, Journal of Korean Society for Industrial and Applied Mathematics 2006;11:15–29.

W.R. Blischke, D.N.P. Murthy, Reliability: Modelling, Prediction and Optimization. New York: John Wiley & Sons; 2000.

C. Bracquemond, O. Gaudoin, A survey on discrete lifetime distributions, International Journal of Reliability Quality and Safety Engineering 2003;10:69–98.

S. Chakraborty, D. Chakravarty, Discrete gamma distribution: properties and parameter estimation, Communications in Statistics, Theory and Methods 2012;41:3301–3324.

C.A. Charalambides, Gould series distributions with applications to fluctuations of sums of random variables, Journal of Statistical Planning and Inference 1986;14:15–28.

C.A. Charalambides, Abel series distributions with applications to fluctuations of sample functions of stochastic processes, Communications in Statistics, Theory and Methods 1990;19:317–335.

L. Comtet, Advanced Combinatorics. Dordrecht, The Netherlands: D. Reidel; 1994.

P.C. Consul, Generalized Poisson Distributions: Applications and Properties. New York: Marcel Dekker; 1989.

P.C. Consul, Geeta distribution and its properties, Communications in Statistics, Theory and Methods 1990;19:3051–3068.

P.C. Consul, Some characterizations of the exponential class of distributions, IEEE Transactions on Reliability 1995;44:403–407.

P.C. Consul, F. Famoye, Lagrangian Probability Distributions. Boston: Birkhäuser; 2006.

P.C. Consul, G.C. Jain, A generalization of the Poisson distribution, Technometrics 1973;15:791–799.

P.C. Consul, G.C. Jain, On some interesting properties of the generalized Poisson distribution, Biometrische Zeitschrift 1973;15:495–500.

P.C. Consul, L.R. Shenton, Use of Lagrangian expansion for generating generalized probability distribution, SIAM Journal of Applied Mathematics 1972;23:239–248.

P.C. Consul, L.R. Shenton, Some interesting properties of Lagrangian distributions, Communications in Statistics 1973;2:263–272.

P.C. Consul, L.R. Shenton, On the probabilistic structure and properties of discrete Lagrangian distributions, G.P. Patil, S. Kotz, J.K. Ord, eds. Statistical Distributions in Scientific Work. Boston: D. Reidel Publishing Company; 1975:41–57.

L. Deng, R.S. Chhikara, On the characterization of the exponential distribution by independence of its integer and fractional parts, Statistica Neerlandica 1990;44:83–85.

L.G. Doray, A. Luong, Quadratic distance estimators of the zeta family, Insurance: Mathematics and Economics 1995;16:255–260.

L.G. Doray, A. Luong, Efficient estimators of the Good family, Communications in Statistics, Simulation and Computation 1997;26:1075–1088.

A. Erdelyi, W. Magnus, F. Oberhettinger, F.G. Tricomi, Higher Transcendental Functions, vol. I. New York: McGraw-Hill; 1953.

F. Famoye, Generalized geometric distribution and some of its applications, Journal of Mathematical Sciences 1997;8:1–13.

W.R. Fox, G.W. Lasker, The distribution of surname frequencies, International Statistics Review 1983;51:81–87.

W. Glanzel, A characterization theorem based on truncated moments and its applications to some distribution families, Mathematical Statistics and Probability. Statistical Inference and Methods. Dordrecht, The Netherlands: D. Reidel; 1987;vol. B:75–84.

W. Glanzel, Characterization through some conditional moments of Pearson-type distributions and discrete analogues, Sankhyā 1991;53:17–24.

W. Glanzel, A. Telcs, A. Schubert, Characterization by truncated moments and its application to Pearson-type distributions, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1984;66:173–183.

E. Gomez-Deniz, E. Calderin-Ojeda, The discrete Lindley distribution: properties and applications, Journal of Statistical Computation and Simulation 2011;81:1405–1416.

I.J. Good, The population frequencies of species and the estimation of population parameters, Biometrika 1953;40:237–264.

P.L. Gupta, R.C. Gupta, S. Ong, H.M. Srivastava, A class of Hurwitz-Lerch zeta distributions and their applications in reliability, Applied Mathematics and Computation 2008;196:521–531.

P.L. Gupta, R.C. Gupta, R.C. Tripathi, On the monotonic properties of discrete failure rates, Journal of Statistical Planning and Inference 1997;65:255–268.

R.C. Gupta, Modified power series distributions and some of its applications, Sankhyā, Series B 1974;36:288–298.

A. Gut, Probability: A Graduate Course. New York: Springer-Verlag; 2005.

F.A. Haight, A distribution analogous to the Borel-Tanner, Biometrika 1961;48:167–173.

F.A. Haight, Some statistical problems in connection with word association data, Journal of Mathematical Psychology 1966;3:217–233.

F.A. Haight, Two probability distributions connected with Zipf's rank-size conjecture, Zastosowania Matematyki 1969;10:225–228.

A. Hassan, S. Bilal, On some properties of quasi-negative binomial distribution and its applications, Journal of Modern Applied Mathematical Methods 2008;7:616–631.

J. Hoffman-Jorgensen, Probability with a View Towards Statistics. London: Chapman and Hall; 1994.

G.C. Jain, P.C. Consul, A generalized negative binomial distribution, SIAM Journal of Applied Mathematics 1971;21:501–513.

M.A. Jazi, C.D. Lai, M.H. Alamatsaz, A discrete inverse Weibull distribution and estimation of its parameters, Statistical Methodology 2010;7:121–132.

N.L. Johnson, S. Kotz, A.W. Kemp, Univariate Discrete Distributions. second edition New York: John Wiley & Sons; 1992.

L. Katz, Unified treatment of a broad class of discrete probability distributions, G.P. Patil, ed. Classical and Contagious Discrete Distributions. Calcutta, India: Statistical Publishing Society; 1965:175–185.

A.W. Kemp, A wide class of discrete distributions and the associated differential equations, Sankhyā, Series A 1968;30:401–410.

A.W. Kemp, Studies in Univariate Discrete Distribution Theory Based on the Generalized Hypergeometric Function and Associated Differential Equations. [Ph.D. thesis] Northern Ireland: Queens University of Belfast; 1968.

A.W. Kemp, The discrete half-normal distribution, B.C. Arnold, N. Balakrishnan, J.M. Sarabia, R. Minguez, eds. Advances in Mathematical and Statistical Modeling. Birkhäuser; 2008:353–360.

A.W. Kemp, Families of power series distributions with particular references to the Lerch family, Journal of Statistical Planning and Inference 2010;140:2255–2259.

D.D. Kosambi, Characteristic properties of series distributions, Proceedings of the National Institute of Sciences of India, vol. 15; 1949:109–113.

H. Krishna, P.S. Pundir, Discrete Burr and discrete Pareto distributions, Statistical Methodology 2009;6:177–188.

K.B. Kulasekera, D.W. Tonkyn, A new discrete distribution, with applications to survival, dispersal and dispersion, Communications in Statistics, Simulation and Computation 1992;21:499–518.

C.D. Lai, Issues concerning construction of discrete lifetime models, Quality Technology and Quantitative Management 2013;10:251–262.

C.D. Lai, D.Q. Wang, A finite range discrete life distribution, International Journal of Reliability, Quality and Safety Engineering 1995;2:147–160.

M. Lerch, Note sur la fonction, Acta Mathematica 1887;11:19–24.

G.D. Lin, C. Hu, The Riemann zeta distribution, Bernoulli 2001;7:817–822.

W. Magnus, F. Oberhettinger, R.P. Soni, Formulas and Theorems for the Special Functions of Mathematical Physics. New York: Springer-Verlag; 1966.

B. Mandelbrot, A note on a class of skew distribution functions: analysis and critique of a paper by H.A. Simon, Information and Control 1959;2:90–99.

B. Mandelbrot, Information theory and psycholinguistics: a theory of word frequencies, P. Lazarsfeld, N. Henry, eds. Readings in Mathematical Social Science. Boston: MIT Press; 1966.

S.G. Mohanty, On a generalized two-coin tossing problem, Biometrical Journal 1966;8:266–272.

N.U. Nair, P.G. Sankaran, Characterization of the Pearson family of distributions, IEEE Transactions on Reliability 1991;40:75–77.

N.U. Nair, K.K. Sudheesh, Some results on lower variance bounds useful in reliability modelling and estimation, Annals of the Institute of Statistical Mathematics 2008;60:591–603.

T. Nakagawa, S. Osaki, The discrete Weibull distributions, IEEE Transactions on Reliability 1975;24:300–301.

S.B. Nandi, K.K. Das, A family of Abel series distributions, Sankhyā, Series B 1994;56:147–164.

V. Nekoukhou, M.H. Alamatsaz, H. Bidram, Discrete generalized exponential distribution of a second type, Statistics 2013;47:876–887.

A. Noack, A class of random variables with discrete distributions, Annals of Mathematical Statistics 1950;21:137–142.

J.K. Ord, Graphical methods for a class of discrete distributions, Journal of the Royal Statistical Society, Series A 1967;130:232–238.

J.K. Ord, On a system of discrete distributions, Biometrika 1967;54:649–656.

J.K. Ord, Families of Frequency Distributions. London: Griffin; 1972.

S. Osaki, X. Li, Characterizations of the gamma and negative binomial distributions, IEEE Transactions on Reliability 1988;37:379–382.

W.J. Padgett, J.D. Spurrier, On discrete failure models, IEEE Transactions on Reliability 1985;34:458–459.

J. Panaretos, On the evolution of surnames, International Statistical Review 1989;51:161–179.

G.P. Patil, Certain properties of power series distributions, Annals of the Institute of Statistical Mathematics 1962;14:179–182.

A.H. Roknabadi, G.R.M. Borzadaran, M. Khorashadizadeh, Some aspects of discrete hazard rate function in telescopic families, Economic Quality Control 2009;24:35–42.

A.A. Salvia, R.C. Bollinger, On discrete hazard functions, IEEE Transactions on Reliability 1982;31:458–459.

H. Sato, M. Ikota, S. Aritoshi, H. Masuda, A new defect distribution metrology with a constant discrete exponential formula and its applications, IEEE Transactions on Semiconductor Manufacturing 1999;12:409–418.

H.A. Seal, A probability distribution of deaths at age x when policies are counted instead of lives, Skandinavisk Aktuarietidskrift 1947;30:18–43.

S. Shan, On the generalized Zipf distribution, part I, Information Processing and Management 2005;41:1369–1386.

T.K. Sindhu, An Extended Pearson System Useful in Reliability Analysis. [Ph.D. thesis] Cochin, India: Cochin University of Science and Technology; 2002.

W.E. Stein, R. Dattero, A new discrete Weibull distribution, IEEE Transactions on Reliability 1984;33:196–197.

F.W. Steutel, J.G.F. Thiemann, On the independence of integer and fractional parts, Statistica Neerlandica 1989;43:53–59.

K.K. Sudheesh, N.U. Nair, Characterization of discrete distribution by conditional variance, Metron 2010;LXVIII:77–85.

J.C. Tanner, A problem of interference between two queues, Biometrika 1953;40:58–69.

J.P. Vilaplana, The Hurwitz distribution, M.L. Puri, J.P. Vilaplana, W. Wertz, eds. New Perspectives in Theoretical and Applied Statistics. New York: John Wiley & Sons; 1987.

S. Zacks, Estimating the shift to wear-out of systems having exponential-Weibull life, Operations Research 1984;32:741–749.

G.K. Zipf, Human Behaviour and the Principle of Least Effort. Cambridge, MA: Addison-Wesley; 1949.

P. Zornig, G. Altmann, Unified representation of Zipf distributions, Computational Statistics and Data Analysis 1995;19:461–473.
