Chapter 2

Basic Reliability Concepts

Abstract

The basic reliability functions that can be used to model lifetime data and explain failure patterns are the topics of discussion in this chapter. We begin with the conventional hazard rate, defined as the ratio of the probability mass function to the survival function. This is followed by an alternative hazard function introduced to overcome certain limitations of the conventional rate. Properties of both these hazard rates and their interrelationships are discussed. Then, the concept of the residual life distribution and its characteristics, such as the mean, variance and moments, are discussed. Various identities connecting the hazard rates, the mean residual life function and various residual functions are derived, and some special relationships are employed for characterizing discrete life distributions. We then work out two problems to demonstrate how the characteristic properties enable the identification of the life distribution. Also, the role of partial moments in the context of reliability modelling is examined. Various concepts in reversed time have been of interest in reliability and related areas. Accordingly, the reversed hazard rate, reversed mean residual life and reversed variance residual life are defined, and their interrelationships and characterizations based on them are reviewed. In the case of finite range distributions, it is shown that all the concepts in reversed time can assume constant values, and these are related to the reversed lack of memory property characteristic of the reversed geometric law. Along with the traditional reliability functions, the notion of odds functions can also play a role in reliability modelling and analysis. We explain the relevant results in this connection. The log-odds functions and rates and their applications are also studied.
Mixture distributions and weighted distributions also appear as models in certain situations, and the hazard rates and reversed hazard rates for these two cases are derived and are subsequently used to characterize certain lifetime distributions.

Keywords

Hazard rate; Mean residual life; Partial moments; Concepts in reversed time; Odds function; Log-odds rate; Characterizations

2.1 Introduction

As mentioned in the last chapter, the reliability of a device is the probability that the device performs its intended function for a given period of time under conditions specified for its operation. When the device does not perform its function satisfactorily, we say that it has failed. When the random variable X represents the lifetime of a device, the observation on X is realized as the time of failure. The primary concern in reliability theory is then to understand the pattern in which failures occur for different devices and under varying operating environments. This is often done by analyzing the observed failure times or ages at failure with the help of a model that satisfactorily represents the predominant features of the data. One direct method is to find a probability distribution that provides a reasonable fit to the observations. Sometimes, there may exist more than one distribution that can pass an appropriate goodness-of-fit test. In any case, it is more desirable to find a probability model that manifests certain physical properties of the failure mechanism. In reliability theory, some basic concepts that help in the study of failure patterns have been developed. The objective of this chapter is to define such concepts and discuss their properties and inter-relationships. Two important aspects that necessitate the study of these concepts are (a) various functions considered in this context determine the life distribution uniquely, so that the knowledge of their functional form is equivalent to that of the distribution itself, and (b) it should be easier to deal with these functions than the distribution function or probability density function of the corresponding distributions.

2.2 Hazard Rate Function

Let X be a discrete random variable assuming values in N = {0, 1, 2, …} with probability mass function f(x) and survival function S(x) = P(X ≥ x). We will think of X as the random lifetime of a device that can fail only at times (ages) in N. The hazard rate function of X is defined as

$$ h(x) = \frac{f(x)}{S(x)} \qquad (2.1) $$

at points x for which S(x)>0Image. Treated as a function of x, the hazard rate is also called failure rate, instantaneous death rate, force of mortality and intensity function in other disciplines such as survival analysis, actuarial science, demography, extreme value theory and bio-sciences. Although in the continuous case, the concept of hazard rate dates back to historical studies in human mortality, its discrete version came up much later in the works of Barlow and Proschan (1965), Cox (1972) and Kalbfleisch and Prentice (2002), to mention a few.

When X has a finite support {0, 1, …, n}, n < ∞, then h(n) = 1. As a convention, we take h(x) = 1 for x > n. The hazard function h(x) is interpreted as the conditional probability of failure of the device at age x, given that it did not fail before age x. Thus, 0 ≤ h(x) ≤ 1. The interpretation and boundedness of the discrete hazard rate are thus different from those in the continuous case.

We see from (2.1) that h(x) is determined from f(x) or S(x). The converse, that the hazard rate function determines the distribution of X uniquely, is also true. To see this, we note that

$$ h(x) = \frac{S(x) - S(x+1)}{S(x)}, $$

or

$$ \frac{S(x+1)}{S(x)} = 1 - h(x). $$

So,

$$ S(x) = \begin{cases} \prod_{t=0}^{x-1} (1 - h(t)), & x \ge 1, \\ 1, & x = 0. \end{cases} \qquad (2.2) $$

Eq. (2.2) also reveals that h(x) can be used as a tool to model the life distribution. When a functional form for h(x) is assumed as a model for a given data set, one has to ensure that the assumed form conforms to the hazard rate function of a distribution. The following theorem is useful in this regard.

Theorem 2.1

A necessary and sufficient condition for h : N → [0, 1] to be the hazard rate function of a distribution with support N is that h(x) ∈ [0, 1] for x ∈ N and $\sum_{t=0}^{\infty} h(t) = \infty$.

If the probability mass function is required, from (2.1) and (2.2) we see that

$$ f(x) = h(x) \prod_{t=0}^{x-1} (1 - h(t)). \qquad (2.3) $$

The above results have appeared repeatedly in several papers; see, for example, Gupta (1979), Shaked et al. (1995) and Kemp (2004).
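These identities are easy to verify numerically. The following sketch (with the hazard rate supplied as an arbitrary Python function) reconstructs S(x) and f(x) from h(x) via (2.2) and (2.3); the constant-hazard case recovers the geometric law of Example 2.1.

```python
def survival_from_hazard(h, x):
    """S(x) = prod_{t=0}^{x-1} (1 - h(t)), with S(0) = 1 (Eq. (2.2))."""
    s = 1.0
    for t in range(x):
        s *= 1.0 - h(t)
    return s

def pmf_from_hazard(h, x):
    """f(x) = h(x) * prod_{t=0}^{x-1} (1 - h(t)) (Eq. (2.3))."""
    return h(x) * survival_from_hazard(h, x)

# A constant hazard h(x) = p reproduces the geometric law f(x) = q^x p.
p = 0.3
f4 = pmf_from_hazard(lambda x: p, 4)   # equals (1 - p)**4 * p
```

The condition of Theorem 2.1 guarantees that the resulting f sums to one over N, which can be checked by accumulating `pmf_from_hazard` over a long range.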

Example 2.1

Let X follow the geometric distribution with parameter p (denoted G(p)), specified by

$$ f(x) = q^x p, \quad x \in N, \; 0 < p < 1, \; q = 1 - p. \qquad (2.4) $$

Then,

$$ S(x) = q^x, $$

so that h(x) = p, a constant for all x.

Example 2.2

Consider the Waring distribution with parameters a and b (denoted W(a, b)) having probability mass function

$$ f(x) = \frac{(a - b)\,(b)_x}{(a)_{x+1}}, \quad x \in N, \; a > b, \qquad (2.5) $$

developed by Irwin (1968, 1975a, 1975b, 1975c), where

$$ (b)_x = b(b+1)\cdots(b+x-1) = \frac{\Gamma(b+x)}{\Gamma(b)}, \quad (b)_0 = 1, $$

is the Pochhammer symbol (rising factorial). In studying the properties of this distribution, we need the Waring expansion

$$ \frac{1}{x - a} = \frac{1}{x} + \frac{a}{x(x+1)} + \frac{a(a+1)}{x(x+1)(x+2)} + \cdots \qquad (2.6) $$

which converges for x > a. Then,

$$ S(x) = (a-b) \sum_{t=x}^{\infty} \frac{(b)_t}{(a)_{t+1}} = (a-b)\,\frac{(b)_x}{(a)_x}\left[\frac{1}{a+x} + \frac{b+x}{(a+x)(a+x+1)} + \cdots\right] = \frac{(b)_x}{(a)_x}, \ \text{by (2.6)}. \qquad (2.7) $$

Hence, we get in this case

$$ h(x) = \frac{a - b}{a + x}. \qquad (2.8) $$

Example 2.3

A special case of the negative hypergeometric law with parameters n and k is defined by the probability mass function

$$ f(x) = \binom{-1}{x}\binom{-k}{n-x} \Big/ \binom{-1-k}{n}, \quad x = 0, 1, \ldots, n, \; k > 0. \qquad (2.9) $$

Then,

$$ S(x) = \sum_{t=x}^{n} f(t) = \sum_{t=x}^{n} \binom{k+n-t-1}{n-t} \Big/ \binom{k+n}{n} \qquad (2.10) $$

on using the identity

$$ \binom{-p}{k} = (-1)^k \binom{p+k-1}{k}. $$

So,

$$ S(x) = \sum_{t=0}^{n-x} \binom{k+t-1}{t} \Big/ \binom{k+n}{n}. $$

Now to find the sum on the right hand side, the combinatorial expression (Riordan, 1968)

$$ \sum_{x=0}^{n} \binom{a+n-x-1}{n-x} = \binom{a+n}{n} $$

is employed in order to obtain

$$ S(x) = \binom{k+n-x}{n-x} \Big/ \binom{k+n}{n}. \qquad (2.11) $$

We then find

$$ h(x) = \frac{\binom{k+n-x-1}{n-x}}{\binom{k+n-x}{n-x}} = \frac{k}{k+n-x}. $$

The distribution in (2.11) will be denoted by NH(n, k). For more functional forms of h(x) that characterize various distributions, see Table 3.2.
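The closed form h(x) = k/(k + n − x) can be checked directly against (2.11); a sketch, restricted to integer k so that `math.comb` applies (the model itself allows any real k > 0):

```python
from math import comb

def nh_survival(x, n, k):
    """S(x) = C(k+n-x, n-x) / C(k+n, n), Eq. (2.11) (integer k assumed here)."""
    return comb(k + n - x, n - x) / comb(k + n, n)

def nh_hazard(x, n, k):
    """h(x) = [S(x) - S(x+1)] / S(x); S(n+1) = 0 on the finite support."""
    s_next = nh_survival(x + 1, n, k) if x < n else 0.0
    return (nh_survival(x, n, k) - s_next) / nh_survival(x, n, k)

n, k = 6, 4                                   # illustrative values
closed_form = [k / (k + n - x) for x in range(n + 1)]
computed = [nh_hazard(x, n, k) for x in range(n + 1)]
```

Note that the hazard increases with x and reaches 1 at the end point x = n, as required for a finite support.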

Remark 2.1

The geometric, Waring and negative hypergeometric models form a set of models possessing some attractive properties for their reliability characteristics, in much the same way as the exponential, Pareto II and rescaled beta distributions in the continuous case. The results in the above examples show that the models (2.4), (2.5) and (2.9) have hazard rates of the form

$$ h(x) = (l x + m)^{-1} \qquad (2.12) $$

with l = 0 for the geometric, l = 1/(a − b) > 0 for the Waring, and l = −1/k < 0 for the negative hypergeometric distribution.

Remark 2.2

Using (2.3), it can be seen that a reciprocal linear hazard rate function as in (2.12) characterizes the above three distributions. A direct proof of this fact is available in Xekalaki (1983).

Remark 2.3

When Remark 2.1 is employed in a practical problem, it should be borne in mind that the support of X is N. Thus, the constant hazard rate of the geometric distribution means that a device with such a lifetime distribution does not age. However, when the support of X is {0, 1, …, n}, n < ∞, h(x) = p leads to

$$ f(x) = \begin{cases} q^x p, & x = 0, 1, 2, \ldots, n-1, \\ q^n, & x = n. \end{cases} $$

If X has bounded support {0, 1, 2, …, n} and (2.12) is satisfied with f(0) > 0, then l < 0, m > 1 and n = (1 − m)/l. This follows from the facts that at x = 0, h(0) = 1/m < 1, and at x = n, h(n) = 1, so that ln + m = 1.

The sum of the hazard rates from 0 through x − 1 is of interest in reliability theory and is called the cumulative hazard rate, defined by

$$ H(x) = \sum_{t=0}^{x-1} h(t). \qquad (2.13) $$

Also define H(0) = 0. Graphically, the cumulative hazard rate represents the area under the step function representing h(x). Definition (2.13) does not satisfy properties analogous to the continuous case, in which the cumulative hazard rate satisfies the identity

$$ H(x) = -\log \bar F(x), \quad \bar F(x) = P(X > x). $$

Therefore, Cox and Oakes (1984) proposed an alternative definition of cumulative hazard rate in the form

$$ H_1(x) = -\log S(x). \qquad (2.14) $$

This means that

$$ H_1(x) = -\log \sum_{t \ge x} f(t) = -\log \prod_{t=0}^{x-1} (1 - h(t)). \qquad (2.15) $$

If

$$ H_1(x) = \sum_{t=0}^{x-1} h_1(t), \quad x = 1, 2, \ldots, \qquad (2.16) $$

then H₁(x) is a cumulative hazard rate corresponding to an alternative hazard rate function defined by

$$ h_1(x) = \log \frac{S(x)}{S(x+1)}, \quad x = 0, 1, 2, \ldots. \qquad (2.17) $$

Xie et al. (2002a) advocated the use of (2.17) as the hazard rate function instead of (2.1), citing the following arguments. In the continuous case, the hazard rate is not a probability, whereas (2.1) is a conditional probability and is therefore bounded. Consequently, (2.1) cannot increase fast, either linearly or exponentially, to provide models for lifetimes of components in the wear-out phase. When the cumulative hazard rate is defined as the negative logarithm of the survival function, −log S(x) ≠ Σ_{t=0}^{x−1} h(t). This causes problems in defining discrete ageing concepts that are analogues of their continuous counterparts, such as the increasing hazard rate average (see Chapter 4). Thus, discrete ageing concepts based on h(x) may not convey the same meaning as those in the continuous case. Similar problems persist with the construction of proportional hazards models and with series systems. Further details are provided in Sections 2.10 and 2.11. It may also be noted that, unlike h(x), the definition of h₁(x) does not have a direct probability interpretation. Xie et al. (2002a) and Kemp (2004) have obtained the following interrelationships among the two hazard rate functions and the other reliability functions discussed so far:

$$ H_1(x) = -\sum_{t=0}^{x-1} \log\big(1 - H(t+1) + H(t)\big), $$
$$ H(x) = \sum_{t=0}^{x-1} \big(1 - \exp[H_1(t) - H_1(t+1)]\big), $$
$$ S(x) = \exp[-H_1(x)] = \prod_{t=0}^{x-1} \big(1 - H(t+1) + H(t)\big), $$
$$ f(x) = \exp[-H_1(x)] - \exp[-H_1(x+1)] = \big(H(x+1) - H(x)\big) \prod_{t=0}^{x-1} \big(1 - H(t+1) + H(t)\big), $$
$$ h(x) = 1 - \exp[H_1(x) - H_1(x+1)] = H(x+1) - H(x), $$

and

$$ h(x) = 1 - e^{-h_1(x)}. \qquad (2.18) $$

Thus, the function h(x) (respectively H(x)) determines h₁(x) (respectively H₁(x)) uniquely, and hence is useful in characterizing life distributions. We give some examples that compare the expressions of h(x) and h₁(x).

Example 2.4

The discrete Pareto distribution

$$ S(x) = \left(\frac{\alpha}{x + \alpha - 1}\right)^{\beta}, \quad x = 1, 2, \ldots, $$

provides

$$ h_1(x) = \beta \log\left[\frac{x + \alpha}{x + \alpha - 1}\right], $$

whereas

$$ h(x) = 1 - \left(\frac{x + \alpha - 1}{x + \alpha}\right)^{\beta}. $$

Example 2.5

The Waring distribution with

$$ S(x) = \frac{(b)_x}{(a)_x} $$

in Example 2.2 gives

$$ h_1(x) = \log\left(\frac{a + x}{b + x}\right), $$

compared to h(x) = (a − b)/(a + x) from (2.8).

2.3 Mean Residual Life

The analysis of the lifetime of a device after it has attained age x is of special relevance in reliability and survival analysis. Thus, if X is the original lifetime with survival function S(x) = P(X ≥ x), the corresponding residual lifetime after age x is the random variable X_x = (X − x | X > x). From the definition of conditional probability, one can arrive at the distribution of X_x as

$$ S_x(t) = \frac{S(x + t + 1)}{S(x + 1)}, \quad t = 0, 1, \ldots. \qquad (2.19) $$

The mean, variance, partial moments, coefficient of variation and percentiles of the distribution in (2.19) have been discussed extensively in the literature in the continuous case.

The mean residual life of X is defined as

$$ m(x) = E(X - x \mid X > x) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} (t - x) f(t) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} S(t), \quad x = -1, 0, 1, \ldots \qquad (2.20) $$

It is easy to see that m(x) is the mean of the distribution in (2.19). When x = −1,

$$ m(-1) = E(X + 1) = \sum_{t=0}^{\infty} S(t) = 1 + \mu. $$

Characterizations of the distribution of X in terms of m(x) and the hazard rate have been studied by Nair and Hitha (1989). From (2.20),

$$ S(x+1)\, m(x) = \sum_{t=x+1}^{\infty} S(t), $$

which leads to

$$ S(x)\, m(x-1) - S(x+1)\, m(x) = S(x), $$

and to the identity

$$ h(x) = \frac{1 + m(x) - m(x-1)}{m(x)}, \quad x = 0, 1, 2, \ldots. \qquad (2.21) $$

Now, from (2.2), we have

$$ S(x) = \begin{cases} \prod_{t=0}^{x-1} \dfrac{m(t-1) - 1}{m(t)}, & x \ge 1, \\ 1, & x = 0. \end{cases} \qquad (2.22) $$

Thus, the three functions h(x), m(x) and S(x) determine each other uniquely. Though h(x) can be determined from m(x) and vice versa, both have unique features that make them indispensable in reliability theory. The mean residual life function may exist when the hazard function does not exist, and vice versa, as will be seen in some of the examples considered later. While h(x) is a local measure of the failure pattern at any x, the mean residual life function depends upon the life history of the device at all ages beyond x, and hence the latter is more informative. At the same time, m(x), being a summary measure, is highly sensitive to a single long-term survivor in a data set, which is not desirable. There are other interesting properties that distinguish the two concepts, as will be seen in the subsequent chapters.
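This mutual determination can be illustrated with a numerical round trip: compute m(x) from a survival function via (2.20), then rebuild S(x) from m(x) via (2.22). A sketch using the geometric law as the test case:

```python
def mrl(S, x, tmax=5000):
    """m(x) = sum_{t=x+1}^{inf} S(t) / S(x+1), tail truncated at tmax (Eq. (2.20))."""
    return sum(S(t) for t in range(x + 1, tmax)) / S(x + 1)

def survival_from_mrl(m, x):
    """S(x) = prod_{t=0}^{x-1} (m(t-1) - 1) / m(t), S(0) = 1 (Eq. (2.22))."""
    s = 1.0
    for t in range(x):
        s *= (m(t - 1) - 1.0) / m(t)
    return s

p, q = 0.4, 0.6                                      # illustrative geometric parameters
S_geom = lambda x: q ** x
m_vals = [mrl(S_geom, x) for x in range(5)]          # constant 1/p for the geometric law
S_back = [survival_from_mrl(lambda x: 1.0 / p, x) for x in range(5)]
```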

Since 0 ≤ h(x) ≤ 1, we can note from (2.21) that

$$ 1 + m(x) - m(x-1) \ge 0 \qquad (2.23) $$

and m(x) ≥ 1. For many discrete distributions, the expression for m(x) is not of a simple form. In the following examples, we have simple forms for m(x) that have many desirable properties. For more examples of the m(x) function, see Table 2.4.

Example 2.6

For the geometric distribution G(p) in Example 2.1,

$$ m(x) = \frac{1}{q^{x+1}} \sum_{t=x+1}^{\infty} q^{t} = \frac{1}{p} $$

is the reciprocal of the hazard rate function, so that h(x) m(x) = 1.

Example 2.7

The Waring distribution W(a, b) in Example 2.2 yields

$$ m(x) = \frac{(a)_{x+1}}{(b)_{x+1}} \sum_{t=x+1}^{\infty} \frac{(b)_t}{(a)_t} = \frac{a + x}{a - b - 1}, \quad \text{by the Waring expansion in (2.6)}. $$

Notice that m(x) is a linearly increasing function and satisfies h(x) m(x) = (a − b)/(a − b − 1) > 1.

Example 2.8

In the case of the negative hypergeometric distribution NH(n, k) in Example 2.3, we have

$$ m(x) = \frac{\binom{k+n}{n}}{\binom{k+n-x-1}{n-x-1}} \sum_{t=x+1}^{n} \frac{\binom{k+n-t}{n-t}}{\binom{k+n}{n}} = \frac{\binom{k+n-x}{n-x-1}}{\binom{k+n-x-1}{n-x-1}} = \frac{k + n - x}{k + 1}. $$

We can see that m(x) is linear and decreasing in x, and h(x) m(x) = k/(k + 1) < 1.

Nair and Hitha (1989) and Hitha and Nair (1989) have established the following characterizations of the above three models.

Theorem 2.2

A necessary and sufficient condition for a non-negative discrete random variable X to have a mean residual life function of the form

$$ m(x) = l x + m, \quad m > 0, $$

is that X is geometric (l = 0), Waring (l > 0), or negative hypergeometric (l < 0).

Theorem 2.3

The relationship

$$ h(x)\, m(x) = C, $$

where C is a constant, is satisfied by a non-negative discrete random variable X if and only if X is distributed as geometric for C = 1, Waring for C > 1, and negative hypergeometric for C < 1.

Identification of lifetime distributions that are uniquely determined by simple relationships between various reliability concepts has attracted several studies in the past and continues to be a fertile area of research. Many results pertaining to individual distributions and families of distributions are available in this context. Theorem 2.3 belongs to this category concerning three specific distributions and its applications will be discussed later in Chapter 4. A more general result in this connection is the following.

Let k(x) and c(x) be real-valued functions defined on N such that E(k(X)) = μ_k and V(k(X)) = σ_k². We define

$$ m_k(x) = E[k(X) \mid X > x]. $$

Further, assume that E c²(X) < ∞ and E[Δc(X) g(X)] < ∞, where Δ is the usual difference operator, Δa(x) = a(x+1) − a(x), and g(x) is some real-valued function.

Theorem 2.4

Nair and Sudheesh, 2008

Let X be a discrete random variable taking values in N, and let k(x), c(x) and g(x) be real-valued functions satisfying the above conditions. Then, for every c(x) and some g(x) and k(x), the following statements are equivalent:

(i)
$$ \frac{f(x+1)}{f(x)} = \frac{\sigma_k g_k(x)}{\sigma_k g_k(x+1) - \mu_k + k(x+1)} \qquad (2.24) $$

with g_k(0) = (μ_k − k(0))/σ_k, and f(0) evaluated from $\sum_{x=0}^{\infty} f(x) = 1$;

(ii)
$$ m_k(x) = \mu_k + \sigma_k\, \frac{h(x)\, g_k(x)}{1 - h(x)}; \qquad (2.25) $$

(iii)
$$ V(c(X)) \ge E^2\big(g(X)\, \Delta c(X)\big), \qquad (2.26) $$

where h(x) in (2.25) is the hazard rate function.

When k(x) = x, we write

$$ v(x) = E(X \mid X > x) \qquad (2.27) $$

and call it the vitality function, which is in fact a conditional mean life function. It satisfies

$$ v(-1) = \mu \quad \text{and} \quad v(x) = m(x) + x. $$

Eq. (2.25) becomes a relationship between the mean residual life function and the hazard rate function of the form

$$ m(x) = \mu - x + \sigma\, \frac{h(x)\, g(x)}{1 - h(x)} $$

that characterizes the class of discrete distributions satisfying

$$ \frac{f(x+1)}{f(x)} = \frac{\sigma g(x)}{\sigma g(x+1) - \mu + x + 1}, \qquad (2.28) $$

where σ² = V(X). It is further observed in Nair and Sudheesh (2008) that

  1. (1)  the equality in (2.26) holds if and only if c(x) is linear in k(x);
  2. (2)  E g(X) = σ_k^{-1} Cov(X, k(X));
  3. (3)  the expression for g(x) is unique for a particular choice of k(x), but one can have different choices of g(x) for the same distribution when k(x) is different;
  4. (4)  for a given k(x), the corresponding g(x) characterizes the distribution of X.

Specialization of the identity in (2.25) for various distributions and their implications will be considered later in Chapter 3 wherein we discuss discrete lifetime models. The variance inequality is also useful in the unbiased estimation of functions of X.

Remark 2.4

Since h(x) = 1 − e^{−h₁(x)}, all the characteristic properties mentioned above can be translated in terms of m(x) and h₁(x).

Though closely related to the mean residual life function, the vitality function introduced in (2.27) has some importance in its own right in lifetime studies. By definition,

$$ v(x) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} t\, f(t), \quad x = -1, 0, 1, \ldots. $$

Some properties of v(x) that are of interest in the sequel are given below. From the definition,

$$ v(x-1)\, S(x) - v(x)\, S(x+1) = x\, f(x). $$

Dividing by S(x) and simplifying, we get

$$ v(x) - v(x-1) = h(x)\,[v(x) - x] = h(x)\, m(x). \qquad (2.29) $$

Further,

$$ v(x) > x \quad \text{for all } x \text{ in } N. $$

Analogous to the presentation of the vitality function given in Kupka and Loo (1989), we note that over the integer interval [0, x], v([0, x]) = v(x−1) − v(−1) represents the increment in the conditional mean life that is achieved by surviving from age 0 to age x. Thus, low (high) values of v([0, x]) mean that the device is ageing rapidly (slowly) during [0, x], and this justifies the name vitality function for the lifetime X. However, it may be noticed that the vitality function is always non-decreasing, unlike the mean residual life function, which can be either decreasing or increasing.
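Identity (2.29) and the monotonicity of the vitality function can be verified on any discrete lifetime; the sketch below uses a Poisson distribution truncated at N as a convenient stand-in, writing v(x) for the vitality E(X | X > x):

```python
import math

lam, N = 2.0, 80                               # illustrative Poisson rate, truncation point
f = [math.exp(-lam) * lam**x / math.factorial(x) for x in range(N)]
S = [sum(f[x:]) for x in range(N + 1)]         # S(x) = P(X >= x); S(N) = 0

def v(x):
    """Vitality v(x) = E(X | X > x)."""
    return sum(t * f[t] for t in range(x + 1, N)) / S[x + 1]

def h(x):
    return f[x] / S[x]

for x in range(0, 20):
    lhs = v(x) - v(x - 1)
    rhs = h(x) * (v(x) - x)                    # Eq. (2.29): equals h(x) m(x)
    assert abs(lhs - rhs) < 1e-9
    assert v(x) >= v(x - 1) - 1e-12            # v is non-decreasing
```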

Returning to the reliability functions based on real-valued functions k(X) of X, more characterizations and useful relationships can be obtained. Let F(x) be the distribution function of X such that F(A) = 1 for some A > 0.

Theorem 2.5

Gupta, 1975

If E[k(X)] < ∞, then

$$ S(x+1) = \prod_{y=0}^{x-1} \frac{m_k(y)}{m_k(y+1) + k(y+2) - k(y+1)}, \quad 0 \le x < A, $$

if and only if

$$ m_k(y) = E(k(X) \mid X > y) - k(y+1), \quad 0 \le y < A. $$

In particular, X has the geometric distribution G(p) if and only if

$$ E(X \mid X > y) = E(X) + y + 1. $$

Ruiz and Navarro (1995) considered discrete random variables taking values in C = {x_a, x_{a+1}, …, x_b}, x_i < x_{i+1}, and defined, for a monotonic k(x), the doubly truncated mean function

$$ M(x, y) = E[k(X) \mid x \le X \le y] = \frac{1}{P(x \le X \le y)} \sum_{x \le x_i \le y} f(x_i)\, k(x_i) $$

for (x, y) ∈ D = {(s, t) : F(s) < F(t)}, with F(x) = P(X ≤ x) and f(x_i) = F(x_i) − F(x_{i−1}). Then,

$$ F(x_i) = \frac{P_1(x_i, x_i)}{P_1(x_i, x_i) + P_2(x_i, x_i) - 1}, $$

where

$$ P_1(x, y) = \prod_{x_i < x} \frac{M(x_{i+1}, y) - k(x_i)}{M(x_i, y) - k(x_i)} $$

and

$$ P_2(x, y) = \prod_{y < x_i} \frac{k(x_i) - M(x, x_{i-1})}{k(x_i) - M(x, x_i)}. $$

They also obtained a necessary and sufficient condition for a given function of (x, y) to be a doubly truncated mean function. This result is of relevance in reliability modelling since, when k(X) = X and the range of X is N, M(x, y) tends to the mean residual life as y → ∞ and to the reversed mean residual life (Section 2.7) as x → 0.

Example 2.9

For k(X) = X, the relationship

$$ M(x, y) = \frac{q}{p} + \frac{x q^{x-1} - (y+1) q^{y}}{q^{x-1} - q^{y}} $$

characterizes the geometric distribution G(p) with S(x) = q^x, x = 0, 1, 2, ….

The conditional moments

$$ m_r(x) = E(X^r \mid X > x) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} t^r f(t) $$

satisfy

$$ h(x) = \frac{m_r(x-1) - m_r(x)}{x^r - m_r(x)} \qquad (2.30) $$

so that the distribution of X is determined from m_r(x) by virtue of (2.2). Further, since the left-hand side of (2.30) is independent of r,

$$ \frac{m_r(x-1) - m_r(x)}{x^r - m_r(x)} = \frac{m_{r-1}(x-1) - m_{r-1}(x)}{x^{r-1} - m_{r-1}(x)} $$

is a recurrence relation connecting two consecutive moments.

Glanzel et al. (1984) considered k(x) to be a monotonic function that is not constant on any set {n, n+1, …} ⊆ N for finite n. Then,

$$ f(x) = \begin{cases} \left( \prod_{t=0}^{x-1} \dfrac{m_k(t) - k(t)}{m_k(t+1) - k(t)} \right) \dfrac{m_k(x+1) - m_k(x)}{m_k(x+1) - k(x)}, & x \in N,\ x < n, \\[2mm] \prod_{t=0}^{n-1} \dfrac{m_k(t) - k(t)}{m_k(t+1) - k(t)}, & x = n, \end{cases} $$

where, in this representation, m_k(x) denotes E[k(X) | X ≥ x].

Further, if k(x) and g(x) are two functions defined on N such that l(x) = m_g(x)/m_k(x) is strictly monotonic, then the distribution of X has probability mass function f(x) satisfying

$$ A(x) = \frac{f(x)}{f(0)} = \frac{l(x+1) - l(x)}{l(1) - l(0)} \prod_{t=1}^{x} \frac{l(t-1)\, k(t-1) - g(t-1)}{l(t+1)\, k(t) - g(t)} $$

and f(0) = (1 + Σ_{x=1}^{∞} A(x))^{−1}. For a further refinement of the results on conditional expectations, see Su and Huang (2000). For characterizations of the geometric, Waring and negative hypergeometric laws by the doubly truncated mean residual life, one may refer to Khorashadizadeh et al. (2012).

The alternative hazard rate h₁(x) can also be related to m(x), as shown in Xie et al. (2002a), for x = 1, 2, 3, …. From the definition of m(x) in (2.20) and Eq. (2.2), we have

$$ m(x) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} \prod_{u=0}^{t-1} (1 - h(u)) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} \exp\left[-\sum_{u=0}^{t-1} h_1(u)\right], \ \text{using (2.18)}, $$
$$ = \sum_{t=x+1}^{\infty} \exp[H_1(x+1) - H_1(t)], \quad \text{since } S(x) = e^{-H_1(x)}. $$

Example 2.10

Let X be distributed as geometric G(p). Then,

$$ h_1(x) = \log \frac{S(x)}{S(x+1)} = -\log q $$

and

$$ H_1(x) = -x \log q. $$

Thus,

$$ m(x) = \sum_{t=x+1}^{\infty} q^{\,t-(x+1)} = \frac{1}{p}, $$

as has been observed earlier.

Some relationships involving h(x) and m(x) that are needed in the subsequent discussions are

$$ E\!\left(\frac{1}{h(X)}\right) = 1 + \mu \qquad (2.31) $$

and

$$ E\!\left(\frac{1}{h(X)} \,\Big|\, X > x\right) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} S(t) = m(x). \qquad (2.32) $$
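Both identities are straightforward to check numerically; a sketch on a Poisson lifetime truncated at N, with the far tail folded into the last support point so that the probabilities sum to one exactly:

```python
import math

lam, N = 3.0, 60                          # illustrative rate and truncation point
f = [math.exp(-lam) * lam**x / math.factorial(x) for x in range(N)]
f[-1] += 1.0 - sum(f)                     # fold the far tail into the last point
S = [sum(f[x:]) for x in range(N)]
h = [f[x] / S[x] for x in range(N)]
mu = sum(x * fx for x, fx in enumerate(f))

E_inv_h = sum(f[x] / h[x] for x in range(N))          # equals sum_x S(x), Eq. (2.31)
m = lambda x: sum(S[t] for t in range(x + 1, N)) / S[x + 1]
E_inv_h_given = lambda x: sum(f[t] / h[t] for t in range(x + 1, N)) / S[x + 1]
```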

The expression h(X) is referred to as the hazard rate at random time. We now prove a characteristic property of the mean of h(X).

Theorem 2.6

$$ E(h(X)) \ge \frac{1}{1 + \mu} \qquad (2.33) $$

and the equality holds if and only if X is geometric.

Proof

Consider

$$ E(h(X))\, E\!\left(\frac{1}{h(X)}\right) = \left[\sum_{x=0}^{\infty} h(x)\, f(x)\right] \left[\sum_{x=0}^{\infty} \frac{f(x)}{h(x)}\right]. $$

Applying the Cauchy-Schwarz inequality for real sequences a(x) and b(x), viz.,

$$ \left[\sum_x a^2(x)\right] \left[\sum_x b^2(x)\right] \ge \left[\sum_x a(x)\, b(x)\right]^2 \qquad (2.34) $$

with a(x) = [h(x) f(x)]^{1/2} and b(x) = [f(x)/h(x)]^{1/2}, we readily obtain

$$ E[h(X)]\, E\!\left[\frac{1}{h(X)}\right] \ge 1. $$

From (2.31), the inequality in (2.33) follows. The inequality in (2.34) becomes an equality if and only if a(x) = k b(x), in which case h(x) = k, 0 < k < 1, for all x. A constant hazard rate characterizes the geometric law. □

Theorem 2.7

$$ E\!\left(\frac{1}{m(X)}\right) \ge \frac{1}{E(m(X))} \qquad (2.35) $$

and the equality holds if and only if X is geometric.

Proof

$$ E\!\left(\frac{1}{m(X)}\right) E(m(X)) = \left[\sum_{x=0}^{\infty} \frac{f(x)}{m(x)}\right] \left[\sum_{x=0}^{\infty} m(x)\, f(x)\right]. $$

Taking a(x) = [m(x) f(x)]^{1/2} and b(x) = [f(x)/m(x)]^{1/2} and applying the Cauchy-Schwarz inequality, (2.35) follows. For the equality sign to hold, a(x) = k b(x), which implies m(x) = k, a characteristic property of the geometric distribution.  □

2.3.1 Modelling Data

Although we have defined the hazard rate function and the mean residual life function in terms of the time to failure of a device, the definitions and properties can also be used to model data on the time to occurrence of an ‘event’ with appropriate modifications in interpretations that best suit the context. Accordingly, the results mentioned so far have been extensively applied in other disciplines as well. A detailed examination of this aspect will be taken up later in Chapter 9. In the present section, we consider some real data and illustrate the application of the characterizations established above in finding suitable models for them.

Example 2.11

In this example, we consider the famous Bortkiewicz data on the number of soldiers of the Prussian army who died of horse-kicks in a period of 20 consecutive years, with the modification of the data value suggested by Cohen (1960). The last observation in Cohen's data is omitted for the analysis; see Table 2.1 for details. From the data, the probability mass function of X, the time of death, is estimated as

$$ \hat f(0) = \tfrac{129}{199} = 0.6482, \quad \hat f(1) = \tfrac{45}{199} = 0.2261, \quad \hat f(2) = \tfrac{22}{199} = 0.1106, \quad \hat f(3) = \tfrac{3}{199} = 0.0151. $$

Table 2.1

Model for the deaths of soldiers by horse-kicks

No. of deaths per year   Observed frequency   m̂(x)     Expected frequency
0                        129                  1.4926    123.08
1                        45                   1.4002    53.79
2                        22                   1.1201    18.35
3                        3                    1.0000    3.78

These values give the estimate of the mean residual life as

$$ \hat m(x) = \frac{1}{\hat S(x+1)} \sum_{t=x+1}^{3} \hat S(t), \quad x = -1, 0, 1, 2, $$

where Ŝ(x) = Σ_{t≥x} f̂(t). The values of m̂(x) are shown in Table 2.1. The next step is to seek a functional form for m̂(x). Accordingly, a straight line is fitted to m̂(x) by the method of least squares. This yields

$$ m(x) = 1.34112 - 0.17579\, x. $$

Being a linearly decreasing mean residual life, by Theorem 2.2 we propose the negative hypergeometric distribution as a model for the data, with n = 3. Comparing with the mean residual life function of the negative hypergeometric law

$$ m(x) = \frac{k + n - x}{k + 1}, $$

we obtain the estimate of k as k̂ = 4.863. Thus the proposed model has survival function

$$ \hat S(x) = \binom{\hat k + n - x}{n - x} \Big/ \binom{\hat k + n}{n} $$

with k̂ = 4.863 and n = 3. The expected probability mass function is

$$ \hat f(x) = \hat S(x) - \hat S(x+1). $$

Obviously, the chosen model rests on the assumption of linearity of m(x). Hence the model is validated by calculating the expected frequencies and then verifying their closeness to the observed ones by the chi-square goodness-of-fit test. The chi-square value of 2.4949 does not reject the model at the 5% level of significance.
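The empirical calculations of this example are easily reproduced; a sketch (the values agree with Table 2.1 up to rounding):

```python
freq = [129, 45, 22, 3]                    # observed counts for x = 0, 1, 2, 3
total = sum(freq)                          # 199
f_hat = [c / total for c in freq]
S_hat = [sum(f_hat[x:]) for x in range(4)]

def m_hat(x):
    """Empirical mean residual life (Eq. (2.20)) on the support {0, 1, 2, 3}."""
    return sum(S_hat[t] for t in range(x + 1, 4)) / S_hat[x + 1]

mrl_values = [m_hat(x) for x in (-1, 0, 1, 2)]   # roughly 1.4925, 1.4000, 1.1200, 1.0000
```

The least-squares line through these four points then supplies the slope and intercept from which k̂ is estimated, as described above.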

Example 2.12

The results of the well-known Rutherford-Geiger experiment are reproduced in Table 2.3. The data give the number of α-particles emitted from a radioactive substance in 2608 intervals of 7.5 seconds each. As in the previous example, the estimates f̂(x) and Ŝ(x) of the probability mass function and the survival function, and therefrom m̂(x), are found. In addition, we also require the estimated hazard rate function ĥ(x) = f̂(x)/Ŝ(x). The observed values of ĥ(x) and m̂(x) are shown in Table 2.2.

Table 2.2

Estimated values of ĥ(x), m̂(x) and ĝ(x) for the data in Table 2.3

x    ĥ(x)     m̂(x)     ĝ(x)
0    0.0290   3.9569   2.00
1    0.0795   3.2124   2.06
2    0.1632   2.6437   2.07
3    0.2672   2.2431   1.96
4    0.3695   1.9716   1.87
5    0.4493   1.7642   1.85
6    0.5462   1.6839   1.65
7    0.6615   1.7604   1.61
8    0.5518   1.5576   2.81
9    0.6306   1.5082   2.01
10   0.6223   1.3478   2.23
11   0.6522   1.0000   2.12
12   1.0000

We also have

$$ \hat m(-1) = \sum_{t=0}^{\infty} \hat S(t) = 1 + \hat\mu = 4.8702 $$

so that an estimate of the mean becomes

$$ \hat\mu = 3.8702. $$

Neither an examination of the values of ĥ(x) and m̂(x) nor their plots in Figs 2.1 and 2.2 suggests an obvious choice for the functional forms of h(x) and m(x). In such cases, Theorem 2.4 can be of some assistance in the determination of a suitable model. Since we have the empirical mean residual life from the data, we choose k(x) = x, so that (2.25) becomes

$$ v(x) = m(x) + x = \mu + \sigma\, \frac{h(x)\, g(x)}{1 - h(x)}. $$

Hence, we can calculate gˆ(x)Image from

$$ \hat g(x) = \left(\frac{\hat m(x) + x - \hat\mu}{\hat\sigma}\right) \left(\frac{1}{\hat h(x)} - 1\right). \qquad (2.36) $$

Figure 2.1 Empirical hazard function for Rutherford-Geiger experiment.
Figure 2.2 Empirical mean residual life function for Rutherford-Geiger experiment.

In (2.36), the estimate of the standard deviation σ is taken as the sample standard deviation. The values of ĝ(x) so obtained are shown in Table 2.2. Notice that the value at x = 8 is significantly larger than the rest. Barring this value, the remaining ones appear to cluster around the average ḡ(x) = 1.94. This is the same as assuming g(x) = c, a constant for all x, since the least-squares estimate of c is ḡ(x). Recall from Theorem 2.4 that the function g(x) uniquely determines the distribution of X and, further, g(x) = c if and only if the distribution is Poisson with mean c². Thus, the data follow a Poisson distribution

$$ f(x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad x = 0, 1, 2, \ldots, $$

with λ = c² = 3.8. To validate the assumption made on g(x), we compare the observed and expected frequencies, for λ = 3.8, presented in Table 2.3.

Table 2.3

Goodness-of-fit for Rutherford-Geiger data

x  Observed frequency  Expected frequency  Contribution to χ²-value
0 57 58.4 0.0036
1 203 221.7 1.5773
2 383 421.2 3.4644
3 525 533.6 0.1386
4 532 507.0 1.2327
5 408 385.2 1.3495
6 273 244.1 3.4215
7 139 132.5 0.3189
8 45 62.9 5.0934
9 27 26.6 0.0060
10 10 10.27 0.0039
11, 12 (pooled) 6 4.4 0.5819
Total 2608 2608 16.2216

The chi-square value obtained above does not reject the hypothesis that the data follow the Poisson distribution. The following facts can be further observed:

(i) The characterization theorems used in Examples 2.11 and 2.12 provide some quick estimates of the parameters of the hypothesized models. In more sophisticated models that require iterative procedures to find estimates, the above estimates may serve as good initial values;
(ii) The contribution to the chi-square value, or the discrepancy between the observed and expected frequencies, at x = 8 is considerably larger than the rest. The plot of the hazard function shows that the function decreases markedly at this point, which is uncharacteristic of the Poisson model, whose hazard rate is increasing. If we use the maximum likelihood estimate λ̂ = 3.87 for the Poisson distribution, the difference is still higher, with an expected frequency of 67. Thus, the ĝ(x) value at x = 8 is taken as a discordant one and omitted in the calculation of ḡ.

2.4 Variance Residual Life Function

The variance of the residual life X_x = (X − x | X > x) is studied in reliability theory in various contexts. Primarily, its role is to define ageing concepts that are weaker than some ageing criteria based on the hazard rate and the mean residual life; Chapter 4 provides details in this direction. Secondly, the variance of residual life plays the same role as the usual variance when estimators of the mean residual life are discussed. It is also required in the study of the coefficient of variation of residual life. Assuming that E(X²) < ∞, we define the variance residual life function as

\sigma^2(x) = E\left[(X-x)^2 \mid X > x\right] - m^2(x), \qquad (2.37)

where m(x) is the mean residual life function defined in (2.20). Alternatively,

\sigma^2(x) = E(X^2 \mid X > x) - E^2(X \mid X > x). \qquad (2.38)

The second factorial moment of the residual life X − x, given by

\mu_{(2)}(x) = E\left[(X-x)(X-x-1) \mid X > x\right],

will frequently appear in the sequel. We have the following expression for μ_{(2)}(x).

Theorem 2.8

\mu_{(2)}(x) = \frac{2}{S(x+1)} \sum_{t=x+1}^{\infty} \sum_{u=t+1}^{\infty} S(u). \qquad (2.39)

Proof

E(X^2 \mid X > x) = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} t^2\left[S(t) - S(t+1)\right] = x^2 + \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} (2t-1)\, S(t). \qquad (2.40)

Hence,

E\left[(X-x)^2 \mid X > x\right] = \frac{1}{S(x+1)} \sum_{t=x+1}^{\infty} (2t-2x-1)\, S(t) = \frac{1}{S(x+1)}\left[2\sum_{t=x+1}^{\infty}\sum_{u=t+1}^{\infty} S(u) + \sum_{t=x+1}^{\infty} S(t)\right] = \frac{2}{S(x+1)} \sum_{t=x+1}^{\infty} \sum_{u=t+1}^{\infty} S(u) + E(X-x \mid X > x), \qquad (2.41)

which is equivalent to (2.39). Thus, the variance residual life function can be written in the form

\sigma^2(x) = \frac{2}{S(x+1)} \sum_{t=x+1}^{\infty} \sum_{u=t+1}^{\infty} S(u) - m(x)\left(m(x) - 1\right). \qquad (2.42)

An equivalent expression for (2.40), obtained by defining the survival function as P(X > x), is given in Khorashadizadeh et al. (2010). □

Example 2.13

For the geometric law considered in Example 2.6, we have

S(x) = q^x, \quad m(x) = \frac{1}{p}, \quad \sum_{u=t+1}^{\infty} S(u) = \frac{q^{t+1}}{p} \quad \text{and} \quad \frac{1}{S(x+1)}\sum_{t=x+1}^{\infty} \frac{q^{t+1}}{p} = \frac{q}{p^2},

so that from (2.42), we have

\sigma^2(x) = \frac{2q}{p^2} - \frac{1}{p}\left(\frac{1}{p} - 1\right) = \frac{q}{p^2},

which is the same as the variance of X.
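The computation in Example 2.13 is easy to confirm numerically. Below is a small sketch in Python; the value p = 0.3 is an arbitrary choice, and the infinite sums in (2.42) are truncated where q^t becomes negligible.

```python
# Numerical check of (2.42) for the geometric law with S(x) = q^x.
p = 0.3
q = 1 - p
TOP = 400  # truncation point; q**TOP is negligible

def S(x):
    return q ** x

def var_residual(x):
    # inner sum over u computed in closed form: sum_{u>t} S(u) = q^{t+1}/p
    double_sum = sum(q ** (t + 1) / p for t in range(x + 1, TOP))
    m = 1 / p  # mean residual life of the geometric law is constant
    return 2 * double_sum / S(x + 1) - m * (m - 1)
```

For every x the result agrees with the unconditional variance q/p², illustrating that the geometric law has constant variance residual life.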

There exist some inter-relationships between the reliability functions discussed so far.

Theorem 2.9

\sigma^2(x+1) - \sigma^2(x) = h(x+1)\left[\sigma^2(x+1) - m(x+1)\left(m(x) - 1\right)\right]. \qquad (2.43)

Proof

From (2.42), we have

\left[\sigma^2(x) + m(x)(m(x)-1)\right] S(x+1) - \left[\sigma^2(x+1) + m(x+1)(m(x+1)-1)\right] S(x+2) = 2\, S(x+2)\, m(x+1).

Dividing by S(x+2) and using (2.3), we get

\sigma^2(x) + m(x)(m(x)-1) = \left(1 - h(x+1)\right)\left[\sigma^2(x+1) + m^2(x+1) + m(x+1)\right]. \qquad (2.44)

Since

h(x+1) = \frac{1 + m(x+1) - m(x)}{m(x+1)},

further simplification of (2.44) yields (2.43). □

Remark 2.5

The identity in (2.43) can be expressed in terms of σ²(x) and m(x) as

\frac{\sigma^2(x)}{m(x)-1} = \frac{\sigma^2(x+1)}{m(x+1)} + m(x+1) - m(x) + 1.

Theorem 2.10

\sigma^2(x) = E\left[m(X)\left(m(X-1) - 1\right) \mid X > x\right], \quad x = 0, 1, 2, \ldots \qquad (2.45)

Proof

Starting from (2.43), we have

\sigma^2(x+1) - \sigma^2(x) = \frac{f(x+1)}{S(x+1)}\left[\sigma^2(x+1) - m(x+1)\left(m(x) - 1\right)\right],

or

S(x+1)\,\sigma^2(x) - S(x+2)\,\sigma^2(x+1) = m(x+1)\left(m(x) - 1\right) f(x+1).

Adding the above identity for x, x+1, x+2, \ldots, we get

S(x+1)\,\sigma^2(x) = \sum_{t=x+1}^{\infty} m(t)\left(m(t-1) - 1\right) f(t),

which is the same as (2.45). □

Remark 2.6

Theorem 2.10 can be employed to find quick estimates of the variance residual life when the estimated mean residual life is known.

Unlike the cases of the hazard rate and mean residual life functions, there is no inversion formula that expresses the survival function in terms of the variance residual life. Also, there are only a few standard distributions for which σ²(x) has a simple tractable form, as could be seen from Table 2.3. Therefore, characterizations of life distributions involving σ²(x) take the form of its relationship with other concepts. Hitha and Nair (1989) have shown that

\frac{\sigma^2(x)}{m(x)\left(m(x) - 1\right)} = C, \text{ a constant,}

if and only if X is geometric for C = 1, negative hyper-geometric for C > 1, and Waring for C < 1. We also have a more general result, satisfied by a class of distributions, due to Sudheesh and Nair (2010). We retain the notations used for Theorem 2.4.

Theorem 2.11

Let k(X) be a real-valued function of X that is non-decreasing and satisfies the following conditions:

(i) E[k²(X)] and E[|Δk(X)| g_k(X)] are finite;
(ii) f(0) > 0 and the support of X is an integer interval;
(iii) Δk(x) > 0 for all x.

Then, X has distribution specified by

\frac{f(x+1)}{f(x)} = \frac{\sigma_k\, g_k(x)}{\sigma_k\, g_k(x+1) - \mu_k + k(x+1)}

for some real-valued function g_k(x) if and only if, for all x,

V\left(k(X) \mid X > x\right) = \sigma_k\, E\left(\Delta k(X)\, g_k(X) \mid X > x\right) + \left(\mu_k - m_k(x)\right)\left(m_k(x) - k(x+1)\right),

where V(·) stands for variance.

Remark 2.7

Setting k(X) = X, we have

\sigma^2(x) = \sigma\, E\left[g(X) \mid X > x\right] + \left(\mu - m(x)\right)\left(m(x) - x - 1\right) \qquad (2.46)

for all distributions in which the probability mass function satisfies

\frac{f(x+1)}{f(x)} = \frac{\sigma\, g(x)}{\sigma\, g(x+1) - \mu + x + 1}.

Remark 2.8

From Eqs (2.39) and (2.42), it is evident that the property

\frac{\mu_{(2)}(x)}{m(x)\left(m(x) - 1\right)} = k,

a positive constant, characterizes the geometric, Waring and negative hyper-geometric models.

2.5 Upper Partial Moments

A concept that is closely related to moments of residual life is that of partial moments, which can also be interpreted as moments of a different kind of residual life. The rth upper partial moment of X about a point x is defined as

\alpha_r(x) = E\left[(X-x)_+^{\,r}\right], \quad r = 0, 1, \ldots; \; x = 0, 1, 2, \ldots, \qquad (2.47)

where (X−x)_+ = max(X−x, 0). In the case of discrete models, it is sometimes more convenient to work with factorial partial moments defined as

\alpha_{(r)}(x) = E\left[(X-x)_+^{(r)}\right], \quad X > x + r - 1, \qquad (2.48)

where

t^{(r)} = t(t-1)\cdots(t-r+1) \qquad (2.49)

is the descending factorial expression. By virtue of the relationship

(X-x)_+^{\,r} = \sum_{k=0}^{r} S(r,k)\,(X-x)_+^{(k)},

where S(r,k) is the Stirling number of the second kind, α_r can be computed in terms of α_{(r)}. Nair et al. (2000) have studied several properties of α_{(r)} and their implications for reliability modelling. First, we note that

\alpha_1(x) = \alpha_{(1)}(x) = \sum_{t=x+1}^{\infty} S(t),

and hence

S(x) = \alpha_1(x-1) - \alpha_1(x), \quad x = 0, 1, \ldots,

and

f(x) = \alpha_1(x-1) - 2\alpha_1(x) + \alpha_1(x+1).

The upper partial moments satisfy the recurrence formula

\alpha_{(r)}(x) - \alpha_{(r)}(x+1) = r\,\alpha_{(r-1)}(x+1), \quad r = 1, 2, \ldots.

Thus, any one partial moment sequence, particularly α₁(x), x = 0, 1, 2, …, determines all other partial moments. Since only the first two partial moments are of importance in reliability analysis, we concentrate on their properties. Notice that

\alpha_1(x) - \alpha_1(x+1) = S(x+1) \qquad (2.50)

so that

m(x) = \frac{\alpha_1(x)}{\alpha_1(x) - \alpha_1(x+1)} \qquad (2.51)

and

1 - h(x+1) = \frac{S(x+2)}{S(x+1)} = \frac{\alpha_1(x+1) - \alpha_1(x+2)}{\alpha_1(x) - \alpha_1(x+1)}. \qquad (2.52)

From (2.51), the ratio of the upper partial means at consecutive values gives

\frac{\alpha_1(x+1)}{\alpha_1(x)} = \frac{m(x) - 1}{m(x)}.

Thus, from Theorem 2.2, we deduce the following result.

Theorem 2.12

If E(X) < ∞, then

\frac{\alpha_1(x+1)}{\alpha_1(x)} = \frac{(l-1) + ax}{l + ax}, \quad l > 1,

if and only if X has geometric, Waring, or negative hyper-geometric distribution when a = 0, a > 0, and a < 0, respectively.
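The behaviour of the partial means for the geometric case (a = 0 in Theorem 2.12) is easy to illustrate numerically. The sketch below takes S(x) = q^x with an arbitrary p = 0.4 and truncates the tail sums where q^t is negligible.

```python
# Upper partial means alpha_1(x) for the geometric law S(x) = q^x,
# illustrating (2.51) and the constant ratio alpha_1(x+1)/alpha_1(x).
p = 0.4
q = 1 - p
TOP = 400  # truncation point for the tail sums

def alpha1(x):
    return sum(q ** t for t in range(x + 1, TOP))

def m(x):
    # mean residual life recovered from the partial means via (2.51)
    return alpha1(x) / (alpha1(x) - alpha1(x + 1))
```

For the geometric law, m(x) is the constant 1/p, so the ratio α₁(x+1)/α₁(x) = (m(x) − 1)/m(x) reduces to the constant q, in agreement with Theorem 2.12 with a = 0.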

The second factorial partial moment is

\alpha_{(2)}(x) = E\left[(X-x)_+^{(2)}\right] = S(x+1)\,\mu_{(2)}(x) = 2\sum_{t=x+1}^{\infty}\sum_{u=t+1}^{\infty} S(u),

while

E\left[(X-x)^2 \mid X > x\right] = \frac{\alpha_2(x)}{\alpha_1(x) - \alpha_1(x+1)}.

Thus, the variance residual life is calculated as

\sigma^2(x) = \frac{\alpha_2(x)\left(\alpha_1(x) - \alpha_1(x+1)\right) - \alpha_1^2(x)}{\left[\alpha_1(x) - \alpha_1(x+1)\right]^2}.

Example 2.14

Let

f(x) = \frac{(a-b)\,(b)_x}{(a)_{x+1}}, \quad x = 0, 1, 2, \ldots, \; a > b.

Then, from Example 2.2, we have

S(x) = \frac{(b)_x}{(a)_x}, \qquad \alpha_1(x) = \sum_{t=x+1}^{\infty} \frac{(b)_t}{(a)_t} = \frac{(b)_{x+1}}{(a)_x}\left[\frac{1}{a+x} + \frac{b+x+1}{(a+x)(a+x+1)} + \cdots\right] = \frac{(b)_{x+1}}{(a-b-1)\,(a)_x},

\alpha_{(2)}(x) = 2\sum_{t=x+1}^{\infty}\sum_{u=t+1}^{\infty} S(u) = 2\sum_{t=x+1}^{\infty} \frac{(b)_{t+1}}{(a-b-1)\,(a)_t} = \frac{2\,(b)_{x+2}}{(a)_x\,(a-b-1)(a-b-2)}.

The ratios of partial moments admit simple forms in this case. For example,

\frac{\alpha_1(x+1)}{\alpha_1(x)} = \frac{b+x+1}{a+x}

and

\frac{\alpha_{(2)}(x+1)}{\alpha_{(2)}(x)} = \frac{b+x+2}{a+x},

both being homographic functions. A similar discussion of the properties is possible when the ascending factorial expression

x^{[r]} = x(x+1)\cdots(x+r-1)

is used instead of x^{(r)} in (2.48); for details, see Priya et al. (2000).

So far, we have considered reliability functions specified by the survival function S(x). A parallel theory built on the event X ≤ x, specified by the distribution function, has also been of interest in the reliability literature. Referred to as reliability functions in reversed time, these are found to be useful in the modelling and analysis of lifetime data and also in other fields of study. We discuss some important functions in this category in the next few sections.

2.6 Reversed Hazard Rate

The reversed hazard rate of X is defined as

\lambda(x) = P(X = x \mid X \le x) = \frac{f(x)}{F(x)}. \qquad (2.53)

Thus, λ(x) in the discrete case is interpreted as the conditional probability that a device fails at age x, given that its lifetime is at most x. Being a conditional probability, 0 ≤ λ(x) ≤ 1. Keilson and Sumita (1982), who first defined the reversed hazard rate in continuous time, called it the dual failure function by the property that X has reversed hazard rate λ(x), a ≤ x ≤ b < ∞, if and only if the random variable −X has hazard rate λ(−x) on (−b, −a).

Finkelstein (2002) observed that, in reliability, one often works with non-negative random variables and therefore the above duality is not applicable. Further, the upper end of the support is generally infinite. Thus, the properties of the reversed hazard rate of non-negative random variables with infinite support cannot be formally obtained from those of the hazard rate. This makes a study of λ(x) necessary in its own right. Since

\lambda(x+1) = \frac{f(x+1)}{F(x+1)} = \frac{F(x+1) - F(x)}{F(x+1)},

it follows that

F(x) = \prod_{t=x+1}^{\infty}\left(1 - \lambda(t)\right). \qquad (2.54)

Notice further that λ(0) = 1 and

f(x) = \lambda(x)\prod_{t=x+1}^{\infty}\left(1 - \lambda(t)\right). \qquad (2.55)

Thus, λ(x) determines the life distribution uniquely; see Nair and Asha (2004) for details and examples.
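The inversion in (2.54)–(2.55) is easy to demonstrate numerically. The sketch below starts from an arbitrarily chosen pmf on {0,…,5}, computes λ(x), and then rebuilds the pmf from λ(x) alone (for a finite support, the infinite product terminates).

```python
# Rebuilding a pmf from its reversed hazard rate via (2.54)-(2.55).
f = [0.1, 0.25, 0.2, 0.15, 0.2, 0.1]              # arbitrary pmf on {0,...,5}
F = [sum(f[:i + 1]) for i in range(len(f))]
lam = [f[i] / F[i] for i in range(len(f))]        # note lambda(0) = 1

def rebuild(lam):
    n = len(lam)
    out = []
    for x in range(n):
        Fx = 1.0
        for t in range(x + 1, n):                 # (2.54): F(x) = prod_{t>x} (1 - lambda(t))
            Fx *= 1.0 - lam[t]
        out.append(lam[x] * Fx)                   # (2.55): f(x) = lambda(x) F(x)
    return out
```

The reconstructed probabilities agree with the original ones, confirming that λ(x) determines the distribution uniquely.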

Example 2.15

We say that X follows the arithmetic distribution if it has a probability mass function of the form

f(x) = \frac{2x}{b(b+1)}, \quad x = 1, 2, \ldots, b.

The corresponding distribution function is

F(x) = \frac{x(x+1)}{b(b+1)},

and hence

\lambda(x) = \frac{2}{x+1},

a reciprocal linear function.

In the continuous case, Block et al. (1998) have shown that for an absolutely continuous random variable X with interval of support (a, b), a < b, if the reversed hazard rate is a constant c > 0 for all a < x < b, then a = −∞, b < ∞ and

F(x) = \exp\left[c(x - b)\right], \quad x < b,

and conversely. Thus, there does not exist an absolutely continuous distribution with constant reversed hazard rate on the positive real axis. We will now show that, in the discrete case, the reversed hazard rate can be constant when the support of X is a finite subset of the set of nonnegative integers.

Example 2.16

Let X be distributed with probability mass function

f(x) = \begin{cases} (1+c)^{-b}, & x = 0, \\ c(1+c)^{x-b-1}, & x = 1, 2, \ldots, b, \end{cases} \qquad c > 0, \qquad (2.56)

where b is a finite positive integer greater than unity. Then,

F(x) = \begin{cases} (1+c)^{x-b}, & x = 0, 1, 2, \ldots, b-1, \\ 1, & x \ge b. \end{cases} \qquad (2.57)

The reversed hazard rate in this case is

\lambda(x+1) = \frac{c(1+c)^{x-b}}{(1+c)^{x+1-b}} = \frac{c}{1+c} \quad \text{for all } x = 0, 1, \ldots, b-1,

and λ(0) = 1.

Remark 2.9

The terms in (2.56), for x = 1, 2, …, b, are in geometric progression with common ratio (1+c), and therefore the successive terms are increasing, as opposed to the usual geometric distribution, where the monotonicity is in the opposite direction. We will term (2.56) the reversed geometric distribution with parameter c.

This distribution has an important role in the sequel. Since it does not form part of the standard distributions discussed in the next chapter, some properties of the model (2.56) are presented here. First,

\frac{f(x+1)}{f(x)} = \begin{cases} c, & x = 0, \\ 1+c, & x = 1, 2, \ldots, b-1. \end{cases}

Thus, the distribution can be identified in practice for data in which the ratios of successive frequencies, except the first, are nearly constant. The mean and variance are given by

\mu = E(X) = (1+c)^{-b}\, c\sum_{t=1}^{b} t(1+c)^{t-1} = b - \frac{1}{c} + \frac{1}{c(1+c)^{b}}

and

\sigma^2 = (1+c)^{-b}\, c\sum_{t=1}^{b} t^2(1+c)^{t-1} - \mu^2 = \frac{1}{c} + \frac{2}{c^2} + b^2 - \frac{2b}{c} - \left(\frac{1}{c} + \frac{2}{c^2}\right)(1+c)^{-b} - \mu^2.

For the geometric distribution, the hazard rate is constant which is equivalent to the lack of memory property. An analogous property in a reverse sense is satisfied by the reversed geometric law, as the next theorem shows.
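Before turning to the theorem, the two defining features of the reversed geometric law are easy to verify numerically. The following sketch uses the arbitrary choices c = 0.5 and b = 10.

```python
# Checks for the reversed geometric pmf (2.56): the reversed hazard rate is
# the constant c/(1+c) for x >= 1, and the mean matches the closed form.
c, b = 0.5, 10
f = [(1 + c) ** (-b)] + [c * (1 + c) ** (x - b - 1) for x in range(1, b + 1)]
F = [sum(f[:x + 1]) for x in range(b + 1)]
lam = [f[x] / F[x] for x in range(b + 1)]          # lambda(0) = 1
mean = sum(x * f[x] for x in range(b + 1))
```

The probabilities sum to one, λ(x) = c/(1+c) for every x ≥ 1, and the mean agrees with b − 1/c + 1/(c(1+c)^b).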

Theorem 2.13

Let X be a discrete random variable with the finite set (0, 1, …, b) as support. Then,

P(X \le t \mid X \le t+s) = P(X \le 0 \mid X \le s) \qquad (2.58)

for all t, s in the support of X if and only if X has the reversed geometric distribution.

Proof

Eq. (2.58) can be restated as

F(t+s)\,F(0) = F(t)\,F(s). \qquad (2.59)

Substituting (2.57) in (2.59), we readily have the 'if' part. Conversely, (2.59) is equivalent to the functional equation

a(t+s) = a(t)\,a(s),

where a(t) = F(t)/F(0). To solve for a(t), we set s = 1, so that

a(t+1) = a(t)\,a(1).

Iterating for t,

a(t) = \left[a(1)\right]^{t} \quad \text{for all } t = 1, 2, \ldots, b,

and so

F(x+1) = \frac{\left[F(1)\right]^{x+1}}{\left[f(0)\right]^{x}}. \qquad (2.60)

Thus,

f(x+1) = \frac{p^{x+1}}{\left[f(0)\right]^{x}} - \frac{p^{x}}{\left[f(0)\right]^{x-1}}, \quad p = F(1).

Summing for x = 0, 1, 2, … and using the fact that \sum_{x=0}^{b} f(x) = 1, we get f(0) = p^{b/(b-1)}. Inserting this value of f(0) in (2.60), we get

F(x) = p^{(b-x)/(b-1)}.

Since F(1) < 1, we should have p < 1, and setting p^{1/(b-1)} = (1+c)^{-1}, the distribution in (2.57) is recovered, as needed. □

Remark 2.10

The reversed lack of memory property implies that, given that the lifetime of a device is at most x, the probability that the device fails at any age in [0, x] is the same. Further, the distribution of such a lifetime is governed by the reversed geometric law.

Remark 2.11

From the above discussions, we observe that the following are equivalent:

(i) X has constant reversed hazard rate;
(ii) X follows the reversed geometric law;
(iii) X has the reversed lack of memory property.

Further discussions and application of these results can be seen in Nair and Sankaran (2013).

The definition in (2.53), when applied to the continuous case, takes the form λ(x) = \frac{d}{dx}\log F(x). This property is not shared by the discrete case. For reasons similar to those explained in Section 2.2 regarding the alternative hazard rate, a second definition of the reversed hazard rate can be put forward as

\lambda_1(x) = \log\frac{F(x)}{F(x-1)}, \quad x = 1, 2, \ldots. \qquad (2.61)

In this case,

F(x-1) = F(x)\,e^{-\lambda_1(x)},

or

F(x) = \exp\left[-\sum_{t=x+1}^{\infty}\lambda_1(t)\right].

Also,

f(x) = \exp\left[-\sum_{t=x+1}^{\infty}\lambda_1(t)\right]\left[1 - e^{-\lambda_1(x)}\right], \quad x = 1, 2, \ldots,

and f(0) is determined from

f(0) = 1 - \sum_{x=1}^{\infty} f(x).

Example 2.17

The reversed geometric distribution specified by

F(x) = (1+c)^{x-b}, \quad x = 0, 1, \ldots, b,

has

\lambda_1(x) = \log\frac{(1+c)^{x-b}}{(1+c)^{x-b-1}} = \log(1+c),

which is a constant for all x.

From the definitions in (2.61) and (2.53), the relationship between λ(x) and λ₁(x) is found to be

\lambda(x) = 1 - e^{-\lambda_1(x)}. \qquad (2.62)

It may be noticed that λ₁(0) = ∞, which is consistent with the value λ(0) = 1.

2.7 Reversed Mean Residual Life

A second measure of interest in reversed time is the reversed mean residual life. Suppose a device has failed before attaining age t. Then, the random variable X_t = (t − X | X < t) is the time elapsed since the device failed, conditioned on the fact that its lifetime is less than t, and this is referred to as the reversed residual life or inactivity time of X. It is easy to see that X_t has the distribution function

F_{X_t}(x) = \frac{F(t-1) - F(t-x-1)}{F(t-1)}, \quad x = 0, 1, 2, \ldots. \qquad (2.63)

The mean of this distribution is called the reversed mean residual life or mean inactivity time, and is denoted by r(x). One can also define r(x) as

r(x) = E(x - X \mid X < x) = x - \frac{1}{F(x-1)}\sum_{t=1}^{x-1} t\,f(t), \quad x = 1, 2, \ldots. \qquad (2.64)

We define r(0) = 0. Note also that r(1) = 1. Goliforushani and Asadi (2008) have established the following properties of r(x):

(i) x − r(x) is an increasing function with 0 ≤ r(x) ≤ x;
(ii) \mu = \left(x - r(x)\right)F(x-1) + \left(x - 1 + m(x-1)\right)S(x);
(iii) r(x) cannot be a decreasing function for all x;
(iv)

\lambda(x) = \frac{1 + r(x) - r(x+1)}{r(x)}, \quad x = 1, 2, \ldots;

(v) If X has a finite support (0, 1, 2, …, b), then

F(x) = \prod_{t=1}^{x}\left(\frac{r(t)}{r(t+1) - 1}\right) F(0), \quad x = 1, 2, \ldots, b, \qquad (2.65)

and

F(0) = \left[\prod_{t=1}^{b} \frac{r(t)}{r(t+1) - 1}\right]^{-1}; \qquad (2.66)

(vi) If λ(x) is a decreasing function of x, then r(x) is an increasing function of x;
(vii)

r(x) = \frac{\sum_{t=1}^{x} F(t-1)}{F(x-1)}.
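Properties (iv) and (vii) can be checked directly on a concrete case. The sketch below does so for the geometric law F(x) = 1 − q^{x+1}, with an arbitrary p = 0.4.

```python
# Checking properties (iv) and (vii) of the reversed mean residual life
# for the geometric law F(x) = 1 - q^{x+1}.
p = 0.4
q = 1 - p
F = [1 - q ** (x + 1) for x in range(0, 60)]

def r(x):
    # property (vii): r(x) = sum_{t=1}^{x} F(t-1) / F(x-1)
    return sum(F[t - 1] for t in range(1, x + 1)) / F[x - 1]

def lam(x):
    return (F[x] - F[x - 1]) / F[x]
```

The computed r(x) satisfies identity (iv) relating it to λ(x), and matches the closed form derived in Example 2.18 below.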

It is of interest to mention here that the notions of residual life and inactivity time have been extended to general coherent systems and their properties, mixture representations and stochastic orderings of various forms have been discussed by numerous authors; see, for example, Navarro et al. (2008, 2013), Zhang (2010), Goliforushani and Asadi (2011), Goliforushani et al. (2012), and Parvardeh and Balakrishnan (2013, 2014).

Example 2.18

For the geometric distribution G(p), we have

F(x) = 1 - q^{x+1},

and hence

r(x) = \frac{1}{F(x-1)}\left[F(0) + \cdots + F(x-1)\right] = \frac{xp - q(1-q^{x})}{p(1-q^{x})}.

Example 2.19

Consider the discrete uniform distribution

f(x) = \frac{1}{b}, \quad x = 1, \ldots, b.

In this case,

F(x) = \frac{x}{b}

and

r(x) = \frac{b}{x-1}\left(\frac{1}{b} + \frac{2}{b} + \cdots + \frac{x-1}{b}\right) = \frac{x}{2}.

Notice that in this case λ(x) = 1/x and that the product of r(x) and λ(x) is 1/2 for all x.
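Both facts of Example 2.19 are immediate to confirm numerically; the sketch below uses the arbitrary choice b = 12.

```python
# Discrete uniform law on {1,...,b}: r(x) = x/2 and lambda(x) r(x) = 1/2.
b = 12
F = {x: x / b for x in range(0, b + 1)}    # F(0) = 0

def r(x):
    # reversed mean residual life via property (vii)
    return sum(F[t - 1] for t in range(1, x + 1)) / F[x - 1]
```

For every x = 2, …, b the computed r(x) equals x/2, and multiplying by λ(x) = 1/x gives the constant 1/2.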

In the case of the geometric distribution, the hazard rate h(x) is constant and so is the mean residual life function m(x), the two being related by m(x)h(x) = 1. We will show that a different scenario exists in the relationship between λ(x) and r(x). For the reversed geometric law, λ(x) = c/(1+c), whereas

r(x) = \frac{1+c}{c}\left[1 - \frac{1}{(1+c)^{x}}\right],

which is a strictly increasing function of x. However, we can recover a distribution with constant reversed mean residual life by modifying the probability at x = 0.

Theorem 2.14

A random variable with the support (0, 1, 2, …, b) has reversed mean residual life

r(x) = \frac{c+1}{c}, \quad c > 0, \; x = 2, 3, \ldots, b, \qquad (2.67)

for all x if and only if its distribution is given by

F(x) = \begin{cases} c^{-1}(1+c)^{1-b}, & x = 0, \\ (1+c)^{x-b}, & x = 1, 2, \ldots, b. \end{cases} \qquad (2.68)

Proof

For the distribution in (2.68), we have

r(x) = (1+c)^{b-x+1}\left[\frac{1}{c}(1+c)^{1-b} + (1+c)^{1-b} + \cdots + (1+c)^{x-1-b}\right] = (1+c)^{b-x+1}\left[\frac{1}{c}(1+c)^{1-b} + \frac{(1+c)^{1-b}\left((1+c)^{x-1} - 1\right)}{c}\right] = \frac{1+c}{c}.

The converse follows upon noting from (2.67) that

\frac{r(x)}{r(x+1) - 1} = 1 + c

and then applying (2.65) and the fact that \sum_{x=0}^{b} f(x) = 1. □

Remark 2.12

The probability mass function corresponding to (2.68) is

f(x) = \begin{cases} c^{-1}(1+c)^{1-b}, & x = 0, \\ (c-1)c^{-1}(1+c)^{1-b}, & x = 1, \\ c(1+c)^{x-b-1}, & x = 2, 3, \ldots, b. \end{cases}

Accordingly, we have

\lambda(x) = \begin{cases} 1, & x = 0, \\ (c-1)c^{-1}, & x = 1, \\ c(c+1)^{-1}, & x = 2, 3, \ldots, b. \end{cases}

Remark 2.13

In comparison with the reversed geometric model, there is a difference in the value of λ(1) for the model in (2.68). It may also be noted that

\lambda(x)\,r(x) = 1, \quad x = 2, 3, \ldots, b,

for the distribution in (2.68). This is a characteristic property, as evidenced by the identity

\lambda(x)\,r(x) = 1 + r(x) - r(x+1),

which gives

r(x+1) - r(x) = 0

for all x, that is, r(x) is a constant.

The reversed mean residual life can also be related to the alternative reversed hazard rate. From (2.62) and the identity in (iv) above, we have

\lambda_1(x) = \log\frac{r(x)}{r(x+1) - 1}, \quad x = 1, 2, \ldots.

In certain problems, it is more convenient to deal with the conditional mean

r^{*}(x) = E(X \mid X \le x) = x + 1 - r(x+1).

The corresponding results for r*(x) are easily derived from those of r(x) by using the above identity.

Relationships between λ(x) and r(x) of a different nature than those indicated above, which characterize families of discrete distributions, have been proposed in the literature. The main results reviewed here are from Gupta et al. (2006) and Nair and Sudheesh (2008). We retain the same notation as in Section 2.3. Let k(x) be a real-valued function such that E[k²(X)] < ∞. Then, the probability mass function of X will be of the form

\frac{f(x+1)}{f(x)} = \frac{\sigma_k\, g_k(x)}{\sigma_k\, g_k(x+1) - \mu_k + k(x+1)}, \qquad (2.69)

where μ_k and σ_k are the mean and standard deviation of k(X), satisfying σ_k g_k(0) = μ_k − k(0) for some real-valued function g_k(·), if and only if

r_k(x) = \mu_k - \sigma_k\, g_k(x)\,\lambda(x), \qquad (2.70)

with r_k(x) = E(k(X) | X ≤ x). The g_k function appearing in the above relationships is unique for a particular distribution and often assumes simple forms. Special cases of the form in (2.69) that include various families, like the discrete Pearson and Katz families, and several individual distributions are discussed in Chapter 3. In practice, the formulas in (2.69) and (2.70) will work if we replace σ_k g_k(x) by a function a_k(x), which can be determined from the data, without actually using σ_k.

When any two of the functions h(x), λ(x) and F(x) are known, the third can be determined from the identity

F(x) = h(x)\left[h(x) + \lambda(x) - \lambda(x)h(x)\right]^{-1}.

Similarly, we also have

S(x) = \lambda(x)\left[h(x) + \lambda(x) - \lambda(x)h(x)\right]^{-1}

and

f(x) = \lambda(x)h(x)\left[h(x) + \lambda(x) - \lambda(x)h(x)\right]^{-1}.

From the last three forms, we arrive at

\frac{f(x+1)}{f(x)} = \frac{h(x+1)\left(1 - h(x)\right)}{h(x)} = \frac{\lambda(x+1)}{\lambda(x)\left[1 - \lambda(x+1)\right]}, \qquad (2.71)

which will be useful in later chapters.
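The three inversion formulas and the ratio (2.71) hold for any discrete life distribution, which makes them easy to verify on an arbitrary pmf, as in the sketch below.

```python
# Recovering F, S and f from the pair (h, lambda), and checking (2.71).
f = [0.15, 0.3, 0.25, 0.2, 0.1]                  # arbitrary pmf on {0,...,4}
n = len(f)
F = [sum(f[:x + 1]) for x in range(n)]
S = [sum(f[x:]) for x in range(n)]               # S(x) = P(X >= x)
h = [f[x] / S[x] for x in range(n)]
lam = [f[x] / F[x] for x in range(n)]

def denom(x):
    return h[x] + lam[x] - lam[x] * h[x]
```

F(x), S(x) and f(x) are all recovered from h(x) and λ(x) through the common denominator h(x) + λ(x) − λ(x)h(x), and the successive-probability ratio agrees with (2.71).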

2.8 Reversed Variance Residual Life

Just as the mean of the reversed residual life X_t, the variance of X_t is also an important function in reliability analysis, called the reversed variance residual life or variance inactivity time, and is denoted by v(x). In algebraic manipulations, different expressions for v(x) have been employed. These are

v(x) = V(x - X \mid X < x) = V(X \mid X < x) = E(X^2 \mid X < x) - E^2(X \mid X < x) \qquad (2.72)

and

v(x) = E\left[(x-X)^2 \mid X < x\right] - r^2(x). \qquad (2.73)

Further,

E\left[(x-X)^2 \mid X < x\right] = \frac{1}{F(x-1)}\sum_{t=0}^{x-1}(x-t)^2 f(t) = \frac{1}{F(x-1)}\sum_{t=0}^{x-1}(x-t)^2\left[F(t) - F(t-1)\right] = \frac{2}{F(x-1)}\sum_{t=0}^{x-1}(x-t)F(t) - r(x) = \frac{2}{F(x-1)}\sum_{t=1}^{x}\sum_{u=1}^{t}F(u-1) - r(x),

or

v(x) = \frac{2}{F(x-1)}\sum_{t=1}^{x}\sum_{u=1}^{t}F(u-1) - r(x)\left(r(x) + 1\right). \qquad (2.74)

Example 2.20

Consider the geometric distribution G(p) in Example 2.2, for which F(x) = 1 − q^{x+1}. Then, the conditional probability mass function of X | (X < x) is given by

f_x(t) = \frac{p\,q^{t}}{1 - q^{x}}, \quad t = 0, 1, \ldots, x-1. \qquad (2.75)

We use (2.72) to calculate v(x). For this, from (2.75), we have

\sum_{t=0}^{x-1} p\,q^{t} = 1 - q^{x}.

Differentiating with respect to q, we get

\sum_{t=0}^{x-1} t\,p\,q^{t-1} = \sum_{t=0}^{x-1} q^{t} - x\,q^{x-1}.

Simplifying so as to make the left-hand side an expected value, we get

\frac{\sum_{t=0}^{x-1} t\,p\,q^{t}}{1 - q^{x}} = E(X \mid X < x) = \frac{q}{p} - \frac{x\,q^{x}}{1 - q^{x}}. \qquad (2.76)

Differentiating (2.76) again with respect to q and rearranging the terms in the same manner, we get

E(X^2 \mid X < x) = \frac{q(1+q) - q^{x}\left[(1+q)(px+q) + x(x-1)p^{2}\right]}{p^{2}\left(1 - q^{x}\right)}. \qquad (2.77)

The variance is now found from (2.72).
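The closed forms (2.76) and (2.77) can be checked against a direct computation of the conditional moments; the sketch below does so for an arbitrary p = 0.35.

```python
# Comparing (2.76)-(2.77) with direct conditional moments for G(p).
p = 0.35
q = 1 - p

def direct(x):
    w = [p * q ** t for t in range(x)]              # f(t), t = 0,...,x-1
    tot = sum(w)                                    # equals 1 - q^x
    m1 = sum(t * wt for t, wt in enumerate(w)) / tot
    m2 = sum(t * t * wt for t, wt in enumerate(w)) / tot
    return m1, m2

def closed_form(x):
    m1 = q / p - x * q ** x / (1 - q ** x)
    m2 = (q * (1 + q)
          - q ** x * ((1 + q) * (p * x + q) + x * (x - 1) * p ** 2)) / (p ** 2 * (1 - q ** x))
    return m1, m2
```

The two routes agree for every x, and v(x) then follows as m2 − m1² from (2.72).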

Various reliability functions in reversed time for some distributions are exhibited in Table 2.4, and the graphs of these functions for the arithmetic distribution are presented in Fig. 2.3.

Table 2.4

The reversed hazard function, mean residual life and variance residual life for some distributions

Distribution | F(x) | λ(x) | r(x) | v(x)
geometric | 1 - q^{x+1} | \frac{p\,q^{x}}{1-q^{x+1}} | \frac{xp - q(1-q^{x})}{p(1-q^{x})} | (2.76) and (2.77)
uniform | \frac{x}{b}, x = 1, 2, …, b | \frac{1}{x} | \frac{x}{2} | \frac{x(x-2)}{12}
arithmetic | \frac{x(x+1)}{b(b+1)}, x = 1, 2, …, b | \frac{2}{x+1} | \frac{x+1}{3} | \frac{(x+1)(x-2)}{18}
reversed geometric | (1+c)^{x-b}, x = 0, 1, …, b | \frac{c}{1+c} | \frac{1+c}{c}\left[1 - \frac{1}{(1+c)^{x}}\right] | \frac{\left[(1+c)^{x}+1\right]\left[(1+c)^{x-1}-1\right] - 2c(x-1)(1+c)^{x-1}}{c^{2}(1+c)^{2(x-1)}}

Figure 2.3 Reliability functions in reversed time for the arithmetic distribution.

In the continuous case, identities that connect the three functions λ(x)Image, r(x)Image and v(x)Image have been established. The corresponding results in the discrete case are

v ( x + 1 ) v ( x ) = λ ( x ) [ r ( x ( r ( x + 1 ) 1 ) v ( x ) ]

Image (2.78)

and

v ( x + 1 ) r ( x + 1 ) 1 + r ( x + 1 ) 1 = v ( x ) r ( x ) + r ( x ) .

Image (2.79)

Eqs. (2.78) and (2.79) are employed in finding v(x) when the other functions are known, especially in characterization problems, and also in discussions on the monotonicity of the reversed variance residual life function.

As in the case of the usual mean and variance residual lives, we have

r(x) = E\left[\frac{1}{\lambda(X)} \,\middle|\, X < x\right]

and

v(x+1) = E\left[r(X)\left(r(X+1) - 1\right) \,\middle|\, X \le x\right].

There are some special relationships between λ(x), r(x) and v(x) that characterize certain families of distributions. Some important results in this connection are presented in the next two theorems.

Theorem 2.15

For a random variable X with the support (1, 2, …, b), the relationship

\lambda(x)\,r(x) = c, \quad x = 2, 3, \ldots, b, \; 0 < c < 1, \qquad (2.80)

holds if and only if the distribution of X is

F(x) = \frac{(b-1)!\,(\theta)_x}{(x-1)!\,(\theta)_b}, \quad x = 1, 2, \ldots, b, \qquad (2.81)

with \theta = \frac{c}{1-c}, a positive integer.

Proof

Suppose (2.80) holds for X. From

\lambda(x) = \frac{1 + r(x) - r(x+1)}{r(x)},

we have the difference equation

r(x+1) - r(x) = 1 - c.

Iterating for x, with the boundary condition r(2) = 1, we get

r(x+1) = c + (1-c)x

and hence

\frac{r(x+1) - 1}{r(x)} = \frac{x-1}{\theta + x - 1}.

Substituting in (2.65), we obtain (2.81). To prove the converse, from (2.81), we have

\lambda(x) = \frac{\theta}{\theta + x - 1} = \frac{c}{c + (1-c)(x-1)}

so that r(x)λ(x) = c. □
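Theorem 2.15 is straightforward to illustrate numerically. The sketch below uses the binomial-coefficient form of F(x) given later in (2.87), with the arbitrary choices θ = 3 and b = 9, so that c = θ/(θ+1) = 0.75.

```python
from math import comb

# lambda(x) r(x) is the constant c = theta/(theta+1) for the law (2.81).
theta, b = 3, 9
F = {0: 0.0}
F.update({x: comb(theta + x - 1, x - 1) / comb(theta + b - 1, b - 1)
          for x in range(1, b + 1)})

def lam(x):
    return (F[x] - F[x - 1]) / F[x]

def r(x):
    # reversed mean residual life via property (vii)
    return sum(F[t - 1] for t in range(1, x + 1)) / F[x - 1]
```

Here λ(x) = θ/(θ+x−1) decreases while r(x) increases linearly, their product remaining the constant c for x = 2, …, b.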

Theorem 2.16

The random variable X in Theorem 2.15 satisfies the property

\frac{v(x)}{r(x)\left(r(x) - 1\right)} = k, \qquad (2.82)

a constant in (0, 1], if and only if its distribution is (2.81) when k < 1 and the distribution is (2.68) when k = 1.

Proof

First, we note that

r_2(x) = E\left[(x-X)^2 \mid X < x\right] = \frac{1}{F(x-1)}\sum_{t=0}^{x-1}(x-t)^2 f(t) = \frac{2}{F(x-1)}\sum_{t=0}^{x-1}(x-t)F(t) - r(x),

or

\left[r_2(x) + r(x)\right]F(x-1) = 2\sum_{t=0}^{x-1}(x-t)F(t). \qquad (2.83)

Changing x to x+1 in (2.83) and then subtracting (2.83), we get

r_2(x+1)\,r(x) - r_2(x)\left(r(x+1) - 1\right) - r(x)\left(r(x+1) - 1\right) = r(x)\,r(x+1). \qquad (2.84)

When (2.82) is satisfied with k = 1, from

v(x) + r^2(x) = r_2(x)

and (2.82), we obtain

r_2(x) = r(x)\left(r(x) - 1\right) + r^2(x).

Substituting in (2.84) and simplifying, we get

r(x+1) - r(x) = 0

and hence r(x) is a constant and the distribution is (2.68). On the other hand, when the distribution is (2.68), r(x) = \frac{1+c}{c}, and then (2.74) yields

v(x) = \frac{2(c+1)}{c\,F(x-1)}\left[\sum_{t=2}^{x}(1+c)^{t-1-b} + \frac{(1+c)^{1-b}}{c}\right] - \frac{(1+c)^2}{c^2} - \frac{1+c}{c} = \frac{2(c+1)}{c(1+c)^{x-1}}\left[\frac{(1+c)\left((1+c)^{x-1} - 1\right)}{c} + \frac{1+c}{c}\right] - \frac{(1+c)^2}{c^2} - \frac{1+c}{c} = \frac{c+1}{c^2}.

Also, r(x)\left[r(x) - 1\right] = \frac{c+1}{c^2}, so that the theorem is proved when k = 1. In the more general case, when k < 1, assume that X has the distribution (2.81). Then,

r(x) = c + (1-c)(x-1) = \frac{\theta + x - 1}{\theta + 1}. \qquad (2.85)

Further, in formula (2.74), we have

\frac{2}{F(x-1)}\sum_{t=1}^{x}\sum_{u=1}^{t}F(u-1) = \frac{2}{F(x-1)}\sum_{t=2}^{x} r(t)F(t-1) = \frac{2(b-1)!}{(\theta+1)(\theta)_b\,F(x-1)}\sum_{t=2}^{x}\frac{(\theta+t-1)(\theta)_{t-1}}{(t-2)!} = \frac{2(b-1)!}{(\theta+1)(\theta)_b\,F(x-1)}\sum_{t=2}^{x}\frac{(\theta)_t}{(t-2)!}. \qquad (2.86)

Upon using the combinatorial identity

\sum_{j=0}^{r}\binom{j+a-1}{a-1} = \binom{r+a}{a},

we have

\sum_{t=2}^{x}\frac{(\theta)_t}{(t-2)!} = \frac{(\theta+1)!}{(\theta-1)!}\cdot\frac{(x+\theta)!}{(\theta+2)!\,(x-2)!}.

Substituting for F(x−1) from (2.81) and simplifying (2.86), we get

\frac{2}{F(x-1)}\sum_{t=1}^{x}\sum_{u=1}^{t}F(u-1) = \frac{2(\theta+x-1)(\theta+x)}{(\theta+1)(\theta+2)}.

Now from (2.85),

r(x)\left[r(x) + 1\right] = \frac{(2\theta + x)(\theta + x - 1)}{(\theta+1)^2}

and so the formula (2.74) yields

v(x) = \frac{\theta(\theta + x - 1)(x - 2)}{(\theta+1)^2(\theta+2)}.

Thus,

\frac{v(x)}{r(x)\left(r(x) - 1\right)} = \frac{\theta}{\theta + 2} = k < 1.

Conversely, under the hypothesis of the theorem, we have

v(x) = k\,r(x)\left(r(x) - 1\right).

With the aid of (2.79), we write

\frac{k\,r(x+1)\left(r(x+1) - 1\right)}{r(x+1) - 1} + r(x+1) - 1 = \frac{k\,r(x)\left(r(x) - 1\right)}{r(x)} + r(x).

The last equation leads to

r(x+1) - r(x) = \frac{1-k}{1+k},

which, on solving using the boundary condition r(2) = 1, gives

r(x) = 1 + \frac{1-k}{1+k}(x-2).

Setting k = \frac{c}{2-c}, so that 0 < c < 1 as stipulated, we have

r(x) = c + (1-c)(x-1)

and hence the distribution is (2.81). □
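The constant ratio of (2.82) for the law (2.81) can also be verified by brute force from the conditional moments. The sketch below takes θ = 2 (the arithmetic distribution) and an arbitrary b = 10, so the ratio should be θ/(θ+2) = 1/2.

```python
from math import comb

# v(x)/[r(x)(r(x)-1)] = theta/(theta+2) for the finite-range law (2.81).
theta, b = 2, 10
F = {0: 0.0}
F.update({x: comb(theta + x - 1, x - 1) / comb(theta + b - 1, b - 1)
          for x in range(1, b + 1)})
f = {x: F[x] - F[x - 1] for x in range(1, b + 1)}

def ratio(x):
    w = {t: f[t] / F[x - 1] for t in range(1, x)}    # pmf of X given X < x
    r = sum((x - t) * wt for t, wt in w.items())      # reversed mean residual life
    r2 = sum((x - t) ** 2 * wt for t, wt in w.items())
    v = r2 - r ** 2                                   # reversed variance residual life
    return v / (r * (r - 1))
```

The ratio must be evaluated at x ≥ 3 (at x = 2, r(x) = 1 and the denominator vanishes), and it is constant over the admissible range, in agreement with the theorem.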

Remark 2.14

We can write the distribution function in (2.81) also as

F(x) = \binom{\theta+x-1}{x-1}\Big/\binom{\theta+b-1}{b-1}, \quad x = 1, 2, \ldots, b. \qquad (2.87)

Remark 2.15

Eq. (2.84) represents a family of finite range distributions that contains the uniform distribution for θ=1Image and the arithmetic law for θ=2Image.

Arising from the characteristics of the finite range laws discussed above, we have some further characterizations. These results can be proved by invoking the Cauchy–Schwarz inequality, as done in Theorems 2.6 and 2.7.

Theorem 2.17

Let X be a discrete random variable with the support (0,1,,b)Image. Then:

  1. (i)  E(λ(X)) ≥ [E(λ⁻¹(X))]⁻¹, with the equality holding if and only if X has the reversed geometric law;
  2. (ii)  E(r(X)) ≥ [E(r⁻¹(X))]⁻¹, and the equality holds if and only if X follows distribution (2.68).

Theorem 2.18

Let X be a discrete random variable with the support (1,2,,b)Image. Then,

E(λ(X)r(X)) ≥ [E((λ(X)r(X))⁻¹)]⁻¹,

with equality if and only if the distribution of X is (2.81).

2.9 Odds Function

Along with the traditional reliability functions presented so far, there has been some interest in discovering the potential of the odds function and the log-odds function in reliability analysis. The motivation for considering these two functions is that (i) they are easy to compute and interpret, (ii) their estimation is relatively simple, and (iii) the behaviour of other reliability functions can be ascertained through them.

The concept of odds ratio originated in gambling, wherein the odds of an event A against another event B are defined as the ratio P(A)/P(B). In reliability theory, we can take the event A as survival beyond age x and B as failure by age x. Then, the odds ratio of the events becomes a function of x. The odds ratio for surviving age x is defined as

ω(x) = P(X > x)/P(X ≤ x) = S(x + 1)/F(x), x = −1, 0, 1, …,

(2.88)

and it is called the odds function for survival. Similarly, the odds function for failure by age x is

ω̄(x) = F(x)/S(x + 1).

From the definitions, it follows that

  1. (i)  ω(−1) = ∞, ω(∞) = 0, and ω(x) is decreasing;
  2. (ii)  ω̄(−1) = 0, ω̄(∞) = ∞, and ω̄(x) is increasing.

The odds functions ω and ω̄ are important tools in survival analysis and medical studies for developing models for survival data and for comparing a treatment group with a control group. We refer to Collett (1994) and Kirmani and Gupta (2001) for further details. There has not been much study of the role of odds functions in reliability analysis, especially in the discrete case. We note that

ω(x) = 1/F(x) − 1, x = 0, 1, …,

and

ω̄(x) = 1/S(x + 1) − 1, x = 0, 1, …, with ω̄(−1) = 0.

Accordingly, both ω(x)Image and ω¯(x)Image determine the distribution of X uniquely through the expressions

F(x) = 1/(1 + ω(x)) = ω̄(x)/(1 + ω̄(x))

(2.89)

and

S(x + 1) = ω(x)/(1 + ω(x)) = 1/(1 + ω̄(x)).

(2.90)

It is easy to see that the hazard and reversed hazard rates are

h(x) = [ω̄(x) − ω̄(x − 1)]/[1 + ω̄(x)], x = 0, 1, 2, …,

(2.91)

and

λ(x) = [ω(x − 1) − ω(x)]/[1 + ω(x − 1)].

(2.92)
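To make the one-to-one correspondence concrete, the identities (2.88)–(2.92) can be checked numerically. The following sketch uses an arbitrary illustrative pmf (an assumption, not from the text); exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

# illustrative pmf on {0, 1, 2, 3} (an assumption, not from the text)
f = [Fraction(1, 8), Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)]

F = []                              # F(x) = P(X <= x)
acc = Fraction(0)
for p in f:
    acc += p
    F.append(acc)

def S(x):                           # S(x) = P(X >= x)
    return Fraction(1) if x <= 0 else 1 - F[x - 1]

def odds(x):                        # omega(x) = S(x+1)/F(x), Eq. (2.88)
    return S(x + 1) / F[x]

def odds_bar(x):                    # omega_bar(x) = F(x)/S(x+1)
    return F[x] / S(x + 1)
```

Then F(x) = 1/(1 + ω(x)) and S(x + 1) = 1/(1 + ω̄(x)) recover the distribution, and the difference quotients in (2.91)–(2.92) return the hazard and reversed hazard rates.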

We can express the monotonicities of h(x)Image and λ(x)Image in terms of ω(x)Image and ω¯(x)Image.

Theorem 2.19

(i) h(x) is decreasing ⇒ λ(x) is decreasing ⇒ ω(x) is convex;

(ii) λ(x) is increasing ⇒ h(x) is increasing ⇒ ω̄(x) is convex.

Proof

h(x) is decreasing ⇔

f(x + 1)/S(x + 1) − f(x)/S(x) ≤ 0 ⇒ f(x + 1)/(1 − F(x)) − f(x)/(1 − F(x − 1)) ≤ 0 ⇒ λ(x + 1)F(x + 1)/(1 − F(x)) − λ(x)F(x)/(1 − F(x − 1)) ≤ 0.

Since F(x) is non-decreasing, F(x + 1)/(1 − F(x)) ≥ F(x)/(1 − F(x − 1)), and so λ(x + 1) ≤ λ(x), which proves that λ(x) is decreasing. Next,

ω(x + 1) − ω(x) = [1 − F(x + 1)]/F(x + 1) − [1 − F(x)]/F(x) = 1/F(x + 1) − 1/F(x) = [Π_{t=x+2}^{∞}(1 − λ(t))]⁻¹ − [Π_{t=x+1}^{∞}(1 − λ(t))]⁻¹ = −λ(x + 1)[Π_{t=x+1}^{∞}(1 − λ(t))]⁻¹.

Similarly,

ω(x) − ω(x − 1) = S(x + 1)/F(x) − S(x)/F(x − 1) = −λ(x)[Π_{t=x}^{∞}(1 − λ(t))]⁻¹.

Thus,

(ω(x + 1) − ω(x)) − (ω(x) − ω(x − 1)) = [λ(x)λ(x + 1) + (λ(x) − λ(x + 1))][Π_{t=x}^{∞}(1 − λ(t))]⁻¹,

which is non-negative whenever λ(x)Image is decreasing, and so ω(x)Image is convex.

The proof of (ii) is similar. □

Example 2.21

Consider the uniform distribution in Example 2.2. In this case,

ω(x) = (b − x)/x;  ω̄(x) = x/(b − x).

The hazard and reversed hazard rates are obtained by using (2.91) and (2.92) as

h(x) = 1/(b − x + 1)  and  λ(x) = 1/x.

Notice that λ(x) is decreasing and h(x) is increasing, so that the reverse implication in (ii), namely that h(x) increasing implies λ(x) increasing, is not true. However, it can easily be verified that ω̄(x) is convex.
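A quick numerical check of Example 2.21 (a sketch, with b = 10 chosen for illustration):

```python
from fractions import Fraction

b = 10  # discrete uniform on {1, ..., b}

def F(x):                 # distribution function
    return Fraction(min(max(x, 0), b), b)

def S(x):                 # S(x) = P(X >= x) = 1 - F(x-1)
    return 1 - F(x - 1)

def odds(x):              # should equal (b - x)/x
    return S(x + 1) / F(x)

def odds_bar(x):          # should equal x/(b - x)
    return F(x) / S(x + 1)

def hazard(x):            # via Eq. (2.91)
    return (odds_bar(x) - odds_bar(x - 1)) / (1 + odds_bar(x))

def rev_hazard(x):        # via Eq. (2.92)
    return (odds(x - 1) - odds(x)) / (1 + odds(x - 1))
```

The computed rates confirm that λ(x) = 1/x decreases while h(x) = 1/(b − x + 1) increases.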

The concepts based on residual life and reversed residual life can also be expressed in terms of odds functions. Recall that the survival function of the residual life (Xx|X>x)Image is

S_x(t) = S(x + t + 1)/S(x + 1), t = 0, 1, ….

The residual odds functions are then

ω_x(t) = 1/F_x(t) − 1 = S(x + t + 2)/[S(x + 1) − S(x + t + 2)] = [1 − F_x(t)]/F_x(t),

or

F_x(t) = 1/(1 + ω_x(t))

and similarly

ω̄_x(t) = 1/S_x(t + 1) − 1  or  S_x(t + 1) = 1/(1 + ω̄_x(t)).

In terms of ω̄(x) and ω(x),

ω̄_x(t) = [ω̄(x + t + 1) − ω̄(x)]/[1 + ω̄(x)]

and

ω_x(t) = S(x + t + 2)/[F(x + t + 1) − F(x)] = [1 + ω(x)]ω(x + t + 1)/[ω(x) − ω(x + t + 1)].

Also, the mean residual life function is

m(x) = (1/S(x + 1)) Σ_{t=x+1}^{∞} S(t) = [1 + ω̄(x)] Σ_{t=x+1}^{∞} 1/(1 + ω̄(t − 1)).

(2.93)

Eq. (2.93) leads to the recurrence relation

m(x − 1) = 1 + [(1 + ω̄(x − 1))/(1 + ω̄(x))] m(x), x = 0, 1, 2, ….

(2.94)
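For a geometric lifetime with S(x) = qˣ, (2.93) gives the constant mean residual life 1/(1 − q), and the recurrence (2.94) then holds exactly. A small sketch (the truncation of the infinite series is an implementation convenience; q = 0.6 is illustrative):

```python
q = 0.6  # illustrative geometric survival S(x) = q**x, x = 0, 1, 2, ...

def S(x):
    return q ** x

def odds_bar(x):                     # 1/S(x+1) - 1
    return 1.0 / S(x + 1) - 1.0

def mrl(x, terms=400):               # Eq. (2.93), series truncated after `terms` terms
    return (1 + odds_bar(x)) * sum(1.0 / (1 + odds_bar(t - 1))
                                   for t in range(x + 1, x + 1 + terms))
```

Note that 1/(1 + ω̄(t − 1)) is just S(t), so the sum is the tail sum of survival probabilities in disguise.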

Theorem 2.20

X has decreasing (increasing) mean residual life if and only if

m(x) ≤ (≥) [1 + ω̄(x)]/[ω̄(x) − ω̄(x − 1)].

Proof

By means of (2.94), we can write

X has decreasing mean residual life ⇔ m(x − 1) − m(x) ≥ 0 ⇔ 1 + [(1 + ω̄(x − 1))/(1 + ω̄(x)) − 1] m(x) ≥ 0 ⇔ m(x) ≤ [1 + ω̄(x)]/[ω̄(x) − ω̄(x − 1)]. □

In terms of the odds functions, the reversed mean residual life is

r(x) = (1/F(x − 1)) Σ_{t=1}^{x} F(t − 1) = [1 + ω(x − 1)] Σ_{t=1}^{x} 1/(1 + ω(t − 1)),

and so

r(x + 1) = 1 + [(1 + ω(x))/(1 + ω(x − 1))] r(x), x = 1, 2, ….


In the same manner as we have proved Theorem 2.3, we note the following.

Theorem 2.21

X has increasing reversed mean residual life if and only if, for all x,

r(x) ≤ [1 + ω(x − 1)]/[ω(x − 1) − ω(x)].

Instead of considering the odds functions, it is sometimes beneficial to use the odds rate defined as

ω̄_R(x) = ω̄(x) − ω̄(x − 1).

(2.95)

The function ω̄_R(x) is interpreted as the rate at which ω̄(x) changes at age x. It has the following properties:

  1. (i)  ω̄_R(x) = 1/S(x + 1) − 1/S(x) = h(x)/S(x + 1);
  2. (ii)  The sequence of rates ω̄_R(x) determines the distribution of X uniquely as

S(x) = [1 + Σ_{t=0}^{x−1} ω̄_R(t)]⁻¹;

(2.96)

  1. (iii)  If ω¯R(x)Image is decreasing (ω¯Image is concave), then h(x)Image is decreasing;
  2. (iv)  If h(x)Image is increasing, then ω¯R(x)Image is also increasing.
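The properties of the odds rate can be verified directly on a small pmf (the pmf below is an illustrative assumption, not from the text):

```python
from fractions import Fraction

f = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}

def S(x):                            # S(x) = P(X >= x)
    return sum(p for t, p in f.items() if t >= x)

def odds_bar(x):                     # F(x)/S(x+1) = 1/S(x+1) - 1
    return (1 - S(x + 1)) / S(x + 1)

def odds_rate(x):                    # Eq. (2.95)
    return odds_bar(x) - odds_bar(x - 1)

def S_from_rates(x):                 # Eq. (2.96)
    return 1 / (1 + sum(odds_rate(t) for t in range(0, x)))
```

The reconstruction in (2.96) matches the original survival function, and property (i) relates the rate to h(x)/S(x + 1).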

Example 2.22

Assume that ω̄_R(x) = b/[(b − x)(b − x + 1)], x = 1, 2, …, b − 1, with ω̄_R(0) = 0. Writing

ω̄_R(x) = b/(b − x) − b/(b − x + 1), we have Σ_{t=1}^{x−1} ω̄_R(t) = (x − 1)/(b − x + 1).

Employing formula (2.96), we have

S(x) = (b − x + 1)/b,

the survival function of the discrete uniform distribution.
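The telescoping argument of Example 2.22, checked in exact arithmetic (b = 12 is an illustrative choice):

```python
from fractions import Fraction

b = 12

def odds_rate(x):   # assumed rate b/((b-x)(b-x+1)) for x >= 1, zero at x = 0
    return Fraction(b, (b - x) * (b - x + 1)) if x >= 1 else Fraction(0)

def S(x):           # reconstruction via Eq. (2.96)
    return 1 / (1 + sum(odds_rate(t) for t in range(x)))
```

The partial-fraction split makes the sum telescope, and (2.96) returns the discrete uniform survival function.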

Property (iii) of ω̄_R(x) given above tells us that the class of distributions with decreasing ω̄_R(x) is a subset of the decreasing failure rate class, and also that a sufficient condition for the failure rate to be decreasing is that ω̄(x) is concave. Thus, we have a stronger condition that assists in characterizing and modelling failure time data. Further, when h(x) is increasing, ω̄_R(x) is also increasing, providing an alternative proof of (ii) in Theorem 2.3. For a detailed discussion of the results in this section, we refer to Nair and Sankaran (2015b).

2.10 Log-odds Functions and Rates

The log-odds functions and rates were introduced by Zimmer et al. (1998) as alternative reliability measures and further developed in the work of Wang et al. (2003, 2008). These functions were proposed as an alternative to the hazard rate in situations in which the device under consideration has high reliability or the corresponding hazard rate is non-monotone. When X is a discrete lifetime, Khorashadizadeh et al. (2013a) defined the log-odds function as

L̄(x) = log[F(x)/S(x + 1)] = log ω̄(x), x = 0, 1, 2, …,

(2.97)

and the corresponding log-odds rate as

L̄_R(x) = L̄(x) − L̄(x − 1).

They have shown the following:

  1. (a)  L̄_R(x) = λ₁(x) + h₁(x), where λ₁ and h₁ are the alternative reversed hazard and hazard rates of X discussed in Section 2.2;
  2. (b)  F(x) is characterized by

F(x) = e^{a(x)+b}/(1 + e^{a(x)+b}), x = 0, 1, …,

  1. where a(x) = Σ_{t=1}^{x} L̄_R(t) and b = log[F(0)/S(1)];
  2. (c)  In terms of Y = log X, we have

L̄_Y(x) = L̄_X(eˣ),

  1. and hence

L ¯ R , Y ( x ) = t = 1 x ( λ 1 ( x ) + h 1 ( x ) ) log F X ( x e 1 ) S X ( ( x + 1 ) e 1 ) + b ,

Image

  1. where b1=logfX(0)1fX(0)Image;
  2. (d)  X has an increasing log-odds rate in terms of x (y = log x) if and only if L̄(x) is convex with respect to x (y = log x).

From the above discussion, it is clear that L̄_R(x) is directly related to the alternative hazard and reversed hazard rates, whereas ω̄_R(x) is related to the usual hazard rate. There are other mathematical properties that make ω̄(x) and ω(x) more desirable than their logarithms.

Theorem 2.22

If X is a discrete random variable with odds function ω̄(x), then there exists a random variable X⁎ with the same support as X whose alternative hazard rate is h₁⁎(x) = ω̄_R(x). The survival functions S(x) of X and S⁎(x) of X⁎ are related by

S(x) = [1 − log S⁎(x)]⁻¹.

(2.98)

Proof

By virtue of the properties of ω̄(x),

S⁎(x) = exp[−ω̄(x − 1)], x = 0, 1, 2, …,

(2.99)

is a survival function. Let X⁎ denote the corresponding random variable. From the representation in (2.14), the cumulative hazard rate of X⁎ is

H₁⁎(x) = Σ_{t=0}^{x−1} h₁⁎(t),

(2.100)

where h₁⁎(t) = log[S⁎(t)/S⁎(t + 1)] is the alternative hazard rate of X⁎, and it satisfies

H₁⁎(x) = −log S⁎(x) = ω̄(x − 1).

(2.101)

Now, from (2.100) and (2.101), we have

h₁⁎(x) = H₁⁎(x + 1) − H₁⁎(x) = ω̄(x) − ω̄(x − 1) = ω̄_R(x),

proving the first part. Using (2.99),

S⁎(x) = exp[1 − 1/S(x)],

so that (2.98) holds. Since there is a one-to-one correspondence between S(x) and S⁎(x), they have the same support, and this completes the proof. □

Some observations from Theorem 2.22 are as follows:

  1. (a)  The converse of Theorem 2.22 is also true; that is, if S⁎(x) is the survival function of a random variable X⁎ satisfying (2.98), then S(x) is the survival function of a random variable X for which h₁⁎(x) = ω̄_R(x);
  2. (b)  Since

ω̄_R(x) = h(x)/S(x + 1),

  1. we have

h₁⁎(x) = h(x)/S(x + 1),

  1. and

h(x) = h₁⁎(x)[1 + Σ_{t=0}^{x} h₁⁎(t)]⁻¹, h₁⁎(x) = ω̄(x) − ω̄(x − 1);

  1. (c)  A parallel result that involves the odds function ω(x) and the alternative reversed hazard rate λ₁(x) is also possible. To see this, it is easy to recognize that F⁎(x) = exp[−ω(x)] is a distribution function, with alternative reversed hazard rate

λ₁⁎(x) = log[F⁎(x)/F⁎(x − 1)] = ω(x − 1) − ω(x).

Since ω(x) is a decreasing function, ω(x − 1) − ω(x) = ω_R(x) is the rate at which ω(x) decreases and is therefore an odds rate. Thus, there exists a distribution specified by F⁎(x) whose alternative reversed hazard rate is the odds rate ω_R(x) of F(x), and the connection between the two is explained in the next theorem.

Theorem 2.23

Corresponding to a discrete random variable X with distribution function F(x) and odds function ω(x), there exists another distribution function F⁎(x), with the same support as F(x), whose alternative reversed hazard rate λ₁⁎(x) satisfies

λ₁⁎(x) = ω_R(x).

(2.102)

Also, F(x) and F⁎(x) are related through

F(x) = [1 − log F⁎(x)]⁻¹.

(2.103)

Conversely, if two distribution functions F(x) and F⁎(x) satisfy (2.103), then λ₁⁎(x) = ω_R(x).

Remark 2.16

  1. (1)  The distribution function F⁎(x) corresponds to the random variable X⁎ of Theorem 2.22;
  2. (2)  ω_R(x) = λ(x)/F(x − 1) = λ₁⁎(x);
  3. (3)  λ₁⁎(x) = λ(x)/F(x − 1), or λ₁⁎(x) = λ(x)/Π_{t=x}^{∞}(1 − λ(t)).

Example 2.23

The distributions of X and X⁎ can be mutually characterized. If X⁎ is geometric with

S⁎(x) = qˣ, x = 0, 1, 2, …, 0 < q < 1,

the distribution of X is

S(x) = 1/(1 − x log q), x = 0, 1, 2, ….

In this case,

ω̄(x) = −(x + 1) log q

and

ω̄_R(x) = −log q = h₁⁎(x).
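Example 2.23 is easy to verify numerically; q = 0.4 below is an arbitrary illustrative value.

```python
import math

q = 0.4

def S_star(x):          # geometric survival of X*
    return q ** x

def S(x):               # Eq. (2.98): S(x) = [1 - log S*(x)]^(-1) = 1/(1 - x log q)
    return 1.0 / (1.0 - math.log(S_star(x)))

def odds_bar(x):        # 1/S(x+1) - 1, which should equal -(x+1) log q
    return 1.0 / S(x + 1) - 1.0
```

The odds rate of X is then the constant −log q, which is exactly the alternative hazard rate of the geometric X⁎.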

Example 2.24

In this example, we present an analysis of real data on the number of deaths following surgery in a hospital in the US classified according to the age of the patients (Mosteller and Tukey, 1977). The survival function is estimated from the sample as

Ŝ(x) = (number of observations ≥ x)/(total number of observations),

while

F̂(x) = (number of observations ≤ x)/(total number of observations).

Then ω̄̂(x) = 1/Ŝ(x + 1) − 1  and  ω̂(x) = 1/F̂(x) − 1.

Table 2.5 presents the necessary calculations, and Fig. 2.4 shows the shapes of the ω(x) and ω̄(x) curves.

Table 2.5

Estimation of ω(x) and ω¯(x)Image

Age  Number dying  Ŝ(x)  F̂(x)  ω̄(x)  ω(x)
0–4 34 1.0000 0.0556 0.0589 16.9779
5–14 9 0.9444 0.0704 0.0757 13.2100
15–24 23 0.9296 0.1080 0.1211 8.2576
25–34 19 0.8920 0.1391 0.1616 6.1881
35–44 16 0.8609 0.1653 0.1980 5.0505
45–54 59 0.8347 0.2619 0.3548 2.8185
55–64 101 0.7381 0.4272 0.7458 1.3408
65–75 185 0.5728 0.7300 2.7037 0.3699
76–83 97 0.2700 0.8887 7.9847 0.1252
83+ 68 0.1113 1.0000 8.0917 0.1236

Image

Image
Figure 2.4 Plots of ω(x) and ω¯(x)Image.
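The entries of Table 2.5 can be reproduced from the death counts alone (total n = 611); the cumulative counts give the empirical F and S over the ordered age groups:

```python
deaths = [34, 9, 23, 19, 16, 59, 101, 185, 97, 68]   # Table 2.5 age groups, in order
n = sum(deaths)                                      # 611

S_hat, F_hat = [], []
c = 0
for d in deaths:
    S_hat.append((n - c) / n)   # proportion of observations >= current group
    c += d
    F_hat.append(c / n)         # proportion of observations <= current group

# odds estimates for all but the last (open-ended) group
omega_bar = [F_hat[i] / S_hat[i + 1] for i in range(len(deaths) - 1)]
omega = [S_hat[i + 1] / F_hat[i] for i in range(len(deaths) - 1)]
```

For the last group, Ŝ(x + 1) = 0, so ω̄ is not defined there by this formula.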

2.11 Mixture Distributions

The role of mixture distributions in reliability studies was mentioned earlier in Section 2.1. When the mixture is of the form (1.11), we have

f(x) = α f₁(x) + (1 − α) f₂(x)

and

S(x) = α S₁(x) + (1 − α) S₂(x),

(2.104)

where f₁ (f₂) is the probability mass function and S₁ (S₂) is the survival function corresponding to F₁ (F₂). If h(x), k₁(x) and k₂(x), respectively, denote the hazard rates of f, f₁ and f₂, it is easy to see that

h(x) = p(x) k₁(x) + (1 − p(x)) k₂(x),

(2.105)

where

p(x) = α S₁(x)/[α S₁(x) + (1 − α) S₂(x)].

Likewise, for the reversed hazard rates λ, λ̄₁ and λ̄₂ of f, f₁ and f₂, respectively, we have

λ(x) = p₁(x) λ̄₁(x) + (1 − p₁(x)) λ̄₂(x),

(2.106)

with

p₁(x) = α F₁(x)/[α F₁(x) + (1 − α) F₂(x)].
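Identity (2.105) holds exactly; a sketch with two geometric components (the parameter values are illustrative assumptions):

```python
from fractions import Fraction

alpha = Fraction(1, 3)
q1, q2 = Fraction(1, 2), Fraction(3, 4)   # component survival functions q_i**x

def S1(x): return q1 ** x
def S2(x): return q2 ** x
def S(x):  return alpha * S1(x) + (1 - alpha) * S2(x)

def hazard(surv, x):                      # h(x) = f(x)/S(x) = 1 - S(x+1)/S(x)
    return 1 - surv(x + 1) / surv(x)

def p(x):                                 # mixing weight in Eq. (2.105)
    return alpha * S1(x) / S(x)
```

Here component 1 has the larger hazard, so by (the reverse labelling of) Theorem 2.24, its weight p(x) decreases with age: the weakest subpopulation dies out first.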

Finite mixtures of two components are commonly used in heterogeneous populations in which the elements are classified into two categories. It follows from (2.105) that

min(k₁(x), k₂(x)) ≤ h(x) ≤ max(k₁(x), k₂(x)),

and also that

k₁(x) ≤ h(x) ≤ k₂(x)

whenever k1(x)k2(x)Image for all x.

Theorem 2.24

If k₁(x) ≤ k₂(x) for all x, then p(x) is an increasing function.

Proof

We have

p(x + 1) − p(x) = α S₁(x + 1)/[α S₁(x + 1) + (1 − α) S₂(x + 1)] − α S₁(x)/[α S₁(x) + (1 − α) S₂(x)].

The sign of the previous equation depends on

t(x) = α S₁(x + 1)[α S₁(x) + (1 − α) S₂(x)] − α S₁(x)[α S₁(x + 1) + (1 − α) S₂(x + 1)] = α(1 − α)[S₁(x + 1) S₂(x) − S₁(x) S₂(x + 1)] = α(1 − α)[S₁(x + 1)/S₁(x) − S₂(x + 1)/S₂(x)] S₁(x) S₂(x) = α(1 − α) S₁(x) S₂(x)[k₂(x) − k₁(x)] ≥ 0,

since k₁(x) ≤ k₂(x). Thus, p(x) is increasing. □ This result has the interpretation that, when the lifetime distribution is a mixture, the weakest items die out first.

When the distribution of X is indexed by a parameter θ, where θ is the value of a random variable Θ defined on [0,)Image with distribution function C(θ)Image, from (1.3), the survival function and the probability mass function of X are given by

Ḡ(x) = ∫_Θ S(x|θ) dC(θ)

and

g(x) = ∫_Θ f(x|θ) dC(θ),

so that the mixture has its hazard rate as

h(x) = ∫_Θ f(x|θ) dC(θ) / ∫_Θ S(x|θ) dC(θ) = ∫_Θ h(x|θ) dC(θ|x),

where

h(x|θ) = f(x|θ)/S(x|θ)

is the conditional hazard rate of X, given Θ=θImage, and

dC(θ|x) = S(x|θ) dC(θ) / ∫_Θ S(x|θ) dC(θ)

is the conditional density function of θ, given X ≥ x. When c(θ) = dC(θ)/dθ is the density function of Θ, c(θ|x) defines the conditional probability density function of θ with the same support as that of Θ. In a Bayesian context, c(θ) can be viewed as a prior distribution of θ, and C(θ|x) as the posterior distribution of θ after observing the data on X. Models in which θ is regarded as random are called frailty models, which are extensively discussed in survival analysis. In particular, if a representation of the form

S₁(x) = S₁(x|θ) = [S(x)]^θ

holds, then from (2.17), the alternative hazard rates h̄₁(x) of S₁ and h₁(x) of S satisfy the relationship

h̄₁(x) = θ h₁(x).

We say that a random variable X₁ with survival function S₁(x) is the proportional hazard rates model corresponding to X with survival function S(x).
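The proportionality h̄₁(x) = θ h₁(x) under S₁ = S^θ is immediate to check numerically (the geometric baseline and the value of θ below are illustrative assumptions):

```python
import math

theta = 2.5
q = 0.7

def S(x):                  # baseline survival (geometric, an assumption)
    return q ** x

def S1(x):                 # proportional hazards model: S1(x) = S(x)**theta
    return S(x) ** theta

def alt_hazard(surv, x):   # alternative hazard rate h1(x) = -log[S(x+1)/S(x)]
    return -math.log(surv(x + 1) / surv(x))
```

The relationship holds for every x, since log S₁(x) = θ log S(x) makes the cumulative alternative hazards proportional.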

Some similar results exist for reversed hazard rates as well.

Theorem 2.25

In the case of reversed rate functions, if λ¯1(x)>λ¯2(x)Image for all x, then p1(x)Image is an increasing function.

Proof

We have

p₁(x + 1) − p₁(x) = α F₁(x + 1)/[α F₁(x + 1) + (1 − α) F₂(x + 1)] − α F₁(x)/[α F₁(x) + (1 − α) F₂(x)].

The sign of the left hand side depends on

α(1 − α) F₁(x + 1) F₂(x + 1)[F₂(x)/F₂(x + 1) − F₁(x)/F₁(x + 1)] = α(1 − α) F₁(x + 1) F₂(x + 1)[λ̄₁(x + 1) − λ̄₂(x + 1)],

which is non-negative since λ¯1(x)>λ¯2(x)Image for all x. Hence, p1(x)Image is an increasing function.

Assuming θ to be random with distribution function C(θ)Image, the mixture has reversed hazard rate

λ(x) = ∫_Θ f(x|θ) dC(θ) / ∫_Θ F(x|θ) dC(θ) = ∫_Θ λ(x|θ) dC(θ|x),

where

λ ( x | θ ) = f ( x | θ ) F ( x | θ )

Image

is the conditional reversed hazard rate of X, given Θ=θImage, and

dC(θ|x) = F(x|θ) dC(θ) / ∫_Θ F(x|θ) dC(θ)

is the conditional distribution function of θ, given X ≤ x. As before, if a representation of the form

F₁(x) = F₁(x|θ) = [F(x)]^θ

holds, then the alternative reversed hazard rates λ̄₁(x) of F₁ and λ₁(x) of F satisfy the relationship

λ̄₁(x) = θ λ₁(x).

In this case, we say that F1Image is the reversed proportional hazard rates model of F. □

Nelson (1982) has mentioned that units manufactured in different production periods may have different life distributions due to differences in design, raw materials, handling, etc., and it may therefore be necessary to identify the production period, customer environment, etc. that result in poor units, for remedial action on that part of the population. Cox (1959) analyzed data on failure times using mixture models by classifying the cause of failure as identified or not; see also Mendenhall and Hader (1958) and Cheng et al. (1985). Identification of the life distribution is crucial in such cases. One way in which such identification is possible is to use characterization theorems that involve various reliability functions or relationships between them. Nair et al. (1999) proposed certain relationships between the hazard rate and the mean residual life that characterize some mixture distributions for this purpose.

Theorem 2.26

(a) A discrete random variable X taking values in (0,1,)Image follows a mixture of geometric laws with probability mass function

f(x) = α p₁ q₁ˣ + (1 − α) p₂ q₂ˣ, 0 ≤ α ≤ 1, 0 < pᵢ < 1, qᵢ = 1 − pᵢ, i = 1, 2,

(2.107)

if and only if, for all x,

m(x) = 1/p₁ + 1/p₂ − h(x + 1)/(p₁p₂);

(b) X is a mixture of Waring distributions with probability mass function

f(x) = α(a − b₁)(b₁)ₓ/(a)ₓ₊₁ + (1 − α)(a − b₂)(b₂)ₓ/(a)ₓ₊₁, a > b₁, b₂ > 0, b₁ ≠ b₂,

if and only if for all x

m(x) = [(2a − b₁ − b₂ − 1)(a + x) − (a + x)(a + x + 1) h(x + 1)]/[(a − b₁ − 1)(a − b₂ − 1)].

Corollary 2.27

When p1=p2=pImage, the geometric law is characterized by the property

m(x) = 2/p − h(x + 1)/p²,

and when b1=b2=bImage, the Waring distribution

f(x) = (a − b)(b)ₓ/(a)ₓ₊₁, x = 0, 1, 2, …,

is characterized by the property

m(x) = [(2a − 2b − 1)(a + x) − (a + x)(a + x + 1) h(x + 1)]/(a − b − 1)².

In Theorem 2.26, we have used the mean residual life of the mixture of the form in (2.104). According to Definition 2.20, it is calculated as

m(x) = (1/S(x + 1)) Σ_{t=x+1}^{∞} S(t) = Σ_{t=x+1}^{∞} [α S₁(t) + (1 − α) S₂(t)] / [α S₁(x + 1) + (1 − α) S₂(x + 1)],

(2.108)

where S1Image and S2Image are the survival functions of the component distributions. Eq. (2.108) is expressible as

m ( x ) = q ( x ) m 1 ( x ) + ( 1 q ( x ) ) m 2 ( x ) ,

Image

with

q ( x ) = α S 1 ( x + 1 ) α S 1 ( x + 1 ) + ( 1 α ) S 2 ( x + 1 ) = p ( x + 1 )

Image

and m1(x)Image and m2(x)Image are the mean residual life functions of the component distributions. When S(x)Image is indexed by the values of a non-negative continuous random variable Θ, the residual life distribution becomes

S x ( t ) = Θ S ( x + t + 1 | θ ) d C ( θ ) Θ S ( x + 1 | θ ) d C ( θ )

Image (2.109)

as a natural corollary of the basic definition in (2.19). Eq. (2.109) is equivalent to

S_x(t) = ∫_Θ S_x(t|θ) S(x + 1|θ) dC(θ) / ∫_Θ S(x + 1|θ) dC(θ) = ∫_Θ S_x(t|θ) dπ(θ|x + 1),

where

π(θ|x + 1) = ∫₀^θ S(x + 1|u) dC(u) / ∫₀^∞ S(x + 1|u) dC(u)

is the distribution function of Θ, given X ≥ x + 1.

Apart from the hazard function and mean residual life function, higher moments of residual life can also be used for characterizing life distributions. Two such results are established in Nair et al. (1999).

Theorem 2.28

If E(X2)<Image, the identity

  1. (a)  

μ⁽²⁾(x) = E[(X − x)(X − x − 1)|X > x] = A h(x + 1) + B, where A = (2/(p₁²p₂²))(p₁p₂ − p₁ − p₂), B = (2/(p₁²p₂²))(p₁² + p₂² + p₁p₂(1 − p₁ − p₂)), 0 < p₁, p₂ < 1,

  1. holds if and only if X is distributed as geometric mixture in (2.107);
  2. (b)  

E [ ( X x ) 2 | X > x ] = A + B h ( x ) + C ( x ) ,

Image

  1. where

A = 2 ( p 1 p 2 p 3 ) 1 ( p 1 + p 2 + p 3 ) , B = 2 ( p 1 p 2 p 3 ) 1

Image

  1. and

C = 2 ( p 1 p 2 p 3 ) 1 ( p 1 p 2 + p 2 p 3 + p 3 p 1 ) ,

Image

  1. if and only if X has a three-component geometric mixture

f(x) = Σᵢ₌₁³ αᵢ pᵢ qᵢˣ, 0 < αᵢ < 1, Σᵢ₌₁³ αᵢ = 1.

These authors have pointed out, with the help of simulated data, that Theorem 2.26 is useful in model identification and inference. If the plot of the estimates (ĥ(x), m̂(x)) falls along a straight line, the distribution is a mixture of geometrics. A quick estimate of the parameters p₁ and p₂ is obtained from the slope and intercept of the fitted line, or more accurately from a least-squares fit of the line. One can estimate α by equating the sample mean with

E(X) = α q₁/p₁ + (1 − α) q₂/p₂

after substituting the estimates of p₁ and p₂. Nair (1983b) addressed the estimation problem by matching the factorial moments of the sample with those of the population. Denoting by m₍₁₎, m₍₂₎ and m₍₃₎ the first three sample factorial moments, the equations to be solved are as follows:

α r + (1 − α)s = a₁ = m₍₁₎, α r² + (1 − α)s² = a₂ = m₍₂₎/2, α r³ + (1 − α)s³ = a₃ = m₍₃₎/6,

with r = q₁/p₁ and s = q₂/p₂. After some algebra, the quadratic equation

(a₂ − a₁²)s² − (a₃ − a₁a₂)s + (a₁a₃ − a₂²) = 0

is obtained, which can be solved for s. Now r and α are calculated from

r = (a₂ − a₁ s)/(a₁ − s)  and  α = (a₁ − s)/(r − s).

If there are two admissible solutions for s in the above quadratic equation, the one that gives the better fit is chosen as the final estimate.
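A sketch of this moment-matching procedure, using population rather than sample factorial moments so that the true parameters are recovered exactly (the parameter values are illustrative assumptions):

```python
import math

alpha, p1, p2 = 0.4, 0.5, 0.2              # true mixture parameters (illustrative)
r, s = (1 - p1) / p1, (1 - p2) / p2        # r = q1/p1, s = q2/p2

# a_k = alpha*r**k + (1-alpha)*s**k are the population analogues of
# m_(1), m_(2)/2 and m_(3)/6 for the geometric mixture
a1 = alpha * r + (1 - alpha) * s
a2 = alpha * r**2 + (1 - alpha) * s**2
a3 = alpha * r**3 + (1 - alpha) * s**3

# the roots of (a2 - a1^2) z^2 - (a3 - a1 a2) z + (a1 a3 - a2^2) = 0 are r and s
A, B, C = a2 - a1**2, -(a3 - a1 * a2), a1 * a3 - a2**2
disc = math.sqrt(B * B - 4 * A * C)
s_hat = (-B + disc) / (2 * A)              # larger root identified with s here
r_hat = (a2 - a1 * s_hat) / (a1 - s_hat)
alpha_hat = (a1 - s_hat) / (r_hat - s_hat)
```

With sample factorial moments in place of a₁, a₂, a₃, the same algebra yields the estimates discussed in the text.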

Sometimes, mixtures of a more general form

f ( x ) = i = 1 n α i f i ( x ) ,

Image (2.110)

or

S ( x ) = i = 1 n α i S i ( x ) ,

Image

where the αᵢ are real numbers, not necessarily all positive, satisfying Σαᵢ = 1, are also considered. They are of practical significance, representing distributions of order statistics, coherent systems and inference in step-stress models. Discussions of these aspects in the continuous case can be seen in Navarro and Rychlik (2007), Navarro and Shaked (2006), Balakrishnan et al. (2007, 2009), Balakrishnan and Cramer (2008), and Balakrishnan and Xie (2007). When some of the mixing constants are negative, such mixtures are called generalized mixtures. The representation in (2.110) can be expressed (Navarro and Hernandez, 2004) as

S(x) = α S⁺(x) + (1 − α) S⁻(x)

with α = Σ_{αᵢ>0} αᵢ > 1,

S⁺(x) = Σ_{αᵢ>0} αᵢ Sᵢ(x) / Σ_{αᵢ>0} αᵢ  and  S⁻(x) = Σ_{αᵢ<0} αᵢ Sᵢ(x) / Σ_{αᵢ<0} αᵢ.

Then,

S⁺(x) = α⁻¹ S(x) + α⁻¹(α − 1) S⁻(x),

and the relationship for n=2Image becomes

min(m₁(x), m₂(x)) ≤ m(x) ≤ max(m₁(x), m₂(x)),

where mi(x)Image is the mean residual life function of Si(x)Image.

Recalling the formula for the reversed mean residual life of X as

r(x) = (1/F(x − 1)) Σ_{t=1}^{x} F(t − 1),

(2.111)

we see that for the two-component mixture we have

F(x) = α F₁(x) + (1 − α) F₂(x), r(x) = p₁(x − 1) r₁(x) + (1 − p₁(x − 1)) r₂(x),

(2.112)

where r₁(x) and r₂(x) are the reversed mean residual lives of F₁ and F₂, and p₁(·) is as defined in (2.106). Eliminating p₁(x) from

r(x + 1) = p₁(x) r₁(x + 1) + (1 − p₁(x)) r₂(x + 1)

and (2.106), a relationship connecting the reversed mean residual life r(x)Image and the reversed hazard rate λ(x)Image of the mixture can be expressed in the form

r(x + 1) = g₁(x) λ(x) + g₂(x),

(2.113)

where

g₁(x) = [r₁(x + 1) − r₂(x + 1)]/[λ̄₁(x) − λ̄₂(x)]  and  g₂(x) = [r₂(x + 1) λ̄₁(x) − r₁(x + 1) λ̄₂(x)]/[λ̄₁(x) − λ̄₂(x)].

Remark 2.17

The identity connecting m(x)Image and h(x)Image of the mixture also bears the same form

m(x) = l₁(x) h(x + 1) + l₂(x),

where

l₁(x) = [m₁(x) − m₂(x)]/[h̄₁(x + 1) − h̄₂(x + 1)], l₂(x) = [m₂(x) h̄₁(x + 1) − m₁(x) h̄₂(x + 1)]/[h̄₁(x + 1) − h̄₂(x + 1)],

(2.114)

where h̄₁ and h̄₂ denote the hazard rates of the component distributions.

Example 2.25

Let X be distributed as a mixture of reversed geometric forms

F(x) = α F₁(x) + (1 − α) F₂(x),

where

Fᵢ(x) = cᵢ⁻¹(1 + cᵢ)^{1−b} for x = 0, and Fᵢ(x) = (1 + cᵢ)^{x−b} for x = 1, 2, …, b.

Inserting

rᵢ(x) = (1 + cᵢ)/cᵢ  and  λ̄ᵢ(x) = cᵢ/(1 + cᵢ)

in (2.113), we obtain

r(x + 1) = (c₁ + c₂ + 2c₁c₂)/(c₁c₂) − [(1 + c₁)(1 + c₂)/(c₁c₂)] λ(x), x = 1, 2, ….

(2.115)

The relationship given above shows that the plot of (λ(x), r(x + 1)) is a decreasing straight line, which is easy to verify. Notice that (2.115) is a characteristic property of this distribution.

2.12 Weighted Distributions

Recalling the definition of the weighted distribution in (1.20) that

f_W(x) = W(x) f(x)/E[W(X)]

and the expression of the hazard rate in (1.21) given by

h_W(x) = {W(x)/E[W(X)|X > x]} h_X(x),

the mean residual lives of X and XWImage satisfy the relationship

[m_W(x − 1) − 1]/m_W(x) = [p(x + 1)/p(x)] [m(x − 1) − 1]/m(x), p(x) = E[W(X)|X > x].

As special cases, we have

S_L(x) = [m(x) + x] S(x)/μ, h_L(x) = x h(x)/[m(x) + x], [m_L(x − 1) − 1]/m_L(x) = {[m(x + 1) + x + 1]/[m(x) + x]} [m(x − 1) − 1]/m(x).

Denoting by hₙ(x) and mₙ(x) the hazard rate and the mean residual life of the equilibrium random variable Yₙ (Section 1.4), we have from Nair et al. (2012b) that

hₙ(x) = 1/mₙ₋₁(x), mₙ₋₁(x + 1) = mₙ(x + 1)/[1 + mₙ(x + 1) − mₙ(x)]

and

h n ( x + 1 ) = 1 + h n + 1 ( x + 1 ) ( 1 h n + 1 1 ( x ) ) .

Image

In particular, for the first-order equilibrium random variable XEImage (Nair and Hitha, 1989), we have

h_E(x) = 1/m(x),

so that the hazard rate of X_E is the reciprocal of the mean residual life function of X. Furthermore, we have

h ( x ) = 1 + h E ( 1 + h E 1 ( x 1 ) ) , m ( x ) = m E ( x ) [ 1 + m E ( x ) + m E ( x 1 ) ] 1 .

Image
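The identity h_E(x) = 1/m(x) is easy to verify under the convention f_E(x) = S(x + 1)/E(X) for the first-order equilibrium distribution (the pmf below is an illustrative assumption):

```python
from fractions import Fraction

f = [Fraction(1, 5), Fraction(1, 2), Fraction(1, 5), Fraction(1, 10)]  # pmf on {0,...,3}

def S(x):                  # S(x) = P(X >= x)
    return sum(f[t] for t in range(x, len(f)))

mu = sum(x * f[x] for x in range(len(f)))   # E(X)

def f_E(x):                # equilibrium pmf, convention f_E(x) = S(x+1)/mu
    return S(x + 1) / mu

def S_E(x):                # survival function of the equilibrium variable
    return sum(f_E(t) for t in range(x, len(f) - 1))

def m(x):                  # mean residual life of X
    return sum(S(t) for t in range(x + 1, len(f))) / S(x + 1)
```

The ratio f_E(x)/S_E(x), the hazard rate of X_E, equals 1/m(x) at every point of the support.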

From the above identities, Nair and Hitha (1989) have also shown the following:

(a) a mean residual life of the form A+BxImage characterizes the geometric, Waring and negative hyper-geometric distributions according as B=0Image, B>0Image and B<0Image, respectively;

(b) the random variables X and XEImage are such that h(x)=C1hE(x)Image (m(x)=C2mE(x)Image) if and only if X is geometric for C1=1Image (C2=1Image), Waring for C1>1Image (C2<1Image) and negative hyper-geometric for C1<1Image (C2>1Image).

More general characterizations covering the original and the nth-order equilibrium distributions have been provided by Nair et al. (2012b) as follows:

  1. (a)  hn(x)=(1+Cn)h(x)Image for all x and n=0,1,2,Image;
  2. (b)  mn(x)=(1+Bn)1m(x)Image, for all x and n=0,1,2,Image;
  3. (c)  the variance residual life of YnImage,

σ n 2 ( x ) = C n m n ( x ) ( m n ( x ) 1 ) ,

Image

holds if and only if X has one of the above three distributions, the geometric (C=0Image, B=0Image, Cn=1Image), Waring (C<0Image, B>0Image, Cn>1Image), and negative hyper-geometric (C>0Image, B<0Image, Cn<1Image).

In particular, σₙ²(x) = σ²(x) characterizes the geometric distribution. So also does the property

E[(X − x − 1)⁽ⁿ⁾ | X > x + n] = c,

a constant, for all x and n = 1, 2, ….

Li (2011) obtained formulas for the equilibrium distribution of the n-fold convolution of f(x)Image and mixtures. If fnImage is the n-fold convolution of f (see Section 1.5) and fmnImage is the m-th order equilibrium distribution of fnImage, then

f m n ( x ) = μ ( m ) μ ( m ) n f m ( n 1 ) ( x ) + 1 μ ( m ) n i = 1 m ( m i ) μ ( i ) μ ( m i ) ( n 1 ) f i ( x ) ,

Image

where μ(r)nImage is the rth factorial moment of frImage, m=1,2,Image, n=2,3,Image.

When the baseline distribution of X is f(x|θ)Image, where θ is the realization of a continuous random variable Θ, the mixture (Section 1.3)

g ( x ) = Θ f ( x | θ ) d C ( θ )

Image

has nth order equilibrium distribution

g n ( x ) = Θ f n ( x | θ ) d C n ( θ )

Image

where

d C n ( θ ) = E ( X ( n ) | θ ) d C ( θ ) E ( X ( n ) ) .

Image

Example 2.26

It has been shown in Example 1.2 that the nth order equilibrium distribution of a geometric variable X is the same as the distribution of X. Now, in this case, we have

d C n ( q ) = E ( X ( n ) | q ) d C ( q ) E ( X ( n ) ) = ( q p ) n d C ( q ) E ( X ( n ) ) ,

Image

showing that gₙ(x) is also a mixture of geometric laws.

Finally, we look at the relationships between various reliability measures in reversed time in the original and weighted versions. For this purpose, we note that the distribution function of the weighted law is given by

$$F^{W}(x) = \frac{1}{E[W(X)]}\sum_{t=0}^{x} W(t)\, f(t) = \frac{E[W(X) \mid X \le x]\, F(x)}{E[W(X)]}, \qquad E[W(X)] < \infty.$$
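As an illustration of this representation, the identity can be verified numerically; the weight $W(x) = x + 1$ (a length-biased-type weight) and the geometric baseline below are our own arbitrary choices.

```python
# Illustrative check (weight and baseline are arbitrary choices):
# F_W(x) = E[W(X) | X <= x] * F(x) / E[W(X)].
p, q, N = 0.4, 0.6, 500
f = [p * q**x for x in range(N)]        # geometric baseline pmf
W = lambda x: x + 1                     # weight function

EW = sum(W(x) * fx for x, fx in enumerate(f))          # E[W(X)]
fW = [W(x) * fx / EW for x, fx in enumerate(f)]        # weighted pmf

for x in (0, 2, 7):
    FW = sum(fW[: x + 1])                              # direct F_W(x)
    F = sum(f[: x + 1])
    EWc = sum(W(t) * f[t] for t in range(x + 1)) / F   # E[W(X) | X <= x]
    assert abs(FW - EWc * F / EW) < 1e-12
```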

Accordingly, the reversed failure rate of $X^{W}$ becomes

$$\lambda^{W}(x) = \frac{f^{W}(x)}{F^{W}(x)} = \frac{W(x)}{E[W(X) \mid X \le x]}\,\lambda(x).$$
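This relation, too, is easy to verify numerically; the sketch restates its own assumptions (an arbitrary weight $W(x) = x + 1$ and a geometric baseline) so it stands alone.

```python
# Illustrative check (arbitrary weight and baseline):
# lambda_W(x) = W(x) * lambda(x) / E[W(X) | X <= x].
p, q, N = 0.4, 0.6, 500
f = [p * q**x for x in range(N)]
W = lambda x: x + 1
EW = sum(W(x) * fx for x, fx in enumerate(f))
fW = [W(x) * fx / EW for x, fx in enumerate(f)]

def rev_rate(pmf, x):
    """Reversed failure rate lambda(x) = f(x) / F(x)."""
    return pmf[x] / sum(pmf[: x + 1])

for x in (1, 4, 9):
    F = sum(f[: x + 1])
    EWc = sum(W(t) * f[t] for t in range(x + 1)) / F   # E[W(X) | X <= x]
    assert abs(rev_rate(fW, x) - W(x) * rev_rate(f, x) / EWc) < 1e-12
```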

Similarly, from (2.64),

$$r^{W}(x) = \frac{1}{F^{W}(x-1)} \sum_{t=1}^{x} F^{W}(t-1)$$

gives

$$\frac{r^{W}(x+1) - 1}{r^{W}(x)} = \frac{p_1(x-1)}{p_1(x)}\,\frac{r(x+1) - 1}{r(x)}, \qquad p_1(x) = E\bigl(W(X) \mid X \le x\bigr).$$

These identities, relationships and formulas are helpful while discussing properties of ageing concepts of weighted distributions and their relationships with those of the baseline distribution.
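The last identity can likewise be checked numerically. The sketch below is illustrative only; it assumes the arbitrary weight $W(x) = x + 1$, a geometric baseline, $r(x)$ computed as $\sum_{t=1}^{x} F(t-1)/F(x-1)$ per (2.64), and $p_1(x) = E(W(X) \mid X \le x)$.

```python
# Illustrative check of the identity relating r_W and r (arbitrary
# weight W(x) = x + 1 and geometric baseline; r(x) computed from (2.64)).
p, q, N = 0.4, 0.6, 500
f = [p * q**x for x in range(N)]
W = lambda x: x + 1
EW = sum(W(x) * fx for x, fx in enumerate(f))
fW = [W(x) * fx / EW for x, fx in enumerate(f)]

def cdf(pmf, x):
    return sum(pmf[: x + 1])

def r(pmf, x):
    """Reversed mean residual life, computed as in (2.64)."""
    return sum(cdf(pmf, t - 1) for t in range(1, x + 1)) / cdf(pmf, x - 1)

def p1(x):
    return sum(W(t) * f[t] for t in range(x + 1)) / cdf(f, x)

for x in (2, 5, 8):
    lhs = (r(fW, x + 1) - 1) / r(fW, x)
    rhs = (p1(x - 1) / p1(x)) * (r(f, x + 1) - 1) / r(f, x)
    assert abs(lhs - rhs) < 1e-10
```

The check works because, for any pmf, $(r(x+1)-1)/r(x)$ collapses to $F(x-1)/F(x)$, so the identity is just the weighted representation of the distribution function in disguise.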

References

N. Balakrishnan, E. Cramer, Progressive censoring from heterogeneous distributions with applications to inference, Annals of the Institute of Statistical Mathematics 2008;60:151–171.

N. Balakrishnan, D. Kundu, H.K.T. Ng, N. Kannan, Point and interval estimation for a simple step-stress model with type-II censoring, Journal of Quality Technology 2007;39:35–47.

N. Balakrishnan, Q. Xie, Exact inference for a simple step-stress model with type I hybrid censored data from the exponential population, Journal of Statistical Planning and Inference 2007;137:3268–3290.

N. Balakrishnan, Q. Xie, D. Kundu, Exact inference for a simple step-stress model from the exponential distribution under time constraint, Annals of the Institute of Statistical Mathematics 2009;61:251–274.

R.E. Barlow, F. Proschan, Mathematical Theory of Reliability. New York: John Wiley & Sons; 1965.

H.W. Block, T.H. Savits, H. Singh, The reversed hazard rate function, Probability in the Engineering and Informational Sciences 1998;12:69–90.

S.W. Cheng, J.C. Fu, S.K. Sinha, An empirical procedure for estimating parameters of mixed exponential life testing model, IEEE Transactions on Reliability 1985;34:60–64.

A. Cohen, Estimation in the Poisson distribution when sample values of c + 1 are sometimes erroneously reported as c, Annals of the Institute of Statistical Mathematics 1960;9:181–193.

D. Collett, Modelling Binary Data. London: Chapman and Hall; 1994.

D.R. Cox, The analysis of exponentially distributed lifetimes with two types of failure, Journal of the Royal Statistical Society, Series B 1959;21:411–421.

D.R. Cox, Regression models and life tables, Journal of the Royal Statistical Society, Series B 1972;34:187–220.

D.R. Cox, D. Oakes, Analysis of Survival Data. London: Chapman and Hall; 1984.

M.S. Finkelstein, On the reversed hazard rate, Reliability Engineering and System Safety 2002;78:71–75.

W. Glänzel, A. Telcs, A. Schubert, Characterization by truncated moments and its application to Pearson-type distributions, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1984;66:173–183.

S. Goliforushani, M. Asadi, On the discrete mean past time, Metrika 2008;68:209–217.

S. Goliforushani, M. Asadi, Stochastic ordering among inactivity times of coherent systems, Sankhyā, Series B 2011;73:241–262.

S. Goliforushani, M. Asadi, N. Balakrishnan, On the residual and inactivity times of the components of used coherent systems, Journal of Applied Probability 2012;49:385–404.

R.C. Gupta, Some characterizations of discrete distributions by properties of their moment distribution, Communications in Statistics, Theory and Methods 1975;4:761–785.

R.C. Gupta, On the characterization of survival distributions in reliability by properties of their renewal densities, Communications in Statistics, Theory and Methods 1979;8:685–697.

R.P. Gupta, N.U. Nair, G. Asha, Characterizing discrete life distribution by relations between reversed failure rates and conditional expectations, Far East Journal of Theoretical Statistics 2006;20:113–122.

N. Hitha, N.U. Nair, Characterization of some discrete models by properties of residual life function, Calcutta Statistical Association Bulletin 1989;38:151–154.

J.O. Irwin, The generalized Waring distribution applied to accident theory, Journal of the Royal Statistical Society A 1968;131:205–225.

J.O. Irwin, The generalized Waring distribution, Part I, Journal of the Royal Statistical Society A 1975;138:18–31.

J.O. Irwin, The generalized Waring distribution, Part II, Journal of the Royal Statistical Society A 1975;138:204–227.

J.O. Irwin, The generalized Waring distribution, Part III, Journal of the Royal Statistical Society A 1975;138:374–384.

J.D. Kalbfleisch, R.L. Prentice, The Statistical Analysis of Failure Time Data. New York: John Wiley & Sons; 2002.

J. Keilson, U. Sumita, Uniform stochastic ordering and related inequalities, Canadian Journal of Statistics 1982;10:181–198.

A.W. Kemp, Classes of discrete lifetime distributions, Communications in Statistics, Theory and Methods 2004;33:3069–3093.

M. Khorashadizadeh, A.H.R. Roknabadi, G.R. Borzadaran, Variance residual life function in discrete random ageing, Metron 2010;LXVII:67–75.

M. Khorashadizadeh, A.H.R. Roknabadi, G.R.M. Borzadaran, Characterizations of lifetime distributions based on doubly truncated mean residual life and mean past to failure, Communications in Statistics, Theory and Methods 2012;41:1105–1115.

M. Khorashadizadeh, A.H.R. Roknabadi, G.R.M. Borzadaran, Doubly truncated (interval) cumulative residual and past entropy, Statistics and Probability Letters 2013;83:1464–1471.

S.N.U.A. Kirmani, R.C. Gupta, On proportional odds model in survival analysis, Annals of the Institute of Statistical Mathematics 2001;53:203–216.

J. Kupka, S. Loo, The hazard and vitality measures of ageing, Journal of Applied Probability 1989;26:532–542.

M.S. Li, The equilibrium distribution of counting random variables, Open Journal of Discrete Mathematics 2011;1:127–135.

W. Mendenhall, R.J. Hader, Estimation of parameter of a mixed exponential failure time distribution from censored life test data, Biometrika 1958;45:504–508.

F. Mosteller, J.W. Tukey, Data Analysis and Regression: A Second Course in Statistics. Reading, MA: Addison-Wesley; 1977.

N.U. Nair, On a distribution of first conception delays in the presence of adolescent sterility, Demography India 1983;12:269–275.

N.U. Nair, G. Asha, Characterization using failure and reversed failure rates, Journal of Indian Society of Probability and Statistics 2004;8:45–56.

N.U. Nair, K.G. Geetha, P. Priya, Modeling lifelength data using mixture distributions, Journal of the Japan Statistical Society 1999;29:65–73.

N.U. Nair, K.G. Geetha, P. Priya, On partial moments of discrete distributions and their applications, Far East Journal of Theoretical Statistics 2000;4:153–164.

N.U. Nair, N. Hitha, Characterization of discrete models by distribution based on their partial sums, Statistics and Probability Letters 1989;8:335–337.

N.U. Nair, P.G. Sankaran, Characterizations of discrete distributions using reliability concepts in reversed time, Statistics and Probability Letters 2013;83:1939–1945.

N.U. Nair, P.G. Sankaran, Odds function and odds rates for discrete lifetime distributions, Communications in Statistics, Theory and Methods 2015;44:4185–4202.

N.U. Nair, P.G. Sankaran, M. Preeth, Reliability aspects of discrete equilibrium distributions, Communications in Statistics, Theory and Methods 2012;41:500–515.

N.U. Nair, K.K. Sudheesh, Some results on lower variance bounds useful in reliability modelling and estimation, Annals of the Institute of Statistical Mathematics 2008;60:591–603.

J. Navarro, N. Balakrishnan, F.R. Samaniego, Mixture representations of residual lifetimes of used systems, Journal of Applied Probability 2008;45:1097–1112.

J. Navarro, P.J. Hernandez, How to obtain bathtub shaped failure rate models from normal mixtures, Probability in the Engineering and Informational Sciences 2004;18:511–531.

J. Navarro, T. Rychlik, Reliability and expectation bounds for coherent systems with exchangeable components, Journal of Multivariate Analysis 2007;98:102–113.

J. Navarro, F.J. Samaniego, N. Balakrishnan, Mixture representations for the joint distribution of lifetimes of two coherent systems with shared components, Advances in Applied Probability 2013;45:1011–1027.

J. Navarro, M. Shaked, Hazard rates of order statistics and systems, Journal of Applied Probability 2006;43:391–408.

W. Nelson, Applied Life Data Analysis. New York: John Wiley & Sons; 1982.

A. Parvardeh, N. Balakrishnan, Conditional residual lifetimes of coherent systems, Statistics and Probability Letters 2013;83:2664–2672.

A. Parvardeh, N. Balakrishnan, On the conditional residual life and inactivity time of coherent systems, Journal of Applied Probability 2014;51:990–998.

P. Priya, P.G. Sankaran, N.U. Nair, Factorial partial moments and their properties, Journal of the Indian Statistical Association 2000;38:45–53.

J. Riordan, Combinatorial Identities. New York: John Wiley & Sons; 1968.

J.M. Ruiz, J. Navarro, Characterization of discrete distributions using expected values, Statistical Papers 1995;41:423–435.

M. Shaked, J.G. Shanthikumar, J.B. Valdez-Torres, Discrete hazard rate functions, Computers and Operations Research 1995;22:391–402.

J.C. Su, W.J. Huang, Characterizations based on conditional expectations, Statistical Papers 2000;41:423–435.

K.K. Sudheesh, N.U. Nair, Characterization of discrete distribution by conditional variance, Metron 2010;LXVIII:77–85.

Y. Wang, A.M. Hossain, W.J. Zimmer, Monotone log odds rate distributions in reliability analysis, Communications in Statistics, Theory and Methods 2003;32(11):2227–2244.

Y. Wang, A.M. Hossain, W.J. Zimmer, Useful properties of the three parameter Burr XII distribution, M. Ahsanullah, ed. Applied Statistics Research Progress. Hauppauge, NY: Nova Science Publishers; 2008:11–20.

E. Xekalaki, Hazard functions and life distributions in discrete time, Communications in Statistics, Theory and Methods 1983;12:2503–2509.

M. Xie, O. Gaudoin, C. Bracquemond, Redefining failure rate functions for discrete distributions, International Journal for Reliability, Quality and Safety Engineering 2002;9:275–285.

Z. Zhang, Mixture representations of inactivity times of conditional coherent systems and their applications, Journal of Applied Probability 2010;47:876–885.

W.J. Zimmer, Y. Wang, P.K. Pathak, Log-odds rate and monotone log-odds rate distributions, Journal of Quality Technology 1998;30:376–385.
