Theorem 2.2.5 yields that for every the functions and and satisfy
with boundary conditions and , and , and , respectively. The characteristic polynomial for this second-order linear recursion is
We avoid treating separately the cases and by considering first the case , for which . Then, there are two distinct roots
the general solution is of the form , and hence
Using and
the expressions for and and can be easily recovered, which is left as an exercise.
A Taylor expansion yields the joint law of and of the result of the game (ruin , win if ), and hence the law of . For ,
which yields expressions for and as rational functions of explicit polynomials, from which (Feller, W. (1968), XIV.5) derives the formula
Expectations, variances, and higher moments of and and can be derived by appropriate Taylor expansions at the point for and and . Computations are tedious by hand but can be performed quickly by appropriate software.
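As an illustration, the closed-form ruin probability can be checked against a direct numerical solution of the one-step equations. The following sketch uses assumed notation, which may differ from the conventions above: win probability p per toss, fortunes 0, ..., N with absorption at both ends, initial fortune a.

```python
# Numerical check of the classical gambler's ruin formula (assumed
# notation: win probability p != 1/2, q = 1 - p, fortunes 0..N, start a).
# The ruin probability u(x) solves u(x) = p*u(x+1) + q*u(x-1) for
# 0 < x < N, with boundary conditions u(0) = 1 and u(N) = 0.

def ruin_prob_iter(p, N, sweeps=20000):
    """Solve the linear one-step system by in-place fixed-point sweeps."""
    q = 1.0 - p
    u = [0.0] * (N + 1)
    u[0] = 1.0                      # ruin is immediate at fortune 0
    for _ in range(sweeps):
        for x in range(1, N):
            u[x] = p * u[x + 1] + q * u[x - 1]
    return u

def ruin_prob_exact(p, N, a):
    """Closed form u(a) = (r^a - r^N) / (1 - r^N) with r = q/p."""
    r = (1.0 - p) / p
    return (r**a - r**N) / (1.0 - r**N)

p, N, a = 0.45, 10, 3
u = ruin_prob_iter(p, N)
print(u[a], ruin_prob_exact(p, N, a))
```

The iteration converges to the unique bounded solution of the system, which agrees with the closed form to high precision.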
Consider a compulsive Gambler A, who gambles as long as he is not ruined, starting from an initial fortune . This corresponds to taking and above. Now, the state space for is , and
The game duration is , and we set and . Proposition 2.2.4 no longer applies.
By monotone convergence (Theorem A.3.2),
A global expression for this is
We seek the least nonnegative solutions of the equations, satisfying and , notably by considering the behavior at infinity.
Note that if then the trivial infinite solution of the equations in Theorem 2.2.6 must be accepted, as no other solution is nonnegative. The same would have happened had we tried to compute in this way for .
If then for a sequence of i.i.d. r.v. such that . Assuming ,
and hence , despite the fact that yields that , a.s. By dominated convergence (Theorem A.3.5), this implies that .
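The phenomenon described here, almost sure ruin together with an infinite expected duration, can be computed exactly, without simulation, by propagating the law of the walk. The sketch below assumes the symmetric case (each step equals plus or minus one with probability 1/2) with initial fortune 1 and absorption at 0; the survival probability decays like n^(-1/2), so its series diverges and the expected absorption time is infinite.

```python
# Exact (non-random) computation for the symmetric walk on {0, 1, 2, ...}
# started at 1 and absorbed at 0.  Absorption is certain, yet P(T > n)
# decays only like n^{-1/2}, so E[T] = sum_n P(T > n) = infinity.

def survival_probs(n_steps):
    M = n_steps + 2                   # the walk cannot exceed this level
    dist = [0.0] * (M + 1)
    dist[1] = 1.0                     # start at 1; state 0 is absorbing
    surv = []
    for _ in range(n_steps):
        new = [0.0] * (M + 1)
        new[0] = dist[0]              # absorbed mass stays put
        for x in range(1, M):
            new[x - 1] += 0.5 * dist[x]
            new[x + 1] += 0.5 * dist[x]
        dist = new
        surv.append(sum(dist[1:]))    # P(T > current step)
    return surv

s = survival_probs(2000)
print(s[0], s[2], s[-1])   # P(T>1) = 1/2, P(T>3) = 3/8, P(T>2000)
```

The first values P(T > 1) = 1/2 and P(T > 3) = 3/8 are exact, and the tail matches the classical asymptotic (2/(pi n))^(1/2).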
The situation seems more reasonable (but less realistic) from the perspective of Gambler B, who ardently desires to gain a sum and has infinite credit and a compliant adversary. Depending on his probability of winning at each toss, his eventual win probability and expected time to win are
Let . Theorem 2.2.5 yields that for the equation satisfied by is the extension of (2.3.8) for with boundary condition .
If then and the general solution is given after (2.3.9). The minimality result in Theorem 2.2.5 and the fact that
yield that
As and , it holds that
Classic Taylor expansions yield that, at a precision of order ,
and, using ,
Using and by identification, see (A.1.1), this yields that
The classic Taylor expansion
using the generalized binomial coefficient defined by
yields that
By identification, for , it holds that and
The fact that
yields that the law of given that is the law of the sum of independent r.v. with the same law as given that and implies that
It is actually a consequence of the strong Markov property: in order for Gambler A to lose units, he must first lose unit, and then he starts afresh, independently of the past, from a fortune of units; a simple recursion and the spatial homogeneity of the random walk complete the argument. A precise formulation is left as an exercise.
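The expansion of the first-passage law can be made concrete. In assumed notation (step +1 with probability p, step -1 with probability q = 1 - p, T the first passage time to level +1), the classical result states P(T = 2n - 1) = C_{n-1} p^n q^{n-1}, where C_k = binom(2k, k)/(k + 1) are the Catalan numbers; the sketch below verifies this against a brute-force enumeration of paths.

```python
# First-passage law of a +-1 random walk to level +1, checked against
# exhaustive path enumeration (assumed notation: up-probability p).

from itertools import product
from math import comb

def first_passage_formula(n, p):
    """P(T = 2n - 1) = Cat(n-1) * p^n * q^(n-1), Cat(k) = C(2k,k)/(k+1)."""
    catalan = comb(2 * (n - 1), n - 1) // n
    return catalan * p**n * (1 - p) ** (n - 1)

def first_passage_brute(n, p):
    """Sum the probabilities of all paths first hitting +1 at time 2n-1."""
    total, length = 0.0, 2 * n - 1
    for steps in product((1, -1), repeat=length):
        pos, hit_before = 0, False
        for step in steps[:-1]:
            pos += step
            if pos == 1:              # reached +1 too early
                hit_before = True
                break
        if not hit_before and pos + steps[-1] == 1:
            total += p ** steps.count(1) * (1 - p) ** steps.count(-1)
    return total

for n in (1, 2, 3, 4, 5):
    print(n, first_passage_formula(n, 0.4), first_passage_brute(n, 0.4))
```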
Extensions of such results to multidimensional random walks are delicate, as the linear equations are much more involved and seldom have explicit solutions. Some guesswork easily allows us to find a solution in the following example.
Let be the symmetric nearest-neighbor random walk on and be in for . Consider and , with
Theorem 2.2.6 yields that the function satisfies the affine equation with boundary condition
and, considering gambler's ruin, we obtain that
Notably, if and for then is quadratic in and linear in the dimension .
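The guesswork behind such quadratic solutions can be illustrated directly: for the symmetric nearest-neighbor walk on Z^d, one step applied to f(x) = |x|^2 raises its mean by exactly 1 in every dimension, so functions of the form C - |x|^2 solve the affine equation h = 1 + Ph away from the boundary. A minimal check, in d = 3 and with assumed notation:

```python
# Check that (P f)(x) = f(x) + 1 for f(x) = |x|^2 under the symmetric
# nearest-neighbor walk on Z^d: each of the 2d moves changes exactly one
# coordinate by +-1, and the average of (x_i + 1)^2 and (x_i - 1)^2 is
# x_i^2 + 1.

def Pf(x):
    """One-step average of |.|^2 for the symmetric NN walk on Z^d."""
    d = len(x)
    total = 0.0
    for i in range(d):
        for e in (1, -1):
            y = list(x)
            y[i] += e
            total += sum(c * c for c in y)
    return total / (2 * d)

for x in [(0, 0, 0), (1, 2, 3), (-4, 0, 7)]:
    f = sum(c * c for c in x)
    print(x, Pf(x), f + 1)   # the two values coincide
```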
See Section 1.4.3. We assume that
that is, that the mean number of offspring of an individual is finite, as well as and to avoid trivial situations. A quantity of interest is the extinction time
An essential fact is that if then can be obtained as the sum of the chains given by the descendants of each of the initial individuals, and that these chains are independent and have the same law as given that . In particular, we study the case when , the others following easily. This property can be generalized to appropriate subpopulations and is called the branching property.
Notably, if then for . This, and the “one step forward” method, yields that
which is the recursion in Section 1.4.3, obtained there by a different method.
By monotone limit (Lemma A.3.1),
Moreover, , and the recursion (2.3.11) yields that
This recursion can also be obtained directly by the “one step forward” method: the branching property yields that , and thus
Thus, solves the recursion started at , and this nondecreasing sequence converges to . As is continuous on , the limit is the least fixed point of on . Thus, the extinction probability is
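This characterization lends itself to computation: iterating the generating function from 0 converges monotonically to the least fixed point. A sketch with a hypothetical offspring law P(0 children) = 1/4, P(1) = 1/4, P(2) = 1/2, of mean 5/4 > 1, for which the fixed points of the generating function on [0, 1] are 1/2 and 1:

```python
# Extinction probability as the least fixed point of the offspring
# generating function phi, computed by iterating phi from 0.
# Hypothetical offspring law: P(0) = 1/4, P(1) = 1/4, P(2) = 1/2.

def phi(s):
    return 0.25 + 0.25 * s + 0.5 * s * s

q = 0.0
for _ in range(200):
    q = phi(q)                # nondecreasing, converges to the least
print(q)                      # fixed point on [0, 1], here 1/2 (not 1)
```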
The facts that and are continuous increasing on , and and and , allow us to place the graph of with respect to the diagonal.
See Figure 2.3.
Moreover, the strict convexity implies that
which yields some convergence rate results.
Indeed, then can be written as for , and by iteration and hence .
Indeed, and thus , which implies the result as .
Indeed, for , and it is a simple matter to prove that (Figure 2.3).
The case is called the critical case and is the most delicate to study. Assume that the number of offspring of a single individual has a variance or equivalently that or that . Then , and the population goes extinct, a.s., but has an infinite mean lifetime.
Indeed, implies for that
and as then with
and hence .
As , an order Taylor expansion yields that
and by identification and hence
This can also be directly obtained by the “one step forward” method:
A few results follow from this.
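A concrete critical example in which everything is explicit (hypothetical offspring law P(k children) = 2^(-(k+1)), with mean 1 and variance sigma^2 = 2): the generating function is phi(s) = 1/(2 - s), and its n-th iterate at 0 equals n/(n + 1) exactly, so P(T > n) = 1/(n + 1), in agreement with the asymptotic 2/(sigma^2 n). Extinction is certain, yet the series of P(T > n) diverges, so the mean lifetime is infinite.

```python
# Critical branching illustration: offspring law P(k) = 2^{-(k+1)}
# (mean 1, variance 2) has generating function phi(s) = 1/(2 - s),
# whose n-th iterate at 0 is n/(n+1), so P(T > n) = 1/(n+1).

def phi(s):
    return 1.0 / (2.0 - s)

q, n = 0.0, 1000
for _ in range(n):
    q = phi(q)                # q is now phi^n(0) = P(T <= n)
print(1.0 - q, 1.0 / (n + 1))   # P(T > n) vs the exact value 1/(n+1)
```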
We assume . Let
denote the variance of the number of offspring of an individual, and the variance of . As , using (A.1.1), at a precision of order ,
and by identification
Thus, and
Setting , we obtain that and
and hence
For , simple arguments yield that
and the covariance and correlation of and are given by
and more precisely
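These moment computations can be organized as a simple recursion. In assumed notation (offspring mean m, offspring variance sigma^2, one initial individual), the law of total variance gives E[Z_{n+1}] = m E[Z_n] and Var(Z_{n+1}) = m^2 Var(Z_n) + sigma^2 E[Z_n], whose solution for m not equal to 1 is Var(Z_n) = sigma^2 m^(n-1) (m^n - 1)/(m - 1). A sketch comparing the recursion with this closed form:

```python
# Mean and variance of Z_n via the one-step recursions
#   E[Z_{n+1}]   = m * E[Z_n]
#   Var(Z_{n+1}) = m^2 * Var(Z_n) + sigma2 * E[Z_n]
# compared with the closed form sigma2 * m^{n-1} (m^n - 1)/(m - 1).

def moments(m, sigma2, n):
    mean, var = 1.0, 0.0           # Z_0 = 1 is deterministic
    for _ in range(n):
        var = m * m * var + sigma2 * mean
        mean = m * mean
    return mean, var

# Hypothetical offspring law P(0)=1/4, P(1)=1/4, P(2)=1/2:
# m = 5/4 and sigma2 = 9/4 - (5/4)^2 = 0.6875.
m, sigma2, n = 1.25, 0.6875, 8
mean, var = moments(m, sigma2, n)
closed = sigma2 * m ** (n - 1) * (m ** n - 1) / (m - 1)
print(mean, var, closed)
```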
See Section 1.4.6. The quantity of interest is
The state space is finite and the chain irreducible, hence by Proposition 2.2.4. The expected value and the generating function can easily be derived by solving the equations obtained by the “one step forward” method, but we leave that as an exercise.
We prefer to describe a more direct method, which is specific to this situation. It explores some possibilities of evolution in the near future, with horizon given by the word length. The word GAG consists of three letters. As is in and the sequence is i.i.d., for , it holds that
and, considering the overlaps within the word GAG and
and hence
As and , it follows that
Moreover, Lemma A.1.3 yields that for , and (2.3.12) and yield that
so that eventually
As , this yields again. Moreover, at a precision of order ,
and (A.1.1) yields that
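The role of the overlaps within the word GAG can be checked on the prefix-matching automaton. The sketch below assumes, for illustration, an i.i.d. uniform sequence over the four-letter alphabet {A, C, G, T}; the self-overlap of GAG (the border G) then gives the classical value E[T] = 1/P(G) + 1/P(GAG) = 4 + 64 = 68.

```python
# Expected waiting time for the word GAG in an i.i.d. uniform sequence
# over {A, C, G, T}, computed on the prefix-matching automaton whose
# state s = length of the current match with GAG.  The expectations solve
# E(s) = 1 + (1/4) * sum over letters of E(next state), with E(3) = 0.

WORD = "GAG"
LETTERS = "ACGT"

def next_state(s, c):
    """Length of the longest suffix of WORD[:s] + c that prefixes WORD."""
    text = WORD[:s] + c
    for k in range(min(len(text), len(WORD)), -1, -1):
        if text.endswith(WORD[:k]):
            return k
    return 0

E = [0.0] * (len(WORD) + 1)        # E[3] = 0: the word has appeared
for _ in range(5000):              # value iteration until convergence
    for s in range(len(WORD) - 1, -1, -1):
        E[s] = 1.0 + sum(E[next_state(s, c)] for c in LETTERS) / 4.0
print(E[0])   # expected waiting time for GAG, namely 68
```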
2.1 Stopping times Let be a Markov chain on , and . Prove that the following random variables are stopping times.
2.2 Operations on stopping times Let and be two stopping times.
2.3 Induced chain, 1 Let be a Markov chain on with matrix , having no absorbing state. Let be defined by and and iteratively
Let denote the filtration generated by , and the one for : the events in are of the form , and those in of the form . Let the matrix and for , the geometric law on be defined by
Prove that is a Markov chain with matrix . Prove that
2.4 Doeblin coupling Let be a Markov chain on with matrix satisfying
and
Let be a transition matrix on such that there exists and a probability measure such that for all ; this is the Doeblin condition of Theorem 1.3.4 for .
and that .
by using the fact that there exist r.v. and with laws and such that (see Lemma A.2.2).
2.5 The space station, see Exercise 1.1 The astronaut starts from module .
2.6 The mouse, see Exercise 1.2 The mouse starts from room . Compute the probability that it reaches room before returning to room . Compute the generating function and expectation of the time it takes to reach room , conditional on the fact that it does so before returning to room .
2.7 Andy, see Exercise 1.6 Andy is just out of jail. Let be the number of consecutive evenings he spends out of jail, before his first return there. Compute for . Compute the law and expectation of .
2.8 Genetic models, see Exercise 1.8 Let , and be the number of individuals of allele at time . Consider the Dirichlet problem (2.2.5) for on for .
2.9 Nearest-neighbor walk, 1-d Let be the random walk on with matrix given by and . Let and for , and and .
and that if and only if .
Prove that if and if .
2.10 Labouchère system, see Exercise 1.4 Let be the random walk on with matrix given by and , and .
Prove that this recursion has as a solution, then that its general solution is given for by with and for by .
Deduce from this that, for , if then and if then .
Deduce from this that, for , if then and if then .