At equilibrium, that is, under the invariant law, the states are uniform on {0,1}^N, and hence the coordinates are i.i.d. uniform on {0,1}. The strong law of large numbers and the central limit theorem then yield that
Hence, as N goes to infinity, the instantaneous proportion of molecules in each compartment converges to 1/2 with fluctuations of order 1/√N. For instance,
and as a numerical illustration, as , the choice and yields that and hence
For an arbitrary initial law, for and ,
and the solution of this affine recursion, with fixed point , is given by
Then, at geometric rate,
The rate seems poor, but the time unit should actually be of order , and
Explicit variance computations can also be done, which show that converges in probability to , but in order to go further some tools must be introduced.
The Markov chain on (microscopic representation) can be obtained by taking a sequence of i.i.d. r.v. which are uniform on the vectors of the canonical basis, independent of , and setting
This is a symmetric nearest-neighbor random walk on the additive group , and we are going to exploit this structure, according to a technique adaptable to other random walks on groups.
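This flip dynamics is easy to simulate. The sketch below (all names and parameters are illustrative, not from the text) flips a uniformly chosen coordinate at each step and tracks the number of molecules in one compartment:

```python
import random

def ehrenfest_micro(N, steps, seed=0):
    """Microscopic chain on {0,1}^N: at each step a uniformly chosen
    coordinate is flipped (addition of a canonical basis vector mod 2)."""
    rng = random.Random(seed)
    x = [0] * N  # all molecules start in compartment 0
    counts = []
    for _ in range(steps):
        i = rng.randrange(N)   # uniform canonical basis vector
        x[i] ^= 1              # addition modulo 2
        counts.append(sum(x))  # molecules in compartment 1
    return counts

counts = ehrenfest_micro(N=100, steps=20000)
# after a long run, the proportion fluctuates around 1/2
avg = sum(counts[10000:]) / (10000 * 100)
print(avg)
```

The long-run average proportion stays close to 1/2, with fluctuations of the order predicted by the central limit theorem.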
For and in and for vectors and , the canonical scalar products will be respectively denoted by
Let us associate to each r.v. on its characteristic function, which is the (discrete) Fourier transform of its law given by, with the notation ,
For in and any such that ,
hence is an orthogonal basis of vectors, each with squared norm . This basis could easily be transformed into an orthonormal basis.
From this follows the inversion formula
For , setting
it holds that
and thus is an eigenvector for the eigenvalue for the transition matrix. There are distinct eigenvalues
of which the eigenspace of dimension is generated by the such that has exactly terms taking the value .
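On small examples this spectral structure can be checked by brute force. The sketch below (hypothetical names) verifies that each character w_y(x) = (−1)^{x·y} is an eigenvector of the walk's transition matrix with eigenvalue 1 − 2k/N, where k is the number of coordinates of y equal to 1:

```python
from itertools import product

def check_characters(N):
    """Verify that w_y(x) = (-1)^{x.y} satisfies (P w_y)(x) = (1 - 2k/N) w_y(x)
    for the walk flipping a uniform coordinate, where k = sum(y)."""
    states = list(product([0, 1], repeat=N))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    for y in states:
        lam = 1 - 2 * sum(y) / N
        for x in states:
            # (P w_y)(x) = (1/N) * sum over the N one-coordinate flips of x
            val = 0.0
            for i in range(N):
                xi = list(x)
                xi[i] ^= 1
                val += (-1) ** dot(xi, y)
            if abs(val / N - lam * (-1) ** dot(x, y)) > 1e-12:
                return False
    return True

print(check_characters(4))  # True
```

Counting the y with exactly k ones also recovers the binomial multiplicity of each eigenvalue.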
This yields the spectral decomposition of in an orthogonal basis and
This yields that is for (constant vectors) and that
For , let be such that , and for , and be the corresponding laws. Then, and thus and
that is, and are the uniform laws respectively on
Let
The law is the mixture of and, which respects the probability that be in and in , and the mixture that interchanges these. If is of law , then is of law and thus
and, as and , for the Hilbert norm associated with it holds that
In particular, the law of converges exponentially fast to and the law of to .
The behavior we have witnessed is related to the notion of periodicity: is even or odd the same as , and has the opposite parity to . This is clearly related to the fact that −1 is an eigenvalue.
A component of a system (electronic component, machine, etc.) lasts a random life span before failure. It is inspected at regular intervals (at times in ) and replaced appropriately. The components that are used for this purpose behave in an i.i.d. manner.
At time , a first component is installed, and the th component is assumed to have a random life span before replacement given by , where the are i.i.d. on . Let denote an r.v. with same law as , representing a generic life span. It is often assumed that .
Let denote the age of the component in service at time , with if it is replaced at that time. Setting
it holds that on (Figure 1.5).
The are defined for all as . If , then all are finite, a.s., else if , then there exists an a.s. finite r.v. such that and .
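As an illustration, the age process can be simulated directly from the i.i.d. life spans (a sketch; the names and the lifespan law below are hypothetical):

```python
import random

def age_process(lifespan_sampler, n_steps, seed=0):
    """Ages of the component in service: the age increases by 1 at each
    time step and drops to 0 when a replacement occurs."""
    rng = random.Random(seed)
    ages, age = [], 0
    life = lifespan_sampler(rng)
    for _ in range(n_steps + 1):
        ages.append(age)
        age += 1
        if age == life:  # the component fails and is replaced
            age = 0
            life = lifespan_sampler(rng)
    return ages

# hypothetical life spans, uniform on {1, 2, 3}
ages = age_process(lambda rng: rng.randint(1, 3), 30)
print(ages)
```

Each step either increments the age by 1 or resets it to 0, which is exactly the transition structure of the age chain.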
This natural representation in terms of the life spans is not a random recursion of the kind discussed in Theorem 1.2.3. We will give a direct proof that is a Markov chain and give its transition matrix.
Note that except if and is in for . These are the only cases to be considered and then
where is the number of in and , , , … , are their ranks (Figure 1.5). This can be written as
The independence of and yields that
Moreover,
is obtained similarly, or by complement to .
Hence, the only thing that matters is the age of the component in service, and is a Markov chain on with matrix and graph given by
This Markov chain is irreducible if and only if for every and .
Thus, from a mathematical perspective, we can start with a sequence with values in and assume that a component with age at an arbitrary time has probability of passing the inspection at time , and otherwise a probability of being replaced then. In this setup, the law of is determined by
This formulation is not as natural as the preceding one. It corresponds to a random recursion given by a sequence of i.i.d. uniform r.v. on , independent of , and
The renewal process is often introduced in this manner, in order to avoid the previous computations. It is an interesting example, as we will discuss later.
An invariant measure satisfies
thus that yields uniqueness, and existence holds if and only if
This unique invariant measure can be normalized, in order to yield an invariant law, if and only if it is finite, that is, if
and then
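As a numerical check (the lifespan law below is hypothetical), one can build the transition matrix of the age chain from a lifespan law on {1, ..., m} and verify that the tail probabilities P(R > k) form an invariant measure:

```python
def age_chain_matrix(law):
    """law[k] = P(R = k+1) for a lifespan R on {1, ..., m}.
    Returns the age-chain matrix and the tails P(R > k)."""
    m = len(law)
    P = [[0.0] * m for _ in range(m)]
    tail = [sum(law[k:]) for k in range(m)]  # tail[k] = P(R > k)
    for k in range(m):
        q = law[k] / tail[k]        # replacement probability at age k
        P[k][0] = q
        if k + 1 < m:
            P[k][k + 1] = 1 - q
    return P, tail

law = [0.2, 0.5, 0.3]            # hypothetical lifespan law on {1, 2, 3}
P, mu = age_chain_matrix(law)    # mu[k] = P(R > k)
muP = [sum(mu[i] * P[i][j] for i in range(len(mu))) for j in range(len(mu))]
print(muP, mu)
```

The computed mu P coincides with mu, confirming invariance of the tail measure for this example.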
A class of renewal processes provides one of the rare natural examples of Markov chains on an infinite state space satisfying the Doeblin condition.
A source emits an infinite i.i.d. sequence of “characters” of some “alphabet.” We are interested in the successive appearances of a certain “word” in the sequence.
For instance, the characters could be and in a computer system, “red” or “black” in a roulette game, A, C, G, T in a DNA strand, or ASCII characters for a typewriting monkey. Corresponding words could be , red-red-red-black, GAG, and Abracadabra.
Some natural questions are the following:
A general method will be described on a particular instance, the search for the occurrences of the word GAG.
Two different kinds of occurrences can be considered, without or with overlaps; for instance, GAGAG contains one single occurrence of GAG without overlaps but two with. The case without overlaps is more difficult, and more useful in applications; it will be considered here, but the method can be readily adapted to the other case.
We start by defining a counting automaton with four states ∅, G, GA, and GAG, which will be able to count the occurrences of GAG in any arbitrary finite character chain. The automaton starts in state ∅ and then examines the chain sequentially term by term, and:
- in state ∅, it moves to G if the term is G, and stays in ∅ otherwise;
- in state G, it moves to GA if the term is A, stays in G if the term is G, and moves to ∅ otherwise;
- in state GA, it moves to GAG if the term is G, and to ∅ otherwise, and each arrival in GAG counts one occurrence;
- in state GAG, it behaves as in state ∅ (occurrences without overlaps).
Such an automaton can be represented by a graph that is similar to a Markov chain graph, with nodes given by its possible states and oriented edges between nodes marked by the logical condition for this transition (Figure 1.6).
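A direct implementation of this automaton (a sketch; the function name is hypothetical) counts the non-overlapping occurrences of GAG in a finite chain:

```python
def count_gag(chain):
    """Counting automaton for non-overlapping occurrences of 'GAG'.
    States: '' (empty), 'G', 'GA', 'GAG'; after a complete occurrence
    the automaton restarts as from '' (no overlaps)."""
    state, count = '', 0
    for c in chain:
        if state in ('', 'GAG'):
            state = 'G' if c == 'G' else ''
        elif state == 'G':
            state = 'GA' if c == 'A' else ('G' if c == 'G' else '')
        else:  # state == 'GA'
            state = 'GAG' if c == 'G' else ''
        if state == 'GAG':
            count += 1
    return count

print(count_gag("GAGAG"))  # 1: a single occurrence without overlaps
```

Note that GAGAG yields a single count, as announced for the case without overlaps.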
This automaton is now used on a sequence of characters given by an i.i.d. sequence such that
satisfying , , and .
Let , and be the state of the automaton after having examined the th character. Theorem 1.2.3 yields that is a Markov chain with graph, given in Figure 1.6, obtained from the automaton graph by replacing the logical conditions by their probabilities of being satisfied.
All relevant information can be written in terms of . For instance, if and denotes the time of the th occurrence (complete, without overlaps) of the word for , and the number of such occurrences taking place before , then
The transition matrix is irreducible and has for unique invariant law
In order to search for the occurrences with overlaps, it would suffice to modify the automaton by considering the overlaps inside the word. For the word GAG, we need only modify the transitions from state GAG: if the next term is G, then the automaton should take state G, and if the next term is A, then it should take state GA, else it should take state . For more general overlaps, this can become very involved.
We describe another method for the search for the occurrences with overlaps of a word of length in an i.i.d. sequence of characters from some alphabet .
Setting , then
is the time of the th occurrence of the word (with ), and
is the number of such occurrences before time . In general, is not i.i.d., but it will be seen to be a Markov chain.
More generally, let be a Markov chain on of arbitrary matrix and for . Then, is a Markov chain on with matrix with only nonzero terms given by
called the snake chain of length for . The proof is straightforward if the conditional formulation is avoided.
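The snake chain is simply the sequence of sliding windows of fixed length. A sketch (hypothetical names; the alphabet {G, A, C} is chosen for illustration):

```python
import random

def snake_chain(xs, L):
    """Sliding windows of length L of a sequence:
    Y_n = (X_n, X_{n+1}, ..., X_{n+L-1})."""
    return [tuple(xs[n:n + L]) for n in range(len(xs) - L + 1)]

# hypothetical i.i.d. characters; windows of length 3 detect 'GAG'
rng = random.Random(1)
xs = [rng.choice("GAC") for _ in range(10000)]
windows = snake_chain(xs, 3)
occurrences = sum(w == ('G', 'A', 'G') for w in windows)
freq = occurrences / len(windows)
print(freq)
```

For uniform i.i.d. characters on three letters, the empirical frequency of the window GAG is close to (1/3)^3, the invariant product-law probability of that state.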
If is irreducible, then is irreducible on its natural state space
If is an invariant measure for , then given by
is immediately seen to be an invariant measure for . If further is a law, then is also a law.
In the i.i.d. case where , the only invariant law for is given by , and the only invariant law for by the product law .
Let and be two transition matrices on and , and the matrices on have generic term
Then, is a transition matrix on , as in the sense of product laws,
see below for more details (Figure 1.7).
The Markov chain with matrix is called the product chain. Its transitions are obtained by independent transitions of each coordinate according, respectively, to and . In particular, and are two Markov chains of matrices and , which conditional on are independent, and
Immediate computations yield that if is an invariant measure for and for , then the product measure given by
is invariant for . Moreover,
and thus if and are laws then is a law.
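A sketch of the product construction (the matrices below are hypothetical):

```python
def product_matrix(P, Q):
    """Product chain matrix on pairs (x, y):
    R[(x, y)][(x', y')] = P[x][x'] * Q[y][y']."""
    nP, nQ = len(P), len(Q)
    return {
        (x, y): {(xp, yp): P[x][xp] * Q[y][yp]
                 for xp in range(nP) for yp in range(nQ)}
        for x in range(nP) for y in range(nQ)
    }

P = [[0.5, 0.5], [0.2, 0.8]]   # hypothetical two-state matrices
Q = [[0.9, 0.1], [0.3, 0.7]]
R = product_matrix(P, Q)
# each row of R sums to 1, so R is again a transition matrix
print(all(abs(sum(row.values()) - 1.0) < 1e-12 for row in R.values()))
```

Each coordinate makes an independent transition, which is exactly the factorization of the rows stated above.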
The matrix on is irreducible and has unique invariant law the uniform law, whereas a Markov chain with matrix alternates either between and or between and , depending on the initial state and is not irreducible on . The laws
are invariant for and generate the space of invariant measures.
All this can be readily generalized to an arbitrary number of transition matrices.
1.1 The space station, 1 An aimless astronaut wanders within a space station, schematically represented as follows:
The space station spins around its center in order to create artificial gravity in its periphery. When the astronaut is in one of the peripheral modules, the probability that he next goes to each of the two adjacent peripheral modules is twice the probability that he goes to the central module. When the astronaut is in the central module, the probability that he next goes to each of the six peripheral modules is the same.
Represent this evolution by a Markov chain and give its matrix and graph. Prove that this Markov chain is irreducible and give its invariant law.
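A numerical sketch of one natural reading of this exercise: assuming six peripheral modules in a ring plus one central module, the statement forces the transition probabilities 2/5, 2/5, 1/5 from a peripheral module and 1/6 to each peripheral module from the center (all code names are hypothetical):

```python
# states 0..5: peripheral modules in a ring; state 6: central module
def station_matrix():
    P = [[0.0] * 7 for _ in range(7)]
    for i in range(6):
        P[i][(i + 1) % 6] = 2 / 5   # adjacent peripheral modules
        P[i][(i - 1) % 6] = 2 / 5
        P[i][6] = 1 / 5             # central module
        P[6][i] = 1 / 6
    return P

def invariant_by_iteration(P, steps=2000):
    """Approximate the invariant law by iterating mu -> mu P."""
    mu = [1 / len(P)] * len(P)
    for _ in range(steps):
        mu = [sum(mu[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return mu

mu = invariant_by_iteration(station_matrix())
print([round(v, 4) for v in mu])
```

Under these assumptions the iteration converges to 5/36 for each peripheral module and 1/6 for the central one, which can be checked by hand from the balance equations.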
1.2 The mouse, 1 A mouse evolves in an apartment, schematically represented as follows:
The mouse chooses uniformly at random an opening of the room it is in and goes through it into a new room. It has a short memory and immediately forgets where it came from.
Represent this evolution by a Markov chain and give its matrix and graph. Prove that this Markov chain is irreducible and give its invariant law.
1.3 Doubly stochastic matrices Let be a doubly stochastic matrix on a state space : by definition,
1.4 The Labouchère system, 1 In a game where the possible gain is equal to the wager, the probability of gain of the player at each draw typically satisfies and even , but is usually close to , as when betting on red or black at roulette. In this framework, the Labouchère system is a strategy meant to provide a secure way of winning a sum determined in advance.
The sum is decomposed arbitrarily as a sum of positive terms, which are put in a list. The strategy then transforms recursively this list, until it is empty.
At each draw, if then the sum of the first and last terms of the list is wagered, and if then the single term is wagered. If the gambler wins, he or she removes from the list the terms involved in the wager. If the gambler loses, he or she retains these terms and adds at the end of the list a term worth the sum just wagered. The game stops when , and hence the sum has been won.
(Martingale theory proves that in realistic situations, for instance, if wagers are bounded or credit is limited, then with a probability close to the sum is indeed won, but with a small probability a huge loss occurs, large enough to prevent the gambler from continuing the game, and often from ever gambling again.)
of words of the form . Describe its transition matrix and its graph. Prove that if reaches (the empty word), then the gambler wins the sum .
1.5 Three-card Monte Three playing cards are lined face down on a cardboard box at time . At times , the middle card is exchanged with probability with the card on the right and with probability with the one on the left.
1.6 Andy, 1 If Andy is drunk one evening, then he has one chance in ten of ending up in jail, in which case he will remain sober the following evening. If Andy is drunk one evening and does not end up in jail, then he has one chance in two of being drunk the following evening. If Andy stays sober one evening, then he has three chances out of four of remaining sober the following evening.
It is assumed that constitutes a Markov chain, where if Andy is drunk and ends up in jail on the th evening, if he is then drunk and does not end up in jail, and if he remains sober.
Give the transition matrix and the graph for . Prove that is irreducible and compute its invariant law. Compute in terms of . What is the behavior of when goes to infinity?
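Under one consistent reading of the rules (a drunk evening lands in jail with probability 1/10, jail is always followed by a sober evening, and the states are numbered 0, 1, 2 here; whether this matches the intended convention is an assumption), the chain can be sketched and its invariant law approximated numerically:

```python
# states: 0 = drunk and jailed, 1 = drunk and free, 2 = sober
P = [
    [0.0,   0.0,   1.0 ],  # jailed -> sober the next evening
    [0.05,  0.45,  0.5 ],  # drunk & free: drunk again w.p. 1/2, of which 1/10 jailed
    [0.025, 0.225, 0.75],  # sober: drunk w.p. 1/4, of which 1/10 jailed
]

def invariant(P, steps=5000):
    """Approximate the invariant law by iterating mu -> mu P."""
    mu = [1 / len(P)] * len(P)
    for _ in range(steps):
        mu = [sum(mu[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return mu

mu = invariant(P)
print([round(v, 4) for v in mu])
```

Under these assumptions the invariant law works out to (1/32, 9/32, 11/16), which can be verified by solving the balance equations by hand.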
1.7 Squash Let us recall the original scoring system for squash, known as English scoring. If the server wins a rally, then he or she scores a point and retains service. If the returner wins a rally, then he or she becomes the next server but no point is scored. In a game, the first player to score points wins, except if the score reaches -, in which case the returner must choose to continue in either or points, and the first player to reach that total wins.
A statistical study of the games between two players indicates that the rallies are won by Player A at service with probability and by Player B at service with probability , each in i.i.d. manner.
The situation in which Player A has points, Player B has points, and Player L is at service is denoted by in .
1.8 Genetic models, 1 Among the individuals of a species, a certain gene can appear under different forms called alleles.
In a microscopic (individual centered) model for a population of individuals, these are arbitrarily numbered, and the fact that individual carries allele is coded by the state
A macroscopic representation only retains the numbers of individuals carrying each allele, and the state space is
We study two simplified models for the reproduction of the species, in which the population size is fixed, and where the selective advantage of every allele w.r.t. the others is quantified by a real number .
1.9 Records Let be i.i.d. r.v. such that and , and be the greatest number of consecutive observed in .
Prove that is a Markov chain and give its transition matrix . Prove that there exists a unique invariant law and compute it.
Prove that is a Markov chain on and give its transition matrix .
1.10 Incompressible mixture, 1 There are two urns, white balls, and black balls. Initially balls are set in each urn. In i.i.d. manner, a ball is chosen uniformly in each urn and the two are interchanged. The white balls are numbered from to and the black balls from to . We denote by the r.v. with values in
given by the set of the numbers in the first urn just after time and by
the corresponding number of white balls.
1.11 Branching with immigration The individuals of a generation disappear at the following one, each leaving descendants with probability , and in addition, immigrants appear with probability , where . Let
be the generating functions for the reproduction and the immigration laws.
Similarly to Section 1.4.3, using with values in and and for and such that and for in , all these r.v. being independent, let us represent the number of individuals in generation by
Let be the generating function of .
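A simulation sketch of this dynamics (all names are hypothetical; the reproduction and immigration laws below are illustrative):

```python
import random

def branching_with_immigration(p_repro, p_immig, n_gen, x0=1, seed=0):
    """One trajectory of a branching chain with immigration: each
    individual leaves an i.i.d. number of descendants drawn from
    p_repro, and an independent i.i.d. number of immigrants drawn
    from p_immig is added at each generation."""
    rng = random.Random(seed)

    def draw(law):  # law[k] = P(value = k)
        u, acc = rng.random(), 0.0
        for k, pk in enumerate(law):
            acc += pk
            if u < acc:
                return k
        return len(law) - 1

    x, traj = x0, [x0]
    for _ in range(n_gen):
        x = sum(draw(p_repro) for _ in range(x)) + draw(p_immig)
        traj.append(x)
    return traj

# hypothetical subcritical reproduction (mean 1/2) with 0 or 1 immigrant
traj = branching_with_immigration([0.5, 0.5], [0.5, 0.5], 50)
print(traj)
```

With a subcritical reproduction law, immigration keeps the population alive but bounded in distribution.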
1.12 Single server queue Let be i.i.d. r.v. with values in , with generating function and expectation , and let be an independent r.v. with values in . Let
and that .
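A common form of the single-server queue recursion is X_{n+1} = max(X_n − 1, 0) + A_{n+1}, with one service per time slot and i.i.d. arrivals; whether this matches the text's recursion is an assumption. A simulation sketch (hypothetical names):

```python
import random

def queue_trajectory(arrival_law, n_steps, x0=0, seed=0):
    """Queue length under X_{n+1} = max(X_n - 1, 0) + A_{n+1},
    with arrival_law[k] = P(A = k)."""
    rng = random.Random(seed)

    def draw():
        u, acc = rng.random(), 0.0
        for k, pk in enumerate(arrival_law):
            acc += pk
            if u < acc:
                return k
        return len(arrival_law) - 1

    x, traj = x0, [x0]
    for _ in range(n_steps):
        x = max(x - 1, 0) + draw()
        traj.append(x)
    return traj

# arrivals of mean 1/2 < 1 per slot: the queue remains stable
traj = queue_trajectory([0.5, 0.5], 10000)
print(max(traj))
```

When the mean number of arrivals per slot is below 1, the trajectory keeps returning to 0, in line with the stability condition involving the expectation of the arrivals.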
1.13 Dobrushin mixing coefficient Let be a transition matrix on , and
One may use that .
Prove that
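The coefficient can be computed directly from the matrix. The sketch below uses one standard convention, half the maximal total variation distance between two rows; whether it matches the text's definition exactly is an assumption:

```python
def dobrushin(P):
    """Dobrushin coefficient (one standard convention):
    (1/2) * max over pairs of rows of the L1 distance between rows."""
    n = len(P)
    return max(
        0.5 * sum(abs(P[x][z] - P[y][z]) for z in range(n))
        for x in range(n) for y in range(n)
    )

P = [[0.5, 0.5], [0.2, 0.8]]    # hypothetical matrix
print(round(dobrushin(P), 10))  # 0.3
I = [[1.0, 0.0], [0.0, 1.0]]
print(dobrushin(I))             # 1.0 for the identity: no mixing
```

The coefficient is 0 when all rows coincide (perfect mixing in one step) and 1 when two rows have disjoint supports, as for the identity matrix.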