1.4.4.3 Some computations on particle distribution

At equilibrium, that is, under the invariant law, the c01-math-0972 are uniform on c01-math-0973 and hence, the c01-math-0974 for c01-math-0975 are i.i.d. uniform on c01-math-0976. The strong law of large numbers and the central limit theorem then yield that

equation

Hence, as c01-math-0978 goes to infinity, the instantaneous proportion of molecules in each compartment converges to c01-math-0979 with fluctuations of order c01-math-0980. For instance,

equation

and as a numerical illustration, as c01-math-0982, the choice c01-math-0983 and c01-math-0984 yields that c01-math-0985 and hence

equation
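As a numerical sketch of these asymptotics, one may place N particles i.i.d. uniformly among d compartments and inspect the proportion falling in a given compartment; the values N = 100 000 and d = 2 below are illustrative, not taken from the text.

```python
import random

def compartment_proportion(N, d, seed=0):
    """Place N particles i.i.d. uniformly among d compartments and
    return the proportion landing in compartment 0."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(N) if rng.randrange(d) == 0)
    return hits / N

# By the SLLN the proportion is close to 1/d; by the CLT the error
# is of order N ** -0.5 (about 0.0016 for N = 100_000 and d = 2).
p = compartment_proportion(N=100_000, d=2, seed=1)
assert abs(p - 0.5) < 0.01
```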

For an arbitrary initial law, for c01-math-0987 and c01-math-0988,

equation

and the solution of this affine recursion, with fixed point c01-math-0990, is given by

equation

Then, at geometric rate,

equation

The rate c01-math-0993 seems poor, but the time unit should actually be of order c01-math-0994, and

equation

Explicit variance computations can also be done, which show that c01-math-0996 converges in probability to c01-math-0997, but in order to go further some tools must be introduced.

1.4.4.4 Random walk, Fourier transform, and spectral decomposition

The Markov chain c01-math-0998 on c01-math-0999 (microscopic representation) can be obtained by taking a sequence c01-math-1000 of i.i.d. r.v. which are uniform on the vectors of the canonical basis, independent of c01-math-1001, and setting

equation

This is a symmetric nearest-neighbor random walk on the additive group c01-math-1003, and we are going to exploit this structure, according to a technique adaptable to other random walks on groups.
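This construction can be sketched as follows, assuming (as in the microscopic representation above) that the group is (Z/2Z)^N, so that each step adds a uniformly chosen canonical basis vector modulo 2, i.e. flips one coordinate of the hypercube.

```python
import random

def hypercube_walk(N, steps, seed=0):
    """Symmetric nearest-neighbour random walk on the group (Z/2Z)^N:
    X_{n+1} = X_n + U_{n+1} (mod 2), with U_{n+1} uniform on the
    canonical basis vectors (the coordinate-flip walk)."""
    rng = random.Random(seed)
    x = [0] * N
    path = [tuple(x)]
    for _ in range(steps):
        i = rng.randrange(N)   # choose a canonical basis vector e_i uniformly
        x[i] ^= 1              # add e_i modulo 2, i.e. flip coordinate i
        path.append(tuple(x))
    return path

path = hypercube_walk(N=5, steps=1_000, seed=2)
# every step changes exactly one coordinate: nearest neighbour on the hypercube
assert all(sum(a != b for a, b in zip(x, y)) == 1
           for x, y in zip(path, path[1:]))
```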

For c01-math-1004 and c01-math-1005 in c01-math-1006 and for vectors c01-math-1007 and c01-math-1008, the canonical scalar products will be respectively denoted by

equation

Let us associate to each r.v. c01-math-1010 on c01-math-1011 its characteristic function, which is the (discrete) Fourier transform of its law c01-math-1012 given by, with the notation c01-math-1013,

equation

For c01-math-1015 in c01-math-1016 and any c01-math-1017 such that c01-math-1018,

equation

hence c01-math-1020 is an orthogonal basis of vectors, each with squared norm c01-math-1021. This basis could easily be transformed into an orthonormal basis.
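A brute-force check of this orthogonality, assuming the state space is (Z/2Z)^N and the basis vectors are the characters chi_u(x) = (-1)^{<u,x>}, each with squared norm 2^N:

```python
from itertools import product

N = 4
cube = list(product((0, 1), repeat=N))

def chi(u, x):
    """Character of (Z/2Z)^N: chi_u(x) = (-1) ** <u, x>."""
    return (-1) ** sum(a * b for a, b in zip(u, x))

# The characters form an orthogonal basis with squared norm 2 ** N.
for u in cube:
    for v in cube:
        ip = sum(chi(u, x) * chi(v, x) for x in cube)
        assert ip == (2 ** N if u == v else 0)
```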

Fourier inversion formula

From this follows the inversion formula

equation
Fourier transform and eigenvalues

For c01-math-1023 , setting

equation

it holds that

equation

and thus c01-math-1026 is an eigenvector for the eigenvalue c01-math-1027 for the transition matrix. There are c01-math-1028 distinct eigenvalues

equation

each of which has an eigenspace of dimension c01-math-1030 generated by the c01-math-1031 such that c01-math-1032 has exactly c01-math-1033 terms taking the value c01-math-1034.
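Assuming, as above, the coordinate-flip walk on {0,1}^N, the announced spectrum, eigenvalue 1 - 2k/N with multiplicity binomial(N, k), can be verified numerically for a small N:

```python
from itertools import product
from math import comb
import numpy as np

N = 4
cube = list(product((0, 1), repeat=N))
idx = {x: i for i, x in enumerate(cube)}

# Transition matrix of the coordinate-flip walk on {0,1}^N.
P = np.zeros((2 ** N, 2 ** N))
for x in cube:
    for i in range(N):
        y = list(x)
        y[i] ^= 1
        P[idx[x], idx[tuple(y)]] = 1 / N

# Spectrum: eigenvalue 1 - 2k/N with multiplicity C(N, k), k = 0..N.
eig = np.sort(np.linalg.eigvalsh(P))           # P is symmetric
expected = np.sort([1 - 2 * k / N
                    for k in range(N + 1) for _ in range(comb(N, k))])
assert np.allclose(eig, expected)
```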

Spectral decomposition of the transition matrix

This yields the spectral decomposition of c01-math-1035 in an orthogonal basis and

equation
Long-time behavior

This yields that c01-math-1037 is c01-math-1038 for c01-math-1039 (constant vectors) and that

equation

For c01-math-1041, let c01-math-1042 be such that c01-math-1043, c01-math-1044 and c01-math-1045 for c01-math-1046, and c01-math-1047 be the corresponding laws. Then, c01-math-1048 and thus c01-math-1049 and

equation

that is, c01-math-1051 and c01-math-1052 are the uniform laws respectively on

equation

Let

equation

The law c01-math-1055 is the mixture of c01-math-1056 and c01-math-1057, which respects the probability that c01-math-1058 be in c01-math-1059 and in c01-math-1060, and c01-math-1061 is the mixture that interchanges these. If c01-math-1062 is of law c01-math-1063, then c01-math-1064 is of law c01-math-1065 and thus

equation

and, as c01-math-1067 and c01-math-1068, for the Hilbert norm associated with c01-math-1069 it holds that

equation

In particular, the law of c01-math-1071 converges exponentially fast to c01-math-1072 and the law of c01-math-1073 to c01-math-1074.

Periodic behavior

The behavior we have witnessed is related to the notion of periodicity: c01-math-1075 has the same parity as c01-math-1076, and c01-math-1077 has the opposite parity to c01-math-1078. This is obviously related to the fact that c01-math-1079 is an eigenvalue.

1.4.5 Renewal process

A component of a system (electronic component, machine, etc.) lasts a random life span before failure. It is inspected at regular intervals (at times c01-math-1080 in c01-math-1081) and replaced when necessary. The components that are used for this purpose behave in an i.i.d. manner.

1.4.5.1 Modeling

At time c01-math-1082, a first component is installed, and the c01-math-1083th component is assumed to have a random life span before replacement given by c01-math-1084, where the c01-math-1085 are i.i.d. on c01-math-1086. Let c01-math-1087 denote an r.v. with same law as c01-math-1088, representing a generic life span. It is often assumed that c01-math-1089.

Let c01-math-1090 denote the age of the component in function at time c01-math-1091, with c01-math-1092 if it is replaced at that time. Setting

equation

it holds that c01-math-1094 on c01-math-1095 (Figure 1.5).

c01f005

Figure 1.5 Renewal process. The • represent the ages at the discrete instants on the horizontal axis and are linearly interpolated by dashes in their increasing phases. Then, c01-math-1096 if c01-math-1097. The c01-math-1098 represent the two possible ages at time c01-math-1099, which are c01-math-1100 if c01-math-1101 and c01-math-1102 if c01-math-1103.

The c01-math-1104 are defined for all c01-math-1105 as c01-math-1106. If c01-math-1107, then all c01-math-1108 are finite, a.s., else if c01-math-1109, then there exists an a.s. finite r.v. c01-math-1110 such that c01-math-1111 and c01-math-1112.
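A sketch of this modeling, with hypothetical integer life spans and inspections at the integer times: the age grows by one at each time step and is reset to 0 whenever the component in function is replaced.

```python
def age_process(lifespans, horizon):
    """Age of the component in function at times 0..horizon; the age
    is reset to 0 at each replacement.  The life spans are hypothetical
    positive integers, given in inspection periods."""
    ages, age, k = [], 0, 0
    for _ in range(horizon + 1):
        if age == lifespans[k]:   # the component has failed: replace it
            age, k = 0, k + 1
        ages.append(age)
        age += 1
    return ages

# Life spans 3, 2, 4: replacements (age 0) occur at times 3 and 5.
assert age_process([3, 2, 4], 8) == [0, 1, 2, 0, 1, 0, 1, 2, 3]
```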

This natural representation in terms of the life spans is not a random recursion of the kind discussed in Theorem 1.2.3. We will give a direct proof that c01-math-1113 is a Markov chain and give its transition matrix.

Note that c01-math-1114 except if c01-math-1115 and c01-math-1116 is in c01-math-1117 for c01-math-1118. These are the only cases to be considered and then

equation

where c01-math-1120 is the number of c01-math-1121 in c01-math-1122 and c01-math-1123, c01-math-1124, c01-math-1125, … , c01-math-1126 are their ranks (Figure 1.5). This can be written as

equation

The independence of c01-math-1128 and c01-math-1129 yields that

equation

Moreover,

equation

is obtained similarly, or by complement to c01-math-1132.

Hence, the only thing that matters is the age of the component in function, and c01-math-1133 is a Markov chain on c01-math-1134 with matrix c01-math-1135 and graph given by

equation
equations (1.4) and (1.5)

This Markov chain is irreducible if and only if c01-math-1138 for every c01-math-1139 and c01-math-1140.

A mathematically equivalent description

Thus, from a mathematical perspective, we can start with a sequence c01-math-1141 with values in c01-math-1142 and assume that a component with age c01-math-1143 at an arbitrary time c01-math-1144 has probability c01-math-1145 of passing the inspection at time c01-math-1146, and otherwise probability c01-math-1147 of being replaced then. In this setup, the law of c01-math-1148 is determined by

equation

This formulation is not as natural as the preceding one. It corresponds to a random recursion given by a sequence c01-math-1150 of i.i.d. uniform r.v. on c01-math-1151, independent of c01-math-1152, and

equation

The renewal process is often introduced in this manner, in order to avoid the previous computations. It is an interesting example, as we will discuss later.
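This random recursion can be sketched as follows, under a hypothetical life-span law (uniform on {1, 2, 3}) from which the replacement probabilities are computed; simulating the recursion recovers that law empirically.

```python
import random
from collections import Counter

def make_step(tail):
    """tail[a] = P(T > a); a component of age a is replaced at the next
    inspection with probability P(T = a+1)/P(T > a), else it ages by 1."""
    def step(age, u):
        hazard = (tail[age] - tail[age + 1]) / tail[age]
        return 0 if u < hazard else age + 1
    return step

# Hypothetical life span uniform on {1, 2, 3}: P(T > a) = (3 - a)/3.
step = make_step([1.0, 2 / 3, 1 / 3, 0.0])
rng = random.Random(4)
age, last_reset, lifespans = 0, 0, []
for n in range(1, 60_001):
    age = step(age, rng.random())
    if age == 0:                       # a replacement: record the life span
        lifespans.append(n - last_reset)
        last_reset = n

freq = Counter(lifespans)
# each life span 1, 2, 3 has empirical frequency close to 1/3
assert all(abs(freq[k] / len(lifespans) - 1 / 3) < 0.02 for k in (1, 2, 3))
```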

Invariant measures and laws

An invariant measure c01-math-1154 satisfies

equation

thus c01-math-1156, which yields uniqueness, and existence holds if and only if

equation

This unique invariant measure can be normalized, in order to yield an invariant law, if and only if it is finite, that is, if

equation

and then

equation
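For a concrete check, assuming a hypothetical life-span law on {1, ..., 4}: the measure proportional to P(T > a) is invariant for the age chain, and its total mass equals the mean life span, so the invariant law exists exactly when that mean is finite.

```python
import numpy as np

# Hypothetical life-span law on {1,...,4}: P(T = k) = k/10.
p = np.array([0.1, 0.2, 0.3, 0.4])
tail = np.array([p[a:].sum() for a in range(4)])   # tail[a] = P(T > a)

# Age-chain transition matrix on {0,1,2,3}: from age a, go back to 0
# with probability P(T = a+1)/P(T > a), otherwise age to a+1.
P = np.zeros((4, 4))
for a in range(4):
    hazard = p[a] / tail[a]
    P[a, 0] = hazard
    if a < 3:
        P[a, a + 1] = 1 - hazard

pi = tail / tail.sum()      # invariant law: pi(a) proportional to P(T > a)
assert np.allclose(pi @ P, pi)
assert np.isclose(tail.sum(), (np.arange(1, 5) * p).sum())  # mass = E[T]
```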
Renewal process and Doeblin condition

A class of renewal processes is one of the rare natural examples of infinite state space Markov chains satisfying the Doeblin condition.

1.4.6 Word search in a character chain

A source emits an infinite i.i.d. sequence of “characters” of some “alphabet.” We are interested in the successive appearances of a certain “word” in the sequence.

For instance, the characters could be c01-math-1165 and c01-math-1166 in a computer system, “red” or “black” in a roulette game, A, C, G, T in a DNA strand, or ASCII characters for a typewriting monkey. Corresponding words could be c01-math-1167, red-red-red-black, GAG, and Abracadabra.

Some natural questions are the following:

  • Is any word going to appear in the sequence?
  • Is it going to appear infinitely often, and with what frequency?
  • What is the law and expectation of the first appearance time?

1.4.6.1 Counting automaton

A general method will be described on a particular instance, the search for the occurrences of the word GAG.

Two different kinds of occurrences can be considered, without or with overlaps; for instance, GAGAG contains a single occurrence of GAG without overlaps but two with overlaps. The case without overlaps is more difficult, and more useful in applications; it will be considered here, but the method can be readily adapted to the other case.

We start by defining a counting automaton with four states c01-math-1168, G, GA, and GAG, which will be able to count the occurrences of GAG in any arbitrary finite character chain. The automaton starts in state c01-math-1169 and then examines the chain sequentially term by term, and:

  • In state c01-math-1170: if the next term is G, then it takes state G, else it stays in state c01-math-1171,
  • In state G: if the next term is A, then it takes state GA; if the next term is G, then it stays in state G; else it takes state c01-math-1172,
  • In state GA: if the next term is G, then it takes state GAG, else it takes state c01-math-1173,
  • In state GAG: if the next term is G, then it takes state G, else it takes state c01-math-1174.

Such an automaton can be represented by a graph that is similar to a Markov chain graph, with nodes given by its possible states and oriented edges between nodes marked by the logical condition for this transition (Figure 1.6).

c01f006

Figure 1.6 Search for the word GAG: Markov chain graph. The graph for the automaton is obtained by replacing c01-math-1175 by “if the next term is c01-math-1176,” c01-math-1177 by “if the next term is c01-math-1178,” c01-math-1179 by “if the next term is not c01-math-1180,” and c01-math-1181 by “if the next term is neither c01-math-1182 nor c01-math-1183.”
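The automaton just described can be transcribed directly; the empty state is represented below by the empty string.

```python
def count_gag(chain):
    """Counting automaton for occurrences of GAG WITHOUT overlaps.
    The state is the recognised prefix so far: '', 'G', 'GA', 'GAG'."""
    state, count = '', 0
    for c in chain:
        if state == '':
            state = 'G' if c == 'G' else ''
        elif state == 'G':
            state = {'A': 'GA', 'G': 'G'}.get(c, '')
        elif state == 'GA':
            state = 'GAG' if c == 'G' else ''
        else:                      # state 'GAG': restart, no overlap reused
            state = 'G' if c == 'G' else ''
        if state == 'GAG':
            count += 1
    return count

assert count_gag('GAGAG') == 1     # a single occurrence without overlaps
assert count_gag('GAGGAG') == 2
```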

This automaton is now used on a sequence of characters given by an i.i.d. sequence c01-math-1184 such that

equation

satisfying c01-math-1186, c01-math-1187, and c01-math-1188.

Let c01-math-1189, and c01-math-1190 be the state of the automaton after having examined the c01-math-1191th character. Theorem 1.2.3 yields that c01-math-1192 is a Markov chain with graph, given in Figure 1.6, obtained from the automaton graph by replacing the logical conditions by their probabilities of being satisfied.

Markovian description

All relevant information can be written in terms of c01-math-1193. For instance, if c01-math-1194 and c01-math-1195 denotes the time of the c01-math-1196th occurrence (complete, without overlaps) of the word for c01-math-1197, and c01-math-1198 the number of such occurrences taking place before c01-math-1199, then

equation

The transition matrix c01-math-1201 is irreducible and has for unique invariant law

equation
Occurrences with overlaps

In order to search for the occurrences with overlaps, it would suffice to modify the automaton by considering the overlaps inside the word. For the word GAG, we need only modify the transitions from state GAG: if the next term is G, then the automaton should take state G, and if the next term is A, then it should take state GA, else it should take state c01-math-1203. For more general overlaps, this can become very involved.
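A sketch of this modification; only the transitions out of state GAG change, so that the trailing G (or GA, after reading an A) is reused as a prefix of the next occurrence.

```python
def count_gag_with_overlaps(chain):
    """GAG automaton modified as described: from state 'GAG', a G keeps
    the trailing 'G' and an A keeps the trailing 'GA' as a new prefix."""
    state, count = '', 0
    for c in chain:
        if state == '':
            state = 'G' if c == 'G' else ''
        elif state == 'G':
            state = {'A': 'GA', 'G': 'G'}.get(c, '')
        elif state == 'GA':
            state = 'GAG' if c == 'G' else ''
        else:                      # state 'GAG': reuse the overlap
            state = {'G': 'G', 'A': 'GA'}.get(c, '')
        if state == 'GAG':
            count += 1
    return count

assert count_gag_with_overlaps('GAGAG') == 2   # two occurrences with overlaps
```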

1.4.6.2 Snake chain

We describe another method for the search for the occurrences with overlaps of a word c01-math-1204 of length c01-math-1205 in an i.i.d. sequence c01-math-1206 of characters from some alphabet c01-math-1207.

Setting c01-math-1208, then

equation

is the time of the c01-math-1210th occurrence of the word, c01-math-1211 (with c01-math-1212), and

equation

is the number of such occurrences before time c01-math-1214. In general, c01-math-1215 is not i.i.d., but it will be seen to be a Markov chain.

More generally, let c01-math-1216 be a Markov chain on c01-math-1217 with arbitrary matrix c01-math-1218 and c01-math-1219 for c01-math-1220. Then, c01-math-1221 is a Markov chain on c01-math-1222 with matrix c01-math-1223, whose only nonzero terms are given by

equation

called the snake chain of length c01-math-1225 for c01-math-1226. The proof is straightforward if the conditional formulation is avoided.
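A sketch of this construction for a hypothetical two-state matrix P and length 3, together with a check of the invariant measure discussed below (the window law pi(x0)P(x0,x1)P(x1,x2)):

```python
from itertools import product
import numpy as np

def snake_matrix(P, ell):
    """Transition matrix of the snake chain of length ell built over P:
    the window (x_0,...,x_{ell-1}) moves to (x_1,...,x_{ell-1}, y)
    with probability P[x_{ell-1}, y]; every other entry is zero."""
    n = P.shape[0]
    states = list(product(range(n), repeat=ell))
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for s in states:
        for y in range(n):
            Q[idx[s], idx[s[1:] + (y,)]] = P[s[-1], y]
    return Q, states

P = np.array([[0.5, 0.5], [0.2, 0.8]])
Q, states = snake_matrix(P, 3)
assert np.allclose(Q.sum(axis=1), 1.0)          # Q is a transition matrix

# If pi is invariant for P, then pi(x0) P[x0,x1] P[x1,x2] is invariant for Q.
pi = np.array([0.2, 0.5]) / 0.7                 # pi P = pi for this P
pis = np.array([pi[s[0]] * P[s[0], s[1]] * P[s[1], s[2]] for s in states])
assert np.allclose(pis @ Q, pis)
```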

Irreducibility

If c01-math-1227 is irreducible, then c01-math-1228 is irreducible on its natural state space

equation
Invariant measures and laws

If c01-math-1230 is an invariant measure for c01-math-1231, then c01-math-1232 given by

equation

is immediately seen to be an invariant measure for c01-math-1234. If further c01-math-1235 is a law, then c01-math-1236 is also a law.

In the i.i.d. case where c01-math-1237, the only invariant law for c01-math-1238 is given by c01-math-1239, and the only invariant law for c01-math-1240 by the product law c01-math-1241.

1.4.7 Product chain

Let c01-math-1242 and c01-math-1243 be two transition matrices on c01-math-1244 and c01-math-1245, and the matrices c01-math-1246 on c01-math-1247 have generic term

equation

Then, c01-math-1249 is a transition matrix on c01-math-1250, since, in the sense of product laws,

equation

see below for more details. See Figure 1.7.

c01f007

Figure 1.7 Product chain. The first and second coordinates are drawn independently according to c01-math-1252 and c01-math-1253.

The Markov chain c01-math-1254 with matrix c01-math-1255 is called the product chain. Its transitions are obtained by independent transitions of each coordinate according, respectively, to c01-math-1256 and c01-math-1257. In particular, c01-math-1258 and c01-math-1259 are two Markov chains of matrices c01-math-1260 and c01-math-1261, which conditional on c01-math-1262 are independent, and

equation

1.4.7.1 Invariant measures and laws

Immediate computations yield that if c01-math-1264 is an invariant measure for c01-math-1265 and c01-math-1266 for c01-math-1267, then the product measure c01-math-1268 given by

equation

is invariant for c01-math-1270. Moreover,

equation

and thus if c01-math-1272 and c01-math-1273 are laws then c01-math-1274 is a law.
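A numerical sketch with hypothetical two-state matrices: the product-chain matrix is the Kronecker product of the two transition matrices, and the product of their invariant laws is invariant for it.

```python
import numpy as np

P1 = np.array([[0.9, 0.1], [0.4, 0.6]])
P2 = np.array([[0.3, 0.7], [0.5, 0.5]])

# Product-chain matrix: coordinates evolve independently, so
# (P1 x P2)[(x1, x2), (y1, y2)] = P1[x1, y1] * P2[x2, y2].
P = np.kron(P1, P2)
assert np.allclose(P.sum(axis=1), 1.0)          # still a transition matrix

def invariant(M):
    """Invariant law of M: left eigenvector for eigenvalue 1, normalised."""
    w, v = np.linalg.eig(M.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1))])
    return x / x.sum()

pi1, pi2 = invariant(P1), invariant(P2)
pi = np.kron(pi1, pi2)                          # product of invariant laws
assert np.allclose(pi @ P, pi)
```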

1.4.7.2 Irreducibility problem

The matrix c01-math-1275 on c01-math-1276 is irreducible and has the uniform law as its unique invariant law, whereas a Markov chain with matrix c01-math-1277 alternates either between c01-math-1278 and c01-math-1279 or between c01-math-1280 and c01-math-1281, depending on the initial state, and is not irreducible on c01-math-1282. The laws

equation

are invariant for c01-math-1284 and generate the space of invariant measures.

All this can be readily generalized to an arbitrary number of transition matrices.

Exercises

1.1 The space station, 1 An aimless astronaut wanders within a space station, schematically represented as follows:

c01g007

The space station spins around its center in order to create artificial gravity in its periphery. When the astronaut is in one of the peripheral modules, the probability that he next goes to each of the two adjacent peripheral modules is twice the probability that he goes to the central module. When the astronaut is in the central module, the probability that he next goes to each of the six peripheral modules is the same.

Represent this evolution by a Markov chain and give its matrix and graph. Prove that this Markov chain is irreducible and give its invariant law.

1.2 The mouse, 1 A mouse evolves in an apartment, schematically represented as follows:

c01g008

The mouse chooses uniformly at random an opening of the room it is in and goes through it into a new room. It has a short memory and immediately forgets where it has come from.

Represent this evolution by a Markov chain and give its matrix and graph. Prove that this Markov chain is irreducible and give its invariant law.

1.3 Doubly stochastic matrices Let c01-math-1285 be a doubly stochastic matrix on a state space c01-math-1286: by definition,

equation
  1. (a) Find a simple invariant measure for c01-math-1288.
  2. (b) Prove that c01-math-1289 is doubly stochastic for all c01-math-1290.
  3. (c) Prove that the transition matrix for a random walk on a network is doubly stochastic.

1.4 The Labouchère system, 1 In a game where the possible gain is equal to the wager, the probability of gain c01-math-1291 of the player at each draw typically satisfies c01-math-1292 and even c01-math-1293, but is usually close to c01-math-1294, as when betting on red or black at roulette. In this framework, the Labouchère system is a strategy meant to provide a means for earning in a secure way a sum c01-math-1295 determined in advance.

The sum c01-math-1296 is decomposed arbitrarily as a sum of c01-math-1297 positive terms, which are put in a list. The strategy then transforms recursively this list, until it is empty.

At each draw, if c01-math-1298 then the sum of the first and last terms of the list is wagered, and if c01-math-1299 then the single term is wagered. If the gambler wins, he or she removes from the list the terms concerned by the wager. If the gambler loses, he or she retains these terms and adds at the end of the list a term worth the sum just wagered. The game stops when c01-math-1300, and hence, the sum c01-math-1301 has been won.

(Martingale theory proves that in realistic situations, for instance, if wagers are bounded or credit is limited, then with a probability close to c01-math-1302 the sum c01-math-1303 is indeed won, but with a small probability a huge loss occurs, large enough to prevent the gambler from continuing the game and often from ever gambling again.)

  1. (a) Represent the list evolution by a Markov chain c01-math-1304 on the set
    equation

    of words of the form c01-math-1306. Describe its transition matrix c01-math-1307 and its graph. Prove that if c01-math-1308 reaches c01-math-1309 (the empty word), then the gambler wins the sum c01-math-1310.

  2. (b) Let c01-math-1311 be the length of the list (or word) c01-math-1312 for c01-math-1313. Prove that c01-math-1314 is a Markov chain on c01-math-1315 and give its matrix c01-math-1316 and its graph.

1.5 Three-card Monte Three playing cards are laid face down in a row on a cardboard box at time c01-math-1317. At times c01-math-1318, the middle card is exchanged with probability c01-math-1319 with the card on the right and with probability c01-math-1320 with the one on the left.

  1. (a) Represent the evolution of the three cards by a Markov chain c01-math-1321. Give its transition matrix c01-math-1322 and its graph. Prove that c01-math-1323 is irreducible. Find its invariant law c01-math-1324.
  2. (b) The cards are the ace of spades and two reds. Represent the evolution of the ace of spades by a Markov chain c01-math-1325. Give its transition matrix c01-math-1326 and its graph. Prove that it is irreducible. Find its invariant law c01-math-1327.
  3. (c) Compute c01-math-1328 in terms of the initial law c01-math-1329 and c01-math-1330 and c01-math-1331. Prove that the law c01-math-1332 of c01-math-1333 converges to c01-math-1334 as c01-math-1335 goes to infinity, give an exponential convergence rate for this convergence, and find for which value of c01-math-1336 the convergence is fastest.

1.6 Andy, 1 If Andy is drunk one evening, then he has one chance in ten of ending up in jail, in which case he will remain sober the following evening. If Andy is drunk one evening and does not end up in jail, then he has one chance in two of being drunk the following evening. If Andy stays sober one evening, then he has three chances out of four of remaining sober the following evening.

It is assumed that c01-math-1337 constitutes a Markov chain, where c01-math-1338 if Andy is drunk and ends up in jail on the c01-math-1339th evening, c01-math-1340 if Andy is then drunk and does not end up in jail, and c01-math-1341 if he then remains sober.

Give the transition matrix c01-math-1342 and the graph for c01-math-1343. Prove that c01-math-1344 is irreducible and compute its invariant law. Compute c01-math-1345 in terms of c01-math-1346. What is the behavior of c01-math-1347 when c01-math-1348 goes to infinity?

1.7 Squash Let us recall the original scoring system for squash, known as English scoring. If the server wins a rally, then he or she scores a point and retains service. If the returner wins a rally, then he or she becomes the next server but no point is scored. In a game, the first player to score c01-math-1349 points wins, except if the score reaches c01-math-1350-c01-math-1351, in which case the returner must choose to continue to either c01-math-1352 or c01-math-1353 points, and the first player to reach that total wins.

A statistical study of the games between two players indicates that the rallies are won by Player A at service with probability c01-math-1354 and by Player B at service with probability c01-math-1355, each in i.i.d. manner.

The situation in which Player A has c01-math-1356 points, Player B has c01-math-1357 points, and Player L is at service is denoted by c01-math-1358 in c01-math-1359.

  1. (a) Describe the game by a Markov chain on c01-math-1360, assuming that if the score reaches c01-math-1361-c01-math-1362 then they play on to c01-math-1363 points (the play up to c01-math-1364 can easily be deduced from this), in the two following cases: (i) all rallies are considered and (ii) only point scoring is considered.
  2. (b) Trace the graphs from arriving at c01-math-1365-c01-math-1366 on the service of Player A to the end of the game.
  3. (c) A game gets to c01-math-1367-c01-math-1368 on the service of Player A. Compute in terms of c01-math-1369 and c01-math-1370 the probability that Player B wins according to whether he or she elects to go to c01-math-1371 or c01-math-1372 points. Counsel Player B on this difficult choice.

1.8 Genetic models, 1 Among the individuals of a species, a certain gene can appear under c01-math-1373 different forms called alleles.

In a microscopic (individual centered) model for a population of c01-math-1374 individuals, these are arbitrarily numbered, and the fact that individual c01-math-1375 carries allele c01-math-1376 is coded by the state

equation

A macroscopic representation only retains the numbers of individuals carrying each allele, and the state space is

equation

We study two simplified models for the reproduction of the species, in which the population size is fixed, and where the selective advantage of every allele c01-math-1379 w.r.t. the others is quantified by a real number c01-math-1380.

  1. Synchronous: Fisher–Wright model: at each step, the whole population is replaced by its descendants, and in i.i.d. manner, each new individual carries allele c01-math-1381 with a probability proportional both to c01-math-1382 and to the number of old individuals carrying allele c01-math-1383.
  2. Asynchronous: Moran model: at each step, a uniformly chosen individual is replaced by a new individual, which carries allele c01-math-1384 with a probability proportional both to c01-math-1385 and to the number of old individuals carrying allele c01-math-1386.
    1. Explain how to obtain the macroscopic representation from the microscopic representation.
    2. Prove that each pair representation-model corresponds to a Markov chain. Give the transition matrices and the absorbing states.

1.9 Records Let c01-math-1387 be i.i.d. r.v. such that c01-math-1388 and c01-math-1389, and c01-math-1390 be the greatest number of consecutive c01-math-1391 observed in c01-math-1392.

  1. (a) Show that c01-math-1393 is not a Markov chain.
  2. (b) Let c01-math-1394, and
    equation

    Prove that c01-math-1396 is a Markov chain and give its transition matrix c01-math-1397. Prove that there exists a unique invariant law c01-math-1398 and compute it.

  3. (c) Let c01-math-1399,
    equation

    Prove that c01-math-1401 is a Markov chain on c01-math-1402 and give its transition matrix c01-math-1403.

  4. (d) Express c01-math-1404 in terms of c01-math-1405, then of c01-math-1406. Deduce from this the law of c01-math-1407.
  5. (e) What is the probability of having at least c01-math-1408 consecutive heads among c01-math-1409 fair tosses of head-or-tails? One can use the fact that for c01-math-1410,
equation

1.10 Incompressible mixture, 1 There are two urns, c01-math-1412 white balls, and c01-math-1413 black balls. Initially c01-math-1414 balls are set in each urn. In i.i.d. manner, a ball is chosen uniformly in each urn and the two are interchanged. The white balls are numbered from c01-math-1415 to c01-math-1416 and the black balls from c01-math-1417 to c01-math-1418. We denote by c01-math-1419 the r.v. with values in

equation

given by the set of the numbers in the first urn just after time c01-math-1421 and by

equation

the corresponding number of white balls.

  1. (a) Prove that c01-math-1423 is an irreducible Markov chain on c01-math-1424 and give its matrix c01-math-1425. Prove that the invariant law c01-math-1426 is unique and compute it.
  2. (b) Do the same for c01-math-1427 on c01-math-1428, with matrix c01-math-1429 and invariant law c01-math-1430.
  3. (c) For c01-math-1431 find a recursion for c01-math-1432, and solve it in terms of c01-math-1433 and c01-math-1434. Do likewise for c01-math-1435. What happens for large c01-math-1436?

1.11 Branching with immigration The individuals of a generation disappear at the following one, each leaving c01-math-1437 descendants with probability c01-math-1438, and in addition, c01-math-1439 immigrants appear with probability c01-math-1440, where c01-math-1441. Let

equation

be the generating functions for the reproduction and the immigration laws.

Similarly to Section 1.4.3, using c01-math-1443 with values in c01-math-1444 and c01-math-1445 and c01-math-1446 for c01-math-1447 and c01-math-1448 such that c01-math-1449 and c01-math-1450 for c01-math-1451 in c01-math-1452, all these r.v. being independent, let us represent the number of individuals in generation c01-math-1453 by

equation

Let c01-math-1455 be the generating function of c01-math-1456.

  1. (a) Prove that c01-math-1457 is a Markov chain, without giving its transition matrix.
  2. (b) Compute c01-math-1458 in terms of c01-math-1459, c01-math-1460, and c01-math-1461, then of c01-math-1462, c01-math-1463, c01-math-1464, and c01-math-1465.
  3. (c) If c01-math-1466 and c01-math-1467, compute c01-math-1468 in terms of c01-math-1469, c01-math-1470, c01-math-1471, and c01-math-1472.

1.12 Single server queue Let c01-math-1473 be i.i.d. r.v. with values in c01-math-1474, with generating function c01-math-1475 and expectation c01-math-1476, and let c01-math-1477 be an independent r.v. with values in c01-math-1478. Let

equation
  1. (a) Prove that c01-math-1480 is a Markov chain with values in c01-math-1481, which is irreducible if and only if c01-math-1482.
  2. (b) Compute c01-math-1483 in terms of c01-math-1484 and c01-math-1485.
  3. (c) It is now assumed that there exists an invariant law c01-math-1486 for c01-math-1487, with generating function denoted by c01-math-1488. Prove that
    equation

    and that c01-math-1490.

  4. (d) Prove that necessarily c01-math-1491 and that c01-math-1492 only in the trivial case where c01-math-1493.
  5. (e) Let c01-math-1494. Prove that c01-math-1495 if and only if c01-math-1496, and then that for c01-math-1497, it holds that c01-math-1498.

1.13 Dobrushin mixing coefficient Let c01-math-1499 be a transition matrix on c01-math-1500, and

equation
  1. (a) Prove that c01-math-1502 and that, for all laws c01-math-1503 and c01-math-1504,
    equation
  2. (b) Prove that
    equation

    One may use that c01-math-1507.

  3. (c) Prove that if c01-math-1508 is such that c01-math-1509, then c01-math-1510 is a Cauchy sequence, its limit is an invariant law c01-math-1511, and
    equation
  4. (d) Assume that c01-math-1513 satisfies the Doeblin condition: there exists c01-math-1514 and c01-math-1515 and a law c01-math-1516 such that c01-math-1517. Prove that c01-math-1518. Compare with the result in Theorem 1.3.4.
  5. (e) Let c01-math-1519 be a sequence of i.i.d. random functions from c01-math-1520 to c01-math-1521, and c01-math-1522 and c01-math-1523 for c01-math-1524, so that c01-math-1525 is the transition matrix of the Markov chain induced by this random recursion, see Theorem 1.2.3 and what follows. Let
    equation

    Prove that

    equation