Carl Graham
Markov Chains
- Hardcover
Other customers were also interested in
- Rehman M. Khan: Problem Solving and Data Analysis Using Minitab, 105,99 €
- Andrew R. Webb: Statistical Pattern Recognition, 181,99 €
- Steven Bradley Lowen: Fractal-Based Point Processes, 208,99 €
- Andrew R. Webb: Statistical Pattern Recognition, 93,99 €
- Giuseppe Modica: A First Course in Probability and Markov Chains, 105,99 €
- David Insua: Bayesian Analysis of Stochastic Process Models, 138,99 €
- John A. Sokolowski: Modeling and Simulation for Analyzing Global Events, 146,99 €
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features:
Numerous exercises with solutions as well as extended case studies.
A detailed and rigorous presentation of Markov chains with discrete time and state space.
An appendix presenting probabilistic notions that are necessary to the reader, as well as giving more advanced measure-theoretic notions.
Note: This item can only be shipped to a German delivery address.
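As an illustration of the book's theme, the following is a minimal sketch, not taken from the book, of simulating a Markov chain by random recursion: starting from X_0, each step draws X_{n+1} from the row of the transition matrix indexed by the current state X_n. The two-state chain, the transition probabilities, and the function names are illustrative assumptions only.

```python
import random

# Illustrative two-state transition matrix P, stored as a dict of rows.
# Each row P[state] maps the next state to its transition probability.
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Draw the next state from the row P[state, .] of the transition matrix."""
    r = random.random()
    cumulative = 0.0
    for next_state, prob in P[state].items():
        cumulative += prob
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point rounding of the row sum

def simulate(start, n_steps):
    """Return a sample path (X_0, ..., X_n) started at `start`."""
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

if __name__ == "__main__":
    print(simulate("sunny", 10))
```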
Product details
- Wiley Series in Probability and Statistics, Vol. 1
- Publisher: Wiley & Sons
- 1st edition
- Number of pages: 264
- Publication date: May 27, 2014
- Language: English
- Dimensions: 235 mm x 157 mm x 19 mm
- Weight: 463 g
- ISBN-13: 9781118517079
- ISBN-10: 1118517075
- Item no.: 40191836
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
Carl Graham is affiliated with CNRS (France's National Center for Scientific Research) and Ecole Polytechnique, Palaiseau, France.
Preface ix
List of Figures xi
Nomenclature xiii
Introduction xv
1 First steps 1
1.1 Preliminaries 1
1.2 First properties of Markov chains 2
1.2.1 Markov chains, finite-dimensional marginals, and laws 2
1.2.2 Transition matrix action and matrix notation 5
1.2.3 Random recursion and simulation 9
1.2.4 Recursion for the instantaneous laws, invariant laws 10
1.3 Natural duality: algebraic approach 11
1.3.1 Complex eigenvalues and spectrum 11
1.3.2 Doeblin condition and strong irreducibility 15
1.3.3 Finite state space Markov chains 17
1.4 Detailed examples 21
1.4.1 Random walk on a network 21
1.4.2 Gambler's ruin 22
1.4.3 Branching process: evolution of a population 25
1.4.4 Ehrenfest's Urn 27
1.4.5 Renewal process 33
1.4.6 Word search in a character chain 36
1.4.7 Product chain 38
Exercises 40
2 Past, present, and future 47
2.1 Markov property and its extensions 47
2.1.1 Past σ-field, filtration, and translation operators 47
2.1.2 Markov property 48
2.1.3 Stopping times and strong Markov property 50
2.2 Hitting times and distribution 51
2.2.1 Hitting times, induced chain, and hitting distribution 51
2.2.2 "One step forward" method, Dirichlet problem 53
2.3 Detailed examples 60
2.3.1 Gambler's ruin 60
2.3.2 Unilateral hitting time for a random walk 64
2.3.3 Exit time from a box 67
2.3.4 Branching process 67
2.3.5 Word search 71
Exercises 73
3 Transience and recurrence 79
3.1 Sample paths and state space 79
3.1.1 Communication and closed irreducible classes 79
3.1.2 Transience and recurrence, recurrent class decomposition 80
3.1.3 Detailed examples 83
3.2 Invariant measures and recurrence 87
3.2.1 Invariant laws and measures 87
3.2.2 Canonical invariant measure 89
3.2.3 Positive recurrence, invariant law criterion 91
3.2.4 Detailed examples 93
3.3 Complements 97
3.3.1 Hitting times and superharmonic functions 97
3.3.2 Lyapunov functions 99
3.3.3 Time reversal, reversibility, and adjoint chain 105
3.3.4 Birth-and-death chains 108
Exercises 111
4 Long-time behavior 119
4.1 Path regeneration and convergence 119
4.1.1 Pointwise ergodic theorem, extensions 120
4.1.2 Central limit theorem for Markov chains 124
4.1.3 Detailed examples 126
4.2 Long-time behavior of the instantaneous laws 128
4.2.1 Period and aperiodic classes 128
4.2.2 Coupling of Markov chains and convergence in law 132
4.2.3 Detailed examples 139
4.3 Elements on the rate of convergence for laws 140
4.3.1 The Hilbert space framework 140
4.3.2 Dirichlet form, spectral gap, and exponential bounds 143
4.3.3 Spectral theory for reversible matrices 146
4.3.4 Continuous-time Markov chains 149
Exercises 150
5 Monte Carlo methods 155
5.1 Approximate solution of the Dirichlet problem 155
5.1.1 General principles 155
5.1.2 Heat equation in equilibrium 156
5.1.3 Heat equation out of equilibrium 158
5.1.4 Parabolic partial differential equations 159
5.2 Invariant law simulation 162
5.2.1 Monte Carlo methods and ergodic theorems 162
5.2.2 Metropolis algorithm, Gibbs law, and simulated annealing 163
5.2.3 Exact simulation and backward recursion 166
Appendix A Complements 171
A.1 Basic probabilistic notions 171
A.1.1 Discrete random variable, expectation, and generating function 171
A.1.2 Conditional probabilities and independence 175
A.2 Discrete measure convergence 177
A.2.1 Total variation norm and maximal coupling 177
A.2.2 Duality between measures and functions 180
A.2.3 Weak convergence of laws and convergence in law 182
A.3 Measure-theoretic framework 183
A.3.1 Probability spaces 183
A.3.2 Measurable spaces and functions: signed and nonnegative 185
A.3.3 Random variables, their laws, and expectations 186
A.3.4 Random sequences and Kolmogorov extension theorem 192
References 195
Solutions for the exercises 197
Index 229