This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining optimal solutions of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies Markov processes with a finite state space and reviews the existing methods and algorithms for determining the main characteristics of Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite-horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite-horizon stochastic control problems and Markov decision processes. The algorithms developed represent a valuable contribution to the important field of computational network theory.
"This book contributes to the systematization of the most relevant existing methods for these problems by introducing new algorithms for solving different classes of stochastic dynamic programming problems. ... The mathematical and computational level of the book will enable students and practitioners to deepen their understanding of the topic. Numerous examples are included to illustrate the proposed algorithms and methods." (Rosario Romera, Mathematical Reviews, July, 2015)