A path-breaking account of Markov decision processes: theory and computation
This book's clear presentation of theory, numerous chapter-end problems, and development of a unified method for the computation of optimal policies in both discrete and continuous time make it an excellent course text for graduate students and advanced undergraduates. Its comprehensive coverage of important recent advances in stochastic dynamic programming makes it a valuable working resource for operations research professionals, management scientists, engineers, and others.
Stochastic Dynamic Programming and the Control of Queueing Systems presents the theory of optimization under the finite horizon, infinite horizon discounted, and average cost criteria. It then shows how optimal rules of operation (policies) for each criterion may be numerically determined. A wealth of examples drawn from the control of queueing systems is presented, and nine numerical programs for the computation of optimal policies are fully explicated.
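To give a flavor of what numerically determining an optimal policy involves, the following is a minimal sketch of value iteration for a discounted-cost admission-control queue in slotted time. It is not one of the book's nine programs; the model, the parameter names (buffer size N, arrival probability p, service probability q, holding cost h, rejection cost r, discount factor beta), and their values are illustrative assumptions chosen only for this sketch.

```pascal
program ValueIterationSketch;
{ Illustrative sketch only: value iteration for a discounted-cost,
  slotted-time admission-control queue. Model and parameters are
  assumptions for this example, not the book's programs. }
const
  N    = 10;      { buffer size; states are queue lengths 0..N }
  p    = 0.4;     { arrival probability per slot }
  q    = 0.5;     { service-completion probability per slot }
  h    = 1.0;     { holding cost per customer per slot }
  r    = 5.0;     { cost of rejecting an arrival }
  beta = 0.95;    { discount factor }
  eps  = 1.0e-8;  { convergence tolerance on the sup-norm }
type
  TValue = array[0..N] of Double;
var
  V, Vnew: TValue;
  policy: array[0..N] of Integer;
  x, a, bestA: Integer;
  diff, qa, best, cost, expNext: Double;

{ Expected value of the next state when the post-arrival queue length is y:
  with probability q one customer departs (if any is present). }
function AfterService(y: Integer; const W: TValue): Double;
begin
  if y > 0 then
    AfterService := q * W[y - 1] + (1.0 - q) * W[y]
  else
    AfterService := W[0];
end;

begin
  for x := 0 to N do V[x] := 0.0;
  repeat
    diff := 0.0;
    for x := 0 to N do
    begin
      best := 1.0e30; bestA := 0;
      for a := 0 to 1 do   { 0 = reject arrivals, 1 = admit arrivals }
      begin
        if (a = 1) and (x < N) then
        begin
          cost := h * x;                         { no rejection cost }
          expNext := p * AfterService(x + 1, V)
                   + (1.0 - p) * AfterService(x, V);
        end
        else
        begin
          cost := h * x + p * r;                 { arrivals are turned away }
          expNext := AfterService(x, V);
        end;
        qa := cost + beta * expNext;             { one-step Bellman backup }
        if qa < best then begin best := qa; bestA := a end;
      end;
      Vnew[x] := best; policy[x] := bestA;
      if Abs(Vnew[x] - V[x]) > diff then diff := Abs(Vnew[x] - V[x]);
    end;
    V := Vnew;
  until diff < eps;
  for x := 0 to N do
    WriteLn('state ', x:2, '  V = ', V[x]:10:4, '  admit = ', policy[x]);
end.
```

The loop repeatedly applies the Bellman update V(x) := min over actions of [expected one-step cost + beta times the expected value of the next state] until the largest change across states falls below the tolerance; the greedy actions recorded at convergence form the computed policy, which in admission-control examples of this kind is typically of threshold type.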
The Pascal source code for the programs is available for viewing and downloading on the Wiley Web site at www.wiley.com/products/subject/mathematics. The site contains a link to the author's own Web site and is also a place where readers may discuss developments on the programs or other aspects of the material. The source files are also available via ftp at ftp://ftp.wiley.com/public/sci_tech_med/stochastic.
Stochastic Dynamic Programming and the Control of Queueing Systems features:
* Path-breaking advances in Markov decision process techniques, brought together for the first time in book form
* A theorem/proof format (proofs may be omitted without loss of continuity)
* Development of a unified method for the computation of optimal rules of system operation
* Numerous examples drawn mainly from the control of queueing systems
* Detailed discussions of nine numerical programs
* Helpful chapter-end problems
* Appendices with complete treatment of background material