While Dynamic Programming (DP) has helped solve control problems involving dynamic systems, its value was limited by algorithms that could not scale up to realistic problems. In recent years, developments in Reinforcement Learning (RL), DP's model-free counterpart, have changed this. Focusing on continuous-variable problems, this book provides an introduction to classical RL and DP, followed by a presentation of current methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, it offers illustrative examples that readers will be able to adapt to their own work.
Robert Babuska, Lucian Busoniu, and Bart de Schutter are with the Delft University of Technology. Damien Ernst is with the University of Liege.
Table of Contents
Introduction. Dynamic programming and reinforcement learning. Focus of this book. Book outline.
Basics of dynamic programming and reinforcement learning. Introduction. Markov decision processes. Value iteration. Policy iteration. Direct policy search. Conclusions. Bibliographical notes.
Dynamic programming and reinforcement learning in large and continuous spaces. Introduction. The need for approximation in large and continuous spaces. Approximate value iteration. Approximate policy iteration. Finding value function approximators automatically. Approximate policy search. Comparison of approximate value iteration, policy iteration, and policy search. Conclusions. Bibliographical notes.
Q-value iteration with fuzzy approximation. Introduction. Fuzzy Q-iteration. Analysis of fuzzy Q-iteration. Optimizing the membership functions. Experimental studies. Conclusions. Bibliographical notes.
Online and continuous-action least-squares policy iteration. Introduction. Least-squares policy iteration. LSPI with continuous-action approximation. Online LSPI. Using prior knowledge in online LSPI. Experimental studies. Conclusions. Bibliographical notes.
Direct policy search with adaptive basis functions. Introduction. Policy search with adaptive basis functions. Experimental studies. Conclusions. Bibliographical notes.
References. Glossary.
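As a flavor of the value-iteration methods listed above, here is a minimal sketch of classical Q-value iteration on a toy finite MDP. The three-state model, its rewards, and the discount factor are invented for this illustration and are not taken from the book:

```python
# Minimal Q-value iteration on a tiny finite MDP (illustrative sketch only;
# the states, transitions, rewards, and discount factor are made up here).
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# P[s, a, s'] = probability of landing in s' after taking a in s (hypothetical model).
P = np.zeros((n_states, n_actions, n_states))
P[0, 0, 0] = 1.0   # action 0 in state 0: stay
P[0, 1, 1] = 1.0   # action 1 in state 0: move to state 1
P[1, 0, 0] = 1.0   # action 0 in state 1: back to state 0
P[1, 1, 2] = 1.0   # action 1 in state 1: reach absorbing state 2
P[2, :, 2] = 1.0   # state 2 is absorbing

# R[s, a] = expected immediate reward; only the transition into state 2 pays off.
R = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

Q = np.zeros((n_states, n_actions))
for _ in range(200):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) * max_a' Q(s',a')
    Q = R + gamma * P @ Q.max(axis=1)

policy = Q.argmax(axis=1)  # greedy policy extracted from the converged Q-function
```

Because the backup is a contraction with factor `gamma`, the loop converges to the optimal Q-function; here the greedy policy drives both non-absorbing states toward the rewarding transition into state 2.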