177,95 €
incl. VAT (inkl. MwSt.)
Available immediately via download
  • Format: PDF


Product description
This handbook presents state-of-the-art research in reinforcement learning, focusing on its applications in the control and game theory of dynamic systems and future directions for related research and technology.

The contributions gathered in this book deal with challenges faced when using learning and adaptation methods to solve academic and industrial problems, such as optimization in dynamic environments with single and multiple agents, convergence and performance analysis, and online implementation. They explore means by which these difficulties can be solved, and cover a wide range of related topics including:

  • deep learning;
  • artificial intelligence;
  • applications of game theory;
  • mixed modality learning; and
  • multi-agent reinforcement learning.


Practicing engineers and scholars in the fields of machine learning, game theory, and autonomous control will find the Handbook of Reinforcement Learning and Control to be thought-provoking, instructive, and informative.


For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the authors
Kyriakos G. Vamvoudakis serves as an Assistant Professor at The Daniel Guggenheim School of Aerospace Engineering at Georgia Tech. He received the Diploma in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2006, and his M.S. and Ph.D. in Electrical Engineering from the University of Texas at Arlington in 2008 and 2011, respectively. From 2012 to 2016 he was a project research scientist at the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara, and he was an assistant professor at the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech until 2018. His research interests include reinforcement learning, control theory, and safe/assured autonomy. He is the recipient of a 2019 ARO YIP award, a 2018 NSF CAREER award, and several international awards, including the 2016 International Neural Network Society Young Investigator Award. He currently serves as an Associate Editor of Automatica; IEEE Computational Intelligence Magazine; IEEE Transactions on Systems, Man, and Cybernetics: Systems; Neurocomputing; Journal of Optimization Theory and Applications; and IEEE Control Systems Letters.

Yan Wan is currently an Associate Professor in the Electrical Engineering Department at the University of Texas at Arlington. She received her Ph.D. in Electrical Engineering from Washington State University in 2009 and then completed postdoctoral training at the University of California, Santa Barbara. Her research interests lie in the modeling, evaluation, and control of large-scale dynamical networks, cyber-physical systems, and stochastic networks. She has been recognized by several prestigious awards, including the NSF CAREER Award, the RTCA William E. Jackson Award, and U.S. Ignite and GENI demonstration awards. She currently serves as an Associate Editor for IEEE Transactions on Control of Network Systems, Transactions of the Institute of Measurement and Control, and Journal of Advanced Control for Applications.

Frank L. Lewis is a Distinguished Scholar Professor and Moncrief-O'Donnell Chair at the University of Texas at Arlington's Automation & Robotics Research Institute. He obtained his Bachelor's Degree in Physics/EE and his MSEE at Rice University, his M.S. in Aeronautical Engineering from the University of West Florida, and his Ph.D. at Georgia Tech. He received the Fulbright Research Award and the Outstanding Service Award from the Dallas IEEE Section, and was selected as Engineer of the Year by the Fort Worth IEEE Section. He is an elected Guest Consulting Professor at South China University of Technology and Shanghai Jiao Tong University. He is a Fellow of the IEEE, a Fellow of IFAC, a Fellow of the U.K. Institute of Measurement & Control, and a U.K. Chartered Engineer. His current research interests include distributed control on graphs, neural and fuzzy systems, and intelligent control.

Derya Cansever is a Program Manager at the US Army Research Office. Prior to that, he was the Chief Engineer of the Communication Networks and Networking Division at US Army CERDEC, where he conducted research in tactical, mission-aware, and software-defined networks. Dr. Cansever has also worked at the Johns Hopkins University Applied Physics Laboratory, AT&T Bell Labs, and GTE Laboratory. He has taught courses on data communications and network security at Boston University and the University of Massachusetts. He holds a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign.