- Illustrates the effectiveness of the developed methods through simulations that compare against leading offline numerical methods
- Presents the theoretical development through engineering examples and hardware implementations
- Provides computationally efficient function-approximation tools for real-time implementation
- Includes supplementary material: sn.pub/extras
Part of the book series: Communications and Control Engineering (CCE)
About this book
To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor–critic methods for machine learning. They concentrate on establishing stability during both the learning phase and the execution phase, and on adaptive model-based and data-driven reinforcement learning, where the learning process typically relies on instantaneous input-output measurements.
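For readers new to the actor–critic idea, the following is a minimal, self-contained sketch on a scalar linear-quadratic problem, not the algorithms developed in the book: the dynamics, gains, probing noise, and reset logic are all illustrative assumptions. A critic weight w parameterizes the value function V(x) = w*x^2, the actor derives the control from the critic's gradient, and w is tuned by gradient descent on the Bellman (Hamilton-Jacobi) error evaluated along the trajectory from instantaneous state and input measurements.

```python
import numpy as np

# Illustrative actor-critic sketch (assumed scalar example, not the book's
# StaF or model-based algorithms): dynamics xdot = a*x + b*u with running
# cost q*x**2 + r*u**2, and value function approximated as V(x) = w*x**2.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
w = 0.0                      # critic weight estimate
gain, dt = 5.0, 1e-3         # critic learning gain and integration step
rng = np.random.default_rng(0)

x = 1.0
for _ in range(100_000):
    # Actor: u = -(1/(2r)) * b * dV/dx = -(b/r)*w*x, plus probing noise
    # (an assumed stand-in for a persistence-of-excitation condition).
    u = -(b / r) * w * x + 0.1 * rng.standard_normal()
    xdot = a * x + b * u
    # Bellman (HJB) error along the trajectory; zero at the optimal weight.
    delta = q * x**2 + r * u**2 + 2.0 * w * x * xdot
    phi = 2.0 * x * xdot                           # d(delta)/dw with u held fixed
    w -= gain * dt * delta * phi / (1.0 + phi**2)  # normalized gradient step
    x += dt * xdot
    if abs(x) < 0.05:                              # reset to keep the data informative
        x = rng.uniform(-1.0, 1.0)

# For these values the Riccati equation 2*a*p - (b*p)**2/r + q = 0
# gives p = sqrt(2) - 1, so w should settle near 0.414.
print(f"learned w = {w:.3f}, Riccati solution = {np.sqrt(2) - 1:.3f}")
```

The book's contribution is precisely what this sketch glosses over: Lyapunov-based guarantees that the state and the weight estimates remain bounded while such updates run online, rather than relying on offline training or ad hoc excitation.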
This monograph gives academic researchers with backgrounds in diverse disciplines, from aerospace engineering to computer science, who are interested in optimal reinforcement learning, functional analysis, and function approximation theory, a good introduction to the use of model-based methods. Its thorough treatment of advanced topics in control will also interest practitioners working in the chemical-process and power-supply industries.
Authors and Affiliations
- Rushikesh Kamalapurkar, Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, USA
- Patrick Walters, Naval Surface Warfare Center, Panama City, USA
- Joel Rosenfeld, Electrical Engineering, Vanderbilt University, Nashville, USA
- Warren Dixon, Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, USA
About the authors
Dr. Joel Rosenfeld is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at Vanderbilt University, in the VeriVital Laboratory. He received his PhD in mathematics from the University of Florida in 2013 under the direction of Dr. Michael T. Jury. His doctoral work concerned densely defined operators over reproducing kernel Hilbert spaces (RKHSs), for several of which he established characterizations of densely defined multiplication operators. Dr. Rosenfeld then spent four years as a postdoctoral researcher in the Nonlinear Controls and Robotics Laboratory under Dr. Warren E. Dixon, where he worked on problems in numerical analysis and optimal control theory. Working with Dr. Dixon and Dr. Kamalapurkar, he developed the state following (StaF) method, a numerical approach that makes previously intractable online optimal control methods implementable.
Prof. Warren Dixon received his Ph.D. in 2000 from the Department of Electrical and Computer Engineering at Clemson University. He worked as a research staff member and Eugene P. Wigner Fellow at Oak Ridge National Laboratory (ORNL) until 2004, when he joined the Department of Mechanical and Aerospace Engineering at the University of Florida. His main research interest is the development and application of Lyapunov-based control techniques for uncertain nonlinear systems. He has published 3 books, an edited collection, 13 chapters, and over 130 journal and 240 conference papers. His work has been recognized by the 2015 and 2009 American Automatic Control Council (AACC) O. Hugo Schuck (Best Paper) Award, the 2013 Fred Ellersick Award for Best Overall MILCOM Paper, a 2012-2013 University of Florida College of Engineering Doctoral Dissertation Mentoring Award, the 2011 American Society of Mechanical Engineers (ASME) Dynamic Systems and Control Division Outstanding Young Investigator Award, the 2006 IEEE Robotics and Automation Society (RAS) Early Academic Career Award, an NSF CAREER Award (2006-2011), the 2004 Department of Energy Outstanding Mentor Award, and the 2001 ORNL Early Career Award for Engineering Achievement. He is an ASME Fellow and an IEEE Fellow, an IEEE Control Systems Society (CSS) Distinguished Lecturer, and served as Director of Operations for the Executive Committee of the IEEE CSS Board of Governors (2012-2015). He was awarded the Air Force Commander's Public Service Award (2016) for his contributions to the U.S. Air Force Science Advisory Board. He is currently or has been an associate editor for the ASME Journal of Dynamic Systems, Measurement, and Control, Automatica, IEEE Transactions on Systems, Man, and Cybernetics: Part B (Cybernetics), and the International Journal of Robust and Nonlinear Control.
Bibliographic Information
Book Title: Reinforcement Learning for Optimal Feedback Control
Book Subtitle: A Lyapunov-Based Approach
Authors: Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
Series Title: Communications and Control Engineering
DOI: https://doi.org/10.1007/978-3-319-78384-0
Publisher: Springer Cham
eBook Packages: Engineering, Engineering (R0)
Copyright Information: Springer International Publishing AG 2018
Hardcover ISBN: 978-3-319-78383-3 (published 28 May 2018)
Softcover ISBN: 978-3-030-08689-3 (published 26 December 2018)
eBook ISBN: 978-3-319-78384-0 (published 10 May 2018)
Series ISSN: 0178-5354
Series E-ISSN: 2197-7119
Edition Number: 1
Number of Pages: XVI, 293
Topics: Control and Systems Theory; Calculus of Variations and Optimal Control, Optimization; Systems Theory, Control; Communications Engineering, Networks