
Communications and Control Engineering

Reinforcement Learning for Optimal Feedback Control

A Lyapunov-Based Approach

Authors: Kamalapurkar, R., Walters, P., Rosenfeld, J., Dixon, W.

  • Illustrates the effectiveness of the developed methods with simulations that compare them to leading offline numerical methods
  • Presents theoretical development through engineering examples and hardware implementations
  • Provides computationally efficient function approximation tools for real-time implementation

Buy this book

eBook 118,99 €
price for India (gross)
  • ISBN 978-3-319-78384-0
  • Digitally watermarked, DRM-free
  • Included format: PDF, EPUB
  • eBooks can be used on all reading devices
  • Immediate eBook download after purchase
Hardcover 139,99 €
price for India (gross)
  • ISBN 978-3-319-78383-3
  • Free shipping for individuals worldwide
  • Usually dispatched within 3 to 5 business days.
About this book

Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. To achieve learning under uncertainty, data-driven methods for identifying system models in real time are also developed. Through simulations and experiments, the book illustrates the advantages gained from using a model and from using previous experience in the form of recorded data. The book’s focus on deterministic systems allows for an in-depth Lyapunov-based analysis of the performance of the described methods, both during the learning phase and during execution.
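
As a minimal illustration of the data-driven model-identification idea mentioned above (this sketch is not taken from the book), the parameters of a linearly parameterized model xdot = Y(x, u) @ theta can be estimated from streaming input-output data with recursive least squares; the plant, regressor, input signal, and gains below are illustrative assumptions.

    # Minimal sketch, assuming a scalar plant with a known regressor structure.
    import numpy as np

    theta_true = np.array([-1.0, -0.5, 1.0])      # unknown plant parameters (assumed)

    def Y(x, u):
        # Regressor for the assumed model xdot = th1*x + th2*x**3 + th3*u
        return np.array([x, x**3, u])

    dt, x = 0.01, 0.1
    theta_hat = np.zeros(3)                       # parameter estimate
    P = 100.0 * np.eye(3)                         # least-squares covariance

    for k in range(5000):
        u = np.sin(0.05 * k) + 0.5 * np.sin(0.12 * k)   # sufficiently exciting input (assumed)
        y = Y(x, u)
        xdot = y @ theta_true                     # in practice, xdot is estimated (e.g., filtered)
        Py = P @ y                                # recursive least-squares update
        gain = Py / (1.0 + y @ Py)
        theta_hat = theta_hat + gain * (xdot - y @ theta_hat)
        P = P - np.outer(gain, Py)
        x = x + dt * xdot                         # propagate the simulated plant

    print("estimated:", np.round(theta_hat, 3), "true:", theta_true)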

To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor–critic methods for machine learning. They concentrate on establishing stability during the learning phase and the execution phase, and adaptive model-based and data-driven reinforcement learning, to assist readers in the learning process, which typically relies on instantaneous input-output measurements.
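
As a point of reference, the sketch below (not the book's algorithm) shows the actor–critic idea in its simplest offline form for a scalar linear-quadratic problem: a critic step that evaluates the current policy and an actor step that improves it. The online methods developed in the book approximate these two steps simultaneously, in real time, from measurements; all system parameters and gains here are illustrative assumptions.

    # Minimal sketch: policy iteration for a scalar linear-quadratic problem.
    import math

    a, b = 1.0, 1.0        # assumed dynamics: xdot = a*x + b*u
    q, r = 1.0, 1.0        # running cost: q*x**2 + r*u**2
    k = 3.0                # initial stabilizing feedback gain, u = -k*x (needs b*k > a)

    for _ in range(10):
        # Critic / policy evaluation: the value of the current policy is V(x) = p*x**2,
        # where p solves the scalar Lyapunov equation 2*(a - b*k)*p + q + r*k**2 = 0.
        p = (q + r * k**2) / (2.0 * (b * k - a))
        # Actor / policy improvement: minimizing the Hamiltonian gives u = -(b*p/r)*x.
        k = b * p / r

    p_star = r * (a + math.sqrt(a**2 + q * b**2 / r)) / b**2   # Riccati solution
    print("learned p:", round(p, 4), "Riccati p:", round(p_star, 4))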

This monograph provides academic researchers with backgrounds in diverse disciplines, from aerospace engineering to computer science, who are interested in optimal reinforcement learning, functional analysis, and function approximation theory, with a good introduction to the use of model-based methods. The thorough treatment of this advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.

About the authors

Rushikesh Kamalapurkar received his M.S. and Ph.D. degrees in 2011 and 2014, respectively, from the Department of Mechanical and Aerospace Engineering at the University of Florida. After working for a year as a postdoctoral research fellow with Dr. Warren E. Dixon, he was selected as the 2015-16 MAE postdoctoral teaching fellow. In 2016 he joined the School of Mechanical and Aerospace Engineering at Oklahoma State University as an Assistant Professor. His primary research interest has been intelligent, learning-based optimal control of uncertain nonlinear dynamical systems. He has published 3 book chapters, 18 peer-reviewed journal papers, and 21 peer-reviewed conference papers. His work has been recognized by the 2015 University of Florida Department of Mechanical and Aerospace Engineering Best Dissertation Award and the 2014 University of Florida Department of Mechanical and Aerospace Engineering Outstanding Graduate Research Award.
Dr. Joel Rosenfeld is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at Vanderbilt University, in the VeriVital Laboratory. He received his Ph.D. in Mathematics from the University of Florida in 2013 under the direction of Dr. Michael T. Jury. His doctoral work concerned densely defined operators over reproducing kernel Hilbert spaces (RKHSs), where he established characterizations of densely defined multiplication operators for several RKHSs. Dr. Rosenfeld then spent four years as a postdoctoral researcher in the Nonlinear Controls and Robotics Laboratory under Dr. Warren E. Dixon, where he worked on problems in numerical analysis and optimal control theory. Working together with Dr. Dixon and Dr. Kamalapurkar, he developed the state following (StaF) method, a numerical approach that enables the implementation of online optimal control methods that were previously intractable.
Prof. Warren Dixon received his Ph.D. in 2000 from the Department of Electrical and Computer Engineering at Clemson University. He worked as a research staff member and Eugene P. Wigner Fellow at Oak Ridge National Laboratory (ORNL) until 2004, when he joined the Department of Mechanical and Aerospace Engineering at the University of Florida. His main research interest has been the development and application of Lyapunov-based control techniques for uncertain nonlinear systems. He has published 3 books, an edited collection, 13 chapters, and over 130 journal and 240 conference papers. His work has been recognized by the 2015 and 2009 American Automatic Control Council (AACC) O. Hugo Schuck (Best Paper) Awards, the 2013 Fred Ellersick Award for Best Overall MILCOM Paper, a 2012-2013 University of Florida College of Engineering Doctoral Dissertation Mentoring Award, the 2011 American Society of Mechanical Engineers (ASME) Dynamic Systems and Control Division Outstanding Young Investigator Award, the 2006 IEEE Robotics and Automation Society (RAS) Early Academic Career Award, an NSF CAREER Award (2006-2011), the 2004 Department of Energy Outstanding Mentor Award, and the 2001 ORNL Early Career Award for Engineering Achievement. He is an ASME and IEEE Fellow and an IEEE Control Systems Society (CSS) Distinguished Lecturer, and has served as Director of Operations for the Executive Committee of the IEEE CSS Board of Governors (2012-2015). He was awarded the Air Force Commander's Public Service Award (2016) for his contributions to the U.S. Air Force Scientific Advisory Board. He is or has been an Associate Editor for the ASME Journal of Dynamic Systems, Measurement, and Control, Automatica, IEEE Transactions on Systems, Man, and Cybernetics: Part B (Cybernetics), and the International Journal of Robust and Nonlinear Control.

Table of contents (7 chapters)

  • Optimal Control

    Kamalapurkar, Rushikesh (et al.)

    Pages 1-16

  • Approximate Dynamic Programming

    Kamalapurkar, Rushikesh (et al.)

    Pages 17-42

  • Excitation-Based Online Approximate Optimal Control

    Kamalapurkar, Rushikesh (et al.)

    Pages 43-98

  • Model-Based Reinforcement Learning for Approximate Optimal Control

    Kamalapurkar, Rushikesh (et al.)

    Pages 99-148

  • Differential Graphical Games

    Kamalapurkar, Rushikesh (et al.)

    Pages 149-193

Bibliographic Information

Book Title: Reinforcement Learning for Optimal Feedback Control
Book Subtitle: A Lyapunov-Based Approach
Authors: Kamalapurkar, R., Walters, P., Rosenfeld, J., Dixon, W.
Series Title: Communications and Control Engineering
Copyright: 2018
Publisher: Springer International Publishing
Copyright Holder: Springer International Publishing AG
eBook ISBN: 978-3-319-78384-0
DOI: 10.1007/978-3-319-78384-0
Hardcover ISBN: 978-3-319-78383-3
Series ISSN: 0178-5354
Edition Number: 1
Number of Pages: XVI, 293
Topics