
Adaptive Dynamic Programming for Control

Algorithms and Stability

Zhang, H., Liu, D., Luo, Y., Wang, D.

2013, XVI, 424 p.

Available Formats:
eBook

Springer eBooks may be purchased by end customers only and are sold without copy protection (DRM free). Instead, all eBooks include personalized watermarks. This means you can read the Springer eBooks across numerous devices such as laptops, eReaders and tablets.

You can pay for Springer eBooks with Visa, Mastercard, American Express or PayPal.

After the purchase you can directly download the eBook file or read it online in our Springer eBook Reader. Furthermore, your eBook will be stored in your MySpringer account, so you can always re-download your eBooks.

 
$139.00

(net) price for USA

ISBN 978-1-4471-4757-2

digitally watermarked, no DRM

Included Formats: PDF and EPUB

download immediately after purchase



Hardcover

You can pay for Springer Books with Visa, Mastercard, American Express or PayPal.

Standard shipping is free of charge for individual customers.

 
$179.00

(net) price for USA

ISBN 978-1-4471-4756-5

free shipping for individuals worldwide

usually dispatched within 3 to 5 business days


  • Convergence proofs of the algorithms presented teach readers how to derive necessary stability and convergence criteria for their own systems
  • Establishes the fundamentals of ADP theory so that student readers can extrapolate their learning into control, operations research and related fields
  • Application examples show how the theory can be made to work in real systems

There are many methods of stable controller design for nonlinear systems. Seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and the techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization, tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control, for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, with proof that the iterative value-function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences (a minimal illustrative sketch of such an iteration follows this list);
• finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually gained from infinite-horizon control;
• nonlinear games, for which a pair of mixed optimal policies is derived that solves the game when the saddle point does not exist and, when it does, avoids the existence conditions of the saddle point.
Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability and minimize each individual performance function, yielding a Nash equilibrium.
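The iterative scheme described in the first item can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration, not code from the book: it assumes example dynamics f(x, u) = 0.8 sin(x) + u and utility U(x, u) = x^2 + u^2, and runs the value-function update V_{i+1}(x) = min_u [U(x, u) + V_i(f(x, u))] on state and control grids. Truncating the loop after a fixed number of sweeps gives the finite-horizon flavour of the second item.

```python
# Hypothetical sketch (not from the book): value-iteration ADP for a scalar
# discrete-time nonlinear system x_{k+1} = f(x_k, u_k) with utility
# U(x, u) = x^2 + u^2, computed on state/control grids with interpolation.
import numpy as np

def f(x, u):
    # Illustrative nonlinear dynamics, chosen so the state stays on the grid.
    return 0.8 * np.sin(x) + u

def utility(x, u):
    return x**2 + u**2

x_grid = np.linspace(-2.0, 2.0, 201)   # sampled states
u_grid = np.linspace(-1.0, 1.0, 101)   # candidate controls
V = np.zeros_like(x_grid)              # V_0 = 0 starts the iteration

for i in range(200):                   # value-function updating sequence
    V_new = np.empty_like(V)
    for j, x in enumerate(x_grid):
        # V_{i+1}(x) = min_u [ U(x, u) + V_i(f(x, u)) ], V_i by interpolation
        cost = utility(x, u_grid) + np.interp(f(x, u_grid), x_grid, V)
        V_new[j] = cost.min()
    converged = np.max(np.abs(V_new - V)) < 1e-6
    V = V_new
    if converged:                      # the sequence has (numerically) settled
        break

def greedy_control(x):
    # Control law extracted from the converged value function.
    cost = utility(x, u_grid) + np.interp(f(x, u_grid), x_grid, V)
    return u_grid[int(np.argmin(cost))]

print(greedy_control(1.0))
```

Grid interpolation stands in here for the neural-network approximators discussed in the book; the stopping test mirrors the convergence of the value-function sequence that the text proves.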
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control:
• establishes the fundamental theory involved, with each chapter devoted to a clearly identifiable control paradigm;
• demonstrates convergence proofs of the ADP algorithms to deepen understanding of how stability and convergence are derived for the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers working on optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.

The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.

Content Level » Research

Keywords » Adaptive Dynamic Programming - Finite-horizon Control - Infinite-horizon Control - Reinforcement Learning - Zero-sum Game

Related subjects » Applications - Artificial Intelligence - Computational Intelligence and Complexity - Control Engineering - Mathematics

