
TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains

Hester, Todd

2013, XIV, 165 p. 55 illus. in color.

Available Formats:
eBook

$99.00 (net) price for USA

ISBN 978-3-319-01168-4

digitally watermarked, no DRM

Included Format: PDF

download immediately after purchase



Hardcover

$129.00 (net) price for USA

ISBN 978-3-319-01167-7

free shipping for individuals worldwide

usually dispatched within 3 to 5 business days



  • Latest research on Temporal Difference Reinforcement Learning for Robots
  • Focuses on applying Reinforcement Learning to real-world problems, particularly learning on robots
  • Presents the model-based Reinforcement Learning algorithm developed by the author's group
  • Written by an expert in the field

This book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real time.

Robots have the potential to solve many problems in society because of their ability to work in dangerous places, doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for.

Reinforcement learning (RL) is a paradigm for learning sequential decision-making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually select actions in real time. This book focuses on addressing all four of these challenges, in particular on time-constrained domains, where the first challenge is critically important. In these domains, the agent's lifetime is not long enough for it to explore the domain thoroughly, and it must learn in very few samples.
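As general background for the temporal-difference methods named in the title (this is not code from the book, and the toy chain MDP and all parameter values are illustrative assumptions), a minimal tabular Q-learning sketch: the agent updates each state-action value toward the observed reward plus the discounted bootstrapped value of the next state.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5,
                     gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP (hypothetical example).

    States 0..n_states-1; action 0 moves left, action 1 moves right.
    Reward 1.0 for reaching the rightmost (terminal) state, else 0.
    """
    rng = random.Random(seed)
    # Q[s][a] holds the estimated return for taking action a in state s
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r + gamma * max(Q[s_next])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s_next
    return Q

Q = q_learning_chain()
# After training, moving right is preferred in every non-terminal state,
# and the value of the final rightward step approaches the reward of 1.0.
```

The book's TEXPLORE algorithm is model-based and far more sample-efficient than this model-free sketch; the snippet only illustrates the temporal-difference update itself.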

Content Level » Research

Keywords » Computational Intelligence - Model Based RL - Real-Time Sample Efficient Reinforcement Learning - Reinforcement Learning - Reinforcement Learning for Robots - TEXPLORE - Temporal Difference Reinforcement Learning for Robots

Related subjects » Computational Intelligence and Complexity - Image Processing - Robotics

Table of contents 

  • Introduction
  • Background and Problem Specification
  • Real Time Architecture
  • The TEXPLORE Algorithm
  • Empirical Evaluation
  • Further Examination of Exploration
  • Related Work
  • Discussion and Conclusion
  • TEXPLORE Pseudo-Code
