
Markov Decision Processes with Their Applications

Hu, Qiying, Yue, Wuyi

2008, XV, 297 p.

Available Formats:
eBook

Springer eBooks may be purchased by end customers only and are sold without copy protection (DRM free); instead, all eBooks include personalized watermarks. This means you can read Springer eBooks across numerous devices, such as laptops, e-readers, and tablets.

You can pay for Springer eBooks with Visa, Mastercard, American Express, or PayPal.

After purchase you can download the eBook file directly or read it online in the Springer eBook Reader. Your eBook will also be stored in your MySpringer account, so you can re-download it at any time.

 
$129.00

(net) price for USA

ISBN 978-0-387-36951-8

digitally watermarked, no DRM

Included Format: PDF

download immediately after purchase



Hardcover


Standard shipping is free of charge for individual customers.

 
$169.00

(net) price for USA

ISBN 978-0-387-36950-1

free shipping for individuals worldwide

usually dispatched within 3 to 5 business days



Softcover


 
$169.00

(net) price for USA

ISBN 978-1-4419-4238-8

free shipping for individuals worldwide

usually dispatched within 3 to 5 business days



  • Presents new branches for Markov Decision Processes (MDP)
  • Applies new methodology for MDPs
  • Offers new applications of MDPs
  • Establishes the validity of the optimality equation and its properties directly from the definition of the model, and reduces the scale of MDP models through action reduction and state decomposition
  • Presents two new optimal control problems for discrete event systems
  • Examines two optimal replacement problems in stochastic environments
  • Studies continuous time MDPs and semi-Markov decision processes in a semi-Markov environment

Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve multi-period dynamic decision-making problems that occur under stochastic conditions. There are three basic branches of MDPs: discrete-time MDPs, continuous-time MDPs, and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints, or imprecise parameters.
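As a concrete illustration of the discrete-time branch with a discounted total reward criterion, the sketch below runs standard value iteration on a small model. The transition probabilities, rewards, and discount factor are invented for illustration and do not come from the book:

```python
import numpy as np

# Illustrative discrete-time MDP: 2 states, 2 actions (made-up numbers).
# P[a, s, s2] = probability of moving s -> s2 under action a
# R[a, s]     = expected one-step reward for taking action a in state s
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
#   V(s) <- max_a [ R(a, s) + gamma * sum_s2 P(a, s, s2) V(s2) ]
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)      # Q[a, s], batched matrix-vector product
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)        # greedy (optimal) action in each state
```

Since the Bellman operator is a gamma-contraction, the iteration converges geometrically, which is why a simple sup-norm stopping rule suffices here.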

Markov Decision Processes With Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocations in sequential online auctions. The book presents four main topics that are used to study optimal control problems:

  • a new methodology for MDPs with the discounted total reward criterion;

  • transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, thereby simplifying the application of MDPs;

  • MDPs in stochastic environments, which greatly extends the area in which MDPs can be applied;

  • applications of MDPs in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions.
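The second topic, reducing a continuous-time MDP to a discrete-time one, is commonly carried out by uniformization. The sketch below shows that standard construction; the function name and the example rate matrix are illustrative and not taken from the book:

```python
import numpy as np

def uniformize(Q_rates, r, alpha, Lam=None):
    """Uniformize a continuous-time MDP into an equivalent
    discrete-time discounted MDP (illustrative sketch).

    Q_rates[a, s, s2] : transition rate matrices (each row sums to 0)
    r[a, s]           : reward rates
    alpha             : continuous-time discount rate (> 0)
    Lam               : uniformization constant; must dominate all exit rates
    """
    n_actions, n_states, _ = Q_rates.shape
    if Lam is None:
        # Exit rate of state s under action a is -Q_rates[a, s, s]
        Lam = np.max(-np.einsum('ass->as', Q_rates))
    P = np.eye(n_states) + Q_rates / Lam   # stochastic matrices, rows sum to 1
    gamma = Lam / (alpha + Lam)            # equivalent discrete-time discount
    R = r / (alpha + Lam)                  # rescaled one-step rewards
    return P, R, gamma

# Tiny invented example: one action, two states
Q_ex = np.array([[[-1.0,  1.0],
                  [ 2.0, -2.0]]])
r_ex = np.array([[1.0, 0.0]])
P, R, gamma = uniformize(Q_ex, r_ex, alpha=0.5)
```

An optimal policy of the resulting discrete-time MDP is then optimal for the original continuous-time problem, which is what makes the reduction useful in practice.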

This book is intended for researchers, mathematicians, advanced graduate students, and engineers who are interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce.

Content Level » Research

Keywords » Markov decision process - Observable - Optimal control - decision making problems - decision processes - discrete event systems - stochastic dynamic programming

Related subjects » Applications - Mathematics - Probability Theory and Stochastic Processes - Production & Process Engineering

Table of contents 

Discrete-Time Markov Decision Processes: Total Reward.- Discrete-Time Markov Decision Processes: Average Criterion.- Continuous-Time Markov Decision Processes.- Semi-Markov Decision Processes.- Markov Decision Processes in Semi-Markov Environments.- Optimal Control of Discrete Event Systems: I.- Optimal Control of Discrete Event Systems: II.- Optimal Replacement under Stochastic Environments.- Optimal Allocation in Sequential Online Auctions.
