Overview
- Presents new branches of Markov decision processes (MDPs)
- Applies new methodology for MDPs with discounted total reward criterion
- Offers new applications of MDPs in areas such as the control of discrete event systems and optimal allocation in sequential online auctions
- Establishes the validity of the optimality equation and its properties directly from the definition of the model, and reduces the scale of MDP models through action reduction and state decomposition
- Presents two new optimal control problems for discrete event systems
- Examines two optimal replacement problems in stochastic environments
- Studies continuous time MDPs and semi-Markov decision processes in a semi-Markov environment
Part of the book series: Advances in Mechanics and Mathematics (AMMA, volume 14)
Table of contents (10 chapters)
About this book
Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve multi-period dynamic decision-making problems under stochastic circumstances. There are three basic branches of MDPs: discrete-time MDPs, continuous-time MDPs, and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints, or imprecise parameters.
Markov Decision Processes with Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocation in sequential online auctions. The book presents four main topics used to study optimal control problems:
- a new methodology for MDPs with the discounted total reward criterion;
- transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, which simplifies the application of MDPs;
- MDPs in stochastic environments, which greatly extends the area where MDPs can be applied;
- applications of MDPs to the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions.
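To make the discounted total reward criterion concrete, the sketch below runs value iteration on a toy two-state replacement problem. Everything here (state names, transition probabilities, rewards, and the discount factor) is an illustrative assumption, not material from the book:

```python
# Minimal value iteration for a finite discrete-time MDP under the
# discounted total reward criterion. The toy "machine replacement"
# data below is made up for illustration only.

def value_iteration(states, actions, P, r, gamma=0.9, tol=1e-8):
    """P[s][a] maps next states to probabilities; r[s][a] is the reward."""
    V = {s: 0.0 for s in states}  # start from the zero value function
    while True:
        # Bellman update: maximize expected discounted future value
        V_new = {
            s: max(
                r[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new  # converged to (approximately) the optimal values
        V = V_new

# Toy example: a machine is "good" or "bad"; we may "repair" or "wait".
states = ["good", "bad"]
actions = ["repair", "wait"]
P = {
    "good": {"repair": {"good": 1.0}, "wait": {"good": 0.8, "bad": 0.2}},
    "bad":  {"repair": {"good": 0.9, "bad": 0.1}, "wait": {"bad": 1.0}},
}
r = {
    "good": {"repair": -1.0, "wait": 2.0},
    "bad":  {"repair": -3.0, "wait": 0.0},
}
V = value_iteration(states, actions, P, r, gamma=0.9)
```

Because the good state both earns more and degrades slowly, its optimal value exceeds that of the bad state; the same fixed-point structure underlies the optimality equation whose validity the book establishes.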
This book is intended for researchers, mathematicians, advanced graduate students, and engineers who are interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce.
Reviews
From the reviews:
"Markov decision processes (MDPs) are one of the most comprehensively investigated branches in mathematics. … Very beneficial also are the notes and references at the end of each chapter. … we can recommend the book … for readers who are familiar with Markov decision theory and who are interested in a new approach to modelling, investigating and solving complex stochastic dynamic decision problems." (Peter Köchel, Mathematical Reviews, Issue 2009 c)
Bibliographic Information
Book Title: Markov Decision Processes with Their Applications
Authors: Qiying Hu, Wuyi Yue
Series Title: Advances in Mechanics and Mathematics
DOI: https://doi.org/10.1007/978-0-387-36951-8
Publisher: Springer New York, NY
eBook Packages: Mathematics and Statistics, Mathematics and Statistics (R0)
Copyright Information: Springer-Verlag US 2008
Hardcover ISBN: 978-0-387-36950-1 | Published: 26 November 2007
Softcover ISBN: 978-1-4419-4238-8 | Published: 19 November 2010
eBook ISBN: 978-0-387-36951-8 | Published: 14 September 2007
Series ISSN: 1571-8689
Series E-ISSN: 1876-9896
Edition Number: 1
Number of Pages: XV, 297
Topics: Operations Research, Management Science, Probability Theory and Stochastic Processes, Calculus of Variations and Optimal Control; Optimization, Industrial and Production Engineering