Communications and Control Engineering

Simulation-Based Algorithms for Markov Decision Processes

Authors: Chang, H.S., Hu, J., Fu, M.C., Marcus, S.I.

  • Rigorous theoretical derivation of sampling and population-based algorithms enables the reader to expand on the work presented in the certainty that new results will have a sound foundation
  • New chapter on game-theoretic methods for solving Markov decision processes gives the researcher up-to-date information
  • Presents recently developed on-line methods in constrained and uncertain model settings for the reader to use and adapt in their own research

Buy this book

eBook $129.00
price for USA (gross)
  • ISBN 978-1-4471-5022-0
  • Digitally watermarked, DRM-free
  • Included format: EPUB, PDF
  • eBooks can be used on all reading devices
  • Immediate eBook download after purchase
Hardcover $169.99
price for USA
  • ISBN 978-1-4471-5021-3
  • Free shipping for individuals worldwide
  • Usually dispatched within 3 to 5 business days.
Softcover $139.99
price for USA
  • ISBN 978-1-4471-5990-2
  • Free shipping for individuals worldwide
  • Usually dispatched within 3 to 5 business days.
Rent the eBook  
  • Rental duration: 1 or 6 months
  • low-cost access
  • online reader with highlighting and note-making option
  • can be used across all devices
About this book

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, so that the curse of dimensionality renders exact solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulty of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search.
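The simulation-sample setting described above can be illustrated with a minimal Monte Carlo sketch (an illustration, not an algorithm from the book): given only a black-box simulator of transitions and costs, a policy's value at a state is estimated by averaging discounted returns over independent rollouts. The function and parameter names here are hypothetical.

```python
def estimate_value(simulate_step, policy, state, horizon, num_rollouts, gamma=0.95):
    """Monte Carlo estimate of the discounted cost of `policy` from `state`,
    using only a black-box simulator: simulate_step(s, a) -> (next_s, cost)."""
    total = 0.0
    for _ in range(num_rollouts):
        s, discount, ret = state, 1.0, 0.0
        for _ in range(horizon):
            a = policy(s)
            s, cost = simulate_step(s, a)  # sample one transition and its cost
            ret += discount * cost
            discount *= gamma
        total += ret
    return total / num_rollouts
```

For example, with a deterministic unit-cost simulator, gamma = 0.5, and a 3-step horizon, the estimate is exactly 1 + 0.5 + 0.25 = 1.75; with a stochastic simulator, the average converges to the true discounted cost as the number of rollouts grows.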
This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. It includes:
  • innovative material on MDPs, both in constrained settings and with uncertain transition properties;
  • game-theoretic methods for solving MDPs;
  • theories for developing rollout-based algorithms; and
  • details of approximate stochastic annealing, a population-based on-line simulation-based algorithm.
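To give a flavor of the rollout idea mentioned above, here is a hedged sketch (assumed names, not the book's algorithms): a base policy is improved on-line by simulating, for each candidate action, one step of that action followed by the base policy, then choosing the action with the smallest estimated cost-to-go.

```python
def rollout_action(simulate_step, base_policy, state, actions, horizon,
                   num_rollouts, gamma=0.95):
    """One-step rollout improvement of `base_policy` at `state`:
    score each candidate action by simulated cost-to-go and return the best."""
    best_action, best_q = None, float("inf")
    for a in actions:
        q = 0.0
        for _ in range(num_rollouts):
            s, cost = simulate_step(state, a)      # try the candidate action first
            ret, discount = cost, gamma
            for _ in range(horizon - 1):           # then follow the base policy
                s, cost = simulate_step(s, base_policy(s))
                ret += discount * cost
                discount *= gamma
            q += ret
        q /= num_rollouts
        if q < best_q:
            best_action, best_q = a, q
    return best_action
```

Under mild conditions the rollout policy performs no worse than the base policy, which is the motivation for this family of on-line methods.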
The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of instruction and reference for students of control and operations research.

About the authors

Hyeong Soo Chang (SM’07 of the IEEE, Member of INFORMS) received the B.S. and M.S. degrees in electrical engineering and the Ph.D. degree in electrical and computer engineering, all from Purdue University, West Lafayette, IN, in 1994, 1996, and 2001, respectively. Since 2003, he has been with the Department of Computer Science and Engineering, Sogang University, Seoul, Korea, where he is now an Associate Professor. He has published about 30 journal papers on MDPs and related topics. His main research interests include Markov decision processes, Markov games, computational learning theory, computational intelligence, and stochastic optimization. He currently serves as an Associate Editor for the IEEE Transactions on Automatic Control.
Jiaqiao Hu (M’11 of the IEEE, Member of INFORMS) received the B.S. degree in automation from Shanghai Jiao Tong University, Shanghai, China, in 1997, the M.S. degree in applied mathematics from the University of Maryland, Baltimore County, in 2001, and the Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2006. Since 2006, he has been with the Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, where he is currently an Assistant Professor. His research interests include Markov decision processes, simulation-based optimization, global optimization, applied probability, and stochastic modeling and analysis.
Michael Fu (Fellow of the IEEE, Member of INFORMS) received his Ph.D. and M.S. degrees in applied mathematics from Harvard University in 1989 and 1986, respectively. He received S.B. and S.M. degrees in electrical engineering and an S.B. degree in mathematics from the Massachusetts Institute of Technology in 1985. Since 1989, he has been at the University of Maryland, College Park, in the College of Business and Management. He was the Simulation Area Editor for Operations Research, is an Associate Editor for Management Science, and has served on the editorial boards of the INFORMS Journal on Computing, Production and Operations Management, and IIE Transactions. He was on the program committee for the Spring 1996 INFORMS National Meeting, in charge of contributed papers. In 1995, he received the Maryland Business School's annual Allen J. Krowe Award for Teaching Excellence. He is the co-author (with Jian-Qiang Hu) of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications (ISBN 0-7923-9873-4, 1997), which received the 1998 INFORMS College on Simulation Outstanding Publication Award. Other awards include the 1999 IIE Operations Research Division Award and a 1998 IIE Transactions Best Paper Award. In 2002, he received ISR's Outstanding Systems Engineering Faculty Award. He currently serves as a director of the National Science Foundation's Operations Research Program. Dr. Fu's research interests lie in the areas of stochastic derivative estimation and simulation optimization of discrete-event systems, particularly with applications to manufacturing systems, inventory control, and the pricing of financial derivatives.
Steven I. Marcus (Fellow of the IEEE, Fellow of SIAM, Member of INFORMS) received his Ph.D. and S.M. from the Massachusetts Institute of Technology in 1975 and 1972, respectively. He received a B.A. from Rice University in 1971. From 1975 to 1991, he was with the Department of Electrical and Computer Engineering at the University of Texas at Austin, where he was the L.B. (Preach) Meaders Professor in Engineering. He was Associate Chairman of the Department during the period 1984-89. In 1991, he joined the University of Maryland, College Park, where he was Director of the Institute for Systems Research until 1996. He is currently a Professor in the Electrical Engineering Department and the Institute for Systems Research. He has served as an Editor of the SIAM Journal on Control and Optimization, and Associate Editor of Mathematics of Control, Signals, and Systems, Journal on Discrete Event Dynamic Systems, and Acta Applicandae Mathematicae. He has authored or co-authored more than 100 articles, conference proceedings, and book chapters. Dr. Marcus's research interests lie in the areas of control and systems engineering, analysis and control of stochastic systems, Markov decision processes, stochastic and adaptive control, learning, fault detection, and discrete event systems, with applications in manufacturing, acoustics, and communication networks.

Reviews

From the book reviews:

“The book consists of five chapters. … This well-written book is addressed to researchers in MDPs and applied modeling with an interest in numerical computations, but the book is also accessible to graduate students in operations research, computer science, and economics. The authors give many pseudocodes of algorithms, numerical examples, algorithm convergence analyses, and bibliographical notes that can be very helpful for readers to understand the ideas presented in the book and to perform experiments on their own.” (Wiesław Kotarski, zbMATH, Vol. 1293, 2014)

Table of contents (5 chapters)

  • Markov Decision Processes

    Chang, Hyeong Soo (et al.)

    Pages 1-17

  • Multi-stage Adaptive Sampling Algorithms

    Chang, Hyeong Soo (et al.)

    Pages 19-60

  • Population-Based Evolutionary Approaches

    Chang, Hyeong Soo (et al.)

    Pages 61-87

  • Model Reference Adaptive Search

    Chang, Hyeong Soo (et al.)

    Pages 89-177

  • On-Line Control Methods via Simulation

    Chang, Hyeong Soo (et al.)

    Pages 179-218


Bibliographic Information

Book Title
Simulation-Based Algorithms for Markov Decision Processes
Authors
Chang, H.S., Hu, J., Fu, M.C., Marcus, S.I.
Series Title
Communications and Control Engineering
Copyright
2013
Publisher
Springer-Verlag London
Copyright Holder
Springer-Verlag London
eBook ISBN
978-1-4471-5022-0
DOI
10.1007/978-1-4471-5022-0
Hardcover ISBN
978-1-4471-5021-3
Softcover ISBN
978-1-4471-5990-2
Series ISSN
0178-5354
Edition Number
2
Number of Pages
XVII, 229
Number of Illustrations and Tables
48 b/w illustrations, 1 illustration in colour
Topics