Springer eBooks may be purchased by end customers only and are sold without copy protection (DRM-free). Instead, all eBooks include personalized watermarks, so you can read Springer eBooks across numerous devices such as laptops, eReaders, and tablets.
You can pay for Springer eBooks with Visa, Mastercard, American Express, or PayPal.
After the purchase you can directly download the eBook file or read it online in our Springer eBook Reader. Furthermore, your eBook will be stored in your MySpringer account, so you can always re-download your eBooks.
To the best of our knowledge, this is the first book completely devoted to continuous-time Markov decision processes.
Studies continuous-time MDPs that allow unbounded transition rates, as is the case in most applications.
It is thus distinguished from other books that contain only chapters on the continuous-time case.
Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
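To make the setting concrete, the following is a minimal illustrative sketch (not taken from the book) of a continuous-time MDP: a single-server queue in which the controller chooses a service rate, with a discounted cost criterion. All parameter values, the cost structure, and the policy are hypothetical; the simulation accumulates the discounted cost integral of exp(-alpha*t) times the cost rate along one trajectory.

```python
import math
import random

# Hypothetical model parameters (for illustration only).
ARRIVAL_RATE = 1.0          # customer arrival rate (lambda)
SERVICE_RATES = [0.5, 2.0]  # available actions: slow or fast server
HOLDING_COST = 1.0          # cost per waiting customer per unit time
SERVICE_COST = 0.5          # cost per unit of service rate per unit time
DISCOUNT = 0.1              # discount rate (alpha)

def policy(state):
    """A simple stationary policy: use the fast server when the queue is long."""
    return SERVICE_RATES[1] if state >= 2 else SERVICE_RATES[0]

def simulate_discounted_cost(horizon=1000.0, seed=0):
    """Simulate one trajectory of the controlled queue and return the
    discounted cost. Between jumps the cost rate is constant, so the
    discounted integral over each sojourn interval is computed exactly."""
    rng = random.Random(seed)
    t, state, total = 0.0, 0, 0.0
    while t < horizon:
        mu = policy(state)
        # Total exit rate from the current state (arrivals + completed services).
        rate = ARRIVAL_RATE + (mu if state > 0 else 0.0)
        sojourn = min(rng.expovariate(rate), horizon - t)
        # c * integral_t^{t+s} exp(-alpha*u) du
        cost_rate = HOLDING_COST * state + SERVICE_COST * mu
        total += cost_rate * (math.exp(-DISCOUNT * t)
                              - math.exp(-DISCOUNT * (t + sojourn))) / DISCOUNT
        t += sojourn
        if t >= horizon:
            break
        # Jump: arrival with probability lambda/rate, service completion otherwise.
        if rng.random() < ARRIVAL_RATE / rate:
            state += 1
        elif state > 0:
            state -= 1
    return total

print(simulate_discounted_cost())
```

Choosing which service rate to use in each state, so as to minimize such a discounted (or average) cost, is exactly the kind of optimization problem the optimality criteria in this volume address.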
Content Level »Research
Keywords »Markov chain - Markov decision process - Markov decision processes - controlled Markov chains - operations research - stochastic control - stochastic dynamic programming
Table of contents »Introduction and Summary.- Continuous-Time Markov Decision Processes.- Average Optimality for Finite Models.- Discount Optimality for Nonnegative Costs.- Average Optimality for Nonnegative Costs.- Discount Optimality for Unbounded Rewards.- Average Optimality for Unbounded Rewards.- Average Optimality for Pathwise Rewards.- Advanced Optimality Criteria.- Variance Minimization.- Constrained Optimality for Discount Criteria.- Constrained Optimality for Average Criteria.