Dynamic Optimization

Deterministic and Stochastic Models

  • Textbook
  • © 2016

Overview

  • Provides a self-contained and easy-to-read introduction to dynamic programming
  • Provides a comprehensive treatment of discrete-time multistage optimization
  • Presents the theory of Markov decision processes without advanced measure theory
  • Includes various examples and exercises (without solutions)
  • Includes supplementary material: sn.pub/extras

Part of the book series: Universitext (UTX)

Table of contents (26 chapters)

  1. Deterministic Models

  2. Markovian Decision Processes

About this book

This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance.


Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
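The sequential decision-making the book begins with rests on backward induction: starting from the terminal stage, the optimal value of each state is computed from the values of the following stage. A minimal sketch of this idea, with an illustrative toy model that is not taken from the text:

```python
# Finite-horizon backward induction (deterministic dynamic programming).
# The model below (states, actions, transition, reward) is a made-up toy
# example for illustration only.

def backward_induction(states, actions, transition, reward, horizon):
    """Compute value functions and an optimal policy by backward induction.

    Uses the recursion V_N(s) = 0 and
    V_n(s) = max_a [ reward(s, a) + V_{n+1}(transition(s, a)) ].
    """
    V = {s: 0.0 for s in states}          # terminal values V_N
    policy = {}
    for n in range(horizon - 1, -1, -1):  # stages N-1, ..., 1, 0
        V_next, V = V, {}
        for s in states:
            best_a, best_v = max(
                ((a, reward(s, a) + V_next[transition(s, a)])
                 for a in actions(s)),
                key=lambda av: av[1],
            )
            V[s] = best_v
            policy[(n, s)] = best_a
    return V, policy

# Toy model: states 0..3, action "stay" (0) or "step right" (1),
# reward equals the state reached, horizon of 3 stages.
states = range(4)
V, policy = backward_induction(
    states,
    actions=lambda s: [0, 1] if s < 3 else [0],
    transition=lambda s, a: s + a,
    reward=lambda s, a: s + a,
    horizon=3,
)
# From state 0 the optimal policy steps right at every stage,
# collecting rewards 1 + 2 + 3 = 6.
```

The same backward recursion, with an expectation over random transitions replacing the deterministic successor state, is the template for the stochastic models in the later parts of the book.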



Reviews

“Part I deals with deterministic dynamic optimization models describing the control of discrete-time systems. Part II is devoted to discrete-time stochastic control models. Part III … is devoted to Markovian decision processes with disturbances… The book comprises a lot of examples, problems for readers, and supplements with additional comments for the advanced reader and with bibliographic notes.” (Svetlana A. Kravchenko, zbMATH 1365.90002)

Authors and Affiliations

  • Karlsruher Institut für Technologie (KIT), Karlsruhe, Germany

    Karl Hinderer, Michael Stieglitz

  • University of Ulm, Ulm, Germany

    Ulrich Rieder

About the authors

Karl Hinderer was Professor of Stochastics at the Karlsruhe Institute of Technology (KIT). He wrote the seminal book Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter (1970) and the textbook Grundbegriffe der Wahrscheinlichkeitstheorie (1972). His main research areas were stochastic dynamic programming, probability and stochastic processes.


Ulrich Rieder is Professor emeritus at the University of Ulm. From 1990 to 2008, he was Editor-in-Chief of Mathematical Methods of Operations Research. His main research areas include stochastic dynamic programming and control, risk-sensitive Markov decision processes, stochastic games, and financial optimization.


Michael Stieglitz was Professor at the University of Karlsruhe until 2002. His research contributions are in summability, approximation theory, and probability.
