
Controlled Markov processes and viscosity solutions

"This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. Stochastic control problems are treated using the dynamic programming approach. The authors approach stochastic control problems by the method of dynamic programming."--Jacket
eBook, English, ©2006
Springer, New York, ©2006
1 online resource (428 pages)
9780387260457, 9780387310718, 9786610461998, 0387260455, 0387310711, 6610461996
228397047
Cover
Contents
Preface to Second Edition
Preface
Notation
I Deterministic Optimal Control
I.1 Introduction
I.2 Examples
I.3 Finite time horizon problems
I.4 Dynamic programming principle
I.5 Dynamic programming equation
I.6 Dynamic programming and Pontryagin's principle
I.7 Discounted cost with infinite horizon
I.8 Calculus of variations I
I.9 Calculus of variations II
I.10 Generalized solutions to Hamilton-Jacobi equations
I.11 Existence theorems
I.12 Historical remarks
II Viscosity Solutions
II.1 Introduction
II.2 Examples
II.3 An abstract dynamic programming principle
II.4 Definition
II.5 Dynamic programming and viscosity property
II.6 Properties of viscosity solutions
II.7 Deterministic optimal control and viscosity solutions
II.8 Viscosity solutions: first order case
II.9 Uniqueness: first order case
II.10 Continuity of the value function
II.11 Discounted cost with infinite horizon
II.12 State constraint
II.13 Discussion of boundary conditions
II.14 Uniqueness: first order case
II.15 Pontryagin's maximum principle (continued)
II.16 Historical remarks
III Optimal Control of Markov Processes: Classical Solutions
III.1 Introduction
III.2 Markov processes and their evolution operators
III.3 Autonomous (time-homogeneous) Markov processes
III.4 Classes of Markov processes
III.5 Markov diffusion processes on ℝⁿ; stochastic differential equations
III.6 Controlled Markov processes
III.7 Dynamic programming: formal description
III.8 A verification theorem; finite time horizon
III.9 Infinite time horizon
III.10 Viscosity solutions
III.11 Historical remarks
IV Controlled Markov Diffusions in ℝⁿ
IV.1 Introduction
IV.2 Finite time horizon problem
IV.3 Hamilton-Jacobi-Bellman PDE
IV.4 Uniformly parabolic case
IV.5 Infinite time horizon
IV.6 Fixed finite time horizon problem: preliminary estimates
IV.7 Dynamic programming principle
IV.8 Estimates for first order difference quotients
IV.9 Estimates for second order difference quotients
IV.10 Generalized subsolutions and solutions
IV.11 Historical remarks
V Viscosity Solutions: Second-Order Case
V.1 Introduction
V.2 Dynamic programming principle
V.3 Viscosity property
V.4 An equivalent formulation
V.5 Semiconvex, concave approximations
V.6 Crandall-Ishii Lemma
V.7 Properties of H
V.8 Comparison
V.9 Viscosity solutions in Q0
V.10 Historical remarks
VI Logarithmic Transformations and Risk Sensitivity
VI.1 Introduction
VI.2 Risk sensitivity
VI.3 Logarithmic transformations for Markov diffusions
VI.4 Auxiliary stochastic control problem
VI.5 Bounded region Q
VI.6 Small noise limits
VI.7 H∞ norm of a nonlinear system
VI.8 Risk sensitive control
VI.9 Logarithmic transformations for Markov processes
VI.10 Historical remarks
VII Singular Perturbations
VII.1 Introduction
VII.2 Examples
VII.3 Barles and Perthame procedure