Controlled Markov Processes and Viscosity Solutions

This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, and two-controller, zero-sum differential games.
From inside the book
Page
... Calculus of variations I 33; I.9 Calculus of variations II 37; I.10 Generalized solutions to Hamilton-Jacobi equations 42; I.11 Existence theorems 49; I.12 Historical remarks 55; II Viscosity Solutions 57; II.1 Introduction 57; II.2 ...
Page 2
... calculus of variations. For a calculus of variations problem, the dynamic programming equation is called a Hamilton-Jacobi partial differential equation. Many first-order nonlinear partial differential equations can be interpreted ...
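To make the snippet above concrete, here is a hedged sketch of the connection it describes, under one common sign convention (the notation V, L, H is mine and need not match the book's): for the problem of minimizing an integral of a running cost L over arcs x(·) with x(t) = x, the value function formally satisfies a Hamilton-Jacobi equation whose Hamiltonian is a Legendre-type transform of L.

```latex
% Sketch, not the book's exact statement. Minimize over arcs with x(t) = x:
%   J(t, x; x(\cdot)) = \int_t^{t_1} L\bigl(s, x(s), \dot x(s)\bigr)\, ds .
% The value function V(t,x) = \inf J then formally satisfies
\[
  -\,\frac{\partial V}{\partial t}(t,x) \;+\; H\bigl(t, x, D_x V(t,x)\bigr) \;=\; 0,
\]
% where the Hamiltonian is obtained from the running cost by
\[
  H(t, x, p) \;=\; \sup_{v}\;\bigl[\, -\,v \cdot p \;-\; L(t, x, v) \,\bigr].
\]
```

Since H is generally nonlinear in the gradient D_x V, this is exactly the kind of first-order nonlinear PDE the snippet refers to.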
Page 5
... calculus of variations problems in some detail in Sections 8-10. In the formulation in Section 8, we allow the possibility that the fixed upper limit t1 in (2.8) is replaced by a time τ which is the smaller of t1 and the exit time of ...
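The exit-time modification mentioned in this snippet can be sketched as follows (the open set O, the terminal cost ψ, and the value-function notation V are my assumptions, not taken verbatim from the book):

```latex
% Sketch under assumed notation: O is an open set from which the state may exit,
% and \psi is a cost paid at the horizon t_1 or at exit, whichever comes first.
\[
  \tau \;=\; t_1 \,\wedge\, \inf\{\, s \ge t \;:\; x(s) \notin O \,\},
\]
\[
  V(t, x) \;=\; \inf \left[\, \int_t^{\tau} L\bigl(s, x(s), \dot x(s)\bigr)\, ds
      \;+\; \psi\bigl(\tau, x(\tau)\bigr) \,\right].
\]
```

With this change the fixed horizon t1 of (2.8) becomes the random (trajectory-dependent) time τ, which is why exit-time problems bring boundary conditions on ∂O into the dynamic programming equation.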
Page 19
... calculus of variations type. For further results about existence and continuity properties of optimal controls see [FR, Chap. III], Cesari [Ce] and Section 11. If the control space U is compact and U(t, x) = U°(t) ...
Page 32
... by (7.15). For x1, x2 < 0, u*(x1, x2) = (1, 0) if c1 ≤ ac2, and u*(x1, x2) = (0, 1) if c1 ≥ ac2. See [S1] for additional examples. I.8 Calculus of variations I. In the remainder of this ...
Contents
Deterministic Optimal Control | 1
Viscosity Solutions | 57
Classical Solutions | 119
Controlled Markov Diffusions in Rⁿ | 151
Second-Order Case | 199
Logarithmic Transformations and Risk Sensitivity | 227
Singular Perturbations | 261
Singular Stochastic Control | 293
Finite Difference Numerical Approximations | 321
Differential Games | 375
A. Duality Relationships | 397
References | 409
Index | 425
Other editions

Controlled Markov Processes and Viscosity Solutions, Wendell H. Fleming and Halil Mete Soner, 2006.
Common terms and phrases
admissible control, assume, assumptions, boundary condition, boundary data, bounded, Brownian motion, C₁, C¹(Q), calculus of variations, Chapter, classical solution, consider, constant, constraint, controlled Markov diffusion, convergence, convex, Corollary, defined, definition, denote, differential games, dynamic programming equation, dynamic programming principle, Dynkin formula, Example, exists, exit, finite, formulation, given, Hence, HJB equation, holds, implies, inequality, initial data, Ishii, Lemma, linear, Lipschitz continuous, Markov chain, Markov control policy, Markov processes, minimizing, Moreover, nonlinear, obtain, optimal control, optimal control problem, partial derivatives, partial differential equation, progressively measurable, proof of Theorem, prove, reference probability system, Remark, result, risk sensitive, satisfies, Section, semigroup, Soner, stochastic control, stochastic control problem, stochastic differential equations, subset, Suppose, t₁, Theorem 9.1, uniformly continuous, unique, value function, Verification Theorem, viscosity solution, viscosity subsolution, viscosity supersolution