Controlled Markov Processes and Viscosity Solutions

This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, and cover two-controller, zero-sum differential games.
From inside the book
Page 3
... constants, known to the planner. Let $x(s) = (x_1(s), \cdots, x_n(s))$, $u(s) = (u_1(s), \cdots, u_n(s))$, $d = (d_1, \cdots, d_n)$. They are, respectively, the inventory and control vectors at time $s$, and the demand ...
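This snippet is from the book's opening production-planning example. As a minimal sketch, assuming the standard inventory dynamics $dx/ds = u(s) - d$ (production rate minus constant demand; the excerpt truncates before the book states the dynamics, so this form is an assumption), the state equation can be simulated as follows:

```python
import numpy as np

# Minimal sketch of the production-planning state dynamics suggested by the
# snippet: x(s) is the inventory vector, u(s) the production (control) vector,
# d the constant demand vector. The dynamics dx/ds = u(s) - d are an
# assumption; the excerpt truncates before the book states them.

def simulate_inventory(x0, control, d, t0=0.0, t1=1.0, steps=100):
    """Forward-Euler simulation of dx/ds = u(s) - d on [t0, t1]."""
    x = np.asarray(x0, dtype=float)
    h = (t1 - t0) / steps
    path = [x.copy()]
    for k in range(steps):
        s = t0 + k * h
        x = x + h * (control(s) - d)
        path.append(x.copy())
    return np.array(path)

# Example: two products, constant production slightly above demand.
d = np.array([1.0, 0.5])
path = simulate_inventory(x0=[0.0, 0.0], control=lambda s: d + 0.1, d=d)
print(path[-1])  # inventory at time t1
```

Forward Euler is used only to keep the sketch self-contained; any ODE integrator would serve.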
Page 5
... constant $K$, since $U \subset \{v : |v| \le p\}$ for large enough $p$. If $f(t, \cdot, v)$ has a continuous gradient $f_x$, (3.1) is equivalent to the condition $|f_x(t,x,v)| \le K$ whenever $|v| \le p$. A control is a bounded, Lebesgue ...
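The full statement of condition (3.1) is not visible in the excerpt, but the surrounding text suggests it is a Lipschitz bound on $f$ in the state variable, uniform over bounded controls. Under that reading (an assumption), the equivalence the snippet refers to is the standard calculus fact:

```latex
% Assumed reading of (3.1): a uniform Lipschitz bound in x.
% If f(t, ., v) has a continuous gradient f_x, then
\[
  |f(t,x,v) - f(t,y,v)| \le K\,|x - y| \quad \text{whenever } |v| \le p
  \quad\Longleftrightarrow\quad
  |f_x(t,x,v)| \le K \quad \text{whenever } |v| \le p .
\]
% (<=) follows from the mean value theorem along the segment from y to x;
% (=>) follows by letting y -> x in the difference quotient.
```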
Page 9
... constant $M \ge 0$.) The method of dynamic programming uses the value function as a tool in the analysis of the optimal control problem. In this section and the following one we study some basic properties of the value function. Then we ...
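For context, the value function referred to here is, in the notation of the neighboring snippets (and assuming the book's convention of minimizing the cost functional $J$):

```latex
% Value function of the deterministic control problem; U(t,x) denotes the
% admissible controls starting from (t,x), as in the page 10 excerpt.
% Minimization is assumed, matching the inequality quoted there.
\[
  V(t,x) = \inf_{u(\cdot) \,\in\, \mathcal{U}(t,x)} J(t,x;\,u).
\]
```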
Page 10
... constant (or nearly constant). Indeed, for a small positive $\delta$, choose a $\delta$-optimal admissible control $u(\cdot) \in \mathcal{U}(t,x)$. Then for any $r \in [t, t_1]$ we have $\delta + V(t,x) \ge J(t,x;u) = \int_t^{t_1} L(s, x(s), u(s))$ ...
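The inequality in this excerpt is the usual first step in deriving the dynamic programming principle. A sketch of the statement being proved, in the notation of the snippet (the exact form in the book may differ):

```latex
% Dynamic programming principle (sketch): for every r in [t, t_1],
\[
  V(t,x) = \inf_{u(\cdot) \,\in\, \mathcal{U}(t,x)}
  \left\{ \int_t^{r} L\bigl(s, x(s), u(s)\bigr)\,ds + V\bigl(r, x(r)\bigr) \right\}.
\]
% A delta-optimal control u satisfies J(t,x;u) <= V(t,x) + delta by
% definition, which yields the inequality quoted in the excerpt.
```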
Page 19
... constant $M_K$ such that $|W(t,x) - W(t',x')| \le M_K\,(|t - t'| + |x - x'|)$ for all $(t,x), (t',x') \in K$. If one can choose $M = M_K$ which does not depend on $K$ ...
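To make the quoted estimate concrete, here is a small numerical sketch that estimates the Lipschitz constant $M_K$ of a function $W$ over a sampled compact set; the function $W$ below is a stand-in chosen for illustration, not one from the book:

```python
import itertools
import numpy as np

# Sketch of the estimate in the snippet: on each compact set K,
# |W(t,x) - W(t',x')| <= M_K (|t - t'| + |x - x'|). We estimate M_K
# for a sample function W on a grid; W is a hypothetical stand-in.

def lipschitz_estimate(W, points):
    """Largest ratio |W(p) - W(q)| / (|t - t'| + |x - x'|) over sampled pairs."""
    best = 0.0
    for (t, x), (tp, xp) in itertools.combinations(points, 2):
        denom = abs(t - tp) + abs(x - xp)
        if denom > 0:
            best = max(best, abs(W(t, x) - W(tp, xp)) / denom)
    return best

W = lambda t, x: abs(x) * np.exp(-t)          # locally Lipschitz stand-in
grid = [(t, x) for t in np.linspace(0, 1, 11)
               for x in np.linspace(-1, 1, 11)]
print(lipschitz_estimate(W, grid))            # numerical M_K on this grid
```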
Contents
Deterministic Optimal Control | 1
Viscosity Solutions | 57
Classical Solutions | 119
Controlled Markov Diffusions in Rⁿ | 151
Second-Order Case | 199
Logarithmic Transformations and Risk Sensitivity | 227
Singular Perturbations | 261
Singular Stochastic Control | 293
Finite Difference Numerical Approximations | 321
Differential Games | 375
A Duality Relationships | 397
References | 409
Index | 425
Other editions
Controlled Markov Processes and Viscosity Solutions, Wendell H. Fleming, Halil Mete Soner (2006)
Common terms and phrases
admissible control assume assumptions boundary condition boundary data bounded Brownian motion C₁ C¹(Q calculus of variations Chapter classical solution consider constant constraint controlled Markov diffusion convergence convex Corollary defined definition denote differential games dynamic programming equation dynamic programming principle Dynkin formula Example exists exit finite formulation given Hence HJB equation holds implies inequality initial data Ishii Lemma linear Lipschitz continuous Markov chain Markov control policy Markov processes minimizing Moreover nonlinear obtain optimal control optimal control problem partial derivatives partial differential equation progressively measurable proof of Theorem prove reference probability system Remark result risk sensitive satisfies Section semigroup Soner stochastic control stochastic control problem stochastic differential equations subset Suppose t₁ Theorem 9.1 uniformly continuous unique value function Verification Theorem viscosity solution viscosity subsolution viscosity supersolution