Controlled Markov Processes and Viscosity Solutions

This book is an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. It covers dynamic programming for deterministic optimal control problems, as well as the corresponding theory of viscosity solutions. New chapters in this second edition introduce the role of stochastic optimal control in portfolio optimization and in pricing derivatives in incomplete markets, and cover two-controller, zero-sum differential games.
From inside the book
... cost with infinite horizon, 25; I.8 Calculus of variations I, 33; I.9 Calculus of variations II, 37; I.10 Generalized ... function, 99; II.11 Discounted cost with infinite horizon, 105; II.12 ...
Page 1
... function (or cost function) which depends on the control inputs to the system, then the problem is one of optimal control. In this introductory chapter we are concerned with deterministic optimal control models in which the dynamics of ...
Page 2
... function V is introduced which is the optimum value of the payoff considered as a function of initial data. See ... cost function in the control problem. The reader should refer to Section 3 for notations and assumptions used in this ...
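The excerpt above describes the value function V: the optimal payoff regarded as a function of the initial data, computed by the dynamic programming recursion. A minimal sketch of that idea, for a discrete-time, finite-horizon problem (the grid, dynamics, and costs below are illustrative assumptions, not the book's example):

```python
# Minimal sketch (illustrative assumptions): the value function V(t, x)
# as the optimal cost over controls, as a function of initial data,
# computed by backward dynamic programming:
#   V(T, x) = psi(x),
#   V(t, x) = min_u { L(x, u) + V(t+1, f(x, u)) }.

states = range(-5, 6)          # x in {-5, ..., 5} (placeholder grid)
controls = (-1, 0, 1)          # u in {-1, 0, 1}
T = 4                          # horizon

def f(x, u):                   # dynamics: next state, clamped to the grid
    return max(-5, min(5, x + u))

def L(x, u):                   # running cost
    return x * x + u * u

def psi(x):                    # terminal cost
    return abs(x)

V = {T: {x: psi(x) for x in states}}
for t in range(T - 1, -1, -1):
    V[t] = {x: min(L(x, u) + V[t + 1][f(x, u)] for u in controls)
            for x in states}

print(V[0][0])  # optimal cost starting from x = 0 at time 0 -> 0
```

Starting at x = 0, the control u = 0 incurs zero running and terminal cost, so V(0, 0) = 0, which the recursion recovers.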
Page 6
... cost function and the terminal cost function. B. Control until exit from a closed cylindrical region Q. Consider the following payoff functional J, which depends on states x(s) and controls u(s) for times s ∈ [t, τ), where τ ...
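The excerpt above introduces a payoff accumulated until the trajectory leaves a cylindrical region Q. A minimal numerical sketch of that structure, using an Euler discretization (the dynamics, costs, and region below are placeholder assumptions, not the book's):

```python
# Illustrative sketch (placeholder choices, not the book's code):
# approximate the payoff J for "control until exit from a closed
# cylindrical region Q = [0, t1] x O", where tau is the first time
# the state leaves O or the horizon t1 is reached.

def payoff(x0, u_seq, dt=0.01, t1=1.0):
    O_lo, O_hi = -1.0, 1.0       # O = (-1, 1), a placeholder region
    f = lambda x, u: u           # dynamics dx/ds = u (assumed)
    L = lambda x, u: x * x       # running cost (assumed)
    g = lambda s, x: abs(x)      # boundary/terminal cost (assumed)
    x, s, J = x0, 0.0, 0.0
    for u in u_seq:
        if s >= t1 or not (O_lo < x < O_hi):
            break                # tau: exit from Q
        J += L(x, u) * dt        # accumulate running cost up to tau
        x += f(x, u) * dt        # Euler step
        s += dt
    return J + g(s, x)           # add the cost paid at exit (or at t1)
```

With the zero control from x0 = 0 the state never leaves O, so J reduces to the terminal cost g(t1, 0) = 0; a large control drives the state out of O quickly and the boundary cost g dominates.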
Page 7
... function. Thus, for real numbers a, b, χ_{a<b} = 1 if a < b and 0 if a ≥ b, and χ_{a≤b} is defined similarly. The function g is called a boundary cost function, and is assumed continuous. B'. Control until exit from Q. Let (t, x) ...
Contents
... | 1
Viscosity Solutions | 57
Classical Solutions | 119
Controlled Markov Diffusions in Rⁿ | 151
Second-Order Case | 199
Logarithmic Transformations and Risk Sensitivity | 227
Singular Perturbations | 261
Singular Stochastic Control | 293
Finite Difference Numerical Approximations | 321
Differential Games | 375
A Duality Relationships | 397
References | 409
Index | 425
Other editions: Controlled Markov Processes and Viscosity Solutions, Wendell H. Fleming and Halil Mete Soner, 2006.
Common terms and phrases
admissible control, assume, assumptions, boundary condition, boundary data, bounded, Brownian motion, C¹(Q̄), calculus of variations, Chapter, classical solution, consider, constant, constraint, controlled Markov diffusion, convergence, convex, Corollary, defined, definition, denote, differential games, dynamic programming equation, dynamic programming principle, Dynkin formula, Example, exists, exit, finite, formulation, given, Hence, HJB equation, holds, implies, inequality, initial data, Ishii, Lemma, linear, Lipschitz continuous, Markov chain, Markov control policy, Markov processes, minimizing, Moreover, nonlinear, obtain, optimal control, optimal control problem, partial derivatives, partial differential equation, progressively measurable, proof of Theorem, prove, reference probability system, Remark, result, risk sensitive, satisfies, Section, semigroup, Soner, stochastic control, stochastic control problem, stochastic differential equations, subset, Suppose, t₁, Theorem 9.1, uniformly continuous, unique, value function, Verification Theorem, viscosity solution, viscosity subsolution, viscosity supersolution