
Dynamic programming principle for stochastic recursive optimal control problem with delayed systems

Published online by Cambridge University Press:  16 January 2012

Li Chen
Affiliation:
Department of Mathematics, China University of Mining Technology, Beijing 100083, P.R. China. chenli@cumtb.edu.cn
Zhen Wu
Affiliation:
School of Mathematics, Shandong University, Jinan 250100, P.R. China; wuzhen@sdu.edu.cn

Abstract

In this paper, we study one kind of stochastic recursive optimal control problem for systems described by stochastic differential equations with delay (SDDEs). In our framework, not only the dynamics of the system but also the recursive utility depend on the past path segment of the state process in a general form. We establish the dynamic programming principle for this kind of optimal control problem and show that the value function is the viscosity solution of the corresponding infinite-dimensional Hamilton-Jacobi-Bellman partial differential equation.
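To fix ideas, the following is a minimal sketch of a typical formulation of such a problem; the coefficient names b, σ, f, Φ, the delay length δ, and the control v(·) are illustrative assumptions, not necessarily the authors' notation.

\[
\begin{aligned}
& dX(t) = b\bigl(t, X_t, v(t)\bigr)\,dt + \sigma\bigl(t, X_t, v(t)\bigr)\,dW(t), \qquad t \in [0,T],\\
& X_t := \{\,X(t+\theta) : \theta \in [-\delta,0]\,\} \quad \text{(past path segment)}, \qquad X_0 = \varphi \ \text{on } [-\delta,0],\\
& -dY(t) = f\bigl(t, X_t, Y(t), Z(t), v(t)\bigr)\,dt - Z(t)\,dW(t), \qquad Y(T) = \Phi(X_T),\\
& J\bigl(v(\cdot)\bigr) := Y(0), \qquad V(\varphi) := \sup_{v(\cdot)} Y^{\varphi;\,v}(0).
\end{aligned}
\]

Here the backward stochastic differential equation for (Y, Z) defines the recursive utility, and the value function V is defined on the space of path segments. A dynamic programming principle in this setting relates V at the initial path to V evaluated at the (path-valued) state at a later time, which leads to a Hamilton-Jacobi-Bellman equation posed on that infinite-dimensional space.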

Type
Research Article
Copyright
© EDP Sciences, SMAI, 2012

