Solving the Hamilton-Jacobi-Bellman equation of stochastic control by a semigroup perturbation method
Published online by Cambridge University Press: 01 July 2016
Extract
We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional-differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controlled, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p.14)).
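For orientation, an equation of the kind described above can be sketched as follows. The notation here is assumed, not taken from the article: $V$ denotes the value function, $f(x,u)$ the controlled deterministic drift, $\lambda(x,u)$ the jump intensity, $Q(\mathrm{d}y; x,u)$ the post-jump distribution, and $\ell(x,u)$ a running cost. A Hamilton-Jacobi-Bellman equation for such a jump-plus-drift process then takes the general form

$$
\sup_{u \in U} \Bigl[\, f(x,u) \cdot \nabla V(x) \;+\; \lambda(x,u) \int_E \bigl( V(y) - V(x) \bigr)\, Q(\mathrm{d}y;\, x, u) \;+\; \ell(x,u) \,\Bigr] \;=\; 0,
$$

together with boundary conditions on the state space $E$. The integral term, which evaluates $V$ away from the point $x$, is what makes this a functional-differential rather than a purely differential equation, and the gradient term $f \cdot \nabla V$ is a derivative of $V$ along the deterministic flow, consistent with the differentiability of $V$ along trajectories noted in the extract.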
- Type: Applied Probability in Biology and Engineering. An ORSA/TIMS Special Interest Meeting
- Copyright: © Applied Probability Trust 1984