
Solving the Hamilton-Jacobi-Bellman equation of stochastic control by a semigroup perturbation method

Published online by Cambridge University Press:  01 July 2016

Domokos Vermes*
Affiliation:
University of Szeged

Extract

We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional-differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controllable, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p.14)).
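To fix ideas, a Hamilton-Jacobi-Bellman equation of the kind described above can be written schematically as follows. The notation is assumed for illustration, not taken from the paper: $f(x,u)$ denotes the controlled deterministic drift, $\lambda(x,u)$ the jump intensity, $Q(\mathrm{d}y;x,u)$ the post-jump distribution, and $l(x,u)$ the running reward. In this standard formulation for processes of deterministic motion interrupted by random jumps, the value function $V$ satisfies

$$
\sup_{u}\Bigl[\, f(x,u)\cdot\nabla V(x) \;+\; \lambda(x,u)\int \bigl(V(y)-V(x)\bigr)\,Q(\mathrm{d}y;x,u) \;+\; l(x,u) \Bigr] \;=\; 0,
$$

together with appropriate boundary conditions. The gradient term $f\cdot\nabla V$ is to be read as a derivative along the deterministic trajectories, which is why the equation is a functional-differential rather than a partial differential equation; the integral term, a nonlocal perturbation of the deterministic semigroup, accounts for the random jumps.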

Type
Applied Probability in Biology and Engineering. An ORSA/TIMS Special Interest Meeting
Copyright
Copyright © Applied Probability Trust 1984 
