
Optimal control of ultimately bounded stochastic processes

Published online by Cambridge University Press:  22 January 2016

Yoshio Miyahara
Affiliation: Shizuoka University

Extract


We shall consider the optimal control of a system governed by a stochastic differential equation, where u(t, x) is an admissible control and W(t) is a standard Wiener process. By an optimal control we mean a control which minimizes the cost and, in addition, makes the corresponding Markov process stable.
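
The displayed equation itself is not reproduced in this extract. As an illustrative sketch only, a controlled Itô equation of the following generic form is the standard setting for such problems; the coefficient names a and b and the initial condition are assumptions, not the paper's notation:

\[
dX(t) = a\bigl(X(t),\, u(t, X(t))\bigr)\,dt + b\bigl(X(t),\, u(t, X(t))\bigr)\,dW(t), \qquad X(0) = x_0 .
\]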

Type: Research Article
Copyright © Editorial Board of Nagoya Mathematical Journal 1974
