Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution
Published online by Cambridge University Press: 30 January 2018
Abstract
Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
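To make the rare-event setting concrete: the paper's adaptive, MCMC-based method is not reproduced here, but the flavor of importance sampling for a Markov jump process can be illustrated on a classical queueing benchmark. The sketch below estimates the probability that an M/M/1 queue, started with one customer, reaches a high level before emptying, using the well-known static change of measure that swaps the arrival and service rates and reweights each path by its likelihood ratio. All parameter values are illustrative assumptions.

```python
import random

def overflow_prob_is(lam=0.3, mu=1.0, level=20, n_paths=10_000, seed=0):
    """Estimate P(M/M/1 queue hits `level` before emptying | start at 1)
    by importance sampling on the embedded jump chain.

    Proposal: the classical rate-swapped dynamics (up-step probability
    becomes mu/(lam+mu)), which pushes paths toward the rare overflow;
    each step is reweighted by its likelihood ratio.
    """
    rng = random.Random(seed)
    p = lam / (lam + mu)   # up-step probability under the original chain
    q = mu / (lam + mu)    # up-step probability under the tilted proposal
    total = 0.0
    for _ in range(n_paths):
        x, w = 1, 1.0      # state and accumulated likelihood ratio
        while 0 < x < level:
            if rng.random() < q:   # simulate under the tilted dynamics
                x += 1
                w *= p / q         # likelihood ratio of an up step
            else:
                x -= 1
                w *= q / p         # likelihood ratio of a down step
        if x == level:             # rare event observed: count its weight
            total += w
    return total / n_paths
```

For this particular model the estimator has very low variance: every path that reaches the level carries the same weight, since the step-wise ratios telescope to (lam/mu)^(level-1). The state-dependent, adaptive scheme of the paper generalizes beyond such models, where a single static tilt is unavailable or inefficient.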
- Type: Research Article
- Copyright: © Applied Probability Trust