
Conditions Under Which a Markov Chain Converges to its Steady State in Finite Time

Published online by Cambridge University Press: 27 July 2009

Peter W. Glynn
Affiliation:
Department of Operations Research, Stanford University, Stanford, CA 94305
Donald L. Iglehart
Affiliation:
Department of Operations Research, Stanford University, Stanford, CA 94305

Abstract

Analysis of the initial transient problem of Monte Carlo steady-state simulation motivates the following question for Markov chains: when does there exist a deterministic T such that P{X(T) = y | X(0) = x} = π(y), where π is the stationary distribution of X? We show that this can essentially never happen for a continuous-time Markov chain; in discrete time, such processes are i.i.d. provided the transition matrix is diagonalizable.
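
The discrete-time half of this dichotomy is easy to see in the extreme case where every row of the transition matrix equals π: the chain is exactly stationary at the deterministic time T = 1, and successive states are i.i.d. draws from π. Below is a minimal NumPy sketch of that case; the 3-state chain is a hypothetical illustration of ours, not an example from the paper.

```python
import numpy as np

# Hypothetical 3-state discrete-time chain (illustration only): every row
# of P equals the stationary distribution pi, so
# P{X(1) = y | X(0) = x} = pi(y) for every starting state x,
# i.e., the deterministic time T = 1 works.
pi = np.array([0.2, 0.3, 0.5])
P = np.tile(pi, (3, 1))

assert np.allclose(pi @ P, pi)   # pi is stationary for P
assert np.allclose(P @ P, P)     # P is idempotent, so P^T = P for all T >= 1

# The next state ignores the current one, so X(1), X(2), ... are i.i.d.
# with common distribution pi, consistent with the abstract's claim for
# diagonalizable discrete-time chains.
rng = np.random.default_rng(0)
x, states = 0, []
for _ in range(5):
    x = rng.choice(3, p=P[x])    # row P[x] is pi regardless of x
    states.append(x)
print(states)
```

Note that this P satisfies the abstract's hypothesis: as an idempotent matrix it is diagonalizable, with eigenvalues 1 and 0.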

Type
Articles
Copyright
Copyright © Cambridge University Press 1988

