Book contents
- Frontmatter
- Contents
- List of abbreviations and acronyms
- Preface
- Acknowledgments
- 1 Introduction
- Part I Probability, random variables, and statistics
- Part II Transform methods, bounds, and limits
- Part III Random processes
- Part IV Statistical inference
- Part V Applications and advanced topics
- 20 Hidden Markov models and applications
- 21 Probabilistic models in machine learning
- 22 Filtering and prediction of random processes
- 23 Queueing and loss models
- References
- Index
22 - Filtering and prediction of random processes
from Part V - Applications and advanced topics
Published online by Cambridge University Press: 05 June 2012
Summary
The estimation of a random variable or process by observing other random variables or processes is an important problem in communications, signal processing, and other science and engineering applications. In Chapter 18 we considered a partially observable RV X = (Y, Z), where the unobservable part Z is called a latent variable. In this chapter we study the problem of estimating the unobserved part using samples of the observed part. In Chapter 18 we also considered the problem of estimating RVs, called random parameters, using the maximum a posteriori probability (MAP) estimation procedure. When the prior distribution of the random parameter is unknown, we normally assume a uniform distribution, and then the MAP estimate reduces to the maximum-likelihood estimate (MLE) (see Section 18.1.2); if the prior density is not uniform, the MLE is not optimal and does not possess the nice properties described in Section 18.1.2. If an estimator θ̂ = T(X) has a Gaussian distribution N(µ, Σ), its log-likelihood function equals, up to an additive constant, −½ times the quadratic form (t - µ)⊤ Σ⁻¹(t - µ), so the MLE is obtained by minimizing this quadratic form. If the covariance matrix Σ is diagonal, the MLE becomes what is called a minimum weighted square error (MWSE) estimate. If all the diagonal terms of Σ are equal, the MWSE becomes the minimum mean square error (MMSE) estimate. Thus, the MMSE estimator is optimal only under the assumptions cited above.
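To make the chain of reductions at the end of the summary concrete, here is a minimal NumPy sketch assuming a linear-Gaussian observation model x = Hθ + noise with noise covariance Σ; the observation matrix H, the chosen variances, and all variable names are illustrative assumptions, not taken from the chapter. With a diagonal Σ the MLE is the weighted least-squares (MWSE) solution, and when all diagonal terms of Σ are equal it reduces to ordinary (unweighted) least squares, i.e. the MMSE criterion.

```python
import numpy as np

# Illustrative sketch (not from the book): x = H @ theta + noise, noise ~ N(0, Sigma).
# The Gaussian log-likelihood is, up to a constant, -0.5 * (x - H@theta)^T Sigma^{-1} (x - H@theta),
# so the MLE minimizes this quadratic form, i.e. solves a weighted least-squares problem.

rng = np.random.default_rng(0)

n, p = 50, 2
H = rng.standard_normal((n, p))          # assumed (hypothetical) observation matrix
theta_true = np.array([1.0, -2.0])

# Diagonal noise covariance -> MLE reduces to minimum weighted square error (MWSE)
sigma2 = rng.uniform(0.1, 1.0, size=n)   # per-sample noise variances (assumed)
x = H @ theta_true + rng.standard_normal(n) * np.sqrt(sigma2)

W = np.diag(1.0 / sigma2)                # weights = inverse variances
theta_mwse = np.linalg.solve(H.T @ W @ H, H.T @ W @ x)

# If all diagonal terms of Sigma are equal (Sigma = sigma^2 I), the weights cancel
# and the MWSE estimate reduces to ordinary least squares (the MMSE criterion).
theta_ls, *_ = np.linalg.lstsq(H, x, rcond=None)

print("true theta            :", theta_true)
print("MWSE (MLE, diagonal Σ):", theta_mwse)
print("LS (equal variances)  :", theta_ls)
```

The sketch only illustrates the algebraic reductions stated in the summary; it is not the chapter's filtering or prediction machinery itself.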
- Type: Chapter
- Information: Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance, pp. 645-694
- Publisher: Cambridge University Press
- Print publication year: 2011