Book contents
- Frontmatter
- Contents
- Preface
- 1 Basic probability theory
- 2 Convergence
- 3 Introduction to conditioning
- 4 Nonlinear parametric regression analysis and maximum likelihood theory
- 5 Tests for model misspecification
- 6 Conditioning and dependence
- 7 Functional specification of time series models
- 8 ARMAX models: estimation and testing
- 9 Unit roots and cointegration
- 10 The Nadaraya–Watson kernel regression function estimator
- References
- Index
2 - Convergence
Published online by Cambridge University Press: 28 October 2009
Summary
In this chapter we consider various modes of convergence: weak and strong convergence of random variables, weak and strong laws of large numbers, convergence in distribution and central limit theorems, weak and strong uniform convergence of random functions, and uniform weak and strong laws. The material in this chapter revises and extends sections 2.2–2.4 of Bierens (1981).
Weak and strong convergence of random variables
In this section we shall deal with the concepts of convergence in probability and almost sure convergence, and various laws of large numbers. Throughout we assume that the random variables involved are defined on a common probability space {Ω,ℑ,P}. The first concept is well known:
Definition 2.1.1 Let (Xn) be a sequence of r.v.'s. We say that Xn converges in probability to a r.v. X if for every ε > 0, limn→∞ P(∣Xn − X∣ < ε) = 1, and we write: Xn → X in pr. or plimn→∞ Xn = X.
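Definition 2.1.1 can be made concrete by simulation. The sketch below (not from the book; the function name, sample sizes, and the choice of Xn as the mean of n Uniform(0,1) draws with plim X = 0.5 are illustrative assumptions) estimates P(∣Xn − X∣ < ε) by Monte Carlo and shows it approaching 1 as n grows:

```python
import random

def prob_within_eps(n, eps=0.05, reps=2000, seed=0):
    """Monte Carlo estimate of P(|Xbar_n - 0.5| < eps), where Xbar_n
    is the mean of n Uniform(0,1) draws, so plim Xbar_n = 0.5."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.random() for _ in range(n)) / n
        if abs(xbar - 0.5) < eps:
            hits += 1
    return hits / reps

# The estimated probability rises toward 1 as n increases,
# as Definition 2.1.1 requires for every fixed eps > 0.
for n in (10, 100, 1000):
    print(n, prob_within_eps(n))
```

Note that the definition fixes ε and lets n grow: for each n the probability is a number, and convergence in probability is a statement about that sequence of numbers, not about any individual realization.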
However, almost sure convergence is a much stronger convergence concept:
Definition 2.1.2 Let (Xn) be a sequence of r.v.'s. We say that Xn converges almost surely (a.s.) to a r.v. X if there is a null set N ∈ ℑ (that is, a set in ℑ satisfying P(N) = 0) such that for every ω ∈ Ω\N, limn→∞ Xn(ω) = X(ω), and we write: Xn → X a.s. or limn→∞ Xn = X a.s.
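The difference from Definition 2.1.1 is that a.s. convergence is a statement about individual sample paths ω: along almost every path the realized sequence Xn(ω) converges as an ordinary sequence of numbers. The sketch below (an illustration, not from the book; the function name and the use of running means of Uniform(0,1) draws, which converge a.s. to 0.5 by the strong law of large numbers, are assumptions) follows one path ω and checks that its tail stays uniformly close to the limit:

```python
import random

def running_mean_path(n_max, seed=42):
    """One sample path omega: the running means of n_max Uniform(0,1)
    draws. By the strong LLN, on almost every path these -> 0.5."""
    rng = random.Random(seed)
    total = 0.0
    path = []
    for n in range(1, n_max + 1):
        total += rng.random()
        path.append(total / n)
    return path

path = running_mean_path(100_000)
# Pathwise convergence: the worst deviation over the tail n >= 50_000
# on this single path is already small, and shrinks as n grows.
tail_dev = max(abs(x - 0.5) for x in path[50_000:])
print(tail_dev)
```

A single simulated path cannot prove a.s. convergence, of course; the point is only to show what "limn→∞ Xn(ω) = X(ω) for every ω outside a null set" asks of a realization, as opposed to the probability statement in Definition 2.1.1.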
Topics in Advanced Econometrics: Estimation, Testing, and Specification of Cross-Section and Time Series Models, pp. 19–47. Publisher: Cambridge University Press. Print publication year: 1994.