
Bounds for the chi-square approximation of the power divergence family of statistics

Published online by Cambridge University Press:  09 August 2022

Robert E. Gaunt
Affiliation: The University of Manchester
Postal address: Department of Mathematics, The University of Manchester, Oxford Road, Manchester M13 9PL, UK. Email: robert.gaunt@manchester.ac.uk

Abstract

It is well known that each statistic in the family of power divergence statistics, across n trials and r classifications with index parameter $\lambda\in\mathbb{R}$ (the Pearson, likelihood ratio, and Freeman–Tukey statistics correspond to $\lambda=1,0,-1/2$, respectively), is asymptotically chi-square distributed as the sample size tends to infinity. We obtain explicit bounds on this distributional approximation, measured using smooth test functions, that hold for a given finite sample size n and all index parameters ($\lambda>-1$) for which such finite-sample bounds are meaningful. Our bounds are of the optimal order $n^{-1}$; their dependence on the index parameter $\lambda$ and the cell classification probabilities is also optimal, and their dependence on the number of cells is respectable. Our bounds generalise, complement, and improve on recent results from the literature.
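For context, the Cressie–Read power divergence statistic takes the form $T_\lambda = \frac{2}{\lambda(\lambda+1)}\sum_{j=1}^{r} N_j\bigl[(N_j/(np_j))^{\lambda}-1\bigr]$, where $N_1,\dots,N_r$ are the observed cell counts and $p_1,\dots,p_r$ the null cell probabilities, with the cases $\lambda=0$ and $\lambda=-1$ defined by continuity. The following Python sketch is not taken from the paper; the function name and simulation setup are illustrative assumptions. It computes $T_\lambda$ for the Pearson ($\lambda=1$), likelihood ratio ($\lambda=0$), and Freeman–Tukey ($\lambda=-1/2$) cases and compares each value with the limiting chi-square distribution on $r-1$ degrees of freedom.

```python
# A minimal illustrative sketch (not from the paper): the power divergence
# statistic T_lambda of Cressie and Read (1984) for a multinomial sample,
# compared with its limiting chi-square(r - 1) distribution.
# The function name and the simulation parameters below are assumptions
# made for illustration only.

import numpy as np
from scipy import stats


def power_divergence_statistic(counts, probs, lam):
    """Return T_lambda; lam = 1, 0, -1/2 give the Pearson, likelihood
    ratio and Freeman-Tukey statistics. Assumes all observed counts are
    positive when lam < 0."""
    counts = np.asarray(counts, dtype=float)
    expected = counts.sum() * np.asarray(probs, dtype=float)
    if lam == 0.0:
        # Likelihood ratio statistic: the limit of T_lambda as lambda -> 0,
        # with the convention 0 * log(0) = 0.
        ratio = np.where(counts > 0, counts / expected, 1.0)
        return 2.0 * np.sum(counts * np.log(ratio))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(
        counts * ((counts / expected) ** lam - 1.0)
    )


# n = 200 trials spread over r = 5 equiprobable cells (illustrative choice).
rng = np.random.default_rng(0)
n, probs = 200, np.full(5, 0.2)
counts = rng.multinomial(n, probs)

for lam in (1.0, 0.0, -0.5):  # Pearson, likelihood ratio, Freeman-Tukey
    t = power_divergence_statistic(counts, probs, lam)
    p_value = stats.chi2.sf(t, df=len(probs) - 1)  # chi-square(r - 1) tail
    print(f"lambda = {lam:+.1f}: T = {t:.3f}, approximate p-value = {p_value:.3f}")
```

For a sample of this size the three statistics typically take similar values and the chi-square p-values nearly agree, reflecting the common $\chi^2_{r-1}$ limit; how good this approximation is for a given finite n is exactly what the bounds in the paper quantify.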

Type: Original Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust


Supplementary material: Gaunt supplementary material (PDF, 162.8 KB)