
Bounds on Data Compression Ratio with a Given Tolerable Error Probability

Published online by Cambridge University Press: 27 July 2009

Ilan Sadeh
Affiliation: Visnet Ltd., P.O. Box 627, Pardessiya 42815, Israel

Abstract

The paper treats data compression from the viewpoint of probability theory, in the setting where a certain error probability is tolerable. We obtain bounds on the minimal rate, for a given error probability, for block coding of general stationary ergodic sources. An application of the theory of large deviations provides numerical methods to compute, for memoryless sources, the minimal compression rate given a tolerable error probability. Interesting connections between Cramér's functions and Shannon's theory for lossy coding are found.
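To make the large-deviations connection concrete, the following is a minimal numerical sketch for a binary memoryless source, under standard assumptions rather than the paper's own algorithm: a fixed-rate block code that assigns codewords only to sequences whose empirical self-information is at most R fails exactly when (1/n) sum of -log2 P(X_i) exceeds R, an event whose probability the Chernoff/Cramér bound controls by 2^(-n I(R)), where I is the Legendre transform of the log-moment generating function of the self-information. The function names and the parameters p, n, and eps below are illustrative.

import math

def cumulant(p, lam):
    # Log-MGF (base 2) of the self-information -log2 P(X) of a
    # Bernoulli(p) source: Lambda(lam) = log2 sum_x P(x)^(1 - lam).
    # Assumes 0 < p < 1.
    return math.log2(p ** (1.0 - lam) + (1.0 - p) ** (1.0 - lam))

def cramer_rate(p, R, lam_max=50.0, steps=5000):
    # Cramér rate function I(R) = sup_{lam >= 0} [lam*R - Lambda(lam)],
    # approximated by a coarse grid search (enough for illustration).
    return max(lam * R - cumulant(p, lam)
               for lam in (lam_max * k / steps for k in range(steps + 1)))

def minimal_rate(p, eps, n, tol=1e-9):
    # Smallest rate R (bits/symbol) whose Chernoff bound 2^(-n*I(R)) on the
    # probability of an uncodable (atypical) block does not exceed eps.
    target = -math.log2(eps) / n          # need I(R) >= target
    h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)  # entropy H(p)
    lo, hi = h, 1.0                       # useful rates lie in [H(p), 1]
    if cramer_rate(p, hi) < target:
        return 1.0  # rate 1 codes every binary n-block exactly (zero error)
    while hi - lo > tol:                  # I(R) is nondecreasing on [H, 1]
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if cramer_rate(p, mid) >= target else (mid, hi)
    return hi

if __name__ == "__main__":
    p, n, eps = 0.11, 1000, 1e-6          # illustrative parameters
    h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
    print(f"entropy H(p) = {h:.4f} bits/symbol")
    print(f"minimal rate (n={n}, eps={eps}) = {minimal_rate(p, eps, n):.4f} bits/symbol")

For these parameters the computed rate exceeds the entropy H(p) by a gap that shrinks as the blocklength n grows; this finite-blocklength gap is the quantity the paper's bounds characterize.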

Type: Research Article
Copyright: © Cambridge University Press 1998


References

1. Algoet, P.H. & Cover, T.M. (1988). A sandwich proof of the Shannon-McMillan-Breiman theorem. Annals of Probability 16: 899–909.
2. Arimoto, S. (1973). On the converse to the coding theorem for discrete memoryless channels. IEEE Transactions on Information Theory IT-19: 357–359.
3. Barron, A.R. (1985). The strong ergodic theorem for densities: Generalized Shannon-McMillan-Breiman theorem. Annals of Probability 13: 1292–1303.
4. Bender, C.M. & Orszag, S.A. (1987). Advanced mathematical methods for scientists and engineers. New York: McGraw-Hill.
5. Blahut, R.E. (1972). Computation of channel capacity and rate distortion functions. IEEE Transactions on Information Theory IT-18: 460–473.
6. Blahut, R.E. (1987). Principles and practice of information theory. Reading, MA: Addison-Wesley.
7. Bleistein, N. & Handelsman, R.A. (1975). Asymptotic expansions of integrals. New York: Holt, Rinehart and Winston.
8. Breiman, L. (1957). The individual ergodic theorem of information theory. Annals of Mathematical Statistics 28: 809–811 (corrected in 31: 809–810).
9. Covo, Y. (1992). Error bounds for noiseless channels by an asymptotic large deviations theory. M.Sc. thesis, Tel-Aviv University.
10. Covo, Y. & Schuss, Z. (1991). Error bounds for noiseless channels by an asymptotic large deviations theory. Preliminary report.
11. Csiszár, I. & Longo, G. (1971). On the error exponent for source coding and for testing simple statistical hypotheses. First published by the Hungarian Academy of Sciences, Budapest.
12. Davisson, L.D., Longo, G., & Sgarro, A. (1981). The error exponent for the noiseless encoding of finite ergodic Markov sources. IEEE Transactions on Information Theory IT-27: 431–438.
13. Dueck, G. & Körner, J. (1979). Reliability function of a discrete memoryless channel at rates above capacity. IEEE Transactions on Information Theory IT-25: 82–85.
14. Gardiner, C.W. (1985). Handbook of stochastic methods for physics, chemistry and the natural sciences. Springer-Verlag.
15. Gray, R.M. (1975). Sliding block source coding. IEEE Transactions on Information Theory IT-21: 357–368.
16. Gray, R.M. (1990). Entropy and information theory. Springer-Verlag.
17. Gray, R.M., Neuhoff, D.L., & Omura, J.K. (1975). Process definitions of distortion rate functions and source coding theorems. IEEE Transactions on Information Theory IT-21: 524–532.
18. Knessl, C., Matkowsky, B.J., Schuss, Z., & Tier, C. (1985). An asymptotic theory of large deviations for Markov jump processes. SIAM Journal on Applied Mathematics 46(6): 1006–1028.
19. Longo, G. & Sgarro, A. (1979). The source coding theorem revisited: A combinatorial approach. IEEE Transactions on Information Theory IT-25: 544–548.
20. Mackenthun, K.M. & Pursley, M.B. (1978). Variable rate universal block source coding subject to a fidelity constraint. IEEE Transactions on Information Theory IT-24(3): 340–360.
21. Marton, K. (1974). Error exponent for source coding with a fidelity criterion. IEEE Transactions on Information Theory IT-20: 197–199.
22. Omura, J. (1973). A coding theorem for discrete time sources. IEEE Transactions on Information Theory IT-19: 490–498.
23. Orey, S. (1985). On the Shannon-Perez-Moy theorem. Contemporary Mathematics 41: 319–327.
24. Ornstein, D.S. & Shields, P.C. (1990). Universal almost sure data compression. Annals of Probability 18: 441–452.
25. Sadeh, I. (1996). Universal data compression algorithm based on approximate string matching. Probability in the Engineering and Informational Sciences 10: 465–486.
26. Shannon, C.E. (1948). A mathematical theory of communication. Bell System Technical Journal 27: 379–423, 623–656.
27. Shannon, C.E. (1959). Coding theorems for a discrete source with a fidelity criterion. IRE National Convention Record, Part 4: 142–163.
28. Ziv, J. (1972). Coding of sources with unknown statistics. Part 1: Probability of encoding error; Part 2: Distortion relative to a fidelity criterion. IEEE Transactions on Information Theory IT-18: 384–394.