
5 - Lossy compression of scalar sources

Published online by Cambridge University Press:  05 June 2012

William A. Pearlman
Affiliation:
Rensselaer Polytechnic Institute, New York
Amir Said
Affiliation:
Hewlett-Packard Laboratories, Palo Alto, California

Summary

Introduction

Under normal circumstances, lossless compression reduces file sizes by roughly a factor of 2, sometimes a little more and sometimes a little less. Often it is acceptable, and even necessary, to tolerate some loss or distortion between the original and its reproduction. In such cases, much greater compression becomes possible. For example, the highest-quality JPEG-compressed images and MP3 audio are compressed about 6 or 7 to 1. The objective is to minimize the distortion, as measured by some criterion, for a given rate in bits per sample, or, equivalently, to minimize the rate for a given level of distortion.
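The rate-versus-distortion tradeoff can be seen numerically with a minimal sketch (not from the chapter): quantize i.i.d. Gaussian samples with a uniform quantizer at increasing bit rates and measure the mean squared error. The overload range of ±4 standard deviations is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # i.i.d. Gaussian source samples

for bits in (2, 4, 6, 8):
    levels = 2 ** bits
    lo, hi = -4.0, 4.0              # assumed quantizer support: +/- 4 std devs
    step = (hi - lo) / levels
    # Map each sample to its bin index, clipping overload samples
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    xq = lo + (idx + 0.5) * step    # reconstruct at bin midpoints
    mse = np.mean((x - xq) ** 2)
    print(f"{bits} bits/sample -> MSE {mse:.2e}")
```

Each added bit halves the step size, so the distortion drops by roughly a factor of 4 (6 dB) per bit, at the cost of a higher rate.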

In this chapter, we make a modest start toward understanding how to compress realistic sources by presenting the theory and practice of quantization and coding of sources of independent and identically distributed random variables. Later in the chapter, we explain some aspects of optimal lossy compression, so that we can assess how well our methods perform compared to what is theoretically possible.

Quantization

The sources of data that we recognize as digital are discrete in value or amplitude, and these values are represented by a finite number of bits. This set of discrete values is a reduction of a much larger set of possible values, forced upon us by the limitations of our computers and systems in precision, storage, and transmission speed. We therefore accept the general model of our data source as continuous in value. The process of reducing continuous values to discrete ones is called quantization.

Type
Chapter
Information
Digital Signal Compression
Principles and Practice
pp. 77–115
Publisher: Cambridge University Press
Print publication year: 2011


