Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Motivation
- 2 Book overview
- 3 Principles of lossless compression
- 4 Entropy coding techniques
- 5 Lossy compression of scalar sources
- 6 Coding of sources with memory
- 7 Mathematical transformations
- 8 Rate control in transform coding systems
- 9 Transform coding systems
- 10 Set partition coding
- 11 Subband/wavelet coding systems
- 12 Methods for lossless compression of images
- 13 Color and multi-component image and video coding
- 14 Distributed source coding
- Index
- References
2 - Book overview
Published online by Cambridge University Press: 05 June 2012
Summary
Entropy and lossless coding
Compression of a digital signal source is simply its representation with fewer information bits than its original representation. We exclude from compression cases in which the source is trivially over-represented, such as an image with gray levels 0 to 255 written with 16 bits each when 8 bits suffice.

The mathematical foundation of the discipline of signal compression, or what is more formally called source coding, began with the seminal paper of Claude Shannon [1, 2], entitled “A mathematical theory of communication,” which established what is now called Information Theory. This theory sets the ultimate limits on achievable compression performance. Compression is theoretically and practically realizable even when the reconstruction of the source from the compressed representation is identical to the original; we call this kind of compression lossless coding. When the reconstruction is not identical to the source, we call it lossy coding. Shannon also introduced the discipline of Rate-distortion Theory [1–3], in which he derived the fundamental performance limits of lossy coding and proved that they are achievable. Lossy coding loses information and hence introduces distortion, but this distortion can be made tolerable for the given application, and the loss is often necessary and unavoidable in order to satisfy transmission bandwidth and storage constraints. The payoff is that the degree of compression is often far greater than that achievable by lossless coding.
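The limit that Information Theory places on lossless coding is the source entropy: no lossless code can use fewer bits per symbol, on average, than the entropy of the source distribution. As a rough illustration (not from the book itself), the following sketch computes the empirical Shannon entropy of a symbol sequence; the symbol values and counts are invented for the example:

```python
from collections import Counter
from math import log2

def entropy_bits(symbols):
    """Shannon entropy, in bits per symbol, of the empirical distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A uniform source over 256 gray levels genuinely needs 8 bits per symbol,
# so writing it with 16 bits each is trivial over-representation, not a
# compression opportunity.
uniform = list(range(256))
print(entropy_bits(uniform))  # 8.0

# A heavily skewed source, by contrast, can be losslessly coded with far
# fewer bits per symbol than its 8-bit raw representation.
skewed = [0] * 900 + [1] * 90 + [2] * 10
print(entropy_bits(skewed))   # roughly 0.5 bits per symbol
```

Entropy coding techniques such as Huffman and arithmetic coding, covered in Chapter 4, are practical means of approaching this bound.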
- Type: Chapter
- Information: Digital Signal Compression: Principles and Practice, pp. 10-22. Publisher: Cambridge University Press. Print publication year: 2011