Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Motivation
- 2 Book overview
- 3 Principles of lossless compression
- 4 Entropy coding techniques
- 5 Lossy compression of scalar sources
- 6 Coding of sources with memory
- 7 Mathematical transformations
- 8 Rate control in transform coding systems
- 9 Transform coding systems
- 10 Set partition coding
- 11 Subband/wavelet coding systems
- 12 Methods for lossless compression of images
- 13 Color and multi-component image and video coding
- 14 Distributed source coding
- Index
- References
12 - Methods for lossless compression of images
Published online by Cambridge University Press: 05 June 2012
Summary
Introduction
In many circumstances, data are collected that must be preserved perfectly. Data that are especially expensive to collect, that require substantial computation to analyze, or that carry legal liability for imprecise representation should be stored and retrieved without any loss of accuracy. Medical data, such as images acquired from X-ray, CT (computed tomography), and MRI (magnetic resonance imaging) machines, are the most common examples where perfect representation is required in almost all circumstances, regardless of whether it is truly necessary for the diagnostic task. The inaccuracies introduced by the acquisition and digitization processes are ignored in this requirement of perfection; it is only in the subsequent compression that the digitized data must be perfectly preserved. Physicists and materials scientists conduct experiments that produce data written as long streams or large arrays of samples in floating-point format. These experiments are very expensive to set up, so there is often an insistence that, if the data are compressed, the decompressed data must be identical to the original.
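The lossless requirement described above can be stated operationally: decompressing the compressed data must recover the original bit for bit. As a minimal sketch of such a round-trip check (using Python's general-purpose `zlib` codec purely for illustration — not any image-specific method from this chapter — on a small, hypothetical stream of floating-point samples):

```python
import struct
import zlib

# Hypothetical experiment output: a short stream of floating-point samples.
samples = [3.14159, 2.71828, -1.0, 6.02214e23, 1.38065e-23]

# Pack to bytes (IEEE 754 double precision), then compress losslessly.
raw = struct.pack(f"{len(samples)}d", *samples)
compressed = zlib.compress(raw, level=9)

# Decompress and verify bit-exact recovery of both the bytes and the values.
restored = zlib.decompress(compressed)
assert restored == raw
recovered = list(struct.unpack(f"{len(restored) // 8}d", restored))
assert recovered == samples
```

The assertions are the point: a lossless scheme may be judged solely on compression ratio and speed, because fidelity is guaranteed to be exact by construction.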
Nowadays, storage and transmission systems are overwhelmed with huge quantities of data. Although storage technology has made enormous strides in increasing density and reducing cost, it seems that whatever progress is made is not enough. The users and producers of data continue to adapt to these advances almost instantaneously and fuel demand for even more storage at less cost. Even when huge quantities of data can be accommodated, retrieval and transmission delays remain serious issues.
- Type: Chapter
- Digital Signal Compression: Principles and Practice, pp. 361–372
- Publisher: Cambridge University Press
- Print publication year: 2011