Book contents
- Frontmatter
- Contents
- List of Acronyms
- Notation
- Foreword
- 1 Introduction to the World of Sparsity
- 2 The Wavelet Transform
- 3 Redundant Wavelet Transform
- 4 Nonlinear Multiscale Transforms
- 5 Multiscale Geometric Transforms
- 6 Sparsity and Noise Removal
- 7 Linear Inverse Problems
- 8 Morphological Diversity
- 9 Sparse Blind Source Separation
- 10 Dictionary Learning
- 11 Three-Dimensional Sparse Representations
- 12 Multiscale Geometric Analysis on the Sphere
- 13 Compressed Sensing
- 14 This Book's Take-Home Message
- Notes
- References
- Index
- Plate section
8 - Morphological Diversity
Published online by Cambridge University Press: 05 October 2015
Summary
8.1 INTRODUCTION
The content of an image is often complex, and no single transform is optimal for representing all of its features effectively. For example, the Fourier transform sparsifies globally oscillatory textures well, while the wavelet transform does a better job with isolated singularities. Even if we limit ourselves to the wavelet class of transforms, a choice must be made between, for example, the starlet transform (see Section 3.5), which yields good results for isotropic objects (such as stars and galaxies in astronomical images, or cells in biological images), and the orthogonal wavelet transform (see Section 2.5), which is well suited to bounded variation images (Cohen et al. 1999).
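The complementarity described above can be checked numerically: a pure sinusoid is sparse under the discrete Fourier transform but dense in the direct (sample) domain, while an isolated spike behaves in exactly the opposite way. A minimal NumPy sketch (the `num_significant` helper and its energy threshold are illustrative choices, not from the book):

```python
import numpy as np

N = 256
t = np.arange(N)
# Globally oscillatory signal: a pure sinusoid on an exact frequency bin.
oscillation = np.cos(2 * np.pi * 8 * t / N)
# Isolated singularity: a single spike.
spike = np.zeros(N)
spike[N // 2] = 1.0

def num_significant(coeffs, frac=1e-6):
    """Count coefficients carrying more than a tiny fraction of the energy."""
    energy = np.abs(coeffs) ** 2
    return int(np.sum(energy > frac * energy.sum()))

# The sinusoid is sparse in Fourier (two symmetric frequency bins)...
print(num_significant(np.fft.fft(oscillation)))  # 2
# ...but dense in the sample domain (most samples are significant).
print(num_significant(oscillation))
# Conversely, the spike is maximally sparse in the sample domain...
print(num_significant(spike))                    # 1
# ...but its Fourier spectrum is flat, hence maximally dense.
print(num_significant(np.fft.fft(spike)))        # 256 (= N)
```

Neither domain alone sparsifies a signal containing both kinds of features, which is precisely the motivation for combining transforms.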
If we do not restrict ourselves to fixed dictionaries associated with fast implicit transforms, such as the Fourier or wavelet dictionaries, we can design very large dictionaries containing many different shapes to represent the data effectively. Following Olshausen and Field (1996b), we can push the idea one step further by requiring that the dictionary not be fixed but rather be learned to sparsify a set of typical images (patches). Such a dictionary design problem amounts to a sparse matrix factorization and has been tackled by several authors (Field 1999; Olshausen and Field 1996a; Simoncelli and Olshausen 2001; Lewicki and Sejnowski 2000; Kreutz-Delgado et al. 2003; Aharon et al. 2006; Peyré et al. 2007). In the rest of the chapter, we restrict ourselves to fixed dictionaries with fast transforms.
8.1.1 The Sparse Decomposition Problem
In the general sparse representation framework, a signal vector x ∈ R^N is modeled as a linear combination of T elementary waveforms according to (1.1). In the case of overcomplete representations, the number of waveforms or atoms (ϕ_i)_{1≤i≤T} that form the columns of the dictionary Φ exceeds the dimension of the space in which x lives: T > N, or even T ≫ N for highly redundant dictionaries.
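An overcomplete dictionary with T > N can be assembled by concatenating two orthonormal bases, for instance the Dirac (identity) basis and a DCT basis, giving T = 2N atoms. The NumPy sketch below is illustrative (the dictionary choice and the coefficient indices are assumptions, not the book's construction); it builds such a Φ and synthesizes a signal x = Φα that is 2-sparse in the merged dictionary, although it is sparse in neither basis alone:

```python
import numpy as np

N = 64
# Orthonormal DCT-II basis, built directly: column k is the atom of
# frequency index k, phi_k[n] = sqrt(2/N) cos(pi (n + 1/2) k / N).
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
dct = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
dct[:, 0] /= np.sqrt(2.0)  # rescale the DC atom for orthonormality

# Redundant dictionary: Dirac atoms concatenated with DCT atoms.
Phi = np.hstack([np.eye(N), dct])  # shape (N, T) with T = 2N > N
T = Phi.shape[1]

# A sparse coefficient vector: one spike plus one cosine.
alpha = np.zeros(T)
alpha[10] = 1.0      # Dirac atom located at sample 10
alpha[N + 5] = 2.0   # DCT atom of frequency index 5
x = Phi @ alpha      # x = Phi alpha, only 2 of the T coefficients nonzero
```

Because Φ has more columns than rows, the decomposition of a given x over Φ is no longer unique, which is what makes the sparse decomposition problem nontrivial.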
Chapter in Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis, pp. 197-233. Publisher: Cambridge University Press. Print publication year: 2015.