Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- 1 Programming overview
- 2 Ordinary differential equations
- 3 Root-finding
- 4 Partial differential equations
- 5 Time-dependent problems
- 6 Integration
- 7 Fourier transform
- 8 Harmonic oscillators
- 9 Matrix inversion
- 10 The eigenvalue problem
- 11 Iterative methods
- 12 Minimization
- 13 Chaos
- 14 Neural networks
- 15 Galerkin methods
- References
- Index
11 - Iterative methods
Published online by Cambridge University Press: 05 July 2013
Summary
We have seen direct methods for inverting matrices and for computing their eigenvalues and eigenvectors. In many cases, however, these direct methods take too long to solve a particular problem. In addition, we may be interested only in approximate solutions. When solving the Poisson problem, for example, we may already be on a coarse grid and require only a qualitative description of the solution – high precision and “exact” matrix inverses are unnecessary. On the eigenvalue side, we may need only part of the spectrum of a matrix: in quantum mechanics, for example, we might want just a few bound states for a given potential. In these cases, what we would like is a process that would ultimately give the full inverse, or the complete set of eigenvalues, but one that can be truncated “along the way” and still provide partial information.
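The idea of truncating an iteration and keeping partial spectral information can be illustrated with power iteration, which extracts only the dominant eigenpair of a matrix rather than the full spectrum. The matrix below is a hypothetical example chosen for illustration, not one taken from the text.

```python
# Power iteration: repeatedly apply A to a vector and rescale; the
# iterate aligns with the eigenvector of largest |eigenvalue|, and the
# scaling factor converges to that eigenvalue. Stopping early still
# yields a useful approximation -- partial information from a
# truncated process.

def mat_vec(A, v):
    """Dense matrix-vector product using plain Python lists."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def power_iteration(A, iters=100):
    v = [1.0] * len(A)          # arbitrary nonzero starting vector
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(abs(x) for x in w)   # infinity-norm scaling factor
        v = [x / lam for x in w]
    return lam, v

# Hypothetical symmetric test matrix with eigenvalues 3 +/- sqrt(3) and 3.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
lam, v = power_iteration(A)
print(round(lam, 4))   # dominant eigenvalue 3 + sqrt(3) ~ 4.7321
```

Each iteration costs only one matrix-vector product, so even a few steps give a rough estimate of the dominant eigenvalue without ever forming a full eigendecomposition.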
There are two broad schemes for these approximate methods, and we'll describe and see examples of both. The first applies to matrix inversion: decompose the matrix into a part that is simple to invert and a (hopefully small) remainder. We invert the simple part and use that inverse to drive an iteration that converges to the exact numerical solution (the one we would compute using QR factorization, for example). The second approach constructs a particular subspace of ℝⁿ, called a Krylov subspace, and inverts matrices and finds eigenvalues within that subspace.
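A minimal sketch of the first scheme is Jacobi iteration: split A = D + R, with D the diagonal of A (trivial to invert) and R the off-diagonal remainder, then iterate x ← D⁻¹(b − R x). The system below is a hypothetical diagonally dominant example, chosen so the iteration is guaranteed to converge.

```python
# Jacobi iteration for A x = b, using the splitting A = D + R:
#   x_{k+1} = D^{-1} (b - R x_k)
# where D is the diagonal of A and R holds the off-diagonal entries.

def jacobi(A, b, iters=200):
    n = len(b)
    x = [0.0] * n                    # initial guess x_0 = 0
    for _ in range(n and iters):
        x_new = [0.0] * n
        for i in range(n):
            # s = (R x)_i : off-diagonal part of row i applied to x
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]   # apply D^{-1}
        x = x_new
    return x

# Hypothetical diagonally dominant system with exact solution (1, 1, 1).
A = [[10.0, 2.0, 1.0],
     [1.0, 8.0, 2.0],
     [2.0, 1.0, 9.0]]
b = [13.0, 11.0, 12.0]
x = jacobi(A, b)
print([round(xi, 6) for xi in x])   # -> [1.0, 1.0, 1.0]
```

Truncating the loop after a few iterations gives exactly the kind of cheap, approximate solution described above; the error shrinks geometrically as long as the splitting makes D⁻¹R small in norm.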
- Type: Chapter
- Information: Computational Methods for Physics, pp. 274–297
- Publisher: Cambridge University Press
- Print publication year: 2013