Book contents
- Frontmatter
- Contents
- Preface
- Abbreviations
- Notations
- List of algorithms
- 1 Introduction
- 2 Digital communication
- 3 Estimation theory and Monte Carlo techniques
- 4 Factor graphs and the sum–product algorithm
- 5 Statistical inference using factor graphs
- 6 State-space models
- 7 Factor graphs in digital communication
- 8 Decoding
- 9 Demapping
- 10 Equalization: general formulation
- 11 Equalization: single-user, single-antenna communication
- 12 Equalization: multi-antenna communication
- 13 Equalization: multi-user communication
- 14 Synchronization and channel estimation
- 15 Appendices
- References
- Index
1 - Introduction
Published online by Cambridge University Press: 08 January 2010
Summary
Motivation
Claude E. Shannon was one of the great minds of the twentieth century. In the 1940s, he almost single-handedly created the field of information theory and gave the world a new way to look at information and communication. One of his fundamental contributions was the channel-coding theorem, in which he proved the existence of error-correcting codes that can transmit information at any rate below channel capacity with an arbitrarily small probability of error. Unfortunately, Shannon never described how to construct these codes. Ever since his 1948 landmark paper “A mathematical theory of communication” [1], the channel-coding theorem has tantalized researchers worldwide in their quest for the ultimate error-correcting code. After more than forty years, state-of-the-art error-correcting codes were still disappointingly far from Shannon's theoretical capacity bound. No drastic improvement seemed to be forthcoming, and researchers were turning to a more practical benchmark, the cut-off rate [2], which could be achieved by practical codes.
In 1993, two hitherto little-known French researchers from the ENST in Bretagne, Claude Berrou and Alain Glavieux, claimed to have discovered a new type of code that operated very close to the Shannon capacity with reasonable decoding complexity. The decoding process consisted of two decoders passing information back and forth, giving rise to the name “turbo code.” They first presented their results at the IEEE International Conference on Communications in Geneva, Switzerland [3]. Quite understandably, they were met with a certain amount of skepticism by the traditional coding community. Only when their findings had been reproduced by other labs did the turbo idea really take off.
Iterative Receiver Design, pp. 1–4. Cambridge University Press. Print publication year: 2007.