Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Modulation
- 3 Demodulation
- 4 Synchronization and noncoherent communication
- 5 Channel equalization
- 6 Information-theoretic limits and their computation
- 7 Channel coding
- 8 Wireless communication
- Appendix A Probability, random variables, and random processes
- Appendix B The Chernoff bound
- Appendix C Jensen's inequality
- References
- Index
6 - Information-theoretic limits and their computation
Published online by Cambridge University Press: 05 June 2012
Summary
Information theory (often termed Shannon theory in honor of its founder, Claude Shannon) provides fundamental benchmarks against which a communication system design can be compared. Given a channel model and transmission constraints (e.g., on power), information theory enables us to compute, at least in principle, the highest rate at which reliable communication over the channel is possible. This rate is called the channel capacity.
Once channel capacity is computed for a particular set of system parameters, it is the task of the communication link designer to devise coding and modulation strategies that approach this capacity. After 50 years of effort since Shannon's seminal work, it is now safe to say that this goal has been accomplished for some of the most common channel models. The proofs of the fundamental theorems of information theory indicate that Shannon limits can be achieved by random code constructions using very large block lengths. While such constructions appeared to be computationally infeasible in terms of both encoding and decoding, the invention of turbo codes by Berrou et al. in 1993 provided implementable mechanisms for achieving just this. Turbo codes are random-looking codes obtained from easy-to-encode convolutional codes, which can be decoded efficiently using iterative decoding techniques instead of maximum-likelihood (ML) decoding (which is computationally infeasible for such constructions). Since then, a host of "turbo-like" coded modulation strategies have been proposed, including the rediscovery of the low-density parity-check (LDPC) codes invented by Gallager in the 1960s.
Fundamentals of Digital Communication, pp. 252-292. Publisher: Cambridge University Press. Print publication year: 2008.