Book contents
- Frontmatter
- Contents
- Preface
- Notation
- 1 Introduction and Examples
- 2 Statistical Decision Theory
- 3 Linear Discriminant Analysis
- 4 Flexible Discriminants
- 5 Feed-forward Neural Networks
- 6 Non-parametric Methods
- 7 Tree-structured Classifiers
- 8 Belief Networks
- 9 Unsupervised Methods
- 10 Finding Good Pattern Features
- A Statistical Sidelines
- Glossary
- References
- Author Index
- Subject Index
5 - Feed-forward Neural Networks
Published online by Cambridge University Press: 05 August 2014
Summary
A great deal of hyperbole has been devoted to neural networks, both in their first wave around 1960 (Widrow & Hoff, 1960; Rosenblatt, 1962) and in their renaissance from about 1985 (chiefly inspired by Rumelhart & McClelland, 1986). The ideas of biological relevance seem to us to have detracted from the essence of what is being discussed, and they are certainly not relevant to practical applications in pattern recognition. Because ‘neural networks’ has become a popular subject, it has collected many techniques which are only loosely related and were not originally biologically motivated. In this chapter we discuss the core area of feed-forward or ‘back-propagation’ neural networks, which can be seen as extensions of the ideas of the perceptron (Section 3.6). From this connection, these networks are also known as multi-layer perceptrons.
A formal definition of a feed-forward network is given in the glossary. Informally, such a network consists of units with one-way connections to other units, and the units can be labelled from inputs (low numbers) to outputs (high numbers) so that each unit is only connected to units with higher numbers. The units can always be arranged in layers so that connections go from one layer to a later layer. This is best seen graphically; see Figure 5.1.
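The defining property above — units numbered so that every connection runs from a lower-numbered unit to a higher-numbered one — means the whole network can be evaluated in a single pass through the units in order. The following is a minimal sketch of that idea (not code from the book; the function names, the dictionary representation of connections, and the logistic activation are illustrative assumptions):

```python
import math

def logistic(x):
    """A common choice of smooth activation function (assumed here)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(n_units, inputs, connections, biases):
    """Evaluate a feed-forward network in one pass over the units.

    inputs:      {unit: value} for the input units (low numbers)
    connections: {(i, j): weight}, with i < j for every connection,
                 which is exactly the feed-forward labelling property
    biases:      {unit: bias} for non-input units (default 0)
    """
    assert all(i < j for (i, j) in connections), "connections must be feed-forward"
    value = dict(inputs)  # unit number -> activation
    for j in range(n_units):
        if j in value:
            continue  # input unit: value already given
        # Every unit feeding j has a lower number, so its value is known.
        s = biases.get(j, 0.0) + sum(
            w * value[i] for (i, jj), w in connections.items() if jj == j
        )
        value[j] = logistic(s)
    return value

# Tiny example: inputs 0 and 1, one hidden unit 2, one output unit 3.
net = {(0, 2): 1.0, (1, 2): 1.0, (2, 3): 2.0}
out = forward(4, inputs={0: 0.5, 1: -0.5}, connections=net, biases={3: -1.0})
```

Because the labelling guarantees each unit depends only on earlier units, no iteration or cycle-handling is needed; this is what distinguishes feed-forward networks from recurrent ones.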
- Pattern Recognition and Neural Networks, pp. 143–180. Publisher: Cambridge University Press. Print publication year: 1996.