This chapter considers how connectionist neural networks offer a contrast to the symbolic view of representation discussed in previous chapters. We start by reviewing the structure of neural networks inspired by neurobiology, comparing a single unit in a neural network to a biological neuron. The second section looks at the simplest form of neural network -- a single-layer network that learns via the perceptron convergence rule. The third section introduces multilayer neural networks and the development of the backpropagation algorithm. Next, we look at how multilayer neural networks are trained and consider whether such training is biologically plausible. The last section summarizes three critical features of information processing in neural networks, as opposed to physical symbol systems: distributed representations, the lack of a clear distinction between storing and processing information, and the ability to learn.
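To make the perceptron convergence rule concrete, here is a minimal sketch of how a single-layer unit adjusts its weights from examples. The dataset (logical AND), the learning rate, and the function names are illustrative assumptions, not taken from the chapter; the weight-update step, however, is the standard perceptron rule: change each weight in proportion to the error between the target output and the unit's actual output.

```python
# Illustrative sketch of the perceptron convergence rule (example data
# and parameters are assumptions, not drawn from the chapter text).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target 0 or 1."""
    n = len(samples[0][0])
    w = [0.0] * n  # one weight per input line
    b = 0.0        # bias (threshold) term
    for _ in range(epochs):
        for x, t in samples:
            # Unit fires (outputs 1) if the weighted sum exceeds the threshold.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y  # weights change only when the output is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the rule is guaranteed to converge.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Because a single-layer network can only separate its inputs with a straight line, this rule succeeds on AND but fails on functions like XOR; that limitation is part of what motivates the multilayer networks and backpropagation discussed in the later sections.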