25 - Hardness Results for Feed-Forward Networks
Published online by Cambridge University Press: 26 February 2010
Summary
Introduction
In this chapter we show that the consistency problem can be hard for some very simple feed-forward neural networks. In Section 25.2, we show that, for certain graded spaces of feed-forward linear threshold networks with binary inputs, the consistency problem is NP-hard. This shows that, for each such family of networks, unless RP = NP, there can be no efficient learning algorithm in the restricted learning model and hence, in particular, no efficient learning algorithm in the standard model of Part 1. These networks are somewhat unusual in that the output unit is constrained to compute a conjunction. In Section 25.3, we extend the hardness result to networks with an arbitrary linear threshold output unit, but with real inputs. In Section 25.4, we describe similar results for graded classes of feed-forward sigmoid networks with linear output units, showing that approximately minimizing sample error is NP-hard for these classes. This shows that, unless RP = NP, there can be no efficient learning algorithm in the restricted learning model of Part 3.
Linear Threshold Networks with Binary Inputs
For each positive integer n, we define a neural network on n inputs as follows. The network has n binary inputs and k + 1 linear threshold units (k ≥ 1). It has two layers of computation units, the first consisting of k linear threshold units, each connected to all of the inputs. The second layer consists of a single output unit which, as noted in the introduction, is constrained to compute the conjunction of the outputs of the k first-layer units.
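As a concrete illustration, the following is a minimal Python sketch of the function computed by such a network: each of the k first-layer linear threshold units sees all n binary inputs, and the output unit computes the conjunction of their outputs, as described above. The particular weight vectors and thresholds in the example are illustrative only, not taken from the text.

```python
import numpy as np

def threshold_unit(w, theta, x):
    """Linear threshold unit: outputs 1 iff w . x >= theta."""
    return 1 if np.dot(w, x) >= theta else 0

def two_layer_threshold_network(hidden_weights, hidden_thresholds, x):
    """Two-layer linear threshold network on binary inputs.

    hidden_weights: k weight vectors, one per first-layer unit,
        each connected to all n inputs.
    hidden_thresholds: the k corresponding thresholds.
    The output unit computes the conjunction (AND) of the k
    first-layer outputs.
    """
    hidden_outputs = [
        threshold_unit(w, theta, x)
        for w, theta in zip(hidden_weights, hidden_thresholds)
    ]
    return int(all(hidden_outputs))

# Illustrative example: n = 3 binary inputs, k = 2 first-layer units.
if __name__ == "__main__":
    W = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
    T = [1.0, 2.0]
    print(two_layer_threshold_network(W, T, np.array([1, 1, 0])))  # hidden (1, 0) -> 0
    print(two_layer_threshold_network(W, T, np.array([0, 1, 1])))  # hidden (1, 1) -> 1
```

The consistency problem for this class asks, given a labelled sample of binary vectors, whether some choice of first-layer weights and thresholds makes the network agree with every label; the NP-hardness result of Section 25.2 concerns that decision problem.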
Neural Network Learning: Theoretical Foundations, pp. 331-341. Cambridge University Press, print publication year 1999.