Published online by Cambridge University Press: 01 April 2022
A computational theory of induction must be able to identify the projectible predicates, that is, to distinguish the predicates that can be used in inductive inferences from those that cannot. The problems of projectibility are introduced by reviewing some of the stumbling blocks for the theory of induction developed by the logical empiricists. My diagnosis of these problems is that the traditional theory of induction, which starts from a given (observational) language in relation to which all inductive rules are formulated, does not go deep enough in representing the kind of information used in inductive inferences.
As an interlude, I argue that the problem of induction, like so many other problems within AI, is a problem of knowledge representation. To the extent that AI-systems are based on linguistic representations of knowledge, these systems will face essentially the same problems over induction as the logical empiricists did.
In a more constructive mode, I then outline a non-linguistic knowledge representation based on conceptual spaces. The fundamental units of these spaces are "quality dimensions". In relation to such a representation it is possible to define "natural" properties, which can then be used for inductive projections. I argue that this approach evades most of the traditional problems.
An earlier version of this article was presented at a conference on the Philosophy of Science, Dubrovnik, April 1987, and at an AI-workshop on Inductive Reasoning, Roskilde, April 1987. I wish to thank the participants of these meetings as well as Johan van Benthem, Jens Erik Fenstad, Lars Löfgren, David Makinson, Ilkka Niiniluoto, Claudio Pizzi and two anonymous referees for helpful comments.