
Utility Maximizers in Iterated Prisoner's Dilemmas*

Published online by Cambridge University Press:  05 May 2010

Jordan Howard Sobel
Affiliation:
University of Toronto

Extract

Maximizers in isolated Prisoner's Dilemmas are doomed to frustration. But in Braybrooke's view maximizers might do better in a series, securing Pareto-optimal arrangements, if not from the very beginning, at least eventually. Given certain favourable special conditions, it can be shown, according to Braybrooke, and shown even without question-begging motivational or value assumptions, that in a series of Dilemmas maximizers could manage to communicate a readiness to reciprocate, generate thereby expectations of reciprocation, and so give rise to optimizing reciprocations which, in due course, would reinforce expectations, the net result of all this being an increasingly stable practice to mutual benefit. In this way neighbours could learn to be good neighbours: they could learn to respect each other's property, to engage in reciprocal assistance, and perhaps even to make and keep promises covering a range of activities. So maximizers are, Braybrooke holds, capable of a society of sorts. Out of the ashes they might build something. But even under favourable conditions they could not build much, and most conditions would defeat them almost entirely, for many-person Dilemmas, whether isolated or in series, would be quite beyond them, question-begging motivational assumptions aside. In settings at all like those in which we live, utility maximization assuming only ‘traditional’ motivation is self-defeating without remedy, and most certainly not an adequate basis for social life.1 The probable inference from this is against utility maximization as a conception of individual rationality: for surely truly rational agents would be, if not perfectly well designed, at least better designed for communal living than utility maximizers would be assuming only ‘traditional’, non-question-begging values or motivation. Or, if this inference is denied and maximization is not challenged as a conception of rationality, we can conclude instead, Braybrooke has more recently suggested (p. 34, this issue), that the advice, ‘Be rational,’ is incomplete, and when addressed to groups should be supplemented by, ‘And be somewhat “untraditional” in your values; respect at least some rules that would make coordination and cooperation possible even in many-person interaction problems.’
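The opening claim, that maximizers in an isolated Dilemma are doomed to frustration, rests on the familiar dominance argument, which can be sketched computationally; the payoff numbers and names below are illustrative assumptions, not from the paper.

```python
# Illustrative sketch (payoff numbers assumed, not from the paper): in a
# one-shot Prisoner's Dilemma, defection strictly dominates cooperation,
# so two utility maximizers end at the Pareto-inferior outcome (D, D).

PAYOFFS = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(opponent_move):
    """Row player's utility-maximizing reply to a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is a best reply whatever the other does: a dominant strategy.
assert best_reply("C") == "D" and best_reply("D") == "D"

# Yet joint defection leaves both worse off than joint cooperation.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```

Since each player reasons the same way, both defect, which is the "frustration" the isolated case guarantees; Braybrooke's hope is that iteration, under favourable conditions, opens a route out.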

Type
Articles
Copyright
Copyright © Canadian Philosophical Association 1976


References

1 The tradition in question is that of Hobbesian state-of-nature analysis, in which tradition values are always, though generally without explicit notice, assumed to be exclusively forward-looking: in this tradition it is axiomatic that proper or rational values are for states of affairs without regard to their pasts. The motto in this tradition for proper values could be: What is done is done. Thus the seventh Law of Nature is, “That in Revenges (that is, retribution of Evil for Evil,) Men look not at the greatnesse of the evill past, but the greatnesse of the good to follow.” Hobbes, Thomas, Leviathan (New York and London, 1950), p. 126. This law could hardly appeal to men who place a value on retribution of evils done, and a value that varies with the greatness of that evil. But then such values are contrary to reason, and, equivalently, the seventh law of nature is, though Hobbes does not in Leviathan insist on this point, already unconditionally binding in nature. Far from being a help, however, this is a part of the problem in nature: more precisely, the general exclusion of backward-looking values is an important part of the problem, or at any rate my reconstruction of the problem, in nature. Regarding the irrationality of backward-looking revenge, and the special status (pointed out to me many years ago by W. D. Falk) of the seventh law, consider that this law is not only somehow “consequent to the next before it, that commandeth Pardon, upon security of the Future time,” but

Besides, Revenge without respect to the Example, and profit to come, is triumph, or glorying in the hurt of another, tending to no end; (for the End is alwayes somewhat to come;) and glorying to no end, is vainglory, and contrary to reason; and to hurt without reason, tendeth to the introduction of Warre; which is against the law of Nature, and is commonly stiled by the name Cruelty. (pp. 126–7)

Cruelty is contrary to reason whether or not one has sufficient security that others will not be cruel: that is, cruelty is contrary to reason even in nature. (Consider Hobbes, Thomas, De Cive or the Citizen (New York, 1949), p. 56: the footnote to Part I, Chapter III, paragraph 27.) And, by implication, all rational values are exclusively forward-looking, “for the End is alwayes somewhat to come.”

And so lines from Leviathan are a footnote to this paper which is, of course, but a footnote to that great work. I owe thanks to Willa Freeman Sobel for many helpful suggestions and discussions concerning in particular, but not only, the idea of forward-looking values.

2 The suggestion, not repeated in “The Insoluble Problem of the Social Contract,” is made in the following words in the paper that was presented in Toronto (see footnote *, above) and also in a revised version of that paper, titled “The Social Contract Returns, This Time as an Elusive Public Good,” presented to the American Political Science Association at its meeting in Chicago, August 29–September 2, 1974:

How much regard and respect should we have for rationality when it is not associated with suitable motivations? I think, strictly limited respect. Perhaps we could take the line that rationality by itself has no indefeasible claim to respect. Perhaps, however, we would do better to hold that since rationality, conceived minimally as operating without motivations which eliminate the problem, leads to self-defeat in Prisoner's Dilemmas and the contract problem for the “multitude” (also in Sobel's variants), rationality so conceived is an inadequate conception of rationality. Ironically, the minimal conception has attracted many thinkers because it seemed both hard-headed and ethically neutral… Yet it will not work, even in its own terms: It is, as the chief merit of studying the contract problems may be to tell us, collectively self-defeating in its own terms, which cannot be charged against conceptions more ancient, and wiser.

3 I discuss the hyperrational community in greater detail in Sobel, J. Howard, “The Need for Coercion,” Coercion: Nomos XIV, Pennock, J. R. and Chapman, J. W., eds. (Chicago and New York, 1972), pp. 149–53.

4 The awkwardness of the formula,

how likely φ would be made by Φ,

is deliberate. This formula is not supposed to express the conditional probability of φ given Φ, but is intended to express the result of adjusting the (unconditional) probability of φ by the likely influence, if any, of Φ on φ. The lesson, I hold, of Newcomb's Problem is that some such measure is to take the place of conditional probabilities in expected utility calculations. But more of this in another place.
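The contrast the footnote draws, between conditional probabilities and probabilities adjusted only for an act's actual influence, can be made concrete with Newcomb's Problem itself; the numbers and names below are hypothetical illustrations, not the paper's own calculations.

```python
# Illustrative sketch (figures assumed): expected utility in Newcomb's
# Problem computed two ways. Evidential weights use conditional
# probabilities P(prediction | act); influence-adjusted weights note that
# the act cannot affect the already-made prediction, so the probability
# stays unconditional, the same for either act.

M = 1_000_000  # opaque box contents if one-boxing was predicted
K = 1_000      # transparent box contents

def payoff(act, predicted_one_boxing):
    opaque = M if predicted_one_boxing else 0
    return opaque if act == "one-box" else opaque + K

def expected_utility(act, p_predicted_one_boxing):
    p = p_predicted_one_boxing
    return p * payoff(act, True) + (1 - p) * payoff(act, False)

# Evidential: a reliable predictor makes the prediction highly probable
# conditional on the matching act.
eu_one_evidential = expected_utility("one-box", 0.99)
eu_two_evidential = expected_utility("two-box", 0.01)

# Influence-adjusted: the act has no influence on the prediction, so both
# acts are weighted by one and the same unconditional probability, say 0.5.
eu_one_adjusted = expected_utility("one-box", 0.5)
eu_two_adjusted = expected_utility("two-box", 0.5)

assert eu_one_evidential > eu_two_evidential  # conditional weights favour one-boxing
assert eu_two_adjusted > eu_one_adjusted      # adjusted weights favour two-boxing
```

On the adjusted measure, two-boxing dominates, whatever the prediction; on conditional probabilities the calculation runs the other way, which is why the footnote holds that some influence-adjusted measure must replace conditional probabilities in expected utility calculations.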

5 The qualification is necessary. Not every situation possible for agents is possible for hyperrational maximizers. Arguments in support of this proposition, and examples of situations that are not possible for hyperrational maximizers, can be found in my “Interaction Problems for Utility Maximizers,” Canadian Journal of Philosophy (June, 1975), as well as in “The Need for Coercion,” op. cit.

6 Note that the argument being constructed for series of two-person Prisoner's Dilemmas of indefinite, unknown lengths is actually completely general. Its conclusion covers all series, whether of definite and known length or of length indefinite under some rule and unknown, of all kinds of interaction structures. The conclusion that hyperrational maximizers cannot teach or set precedents, barring ‘untraditional’ motivational assumptions, covers, for example, not only series of structures in which dominant strategies are not jointly Pareto-optimal, but also series of coordination problems in which no strategies are dominant and in which therefore there might at first appear to be more scope for teaching and precedents.

7 For an excellent discussion, with which I am in substantial disagreement, of teaching effects in a hyperrational community of maximizers, see Gibbard, Allan, Utilitarianism and Coordination (unpublished dissertation, Harvard University, Cambridge, 1971), pp. 156–75. Gibbard writes, “Perhaps Hodgson is saying it is irrational to rely on induction from past acts of promise-keeping, but I see no reason why it should be irrational.” (p. 170) My claim is of course not that among hyperrational maximizers induction from past acts is irrational, but that it is always unnecessary. The heart of the matter, if I am right, is that the only projections hyperrational maximizers could make from one another's past acts would be unneeded projections to cases quite like the base cases.

8 See Lewis, David, “Utilitarianism and Truthfulness,” Australasian Journal of Philosophy (May, 1972), pp. 17–9. Assume that ‘you’ and ‘I’ are hyperrational maximizers. (So ‘you’ and ‘I’ are not you and I.) Let them be in a coordination problem such that it is in the interest of each to push red if and only if the other pushes red: let (R, R) be, for each, one of the two best equilibria. ‘You’ has said, “I pushed red.” Can ‘I’ believe him? An affirmative answer, Lewis assures us, is consistent with what has so far been assumed, since it is consistent with their hyperrationality that ‘you’ and ‘I’ should be, and know each other to be, truthful in the extensional sense of saying only true things (when it is best to instill true beliefs). Lewis does not say that ‘you’ and ‘I’ might, consistent with their hyperrationality, know each other to be truthful in my sense (truthful in the sense of valuing saying truths), but of course they might be, and know each other to be, truthful in this sense. And supposing that they are truthful in my sense would provide one explanation of why they are extensionally truthful, and one explanation of how they know, presumably even without experience and at the very beginning of their relationship, that they are extensionally truthful. One way of explaining this would be to say that they are by their values truthful. Is there another way, given that they, ‘you’ and ‘I’, are hyperrational maximizers?
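The coordination problem just described, push red if and only if the other does, can be written out and its equilibria checked mechanically; the payoff numbers, and the labelling of the second equilibrium as (B, B), are illustrative assumptions consistent with the symmetric structure the footnote states.

```python
# Illustrative sketch (numbers assumed): a symmetric coordination game with
# two best equilibria, (R, R) and (B, B). Enumerating the pure-strategy
# Nash equilibria exhibits the selection problem that a believed "I pushed
# red" would resolve.

PAYOFFS = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("R", "R"): (1, 1),
    ("R", "B"): (0, 0),
    ("B", "R"): (0, 0),
    ("B", "B"): (1, 1),
}
MOVES = ("R", "B")

def is_equilibrium(row, col):
    """Neither player gains by a unilateral deviation."""
    row_ok = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(m, col)][0] for m in MOVES)
    col_ok = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, m)][1] for m in MOVES)
    return row_ok and col_ok

equilibria = [(r, c) for r in MOVES for c in MOVES if is_equilibrium(r, c)]
assert equilibria == [("R", "R"), ("B", "B")]
```

With two equally good equilibria, maximization alone selects neither; that is why the question whether ‘I’ can believe ‘you’, and so concentrate expectation on (R, R), carries the weight it does here.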

‘But as a matter of fact we know we are extensionally truthful. You and I know this. How else could we talk? This is “knowledge that [we] do in fact possess.” (Lewis, p. 19.) So why not inquire realistically into how we know? Perhaps that would reveal ‘another way’. Perhaps. But since we are not ‘you’ and ‘I’, since we are not hyperrational maximizers, it is just as likely that an empirical investigation would only confuse the issue.

Consider J. L. Mackie's discussion of coordination in “The Disutility of Act-Utilitarianism,” The Philosophical Quarterly (October, 1973). Regarding a symmetrical situation with two best equilibria — the structure could be

[matrix not reproduced]
he asks, “How do we solve this apparently insoluble problem?” In practice, Mackie observes, “one person happens to move… before the other… [who] then adapts his own movement to fit in with that of the first.” (p. 293) That seems right; often, at any rate, we negotiate such situations in the way Mackie describes. But hyperrational maximizers never just happen to move; they need reasons.

And there are situations possible for hyperrational maximizers in which each must move without prior observational knowledge of the other's move. (That is true even of us.) Mackie holds, incidentally, that coordination problems possessed of unique best equilibria, for example,

[matrix not reproduced]

resolve without the participants' monitoring each other's moves, but he does not say how such situations resolve supposing “each is aiming simply at maximizing utility.” (p. 291) For reasons not spelled out he holds that it is obvious that hyperrational maximizers would do well in these situations, so he does not say how they would manage, or even how we manage, in such situations. I discuss, in “Interaction Problems for Utility Maximizers,” op. cit., n. 3, pp. 681–2, a way that does not work for hyperrational maximizers, and maintain that no way does work for them.