
Principlism, Uncodifiability, and the Problem of Specification

Published online by Cambridge University Press:  15 January 2024

Timothy J. Furlan*
Affiliation:
Burnett Family Distinguished Professor of Ethics, Director, Center for Ethical Leadership, University of St Thomas, 3800 Montrose Blvd, Houston, TX 77006, USA

Abstract

In this paper I critically examine the implications of the uncodifiability thesis for principlism as a pluralistic and non-absolute generalist ethical theory. I begin with (1) a brief overview of W.D. Ross’ ethical theory and his focus on general but defeasible prima facie principles, before turning to (2) the revival of principlism in contemporary bioethics through the influential work of Tom Beauchamp and James Childress; (3) the widespread adoption of specification as a response to the indeterminacy of abstract general principles and the limitations of balancing and deductive approaches; and (4) the challenges raised to fully specified principlism by the uncodifiability thesis. Finally, (5) I offer a defense of the uncodifiability thesis against various critiques that have been raised.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Introduction

The uncodifiability thesis asserts that there is no way to fully delineate the relationship between moral and nonmoral properties. It follows that there is no way to finitely or reasonably detail the exceptions to the moral principles and rules that pluralistic generalists support.Footnote 1 From this brief description, it should be apparent that, if the uncodifiability thesis is true, monistic theories about the good and the right face serious challenges. This is because monistic theories hold that the relationship between moral and nonmoral properties is strictly reducible to one common denominator, namely, whatever their respective monisms are predicated upon. However, not all generalists are monists, and pluralist ethical theories that support non-absolute generalities do not obviously succumb to such a critique. In this article, I explore the uncodifiability thesis in greater detail and critically examine its implications for principlism as a pluralistic and non-absolute generalist ethical theory. In particular, I focus on the form of principlism developed by Tom Beauchamp and James Childress because of the significant influence their work has had on the field of bioethics.

Ross, Intuitionism, and Ethical Pluralism

Historically, W.D. Ross’ intuitionism has often been classified as a form of commonsense ethics. Accordingly, he believed that one of the most important goals of ethical theory is to account for the actual ethical beliefs and practices held by ordinary people. In addition, Ross placed a high priority on the role of good judgment or practical wisdom in ethics, both in determining how to resolve conflicts of duties and in ascertaining what duties are relevant to a given case. Like a number of other commonsense ethicists, Ross believed that this conception of a complex and pluralistic commonsense morality can best be captured by general, but defeasible, ethical principles. Because of this focus on principles, I will refer to Ross and similar theorists as “principlists” and to their theories as “principlism.”Footnote 2

Principlists argue that the principles they support are both foundational and the locus of moral certainty in ethics. Such principles differ from those of traditional ethical theories by being prima facie, rather than absolute, and by endorsing a variety of independent and irreducible ethical goods. Monistic ethical theories can (and often do) espouse so-called mid-level exceptionable principles that are often identical to the principles of principlists, but such mid-level principles are reducible to other theoretical commitments and thus are neither foundational nor as emphasized as the exceptionable principles of principlists. Ross’ theory recognizes six main duties, each of which can be formulated as a defeasible principle:

  1. Duties that result from one’s own past actions, which can be further divided into two subtypes:

     (a) Duties of fidelity (i.e., I promised)

     (b) Duties of reparation (i.e., I did some wrong)

  2. Duties arising from the previous acts of others (duties of gratitude)

  3. Duties of justice

  4. Duties of beneficence

  5. Duties of self-improvement

  6. Duties of nonmaleficenceFootnote 3

Duties such as these are viewed as fundamental and foundational because they are not derived from other duties or from other theoretical commitments. In addition, disagreements about the number and type of duties can be resolved by ascertaining whether the duties in question are wholly underived from other duties. Depending on the result of such inquiries, the list of fundamental duties can conceivably be lengthened or shortened. Ross himself at times does so by, for example, subsuming the fifth duty under the fourth. From the short list of fundamental duties, one can then develop a longer and more specific list of secondary, derived duties. Furthermore, Ross acknowledges that both fundamental and derived duties are often found intertwined and at times in conflict. This interaction might occur in a relatively innocuous form, such that one might have an obligation to perform a specific action that arises from several of the above fundamental duties. For example, the obligation to help a parent may arise from duties (1), (2), and (4).

A more troublesome interaction occurs when duties conflict, especially fundamental duties. This potential for conflict among fundamental duties and their respective principles gives rise to one of the most important claims of principlism, namely, that such principles (and duties) are not absolute. Ross formulates his principles with the clear understanding that exceptions can be made to them. However, the only allowable exceptions are those that arise when two principles conflict. For example, at times duties that arise from promises can conflict with duties that arise from justice or beneficence. If one’s duties are absolute, an irresolvable impasse is reached in such situations, and rational and moral action becomes impossible. Unfortunately, such conflicts, even if not common, do occur with some frequency, and any moral theory that allows them to lead to impasses appears deficient and impractical.

Advocates of Ross’ non-absolute principles claim that they can avoid this breakdown of moral rationality by allowing for pertinent exceptions. When such principles conflict, one “balances” them against each other or otherwise evaluates them to decide which principle carries the most “weight” and thus should be followed. Because Ross’ principles are defeasible, they have traditionally been called prima facie principles, but Brad Hooker has suggested that they would more accurately be called pro tanto principles.Footnote 4 Referring to such principles as prima facie suggests that, when they are instantiated in cases, they appear to be reasons at first glance but, upon closer examination, either the first impression was mistaken or the reason disappears. In contrast, pro tanto means “as far as this goes,” and this terminology more accurately suggests that the reason, while it may be overridden by other principles, remains a relevant reason, arguably even with all of its original force.

Using particularist terminology, this understanding of pro tanto principles suggests that moral properties do not change either the direction or strength of their valence when they are overturned in ethical conflicts. Rather, they are simply overwhelmed by other considerations at those times. According to this understanding, if an act is just or kind or truthful, etc., it is always a right-making feature of such acts. On this account, not all just acts are right, for there can be other and stronger moral considerations that apply to specific acts, but justice always counts in favor of an act. Thus, the pro tanto principle “one ought to keep one’s promises” entails that promise-keeping is always good-making, but it is only obligatory so long as it does not conflict with another moral principle. To briefly summarize, Ross’ pro tanto principles claim that a duty, if present, always counts for or against an action. If only one duty is present, it decides the action. At times, two or more duties will apply to the same action, and if they conflict, one chooses to follow the more important one. The duty that is not followed still retains its inherent good- or bad-making essence even though it does not decide the moral outcome of the case.
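
To make the logical shape of this pro tanto model concrete, here is a minimal Python sketch. It is purely illustrative: the duty names, numeric weights, and the additive balancing rule are my own stand-ins (indeed, numeric summation is exactly the sort of intuitive balancing criticized below), not anything found in Ross.

```python
from dataclasses import dataclass

@dataclass
class ProTantoReason:
    """A duty instantiated in a case; its valence never changes."""
    name: str
    valence: int    # +1 always counts for the act, -1 always against it
    weight: float   # contextual importance in this case (hypothetical)

def verdict(reasons):
    """Balance the reasons present in a case.

    Outweighed reasons are not cancelled: they keep their full valence
    and weight and are simply overwhelmed in the overall balance.
    """
    if not reasons:
        return "no duty applies"
    score = sum(r.valence * r.weight for r in reasons)
    return "act" if score > 0 else "refrain"

# A promise conflicts with avoiding a serious harm (weights invented):
case = [
    ProTantoReason("fidelity (I promised)", valence=+1, weight=2.0),
    ProTantoReason("nonmaleficence (the act would harm)", valence=-1, weight=5.0),
]
print(verdict(case))  # 'refrain': fidelity still counts in favor, but is overwhelmed
```

The point of the sketch is only the fixed sign of `valence`: on the pro tanto reading, promise-keeping never flips from good-making to bad-making, however the balance comes out.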

One problem that has historically been raised regarding Ross’ ethical theory, and principlism in general, is that of balancing principles against each other. This issue arises when two or more principles are relevant to a case and not all of them can be followed. There are two ways to approach such balancing. First, one can rank the six (or so) fundamental principles a priori. Second, one can balance or weigh principles against each other after they are instantiated in specific cases.Footnote 5 Traditionally, many philosophers have taken Ross to follow the second option, holding that such ranking is not possible apart from particular cases. However, David McNaughton claims that Ross does suggest that a priori ranking in the first sense is possible; that is, some duties, such as nonmaleficence, are simply more “stringent” than others, such as beneficence.Footnote 6 McNaughton goes on to claim that this attempt by Ross to order his principles ultimately fails, but still suggests that, through discernment or good judgment, one can decide which principle should be given priority on a case-by-case basis. Jonathan Dancy interestingly suggests that this move to the second option is inevitable, for he argues that ethical pluralism necessarily drives one to a particularist epistemology, because only through such an epistemology can one solve the problem of ordering a variety of fundamental properties or principles.Footnote 7

Finally, Ross’ ethical theory relies on intuitive induction both to understand and to ground his six foundational principles. Intuitive induction is the process of learning general truths by examining a small number of specific examples of such truths. For example, one might recognize in certain cases that an action is right because it is just and, after seeing this in various cases, come to realize that justice is universally right and thus can be formulated as a pro tanto principle. This process is inductive because it relies on extrapolation from a small number of case examples, but it is intuitive in that it relies on a leap of understanding that mere induction cannot justify. This intuitive leap is typically held to be justified because the truth being apprehended is self-evident and because the relationship between the moral and nonmoral properties being understood is a necessary one. However, this notion of self-evidence is highly controversial.

In this regard, Ross’ theory has traditionally been discounted because of his strong and explicit reliance on intuition as foundationally justificatory. It was this issue, more than any other, that caused his theory to fall out of theoretical favor. Additionally, this use of intuition as a justificatory foundation is connected to the prior problem of balancing. If intuition is the means to decide which principles one ought to follow, it also seems likely that it should be used to determine that one principle has more importance than another in a specific case (or overall). However, as Henry Richardson points out, “the problem with intuitive balancing [of principles] is not its unattainability but its arbitrariness and lack of rational grounding.”Footnote 8 While Ross’ principle-based ethical intuitionism is one of the more historically important defenses of principles, its reliance on intuition and problems with balancing ultimately brought it and similar theories into philosophical disrepute.Footnote 9

Contemporary Principlism

Although Ross’ theory is still often discussed, this dismissal of principlism by moral philosophers has continued more or less consistently to the present day. Interestingly, though, in the latter part of the twentieth century a strong revival of principlism occurred in biomedical ethics. In fact, over the last thirty to forty years, principlism has become arguably the most important and influential theory in the field. This revival is due in large part to the work of Beauchamp and Childress and their influential textbook Principles of Biomedical Ethics.Footnote 10 Their four-principle theory emphasizing autonomy, beneficence, nonmaleficence, and justice is currently the most widely followed form of bioethical principlism.Footnote 11

Contemporary principlists follow Ross in claiming that morality is both pluralistic and complex and that this can best be understood by arguing that general principles informed by the various components of morality are foundational. Contemporary principlists tend to differ from Ross both in the number of principles that they support (for instance, Beauchamp and Childress support four, and others support more or fewer) and in their understanding of the justification of such principles. Contemporary principlists’ understandings of how the principles they support are justified are varied and interesting,Footnote 12 but too complex to give more than a brief overview here. One approach is the one Ross took: to argue that principles are foundational and justified by intuition or because they are self-evident.

Another approach, and perhaps the most popular one, is to argue that the selected principles are not foundational, but rather mid-level and universally accepted. This can be understood in various ways, and contemporary principlists are often vague about which way one should take their claim. First, the claim could be that such principles are found or taken from commonsense morality and thus accepted by all normal or moral human beings. This claim hints that common morality is itself foundational, but it also leaves open the possibility that a traditional or other specific ethical theory is correct and explicatory of much or all of what we commonly believe.Footnote 13 Second, the claim could be that all important ethical theories, both traditional and contemporary, can and do commonly support general but not foundational principles. These principles are called mid-level because they lack the foundational quality that is typical of high-level principles, but they are still too general and theoretical to be lower-level (i.e., more specific and practical) claims. This claim suggests that ethical theories ultimately provide foundational support, but it makes no declaration as to which specific theory is correct. A third approach, which is the least popular, is to argue that a specific, typically traditional, ethical theory such as virtue ethics, utilitarianism, deontology, or natural law theory is correct and that its insights can be best applied to specific practical and especially biomedical cases using the selected principles.Footnote 14

Most contemporary principlists respond to the question of the grounding of principles by claiming that, since nearly everyone accepts them and they are useful for explaining and resolving ethical problems, we can take them at face value and apply them without worry, leaving the more theoretical and difficult work of providing their philosophical grounding to those with such inclinations or interests.Footnote 15 At times, they also argue that the four most popular principles of autonomy, nonmaleficence, beneficence, and justice can be individually derived from more developed ethical theories. For instance, justice can be derived from a Rawlsian social contract theory, autonomy from Kantian theories, beneficence from utilitarianism, and nonmaleficence from virtue ethics or natural law theory. This suggestion illustrates contemporary principlists’ important assumption that commonsense morality is plural, but it is difficult to see what it contributes to the problem of grounding principles.

Like Ross, contemporary principlists at times struggle to determine which principle should be followed when principles conflict in specific cases. In this regard, Beauchamp and Childress follow a balancing approach in the first few editions of their Principles of Biomedical Ethics, but they later switch to a model of making their principles more specific in order to avoid most conflict, while still relying on balancing in specific cases when conflict is unavoidable. However, contemporary principlists tend to focus more on a priori lexical ordering than Ross did. The most common way to perform lexical ordering is to assign each principle a priority and then, when applying them, attempt to completely satisfy the highest-ranked principle before others can be evaluated.Footnote 16 For example, John Rawls lexically orders the two principles of justice derived from the original position;Footnote 17 Tristram Engelhardt lexically orders autonomy over beneficence;Footnote 18 Bernard Gert, Charles Culver, and K. Danner Clouser partially lexically rank their principles, with nonmaleficence being ranked over the others;Footnote 19 and Robert Veatch proposes a partial lexical ranking, with non-consequentialist principles being ranked over consequentialist principles.Footnote 20
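
The contrast between balancing and lexical ordering can be put schematically. In this hypothetical Python sketch (the ranking, options, and scores are invented for illustration and do not reproduce any of the cited authors’ actual orderings), a lexical procedure fully consults the highest-ranked principle first, and lower-ranked principles matter only to break ties among the options that survive:

```python
# Illustrative rank order: an Engelhardt-style priority of autonomy.
PRINCIPLES = ["autonomy", "nonmaleficence", "beneficence", "justice"]

def lexical_choice(options):
    """options: dict mapping each option to its score on each principle.

    Unlike balancing, no amount of lower-ranked benefit can offset a
    deficit on a higher-ranked principle.
    """
    candidates = list(options)
    for principle in PRINCIPLES:
        best = max(options[o][principle] for o in candidates)
        candidates = [o for o in candidates if options[o][principle] == best]
        if len(candidates) == 1:
            break  # a higher-ranked principle has settled the matter
    return candidates

options = {
    "treat over objection": {"autonomy": 0, "nonmaleficence": 1, "beneficence": 1, "justice": 0},
    "respect refusal":      {"autonomy": 1, "nonmaleficence": 1, "beneficence": 0, "justice": 0},
}
print(lexical_choice(options))  # ['respect refusal']
```

The design point is that a lexical ordering never trades principles off: “respect refusal” wins on autonomy, so its loss on beneficence is never even weighed.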

Beauchamp and the Nature of Principles

At this point, I would like to turn to a specific example of contemporary principlism and critically examine how such principles are both formulated and grounded. As one of the foremost proponents of principlism in bioethics, Beauchamp will serve as a useful model for this task. Beauchamp defines an ethical principle as “a fundamental standard of conduct on which many other moral standards and judgments depend.”Footnote 21 He goes on to claim that “a principle is a norm in a system of thought or belief, forming a basis of moral reasoning in that system.”Footnote 22 Significantly, Beauchamp argues that the concept of a principle involved in his work is different from that of ethical principles that have been used in the past. Historically speaking, a principle was viewed as being:

  (1) General.

  (2) Normative.

  (3) Substantive.

  (4A) Unexceptionable.

  (5A) Foundational.

  (6) Theory-summarizing.Footnote 23

In this regard, Beauchamp still accepts conditions (1)–(3), but argues that conditions (4A)–(6) do not apply to his new conception of principles. In addition, he rejects conditions (4A)–(6) for many of the same reasons that casuists would, namely that they immerse one in many of the problems of so-called deductivist ethical theories.Footnote 24 Beauchamp’s new conception of principles is taken from Ross’ formulation of pro tanto principles and claims that principles are as follows:

  (1) General.

  (2) Normative.

  (3) Substantive.

  (4B) Exceptionable (prima facie).

  (5B) Nonfoundational.

At first glance, this understanding of principles would seem to do much to assuage several of the traditional concerns regarding principlism. In response, I would argue that there are two problems with this account of principles. First, I think that Beauchamp, like many contemporary casuists such as Albert Jonsen and Stephen Toulmin, is unclear about the grounding of his principles.Footnote 25 On the one hand, at times he suggests that his principlism can either be theory-free or rely on extant ethical theories for support. I find both of these responses, however, to be questionable. On the other hand, he almost always refers to and uses his principles as if they were foundational. That is, although he claims that he is not taking a stance on moral theory, he also hints or explicitly claims that his four general principles are indefeasible, universally agreed upon, and the general locus of moral certainty. Thus, he seems to be providing the basic components of a moral theory and using such components to support the rest of his claims, but he is still unwilling to accept the burden of either developing the theory or providing theoretical support for it.

This brings me to my next concern, which is that important questions still remain regarding the nature of justification in principlism. Beauchamp answers this in part when he claims that principles receive support from Rawlsian considered judgments, which are “justified without argumentative support and are the proper starting points for moral thinking.”Footnote 26 Beauchamp explains that considered judgments have four necessary conditions: (1) a moral judgment occurs; (2) impartiality is maintained; (3) the person making the judgment is competent to make it; and (4) the judgment is generalizable to apply to all cases relevantly similar to those originally judged.Footnote 27 Moreover, Beauchamp claims that one needs a form of coherence theory as a background for considered judgments, to ascertain that all such judgments are compatible with one another. In particular, Beauchamp embraces a form of Rawlsian reflective equilibrium by claiming that “a proper theoretical ideal is to make principles and the relevant features of considered judgments coincide, perhaps through a process of mutual adjustment.”Footnote 28
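
Beauchamp’s appeal to “mutual adjustment” can be pictured as an iterative procedure. The following Python sketch is a deliberately crude gloss on reflective equilibrium; the `conflicts` test, the `credence` scores, and the revision rule are all my own stand-ins for the substantive coherence judgments a real deliberator would have to make, not Beauchamp’s or Rawls’s method:

```python
from dataclasses import dataclass

@dataclass
class Norm:
    content: str
    credence: float  # how firmly held (a stand-in for 'considered'-ness)

def reflective_equilibrium(principles, judgments, conflicts, max_rounds=100):
    """Crude mutual adjustment: while a principle and a considered
    judgment conflict, revise (here: drop) whichever is less firmly held."""
    for _ in range(max_rounds):
        clash = next(((p, j) for p in principles for j in judgments
                      if conflicts(p, j)), None)
        if clash is None:
            return principles, judgments  # coherence reached
        p, j = clash
        if p.credence < j.credence:
            principles.remove(p)  # the judgment wins, bottom up
        else:
            judgments.remove(j)   # the principle wins, top down
    return principles, judgments  # may halt short of full equilibrium
```

Even this toy makes the gap pressed on below visible: everything substantive is hidden in `conflicts` and `credence`, which the four conditions for considered judgments do not themselves supply.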

Unfortunately, the two sets of conditions for considered judgments and principles appear to be conflicting, or at least unconnected. For instance, what are these considered judgments about? They could be judgments of cases, of principles, or of both. Beauchamp originally hints that such judgments are in reference to cases (see the fourth of his conditions above), but he later suggests that they apply to either cases or principles: “the considered judgments with which we begin in constructing an ethical theory themselves can be at any level of generality and may be expressed as principles, rules, maxims, ideals, models, and even as normative judgments about cases.”Footnote 29 Beauchamp goes on to claim that this system allows for a top-down or bottom-up approach: “if these considered judgments occur at a lower level of generality than principles, they support principles bottom up, rather than being supported by principles top down.”Footnote 30

However, Beauchamp’s foundational support for his principles still does not answer the question it was supposed to answer, namely, why his four principles are supported (or correct) as opposed to other conceptions of morality. For example, Beauchamp’s considered judgments do not rule out the traditional ethical theories that he opposes. For instance, Kant’s categorical imperative or the utilitarian claim about happiness readily fulfills all four of Beauchamp’s rules regarding considered judgments. If Kantian or utilitarian claims are supported by considered judgments—which are apparently the only support for Beauchamp’s own principles—why can they not also function as rules or principles? This points toward a disconnect between Beauchamp’s understanding of acceptable moral principles and the judgments that ground them. The condition of exceptionability (non-universalizability) appears to be merely an ad hoc restriction to prevent the support of traditional monistic theories.

At this point, one might be tempted to resolve this issue by relying on the fourth of Beauchamp’s conditions for considered judgments, namely that “the judgment is generalizable to apply to all cases relevantly similar to those originally judged.”Footnote 31 At first glance, that condition seems reasonable and appears to rule out monist theories. However, upon closer inspection it appears to be little more than a platitude. Of course, one can only generalize to relevantly similar cases—if the case is not relevantly similar, then there is no basis for a generalization. The real conceptual difficulty is ascertaining the definition of “relevant similarity.” For example, Kant could make the impartial, competent, moral judgment that the categorical imperative is generalizable to all cases of human interaction, because all such cases are relevantly similar to each other in that they involve rational human beings. To a certain extent, monistic theories have an easier time explaining relevant similarity than pluralistic theories, for they can readily claim that their monist foundation is the criterion of similarity that classifies all ethical cases. How does the pluralist tell if a case is relevantly ethically similar to another? Beauchamp gives no answer to this, and indeed, principlism tends to overlook this question. Because of its structure, this question tends to arise more frequently in discussions of casuistry, and many casuists will argue that it is possible to provide an answer by the method of analogy to paradigmatic cases.Footnote 32

Saving Principles with Specification

Turning back to principlism generally, the broad complaint raised against general principles is that they are often too abstract and indeterminate to be applicable to specific cases. For instance, the principles “do good,” “be just,” or “do the right thing” provide little or no practical guidance about what this means when one is deciding how to act or who one should aspire to be. The more specifically particularist complaint against principles is that they are simply wrong about the holism of moral reasons. If holism is correct, then most principles are inaccurate, for there are occasions, perhaps many occasions, on which the specific context of an action can change both the strength and direction of a property’s valence.Footnote 33 As Dancy notes, “the leading thought behind particularism is the thought that the behavior of a reason (or of a consideration that serves as a reason) in a new case cannot be predicted from its behavior elsewhere. The way in which the consideration functions here either will or at least may be affected by other considerations here present. So there is no ground for the hope that we can find out here how that consideration functions in general … nor for the hope that we can move in any smooth way to how it will function in a different case.”Footnote 34 This complaint again illustrates the fact that abstract general ethical principles are often too vague to provide meaningful action guidance. There are two obvious and popular ways to respond to this critique.

First, one can make principles more detailed so that they incorporate situation-specific information about their application, exceptions, and range. Second, one can incorporate some broad clause about exceptions that might refer to “all relevant properties” or “lack of other defeating conditions.” The desire to defeat particularist arguments by making one’s principles more detailed is widespread among generalists. The most common form of this move among principlists in the realm of bioethics is that of specification. In this regard, Jonsen defines specification as “the process of giving greater determinacy to indeterminate moral norms by adding to them qualifying clauses that both respect the intent of the original norm and also bring it closer to concrete cases.”Footnote 35

Specification was first named and detailed by RichardsonFootnote 36 and expanded and specifically applied to medical ethics by David DeGrazia.Footnote 37 After that point, other principlists in bioethics, such as Beauchamp and Childress, soon made use of the concept and terminology in later editions of Principles of Biomedical Ethics. As Richardson defines the concept, one norm specifies another if (1) everything that satisfies the former’s absolute counterpart will satisfy the latter’s absolute counterpart; (2) the former adds substantive qualifying clauses to the latter rather than simply shifting around its logical form or creating an exception; and (3) these clauses are relevant to the norm being specified rather than being extraneous riders.Footnote 38 In his earlier work, Richardson further elaborates on criterion (2) by claiming that p substantively qualifies q (and not just by converting universal quantifiers to existential ones) by adding clauses indicating what, where, when, why, how, by what means, by whom, or to whom the action is to be, is not to be, or may be done, or how the action is to be described, or how the end is to be pursued or conceived.Footnote 39 Thus, specification is a formal method of making general ethical principles more detailed while still incorporating and supporting the substance of their original claim. For example, the norm regarding respect for persons or respect for autonomy can be further specified to “respect the autonomy of competent patients by following their advance directives when they become incompetent.”Footnote 40
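
Richardson’s three criteria give specification enough structure to be modeled in toy form. In the following Python sketch (the norm representation and the example are my own illustrative inventions, not Richardson’s formalism), a specified norm preserves the original norm’s substance and strictly adds qualifying clauses, so anything satisfying the specified norm satisfies the original:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Norm:
    action: str  # the substantive content of the norm
    # what/when/by whom/to whom qualifying clauses:
    qualifiers: frozenset = field(default_factory=frozenset)

def specifies(spec: Norm, general: Norm) -> bool:
    """Toy version of Richardson's criteria (1) and (2): same substantive
    action, plus strictly more qualifying clauses. Criterion (3), the
    relevance of the added clauses, is assumed rather than tested."""
    return (spec.action == general.action
            and spec.qualifiers > general.qualifiers)  # proper superset

respect_autonomy = Norm("respect patient autonomy")
advance_directive_norm = Norm(
    "respect patient autonomy",
    frozenset({"the patient was competent when the directive was made",
               "follow the advance directive once the patient is incompetent"}),
)
print(specifies(advance_directive_norm, respect_autonomy))  # True
```

That criterion (3) must simply be assumed in the sketch anticipates the worry pressed below: deciding which clauses are relevant is substantive moral work that the formal apparatus of specification does not itself perform.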

Specification arose from two perceived flaws in traditional principles. First, one must determine how to resolve conflicts between principles. Second, one must know how to make general principles relevant to specific cases. The traditional ways of dealing with these problems have been, respectively, to balance principles and to apply them deductively. As previously discussed, balancing principles appears to be largely intuitive in nature and thus presumptively irrational or difficult to justify to others. Deductively applying abstract principles is generally thought to be both difficult to perform in many circumstances and an inaccurate conception of how moral reasoning occurs.Footnote 41 Specification claims to resolve both the problem of conflict between principles and the problem of making such generalities usefully applicable to specific cases. In regard to the first concern, Richardson claims that specification can often, but not always, resolve conflicts between general principles by simply being more specific about how such principles apply to detailed situations. For example, if the principles of autonomy and beneficence conflict in a specific case—e.g., one where a patient autonomously refuses a beneficial medical treatment—both principles could be refined with, respectively, clauses about competence and about whether the potential benefit is certain or unlikely, extreme or moderate, and so forth. If the specification is successful in this case, the substantive content of the more general principles will be seen as compatible.Footnote 42

Regarding the problem of application, Richardson claims that “once our norms are adequately specified for a given context, it will be sufficiently obvious what ought to be done” and goes on to state that “without further deliberative work, simple inspection of the specified norms will often indicate which option should be chosen.”Footnote 43 That is, if specification is performed thoroughly and accurately, the resulting principles will often be detailed enough that, once one understands or knows them, the situations in which they are relevant should be readily apparent.Footnote 44 Furthermore, the applicability of specified principles is predicated upon the assumption that their more general formulations are not absolute, and this prima facie quality arguably extends to specified principles as well.

Specification is an intriguing approach, so much so that even some critics have succumbed to its appeal. For example, Jonsen claims that when maxims, such as “do no harm” or “informed consent is obligatory,” are invoked, they represent, as it were, short-hand versions of the major principles relevant to the topic, such as beneficence and autonomy, “cut down to fit the nature of the topic and the kinds of circumstances that pertain to it,”Footnote 45 and he later states that “specification and casuistic analysis need each other to get close to the case.”Footnote 46 Carson Strong, another casuist critic of principlism, makes similar conciliatory moves, although he claims that specification relies on casuistry to assign priorities to principles, especially conflicting principles.Footnote 47

Additionally, like principlists in bioethics, the broader spectrum of generalists in ethical theory also tends to take the specification approach, although such theorists do not explicitly refer to it as such. For example, Martha Nussbaum suggests that it is the generality of rules, not their universality, that is problematic.Footnote 48 If rules could be made specific enough, many or most of the problems arising from them would vanish. Walter Sinnott-Armstrong argues that particularism only rules out simple generalities, not detailed or even very complex ones.Footnote 49 Kasper Lippert-Rasmussen also suggests that making principles more detailed is one way to avoid many of their traditional problems.Footnote 50 Most such generalists claim that although explanatory generalities have not yet been specified or detailed, they can be and eventually will be.

Turning back to the particularist holism argument, the specification move, if performed correctly, can accommodate some of its insights about the functioning of moral reasons. Even if moral reasons do rely on background, supporting, and defeating conditions, such conditions can presumably be built into specified principles. There is nothing in holism itself that suggests that this is impossible, for, although many moral reasons do act holistically, in practice this may make a difference only infrequently. If moral particularists are correct that any property can be important as a supporting or undermining condition, then the specification of such principles will be more problematic, and such specifications will be very complex, but still theoretically possible.

Specification and the Nature of Rules

I will shortly argue that, although specification may be a partially effective response to the holism argument, it is still problematic as a whole. However, before I turn to the particularists’ main argument against specification, there are a few non-particularist problems with specification that I wish to briefly raise. On a broad level, one could argue that the trend toward specification illustrates a misunderstanding of the very nature of rules. H.L.A. Hart raises a similar point regarding how rules work in the legal system.Footnote 51 He argues that, because of the nature of human language, rules will always be somewhat open-textured, with interpretation being needed to understand both the rules themselves and how to apply them to particular circumstances. Because of this inherent open texture, rules cannot simply be deductively applied; rather, their use requires good judgment and discretion. The uncertainty arising from the open texture of rules often leads people to believe that rules ought to be formulated more strictly to resolve conflict and minimize the need for difficult choices. This leads to what Hart calls “rule formalism,” which holds that correct rules will be explicit enough to be applied without this uncertainty. In this regard, specification might be seen as part of this larger trend toward rule formalism that occurs in both law and ethics and that, as Hart would argue, is based on a misunderstanding of how rules are able to function.Footnote 52

Aside from this general worry about the purpose of specification and the rule formalism to which it gives rise, other non-particularist critiques can be made. For example, specification apparently depends on prior theoretical decisions about the priorities of conflicting principles.Footnote 53 This can be taken in several ways. In one sense, this critique suggests that specification is only possible after balancing has occurred to ascertain what role principles should play in the specification process to avoid conflict. On this account, certain principles will be affected or changed more than others in this process, and there needs to be some reason why this occurs to some principles and not others. As Veatch points out, “the claim of those who specify seems to imply that within limited domains, principles can be lexically ranked,”Footnote 54 and yet such ranking, whether taken broadly or narrowly, has traditionally been viewed with skepticism as relying on intuition. In another sense, this critique rightly points out that there must be some method of comparing opposing specifications with each other, for there are many ways to specify a given principle, and one would wish to be able to evaluate this process. There are certainly ways to avoid this critique, but the supporters of specification have not yet, to my knowledge, fully or successfully pursued them.Footnote 55

I am also skeptical as to how specified principles can both hold substantially true to the insights of their general predecessors and change so as to refine and improve our understanding of morality. Richardson makes both such claims for his specification, and yet they are incompatible. On the one hand, he strongly emphasizes what he calls extensional narrowing, namely, that “everything that satisfies the specified norm must also satisfy the initial norm.”Footnote 56 A fundamental aspect of specification is that it adds clauses to the initial norm, thus respecting its substantive content. This condition ensures that the initial general norm is completely satisfied and thus grounds the specification. On the other hand, Richardson claims that “what allows the idea of specification to offer a third way of reflectively coping with conflicts among principles is the fact that it offers a change in the set of norms” and that interpretation of principles, of which specification is an example, “modifies the content of a norm.”Footnote 57 Thus, specification is apparently supposed both to satisfy the insight of the original norm (i.e., satisfy its absolute counterpart) and to change its content. One could perhaps attempt to argue that general norms have both essential and non-essential content and that specification should support the essential content while changing the non-essential content, or otherwise argue that general norms can be changed and supported at the same time, but I cannot envision any such arguments being either successful or compelling.

The Uncodifiability Thesis

While the previous critiques of the specification are not theory-specific, particularists will argue that the uncodifiability thesis also prevents the specification move detailed above. The uncodifiability thesis claims that there is no way for rules or principles to fully detail the relationship between moral and nonmoral properties. That is, within the context of the relationship between the moral and nonmoral sets of properties, the particularist denies that “there are any usefully, finitely specifiable conditionals of the form if M then N.”Footnote 58 Another way of expressing this claim is to say that the moral is shapeless in regard to the nonmoral. If this claim is true, then there is no reason to believe that moral properties are either defined by, or inextricably linked to, nonmoral properties, and even extremely detailed specified principles will not be successful in describing the relationship between the moral and the nonmoral.
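
The thesis can be put semi-formally. The following LaTeX sketch is my own gloss, not a canonical particularist formulation; writing N(x) for an act’s total nonmoral profile and M(x) for its moral status, supervenience guarantees only that nonmoral twins are moral twins, whereas codifiability would require something far stronger, namely a finite stock of nonmoral conditions jointly equivalent to the moral predicate:

```latex
% Supervenience: no moral difference without a nonmoral difference.
\forall x \,\forall y \;\bigl( N(x) = N(y) \;\rightarrow\; M(x) = M(y) \bigr)

% Codifiability (what the uncodifiability thesis denies): some finite
% set of nonmoral conditions D_1, \dots, D_k such that
\exists k \;\exists D_1, \dots, D_k \;\forall x \;
  \Bigl( M(x) \;\leftrightarrow\; \bigvee_{i=1}^{k} D_i(x) \Bigr)
```

The first claim can hold while the second fails; this is one way of cashing out the metaphor that the moral is “shapeless” with respect to the nonmoral.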

For instance, consider the virtue of “kindness.” As a moral property, “kindness” supervenes upon certain nonmoral properties. The uncodifiability thesis claims that there is no single common property, or even a unique set of properties, that all acts of kindness share. This entails that, without understanding the evaluative concept of “kindness,” there is no way for someone to correctly identify the comprehensive set of instances of “kindness” by identifying patterns among the nonmoral properties of the items in such a set. John McDowell explains this in the following way:

However long a list we give of items to which a supervening term applies, described in terms of the level supervened upon, there may be no way, expressible at the level supervened upon, of grouping such items together. Hence there need be no possibility of mastering, in a way that would enable one to go on to new cases, a term which is to function at the level supervened upon, but which is to group together exactly the items to which competent users would apply the supervening term.Footnote 59

This concept of the uncodifiability of the relationship between the moral and the nonmoral is not a historically novel stance. For instance, Aristotle is at times understood to be espousing such a viewpoint when he claims that “matters concerned with conduct must be given in outline and not precisely … matters concerned with conduct and questions of what is good for us have no fixity, any more than matters of health. … The general account being of this nature, the account of particular cases is yet more lacking in exactness; for they do not fall under any art or precept, but the agents themselves must in each case consider what is appropriate to the occasion, as happens also in the art of medicine or of navigation.”Footnote 60 Likewise, in his earlier dialogues, Plato points out the sheer difficulty involved in meaningfully defining virtue in nonmoral terms so that all respective virtuous acts are unified by the definition. For example, in the Euthyphro, the young Athenian Euthyphro offers a number of definitions of piety, ranging from “doing what the gods ask” to “giving the gods their due.” However, Socrates’ questions about each definition quickly illustrate that all of Euthyphro’s definitions were substantially incomplete, often because they were either too broad, thus encompassing acts that were not pious, or too narrow, thus excluding pious actions. McDowell highlights this issue as follows:

If one attempts to reduce one’s conception of what virtue requires to a set of rules, then, however subtle and thoughtful one was in drawing up the code, cases would inevitably turn up in which a mechanical application of the rules would strike one as wrong—and not necessarily because one had changed one’s mind; rather, one’s mind on the matter was not susceptible of capture in any universal formula.Footnote 61

Why is morality uncodifiable in relation to nonmoral properties? Many particularists want to be able to answer this question while still claiming that morality is objective. One possible response is to argue that morality is practice-based and thus intrinsically human and evaluative in nature. While this response has some viability—depending on how carefully one details this claim—the problem arises of ethics becoming entirely subjective in nature. One way to address the issue of ethical relativism is to claim that, although moral properties are understandable only from a particular evaluative, likely human, perspective, this limitation is shared by all rational endeavors.

This avenue of thought is often traced back to Wittgenstein’s discussion of rule following in the Philosophical Investigations (1953, § 185). In particular, Wittgenstein suggests that our ability to understand complex practices and concepts, to keep going on, as it were, outruns any formulable rule or principle. That is, the rules and principles that supposedly ground practices or procedures of any type are too thin or content-poor to actually provide the grounding that we seek. On this account, practices are too richly textured to be captured by any finite collection of rules. Rather, when one is immersed in a practice, one develops skills that go beyond one’s experiences and understanding. Plato gives many examples of this in his dialogues, such as when Euthyphro could not define holiness in purely descriptive terms and when Laches failed to define courage, yet both men had the ability to understand the respective concepts and use them correctly.Footnote 62 Even for something as basic as, to use Wittgenstein’s example, extending a series of numbers by two (2, 4, 6, 8, 10, etc.), an individual’s understanding surpasses the grounding provided by the finite set of examples any rule comes with, examples that could have been illustrative of any number of actual rules or practices. In this sense, Wittgenstein is arguing that all human endeavors—be they linguistic, scientific, mathematical, social, or moral—rely on skills that project understanding in ways that are uncodifiable by abstract general rules or principles. Thus, although morality may be uncodifiable and practice-dependent, the same holds true for the broader epistemic realm as well, and yet endeavors in both areas can be rational because of our capacity to understand and participate in such uncodifiable practices.Footnote 63
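
Wittgenstein’s series example can be made vivid computationally. In this illustrative Python sketch (a toy of my own, in the spirit of Kripke’s well-known “quus” variant), two different rules agree on every instance a learner has been shown yet diverge later, so the finite examples cannot by themselves determine which rule is being followed:

```python
def add_two(n):
    """The rule we take the series 2, 4, 6, 8, ... to exemplify."""
    return n + 2

def bent_add_two(n):
    """A deviant rule agreeing with add_two on all inputs below 1000.
    The cutoff is arbitrary; infinitely many such rules exist."""
    return n + 2 if n < 1000 else n + 4

shown = range(2, 20, 2)  # the finite examples the learner actually sees
assert all(add_two(n) == bent_add_two(n) for n in shown)  # indistinguishable so far
print(add_two(1000), bent_add_two(1000))  # 1002 1004 -- the rules come apart
```

No finite list of cases settles which function the practice “really” follows; what carries the learner on is a mastered skill, not a further rule.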

A related response as to why the moral is uncodifiable with regard to the nonmoral arises from the holism of moral reasons argument. In particular, the holism argument claims that many properties can change or reverse their valence due to the influence of other properties. Specification seeks to account for holism, but it can do so only if there is a limited, or finite, number of circumstances in which such holism actually occurs. Whether or not this is true appears to be simply assumed. However, I believe that we have good reason to think that holism actually pertains to a large, and likely infinite, number of circumstances. If this is true, then we should not expect to find an exact and definite set of rules codifying the relationship between the moral and the nonmoral. There are a potentially infinite number of nonmoral facts, as well as infinitely many possible arrangements of sets of such nonmoral facts. Since moral facts supervene both on nonmoral facts and on sets of such facts, there are also an infinite number of possible arrangements of moral facts, as well as of the supporters, defeaters, and relevant background conditions that affect such facts. Because of this, one can never know a priori what weight a property has in a case, or whether it applies at all, because there is always the possibility of defeaters being present, either in terms of prima facie principles or in terms of other moral facts. As John Arras notes, “real life does not announce the nature of problems in advance.”Footnote 64 Additionally, since there are a potentially infinite number of sets of nonmoral facts that moral facts can supervene upon, one cannot automatically assume that there is a way to formulate a finite principle that takes account of every possible organization of nonmoral facts, and this is what complete specification and codification purport to accomplish.
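
The combinatorial worry can be illustrated numerically. In this toy Python sketch (the feature counts are arbitrary), even a modest stock of binary nonmoral features, each a potential supporter or defeater, generates more distinct contexts than any practicable rulebook could enumerate, and each added feature doubles the total:

```python
# Suppose each case is characterized by n binary nonmoral features
# (present/absent), any of which might support or defeat a reason.
# A fully specified rulebook would need to anticipate every combination.
for n in (10, 50, 100):
    print(f"{n} features -> {2**n:,} possible contexts")

# Output:
# 10 features -> 1,024 possible contexts
# 50 features -> 1,125,899,906,842,624 possible contexts
# 100 features -> 1,267,650,600,228,229,401,496,703,205,376 possible contexts
```

And since, on the particularist picture, the stock of potentially relevant features is open-ended rather than fixed, no finite n bounds the cases a complete codification would have to cover.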

In this regard, Sinnott-Armstrong argues that the particularist uncodifiability thesis merely shows that human beings are simply limited in their formulation of practical codifiable principles and does not show that there is metaphysical uncodifiability.Footnote 65 However, if I am correct that there are a potentially infinite number of sets of combinations of nonmoral facts, then it would be metaphysically impossible to capture this complexity using finite principles. This is one place where Frank Jackson, Philip Pettit, and Michael Smith go astray. They explicitly assume that one can evaluate every individual set of an infinite number of sets of nonmoral properties, and they use this assumption to derive the conclusion that the uncodifiability thesis is false.Footnote 66 Unfortunately, that initial assumption is one of the very moves that particularists are arguing against, for it is impossible to evaluate every component of a potentially infinite number of sets.

Responses to Specification

I have argued that, because of the uncodifiability thesis, a complete specification of moral principles is unachievable. However, leaving that argument to the side for a moment, I believe that there is another particularist argument that specification fails, namely, that the results of specification (assuming it is partially or wholly possible) are antithetical to our understanding of morality. The dilemma that principlists face is that their principles are either too general, and thus lack sufficient content to be useful at all, or, if they are specified, they subsequently become too detailed.Footnote 67 What is wrong with a principle being too detailed? For one thing, the search for moral principles that are sufficiently specific to be useful will result in an enormous multiplication of the necessary principles. Suddenly, what was once one of generalism’s advantages, namely simplicity, disappears. This dramatically impacts both the ability to easily teach such principles and the ability to offer rational justification for them. We are now faced with hundreds or perhaps even thousands of principles, and problems regarding conflicts between them and knowledge or understanding of them increase accordingly. Additionally, if principles are to be useful, it will be difficult to find a logical stopping point for their specification short of unique principles for each particular situation.

Turning to the first problem, I would argue that the goal of complete specification is simply the wrong way to approach ethical theory. While the ideal of specified principlism might be understandable or attractive in some sense, the end result will be similar to the contemporary US tax code, which contains some 70,000 pages of minute, detailed, and prima facie justified rules that attempt to account for every conceivable situation. On this approach, morality and contemporary medical or legal practice are transformed into an enormous, unwieldy bureaucracy built on a huge system of rules. Contemporary legal systems that follow a similar approach, such as those in France and Mexico, have become incredibly complex, notoriously inflexible, and riddled with internal tensions and inconsistencies. Following this trend, the goal of ethics essentially becomes the formulation of a vast, all-encompassing, omniscient rulebook from which, once all the variables are known, one can determine the correct answer to any given situation. Yet the ideal of complete specification is far too complex to provide useful action guidance or a coherent account of moral rationality. In particular, whenever any of the numerous specifications conflict, a new specification is needed to resolve that conflict, and so on, ad infinitum. In addition, reliance on rules to guide actions leads to a kind of “Third Man” regress, in which the application of rules needs guidance, which must be provided by other rules, which themselves need guidance and rules to be applied.Footnote 68 Finally, to account for new scenarios and circumstances, the rulebook would have to undergo constant editing and revision and would continually keep expanding.

In response, I would argue that the key to resolving the complex problems generated by rule following is not to increase the number and complexity of rules, but rather to focus on discretion, sensitivity, perception, and good judgment.Footnote 69 If anything, ethical conflict and complexity suggest that we need more flexibility, not less, in our approach to morality and in our application of commonsense ethical rules and principles. Perhaps some might claim that the specification of principles can be performed only partially, thus preventing morality from being completely codified and avoiding the creation of too many principles. In addition, as Beauchamp and Childress acknowledge, specification will likely have to end at some point, as some moral dilemmas may be ineradicable.Footnote 70 However, once one has started down the path of completely codifying the relationship between the moral and the nonmoral, it is difficult to find a principled place to stop the process. If the goal of ethical theory is to erase the gap between principles and practical judgments, then specification must continue until this end is met, and this will occur only when specified rules are available for every relevantly similar moral situation. This is where holism regarding moral reasons returns, for, even if specification can take account of such holism, it would require the formulation of an enormous number of very detailed rules to do so.Footnote 71

In the end, specified principlism seems to become a form of moderate moral particularism, for the formulation and application of a large number of exceedingly complex principles ultimately devolves into (and derives from) particular case discussions. The problem arises because there is no a priori way of knowing which moral principles will be relevant to which specific set of circumstances, or what weight such principles might have on different occasions. This can be ascertained only by examining each case in specific detail to determine which specified principles hold true in that instance, but at that point one appears to be a principlist in name only. Additionally, any principles that result from specification, if they are truly applicable, will be too individually complex to be applied generally across situations.

Finally, attempts to specify the conditions of a principle will have to include the absence of defeaters and the presence of supporting conditions, for both apply to the holism of moral reasons that specification is meant to account for. The list of potential supporting and defeating conditions is immense, so much so that any principle that actually accommodates them will be paragraphs, if not many pages, in length. Since one of the main arguments for moral principles is that they rightfully summarize and simplify moral knowledge, this result is clearly problematic for supporters of principlism. Furthermore, as argued previously, such extremely complex principles will not be explanatory in the sense that principlists and moral particularists usually rely on, for the good-making characteristics of specific situations become indistinguishable from less important, but still significant, properties.

These problems lead me to the second response taken by principlists in order to account for the insights of moral particularism and specifically those suggested by holism, namely referring to background conditions of normality. A number of principlists claim that particularist arguments can be defused using disclaimers about normal or usual background conditions when ethical principles are formulated. For instance, one could claim that “all else being normal, killing is morally wrong” or “if there are no other significant facts, one ought not to lie.” Principles that reference background conditions differ from Rossian and similar principles by widening the realm of possible defeaters from other basic moral principles to also include nonmoral background and supporting conditions. In this way, the reference to background conditions of normality is partially effective in incorporating holistic insights, but it is more of a response to the critique that specified principles, if spelled out, are both too long and complex to be useful or effective as action-guiding principles. That is, the complex and explicit clauses of such specifications can presumably be summarized and understood by shortened disclaimers.

I raise this response at this point mostly to place it in its appropriate context as a generalist reply to specific particularist moves, namely, the uncodifiability argument. As a rejoinder to this generalist approach, the moral particularist can appeal to several previously used objections. First, if holism regarding reasons is correct, it is not likely that there is any clear set of normative background conditions in the broad sense that generalists appear to rely on here. Second, if uncodifiability is correct, there will be no way to quantify and clarify such background conditions accurately, and simply making the reference to such conditions vague rather than specific does not resolve this issue. Third, such disclaimers fall under the general argument against principles, namely, that they are too vague to offer any real or useful action guidance. One of the main purposes of specification was to reply to the objection of vagueness, and broad disclaimers about background conditions are a move away from specification in this regard.

Problems with Uncodifiability

Although the uncodifiability thesis, if true, presents problems for principlists, it is, not surprisingly, somewhat controversial. For instance, Onora O’Neill and Roger Crisp argue that all principles are going to be at least somewhat indeterminate and that this means that principle-supporting ethicists can unproblematically accept the uncodifiability thesis.Footnote 72 In other words, Kantians and utilitarians can accept the contemporary Wittgensteinian insights that at least partially support the uncodifiability thesis while still remaining true to their original claims about the foundational aspects of morality and moral reasoning. On this view, Wittgenstein is correct that no rules are fully determinate, nor are they required to be, because the use of practical wisdom or good judgment allows them to remain intelligible and action guiding.

Particularists and those who are sympathetic to aspects of their project have several responses to this line of argument. First, I would argue that they should welcome the new emphasis on practical wisdom or good judgment that has crossed over from the Aristotelian tradition to generalist ethical theories. The increasing awareness of the importance of practical wisdom is encouraging for the field of ethics as a whole, for it promises a richer analysis of the topic than has, perhaps, been previously accomplished. Second, I believe that moral particularists can question how thoroughly monistic ethical theorists have taken this new awareness of practical wisdom and good judgment to heart. If one honestly believes, as some monists claim, that a single criterion is the foundational, essence-defining element of morality, then the amount of context-specific judgment needed to apply the principle is likely both too much and too little. It is too little in that, since one knows a priori that one component of any ethical situation is preeminently noteworthy, little judgment will typically be needed to evaluate that component. It is too much in that any judgment can be rationalized or justified if enough ingenuity is applied to that end, but the desire to justify and apply a judgment at all costs is antithetical to an honest pursuit of knowledge.

Additionally, as Crisp suggests, traditional ethical theories that take the uncodifiability thesis to heart have a tendency to move to a tiered system, which emphasizes different aspects of morality at differing levels of theoretical and practical concern.Footnote 73 The classic example of such an approach is Henry Sidgwick, who argues that utilitarians ought to advocate that people either not think, or at least try not to think, as utilitarians.Footnote 74 Unfortunately, I believe that this route creates a significant divide between theory and belief, both practically and theoretically. It is disingenuous to suggest that the absolute codification at the theoretical level either disappears or is ignored at the level of practice, and yet this is precisely what monistic theories that accept the uncodifiability thesis must attempt. One possible counter-example to this point can be found in mathematics, where immensely intricate formalized proofs are often bypassed, for reasons of simplicity and discursive ease, in favor of informal proofs. In such situations, absolute codification is often disregarded at the practical level. However, in mathematics, unlike in ethics, the formal codification can be readily provided and proven: mathematicians could supply formal proofs if asked to; ethicists simply cannot. Additionally, although mathematicians often forgo formal proofs, such proofs, if given, would be theoretically consistent with the informal ones. In the ethical theories that I am discussing here, the theoretical commitments are prima facie, if not absolutely, inconsistent with the practical results that are permitted or condoned.

Turning to other critiques, Jackson, Pettit, and Smith offer one of the strongest arguments against the uncodifiability thesis, claiming that the relationship between moral and nonmoral properties must be codifiable if we are to be able to use evaluative predicates rationally.Footnote 75 If the evaluative truly is shapeless in terms of the descriptive, then morality is random, for there is nothing unifying the evaluative properties:

[The uncodifiability thesis] is not, for example, like Wittgenstein’s famous examples of a game and, more generally, of family resemblances. In these cases, it can be difficult to spot or state the pattern, but the fact that, given a large enough diet of examples, we can say of some new case whether or not it is, say, a game (or, perhaps, that it is indeterminate whether it is or not) shows that there is a pattern we can latch on to; our ability to project shows that we have discerned the complex commonality that constitutes that pattern.Footnote 76

If there is no pattern between the moral and the nonmoral—if the connection is totally random—then there is no semantic distinction between discussing right acts and wrong acts; in the end, there is no difference between the two. On this account, a rational semantic distinction is predicated upon some patterned commonality that distinguishes the different classes of actions. One possible response to this argument is to claim that the distinction upon which semantic terms are predicated is unanalyzable or non-natural.Footnote 77 This, however, simply becomes another way of stating G.E. Moore’s proposal that moral properties are sui generis, and it is not the novel idea that moral particularists claim to be proposing.Footnote 78

The better particularist response is to claim that there is a pattern—that the connection between the evaluative and the descriptive is not totally random—but that such a pattern is still uncodifiable. This proposal saves the rationality of moral language while also allowing moral particularists to support the widespread commonsense moral claim that certain acts tend to be morally important, often in the same fashion, whereas others do not. One possible response to the randomness critique is to argue for what Jackson, Pettit, and Smith refer to as restricted particularism, which claims that moral acts are unified solely by our response to them. Restricted particularism denies that there is any non-evaluative or purely descriptive pattern among moral acts and thus appears to follow the uncodifiability thesis, at least in substance. However, the problem with restricted particularism, as Jackson et al. also point out, is that we believe that moral justification arises in part from the descriptive similarities and differences of individual cases, and thus it is appropriate to ask why descriptively similar acts are evaluated as morally different.

While the randomness critique is more imposing, it is by no means definitive. As Simon Kirchin points out, it is misguided to claim that if one denies the connection of the moral to the nonmoral, one is left merely with a new form of the old Moorean sui generis properties.Footnote 79 In this regard, particularist claims are not reducible to sui generis properties, for the properties that they support are not ontologically odd in this sense but are merely collections of sets of non-ethical properties.Footnote 80 Armed with this sui generis conception (not property), one can still argue that there is no pattern of descriptive features that unites the sets of situations that instantiate certain ethical properties; rather, the unifying feature is the sui generis conception itself. This points the way toward an escape from the objection raised against restricted particularism, the view that what unifies moral properties is our human response to certain nonmoral features. One can argue that people are responding to nonmoral features that are particularly important in specific situations in light of sui generis concepts, which provide the unifying strand. The randomness critique is too quick to assume that restricted particularism can take essentially no account of nonmoral differences or similarities. In fact, it is such descriptive properties that are being responded to, even if they are not the essence-defining components of moral properties.

For example, one can draw an analogy between art and morality and, following a particularist viewpoint, claim that what makes something artistically beautiful or good is uncodifiable. Just as the common similarity among all ethical acts is that they are evaluated as being moral, so also is the common denominator among all works of art the simple fact that they are evaluated as works of art. However, this does not entail that one cannot provide descriptive reasons for why a specific work of art is beautiful or good, such as harmony, symmetry, proportion, balance, consonance, clarity, and radiance, or why one affective response, such as compassion or amusement, is more appropriate than another. Rather, such reasons (and the similarities and differences that are integral to such evaluations) are in fact the key components of the evaluative response. For example, if one responded to Aeschylus’ Oresteia, Shakespeare’s King Lear, or Tolstoy’s Anna Karenina with howls of laughter, then one has not understood these works correctly. In the same way, moral properties are uncodifiable but still directly responsive and accountable to descriptive features and their interactions. In addition, one can extend the analogy to argue that it is not possible to specify in advance all possible works of art or music, nor is there a simple mechanical step-by-step procedure for creating beautiful works. If there were, any of us could acquire the mastery of Michelangelo or Bach.

Additionally, the randomness critique draws a false dichotomy by assuming that a pattern must be either absolutely certain or totally random. In this regard, some patterns are absolute; most, however, are not. There is an important difference between a pattern (a trend) and an absolute, one-hundred-percent correlation, which is essentially a definition. Particularists can support trends or patterns. What they must deny, however, is that moral properties are absolutely defined by certain and essential nonmoral properties. For instance, if one sees a thousand crows that are black, nothing necessitates that the next crow one sees must be black; it could just as readily be white. There is a pattern of this property among crows, but it is neither one that holds without exception (most, or 99.9 percent, of crows are black), nor one that allows absolute predictions to be made about future events or encounters. Even if one has observed every crow that exists, or has ever existed, one will only be able to repeat the claim that the pattern regarding the color of crows is that 99.9999999% are black. Perhaps one can argue that this pattern differs from other patterns (like that of moral properties relating to nonmoral properties) because it involves contingent properties, such as genetic mutations or environmental factors that influence phenotypical expression, rather than necessary properties. Such inductive patterns are the type of generalizations that particularists can make about, and rely on regarding, moral properties, and such contingent and defeasible patterns can still be uncodifiable in the essence-defining sense that the randomness critique assumes is necessary.

Conclusion

In conclusion, the uncodifiability thesis is, along with holism regarding reasons, one of the two foundational components of contemporary moral particularism. The uncodifiability thesis provides arguments against all traditional types of general ethical principles, but it specifically affects exceptionable (i.e., pro tanto) principles that can theoretically accommodate the particularist claim about the holism of moral reasons. As such, the uncodifiability thesis suggests that two common principlist strategies today, namely specifying principles to accommodate exceptions and prefacing principles with broad disclaimers, are both problematic. Moreover, even if fully specified principlism were possible, I have argued that it would not be conducive to our understanding of morality or very helpful in making moral choices. In the end, rather than endlessly multiplying the complexity and number of the moral principles and rules that we must follow, a better approach would be to focus on cultivating situation-specific and case-based practical wisdom and judgment.Footnote 81

Competing interest

The author declares none.

References

Notes

1. On the uncodifiability thesis see Dancy, J. On moral properties. Mind 1981;90:355–87; Dancy, J. Ethical particularism and morally relevant properties. Mind 1983;92:530–47; Dancy, J. Defending particularism. Metaphilosophy 1999;30:26–32; Dancy, J. On the logical and moral adequacy of particularism. Theoria 1999;65:144–55; and Dancy, J. Ethics without Principles. Oxford: Oxford University Press; 2006. For an overview of moral particularism see McNaughton, D. Moral Vision. London: Wiley-Blackwell Press; 1991, at Chapter 13; Sinnott-Armstrong, W. Some varieties of particularism. Metaphilosophy 1999;30:1–12; Bakhurst, D. Ethical particularism in context. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:157–77; McKeever, S, Ridge, M. The many moral particularisms. Canadian Journal of Philosophy 2005;35:83–106; Lance, M, Little, M. Defending moral particularism. In: Dreier, J, ed. Contemporary Debates in Moral Theory. Malden, MA: Blackwell Publishing; 2006:303–21; Lance, M, Little, M. Particularism and anti-theory. In: Copp, D, ed. The Oxford Handbook of Ethical Theory. Oxford: Oxford University Press; 2007:567–94; Kirchin, S. Moral particularism: An introduction. Journal of Moral Philosophy 2007;4:8–15; and Flynn, J. Recent work: Moral particularism. Analysis 2010;70:140–8. For critiques of the uncodifiability thesis see Crisp, R. Particularizing particularism. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:23–47; Jackson, F, Pettit, P, Smith, M. Ethical particularism and patterns. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:79–99; and O’Neill, O. Practical principles & practical judgment. Hastings Center Report 2001;31:15–23. For an overview of different senses of holism see McKeever, S, Ridge, M. What does holism have to do with moral particularism? Ratio 2005;18(1):93–103. For a defense of moral holism see Lechler, A. Do particularists have a coherent notion of a reason for action? Ethics 2012;122(4):763–72. For a critique of strong holism see Crisp, R. Ethics without reasons? Journal of Moral Philosophy 2007;4:40–9.

2. On Ross’s ethical theory see Phillips, DK. Rossian Ethics: W.D. Ross and Contemporary Ethical Theory. Oxford: Oxford University Press; 2019. For a good overview of British moral philosophy during this time see Hurka, T. British Ethical Theorists: From Sidgwick to Ewing. Oxford: Oxford University Press; 2018.

3. As McNaughton and Rawling point out, most and perhaps even all of Ross’s duties are actually versions of thick moral concepts, and I believe that this is also true of most other forms of principlism. McNaughton, D, Rawling, P. Unprincipled ethics. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:266. On the topic of thick notions see as well Williams, B. Ethics and the Limits of Philosophy. Cambridge, MA: Harvard University Press; 1985:40–45 and Dancy, J. In defense of thick concepts. Midwest Studies in Philosophy 1995;20:263–79. For a good collection of papers on the topic see Thick Concepts, ed. S. Kirchin (Oxford: Oxford University Press, 2013).

4. Hooker, B. Moral particularism: Wrong and bad. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:266. Shelly Kagan has also argued that Ross used prima facie with effectively the same meaning as pro tanto. See Kagan, S. The Limits of Morality. Oxford: Clarendon Press; 1989:17. On the meaning and use of prima facie duties see Feinberg, J. Rights, Justice, and the Bounds of Liberty. Princeton: Princeton University Press; 1980:226–29 and Thomson, JJ. The Realm of Rights. Cambridge, MA: Harvard University Press; 1990:118–29.

5. For a defense of balancing see DeMarco, JP, Ford, PJ. Balancing in ethical deliberations: Superior to specification and casuistry. Journal of Medicine and Philosophy 2006;31:483–97.

6. McNaughton, D. An unconnected heap of duties? Philosophical Quarterly 1996;46:442.

7. See note 1, Dancy 1983, at 542. Although Ross does seem to be a particularist in his epistemology, he apparently feels the need for a generalist metaphysical viewpoint, likely due to assumptions about the nature of moral rationality. If Dancy is correct here, and I think it likely that he is, then one important difference between Ross and contemporary principlists is that they tend to support a generalist epistemology and metaphysics, whereas Ross only supports a generalist metaphysics.

8. Richardson, HS. Specifying norms as a way to resolve concrete ethical problems. Philosophy and Public Affairs 1990;19:287. On the role and limits of intuitionism in the principlism of Beauchamp and Childress see Lustig, A. The method of principlism: A critique of the critique. Journal of Medicine and Philosophy 1992;17(5):487–510.

9. Although the traditional reading of Ross is that such self-evident principles are used to justify case-decisions, there are some occasions in his later work where he appears to make different claims. For example, at one point Ross states: “when I reflect on my own attitude towards particular acts, I seem to find that it is not by deduction but by direct insight that I see them to be right or wrong. I never seem to be in the position of not seeing directly the rightness of a particular act of kindness, for instance, and of having to read this off from a general principle—‘all acts of kindness are right, and therefore this must be, though I cannot see its rightness directly.’” See Ross, WD. The Foundations of Ethics. Oxford: Clarendon Press; 1939:171. Ross does go on to suggest that in exceptional circumstances principles do play an epistemological role, but his general viewpoint here belies the claim that principles are typically justificatory. Ross also makes the related claim that we apprehend self-evident principles from self-evident acts. See Ross, WD. The Right and the Good. Oxford: Clarendon Press; 1930:33. These suggestions that individual acts can be both self-evident and justified without the use or need of principles are contrary to his earlier, and arguably overall, emphasis on principles as being both foundational and justificatory. I raise this point to suggest that, although Ross is typically viewed as being opposed to particularism, there are possible readings of him that are essentially particularist in nature.

10. The seeds for this approach arguably arose out of their participation in the Belmont Commission, which resulted in the Belmont Report in 1978. On the Belmont Report see Belmont Revisited: Ethical Principles for Research with Human Subjects, eds. Childress, James F., Meslin, Eric M., and Shapiro, Harold T. Washington, DC: Georgetown University Press; 2005; Beauchamp, TL. The origins and drafting of the Belmont report. Perspectives in Biology and Medicine 2020;63(2):240–50; and Nagai, H, Nakazawa, E, Akabayashi, A. The creation of the Belmont report and its effect on ethical principles: A historical study. Monash Bioethics Review 2022;40:157–70. For a critique of the discipline of applied ethics as it emerged during this time see MacIntyre, A. Does applied ethics rest on a mistake? Monist 1984;67(4):498–513. On the development of principlism see Evans, JH. A sociological account of the growth of principlism. Hastings Center Report 2000;30:31–38 and Callahan, D. Universalism and particularism: Fighting to a draw. Hastings Center Report 2000;30:37–44. For a helpful history of the origins and development of the field see Pellegrino, E. The metamorphosis of medical ethics: A 30 year perspective. JAMA 1993;269:1158–62; Jonsen, A. The Birth of Bioethics. New York: Oxford University Press; 2008; and Callahan, D. In Search of the Good: A Life in Bioethics. Cambridge, MA: MIT Press; 2012.

11. See Beauchamp, TL, Childress, JF. Principles of Biomedical Ethics. 8th ed. New York: Oxford University Press; 2019.

12. It is important to note that there are different variations of principlism, with the most obvious difference between them being the number and understanding of the principles that they support. For example, Pellegrino and Thomasma support a single-principle theory that emphasizes beneficence, although respect for autonomy and other goods are subsumed under beneficence. See Pellegrino, E, Thomasma, D. For the Patient’s Good: The Restoration of Beneficence in Health Care. New York: Oxford University Press; 1988. Tristram Engelhardt supports a two-principle theory emphasizing autonomy and beneficence, with priority given to autonomy. See Engelhardt, TH. The Foundations of Bioethics. 2nd ed. New York: Oxford University Press; 1996, at Chapter 2. More recently, David DeGrazia and Joseph Millum have also defended a two-principle approach emphasizing well-being and respect. See DeGrazia, D, Millum, J. A Theory of Bioethics. Cambridge: Cambridge University Press; 2021. Baruch Brody’s pluralistic theory focuses on five right-making characteristics: consequences, rights, respect for persons, virtues, and a fifth appeal that includes not only justice but also cost-effectiveness. See Brody, B. Life and Death Decision Making. New York: Oxford University Press; 1988:17–48. Robert Veatch also supports a principlist viewpoint with roughly seven basic principles. See Veatch, R. Resolving conflicts among principles: Ranking, balancing, and specifying. Kennedy Institute of Ethics Journal 1995;5:199–218. Bernard Gert, R. M. Green, and K. Danner Clouser support a theory with 10 basic principles. See Gert, B, Green, RM, Clouser, KD. The method of public morality versus the method of principlism. Journal of Medicine and Philosophy 1993;18:477–89. The new natural law theory of Germain Grisez, John Finnis, and Joseph Boyle also supports 10 basic goods. See Grisez, G, Finnis, J, Boyle, J. Practical principles, moral truth, and ultimate ends. American Journal of Jurisprudence 1987;32(1):99–151; Finnis, J. Natural Law and Natural Rights. 2nd ed. Oxford: Oxford University Press; 2011; and Tollefsen, C, Curlin, F. The Way of Medicine: Ethics and the Healing Profession. South Bend: University of Notre Dame Press; 2021, at Chapter 2. For a good overview see Veatch, R. Reconciling lists of principles in bioethics. Journal of Medicine and Philosophy 2020;45(5):540–59.

13. For a defense of common morality see Donagan, A. The Theory of Morality. Chicago: University of Chicago Press; 1977; Beauchamp, TL. A defense of the common morality. Kennedy Institute of Ethics Journal 2003;13(3):259–74; DeGrazia, D. Common morality, coherence, and the principles of biomedical ethics. Kennedy Institute of Ethics Journal 2003;13:219–30; Gert, B. Morality: Its Nature and Justification. New York: Oxford University Press; 2006:159–61, 246–7; Gert, B, Culver, C, Danner Clouser, K. Bioethics: A Return to Fundamentals. 2nd ed. New York: Oxford University Press; 2006; and Gert, B. Common Morality: Deciding What To Do. New York: Oxford University Press; 2007. For a critique of common morality see Strong, C. Is there no common morality? Medical Humanities Review 1997;11:39–45; Arras, J. The Hedgehog and the Borg: Common morality in bioethics. Theoretical Medicine and Bioethics 2009;30:11–30; Engelhardt, TH. Bioethics critically considered: Living after foundations. Theoretical Medicine and Bioethics 2012;33(1):97–105; Hodges, K, Sulmasy, D. Moral status, justice, and the common morality: Challenges for the principlist account of moral change. Kennedy Institute of Ethics Journal 2013;23:275–96; Kukla, R. Living with Pirates: Common morality and embodied practice. Cambridge Quarterly of Healthcare Ethics 2014;23:75–85; Arras, J. A common morality for Hedgehogs: Bernard Gert’s method. In: Arras, J, ed. Methods in Bioethics: The Way We Reason Now. New York: Oxford University Press; 2017:27–44; and Bautz, B. What is the common morality, really? Kennedy Institute of Ethics Journal 2016;26:29–45. For a recent response see Beauchamp, TL, Childress, JF. Common morality principles in biomedical ethics: Responses to critics. Cambridge Quarterly of Healthcare Ethics 2022;31(2):164–76.

14. On the application of principlism to particular bioethical dilemmas see Beauchamp, TL. Methods and principles in biomedical ethics. Journal of Medical Ethics 2003;29(5):269–74 and Gordon, JS, Rauprich, O, Vollmann, J. Applying the four-principle approach. Bioethics 2011;25:293–300.

15. As Beauchamp notes, “our appeal has been to common morality as the base account, not to an ethical theory, which we have avoided altogether.” See Beauchamp, TL. Reply to Strong on principlism and casuistry. Journal of Medicine and Philosophy 2000;25:342–47. On this point see as well Beauchamp, TL. Does ethical theory have a future in bioethics? Journal of Law, Medicine & Ethics 2004;32:209–17 and Iltis, A. Bioethics as methodological case resolution: Specification, specified principlism, and casuistry. Journal of Medicine and Philosophy 2000;25:271–84. For a critique of principlism as lacking a substantive theory of goodness see Callahan, D. Principlism and communitarianism. Journal of Medical Ethics 2003;29(5):287–91 and Shea, M. Principlism’s balancing act: Why the principles of biomedical ethics need a theory of the good. Journal of Medicine and Philosophy 2020;45:441–70.

16. See note 12, Veatch, 1995, at 210.

17. Rawls, J. A Theory of Justice. Revised ed. Cambridge, MA: Harvard University Press; 1999:19–20.

18. See note 12, Engelhardt, 1996, at 102–20.

19. Gert, B, Green, R, Clouser, KD. The method of public morality versus the method of principlism, pp. 219–36. On this point see as well Beauchamp, TL. Principlism and its alleged competitors. Kennedy Institute of Ethics Journal 1995;5(3):181–98 and Davis, R. The principlism debate: A critical overview. Journal of Medicine and Philosophy 1995;20:85–105. For a critique of the practicality of Gert’s method see Strong, C. Gert’s moral theory and its application to bioethics cases. Kennedy Institute of Ethics Journal 2005;16(1):39–58.

20. See note 12, Veatch, 1995, at 216. In this regard, Philippa Foot has proposed a similar ranking with non-consequentialist considerations being given precedence. See Foot, P. The problem of abortion and the doctrine of double effect. Oxford Review 1967;5:5–15.

21. Beauchamp, TL. The role of principles in practical ethics. In: Sumner, LW, Boyle, J, eds. Philosophical Perspectives on Bioethics. Toronto: University of Toronto Press; 1996:80. In the latest edition of Principles of Biomedical Ethics, Beauchamp and Childress define principles as “general norms derived from the common morality that form a suitable starting point for reflection on moral problems in biomedical ethics.” See note 11, Beauchamp and Childress, 2019, at 13. For a good collection of Beauchamp’s work on principles as well as a broader series of theoretical issues in contemporary bioethics see Beauchamp, TL. Standing on Principles: Collected Essays. New York: Oxford University Press; 2010.

22. See note 21, Beauchamp, 1996, at 81.

23. See note 21, Beauchamp, 1996, at 81.

24. In fact, Beauchamp specifically claims that prima facie principles “are entirely compatible with casuistry.” See note 21, Beauchamp, 1996, at 89. For a comparison of casuistry and principlism see Kuczewski, M. Casuistry and principlism: The convergence of method in biomedical ethics. Theoretical Medicine and Bioethics 1998;6:509–24. For a critique of casuistry from the perspective of moral particularism see Kaebnick, G. On the intersection of casuistry and moral particularism. Kennedy Institute of Ethics Journal 2000;10(4):307–22.

25. See note 21, Beauchamp, 1996, at 84.

26. See note 21, Beauchamp, 1996, at 84.

27. See note 21, Beauchamp, 1996, at 84.

28. See note 21, Beauchamp, 1996, at 84.

29. See note 21, Beauchamp, 1996, at 84.

30. See note 21, Beauchamp, 1996, at 84.

31. See note 21, Beauchamp, 1996, at 84.

32. Jonsen, A, Toulmin, S. The Abuse of Casuistry. Berkeley: University of California Press; 1988:250–57. On this point see as well Jonsen, AR. Case analysis in clinical ethics. Journal of Clinical Ethics 1990;1:63–65; Jonsen, AR. Of balloons and bicycles: Or the relationship between ethical theory and practical judgment. Hastings Center Report 1991;21:14–16; and Jonsen, AR. Casuistry as methodology in clinical judgment. Theoretical Medicine and Bioethics 1991;12:295–307.

33. As Margaret Little notes, “A set of features that in one context makes an action cruel can in another [context] carry no such import; the addition of another detail can change the meaning of the whole… Natural features do not always ground the same moral import, which then goes into the hopper to be weighed against whatever other independent factors happen to be present. The moral contribution they make on each occasion is holistically determined: it is itself dependent… on what other non-moral features are present or absent.” Little, M. Wittgensteinian lessons on moral particularism. In: Elliott, C, ed. Slow Cures and Bad Philosophers: Essays on Wittgenstein, Medicine, and Bioethics. Durham, NC: Duke University Press; 2001:165.

34. Dancy, J. Moral Reasons. Oxford: Wiley-Blackwell Press; 1993:67. As a result, “we cannot judge the effect of the presence of any one feature in isolation from the effect of the others. Whether or not one particular property is morally relevant, and in what way, may depend on the precise nature of the other properties of the action.” See note 1, McNaughton, 1991, at 193.

35. Jonsen AR. Strong on specification. Journal of Medicine and Philosophy 2000;25:353.

36. See note 8, Richardson, 1990, at 279. On this topic see as well his later article Richardson, HS. Specifying, balancing, and interpreting bioethical principles. Journal of Medicine and Philosophy 2000;25:285–307.

37. DeGrazia, D. Moving forward in bioethical theory: Theories, cases, and specified principlism. Journal of Medicine and Philosophy 1992;17:511–39.

38. Richardson, HS. Beyond good and right: Toward a constructive ethical pragmatism. Philosophy and Public Affairs 1995;24:131.

39. See note 8, Richardson, 1990, at 295–96.

40. See note 11, Beauchamp and Childress, 2019, at 17. Richardson gives the example of the norm that “one should not directly kill innocent persons,” which can, upon reflection, be specified into the norm that it is generally wrong to directly kill innocent human beings who have attained self-consciousness, and generally wrong directly to kill human beings with the genetic potential to develop self-consciousness who would not be better off dead, but it is not generally wrong directly to kill human beings who meet neither of these criteria. See note 8, Richardson, 1990, at 304.

41. Not all interpreters of principlism agree that it is an anti-deductivist theory. Van der Steen and Borden, for example, argue that principlism is deductive in that, although principles arise from cases, they can be deductively (even if post hoc) applied back to cases, either the original cases or new ones. See van der Steen, WJ. Facts, Values, and Methodology: A New Approach to Ethics. Amsterdam-Atlanta, GA; 1995:63 and Borden, SL. Character as a safeguard for journalists using case-based ethical reasoning. International Journal of Applied Philosophy 1999;13:93–104.

42. Richardson does not totally avoid balancing, but he severely limits its role, allowing it only minor leeway to function at more theoretical levels. However, even this limited reliance upon balancing is not entirely uncontroversial. For example, Veatch claims that “if principles are to be balanced, no norm can successfully be specified.” See note 12, Veatch, 1995, at 216. Here he appears to be suggesting that even theoretical balancing will be impacted as a result of context-changes, and this ever-changing balance prevents any concrete specification.

43. See note 8, Richardson, 1990, at 294.

44. See note 11, Beauchamp and Childress, 2019, at 19.

45. Jonsen, AR. Casuistry: An alternative or complement to principles? Kennedy Institute of Ethics Journal 1995;5:244.

46. See note 35, Jonsen, 2000, at 359.

47. Strong, C. Specified principlism: What is it, and does it really resolve cases better than casuistry? Journal of Medicine and Philosophy 2000;25:323–41. For a response to Strong see Rauprich, O. Specification and other methods for determining morally relevant facts. Journal of Medical Ethics 2011;37(10):592–96.

48. Nussbaum, M. Why practice needs ethical theory: Particularism, principle, and bad behavior. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:227–55. On this point see as well Nussbaum, M. The discernment of perception. In: Love’s Knowledge: Essays on Philosophy and Literature. New York: Oxford University Press; 1990:54–105 and Blum, L. Moral Perception and Particularity. Cambridge: Cambridge University Press; 1994:30–61.

49. Sinnott-Armstrong W. Some varieties of particularism. Metaphilosophy 1999;30:4.

50. Lippert-Rasmussen, K. On denying a significant version of the constancy assumption. Theoria 1999;65:96–98.

51. Hart HLA. The Concept of Law, 3rd ed. New York: Oxford University Press; 2011 at Chapter 7.

52. In the same chapter, Hart argues that the uncertainty arising from the open texture of rules can also push one towards rule skepticism. Particularism would be seen by many as the perfect example of this result. However, I think that Hart’s stance is ultimately very similar to a more moderate form of particularism. Hart argues that neither rule-formalism nor rule-skepticism is correct, but rather some position between the two, with both case-experience and rules needed for concrete action guidance.

53. Strong makes a similar point in his article on specified principlism. See note 47, Strong, 2000.

54. See note 12, Veatch, 1995, at 216.

55. Beauchamp and Childress attempt to address this problem by linking specification to Rawls’s method of reflective equilibrium. See note 11, Beauchamp and Childress, 2019, at 456–57. As Arras argues, a serious problem here is that appeals to reflective equilibrium as a form of justification in bioethics often “devolve into the truism that the best method requires the careful rational assessment of all the relevant philosophical arguments bearing on a subject and assessing them on their merits…. Here wide reflective equilibrium appears to be more a rather massive effort of hand waving than a precise road map to moral justification.” See Arras, JD. Methods in Bioethics: The Way We Reason Now. New York: Oxford University Press; 2019:188.

56. See note 36, Richardson, 2000, at 289.

57. See note 36, Richardson, 2000, at 289.

58. Little, M. Moral generalities revisited. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:279.

59. McDowell, J. Non-cognitivism and rule following. In: Holtzman, S, Leich, C, eds. Wittgenstein: To Follow a Rule. New York: Routledge Press; 1981:145 and McDowell, J. Criteria, defeasibility, and knowledge. In: Meaning, Knowledge, and Reality. Cambridge, MA: Harvard University Press; 2001:369–94, at 382.

60. Aristotle, Nicomachean Ethics 1103b27-1104a9. On this point see as well Aristotle, Nicomachean Ethics 1094b; MacIntyre, A. Whose Justice? Which Rationality? South Bend: University of Notre Dame Press; 1989:124–45; Beresford, EB. Can phronesis save the life of medical ethics? Theoretical Medicine and Bioethics 1996;17(3):209–24; Irwin, TH. Ethics as an inexact science: Aristotle’s ambitions for moral theory. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:100–29; Allmark, P. An argument for the use of Aristotelian method in bioethics. Medicine, Health Care & Philosophy 2005;9:69–79; Annas, J. Intelligent Virtue. Oxford: Oxford University Press; 2011, at Chapter 3; Leibowitz, U. Moral particularism in Aristotle’s Nicomachean Ethics. Journal of Moral Philosophy 2012;10:121–47; and McDowell, J. The Engaged Intellect: Philosophical Essays. Cambridge, MA: Harvard University Press; 2013, at Chapters 2 and 3. It is worth noting as well that Aristotle argues that some actions, such as adultery, theft, and murder, are always wrong, and he also acknowledges that certain emotions, such as spite, shamelessness, and envy, are always inappropriate. See Aristotle, Nicomachean Ethics 1107a10-14. Finally, it is also clear that some actions, such as repaying a debt, do not have a variable valence. In this regard, his account is similar to Ross’s pro tanto principles discussed above insofar as a reason in favor can be outweighed by a stronger reason against. As he notes, “generally the debt should be paid, but if the gift is exceedingly noble or exceedingly necessary, one should defer to these considerations.” See Aristotle, Nicomachean Ethics 1165a1-5. For a helpful discussion see Van Zyl, L. Virtue Ethics: A Contemporary Introduction. London: Routledge Press; 2019, at Chapter 8.

61. McDowell, J. Virtue and reason. In: Mind, Value, and Reality. Cambridge, MA: Harvard University Press; 1998:58.

62. On this point see as well Raz, J. The truth in particularism. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:69. In this regard, Noam Chomsky has argued that a competent speaker is able to produce a potentially infinite number of correct sentences, including sentences that have never been uttered in that language previously. See Chomsky, N. Syntactic Structures. London: Mouton & Co Press; 1957:13.

63. I would agree with Aristotle that there are some acts that we can never countenance or consider morally acceptable. Murder, sexual assault, slavery, torture, and cruelty for sadistic pleasure are examples of such acts. As Elizabeth Anscombe famously states, albeit in reference to utilitarianism, if anyone really questions whether certain depraved or vicious acts might be allowable, they show “a corrupt mind,” and it seems that to avoid this charge particularists must support some general principles universally condemning certain actions. Anscombe, E. Modern moral philosophy. Philosophy 1958;33:1–19. For a helpful exegesis of Anscombe’s critique and especially how it relates to Aristotle’s critique of the “depraved person” (akolastos) who acts contrary to correct moral principles without experiencing moral reservations, conflict, or regret see Flannery, K. Anscombe and Aristotle on corrupt minds. Christian Bioethics 2008;14:151–64. In response to criticism, Dancy has acknowledged that there could be a few invariant generalities, such as that purposefully inflicting undeserved pain is always wrong. Dancy, J. The particularist’s progress. In: Hooker, B, Little, M, eds. Moral Particularism. Oxford: Oxford University Press; 2000:131.

64. See Arras, JD. Getting down to cases: The revival of casuistry in bioethics. Reprinted in Methods in Bioethics: The Way We Reason Now. New York: Oxford University Press; 2018:58.

65. Sinnott-Armstrong, Some varieties of particularism, pp. 6–8. On this point see as well Bakhurst, D. Moral particularism: Ethical not metaphysical? In: Thinking about Reasons: Themes from the Philosophy of Jonathan Dancy. Oxford: Oxford University Press; 2013.

66. See note 1, Jackson, Pettit, and Smith, 2000, at 85.

67. On this point see Wildes, KWM. Moral Acquaintances: Methodology in Bioethics. South Bend: University of Notre Dame Press; 2000, at Chapter 3.

68. Plato, Parmenides 126a-134e. On the third man argument see as well Fine, G. Third man arguments. In: On Ideas: Aristotle’s Criticism of Plato’s Theory of Forms. Oxford: Clarendon Press; 1996, at Chapter 15.

69. Aristotle defines practical wisdom as “a true and reasoned state of capacity to act with regard to the things that are good or bad for man.” See Aristotle, Nicomachean Ethics 1140b5. On this point see Schwartz, B, Sharpe, K. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books; 2011; Russell, D. Practical Intelligence and the Virtues. Oxford: Oxford University Press; 2012; and Reeve, CDC. Aristotle on Practical Wisdom: Nicomachean Ethics VI. Cambridge, MA: Harvard University Press; 2013.

70. See note 11, Beauchamp and Childress, 2019, at 10–12. In this regard, there is a need for further work on moral dilemmas and how one might know when to abandon the project of specification (and balancing) in the face of irresolvable moral dilemmas. On this point see as well DeMarco, JP. Principlism and moral dilemmas: A new principle. Journal of Medical Ethics 2005;31:101–105. For a good collection of articles on the topic see Moral Dilemmas and Moral Theory, ed. Mason, HE. New York: Oxford University Press; 1996.

71. I would argue that non-absolute or exceptionable principles can still account for moral holism. That is, generalists in this more moderate sense can accept the particularist viewpoint that the valence of a moral property is contextualized. Non-universal but still general ethical principles could be used as justificatory grounds for relevant decisions by either explaining how the valences of moral properties change in different contexts, or by excluding variant-causing contexts from their justificatory sphere.

72. See note 1, O’Neill, 2001, at 18 and note 1, Crisp, Particularizing particularism, 2000, at 32.

73. See note 1, Crisp, 2000, at 28–29.

74. See Sidgwick, H. The Methods of Ethics. 7th ed. Indianapolis: Hackett Publishers; 1981:11. On Sidgwick see Schultz, B. Henry Sidgwick: The Eye of the Universe. Cambridge: Cambridge University Press; 2004; Phillips, DK. Sidgwickian Ethics. Oxford: Oxford University Press; 2011; and Crisp, R. The Cosmos of Duty: Henry Sidgwick’s Methods of Ethics. Oxford: Oxford University Press; 2017.

75. See note 1, Jackson, Pettit, and Smith, 2000, at 83–88.

76. See note 1, Jackson, Pettit, and Smith, 2000, at 85.

77. See note 1, Jackson, Pettit, and Smith, 2000, at 88.

78. See note 1, Jackson, Pettit, and Smith, 2000, at 88.

79. Kirchin, S. Particularism, generalism, and the counting argument. European Journal of Philosophy 2003;11:68–9, at note 15.

80. See note 79, Kirchin, 2003, at 69.

81. For example, in clinical medicine many “principles” are actually guidelines that are known to have exceptions that must be evaluated on a case-by-case basis. Thus, the guideline “splint fractured bones” is a very useful generality, and one that is clear and easy to teach and follow. A doctor who universally followed this principle would be performing the right action most of the time. In fact, a doctor who followed this principle absolutely would still be fairly adequate, perhaps even average, at his or her craft. However, absolute and rigid obedience to this rule would not be helpful to some patients, and could even severely harm others. The mature, practically wise physician who has a true understanding of his or her craft realizes the useful exceptions to this principle through prior experience and by evaluating its connection to other generalities, such as “preserve blood flow to extremities” and “lungs need adequate room for expansion to prevent collapse.” It is interesting to note how the concepts of virtue and practical wisdom become increasingly prevalent in successive editions of Beauchamp and Childress’s Principles of Biomedical Ethics. On this point see especially the section on discernment in Chapter 2, pp. 39–40.