
Part III - Theoretical Upshots

Published online by Cambridge University Press:  16 February 2024

Mona Simion
Affiliation:
University of Glasgow

Publisher: Cambridge University Press
Print publication year: 2024
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

Chapter 10 Epistemic Oughts and Epistemic Dilemmas

The following chapters examine the theoretical upshots of the positive epistemological view proposed in this book. The account developed so far delivers the result that epistemic justifiers constitute epistemic oughts. In this chapter, I discuss the worry that such accounts threaten to give rise to widespread epistemic dilemmas between paradigmatic epistemic norms. I argue for a modest scepticism about epistemic dilemmas. In order to do that, I first point out that not all normative conflicts constitute dilemmas: more is required. Second, I look into the moral dilemmas literature and identify a set of conditions that need to be in place for a mere normative conflict to amount to a genuine normative dilemma. Last, I argue that while our epistemic life is peppered with epistemic normative conflict, epistemic dilemmas are much harder to find than we thought.

10.1 Obligations to Believe and Epistemic Dilemmas

My account takes epistemic justification to be epistemic obligation. Nevertheless, it does not follow from my view that we are obliged to believe all of the things that, for example, appear in our visual fields at the same time. That is because what grounds obligations to believe is being in a position to know, and being in a position to know is limited quantitatively by our cognitive capacities.

We have also seen that, on my account, paradigmatic epistemic norms such as Kp will not come into conflict with zetetic norms, or norms of inquiry and gathering evidence. The question that arises at this point is: how about paradigmatic epistemic norms themselves? Will they not generate normative conflicts and, indeed, normative dilemmas?

First, recall that the account predicts that normative priority will be decided by availability rankings. As such, short of too many facts (i.e. more than I can take up at once) enjoying precisely the same availability ranking, the view will not result in normative dilemmas. Second, although epistemic dilemmas have been hotly discussed in recent epistemology, I remain unconvinced of their in-principle possibility. The following sections, indeed, will argue for a (modest) scepticism about epistemic dilemmas. If I am right, while our epistemic lives – in line with our normative lives more generally – often encounter decidable epistemic conflict, epistemic dilemmas are rather hard to come by.

10.2 What Dilemmas Are Not

What is a normative dilemma? On a first approximation, it seems plausible that one is facing a dilemma just in case two courses of action and two courses only are available to one, and whichever one chooses, one finds oneself in normative breach: there’s no good way out, as it were. This seems promising. Let’s spell it out:

  • Normative dilemma #1 (ND1): A state of affairs such that a subject S has only two available courses of action, both of which imply norm violation.

ND1 is false – and widely acknowledged¹ to be false – in that it is too inclusive: it defines conflicts rather than dilemmas. Not all normative conflicts are dilemmas: there are several well-theorised phenomena that prevent garden-variety normative conflicts from becoming full-blown dilemmas.

First, in cases in which one norm overrides the other, conflict is present, whereas there’s no dilemma as to the best action to pursue: it is the one recommended by the overriding norm. Indeed, it lies in the meaning of ‘overriding’ that such cases are cases of normative conflict without a dilemma. That’s what it is for a norm to override another: to come into conflict with it and take precedence. If it takes precedence, there’s no dilemma left to face the subject of the normative constraints in question: again, I face a moral conflict but not a moral dilemma when I decide to save the drowning child even at the cost of breaking my promise to meet you for lunch at noon. The norm of saving lives conflicts with the one of promise-keeping and renders one course of action, and only one, permissible – no dilemma here. Our definition needs tightening up if we are to distinguish dilemmas proper from mere normative conflicts. In addition to the conditions stipulated by ND1, at a minimum, it also needs to be the case that neither of the normative constraints is overridden.

Second, normative requirements can be overridden, but they can also be undercut. The difference, roughly, is this: in cases of overriding, we get reasons for and against a course of action phi, and the reasons against phi-ing are, for example, weightier than the reasons for phi-ing – such that the latter get overridden by the former. In cases of undercutting, by contrast, the counter-reasons speak, in the first instance, against the normative strength of the reasons in favour of phi-ing rather than directly against phi-ing. Your testimony that the train leaves at 8 a.m. is reason for me to go to the station before 8 a.m. My finding out that you are a compulsive liar undercuts this reason and renders it normatively inert. There is normative conflict between the two reasons, for sure. But the conflict fails to result in a dilemma: I know precisely what to do in this situation (i.e. not base my action on trust in your testimony).

Our definition needs to be tightened up if it is to distinguish between mere normative conflicts and normative dilemmas proper. At a minimum, on top of the conditions stipulated by ND1, we need to add an anti-overriding and an anti-undercutting condition. Here is a second pass:

  • Normative dilemma #2 (ND2): A state of affairs such that a subject S has only two available courses of action, both of which imply active norm violation.

In ND2, what is meant by ‘active norm’ is a norm that remains efficacious within the context after all things normative are considered. What ND2 implies is that, for a dilemma to be instantiated, (1) the two normative constraints need to be equally weighty (on pain of overriding taking place) and (2) it should not be the case that the one sheds doubt on the normative credentials of the other.

On closer inspection, though, ND2 is still too broad: sometimes, two active (equally weighty, non-undercut) norms can come into conflict without a dilemma being instantiated, in virtue of the fact that one takes qualitative precedence over the other. Of course, overriding and undercutting are also ways in which norms can take precedence; they are not the only ways, however. Overriding is a quantitative matter: weightier normative constraints prevail. Precedence relations, however, can also be qualitative: they can be exhibited between active, non-overridden norms as well. A paradigmatic such case is one in which one of the norms is derivative of the other. Here is such a case:

  • Promise-breaker George: George is a promise-breaker: whenever possible, he will reliably break his promises to others. One day, George promises his colleague, Anna, to call her on Thursday at precisely 12 o’clock, and, as per usual, he doesn’t care much about keeping his promise. On Thursday, he looks at his watch, comes to believe it’s 11.45 a.m., and decides to take a nap before making the call, thinking that he will be at most thirty to forty minutes late. Luckily, though, George’s watch is broken: it’s actually only 10.45 a.m. After taking his nap, George ends up calling Anna at precisely 12 o’clock.

Promise-breaker George is in breach of a bunch of conceivable norms in this scenario (Williamson forthcoming a): he has a bad disposition – he is a promise-breaker; he acts in ways that would, had his watch not been broken, have resulted in breaking his promise; in general, he seems to be rather inconsiderate and untrustworthy.² But here is one thing that cannot be said about George: that he broke his promise to Anna. Morally lucky George did no such thing. Indeed, he called Anna at precisely 12 o’clock as promised.

What is happening in this case is a breach of a number of norms that derive from what we may call, following Williamson (forthcoming a), the ‘primary’ norm of promise-keeping (‘keep your promises!’) – derived norms such as ‘don’t be a promise-breaker!’ and ‘don’t act like a promise-breaker would!’ – without any breach of the primary norm itself.

What is crucial to note about this case is that George cannot comply with both the primary norm of promise-keeping – ‘keep your promises!’ – and, for example, the derived norm ‘do what a promise-keeper would do!’. Meeting one will ensure he is in breach of the other. In this respect, one would think, the case looks initially promising as a dilemmatic case. Had George not been in breach of some of the derived norms, he would have been the victim of moral bad luck and would have ended up in breach of the primary norm at stake: had he, for example, called Anna at what he thought – according to his broken watch – was 12 o’clock, he would have failed to keep his promise (since he would have, in fact, called her at 11 o’clock). Last but not least, note also that this is not a case of either overriding or undercutting: the norms at stake here – both primary and derived – are standing, active norms. It’s easy to see this from the fact that, whichever of them George were in breach of, he would intuitively be the proper subject of blame. It’s worthy of blame to break your promises, but so is being the kind of person who would do so and acting accordingly. Still, George is not faced with a moral dilemma here: the primary norm of promise-keeping, as its description suggests, has priority. To see this, note that we describe the case as one of moral good luck. Had the norm of promise-keeping not taken precedence, it would be a mystery why we do not describe the case in neutral terms (should the norm of promise-keeping be just as stringent as the norm requiring one to be the kind of person who keeps promises, etc.) or in negative terms (should what we have dubbed ‘derived’ norms take precedence).

What George’s case suggests is that we need a further (and final) restriction on our account of dilemmas: a state of affairs will qualify as a dilemma if and only if a subject S has only two available courses of action, both of which imply active norm violation, where neither of the norms at stake is derivative of the other. Here is a simpler way to put this:

  • Genuine normative dilemma (GND): A state of affairs such that a subject S has only two available courses of action, both of which imply norm violation, where neither of the norms at stake takes precedence over the other.

In turn, as we have seen, taking precedence can come in many shapes, including overriding, undercutting, or normative primacy. William Styron’s (1979) Sophie’s Choice presents a useful example of such a genuine moral dilemma. Sophie Zawistowska has been asked to choose which of her two children, Eva or Jan, will be sent to the gas chamber in Auschwitz. An SS doctor, Fritz Jemand von Niemand, will grant a dispensation to only one of Sophie’s children. If she does not choose which one should live, Dr von Niemand will send both to their deaths.

In Sophie’s case, the moral norm requires her to make a choice rather than not: otherwise, the worst scenario obtains – both children will die. So withholding from acting is not an option. The two options are: sacrifice Eva, or sacrifice Jan. Both options represent breaches of standing, non-overridden (the two options are equally bad; no norm is weightier than the other), non-undercut moral (and prudential) norms, neither of which takes priority over the other. Indeed, plausibly, the moral norm at stake on both horns of the dilemma is one and the same: ‘don’t put the life of your child in danger!’. In this, Sophie’s Choice is a GND.

10.3 Epistemic Non-dilemmas

This section puts the results above to use: it looks at several cases featured in the epistemological literature as alleged examples of epistemic dilemmas and argues that they are garden-variety normative conflicts rather than genuine dilemmas.

Let’s start with a very straightforward case:

  • Rebutting defeat: My four-year-old hears me coughing and tells me I have a cold. My doctor disagrees: it’s bronchitis.

Here’s a garden-variety epistemic normative conflict that is not a normative dilemma. Rather, it’s a straightforward case of normative overriding: I have stronger reason to trust my doctor’s testimony than my four-year-old son’s. Indeed, note that epistemological terminology implies that this is a non-dilemmatic normative conflict: after all, this is a classic case of full rebutting defeat acting against the epistemic reason provided by my son’s testimony. The presence of full rebutting defeat, however, implies that the reasons against my belief that it’s a cold are weightier than those in favour: otherwise, full defeat would not be instantiated. The presence of defeat, then, precludes this variety of normative epistemic conflict from being a genuine dilemma. This is, of course, a mere epistemic incarnation of the case in which I am late for lunch because I stop to save the child: a classic case of normative overriding.

What about a case where two equally reliable, equally trustworthy, etc., sources (plug in your favourite view of the epistemology of testimony) offer conflicting testimonies? No problem at all: uncontroversially, the epistemically correct thing to do is to suspend/withhold belief;³ again, there is no epistemic dilemma here. Indeed, strictly speaking, we should expect epistemology to be, if anything, most often the proper home of normative trilemmas rather than dilemmas: after all, in epistemology, there’s always the possibility of suspending belief.

So far so good: one wouldn’t expect much in the way of controversy to be triggered by this fairly straightforward diagnosis of rebutting defeat.

But here is a type of case hotly discussed under the heading of an epistemic dilemma, starting back in the early 1990s:

  • Evidence-undermining belief: As S considers some proposition, p, it is clear to S that an effect of S’s believing p would be to undermine the evidence S has, which otherwise is sufficient epistemic reason for S to believe p.

Evidence-undermining belief is a type of case extensively discussed in several epistemological works (e.g. Conee 1987, 1992; Sorensen 1987; Richter 1990; Foley 1991; Kroon 1993; Odegard 1993). Here, for example, is Odegard’s take on the normative landscape present in this type of case:

Clearly we should not deny the belief, since this would be to deny a belief for which we have adequate evidence, both prior to adopting a position on it and when we adopt a position on it. But we should not affirm the belief either, for this would be to affirm a belief for which we would not have adequate evidence when we held it […]. Yet it can seem that we should not withhold on the belief either, since in withholding on it we fail to adopt a belief for which we have adequate evidence when we consider it for adoption. So it can seem that whatever happens, we do something that we should not do.

(Odegard 1993, 161)

The discussion in Section 10.2, however, should by now have made it clear that Odegard’s trilemmatic diagnosis here is mistaken. What we have here is a straightforward case of normative undercutting: the normative force of the evidence that the subject has for thinking that p is undercut by the evidence that the subject has for thinking that, as soon as they adopt a belief in p, this would undermine the normative strength of the evidence for p. This is a straightforward case of undercutting defeat. Indeed, here is Conee’s diagnosis along the same lines:

When believing would result in a loss of crucial evidence for the believed proposition, adopting the belief would not bring about knowledge of the proposition. Foreseeing this sort of loss excludes having an epistemic reason to believe when contemplating the proposition.

(Conee 1993, 478)

If this is a case of undercutting defeat, however, it cannot, in principle, constitute a normative dilemma, for roughly the same reason why cases of rebutting defeat cannot constitute dilemmas: that is what it means for evidence to be defeated – it is for it to lose its initial normative strength. Fully undercut evidence no longer supports one’s belief in the target proposition. The case cannot be at the same time one of full undercutting defeat and a dilemma.

How about partial undercutting defeat? Can’t it be that the dilemma arises when the undercutter only partially affects the first-order evidence? Consider:

Logic problem: Anna is a logic student who is evaluating a tautology (T). Anna is certain that (T) is true. However, her logic professor, Chad, then tells her that before she began the exam, she was slipped a reason-distorting drug that impairs one’s ability to solve logic problems; those who are affected by the drug only reach the right conclusions 50 per cent of the time. As it turns out, though, unbeknownst to both Anna and Chad, the drug was just a placebo and Anna’s logic reasoning abilities were not affected in the least.

(Adapted from Leonard 2020)

Chad’s testimony provides Anna with higher-order evidence to the effect that there is a 50 per cent chance that she botched her assessment of what her first-order evidence actually supports. Her first-order evidence is thus partially undercut. What is Anna supposed to believe? According to some,⁴ Anna is faced with a dilemma that requires a lot of epistemic fine-tuning to explain away.

However, note that, again, insofar as we accept the case as one of undercutting defeat, it can’t be that this is a dilemma rather than a mere normative conflict (and if we don’t thus accept it, no dilemma arises): its being a case of partial defeat implies that the higher-order evidence partially neutralises the normative force of the first-order evidence. The fact that the defeat is merely partial does nothing to change this: by the way the case is built, at least on a first approximation, Anna is left with 50 per cent first-order normative support for her belief that (T). If so, there is no dilemma here: Anna should suspend belief, since she has equal support for (T) and non-(T).

To see this further, consider what would have to be the case for this to be a genuine dilemma. For a genuine dilemmatic normative conflict to be instantiated, we would have to think that something like the following principle (Worsnip 2018) holds:

Possibility of Iterative Failure (PIF). It is possible that:

  (i) S’s evidence supports D(p); and

  (ii) S’s evidence supports believing that her evidence does not support D(p),

where D(p) is a possible doxastic attitude for a subject S towards a proposition p. I find PIF implausible, for the following reason (which I expanded on in Chapter 8): evidence about what evidence supports will normatively affect what evidence supports, in one way or another. Sometimes, when weightier than the first-order evidence, it will defeat its normative strength. Andy’s testimony that p: ‘the train leaves at 8 a.m.’ is a reason for me to believe that the train leaves at 8 a.m. Testimony from the much more reliable (trustworthy, etc.) Mary that Andy is a compulsive liar is a reason for me to believe that Andy’s testimony is less weighty than I thought, and thus to lower my confidence in p. This need not always be the case, of course. It may be that defeat goes the other way – in cases in which the first-order evidence is weightier. My a priori justification that there are no round cubes will likely defeat my four-year-old’s testimony that I’m confused, since he just saw a round cube at his friend’s house. At other times, the two sources can also be equally weighty, in which case, again, the proper thing to do is to suspend. The important point, though, is that higher-order evidence interacts normatively with first-order evidence, which renders PIF implausible.⁵

Do we have any reason to believe PIF to be true, in spite of this prima facie plausible picture of how the two bodies of evidence interact? Alex Worsnip argues that we do; according to Worsnip, rejecting PIF commits one to an implausibly strong claim about justification: denying PIF, according to him, requires denying that one can have, all things considered, misleading evidence about what one’s evidence supports; in other words, it requires holding that justified false beliefs about what one’s evidence supports are impossible. That is a strong claim, as any claim that a particular kind of justified false belief is impossible would be (Worsnip 2018).

Let’s state this clearly. According to Worsnip, the following claim holds:

  • Non-PIF implies no justified false beliefs about evidential support (NJFBES): If non-PIF, then one cannot have justified false beliefs about what one’s evidence supports.

I agree with Worsnip that NJFBES is implausibly strong.⁶ I disagree, though, with the claim that the denial of PIF implies it. In particular, denying PIF is perfectly compatible with having a justified false belief that your evidence does support p. Rather, what the denial of PIF implies is the weaker:

  • No justified false beliefs about lack of evidential support (NJFBLES): One cannot have a justified false belief that one’s evidence does not support p.

NJFBLES might be hard to recognise at first, but, as opposed to its more ambitious cousin NJFBES, it is not a particularly controversial claim: it merely states that undercutting defeat via higher-order evidence is a genuine phenomenon. As soon as you have justification for thinking your evidence does not support p, either the higher-order justification negatively affects the normative strength of your first-order evidence, making it true that you don’t have evidential support for p, or, if weightier, your first-order evidence affects the normative strength of the higher-order justification, making it false that you are justified. If you think there is such a thing as undercutting defeat, you hold NJFBLES to be true (and for people who don’t, see the discussion in Chapter 8 on scepticism about the defeating power of higher-order evidence, as well as the discussion below on knowledge-first, level-splitting views). If so, the fact that denying PIF does imply NJFBLES (but not NJFBES) is not a problem for denying PIF but at worst a natural feature, and more likely a theoretical virtue of prior plausibility (given the widespread popularity of undercutting defeat).

To sum up: for all of the above cases, I have argued that if we accept that they are cases of defeat, their being epistemic dilemmas becomes an in-principle impossibility, since their being cases of defeat implies that they are cases of either normative overriding or normative undercutting. What if one wants to deny the very idea of defeat? Here is another type of case featuring what seems to be undercutting defeat that has been discussed in more recent literature:

Maths: A competent mathematician has just proved a surprising new theorem. She shows her proof to several distinguished senior colleagues, who all tell her that it involves a subtle fallacy. She cannot quite follow their explanations of her mistake. In fact, the only mistake is in their objections, obscured by sophisticated bluster; her proof is perfectly valid.

In this case, too, if we accept that what is going on is undercutting defeat, we are left with mere normative conflict without normative dilemma: depending on the weight of the colleagues’ testimony, the mathematician’s first-order support for the theorem will be more or less diminished. The amount of warrant left will support either belief (if only marginally affected) or disbelief (if seriously affected), or else suspension.

Several authors in the knowledge-first camp, though, argue against undercutting defeat for knowledge. According to people like Maria Lasonen-Aarnio (2014) and Tim Williamson (forthcoming b), insofar as our mathematician knows that the theorem in question holds, misleading higher-order evidence will have no impact on the normative credentials of their belief: they should hold steadfast. In turn, these philosophers explain the intuition of the impermissibility of such dogmatic doxastic behaviour via appeal to epistemic blameworthiness. According to Lasonen-Aarnio, the intuition that dogmatism is suspicious doxastic behaviour even in the presence of knowledge is to be explained by the fact that ignoring evidence is, generally speaking, a bad epistemic disposition that is worthy of blame. As such, while our mathematician is not in breach of the norm of belief in this case – since they are a knower – they are blameworthy for displaying a bad epistemic disposition in ignoring available evidence.

Does this take on these cases create problems for our diagnosis of them as non-dilemmatic normative conflicts? The answer is ‘no’. Insofar as one holds that there is some sort of priority ordering between the two norms coming into conflict – the knowledge norm of belief on the one hand and the norm prescribing against a disposition to ignore evidence on the other – the case is not an epistemic dilemma. Recall the case of George the promise-breaker: just like in that case, insofar as one takes one of these norms to have primacy over the other, GND is not instantiated.

Now, it is plain to see that according to the no-defeat champions, the knowledge norm takes primacy in Maths: first, because they hold that the mathematician should hold steadfast in this case – which suggests that the knowledge norm takes primacy over the dispositionalist norm – and second because they hold that the mathematician is in mere blameworthy norm compliance rather than in genuine norm violation. If so, by the lights of this variety of undercutting defeat deniers, there will be no dilemma instantiated in this case.

We have seen that both champions and foes of undercutting defeat will have to deny that cases like Maths instantiate epistemic dilemmas. Note, though, that there is still a bit of theoretical distance between this result and an in-principle impossibility of cases like these being dilemmatic: after all, there is one possibility left in the logical space. One could deny undercutting defeat and at the same time hold that the first- and second-order norms in this case have equal normative strength: neither takes primacy. If so, one would think, we would have an instance of GND in this case: an epistemic Sophie’s Choice.

Fortunately, though, that’s not quite right. Our epistemic lives are easier than our moral lives: what is often an epistemically available option, but is not always a morally or prudentially available option, is suspending. By stipulation, Sophie does not have the option not to make any choice between her children: if she refuses to choose, they will both be killed. In cases of epistemic conflict, however, suspending belief is often an available option. As such, mere normative strength parity will not be enough to generate a dilemma.

This will be so in cases of both (alleged) undercutting and rebutting defeat: for the theorist who rejects defeat and upholds normative parity, what is going on in these cases is a conflict between two equally weighty norms – one requiring belief and one requiring disbelief in the relevant target proposition. If so, epistemology has an easy answer to these cases: the subject must, ceteris absentibus, suspend.

What the discussion so far suggests is that epistemic dilemmas are hard to come by. What we would need to generate an epistemic Sophie’s Choice is, for example, equally weighty reasons against believing that p and against believing that non-p, together with an even stronger reason against suspending belief. Here it is:

  • Genuine epistemic dilemma (GED): A state of affairs such that believing that p, believing that non-p, and suspending on p all imply epistemic norm violation, where the norm forbidding one of the three options is weightier than the remaining two and neither of the remaining two norms takes precedence over the other.

Alternatively, we can also have a genuine epistemic trilemma, should the norms in question be equally weighty:

  • Genuine epistemic trilemma (GET): A state of affairs such that believing that p, believing that non-p, and suspending on p all imply epistemic norm violation, where none of the norms at stake take precedence over the others.

Note how far we’ve come from our first pass at isolating dilemmas proper from mere normative conflicts: normative conflicts are ubiquitous in epistemology; di/trilemmatic conflicts, however, are less so, if they exist at all.

Could we get something like GED/GET in our epistemic life? One thing to notice, from the start, is that what we would need is a proper epistemic reason against suspending. Stipulating that you’re bound to either believe or disbelieve because a villain is holding a gun to your head and threatens to kill you if you suspend, even though your evidence equally supports p and non-p, will not generate an epistemic dilemma, but rather an inter-normative conflict with an easy way out: all things considered, you should randomly believe whatever just to save your life. Epistemically, though, you should suspend.

I would like to end this chapter on a more optimistic note than the one on which I have proceeded so far: I would like, that is, to propose two cases that, at least at first glance, look to me like better candidates for an epistemic di/trilemma than what we have been looking at so far. I am not myself convinced that they will hold water ultimately (which is why I dub them, modestly, ‘attempted epistemic di/trilemmas’). But it does seem to me, in the light of the results in this chapter, that they stand a better chance of instantiating di/trilemmatic normative conflict proper than the cases that we have been looking at. Because of this, I think they are worth putting on the table for further discussion. Here they are:

  • Attempted epistemic trilemma (AET): Mary, John, and Anna are equally reliable, equally trustworthy testifiers, and you know them to be such (again, plug in whatever else you need to instantiate epistemic justification on your favourite view of testimony). Mary tells you that p: the train leaves at 8 a.m. John tells you that non-p: the train does not leave at 8 a.m. Anna tells you that you don’t have equally weighty evidence for p and non-p (alternatively, Anna tells you that it’s epistemically impermissible for you to suspend belief on whether the train leaves at 8 a.m.).

And, correspondingly:

  • Attempted epistemic dilemma (AED): Mary and John are equally reliable, equally trustworthy testifiers, and you know them to be such. Mary tells you that p: the train leaves at 8 a.m. John tells you that non-p: the train does not leave at 8 a.m. Anna is the most reliable (trustworthy, etc.) testifier you know. Anna tells you that you don’t have equally weighty evidence for p and non-p (alternatively, Anna tells you that it’s epistemically impermissible for you to suspend belief on whether the train leaves at 8 a.m.).

There are a few things to notice about these cases. First, note that the cases need not be spelled out as featuring testimony; the choice here is driven by convenience. Parallel cases can be described with any other source of knowledge. Nor does it have to be the case that one and only one type of source is at stake: a combination would do, too. Second, about AED: it is meant to be the structural epistemic equivalent of Sophie’s Choice. Third, note that AED and AET are only di/trilemmas if we assume that a view denying undercutting defeat is false, and thus that the higher-order evidence provided by Anna affects the justification you get from the first-order evidence generated by Mary and John. Otherwise, the case will be one of permissible suspension, and thus no dilemma will be instantiated.

Are AED and AET genuine epistemic di/trilemmas? Again, I’m not fully convinced: it may depend on what the correct view of evidential weight turns out to be (e.g. the correct view of evidential weight might entail that what one should do in these cases is suspend on everything: p, non-p, and the issue of what your evidence supports). I do believe, though, that these cases are worthy of serious attention, in that, as opposed to other cases that are historically popular in the literature, they do instantiate a di/trilemmatic structure proper: it looks as though, that is, whatever one decides to do – doxastically speaking – in these cases, one is in breach of equally strong, standing norms, neither of which takes priority over the other. (Compatibly, of course, structure might not be all that there is to epistemic dilemmatic conflict.)

10.4 Conclusion

I have defended modest scepticism about epistemic dilemmas: they’re hard to find. My scepticism, to be clear, only falls short of being radical insofar as the attempts I made at mimicking a Sophie’s Choice structure for the epistemic – or similar attempts – can be made to work. I am not myself convinced that they will, however. If they turn out to fail, I want to claim that we have reason to be very pessimistic about the in-principle possibility of an epistemic dilemma. I have also argued that an account like mine, taking justifiers to be obligations, will not deliver the theoretically suspicious result that the epistemic domain is peppered with dilemmas, but rather the more modest result that, just as in other normative domains, the epistemic sometimes faces us with normative conflict: sometimes, I can’t make it to lunch in time and save the child from drowning – something has to give. Epistemically, things look pretty similar.

Chapter 11 Scepticism as Resistance to Evidence

The view of evidence, defeat, and suspension put forth here delivers the result that paradigmatic scepticism about knowledge and justification is an instance of resistance to evidence. This chapter argues that this result is correct. In order to do that, I look at extant neo-Moorean responses to purported instances of failure of knowledge closure (Pryor 2004; Williamson 2007) and warrant transmission and argue that they are either too weak – in that they concede too much to the sceptic – or too strong – in that they cannot accommodate the intuition of reasonableness surrounding sceptical arguments. I propose a novel neo-Moorean explanation of the data, relying on my preferred account of defeat and permissible suspension, on which the sceptic is in impermissible suspension but in fulfilment of their contrary-to-duty epistemic obligations.

11.1 Two Neo-Mooreanisms

Moore sees his hands in front of him and comes to believe that HANDS: ‘hands exist’ based on his extraordinarily reliable perceptual belief-formation processes. Moore’s belief is warranted, if any beliefs are: Moore is an excellent believer. Indeed, Moore knows that hands exist. In spite of his laudable epistemic ways, Dretske (1971) thinks Moore shouldn’t feel free to do whatever it pleases him to do with this belief, epistemically speaking; in particular, Dretske thinks that, in spite of his warranted belief that HANDS, Moore should refrain from reasoning to some propositions he knows to be entailed by HANDS, such as WORLD: ‘there is an external world’. He thinks that this is an instance of closure failure for knowledge: we don’t always know the things that we know to be entailed by what we know. In better news, conversely, that’s why the sceptic is wrong to think that my not knowing that I’m not a brain in a vat implies that I don’t know any of the ordinary things I take myself to know.

Wright (2002, 2003, 2004) agrees: Moore shouldn’t reason to WORLD from HANDS. However, that’s not because closure fails, but because the stronger principle of warrant transmission fails: the problem here, according to Wright, is not that we sometimes fail to know the stuff that we know is entailed by what we know. Rather, the issue is that the warrant Moore has for HANDS fails to transmit to WORLD. Compatibly, though, Moore may still be entitled to believe WORLD on independent grounds. If Moore is entitled to believe HANDS, then perhaps he must also be entitled to believe WORLD. But it doesn’t follow that his warrant to believe WORLD is his warrant to believe HANDS. Rather, it may be that Moore needs to be independently entitled to believe WORLD to begin with if he is to be entitled to believe HANDS.

Many philosophers are on board with rejecting at least one of these principles – be it warrant transmission alone or closure as well. At the same time, since closure and warrant transmission constitute a bedrock of our epistemic ways – indeed, they are crucial vehicles for expanding our body of knowledge – one cannot give them up without a working restriction recipe: if closure and/or warrant transmission don’t hold unrestrictedly, when do they hold? It is fair to say that the jury is still out on this front, and a satisfactory restriction recipe does not seem to be within easy reach.¹

That being said, several philosophers take the alternative route of resisting the failure claims altogether and thus fully dismiss the data: according to them, closure and warrant transmission are theoretical tools too important to be abandoned on the grounds of misguided intuitions. They reject the intuition that something fishy is going on in Moore’s argument and argue that scepticism is just an instance of cognitive malfunction: the sceptic’s cognitive system malfunctions in that it fails to get rid of their unjustified sceptical beliefs in favour of the justified Moorean conclusion. I call these people ‘radical neo-Mooreans’. Here is Williamson:

Our cognitive immunity system should be able to destroy bad old beliefs, not just prevent the influx of bad new ones. But that ability sometimes becomes indiscriminate, and destroys good beliefs too.

I like radical neo-Mooreanism a lot. The majority reaction to this move, however, is that it is less than fair to the sceptic; indeed, this view (intuitively unfairly) places scepticism, without qualification, in the same normative boat as other epistemic malfunctions, such as wishful thinking. It is undeniable, though, that in the case of the sceptic, but not in the case of the wishful thinker, we think that there is something reasonable – even if not quite right – about their resistance to Moore’s argument. This intuitive difference cries out for an explanation.

At the other end of the neo-Moorean spectrum, we find concessive neo-Mooreans (e.g. Pryor 2004, 2012); these philosophers accept both closure and transmission in Moorean inferences and try to come up with alternative explanations of the data (i.e. with an alternative account of what is intuitively amiss with Moore’s argument). In the next section, I look more closely at the concessive neo-Moorean explanation of this datum.

11.2 Against Concessive Neo-Mooreanism

According to Jim Pryor (2004), while Moore is right to reason from HANDS to WORLD, he wouldn’t be very convincing were he to do so in conversation with a sceptic. The problem behind the intuitive fishiness of Moore’s reasoning pattern is pragmatic, not epistemic: it is lack of dialectical force, not lack of warrant transmission, that’s triggering the uneasiness intuition. In the cases of alleged failure of closure and/or transmission, warrant transmits, but the argument fails dialectically due to psychological higher-order defeat.² The sceptic about WORLD will not be convinced by Moore’s argument in its favour from HANDS. Here is Pryor:

For a philosopher with such beliefs [i.e. sceptical beliefs], it’d be epistemically defective to believe things just on the basis of her experiences – even if those experiences are in fact giving her categorical warrant to so believe.

Why would it be thus epistemically defective? According to Pryor, the sceptic’s unjustified sceptical beliefs rationally obstruct them from believing based on Moore’s argument via psychological defeat. In particular, Pryor thinks that Moore’s argument gives the sceptic propositional justification for the conclusion, but it fails to generate doxastic justification due to the psychological defeat generated by the sceptic’s previously acquired sceptical beliefs. Since the sceptical beliefs are not justified, according to Pryor, they don’t defeat the propositional justification generated by Moore’s argument. They do, however, rationally obstruct the sceptic from justifiably believing the conclusion of Moore’s argument, and in this they defeat the sceptic’s doxastic justification.

The point, then, in a nutshell, is that even though it transmits warrant, the Moorean argument fails to convince the rational sceptic in virtue of the conflict between the Moorean claims and the sceptic’s previously held beliefs. The sceptic has propositional justification but does not have doxastic justification for HANDS and WORLD.

In what follows, I will take issue with this claim at several junctures. First and foremost, though, it is worth clarifying exactly what the content is of the sceptical beliefs that allegedly do the defeating work here. I want to start off by noting that it is implausible to think that the sceptical belief at stake in the literature is (or should be) something like non-WORLD: ‘the external world does not exist’. After all, what we are talking about – and the philosopher who is worth engaging with – is a reasonable sceptic who, for example, believes in underdetermination (i.e. who thinks that, for all they know, they may well be a brain in a vat), not someone who is anxiously fully confident that they’re a brain in a vat. The reasonable sceptic who is worth engaging with thinks that, for all the evidence that they have, there may well be no external world. If so, the reasonable sceptic will, at best, have a 0.5 credence that non-WORLD, or else they will suspend belief on the issue. Not much will hang on this below, but since I am interested in being maximally charitable to concessive neo-Mooreanism, I will, for the most part, discuss the reasonable sceptic rather than the maximally anxious sceptic in what follows. Everything I will say, though, will apply mutatis mutandis to the anxious sceptic as well.

Now here is a widely endorsed thesis in philosophy: justification is normative. The following is an attractive way of capturing this thought: one’s phi-ing is prima facie practically, morally, epistemically, etc., justified if and only if one prima facie practically, morally, epistemically, etc., permissibly phis. Plausibly enough, then, one’s belief that p is epistemically justified if and only if one epistemically permissibly believes that p. Justifiers are considerations that support belief, in that, if all else goes well (i.e. proper basing, no defeat, good processing, etc.), enough justifiers render a belief epistemically permissible.

Where does defeat fit within this picture? Just like justification, defeat is a normative category, in that it affects the permissibility of belief. Unlike justification, however, its function is to counter rather than support believing. If justifiers support belief – they contribute to rendering it permissible – defeaters contribute to rendering it impermissible. It is plausible, then, to think that defeat is the arch-enemy of justification: if justification is normative with a positive valence – in that it renders belief permissible – (full) defeat is normative with a negative valence, in rendering belief impermissible. In reason terms, if you wish, justifiers are normative reasons for belief, whereas defeaters are normative reasons against believing.

Now let’s go back to Pryor’s account of what goes on in the exchange between Moore and the sceptic. Recall that, according to Pryor, even though Moore’s argument does provide the sceptic with propositional justification, it fails to provide them with doxastic justification, in virtue of their unjustified sceptical beliefs defeating the latter but not the former. As such, according to Pryor, the sceptic’s belief that HANDS (and WORLD) based on Moore’s argument would be rendered unjustified via defeat.

The problem with this picture is that it’s not clear how an unjustified belief can have defeating force to begin with. To be clear, I am not claiming that we do not often resist information that we are presented with because of our previously held unjustified beliefs. Indeed, we often resist information presented to us for bad reasons (e.g. due to wishfully believing that it is not true; think, for instance, of cases of resistance to evidence due to partisanship in virtue of friendship, cases of people in abusive relationships who refuse to acknowledge the abuse, etc.). The question at stake when it comes to defeat, though, is not one concerning the possibility of resistance to evidence but one concerning its permissibility: since justification and defeat are normative, they can only be instantiated in cases in which permissibility is at stake. Cases of wishful thinking are paradigmatic cases in which the hearer is, to use Pryor’s term, ‘obstructed’ from believing information that is presented to them due to their wishes. Clearly, though, wishful thinking cases are impermissibility cases: the hearer should not, as a matter of fact, resist the testimony in question, even though they do. Again, to follow Pryor’s terminology, these are cases in which the believer is not ‘rationally obstructed’ from forming said beliefs but merely ‘obstructed’. Or, to put it in reason terms, their unjustified, wishful thinking-based beliefs are mere motivating reasons for resisting testimony but not normative reasons.

If all of this is so, the question that arises is: is the sceptic ‘rationally obstructed’, as Pryor would have it, from adopting a belief based on Moore’s testimony by their previously held unjustified sceptical beliefs, or rather, just like the wishful thinker, merely ‘obstructed’ from so doing? Since defeat is a normative category, and since, by Pryor’s own stipulation, the sceptic’s sceptical beliefs are unjustified, it would seem as though they do not qualify as justification defeaters proper, but rather as mere motivating reasons for resisting Moore’s argument. The non-normative cannot defeat the normative: motivating reasons cannot outweigh normative reasons normatively. Just because I wish really hard to steal your purse, it does not follow that it is permissible to steal your purse: my motivating reasons in favour of stealing, no matter how strong, cannot outweigh the normative reasons against stealing, since they don’t factor into the overall permissibility calculus to begin with.

Why, then, is it intuitive and, according to Pryor, right to think that, once one has adopted a belief that non-p (or a doubt about whether p, or a 0.5 credence that non-p), it would be importantly epistemically defective to adopt a subsequent belief that p? Take the following standard case of higher-order defeat: I come to believe that the walls in your studio are white but illuminated by a red light to look red. Subsequently, upon arriving at your studio, it seems problematic for me to adopt the belief ‘the wall in front of me is red’ based on my corresponding perceptual experience as of a red wall. Why is this so? In particular, even if we stipulate that my initial belief that the wall is white and illuminated to look red is unjustified, why does it seem that, now that I hold it, I shouldn’t just trust my perceptual experience?

Maybe the answer to this question has something to do with the order in which the beliefs have been acquired; that is, maybe a difference in which doxastic states are already in place is an epistemologically significant difference. Indeed, Pryor himself alludes to an answer along these lines. According to him, were the sceptic to believe based on Moore’s testimony that HANDS, and thereby WORLD, their belief would be irrational because it would not cohere with their previously held sceptical beliefs. Since, according to Pryor, irrationality precludes justification, were the sceptic to believe what Moore says, their belief would also be unjustified:

I will count a belief as rational when it’s a belief that none of your other beliefs or doubts rationally oppose or rationally obstruct you from believing. […] A rational commitment is a hypothetical relation between your beliefs; it doesn’t ‘detach’. That is, you can have a belief in P, that belief can rationally commit you to believe Q, and yet you be under no categorical requirement to believe Q. Suppose you believe Johnny can fly. This belief rationally commits you to the belief that someone can fly. If you’re not justified in believing that Johnny can fly, though, you need not have any justification for the further belief. You may even have plenty of evidence and be fully justified in believing that no one can fly. But your belief that Johnny can fly still rationally commits you to the belief that someone can fly. Given your belief about Johnny, if you refrain from believing that someone can fly, you’ll thereby exhibit a rational failing.

(Pryor 2004, 363–364)

Since rational failings are incompatible with justification, Pryor takes it that this hypothetical type of normativity that he associates with rationality – of the form ‘if you believe that p, then you are rationally committed to believing that q’ – will affect the permissibility of belief tout court: were the sceptic to believe what Moore tells them, their belief would be irrational – since they are antecedently committed to believing the opposite – and thereby unjustified.

There are two problems with this normative assessment, though. First and foremost, note that there are two ways of resolving cognitive dissonance due to holding two conflicting beliefs B1 and B2: one can either abandon B1 or abandon B2. Coherence doesn’t tell us which one we should choose: it merely tells us that one needs to go.³ There are two ways of proceeding in cases in which one is presented with information B2 that runs counter to one’s extant belief B1: one can resist adopting B2 or, alternatively, one can abandon B1. Again, coherence doesn’t recommend any particular course of action: it just tells us that we need to choose between them.

One thing that Pryor could reply at this juncture is: time makes a difference, epistemically. The previously held belief takes precedence over the incoming information; this is what explains why the sceptic is rational to resist Moore’s argument.

The question that arises, though, is: why should we think that time is of such decisive epistemological significance? Just because the sceptical belief precedes Moore’s testimony temporally, why should we think that it also gets normative priority? After all, consider the following pair of cases (adapted from Jessica Brown 2018):⁴

  • Case 1: A reliable testifier A, who knows that p, asserts that p. At the very same time as receiving A’s testimony, the hearer also receives contrary testimony from another reliable testifier, B, that not-p.

  • Case 2: We slightly change Case 1 so that the testimony from B arrives just a bit later than the testimony from A, but for whatever reason the hearer does not form any belief about p before the testimony from B arrives.

In these cases, the evidential and doxastic situation is constant: one piece of testimony for p and one against p, and there is no difference in mental states. Clearly, the time difference will not make any epistemic difference: in both Case 1 and Case 2, the hearer has equally strong evidence for and against p. They should suspend belief. But now consider:

  • Case 3: This differs from Case 2 only in the following respect: as a result of receiving A’s testimony, the hearer forms the belief that p before receiving B’s testimony.

Note that there is no temporal difference between Case 2 and Case 3. As such, even by the lights of the philosopher who believes that time can make an epistemic difference, there should be no difference in epistemic assessment either. But if there is no epistemic difference between Cases 1 and 2, nor any epistemic difference between Cases 2 and 3, it follows that there is no epistemic difference between Cases 1 and 3 either. If so, what the hearer should do, in Case 3 just as in Cases 1 and 2, is suspend, rather than give priority to the belief they formed first and dismiss the later testimony.

Let’s take stock. We have seen that considerations pertaining to coherence cannot explain why we should think that the sceptic is rational to resist Moore’s argument: coherence is indifferent between resisting Moore’s argument and abandoning the previously held sceptical belief. We have also seen that time does not make an epistemic difference either. If so, just because a belief is antecedently held, it does not follow that it takes epistemic priority. All of this suggests that the sceptic has no epistemic normative reason to give priority to their sceptical belief and thereby resist Moore’s argument.

Furthermore, recall that, on Pryor’s view, Moore’s argument is justification conferring, whereas the sceptical belief is unjustified. If so, there is epistemic normative reason for the sceptic to adopt the conclusion of Moore’s argument, and there is no epistemic normative reason to hold onto the sceptical belief – although, of course, the sceptic may well have a merely motivating reason to do so. All in all, it would seem, the sceptic ought (epistemically) to abandon their sceptical belief and adopt the conclusion of Moore’s argument. The concessive neo-Moorean solution to the sceptical puzzle is wrong: while Moore’s argument may well often fail to convince the sceptic, this is not because it lacks dialectical power, but rather because the sceptic is epistemically impermissibly resisting its conclusion in virtue of their previously held unjustified sceptical beliefs.

11.3 A New Radical Neo-Mooreanism

Let’s take stock again: we’ve seen that radical neo-Mooreanism – claiming that the sceptic’s resistance to Moore’s argument is an instance of epistemic malfunction – is thought by many to fail to offer a fully satisfactory explanation of the datum, in that it places the sceptic in the same boat as wishful thinkers, epistemically speaking. Intuitively, however, we find the sceptic to be reasonable, even if wrong, when they resist Moore’s inference.

Concessive neo-Mooreanism does better on this front. According to this view, the intuition of epistemic permissibility concerning the sceptic’s resistance to Moore’s argument is to be explained in terms of psychological defeat: Moore’s argument is warrant conferring but dialectically defective. Alas, on closer investigation, this account was shown to run into normative trouble: given that the sceptical belief is unjustified, it remains unclear why the sceptic should favour it over the warranted conclusion of the Moorean argument.

In what follows, I will develop a new neo-Mooreanism. My view falls squarely within the radical neo-Moorean camp, in that it takes transmission to hold in Moorean inferences and finds no flaw – epistemic or dialectical – with Moore’s argument. However, as opposed to extant radical neo-Mooreanism, it does predict that there is something epistemically good about the sceptic’s doxastic response that sets them apart from believers merely displaying full-on cognitive malfunctions, such as wishful thinking.

Recall that, on the account developed here, evidence consists of facts that are knowledge indicators, in that they enhance closeness to knowledge: it consists of facts that one is in a position to know and that increase one’s evidential probability (i.e. the probability on one’s total body of evidence) of p being the case. The fact that there is a table in front of me is a piece of evidence for me that there is a table in front of me. It is a knowledge indicator: it raises the probability on my evidence that there is a table in front of me, and I’m in a position to know it.

As such, not just any psychological facts will constitute evidence that there is a table in front of me: my having a perception as of a table will fit the bill in virtue of having the relevant indicator property. The fact that I wish that there was a table in front of me will not fit the bill, even if, unbeknownst to me, my table wishes are strongly correlated with the presence of tables: wishes are not knowledge indicators, for they don’t raise my evidential probability of p being the case. For the same reason, mere beliefs, as opposed to justified and knowledgeable beliefs, will not be evidence material; they lack the relevant indicator property.

Conversely, defeaters are indicators of ignorance: they are facts that one is in a position to know and that lower one’s evidential probability that p is the case.

Going back to our sceptic: just like the wishful thinker, on this view of evidence and defeat, the sceptic has no epistemic reason to believe their preferred sceptical hypothesis. There are no knowledge indicators available to them to this effect. There are no facts that raise the evidential probability of the sceptical hypotheses within their reach. Furthermore, Moore’s assertion that HANDS provides the sceptic with evidence that there are hands, as Moore’s testimony to this effect is a knowledge indicator. Also, as the sceptic’s sceptical belief is not an ignorance indicator (i.e. it does not lower the relevant evidential probability), it does not qualify as a defeater for HANDS. In this, the sceptic is in double breach of justification-conferring epistemic norms: they have unjustified sceptical beliefs, and they resist knowledge indicators on offer because of them. The sceptic does not have defeaters for HANDS; rather, they have mere motivating reasons to this effect: evidentially irrelevant facts (i.e. the fact that they believe non-WORLD/doubt WORLD) that lead them to unjustifiably reject HANDS.

What is it, then, that explains our intuition of reasonableness in the sceptic case and the lack thereof in the case of the wishful thinker? Recall that, according to the view developed here, the sceptic ought not to hold sceptical beliefs to begin with, ought to come to believe that WORLD based on Moore’s argument, and thereby ought to draw the inference to WORLD with Moore and abandon their antecedently held sceptical beliefs. If they fail to do all that, they are in breach of the justification-conferring epistemic norm: their resistance to Moore’s argument is epistemically impermissible.

Now, here is, however, a well-known fact about norms, generally speaking: sometimes, when we engage in impermissible actions, this gives rise to contrary-to-duty obligations. Consider the following normative claims:

  (1) It ought to be that John does not break the neighbour’s window.

  (2) If John breaks the neighbour’s window, it ought to be that he apologises.

(1) is a primary obligation, saying what John ought to do unconditionally. In contrast, (2) is a contrary-to-duty obligation, saying (in the context of (1)) what John ought to do conditional on violating his primary obligation. (1) is a norm of many sorts: social, prudential, moral, and one of politeness. Should John break the neighbour’s window, there would be nothing good about it. That being said, John would be even worse off if, having broken the neighbour’s window, he also failed to go and apologise to the neighbour.
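For those who prefer a formal gloss, the two claims can be rendered in standard deontic notation (a rough sketch only; the symbolisation is mine). Writing b for ‘John breaks the neighbour’s window’ and a for ‘John apologises’:

  • Primary obligation: O(¬b).

  • Contrary-to-duty obligation: b → O(a), or, in dyadic notation, O(a | b).

Nothing below hangs on the choice of formalism; the point is merely that the second obligation only becomes operative once the first has been violated.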

Our functionalist normative schema has the resources needed to explain this datum: input-independent proper functioning – of the type that governs hearts – remains a dimension of functional evaluation in its own right, independently of whether the general proper functioning of the trait in question is input dependent or not. Just like we can ask whether a heart is doing what it’s supposed to do with the stuff that it takes up – be it blood or orange juice – we can also ask whether the lungs are doing what they’re supposed to do with the stuff that they have taken up – be it oxygen or carbon dioxide. There’s going to be an evaluative difference, then, between two pairs of lungs that are both improperly functioning simpliciter (i.e. in the input-dependent sense, in that they take up the wrong kind of stuff from the environment) in terms of how they process their input gas: are they carrying the input gas through the respiratory system, and subsequently through the lining of the air sacs, to the blood cells? The pair of lungs that do are better than the pair of lungs that don’t in that, even though strictly speaking both are malfunctioning overall, the former are at least displaying input-independent proper functioning.

What explains our intuition of reasonableness in the sceptic’s case, I claim, is not an epistemic norm simpliciter but rather an epistemic contrary-to-duty imperative: now that the sceptic is in breach of the justification-conferring epistemic norm, short of abandoning their unjustified beliefs, the next best thing for them to do is to embrace the commitments following from their unjustified beliefs and reject the commitments that follow from their negation. The next best thing for the sceptic, now that they believe/have a 0.5 credence that non-WORLD and reject HANDS, both impermissibly, short of abandoning their impermissible beliefs, is to reject whatever follows from HANDS. The sceptic’s cognitive system, just like the wishful thinker’s and just like lungs taking up carbon dioxide from the environment, is overall malfunctioning on several counts: it takes up improper inputs (the sceptic’s sceptical beliefs) and rejects excellent inputs (Moore’s testimony that HANDS). That being so, though, the sceptic’s cognitive system does something right in terms of input-independent functioning: it processes the (bad) stuff that it has taken up in the right way. The sceptic’s cognitive system would be even worse were they, now that they believe/have a 0.5 credence that non-HANDS, to go ahead and infer that WORLD.

Before I close, I would like to consider a possible objection to my view. So far, I have been assuming, with Pryor and Williamson, that the sceptic’s sceptical beliefs/doubts are unjustified. One could worry, though, that my view of evidence might allow for the (reasonable) sceptic to have induction-based evidence for their 0.5 credence that non-WORLD. After all, the sceptic could reason as follows: (1) when I can’t tell the difference between pears and apples, I can’t come to know that there’s an apple in front of me. (2) When I can’t tell the difference between John and his twin brother, Tim, I can’t come to know that John is in front of me. (3) Therefore, when I can’t tell the difference between x and y, I can’t come to know that x is the case. (4) I can’t tell the difference between WORLD and non-WORLD. (5) Therefore, I don’t know that WORLD. In turn, if the sceptic believes that (5), on pain of Moorean paradoxicality, they can’t believe that WORLD.

There are two points to consider about this. First, crucially, the envisaged sceptic is wrong, as (1) is notably too strong: I can come to know that there’s a pear in front of me in a world where there are no apples, or where apples are extremely rare, even if I can’t tell the difference between pears and apples. That being said, of course, (1) may well be justified inductively, which would lead to (5) being justified inductively. Second, though, note that Moorean paradoxicality, just like incoherence, tells us nothing about which of the two beliefs should be abandoned: it merely predicts that one needs to go. Why think WORLD needs to go rather than (5)? Furthermore, notice that in everyday testimonial cases it’s the previously held ignorance belief that should be abandoned: I believe I don’t know whether you are thirty-two years old, you tell me that you are thirty-two years old, and I thereby come to know that you are thirty-two years old and abandon my belief that I don’t know that you are thirty-two years old. That’s how it normally goes.

Here is a last attempt: maybe the sceptic’s inductively justified belief that they can’t tell the difference between WORLD and non-WORLD acts as an undercutting defeater for Moore’s testimony that HANDS? This could work. The problem, though, is that undercutting defeaters need to exhibit particular strength properties in order to successfully undercut. For instance, my three-year-old’s testimony that Dretske is wrong about closure failure because he took a hallucinogenic drug before writing his paper ‘Epistemic Operators’ will not successfully undercut my evidence that closure fails sourced in Dretske’s paper. Why not? My three-year-old is just not a very reliable testifier on the issue – not reliable enough to undercut Dretske’s written testimony at any rate. The testimony from my three-year-old does not lower my evidential probability conditional on Dretske’s testimony that closure fails. If so, for the sceptic’s induction-based sceptical belief to undercut Moore’s testimony, it would need to be weighty enough, epistemically. Why, though, should we think that the sceptic’s induction has such devastating epistemic effects against Moore’s testimony? Also, recall that the inductive argument only warrants the reasonable sceptical belief ‘I don’t know that WORLD’, not the anxious sceptical belief ‘non-WORLD’. Of course, though, the former is much weaker than the latter and thus has much less defeat power.Footnote 5

11.4 Conclusion

This chapter has developed a novel, functionalist variety of radical neo-Mooreanism. I have argued with Williamson that, just like the wishful thinker, the sceptic is displaying epistemic malfunction in rejecting Moore’s testimony. On my account, that is because their cognitive processes fail to pick up knowledge indicators. I have also shown, however, that the intuition that there’s something reasonable about the sceptic who resists going through Moore’s inference is right: the sceptic is in compliance with a contrary-to-duty obligation akin to input-independent well-functioning.

To be clear: this account does not make any concessions to the sceptic in terms of justification-conferring epistemic norms (i.e. primary epistemic obligations): no justification for sceptical beliefs, nor any defeat against Moore’s testimony, is instantiated in the context. The account merely explains why we find the sceptic reasonable (albeit wrong) to resist Moore’s inference from HANDS to WORLD: they are in compliance with their contrary-to-duty epistemic obligations. Now that they have broken the window, as it were, the sceptic might as well go ahead and apologise.

Chapter 12 Knowledge and Disinformation

Ideally, we want to resist mis/disinformation but not evidence. If this is so, we need accounts of misinformation and disinformation to match the epistemic normative picture developed so far. This chapter develops a full account of the nature of disinformation. The view, if correct, carries high-stakes upshots, both theoretically and practically. First, it challenges several widely spread theoretical assumptions about disinformation – such as that it is a species of information, a species of misinformation, essentially false or misleading, or essentially intended/aimed/having the function of generating false beliefs in/misleading hearers. Second, it shows that the challenges faced by disinformation tracking in practice go well beyond mere fact checking. I begin with an interdisciplinary scoping of the literature in information science, communication studies, computer science, and philosophy of information to identify several claims constituting disinformation orthodoxy. I then present counterexamples to these claims and motivate my alternative account. Finally, I put forth and develop my account: disinformation as ignorance-generating content.

12.1 Information and Disinformation

Philosophers of information, as well as information and communication scientists, have traditionally focused their efforts in three main directions: offering an analysis of information, a way to measure it, and investigating prospects for analysing epistemic states – such as knowledge and justified belief – in terms of information. Misinformation and disinformation have traditionally occupied the backseat of these research efforts.Footnote 1 The assumption has mostly been that a unified account of the three is going to become readily available as soon as we figure out what information is. As a result, for the most part, misinformation and disinformation have received dictionary treatment: for whatever the correct analysis of information was taken to be, misinformation and disinformation have either been taken to constitute the false variety thereof (misinformation) or the intentionally false/misleading variety thereof (disinformation) – by theorists endorsing non-factive accounts of information – or, alternatively, something like information minus truth (misinformation) or information minus truth spread with an intention to mislead (disinformation) in the case of theorists endorsing factive accounts of information.

This is surprising in more than one way: first, it is surprising that philosophers of any brand would readily and unreflectively endorse dictionary definitions of pretty much anything – not to mention entities with such high practical stakes associated with them – such as mis/disinformation. Second, it is surprising that information, communication, and computer scientists have not spent more effort on identifying a correct account of the nature of disinformation, given the increasingly high stakes of issues having to do with the spread of disinformation that threaten our democracies, our trust in expertise, our uptake of health provision, and our social cohesion. We are highly social creatures, dependent on each other for flourishing in all walks of life. Our epistemic endeavours are no exception: due to our physical, geographical, and psychological limitations, most of the information we have is sourced in social interactions. We must inescapably rely on the intellectual labour of others, from those we know and trust well, to those whose epistemic credentials we take for granted online. Given the staggering extent of our epistemic dependence – one that recent technologies have only served to amplify – having a correct account of the nature of mis/disinformation, in order to be able to reliably identify it and escape it, is crucial.

Disinformation is widespread and harmful, epistemically and practically. We are currently facing a global information crisis that the Director-General of the World Health Organization (WHO) has declared an ‘infodemic’. Furthermore, crucially, there are two key faces to this crisis, two ways in which disinformation spreads societal ignorance. One concerns the widespread sharing of disinformation (e.g. fake cures, health superstitions, conspiracy theories, political propaganda, etc.), especially online and via social media, which contributes to dangerous and risky political and social behaviour. Separately, though at least as critical to the wider infodemic we face, is the prevalence of disinformation-generated resistance to evidence: even when the relevant information available is reliably sourced and accurate, many information consumers fail to take it on board or otherwise resist or discredit it (Klintman Reference Klintman2019) due to the rising distrust and scepticism generated by the ubiquity of disinformation. An important pay-off, then, of a correct analysis of the nature of disinformation is an understanding of how to help build and sustain more resilient trust networks. It is urgent that we gain such answers and insights: according to the 2018 Edelman Trust Barometer, UK public trust in social media and online news has plummeted to below 25 per cent, and trust in government is at a low of 36 per cent. This present crisis of trust corresponds with a related crisis of distrust, in that the dissemination and uptake of disinformation, particularly on social media, have risen dramatically over the past few years (Lynch Reference Lynch2001, Levinson Reference Levinson2017, Barclay Reference Barclay2022).

12.2 Against Disinformation Orthodoxy

In what follows, I will scope the scientific and philosophical literature, identify three very widely spread – and rarely defended – assumptions about the nature of disinformation, and argue against their credentials.

  (1) Assumption §1: Disinformation is a species of information (e.g. Shannon Reference Shannon1948, Carnap and Bar-Hillel Reference Carnap and Bar-Hillel1952, Frické Reference Frické1997, Fallis Reference Fallis2009, Reference Fallis2015, Cevolani Reference Cevolani2011, D’Alfonso Reference D’Alfonso2011, Dinneen and Brauner Reference Dinneen and Brauner2015).

These theorists take information to be non-factive and disinformation to be the false and intentionally misleading variety thereof. On accounts like these, information is something like meaning: ‘the cat is on the mat’, on this view, carries the information that the cat is on the mat in virtue of the fact that it means that the cat is on the mat. Disinformation, on this view, consists in spreading ‘the cat is on the mat’ in spite of knowing it to be false and with the intention to mislead.

Why think in this way? Two rationales can be identified in the literature, one practical and one theoretical.

12.2.1 The Practical Rationale

Factivity doesn’t matter for the information scientist. In the early days of information science, the thought behind this went roughly as follows: for the information scientist, the stakes associated with the factivity/non-factivity of information are null – after all, what the computer scientist/communication theorist cares about is the quantity of information that can be packed into a particular signal/channel. Whether the relevant content will be true or not makes little difference to the prospects of answering this question.

It is true that, when it comes to how much data one can pack into a particular channel, factivity doesn’t make much difference. However, times have changed, and so have the questions the information scientist needs to answer: the ‘infodemic’ has brought with it concerted efforts to fight the spread of disinformation online and through traditional media. We have lately witnessed an increased interest in researching and developing automatic algorithmic detection of misinformation and disinformation, such as PHEME (2014), Kumar and Geethakumari’s (Reference Kumar and Geethakumari2014) ‘Twitter algorithm’, Karlova and Fisher’s (Reference Karlova and Fisher2013) diffusion model, and the Hoaxy platform (Shao et al. Reference Shao, Ciampaglia, Flammini and Menczer2016), to name a few. Interest from developers has also been matched by interest from policymakers: the European Commission has brought together major online platforms, emerging and specialised platforms, players in the advertising industry, fact-checkers, and research and civil society organisations to deliver a strengthened Code of Practice on Disinformation (European Commission 2022). The American Library Association (2005) has issued a ‘Resolution on Disinformation, Media Manipulation, and the Destruction of Public Information’. The UK Government has recently published a call for evidence into how to address the spread of disinformation via employing trusted voices. These are, of course, only a few examples of disinformation-targeting initiatives. If all of these and others are to stand any chance at succeeding, we need a correct analysis of disinformation. The practical rationale is false.

12.2.2 The Theoretical Rationale

Natural language gives us clear hints as to the non-factivity of information: we often hear people utter things like ‘the media is spreading a lot of fake information’. We also utter things like ‘the library contains a lot of information’ – however, clearly, there will be a fair share of false content featured in any library (Fallis Reference Fallis2015). If this is correct, the argument goes, natural language suggests that information is not factive – there can be true and false varieties thereof. Therefore, disinformation is a species of information.

The first problem with the natural language rationale is that the cases in point are underdeveloped. Take the library case: I agree that we will often say that libraries contain information in spite of the likelihood of false content. This, however, is compatible with information being factive: after all, the claim about false content, as far as I can see, is merely an existential claim. There being some false content in a library is perfectly compatible with it containing a good amount of information alongside it. Would we still say the same were we to find out that this particular library contains only falsehoods? I doubt it. If anything, at best, we might utter something like: ‘this library contains a lot of fake information.’

Which brings me to my more substantial point: natural language at best cannot decide the factivity issue either way and at worst suggests that information is factive. Here is why: first, it is common knowledge in formal semantics that, when a complex expression consists of an intensional modifier and a modified expression, we cannot infer a type–species relation – or, indeed, to the contrary, in some cases, we might be able to infer that a type–species relation is absent. This latter class includes the so-called privative modifiers such as ‘fake’, ‘former’, and ‘spurious’, which get their name from the fact that they license the inference to ‘not x’ (McNally Reference McNally, Aloni and Dekker2016). If so, the fact that ‘information’ takes ‘fake’ as modifier suggests, if anything, that information is factive, in that ‘fake’ acts as a privative: it suggests that it is not information to begin with. As Dretske (Reference Dretske1981) well puts it, mis/disinformation is as much a type of information as a decoy duck is a type of duck (see also Mingers (Reference Mingers1995) and Floridi (Reference Floridi2004, Reference Floridi2005a, Reference Floridi and Zalta2005b) for defences of factivity). If information is factive and disinformation is not, however, the one is not a species of the other. The theoretical rationale is false: meaning and disinformation come apart on factivity grounds. As Dretske well puts it:

signals may have a meaning, but they carry information. What information a signal carries is what it is capable of telling us, telling us truly, about another state of affairs. […] When I say I have a toothache, what I say means that I have a toothache whether it’s true or false. But when false, it fails to carry the information that I have a toothache.

(Dretske Reference Dretske1981, 44, emphases in original)

Natural language semantics also gives us further, direct reason to be sceptical about disinformation being a species of information: there are several instances of dis-prefixed expressions that fail to signal type–species relations – disbarring is not a way of becoming a member of the bar, displeasing is not a form of pleasing, and displacing is not a form of placing. More on this below.

  (2) Assumption §2: Disinformation is a species of misinformation (e.g. Floridi Reference Floridi2007, Reference Floridi, Adriaans and van Benthem2008, Reference Floridi2011, Fallis Reference Fallis2009, Reference Fallis2015).

Misinformation is essentially false content, and the mis- prefix modifies as ‘badly’, ‘wrongly’, ‘unfavourably’, ‘in a suspicious manner’, ‘opposite or lack of’, or ‘not’. In this, misinformation is essentially non-information, in the same way in which fake gold is essentially non-gold.

As opposed to this, for the most part, dis- modifies as ‘deprive of’ (a specified quality, rank, or object), ‘exclude’, or ‘expel from’. In this, paradigmatically,Footnote 2 dis- does not negate the prefixed content, but rather it signals un-doing: if misplacing is placing in the wrong place, displacing is taking out of the right place. Disinformation is not a species of misinformation any more than displacing is a species of misplacing. To think otherwise is to engage in a category mistake.

Note also that disinformation, as opposed to misinformation, is not essentially false: I can, for instance, disinform you via asserting true content and generating false implicatures. I can also disinform you via stripping you of justification via misleading defeaters.

Finally, note also that information/misinformation exists out there, whereas disinformation is us-dependent: there is information/misinformation in the world without anyone being informed/misinformed (Dretske Reference Dretske1981), whereas there is no disinformation without audience. Disinformation is essentially audience-involving.Footnote 3

  (3) Assumption §3: Disinformation is essentially intentional/functional (e.g. Fetzer Reference Fetzer2004b, Floridi Reference Floridi2007, Reference Floridi, Adriaans and van Benthem2008, Reference Floridi2011, Mahon Reference Mahon and Zalta2008, Fallis Reference Fallis2009, Reference Fallis2015).

The most widely spread assumption across disciplines is that disinformation is intentionally spread misleading content, where the relevant way to think about the intention at stake can be quite minimal, as having to do with content that has the function to mislead (Fallis Reference Fallis2009, Reference Fallis2015). I think this is a mistake generated by paradigmatic instances of disinformation. I also think it is a dangerous mistake, in a world of the automated spread of disinformation that has little to do with any intention on the part of the programmer, to operate with such a restricted concept of disinformation. To see this, consider a black-box artificial intelligence (AI) that, in the absence of any intention to this effect on the part of the designer, learns how to, and proceeds to, spread false claims about COVID-19 vaccines widely and systematically in the population. Intention is missing in this case, as is function: the AI has not been designed to proceed in this way (no design function), and it does not do so in virtue of some benefit or another generated for either itself or any human user (no etiological function). Furthermore, and most importantly, AI is not the only place where the paradigmatic and the analytic part ways: I can disinform you unintentionally (where, furthermore, the case is one of genuine disinformation rather than mere misinformation). Consider the following case: I am a trusted journalist in village V, and, unfortunately, I am the kind of person who is unjustifiably very impressed by there being any scientific disagreement whatsoever on a given topic. Should even the most isolated voices express doubt about a scientific claim, I withhold belief. Against this background, I report on V TV (the local TV station in V) that there is disagreement in science about climate change and the safety of vaccines. As a result, whenever V inhabitants encounter expert claims that climate change is happening and vaccines are safe, they hesitate to update accordingly.

A few things about this case: first, this is not a case of false content/misinformation spreading – after all, it is true that there is disagreement on these issues (albeit very isolated). Second, there is no intention to mislead present in the context, nor any corresponding function. Third, and crucially, however, it is a classic case of disinformation spreading.

Finally, consider the paradigmatic spread of conspiracy theories. Their advocates are, paradigmatically, believers, and their intention is to inform rather than mislead. Since spreading conspiracy theories is a central case of disinformation spread – indeed, I submit, if our account of disinformation cannot accommodate this case, we should go back to the drawing board – we need a new account of the nature of disinformation that does not require any intention or function to mislead.

12.3 A Knowledge-First Account of Disinformation

In what follows, I will offer a knowledge-first account of disinformation that aims to vindicate the findings of the previous section.

Traditionally, in epistemology (e.g. Dretske Reference Dretske1981) and philosophy of information alike, the relation between knowledge and information has been conceived on a right-to-left direction of explanation (i.e. several theorists have attempted to analyse knowledge in terms of information). Notably, Fred Dretske thought knowledge was information-caused true belief. More recently, Luciano Floridi’s (Reference Floridi2004) network theory involves an argument for the claim that information’s being embedded within a network of questions and answers is necessary and sufficient for it to count as knowledge. Accounts like these, unsurprisingly, encounter the usual difficulties in analysing knowledge.

The fact that information-based analyses of knowledge remain unsuccessful, however, is not good reason to abandon the theoretical richness of the intuitive tight relation between the two. In extant work (Simion and Kelp Reference Simion, Kelp and Popa-Wyattforthcoming), I have developed a knowledge-based account of information that explores the prospects of the opposite, left-to-right direction of explanation: according to this view, very roughly, a signal s carries the information that p iff it has the capacity to generate knowledge that p.Footnote 4 On this account, then, information wears its functional nature on its sleeve, as it were: just like a digestive system is a system with the function to digest and the capacity to do so under normal conditions, information has the function to generate knowledge and the capacity to do so under normal conditions (i.e. given a suitably situated agent).

Against this background, I find it very attractive to think of disinformation as the counterpart of information: roughly, as stuff that has the capacity to generate or increase ignorance (i.e. to fully/partially strip someone of their status as knower, or to block their access to knowledge, or to decrease their closeness to knowledge). Here is the account I want to propose:

  • Disinformation as ignorance-generating content (DIGC): X is disinformation in a context C iff X is a content unit communicated at C that has a disposition to generate or increase ignorance at C in normal conditions.

Normal conditions are understood in broadly etiological functionalist terms (e.g. Graham Reference Graham2012, Simion Reference Simion2019b, Reference Simion2021a) as the conditions under which our knowledge-generating cognitive processes have acquired their function of generating knowledge. The view is contextualist in that the same communicated content will act differently depending on contextual factors such as the evidential backgrounds of the audience members, the shared presuppositions, extant social relations, and social norms. Importantly, as with dispositions more generally, said content need not actually generate ignorance in the context – after all, dispositions are sometimes masked.

Now, generating ignorance can be done in a variety of ways – which means that disinformation will come in diverse incarnations. In what follows, I will make an attempt at offering a comprehensive taxonomy of disinformation. (The ambition to exhaustiveness is probably beyond the scope of this chapter, or even of an isolated philosophical project such as mine; however, it will be useful to have a solid taxonomy as a basis for a fully-fledged account of disinformation: at a minimum, any account should be able to incorporate all varieties of disinformation we will have identified.Footnote 5) Here it goes:

  (1) Disinforming via spreading content that has the capacity of generating false belief. The paradigmatic case of this is the traditionally recognised species of disinformation: intentionally spread false assertions with the capacity to generate false beliefs in hearers.

  (2) Disinforming via misleading defeat. This category of disinformation has the capacity of stripping the audience of held knowledge/being in a position to know via defeating justification.

  (3) Disinforming via content that has the capacity of inducing epistemic anxiety (Nagel Reference Nagel2010). This category of disinformation has the capacity of stripping the audience of knowledge via belief defeat. The paradigmatic way to do this is via artificially raising the stakes of the context/introducing irrelevant alternatives as being relevant: ‘Are you really sure that you’re sitting at your desk? After all, you might well be a brain in a vat’; or ‘Are you really sure he loves you? After all, he might just be an excellent actor, in which case you will have wasted years of your life.’ The way in which this variety of disinforming works is via falsely implicating that these error possibilities are relevant in the context when in fact they are not. In this, the audience’s body of evidence is changed to include misleading justification defeaters.

  (4) Confidence-defeating disinformation. This has the capacity to reduce justified confidence via justification/doxastic defeat: you are sure that your name is Anna, but I introduce misleading (justification/doxastic) defeaters, which gets you to lower your confidence. You may remain knowledgeable about p: ‘my name is Anna’ in cases in which the confidence lowering does not bring you below the knowledge threshold. Compatibly, however, your knowledge – or evidential support – concerning the correct likelihood of p is lost: you now take/are justified to take the probability of your name being Anna to be much lower than it actually is.

  (5) Disinforming via exploiting pragmatic phenomena. Pragmatic phenomena can be easily exploited to the end of disinforming in all of the ways above: true assertions carrying false implicatures will display this capacity to generate false beliefs in the audience. I ask: ‘Is there a gas station anywhere near here? I’m almost out of gas.’ And you reply: ‘Yeah, sure, just one mile in that direction!’, knowing perfectly well that it’s been shut down for years. Another way in which disinformation can be spread via making use of pragmatic phenomena is by introducing false presuppositions. Finally, both justification and doxastic defeat will be achievable via speech acts with true content but problematic pragmatics, even in the absence of generating false implicatures.

What all of these ways of disinforming have in common is that they generate ignorance – by generating either false beliefs, knowledge loss, or a decrease in warranted confidence. One important thing to notice, which was also briefly discussed in the previous section, is that this account, and the associated taxonomy, is strongly audience-involving, in that disinformation has to do with the capacity to have a particular effect – generating ignorance – in the audience. Importantly, though, this capacity will heavily depend on the audience’s background evidence/knowledge: after all, in order to figure out whether a particular piece of communicated content has the disposition to undermine an audience in their capacity as knowers, it is important to know their initial status as knowers. Here is, then, on my view, in more precise terms, what it takes for a signal to carry a particular piece of disinformation for an audience A:

  • Agent disinformation: A signal r carries disinformation for an audience A with respect to p iff A’s evidential probability that p conditional on r is less than A’s unconditional evidential probability that p, and p is true.

What is relevant for agent disinformation with regard to p is the probability that p on the agent’s evidence. And A’s evidence – and, correspondingly, what underlies A’s evidential probability – lies outwith A’s skull: it consists in probability raisers that A is in a position to know. Recall the account defended in Chapter 7:

  • Evidence as knowledge indicators: A fact e is evidence for p for S iff S is in a position to know e and P(p/e) > P(p).

In turn, we have seen that, on this account, a fact e being such that I am in a position to know it has to do with the capacity of my properly functioning knowledge-generating capacity to take up e:

  • Being in a position to know: S is in a position to know a fact e if S has a cognitive capacity with the function of generating knowledge that can (qualitatively, quantitatively, and environmentally) easily uptake e in cognisers of S’s type.

This completes my account of disinformation. On this account, disinformation is the stuff that undermines one’s status as a knower. It does so via lowering their evidential probability for p – the probability on the p-relevant facts that they are in a position to know – for a true proposition. It can, again, do so by merely communicating to A (semantically, pragmatically, etc.) that not-p when in fact p is the case. Alternatively, it can do so by (partially or fully) defeating A’s justification for p, A’s belief that p is the case, or A’s confidence in p.
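To see the condition at work, consider a toy illustration (the numbers are mine and purely expository). Suppose p is true, the probability of p on A’s total evidence is P(p) = 0.8, and conditioning on a communicated signal r (a misleading defeater, say) yields P(p/r) = 0.4. Then:

  • P(p/r) = 0.4 < 0.8 = P(p), and p is true; so r carries disinformation for A with respect to p.

Had conditioning on r left P(p) unchanged or raised it, r would not have carried disinformation for A with respect to p, whatever the communicator’s intentions.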

One worry that the reader may have at this point goes along the following lines: isn’t the account in danger of over-generating disinformation? After all, I might be wrong about something I tell you through no fault of my own; isn’t it harsh to describe me as thereby spreading disinformation? Furthermore, every true assertion that I make in your presence about p being the case may, for all I know, serve as (to some extent) defeating evidence for a different proposition q, which may well be true. I truthfully tell you it’s raining outside, which, unrelatedly and unbeknownst to me, together with your knowledge about Mary not liking the rain, may function as partial rebutting defeat for ‘Mary is taking a walk’ – which may well, nevertheless, be true. Is it now appropriate to accuse me of having thereby disinformed you? Intuitively, that seems wrong.Footnote 6

Here are also a couple of parallel cases from Sandy Goldberg (in conversation): say that S is widely (but falsely) thought to be an inveterate liar, so that whenever S says that p, everyone immediately concludes that ~p. Prior to encountering S, A has a credence of 0.2 in p (this is what A’s evidence supports prior to encountering S’s testimony). S testifies (truly) that p, and A, who, like everyone else in the entire community, takes S to be a liar, drops her credence in p to 0.1. If S’s reputation as a liar is assumed to be common knowledge, such that everyone knows of it and would update accordingly, it seems that S’s true testimony would meet my analysis of agent disinformation. Conversely, one can imagine cases in which S says something false, explicitly aiming to disinform, but others (who know of S’s lying ways) come to true conclusions on the basis of the fact that S said so. On my account, this will not count as a case of disinformation.

Three things about these cases: first, note, once more, that intentions don’t matter for disinforming. As such, restricting disinforming via defeat to intentional/functional cases will not work for the same reasons that created problems for the intention/function condition on disinformation more broadly – we want an account of disinformation to be able to predict that asserters generating doubt about, for example, climate change via spreading defeaters to scientific evidence, even if they do it without any malicious intention, are disinforming the audience.

Second, note that it is independently plausible that, just as any bad deed can be performed blamelessly, one can also disinform blamelessly; if so, given garden-variety epistemic and control conditions on blame, any plausible account of disinformation will have to accommodate non-knowledgeable and non-intentional instances of disinformation. Conversely, as with all intentions, intentions to disinform can also fail: one may aim to disinform and fail to do so. Indeed, it would be theoretically strange if an intention to disinform were analytically guaranteed to succeed.

Finally, note that we don’t need to restrict the account in order to accommodate the datum that disinformation attribution, and the accompanying criticism, would sound inappropriate in the cases above. We can use simple Gricean pragmatics to predict as much via the maxim of relevance: since the issue of whether Mary was going for a walk was not under discussion, and nor was it remotely relevant in our conversational context, flat out accusing you of disinforming me when you assert truthfully that it’s raining is pragmatically impermissible (although strictly speaking true with regard to Mary’s actions).

Going back to the account, note that, interestingly, on this view, one and the same piece of communication can, at the same time, be a piece of information and a piece of disinformation: information, as opposed to disinformation, is not context-relative. Content with knowledge-generating potential (i.e. that can generate knowledge in a possible agent) is information. Compatibly, the same piece of content, in a particular context, can be a piece of disinformation insofar as it has a disposition to generate ignorance under normal conditions. I think this is the right result: me telling you that p: 99 per cent of Black people at Club X are staff members is me informing you that p. Me telling you that p in the context of you inquiring as to whether you can give your coat to a particular Black man is a piece of disinformation since it carries a strong disposition (due to the corresponding relevance implicature) to generate the unjustified (and maybe false) belief in you that this particular Black man is a member of staff (Gendler Reference Gendler2011).

Finally, and crucially, my account allows that disinformation for an audience A can exist in the absence of A’s hosting any relevant belief/credence: (partial) defeat of epistemic support that one is in a position to know is enough for disinformation. Even if I (irrationally) don’t believe that vaccines are safe or that climate change is happening to begin with, I am still vulnerable to disinformation in this regard in that I am vulnerable to content that has, under normal conditions, a disposition to defeat epistemic support available to me that vaccines are safe and climate change is happening. In this, disinformation, on my view, can generate ignorance even in the absence of any doxastic attitude – by decreasing closeness to knowledge via defeating available evidence. This, I submit, is a very nice result: in this, the account explains the most dangerous variety of disinformation available out there – disinformation targeting the already epistemically vulnerable.

12.4 Conclusion

Disinformation is not a type of information and disinforming is not a way of informing: while information is content with knowledge-generating potential, disinformation is content with a disposition to generate ignorance under normal conditions in the context at stake. This way of thinking about disinformation, crucially, tells us that it is much more ubiquitous and hard to track than it is currently taken to be in policy and practice: mere fact-checkers just won’t do. Some of the best disinformation detection tools at our disposal will fail to capture most types of disinformation. To give but a few examples (but more research on this is clearly needed): the PHEME project aims to algorithmically detect and categorise rumours in social network structures (such as X (formerly Twitter) and Facebook) and to do so, impressively, in near real time. The rumours are mapped according to four categories, including ‘disinformation, where something untrue is spread with malicious intent’ (Søe Reference Søe2016). Similarly, Kumar and Geethakumari’s project (Reference Kumar and Geethakumari2014) has developed an algorithm that ventures to detect and flag whether a tweet is misinformation or disinformation. In their framework, ‘Misinformation is false or inaccurate information, especially that which is deliberately intended to deceive [and d]isinformation is false information that is intended to mislead, especially propaganda issued by a government organization to a rival power or the media’ (Kumar and Geethakumari Reference Kumar and Geethakumari2014, 3). In Karlova and Fisher’s (Reference Karlova and Fisher2013) diffusion model, disinformation is taken to be deceptive information. Hoaxy is ‘a platform for the collection, detection, and analysis of online misinformation, defined as “false or inaccurate information”’ (Shao et al. Reference Shao, Ciampaglia, Flammini and Menczer2016, 745). Examples targeted, however, include clear cases of disinformation such as rumours, false news, hoaxes, and elaborate conspiracy theories (Shao et al. Reference Shao, Ciampaglia, Flammini and Menczer2016).

It becomes clear that these excellent tools are just the beginning of a much wider effort that is needed in order to capture disinformation in all of its facets rather than mere paradigmatic instances thereof. At a minimum, pragmatic deception mechanisms, as well as evidential probability-lowering potentials, will need to be tracked against an assumed (common) evidential background of the audience.
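To make the practical point vivid, here is a minimal, purely illustrative sketch of the kind of check a detector tracking agent-level evidential impact, rather than mere falsity, would have to perform. It is not a description of any existing tool: the names are hypothetical, and the stipulated audience model does all of the hard work.

```python
# Purely illustrative sketch: a toy 'agent disinformation' check in the spirit of the
# account above. All names are hypothetical; estimating an audience's evidential
# probabilities is precisely the hard, open problem a real detector would face.

from dataclasses import dataclass, field


@dataclass
class AudienceModel:
    """Stand-in for an audience's background evidence about a target proposition p."""
    prior: float                     # P(p): probability of p on the audience's total evidence
    posterior_given: dict = field(default_factory=dict)  # content -> estimated P(p/content)


def carries_agent_disinformation(content: str, p_is_true: bool, audience: AudienceModel) -> bool:
    """True iff the content lowers the audience's evidential probability of a TRUE
    target proposition p. Falsity of the content itself is neither necessary nor
    sufficient on this account."""
    if not p_is_true:
        return False  # the target proposition has to be true
    posterior = audience.posterior_given.get(content, audience.prior)
    return posterior < audience.prior


# Toy usage: a literally true assertion can be flagged, via the doubt it spreads.
audience = AudienceModel(
    prior=0.8,
    posterior_given={"Some scientists disagree about vaccine safety.": 0.5},
)
print(carries_agent_disinformation(
    "Some scientists disagree about vaccine safety.", True, audience))  # prints True
```

Everything of substance is hidden in the audience model: a real detector would need a defensible estimate of the audience’s shared evidential background, and of how the communicated content (including its implicatures) bears on it, which is precisely the gap flagged above.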

Footnotes

Chapter 10 Epistemic Oughts and Epistemic Dilemmas

1 For an excellent overview of the relevant literature, see McConnell (Reference McConnell and Zalta2018).

2 See Kelp and Simion (Reference Kelp and Simion2023a) for an account of trustworthiness in terms of a disposition to comply with one’s obligations.

3 The difference between withholding and suspending belief will be of no consequence throughout this chapter. I will therefore use them interchangeably.

4 See Leonard (2018) for an overview and discussion.

5 See Chapter 8 for an extensive critical discussion of scepticism targeting this claim.

6 Note, though, that traditionalist evidentialists might have to accept it. If one’s justification is strictly a function of one’s evidence, then it seems to follow that one cannot have justified false beliefs about what one’s evidence supports.

Chapter 11 Scepticism as Resistance to Evidence

1 But see Kelp (Reference Kelp2019) for my favourite proposal.

2 To my knowledge, the first to have introduced the category of psychological (or doxastic) defeat is Jennifer Lackey (e.g. Reference Lackey2006, 438). For excellent recent work on defeat, see Brown and Simion (Reference Brown and Simion2021).

3 See also Graham and Lyons (Reference Graham, Lyons, Brown and Simion2021) for similar points.

5 Thanks to Chris Kelp for pressing me on this.

Chapter 12 Knowledge and Disinformation

1 While fully-fledged accounts of the nature of disinformation are still thin on the ground, a number of information scientists and philosophers of information have begun to address the problem of disinformation (Hernon Reference Hernon1995, Skinner and Martin Reference Skinner and Martin2000, Calvert Reference Calvert2001, Lynch Reference Lynch2001, Piper Reference Piper and Mintz2002, Fallis Reference Fallis2009, Walsh Reference Walsh2010, Rubin and Conroy Reference Rubin and Conroy2012, Whitty et al., Reference Whitty, Buchanan, Joinson and Meredith2012, Karlova and Fisher Reference Karlova and Fisher2013).

2 Not essentially, however. Disagreeable and dishonest are cases in point, where the dis- prefix modifies as ‘not-’. The underlying rationale for the paradigmatic usage, however, is solidly grounded in the Latin and later French source of the English version of the prefix (the Latin prefix meaning ‘apart’, ‘asunder’, ‘away’, ‘utterly’, or having a privative, negative, or reversing force).

3 See Grundmann (Reference Grundmann2020) for an audience-orientated account of fake news.

4 My co-author and I owe inspiration for this account to Fred Dretske’s excellent book Knowledge and the Flow of Information (Reference Dretske1981). While Dretske himself favours the opposite direction of analysis (knowledge in terms of information), at several points he says things that sound very congenial to our preferred account and that likely played an important role in shaping our thinking on this topic. On page 44 of this book, for instance, Dretske claims that ‘[r]oughly speaking, information is that commodity capable of yielding knowledge, and what information a signal carries is what we can learn from it’. Sandy Goldberg pointed out to me that Gareth Evans as well may well have had something in the vicinity in mind in his Varieties of Reference (Reference Evans1982), in the chapter on communication, when he said that we can exploit epistemic principles about knowledge transmission in testimony cases to derive the semantics of the words used in those knowledge-transmitting cases (roughly, the words mean what they must if such knowledge is to be transmitted).

5 See Simion (Reference Simion2019a, Reference Simion2021a, Reference Simion2021b), Simion and Kelp (2022), and Kelp and Simion (Reference Kelp and Simion2023a) for knowledge-centric accounts of trustworthiness and testimonial entitlement. See Kelp and Simion (Reference Kelp and Simion2017, Reference Kelp and Simion2021) for functionalist accounts of the distinctive value of knowledge.

6 Many thanks to Sandy Goldberg, Julia Staffel, and Martin Smith for pressing me on this.
