
Against “Possibilist” Interpretations of Climate Models

Published online by Cambridge University Press: 17 February 2023

Corey Dethier*
Affiliation: Philosophy, Leibniz Universität Hannover, Germany

Abstract

Climate scientists frequently employ heavily idealized models. How should these models be interpreted? Some philosophers have advanced a possibilist interpretation: climate models stand in for possible scenarios that could occur but do not provide information about how probable those scenarios are. This article argues that possibilism is (1) undermotivated, (2) incompatible with successful practices in the science, and (3) unable to correct for known biases. The upshot is that the models should be interpreted probabilistically in at least some cases.

Type
Contributed Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Climate scientists frequently employ groups or “ensembles” of heavily idealized models. How should these models be interpreted? To make this question more approachable, consider a standard case in which a simulation of some climate scenario is run on a model that represents the global climate. This simulation generates an output P. We can distinguish between two broad approaches to interpreting what this result indicates: probabilistic, where the result tells us about the probability that P holds in the actual world, and possibilistic, where the result tells us only that P is a “real possibility.”

The probabilistic approach is common in climate science: climate scientists regularly use groups of models to generate (precise) probability distributions in a manner that is incompatible with taking the results of model simulations to tell us nothing about probabilities. In the critical literature, by contrast, the probabilistic approach has few defenders and a large number of detractors.[1]

Some philosophers have suggested that the possibilist framework offers a superior alternative. Betz (2007), for instance, argues that in the context of policy advice, climate scientists should not present precise probabilistic claims about the future. More recently, Betz (2015) has argued for possibilism in the context of model-generated predictions more generally, and Katzav (2014) extends the view to the context of model evaluation, arguing that the primary focus of studies that compare the performance of models to empirical data should be on showing that models are “real possibilities,” not on determining how well they represent the actual world. Finally, Katzav et al. (2021, 2) urge that probability density functions “should not be used in the climate context” and tentatively advocate for a possibilistic approach instead.

Given these arguments, it makes sense to ask whether possibilism can replace probabilism, that is, whether it is a good general framework for the interpretation of climate models. The answer is no: although it may be that possibilistic approaches are preferable in some cases, the arguments that have been offered in the literature do not support the generalization to all cases. Furthermore, the possibilist approach is both incompatible with successful practices in the science and less able to correct for biases in the models. As such, though there are good arguments to be had about how precisely to interpret climate models, our starting point should be that the models provide evidence about probabilities and the actual world in at least some cases.

2. Possibilism and individual models

Extant arguments for possibilism all operate by way of rejecting the (opposed) probabilist interpretation. What we’ll see in the next two sections is that none of these arguments motivate rejecting probabilism in general. At worst, they motivate a case-by-case evaluation of the costs and benefits of the two approaches.

As is widely appreciated, climate models are imperfect representations of our actual climate. Furthermore, ensembles of climate models do not behave like random samples centered on the true climate. Both of these facts are established by empirical work on the subject (Knutti et al. 2010); both are also predictable based on our knowledge of how models are constructed. The upshot is that both individual models and groups of models must be understood as idealized representations in the sense that there are some aspects of their target that they distort. Proponents of possibilism have used the idealized character of climate models to argue that possibilism should be preferred to probabilism. Betz (2015) and Katzav (2014), for instance, suggest that we’re not justified in treating a climate model as providing information about the actual world when we know that said model includes false assumptions.

If Betz (2015) and Katzav (2014) are right, however, that would make climate models the exception, not the rule. The received wisdom in philosophy of science is that all models are idealized in some respect. It’s unquestionable, however, that some models provide us with information about the actual world. For example, point-mass models of the solar system can provide knowledge about the velocities of celestial bodies, even though they massively misrepresent their densities. So if the argument offered by Betz and Katzav is to work, it must be because there’s something about the idealizations involved in climate models in particular that makes it so that we cannot trust their outputs qua representations of the actual world.

There are a few candidates for distinguishing factors. First, it might be that there’s empirical literature that indicates that climate models are generally untrustworthy. There is in fact substantial empirical literature on the accuracy of climate models, but it would be a serious misreading to interpret that literature as wholly negative. On the contrary, the papers cited by proponents of possibilism show that the models are in fact quite trustworthy with respect to some aspects of the climate, and groups of climate models are in general even better (Knutti et al. 2010). Indeed, ensembles often outperform what we would expect given what we know about their idealized character, leading climate scientists to search for explanations for their “surprising” success (Annan and Hargreaves 2011). Nor is all of the evidence here fully backward looking; recent work by Hausfather et al. (2020) indicates that climate models dating as far back as the 1970s have been quite good at predicting the relationship between increases in greenhouse gas (GHG) concentration and temperature. In other words, what the empirical literature indicates is not that we cannot trust climate models to tell us about the actual world but instead that (1) we should trust climate models more in some cases and less in others and (2) we should generally trust ensembles more than individual models (though how much more varies with context).

Of course, another potentially differentiating factor is that there might be some reason why we cannot use the empirical evidence to evaluate model accuracy in this way. For example, Katzav (2014) and Lenhard and Winsberg (2010) raise the specter of holism in arguing that models are essentially black boxes and so empirical success cannot be attributed to particular elements of the climate models. Katzav (2014) further argues that in many cases, we lack independent access to the relevant empirical facts—we don’t really have a good, model-independent means of estimating the internal variability of the climate, to pick just one example—and so we cannot test how accurate the models are with respect to those variables. And a number of authors have rightly pointed out that climate science is a domain in which the extrapolation from past performance is dubious. At worst, however, these points undermine the trust we should have in the models in specific cases; they don’t indicate that the models never license conclusions about the actual world. To support the latter view, the empirical literature would need to show both that the predictions generated by models are generally untrustworthy and that we cannot typically distinguish between the good cases and the bad cases. But neither thesis is warranted by the evidence on hand.

The position motivated by the empirical literature therefore seems to be a moderate one: in at least some cases, we should take a given climate model to provide us with information about the actual world. It’s conceivable that in other cases, the most that a climate model will warrant is a possibility claim—that a simulation run on the model delivers a result P indicates only that P is a “real possibility.” But the idealized character of climate models doesn’t give us any reason to think that this is generally the case. As the data gathered by Hausfather et al. (2020) make clear, if you were previously indifferent over a wide range of possible values for the relationship between GHGs and temperature, learning the results of model-based simulations should lead you to adopt a narrower band of confidence around those results (Parker 2022). As such, the flaws in individual models can’t be said to motivate adopting possibilism as a general framework.

3. Possibilism and multiple models

Until recently, proponents of possibilism have focused their arguments on individual models, arguing that these cannot be said to represent the actual world (at least not in the cases that they’re considering). In a recent article, by contrast, Katzav et al. (2021) take a different approach, arguing that whatever the status of individual models, we’re not justified in trusting the probability distributions generated by groups, or “ensembles,” of models. After all, groups of models share idealizations and don’t behave like random samples. Even if an individual climate model has something to say about the actual world, probabilities generated by treating extant groups of them as some kind of sample are liable to distort the “evidential” or “objective” probabilities that we in some sense ought to have. As a consequence, employing the probabilities generated by an ensemble in the context of decision-making is likely to lead to bad decisions. A number of authors, including Parker and Risbey (2015) and Winsberg (2018), take this argument to cast doubt on the probabilistic interpretation of climate models; Katzav et al. (2021) go further and tentatively suggest that possibilism is one alternative that might be preferable.[2]

While the argument just given might show that the probabilities generated by extant ensembles misrepresent in various ways, it doesn’t show that we shouldn’t use them, let alone that we shouldn’t use them in general. To begin, it’s worth reiterating the main point from the last section: that a probability distribution involves some sort of misrepresentation is not sufficient to motivate abandoning it. As I stress in Dethier (2022b), we accept misrepresentations in the form of idealizations and abstractions throughout the sciences, and there’s no obvious reason why probability distributions should be exceptions to the general rule. Even if we accept that extant ensembles misrepresent, they may nevertheless be our best option for representing the “true” probabilities (Katzav and Parker 2015). So, for example, if we have to choose between making a decision based solely on the point-value estimate given by a single model or one based on the probability distribution generated by an ensemble, we should in general prefer the latter, even if we expect it to misrepresent the “true” probabilities—after all, both representations can be thought of as probability distributions, with the difference being that the former assigns a confidence of 1 to the output of a single model, whereas the latter distributes confidence more equitably over the available options. So even establishing that the probability distributions in question grossly misrepresent the true probabilities only gives us a defeasible reason to reject them—we still have to ask about the relative costs and benefits of various alternatives.
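The point that a single-model estimate is itself a probability distribution can be made concrete with a minimal sketch. All numbers here are invented for illustration; this is my construction, not an example from the literature.

```python
# A single-model point estimate treated as a degenerate probability
# distribution versus an ensemble treated as an equal-weights
# distribution over hypothetical warming values (degrees C).
point_estimate = {3.0: 1.0}                    # all confidence on one output
ensemble = {2.1: 1/6, 2.5: 1/6, 2.8: 1/6,
            3.0: 1/6, 3.3: 1/6, 3.6: 1/6}      # confidence spread over models

# Probability assigned to warming above 3.2 degrees under each reading:
print(sum(p for x, p in point_estimate.items() if x > 3.2))  # 0.0
print(sum(p for x, p in ensemble.items() if x > 3.2))        # ~0.33
```

The point estimate is simply the limiting case of a distribution, which is why the choice here is between two probability assignments, not between probabilities and something else.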

Notably, the possibilist alternative is not free of misrepresentation. Once we accept that individual models provide information about the actual world, the possibilist option is just as much guaranteed to misrepresent the true state of our knowledge as any probabilistic approach. Where probabilism errs toward overstating our knowledge, possibilism errs toward understating it (Risbey 2007). After all, as its proponents explicitly acknowledge, the possibilist interpretation allows us to say nothing about the outer limits of the “real possibilities” on the basis of modeling results, meaning that we have to ignore whatever information climate models provide about the limits of what is really possible. It’s plausible that even if climate models don’t warrant precise probabilistic judgments concerning the actual world, they tell us something about which scenarios are more and less likely. Which approach should be preferred thus depends at least on whether errors due to overprecision or underprecision are more worrying in a given context.

Following the arguments offered by Betz (2007) and Parker and Risbey (2015), we might think that overprecision is generally more worrying in the setting of policy advice. Plausibly, scientists should give advice only within the scope of their expertise, and this principle tells against overreaching in a way that it doesn’t tell against more conservative approaches. There are reasons to doubt this reasoning, but even if we accept it, there are many contexts other than the contexts of decision-making and policy advice in which the calculus should be expected to be different. It may be the case, for example, that for the purposes of decision-making, climate scientists should prefer to make only firm judgments about what is and isn’t possible, but that for all other purposes, it’s worthwhile to adopt a more fine-grained probabilistic approach. This imagined position could be motivated by making an analogy with rounding to a particular significant digit. Scientists typically round their measurements to avoid adopting hypotheses that are more precise than is warranted—exactly the same motivation to which Parker and Risbey (2015) have appealed in criticizing the use of probabilities in climate science. But rounding is a step that occurs only at the end of the measurement process; it’s a serious methodological error to round results at every stage, because repeatedly rounding can lead to estimates that are significantly different from an entirely unrounded estimate. Similarly, we might think that moving from the precision offered by precise probabilities to a more coarse-grained possibilistic representation is ultimately what’s demanded by our evidence while simultaneously thinking that it’s a mistake to coarse-grain in this manner at any point other than the end of the calculation. Notably, this is essentially the approach adopted by the Intergovernmental Panel on Climate Change (IPCC) (see, e.g., Intergovernmental Panel on Climate Change 2021).
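The rounding analogy can be checked with a toy calculation (mine, not the article’s; the values are arbitrary): rounding at every intermediate step can produce a different answer than rounding once at the end.

```python
# Repeated intermediate rounding versus a single rounding at the end.
values = [0.345, 0.445, 0.545, 0.645, 0.745]

rounded_once = round(sum(values), 1)   # round only at the end

running = 0.0
for v in values:
    running = round(running + v, 1)    # round after every addition

print(rounded_once)  # 2.7
print(running)       # 2.5 -- the repeatedly rounded estimate has drifted
```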

The upshot of the foregoing is that the arguments found in the literature do not motivate adopting possibilism as a general framework. At most, these arguments establish that whether we should adopt a probabilistic interpretation of climate models depends on the costs and benefits of the approach relative to its competitors. In what follows, we’ll see that possibilism is in fact generally worse than probabilism in at least two respects: it declares some instances of successful science to be unwarranted, and it is liable to be less accurate than the probabilistic alternative in some realistic scenarios.

4. Possibilism and successful practice

Proponents of possibilism have focused largely on future forecasting. But neither climate models nor the probabilities that they generate are used solely in that context. On the contrary, complex climate models are relied on throughout the science, and they’re consistently understood as providing information about the way the world actually is.

Consider attribution studies, which are backward facing in that they aim to determine how responsible humans are for observed climate change. Many of the specific concerns proponents of possibilism have raised don’t apply in the attribution context—we’re not extrapolating from present climate data to a future that we know will be different, for instance—and though substantial uncertainty about many specific details remains, the IPCC takes attribution studies to provide us with “unequivocal” knowledge in at least some cases (Intergovernmental Panel on Climate Change 2021).

Broadly speaking, attribution studies proceed as follows. Climate scientists collect substantial data on past changes to temperature and then run complex regressions to determine how much of the past temperature change can be attributed to CO$_2$ and how much to other factors, such as the internal variability of the climate system. To run these regressions, they need a quantified understanding of how different factors affect the climate. So, for example, we consistently observe that although the planet as a whole is warming, the upper atmosphere is actually cooling. To determine how much of the observed warming is caused by CO$_2$ and how much by other factors, we need to know how these different factors affect the distribution of heat throughout the atmosphere. This information—what’s sometimes called the “signature” or “fingerprint” of a particular factor—is usually provided by climate models.

Simplifying and abstracting substantially, the resulting regression equation is $Y = \sum_i \beta_i X_i + \upsilon_Y$, where $Y$ is the observed data; $\beta_i$ and $X_i$ are the percentage of the increase due to the $i$th factor and the signature of that factor, respectively; and $\upsilon_Y$ is the internal variability of the climate. Standard least squares algorithms are then used to estimate the $\beta$ terms. The results indicate how much of observed warming a particular factor is responsible for; if the least squares analysis yields a result that $\beta_{\mathrm{GHG}} = 0.95$, for example, that would indicate that GHGs are responsible for 95 percent of observed warming.
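As a rough sketch of the simplified regression just described, the following uses synthetic data in place of real observations and model-derived fingerprints; the fingerprint shapes and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 50

# Hypothetical fingerprints X_i: the pattern each factor imprints on
# temperature, here reduced to simple time series.
x_ghg = np.linspace(0.0, 1.0, n_years)            # steady GHG-driven warming
x_aer = np.sin(np.linspace(0.0, np.pi, n_years))  # mid-century aerosol bump

# Synthetic "observations" Y with assumed true contributions
# beta_ghg = 0.95 and beta_aer = -0.3, plus internal variability v_Y.
y = 0.95 * x_ghg - 0.3 * x_aer + rng.normal(0.0, 0.05, n_years)

# Ordinary least squares estimate of the beta terms.
X = np.column_stack([x_ghg, x_aer])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # approximately [0.95, -0.3]
```

Actual attribution studies typically use “optimal fingerprinting,” a generalized least squares procedure that weights by an estimate of internal variability (see the Hannart, Ribes, and Naveau citation below); the ordinary least squares step above omits that refinement.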

On a possibilist reading, however, this conclusion is not warranted—in fact, a consistent possibilist is required to say that attribution studies cannot tell us anything about the actual world. The reason is that the $X$ terms are typically derived from climate models.[3] So if climate models do not provide information about the actual world, then the $X$ terms represent only “real possibilities,” and the results for the $\beta$ terms must be thought of as providing information only about which contributions to climate change are possible. It’s not clear whether the consistent possibilist is even capable of making qualitative claims about humanity’s contribution to climate change in the actual world. Perhaps other lines of evidence would allow them to say more than just that “it’s a real possibility” that humanity has contributed to climate change, but certainly they’re not warranted in adopting stronger conclusions on the basis of the standard form of attribution studies employed in climate science.

Of course, the possibilist has a couple of potential responses here. First, they could make the retreat already suggested in the last section and argue that although climate models can provide information about the actual world, the information generated by ensembles should not be given a probabilistic reading. Second, they could argue that there’s something particularly special about forecasting that distinguishes it from backward-facing attribution science. Neither rejoinder is successful.

On the first count, we’ve already seen that the position in question is an unhappy halfway house: it’s hard to motivate the idea that individual models tell us about the actual world but that, once we group them together, we are no longer justified in drawing conclusions about anything more than real possibilities. It’s especially unmotivated in this particular case, however. For roughly the last twenty years, climate scientists have employed the probability distributions generated by ensembles either in place of the $X$ terms (Huntingford et al. 2006) or as part of more complicated Bayesian updating procedures (Schurer et al. 2018). So even if the possibilist can recover the basic approach adopted in attribution studies by retreating in this way, they’re committed to rejecting the conclusions generated by more complex versions of the approach—and, notably, these more complex approaches consistently yield results that are more accurate and reliable than the simple method (Hannart, Ribes, and Naveau 2014; Schurer et al. 2018). This means of recovering the successful practice of climate scientists thus looks like a nonstarter.

The other rejoinder is no more successful. Contrary to what we might expect, there’s not really a practical bright line to be drawn in climate science between how models are used in forecasting the future and how they’re used in analyzing the past. Attribution studies illustrate the point nicely: since Stott et al. (2006), it’s been standard practice to use the results of attribution studies to estimate how GHGs will affect temperatures in the future. The possibilist thus faces a dilemma. On one horn is the position that the resulting forecasts should be interpreted in the same possibilist manner as those directly generated by climate models. But then it’s not clear what their position has to do with climate models—it seems as though what they’re really advocating is simply skepticism about climate scientists’ ability to predict the future. On the other horn, they can assimilate the forecasts generated in this manner to the successful practices of attribution science and allow for a probabilistic interpretation here as well. It’s not clear what the motivation for the resulting position could possibly be, however: why is it that forecasts in which the models play one role can be interpreted probabilistically, but those in which the models play a different role can’t be?

The takeaway: the probabilistic interpretation of climate models is more deeply intertwined with successful climate science than the literature acknowledges. Applied consistently, possibilism would require us to completely rewrite not just future-facing climate science but all of climate science, including those areas deemed highly successful by the scientists themselves. As we’ve just seen, although the possibilist has potential avenues for resisting these applications, the resulting positions are hard to motivate.

5. Possibilism and bias

As stressed earlier, there’s no reason to think that the possibilist interpretation of climate models is guaranteed to be a more accurate representation of the current state of the evidence than a probabilistic one. On the contrary, just as the probabilistic representation risks overstating the evidence, the possibilist one risks understating it. In this sense, the two positions are analogous—as is often true, the question is simply which kind of error is more worrying.

There is an important sense, however, in which the possibilist interpretation is liable to be less informative than the probabilist one. Recall that there are empirical studies that examine how accurately groups of models represent present-day climate targets (see, e.g., Knutti et al. 2010). The principal finding of these studies is that extant ensembles undersample from the extremes relative to a normal distribution centered on the truth—there’s less variance in the sample than we would expect there to be, indicating that a probability distribution generated on the assumption that the models are normally distributed is liable to underestimate the probability of extreme outcomes.[4]

Unsurprisingly, these findings are widely cited by critics of the probabilist interpretation. But the same results undermine possibilist approaches for essentially the same reason: if the set of models available undersamples from the extremes, then the set of “real possibilities” that are represented is unlikely to include extreme scenarios. To illustrate, suppose that the sample were normally distributed. Then you would need fourteen models to have even a 50 percent chance of getting a result with a “true” probability of 5 percent or less. Given that we know that extant ensembles are not normally distributed, we should expect the numbers to be even higher, meaning that—on the possibilist interpretation—we should expect extant ensembles not to tell us anything about extreme cases. In aiming to avoid misrepresenting the probability of extreme scenarios, possibilism takes the quietist route and refuses to say anything about them at all.
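The arithmetic behind the fourteen-model figure can be checked directly; the independence assumption used here is, of course, precisely the idealization at issue.

```python
# If each model were an independent draw from the true distribution, the
# chance that at least one of n draws lands in a tail region of total
# probability 0.05 is 1 - 0.95**n. Find the smallest n where that
# chance reaches 50 percent.
n = 1
while 1 - 0.95 ** n < 0.5:
    n += 1
print(n)  # 14
```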

Quietism isn’t the only option here. One of the great advantages of the probabilist approach is that it is flexible: we don’t have to assume that extant ensembles are normally distributed around the truth. Indeed, in the last decade, it’s become increasingly common for climate scientists to adopt a different interpretation of the models motivated by the same empirical literature cited earlier—in particular, they tend to assume that the “truth” behaves like a single sample from the same population as the models, an assumption that fits better with the empirical data and that (if anything) leads to an oversampling from extremes (Annan and Hargreaves 2011; Sedláček and Knutti 2013).
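Under toy Gaussian assumptions (my construction, not a method from the cited papers), the difference between the two interpretations shows up in how uncertainty about the truth scales with ensemble size.

```python
import numpy as np

outputs = np.array([2.1, 2.5, 2.8, 3.0, 3.3, 3.6])  # hypothetical model outputs
n = len(outputs)
mean, sd = outputs.mean(), outputs.std(ddof=1)

# "Truth-centered": models scatter around the truth, so uncertainty about
# the truth shrinks like the standard error of the ensemble mean.
truth_centered_sd = sd / np.sqrt(n)

# "Truth as one more sample": the truth is exchangeable with the models,
# so the predictive spread never falls below the ensemble spread itself.
one_more_sample_sd = sd * np.sqrt(1 + 1 / n)

print(f"mean: {mean:.2f}, truth-centered sd: {truth_centered_sd:.2f}, "
      f"one-more-sample sd: {one_more_sample_sd:.2f}")
```

The wider spread under the second reading is what lets it assign nontrivial probability to outcomes lying outside the ensemble’s range.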

Of course, as Katzav et al. (2021) stress, correctly estimating the biases in extant ensembles is a tricky empirical task. The point here is not that these corrections are easy or even successful but that probabilism opens the door for this kind of response to known biases. Furthermore, although the information that the probabilist approach provides with respect to the extreme scenarios may be flawed in some ways, the tails of the probability distribution do provide some information about what we can expect in these cases. That’s no guarantee that the resulting probabilities will be perfectly accurate, but it’s something. The possibilist interpretation, by contrast, doesn’t provide any information about these scenarios at all: it requires climate scientists to draw no conclusions about the extremes on the basis of extant models, and it has no way of taking into account any of the empirical information derived from studies like Knutti et al. (2010). Insofar as we’re worried about scenarios that we think are unlikely but not impossible, therefore, we should prefer the probabilist approach, unless we think that the resulting information is more likely than not to be misleading in the relevant context. This extreme view is unmotivated by the arguments we surveyed earlier, however, at least in its general form: that extant probability distributions are more precise than is warranted does not mean that they are likely to be so inaccurate that we cannot draw any conclusions from them.

6. Conclusion

In the recent literature, it has become increasingly common for philosophers of climate science to advocate for a “possibilist” interpretation of climate models. What we’ve seen in the foregoing is that even if a possibilist interpretation has some worthwhile domains of application, it can’t replace the probabilistic one as a general framework. The explicit philosophical arguments that defenders of possibilism advance fail to show that the position is preferable in general. Moreover, consistently applying the possibilist interpretation requires us to reject successful applications of climate science, and we should furthermore expect the possibilist approach to be less informative than a probabilistic approach in at least some cases.

Acknowledgments

Thanks to Matthias Ackermann, Mathias Frisch, Joel Katzav, James Risbey, Joe Roussos, Erica Thompson, and audiences in Hannover and Pittsburgh for comments on an earlier version of this article.

Funding statement

Funding for this paper was provided by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project 254954344/GRK2073.

Footnotes

[1] Betz (2007, 2015), Katzav (2014), and Katzav et al. (2021) argue against probabilism and for possibilism. Parker (2010) and Winsberg (2018) raise problems for probabilism without rejecting it entirely.

[2] The other alternative that they discuss involves employing imprecise probability distributions. As this alternative is a version of the probabilist framework, it is not a target of my criticism.

[3] Of course, it’s not just the $X$ terms. The description of attribution studies given here is highly simplified relative to actual practice—see Dethier (2022a)—and examining the complexities makes it clear that climate models are heavily involved at every step in the process.

[4] This situation seems to have reversed in the last few years (Tokarska et al. 2020). If an ensemble oversamples from the extremes rather than undersampling, however, that doesn’t change the main point of this section, which is that probabilism allows scientists to respond to known biases.

References

Annan, James D., and Julia C. Hargreaves. 2011. “Understanding the CMIP3 Model Ensemble.” Journal of Climate 24 (16):4529–38.
Betz, Gregor. 2007. “Probabilities in Climate Policy Advice: A Critical Comment.” Climatic Change 85 (1–2):1–9.
Betz, Gregor. 2015. “Are Climate Models Credible Worlds? Prospects and Limitations of Possibilistic Climate Prediction.” European Journal for Philosophy of Science 5 (2):191–215.
Dethier, Corey. 2022a. “Calibrating Statistical Tools: Improving the Measure of Humanity’s Influence on the Climate.” Studies in History and Philosophy of Science 94:158–66.
Dethier, Corey. 2022b. “When Is an Ensemble Like a Sample? ‘Model-Based’ Inferences in Climate Modeling.” Synthese 200 (52):1–20.
Hannart, Alexis, Aurélien Ribes, and Philippe Naveau. 2014. “Optimal Fingerprinting under Multiple Sources of Uncertainty.” Geophysical Research Letters 41 (4):1261–68.
Hausfather, Zeke, Henri F. Drake, Tristan Abbott, and Gavin A. Schmidt. 2020. “Evaluating the Performance of Past Climate Model Projections.” Geophysical Research Letters 47 (1):e2019GL085378.
Huntingford, Chris, Peter A. Stott, Myles R. Allen, and F. Hugo Lambert. 2006. “Incorporating Model Uncertainty into Attribution of Observed Temperature Change.” Geophysical Research Letters 33 (5):L05710.
Intergovernmental Panel on Climate Change. 2021. Climate Change 2021: The Physical Science Basis—Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Edited by Valérie Masson-Delmotte et al. Cambridge: Cambridge University Press.
Katzav, Joel. 2014. “The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science.” Studies in History and Philosophy of Science, Part B 46 (3):228–38.
Katzav, Joel, and Wendy S. Parker. 2015. “The Future of Climate Modeling.” Climatic Change 132 (4):475–87.
Katzav, Joel, Erica L. Thompson, James S. Risbey, David A. Stainforth, Seamus Bradley, and Mathias Frisch. 2021. “On the Appropriate and Inappropriate Uses of Probability Distributions in Climate Projections, and Some Alternatives.” Climatic Change 169 (15):1–20.
Knutti, Reto, Reinhard Furrer, Claudia Tebaldi, Jan Cermak, and Gerald A. Meehl. 2010. “Challenges in Combining Projections from Multiple Climate Models.” Journal of Climate 23 (10):2739–58.
Lenhard, Johannes, and Eric Winsberg. 2010. “Holism, Entrenchment, and the Future of Climate Model Pluralism.” Studies in History and Philosophy of Science, Part B 41 (3):253–62.
Parker, Wendy S. 2010. “Whose Probabilities? Predicting Climate Change with Ensembles of Models.” Philosophy of Science 77 (5):985–97.
Parker, Wendy S. 2022. “Evidence and Knowledge from Computer Simulation.” Erkenntnis 87 (2):1521–38.
Parker, Wendy S., and James S. Risbey. 2015. “False Precision, Surprise and Improved Uncertainty Assessment.” Philosophical Transactions of the Royal Society, Part A 373 (2055):20140453.
Risbey, James S. 2007. “Subjective Elements in Climate Policy Advice.” Climatic Change 85 (1–2):11–17.
Schurer, Andrew P., Gabi Hegerl, Aurélien Ribes, Debbie Polson, Colin Morice, and Simon Tett. 2018. “Estimating the Transient Climate Response from Observed Warming.” Journal of Climate 31 (20):8645–63.
Sedláček, Jan, and Reto Knutti. 2013. “Evidence for External Forcing on 20th-Century Climate from Combined Ocean–Atmosphere Warming Patterns.” Geophysical Research Letters 39 (20).
Stott, Peter A., John F. B. Mitchell, Myles R. Allen, Thomas L. Delworth, Jonathan M. Gregory, Gerald A. Meehl, and Benjamin D. Santer. 2006. “Observational Constraints on Past Attributable Warming and Predictions of Future Global Warming.” Journal of Climate 19 (13):3055–69.
Tokarska, Katarzyna B., Gabriele C. Hegerl, Andrew P. Schurer, Piers M. Forster, and Kate Marvel. 2020. “Observational Constraints on the Effective Climate Sensitivity from the Historical Period.” Environmental Research Letters 15 (3):034043.
Winsberg, Eric. 2018. Philosophy and Climate Science. Cambridge: Cambridge University Press.