
Review of Matthew J. Brown’s Science and Moral Imagination: A New Ideal for Values in Science

Review products

Matthew J. Brown, Science and Moral Imagination: A New Ideal for Values in Science. With a foreword by Kim Stanley Robinson. Pittsburgh: University of Pittsburgh Press (2020), 288 pp., $50.00 (Hardcover). Available open access with supplementary materials: https://valuesinscience.com

Published online by Cambridge University Press: 17 February 2023

Paul L. Franco
Affiliation: Department of Philosophy, University of Washington, Seattle, WA, USA

Type: Book Review
Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of the Philosophy of Science Association

Matthew J. Brown’s Science and Moral Imagination is a clearly written, well-structured book that makes unique contributions both to the rapidly growing science and values literature and to philosophy of science more generally. I especially appreciate Brown’s attempt to unify arguments rejecting the value-free ideal; his empirical, pragmatic account of value judgment and the corresponding transformation of worries about wishful thinking; and his ideal of moral imagination. I discuss these in order.

1. Rejecting the value-free ideal

Brown describes himself as “a lumper rather than a splitter” (2020, 86). Accordingly, he highlights a common core to previous arguments for the descriptive and normative inadequacy of the value-free ideal with his contingency argument. This argument emphasizes that within scientific inquiry there are “many contingent moments, with reasonable alternative possible options” (63). For example, scientists have leeway in choosing concepts to characterize and analyze a pressing problem. Next, he argues no one value or set of values—epistemic or nonepistemic—conclusively forces a choice among options. Further, our choices “may have implications and consequences for things that we care about, including ethical and social, as well as political, cognitive, and aesthetic values” (63). Given scientists’ general moral responsibilities to consider the consequences of their choices, value judgments have a role in settling contingencies throughout scientific inquiry.

The contingency argument makes good on Brown’s lumping tendencies. For example, the argument from inductive risk for values in science points to contingencies in scientific practice, the settling of which increases or decreases the risks of error and attendant nonepistemic consequences. Assessing and responding to these risks requires value judgments. The aims approach to values in science points out that science has diverse aims. For example, in addition to the epistemic aim of generating knowledge, science has nonepistemic aims like providing policy advice. On this approach, nonepistemic considerations like time constraints or the values of the public help settle contingencies about, say, model choice.

While the contingency argument successfully brings out “a common structure” (86) to arguments in the values in science literature, I think we often have good reason to keep those arguments separate, as they individually bring into sharper relief a scientist’s responsibilities in settling contingencies in specific situations. The different roles the sciences play relative to the aims of socially situated scientific practices mean that a scientist’s responsibilities to consider the consequences of unforced choices can take highly particular forms. For example, the contingencies and attendant risks scientists face differ when they communicate with labmates, in peer-reviewed journals, or with the general public. General arguments establishing the role of nonepistemic values in scientific practices risk obscuring potentially important differences regarding responsibilities in settling specific contingencies.

Another “complementary argument” Brown advances against the value-free ideal is “the practical reason argument” (67). On this argument, settling contingencies involves deciding on a particular “course of action,” and nonepistemic values are needed to motivate such decisions (67). For example, while scientific inquiry might provide good epistemic reasons to hold a belief, those alone don’t compel us to assert it (66). Why not? Asserting is an action, and for many philosophers of science, epistemic values “are not action-motivating reasons” (65). I may have a belief meeting high epistemic standards, but decide not to assert it because I want to find a way to communicate that lowers the risk of costly misinterpretation. Further, the values of those impacted by our decisions are also relevant to our practical decisions: A modeling choice that leads to epistemically better-supported beliefs than another isn’t necessarily the best if it provides information irrelevant to stakeholders’ values.Footnote 1

The practical reason argument is connected to two other central aspects of Brown’s rejection of the value-free ideal. First is his thoroughgoing pragmatism, rooted in his longstanding engagement with Dewey. For the pragmatist, the point of knowledge is action, and our practical concerns shape scientific inquiry and its products. Second is Brown’s denial of “the lexical priority of evidence,” the view that “evidence sets the bounds in which values can influence science, and where evidence and values conflict, evidence trumps” (94). This may raise some eyebrows, as it seems to invite wishful thinking. Brown’s account of values aims to address this worry.

2. The nature of value judgments

Generally, work in science and values highlights value judgments in particular conceptual choices, modeling practices, etc., with a stronger focus on science than values.Footnote 2 Brown wants to remedy this oversight. To start, I think he’s right that some work in science and values implicitly assumes a “coarse noncognitivism” (92) about values and value judgments that understands them as subjective and not answerable to standards like those governing empirical judgments (89). On such a view, denying the lexical priority of evidence is worrisome, as it invites accepting or rejecting claims not for empirical reasons, but because they accord or don’t accord with subjective desires, preferences, or wishes. However, if, as Brown thinks, nonepistemic value judgments are responsive to empirical evidence and also advance epistemic goals, then worries about wishful thinking evaporate or at least take a different form.

To show this, Brown offers a pragmatist, pluralist taxonomy of values and their sources that “fits our everyday experiences of valuing and decision making” (114) better than noncognitivism. According to Brown, values “have cognitive status and evidential value” (115) and value judgment is “a type of practical-empirical inquiry ultimately connected with questions of what to do, with the same basic structure as scientific inquiry” (114). Value judgments are the outcomes of reflecting on “problems of practice concerning the choice of an end or the determination of something’s worth in relation to our practices and ends” (156). Empirical evidence is central to reflecting well on such problems, and value judgments can, in some contexts, function as evidence.

Here, Brown draws upon feminist, pragmatist work. Following Elizabeth Anderson, he suggests “most people recognize that new evidence or experience is relevant to the reappraisal of their values” (178). Further, value judgments aren’t only open to revision given new evidence—Brown appeals to Lynn Hankinson Nelson’s argument that they are part of our web of beliefs—they can also function as evidence. For example, “Feminist values have a strong track record of successfully guiding science” (164). As such, they provide prima facie reasons to reject hypotheses that might undermine them, e.g., “coherence with feminist values ought to speak in favor of some theory or hypothesis, and failure to cohere a piece of evidence against it” (164; footnote removed).Footnote 3 Accordingly, Brown’s argument against the lexical priority of evidence doesn’t deny a central role to empirical evidence in scientific inquiry. Instead, he expands the empirical reasons for settling contingencies to include empirically sensitive values and value judgments.

On my understanding, Brown’s view suggests worries about wishful thinking shouldn’t be cast as worries that values will replace evidence. Instead, they’re worries about closed-mindedness, insensitivity to relevant possibilities, or relying on unreflective habit to settle contingencies. With the worry glossed in this way, Brown shows that wishful thinking isn’t a special problem for those who deny the lexical priority of evidence, since someone who subscribes to the lexical priority of evidence will also need to avoid the failings just mentioned.

3. The ideal of moral imagination

The contingency and practical reason arguments reject the value-free ideal. The empirical account of value judgment reframes and defuses wishful thinking worries. Finally, the ideal of moral imagination provides guidance for scientists in thinking through the consequences of settling contingencies and deciding on certain courses of action. Brown defines the ideal in this way: “Scientists should recognize contingencies in their work as unforced choices, discover morally and epistemically salient aspects of the situation they are deciding, empathetically recognize and understand the legitimate stakeholders and their interests, imaginatively construct and explore possible options, and exercise fair and warranted value judgment in order to guide those decisions” (186).

With the ideal in hand, Brown identifies a form of scientific irresponsibility related to moral failures in exercising imagination, namely, “When scientists fail to recognize contingencies or fail to consider superior options where their decision has significant effects on stakeholders or other morally salient aspects” (187). These are failures of imagination insofar as relevant possibilities or contingencies aren’t acknowledged or recognized. They are moral since they affect things we care about and we have general responsibilities to consider the foreseeable consequences of settling contingencies in particular ways. Empathy also has a key role to play in this process in that it can help increase understanding of “the perspectives of others who are impacted by our decisions and actions” (166).

For Brown, ethical scientific conduct isn’t a matter of learning and following a set of principles or moral theories. Instead, it involves settling contingencies in reflective ways sensitive to empirical evidence and the consequences for legitimate stakeholders’ values. Importantly, Brown makes the case that exercising moral imagination and good epistemic practices go hand-in-hand. Since “[t]he ideal…requires multiplying options beyond the obvious in hopes of finding solutions that better integrate value considerations,” it can “create significant epistemic benefits in helping prevent scientists from being stuck in…solutions that appear best because too narrow a view of possibilities has been taken” (188).

In the conclusion to his book, Brown describes how he and his colleagues use the framework in responsible conduct of research training, compares it to other decision-making protocols, and applies it to three cases. At times, I think the guidance provided by the ideal of moral imagination feels vague outside the context of particular practical problems. This recalls how Brown’s contingency argument, pitched at a high level of generality, glosses over relevant differences in tackling specific contingencies.

I also think Brown’s focus on the individual exercise of moral imagination and empathy undersells how imagining relevant possibilities, thinking through consequences, and finding and including legitimate stakeholders are best undertaken with others. Brown rightfully acknowledges the limits of focusing on individual decision-making, suggesting “ideals that speak to community structure and democratic obligations” are needed to “do justice to…large social contexts” (215). So, while I don’t think Brown would disagree with the following points, and some of what he says about science and democracy speaks to them, I think they’re worth emphasizing. First, it seems exercising moral imagination, even in smaller contexts, functions best as a social process, in part because exercising it with a community of inquirers embodying a variety of perspectives might ameliorate individual deficiencies in imagination and empathy. Second, ideals related to the social structures of science might provide insight regarding how to incentivize individuals to exercise moral imagination responsibly. Finally, following Matthew Sample (2022), philosophers of science can also benefit from exercising moral imagination by critically examining alternatives to our idealizations of the sciences and their place in society, since these have implications for conceptualizing individual and collective responsibilities of scientists.

4. Conclusion

Brown tackles a question that’s come to occupy an important place in philosophy of science: What’s an epistemically and morally viable alternative to the value-free ideal for scientific practices? Though Brown’s lumping tendencies mean some arguments are pitched at a general level, his answers are unique and generative, and they provide a flexible framework applicable across a range of contexts. Indeed, Karoliina Pulkkinen and co-authors (2022) have already tested the practicality of his ideal and fruitfully built upon it in the context of climate modeling. Just as importantly, Brown’s compelling pragmatic account of scientific-practical inquiry and the “values” in “science and values” may have readers questioning—rightfully, in my view—the separation of the social, political, ethical, and scientific.

Footnotes

1 See Intemann (2015).

2 Ward (2019) makes a similar point.

3 Brown makes an analogous argument about the values of racial justice and equality and the history of “flawed and unreliable” research into cognitive differences across races (138).

References

Intemann, Kristen. 2015. “Distinguishing Between Legitimate and Illegitimate Values in Climate Modeling.” European Journal for Philosophy of Science 5:217–32.
Pulkkinen, Karoliina, Undorf, Sabine, and Bender, Frida A.-M. 2022. “Values in Climate Modelling: Testing the Practical Applicability of the Moral Imagination Ideal.” European Journal for Philosophy of Science 12 (68):1–18. https://doi.org/10.1007/s13194-022-00488-4.
Sample, Matthew. 2022. “Science, Responsibility, and the Philosophical Imagination.” Synthese 200 (79):1–19. https://doi.org/10.1007/s11229-022-03612-2.
Ward, Zina. 2019. Review of Exploring Inductive Risk: Case Studies of Values in Science, edited by Kevin Elliott and Ted Richards. Journal of Moral Philosophy 16 (6):769–72.