
The scientific value of explanation and prediction

Published online by Cambridge University Press:  06 December 2023

Hause Lin*
Affiliations:
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
Hill and Levene Schools of Business, University of Regina, Regina, SK, Canada
hauselin@gmail.com | https://www.hauselin.com

Abstract

Deep neural network models have revived long-standing debates on the value of explanation versus prediction for advancing science. Bowers et al.'s critique will not make these models go away, but it is likely to prompt new work that seeks to reconcile explanatory and predictive models, which could change how we determine what constitutes valuable scientific knowledge.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Explanatory power and predictive accuracy are different qualities, but are they inconsistent or incompatible? Bowers et al.'s critique of deep neural network models of biological vision resurfaces age-old debates and controversial questions in the history of science (Breiman, 2001; Hempel & Oppenheim, 1948). First, must an explanatory model have predictive accuracy to be considered scientifically valuable? Similarly, must a predictive model have explanatory power to have scientific value? Second, what kinds of models are better for advancing scientific knowledge, and how should we determine the scientific value of models?

To appreciate the significance of Bowers et al.'s critique, let us consider explanation and prediction as two orthogonal dimensions rather than two extremes on a continuum. As shown in Figure 1a, some of the most successful models and theories in the history of humankind have occupied different positions in this two-dimensional space: Theories like relativity and quantum electrodynamics sit in the top-right quadrant (i.e., very high explanatory power and predictive accuracy), whereas Darwinian evolution sits in the bottom-right quadrant (i.e., high explanatory power but little predictive accuracy, or at least predictive accuracy that cannot yet be tested). Importantly, successful models in disciplines ranging from physics to biology generally have high explanatory power.

Figure 1. Scientific value of models with different degrees of two qualities: Explanatory power and predictive accuracy. (a) Bowers et al. value explanation over prediction, such that models with greater explanatory power are preferred. (b) Alternative value function that values both qualities equally. Hotter colors denote greater scientific value, whereas cooler colors denote less scientific value.

Younger disciplines such as neuroscience and psychology – to which biological vision belongs – often aspire to emulate more established disciplines by developing models and theories with increasing explanatory power over time. Bowers et al. also prefer explanatory models and emphasize the importance of controlled laboratory experimentation for testing causal mechanisms and developing explanatory models and theories. Because researchers in these disciplines have historically valued explanatory power more than predictive accuracy, existing models are mostly located in the bottom two quadrants (Fig. 1a; some explanatory power but relatively low predictive accuracy), and models with high predictive accuracy are rare or even unheard of (e.g., Eisenberg et al., 2019; Yarkoni & Westfall, 2017).

Neural network models of biological vision have therefore introduced a class of scientific models that occupies a unique location in the two-dimensional space in Figure 1a (top-left quadrant). One could even argue that this might be the first time the discipline (including neuroscience and psychology) has produced models with greater predictive accuracy than explanatory power. If so, it should come as no surprise that researchers – many of whom have been trained to rely primarily on experimentation to test theories – would feel uncomfortable with models possessing such different qualities and even question the scientific value of these models, despite recent calls to integrate explanation and prediction in neighboring disciplines (Hofman et al., 2021; Yarkoni & Westfall, 2017).

The current state of research on deep neural network models of biological vision reflects a critical juncture in the history of neuroscience as well as psychological and social science. The long-standing tension between different philosophical approaches to theory development no longer exists only in the abstract – arguably for the first time, researchers have to reconcile, in practice, explanatory models with their predictive counterparts.

Bowers et al. emphasize the value of experimentation and the need for models to explain a wide range of experimental results. But this approach is not without limitations: When experiments and models become overly wedded to each other, models might lose touch with reality because they explain phenomena only within but not beyond the laboratory (Lin, Werner, & Inzlicht, 2021).

Should explanation be favored over prediction? The prevailing approach to theory development has certainly favored explanation (Fig. 1a), but the state of research on deep neural network models suggests that developing models with predictive accuracy might be a complementary approach, one that could help test the relevance of explanatory models developed through controlled experimentation. Predictive models could also be used to discover new explanations or causal mechanisms. If so, it is conceivable that current and future generations of researchers (who have been trained to also consider predictive accuracy) might come to value explanation and prediction equally (Fig. 1b).

Deep neural network models are becoming increasingly popular across a wide range of academic disciplines. Although Bowers et al.'s critique is unlikely to reverse this trend, it highlights how new methods and technological advances can turn age-old philosophical debates into practical issues researchers now have to grapple with. How researchers working on biological vision reconcile or integrate the explanatory and predictive approaches in the coming years is likely to have far-reaching consequences for how researchers in other disciplines think about theory development and the philosophy of science. It is also likely to reshape our views of what constitutes valid and valuable scientific knowledge.

Acknowledgments

I thank Adam Bear and Alexandra Decker for helpful discussions.

Financial support

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Competing interest

None.

References

Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3), 199–231. https://doi.org/10.1214/ss/1009213726
Eisenberg, I. W., Bissett, P. G., Zeynep Enkavi, A., Li, J., MacKinnon, D. P., Marsch, L. A., & Poldrack, R. A. (2019). Uncovering the structure of self-regulation through data-driven ontology discovery. Nature Communications, 10(1), 1–13. https://doi.org/10.1038/s41467-019-10301-1
Hempel, C. G., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15(2), 135–175. https://doi.org/10.1086/286983
Hofman, J. M., Watts, D. J., Athey, S., Garip, F., Griffiths, T. L., Kleinberg, J., … Yarkoni, T. (2021). Integrating explanation and prediction in computational social science. Nature, 595(7866), 181–188. https://doi.org/10.1038/s41586-021-03659-0
Lin, H., Werner, K. M., & Inzlicht, M. (2021). Promises and perils of experimentation: The mutual-internal-validity problem. Perspectives on Psychological Science, 16(4), 854–863. https://doi.org/10.1177/1745691620974773
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. https://doi.org/10.1177/1745691617693393