
Explore your experimental designs and theories before you exploit them!

Published online by Cambridge University Press:  05 February 2024

Marina Dubova*
Affiliation: Cognitive Science Program, Indiana University, Bloomington, IN, USA. mdubova@iu.edu; https://www.mdubova.com/

Sabina J. Sloman
Affiliation: Department of Computer Science, University of Manchester, Manchester, UK. sabina.sloman@manchester.ac.uk

Ben Andrew
Affiliation: Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA. benjamin_andrew@brown.edu

Matthew R. Nassar
Affiliation: Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA. matthew_nassar@brown.edu

Sebastian Musslick
Affiliation: Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA; Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany. musslick@brown.edu; www.smusslick.com

*Corresponding author.

Abstract

In many areas of the social and behavioral sciences, the nature of the experiments and theories that best capture the underlying constructs is itself an area of active inquiry. Integrative experiment design risks being prematurely exploitative, hindering exploration of experimental paradigms and of diverse theoretical accounts for target phenomena.

Type: Open Peer Commentary
Copyright: © The Author(s), 2024. Published by Cambridge University Press

Almaatouq et al. argue that one-at-a-time experiments hamper efficient exploration of target phenomena and theoretical integration. To address this, they propose integrative experimentation: Data collection in a large, predetermined experimental design space. Although integrative experimentation addresses many limitations of current experimental practices in the social and behavioral sciences, we argue that it risks being prematurely exploitative by (a) committing to existing experimental paradigms and dimensions of the corresponding design space, and (b) imposing constraints on theory-building. One-at-a-time experimentation serves a critical role in exploring useful experimental and theoretical paradigms that can then be effectively exploited by integrative experimentation.

Integrative experimentation exploits existing experimental paradigms and dimensions of the corresponding design spaces

Although integrative experimentation facilitates exploration within the prespecified design space, it exploits the information – or lack thereof – that informs the characterization of this space. To perform integrative experiments, scientists must identify a priori a small set of experimental tasks to invest in. Almaatouq et al. present several illustrative examples: Peterson, Bourgin, Agrawal, Reichman, and Griffiths (2021) invested enormous resources to collect human decisions on ~10,000 bandit gambles, Baribault et al. (2018) focused on a specific subliminal priming task, and Awad et al. (2018, 2020) extensively sampled a space of trolley problems. In fields where the nature of the experiments that best measure the underlying constructs is itself an area of active inquiry, experiments are run under imperfect knowledge about the paradigm that will best capture a target phenomenon. One-at-a-time experimentation enables open-ended, cheap, and sequentially adaptive exploration of experimental paradigms and assumptions about the design spaces corresponding to these paradigms – including exploration along previously unexplored dimensions of a theoretically infinite design space.
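
This contrast can be made concrete with a toy sketch (our own illustration; the design dimensions and values below are hypothetical, not drawn from Almaatouq et al.). An integrative experiment samples exhaustively within dimensions fixed a priori, whereas a one-at-a-time sequence can introduce dimensions the original space never contained:

```python
import itertools

# Integrative experimentation: sample the full grid over dimensions
# that were chosen before any data collection (hypothetical values).
reward_probs = [0.1, 0.5, 0.9]   # dimension 1, fixed a priori
delays_in_s = [0, 2, 10]         # dimension 2, fixed a priori
integrative_designs = list(itertools.product(reward_probs, delays_in_s))
print(len(integrative_designs), "designs, all inside the prespecified 2-D space")

# One-at-a-time experimentation: each new study may redefine the space
# itself by adding a dimension (here, social context, then framing)
# that the original grid could never reach.
one_at_a_time_designs = [
    {"reward_prob": 0.5, "delay_s": 2},
    {"reward_prob": 0.5, "delay_s": 2, "social_context": True},
    {"reward_prob": 0.9, "social_context": True, "framing": "loss"},
]
dimensions = set().union(*(d.keys() for d in one_at_a_time_designs))
print("dimensions explored sequentially:", sorted(dimensions))
```

The grid can be sampled more densely, but never along a dimension it does not contain; the sequential designs trade density for the ability to expand the space itself.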

Most areas of the social and behavioral sciences use experimental manipulations and outcomes to measure unobservable constructs. Social and behavioral scientists in most domains are still engaged in iterative refinement of the experimental paradigms and dimensions of the design space that will best measure these constructs (Dubova & Goldstone, 2023). For instance, while a plethora of paradigms – including the multisource interference task, the task switching paradigm, and the N-back task – are utilized for the study of mental effort, there is little agreement about which experimental manipulations evoke mentally effortful processes, let alone how these manipulations would be combined into an integrative experiment (Bustamante et al., 2022; Koch, Poljac, Müller, & Kiesel, 2018; Kool, McGuire, Rosen, & Botvinick, 2010; Shenhav et al., 2017; Westbrook & Braver, 2015). Here, running integrative experiments can hinder solving the main problem of the field – identifying a set of experimental manipulations relevant to the construct of mental effort.

Almaatouq et al. give examples of areas in the social and behavioral sciences that are dominated by a small set of “standard” experimental paradigms, such as bandit gambles for risky decision making. In these cases, integrative experimentation can facilitate efficient exploration of behavior across the space defined by these paradigms. In other cases, however, integrative experimentation may actually hinder exploration of the target phenomena. For instance, early vision science operated in design spaces involving artificial visual stimuli. While integrative experimentation would have yielded theoretical commensurability in this space, one-at-a-time experiments (i.e., the use of stimuli that fell outside the common design space) enabled a quick expansion of the space to natural stimuli, which in turn led to rapid revisions of dominant theories of vision (Olshausen & Field, 2005; Zhaoping, 2014). Thus, the state of scientific inquiry may often not justify a large investment of resources and interinstitutional coordination at the expense of expanding the design space or developing entirely new tasks.

Integrative experimentation exploits existing theoretical paradigms

Almaatouq et al. advocate for using integrative experiments to enforce commensurability of theoretical accounts for the data. However, this approach may prematurely prioritize some theoretical frameworks over others. For example, the BrainScore benchmark integrates neuroimaging studies on visual object recognition to standardize the comparison of formal theories of neural visual processing (Schrimpf et al., 2020). Although BrainScore aims for inclusivity, its design required certain commitments, such as the set of target phenomena and measurements to be accounted for (i.e., neural recordings in object recognition experiments) and the form that the theories can take (i.e., neural networks mimicking the ventral stream, taking pixels as inputs, and predicting behavioral responses). Equally justified alternative benchmarks could have led to different theories of visual processing being prioritized: For instance, the dataset could have emphasized temporal aspects of vision, or grouped object recognition with visual search tasks when identifying the domain space for theories to capture. Similarly, standardizing theoretical accounts by the constraints imposed by integrative experiments, which often focus on a single experimental paradigm, may hinder exploration of theoretical frameworks that target different aspects of the phenomena.
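
To make the kind of commitment at stake concrete, consider a toy benchmark interface (a hypothetical sketch, not BrainScore's actual API). The commitments live in the interface itself: only theories expressible as pixel-to-response mappings can be scored at all.

```python
from typing import Callable, Sequence

# Hypothetical benchmark contract: pixels in, object label out.
# Any theory that cannot be cast in this form has no entry point.
PixelModel = Callable[[Sequence[float]], int]

def benchmark_score(model: PixelModel, stimuli, labels) -> float:
    """Fraction of object-recognition trials the model predicts correctly."""
    correct = sum(model(x) == y for x, y in zip(stimuli, labels))
    return correct / len(labels)

# A dynamical-systems theory that predicts response trajectories over
# time, rather than single labels, cannot be passed to benchmark_score:
# the interface renders it incommensurable, not wrong.
```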

Many, if not most, areas of the social and behavioral sciences would benefit from facilitating investigation of a larger class of theoretical paradigms, rather than constraining theory-building. For example, cognitive science consists of incommensurable theoretical paradigms, such as rational analysis and dynamical systems, which make predictions about different, often nonoverlapping, aspects of cognitive phenomena. For instance, dynamical systems modeling seeks to capture the temporal aspects of a cognitive process, whereas rational analysis focuses on the outcomes of cognition. A diversity of theoretical frameworks informs the design of new experimental paradigms, broadens the collective conceptualization of the relevant design spaces (Chang, 2012; Massimi, 2022), and contributes to more comprehensive insights on cognition (Krakauer, Ghazanfar, Gomez-Marin, MacIver, & Poeppel, 2017; Marr, 1982; Medin & Bang, 2014). Constraining theory-building risks reinforcing biases that favor existing experimental paradigms, further inhibiting exploration of novel experimental and theoretical frameworks (Dubova, Moskvichev, & Zollman, 2022; Sloman, Oppenheimer, Broomell, & Shalizi, 2022).

Integrative and one-at-a-time experimentation benefit fields with different goals at different stages of their development

Viewed from a resource allocation perspective, scientific endeavors face an explore–exploit dilemma. Integrative experimentation facilitates broad characterization of behavior within a specific paradigm and its corresponding design space. One-at-a-time experimentation encourages iterative refinement of experimental paradigms and the development of new theoretical frameworks. We believe a combination of integrative and one-at-a-time experimentation is needed to effectively address the explore–exploit problem in the sciences.
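
A standard epsilon-greedy simulation illustrates this dilemma (a minimal sketch under assumed payoff values; the "arms" stand in for candidate experimental paradigms whose scientific payoff is unknown in advance):

```python
import random

def simulate(epsilon, n_steps=10_000, seed=0):
    """Allocate experiments across paradigms with an epsilon-greedy rule."""
    rng = random.Random(seed)
    true_payoffs = [0.2, 0.5, 0.8]      # unknown to the scientist; assumed here
    estimates = [0.0] * len(true_payoffs)
    counts = [0] * len(true_payoffs)
    total = 0.0
    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_payoffs))    # explore a paradigm
        else:
            arm = estimates.index(max(estimates))     # exploit the best so far
        reward = rng.gauss(true_payoffs[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / n_steps

for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps}: mean payoff {simulate(eps):.3f}")
```

In such runs, a purely exploitative allocation (epsilon = 0) typically locks in whichever paradigm happens to be tried first, whereas even modest exploration reliably identifies the more informative alternative; too much exploration, in turn, wastes resources that could be spent exploiting what is already known.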

Financial support

S. M. is supported by Schmidt Science Fellows, in partnership with the Rhodes Trust. M. R. N. is supported by NIH R01 MH126971.

Competing interest

None.

References

Awad, E., Dsouza, S., Bonnefon, J. F., Shariff, A., & Rahwan, I. (2020). Crowdsourcing moral machines. Communications of the ACM, 63(3), 48–55.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., … Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59–64.
Baribault, B., Donkin, C., Little, D. R., Trueblood, J. S., Oravecz, Z., Van Ravenzwaaij, D., … Vandekerckhove, J. (2018). Metastudies for robust tests of theory. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2607–2612.
Bustamante, L. A., Oshinowo, T., Lee, J. R., Tong, E., Burton, A. R., Shenhav, A. S., … Daw, N. D. (2022). Effort foraging task reveals positive correlation between individual differences in the cost of cognitive and physical effort in humans and relationship to self-reported motivation and affect. bioRxiv, 2022-11.
Chang, H. (2012). Is water H2O?: Evidence, realism and pluralism (Vol. 293). Springer Science & Business Media.
Dubova, M., & Goldstone, R. L. (2023). Carving joints into nature: Reengineering scientific concepts in light of concept-laden evidence. Trends in Cognitive Sciences, 27(7), 656–670.
Dubova, M., Moskvichev, A., & Zollman, K. (2022). Against theory-motivated experimentation in science. MetaArXiv, June 24.
Koch, I., Poljac, E., Müller, H., & Kiesel, A. (2018). Cognitive structure, flexibility, and plasticity in human multitasking – An integrative review of dual-task and task-switching research. Psychological Bulletin, 144(6), 557.
Kool, W., McGuire, J. T., Rosen, Z. B., & Botvinick, M. M. (2010). Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General, 139(4), 665.
Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A., & Poeppel, D. (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93(3), 480–490.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. W.H. Freeman.
Massimi, M. (2022). Perspectival realism. Oxford University Press.
Medin, D. L., & Bang, M. (2014). Who's asking?: Native science, western science, and science education. MIT Press.
Olshausen, B. A., & Field, D. J. (2005). How close are we to understanding V1? Neural Computation, 17(8), 1665–1699.
Peterson, J. C., Bourgin, D. D., Agrawal, M., Reichman, D., & Griffiths, T. L. (2021). Using large-scale experiments and machine learning to discover theories of human decision-making. Science, 372(6547), 1209–1214.
Schrimpf, M., Kubilius, J., Lee, M. J., Murty, N. A. R., Ajemian, R., & DiCarlo, J. J. (2020). Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 108(3), 413–423.
Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T. L., Cohen, J. D., & Botvinick, M. M. (2017). Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience, 40, 99–124.
Sloman, S. J., Oppenheimer, D. M., Broomell, S. B., & Shalizi, C. R. (2022). Characterizing the robustness of Bayesian adaptive experimental designs to active learning bias. arXiv preprint arXiv:2205.13698.
Westbrook, A., & Braver, T. S. (2015). Cognitive effort: A neuroeconomic approach. Cognitive, Affective, & Behavioral Neuroscience, 15, 395–415.
Zhaoping, L. (2014). Understanding vision: Theory, models, and data. Oxford University Press.