
Misdiagnosing the problem of why behavioural change interventions fail

Published online by Cambridge University Press: 30 August 2023

Magda Osman*
Affiliation:
Centre for Science and Policy, University of Cambridge, Cambridge, Cambridgeshire, UK m.osman@jbs.cam.ac.uk; https://magdaosman.com

Abstract

Routes to achieving any meaningful success in the enterprise of behavioural change require an understanding of the rate of failure, and of why failures occur. This commentary shows that there is more to diagnosing failures than a fixation on micro- rather than macro-level behaviours.

Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press

The reasons why behavioural change interventions keep failing are multifaceted; this is an important motif that runs through this commentary, and less so through the target article. The diagnosis the target article offers for why behavioural change interventions are doomed to fail is that behavioural scientists are focusing on the wrong unit of analysis. Just as economists and social workers do, we first need to acknowledge micro- (individual, or "i-frame"), mezzo- (group), and macro-level (population, or "s-frame") differences in behaviour. By shifting straight from the micro to the macro level, we have a better chance of unlocking the potential of behavioural change interventions, while at the same time avoiding doing the bidding of private sector organisations.

First, researchers have already highlighted the serious problems involved in fixating narrowly on fitting an intervention to a target behaviour while neglecting the wider context in which both are embedded (Meder, Fleischhut, & Osman, 2018). This is also where we begin to understand that a thorough diagnosis of failure requires a multidisciplinary approach.

Second, by focusing on where successes lie, we attend less to how interventions fail, how often they fail, and where they fail (Hummel & Maedche, 2019; Osman et al., 2020). By making inroads into classifying the many types of failure that have been documented (Osman et al., 2020), we can start to address these outstanding issues. Doing so also opens up opportunities to work with decision scientists, data scientists, and social scientists to understand and explain why behavioural change interventions fail when they do, and what success realistically looks like (Cartwright & Hardie, 2012). A unifying causal analytic approach can help to build theories and new empirical practices (Bryan, Tipton, & Yeager, 2021; Osman et al., 2020) that can uncover which combinations of interventions work (e.g., Osman et al., 2020).

Third, because we offer practical solutions to public policy problems, such as those presented in Tables 1 and 2 of the target article, we as behavioural scientists confront the world of policy making. Maintaining a naïve understanding of the science–policy interface, in which the accessibility of evidence is viewed as the key to successful implementation (Reichmann & Wieser, 2022), is a considerable barrier to estimating realistic success rates of behavioural change interventions. We might assume that evidence is taken up through what is often referred to as the policy cycle: agenda setting, policy formation, decision making, policy implementation, and policy evaluation (Lasswell, 1956). But research in public policy, public administration, and political science shows that this cycle is an ideal; there are at least six competing characterisations of the policy-making process, and in each the uptake of scientific evidence is far from linear (Cairney, 2020). So, to inform public and social policy making, behavioural scientists need at least to acknowledge the policy issues that need addressing from the perspective of those who are likely to implement the behavioural interventions.

Scientific progress depends on acknowledging failure, and the target article is an honest account of the limitations of past efforts to achieve behavioural change. However, viable solutions will depend on an accurate characterisation of the aetiology of those failings, along with a theoretical account that lays the foundations for new theorising and empirical investigation.

Competing interest

None.

References

Bryan, C. J., Tipton, E., & Yeager, D. S. (2021). Behavioural science is unlikely to change the world without a heterogeneity revolution. Nature Human Behaviour, 5(8), 980–989.
Cairney, P. (2020). Understanding public policy (2nd ed.). Red Globe.
Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford University Press.
Hummel, D., & Maedche, A. (2019). How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies. Journal of Behavioral and Experimental Economics, 80, 47–58.
Lasswell, H. D. (1956). The decision process: Seven categories of functional analysis. College of Business and Public Administration, University of Maryland.
Meder, B., Fleischhut, N., & Osman, M. (2018). Beyond the confines of choice architecture: A critical analysis. Journal of Economic Psychology, 68, 36–44.
Osman, M., McLachlan, S., Fenton, N., Neil, M., Löfstedt, R., & Meder, B. (2020). Learning from behavioural changes that fail. Trends in Cognitive Sciences, 24(12), 969–980.
Reichmann, S., & Wieser, B. (2022). Open science at the science–policy interface: Bringing in the evidence? www.osf.io