
11 - An Open Science Workflow for More Credible, Rigorous Research

from Part III - Your Research/Academic Career

Published online by Cambridge University Press:  21 July 2022

Mitchell J. Prinstein
Affiliation: University of North Carolina, Chapel Hill

Summary

Part of what distinguishes science from other ways of knowing is that scientists show their work. Yet when probed, it turns out that much of the process of research is hidden away: in personal files, in undocumented conversations, in point-and-click menus, and so on. In recent years, a movement toward more open science has arisen in psychology. Open science practices capture a broad swath of activities designed to take parts of the research process that were previously known only to a research team and make them more broadly accessible (e.g., open data, open analysis code, pre-registration, open research materials). Such practices increase the value of research by increasing transparency, which may in turn facilitate higher research quality. Plus, open science practices are now required at many journals. This chapter will introduce open science practices and provide plentiful resources for researchers seeking to integrate these practices into their workflow.

Type: Chapter
Information: The Portable Mentor: Expert Guide to a Successful Career in Psychology, pp. 197–216
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/).

Recent years have heralded a relatively tumultuous time in the history of psychological science. The past decade saw the publication of a landmark paper that attempted to replicate 100 studies and estimated that just 39 percent of studies published in top psychology journals were replicable (Open Science Collaboration, 2015). There was also a surplus of studies failing to replicate high-profile effects that had long been taken as fact (e.g., Hagger et al., 2016; Harris et al., 2013; Wagenmakers et al., 2016). Taken together, these findings suddenly made the foundations of much psychological research seem very shaky.

As with similar evidence in other scientific fields (e.g., biomedicine, criminology), these findings have led to a collective soul-searching dubbed the “replication crisis” or the “credibility revolution” (Nelson et al., 2018; Vazire, 2018). Clearly, something about the way scientists had gone about their work in the past wasn’t effective at uncovering replicable findings, and changes were badly needed. An impressive collection of meta-scientific studies (i.e., studies about scientists and scientific practices) has revealed major shortcomings in standard research and statistical methods (e.g., Button et al., 2013; John et al., 2012; Nuijten et al., 2016; Simmons et al., 2011). These studies point to a clear way to improve not only replicability but also the accuracy of scientific conclusions: open science.

Open science refers to a radically transparent approach to the research process. “Open” refers to sharing – making accessible – parts of the research process that have traditionally been known only to an individual researcher or research team. In a standard research article, authors summarize their research methods and their findings, leaving out many details along the way. Among other things, open science includes sharing research materials (protocols) in full, making data and analysis code publicly available, and pre-registering (i.e., making plans public) study designs, hypotheses, and analysis plans.

Psychology has previously gone through periods of unrest similar to the 2010s, with methodologists and statisticians making persuasive pleas for more transparency and rigor in research (e.g., Bakan, 1966; Cohen, 1994; Kerr, 1998; Meehl, 1978). Yet, it is only now with improvements in technology and research infrastructure, together with concerted efforts in journals and scientific societies by reformers, that changes have begun to stick (Spellman, 2015).

Training in open science practices is now a required part of becoming a research psychologist. The goal of this chapter is to briefly review the shortcomings in scientific practice that open science practices address and then to give a more detailed account of open science itself. We’ll consider what it means to work openly and offer pragmatic advice for getting started.

1. Why Open Science?

When introducing new researchers to the idea of open science, the need for such practices seems obvious and self-evident. Doesn’t being a scientist logically imply an obligation to transparently show one’s work and subject it to rigorous scrutiny? Yet, abundant evidence reveals that researchers have not historically lived up to this ideal and that the failure to do transparent, rigorous work has hindered scientific progress.

1.1 Old Habits Die Hard

Several factors in the past combined to create conditions that encouraged researchers to avoid open science practices. First, incentives in academic contexts have not historically rewarded such behaviors and, in some cases, may have actually punished them (Smaldino & McElreath, 2016). To get ahead in an academic career, publications are the coin of the realm, and jobs, promotions, and accolades can sometimes be awarded based on the number of publications rather than their quality.

Second, human biases conspire to fool us into thinking we have discovered something when we actually have not (Bishop, 2020). For instance, confirmation bias allows us to selectively interpret results in ways that support our pre-existing beliefs or theories, which may be flawed. Self-serving biases might cause defensive reactions when critics point out errors in our methods or conclusions. Adopting open science practices can expose researchers to cognitive discomfort (e.g., pre-existing beliefs are challenged; higher levels of transparency mean that critics are given ammunition), which we might naturally seek to avoid.

Finally, psychology uses an apprenticeship model of researcher training, which means that the practices of new researchers might only be as good as the practices of the more senior academics training them. When questionable research practices are taught as normative by research mentors, higher-quality open science practices might be dismissed as methodological pedantry.

Given the abundant evidence of flaws in psychology’s collective body of knowledge, we now know how important it is to overcome the hurdles described here and transition to a higher standard of practice. Incentives are changing, and open science practices are becoming the norm at many journals (Nosek et al., 2015). A new generation of researchers is being trained to employ more rigorous practices. And although the cognitive biases just discussed might be some of the toughest problems to overcome, greater levels of transparency in the publishing process help fortify the ability of the peer review process to serve as a check on researcher biases.

1.2 Benefits of Open Science Practices

A number of benefits of open science practices are worth emphasizing. First, increases in transparency make it possible for errors to be detected and for science to self-correct. The self-correcting nature of science is often heralded as a key feature that distinguishes scientific approaches from other ways of knowing. Yet, self-correction is difficult, if not impossible, when details of research are routinely withheld (Vazire & Holcombe, 2020).

Second, openly sharing research materials (protocols), analysis code, and data provides new opportunities to extend upon research and adds value above and beyond what a single study would add. For example, future researchers can more easily replicate a study’s methods if they have access to a full protocol and materials; secondary data analysts and meta-analysts can perform novel analyses on raw data if they are shared.

Third, collaborative work becomes easier when teams adopt the careful documentation habits that practitioners of open science hone. Even massive collaborations across time and location become possible when research materials and data are shared following similar standards (Moshontz et al., 2018).

Finally, the benefits of open science practices accrue not only to the field at large, but also to individual researchers. Working openly provides a tangible record of your contributions as a researcher, which may be useful when it comes to applying for funding, awards, or jobs.

Markowetz (2015) describes five “selfish” reasons to work reproducibly, namely: (a) to avoid “disaster” (i.e., major errors), (b) because it’s easier, (c) to smooth the peer review process, (d) to allow others to build on your work, and (e) to build your reputation. Likewise, McKiernan et al. (2016) review the ample evidence that articles featuring open science practices tend to be cited more often and discussed more in the media, to attract more funding and job offers, and to be associated with larger networks of collaborators. Allen and Mehler (2019) review benefits (along with challenges) specifically for early career researchers.

All of this is not to say that there are not costs or downsides to some of the practices discussed here. For one thing, learning and implementing new techniques takes time, although experience shows that you’ll become faster and more efficient with practice. Additionally, unsupportive research mentors or other senior collaborators can make it challenging to embrace open science practices. The power dynamics in such relationships may mean that there is little flexibility in the practices that early career researchers can employ. Trying to propose new techniques can be stressful and might strain advisor-advisee relationships, but see Kathawalla et al. (2021) for rebuttals to these issues and other common worries.

In spite of these persistent challenges and the old pressures working against the adoption of open science practices, I hope to convince you that the benefits of working openly are numerous – both to the field and to individual researchers. As a testament to changing norms and incentives, open science practices are spreading and taking hold in psychology (Christensen, Freese, et al., 2019; Tenney et al., 2021). Let us consider in more detail what we actually mean by open science practices.

2. Planning Your Research

Many open science practices boil down to forming or changing your work habits so that more parts of your work are available to be observed by others. But like other healthy habits (eating healthy food, exercising), open science practices may take some initial effort to put into place. You may also find that what works well for others doesn’t work well for you, and it may take some trial and error to arrive at a workflow that is both effective and sustainable. However, the benefits that you’ll reap from establishing these habits – both immediate and delayed – are well worth the effort. It may not seem like it, but there is no better time in your career to begin than now.

Likewise, you may find that many open science practices are most easily implemented early in the research process, during the planning stages. But fear not: if a project is already underway, we’ll consider ways to add transparency to the research process at later stages as well. Here, we’ll discuss using the Open Science Framework (https://osf.io), along with pre-registration and registered reports, as you plan your research.

2.1 Managing the Open Science Workflow: The Open Science Framework

The Open Science Framework (OSF; https://osf.io) is a powerful research management tool. It allows you to manage all stages of the research process in one location, which can help you stay organized. OSF is also not tied to any specific academic institution, so you won’t have to worry about transferring your work when you inevitably change jobs (perhaps several times). Other tools can do many of the things OSF can (some researchers like to use GitHub, figshare, or Zenodo, for instance), but OSF was specifically created for managing scientific research and has a number of features that make it uniquely suited to the task. OSF’s core functions include (but are not limited to) long-term archiving of research materials, analysis code, and data; a flexible but robust pre-registration tool; and support for collaborative workflow management. Later in the chapter, we’ll discuss the ins and outs of each of these practices, but here I want to review a few of the ways that OSF is specialized for these functions.

The main unit of work on OSF is the “project.” Each project has a stable URL and the potential to create an associated digital object identifier (DOI). This means that researchers can make reference to OSF project pages in their research articles without worry that links will cease to function or shared content will become unavailable. A sizable preservation fund promises that content shared on OSF will remain available for at least 50 years, even if the service should cease to operate. This stability makes OSF well-suited to host part of the scientific record.

A key feature of projects is that they can be made public (accessible to all) or private (accessible only to contributors). This feature allows you to share your work publicly when you are ready, whether that is immediately or only after a project is complete. Another feature is that projects can be shared using “view-only” links. These links have the option to remove contributor names to enable the materials shared in a project to be accessible to peer reviewers at journals that use masked review.

Projects can have any number of contributors, making it possible to easily work collaboratively even with a large team. An activity tracker gives a detailed and complete account of changes to the project (e.g., adding or removing a file, editing the project wiki page), so you always know who did what, and when, within a project. Another benefit is the ability to easily connect OSF to other tools (e.g., Google drive, GitHub) to further enhance OSF’s capabilities.

Within projects, it is possible to create nested “components.” Components have their own URLs, DOIs, privacy settings, and contributor lists. It is possible, for instance, to create a component within a project and to restrict access to that component alone while making the rest of the project publicly accessible. If particular parts of a project are sensitive or confidential, components can be a useful way to maintain the privacy of that information. Similarly, it may be necessary for some members of a research group to have access to certain parts of a project while others do not. Components give researchers this fine-grained level of control.

Finally, OSF’s pre-registration function allows projects and components to be “frozen” (i.e., saved as time-stamped copies that cannot be edited). Researchers can opt to pre-register their projects using one of many templates, or they can simply upload the narrative text of their research plans. In this way, researchers and editors can be confident about which elements of a study were pre-specified and which were informed by the research process or outcomes.

The review of OSF’s features here is necessarily brief. Soderberg (2018) provides a step-by-step guide for getting started with OSF. Tutorials are also available on the Center for Open Science’s YouTube channel. I recommend selecting a project – perhaps one for which you are the lead contributor – to try out OSF and get familiar with its features in greater detail. Later, you may want to consider using a project template, like the one that I use in my lab (Corker, 2016), to standardize the appearance and organization of your OSF projects.
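If you prefer to script your interactions with OSF rather than use the web interface, the osfr package for R wraps the OSF API. The snippet below is only a minimal sketch under that assumption; the project title, component name, and file names are hypothetical, and you would supply your own personal access token.

```r
# Minimal sketch using the osfr package (an R client for the OSF API).
# The project title, component name, and file names below are hypothetical.
library(osfr)

# Authenticate with a personal access token stored in an environment variable
osf_auth(Sys.getenv("OSF_PAT"))

# Create a project (private by default) and a nested component for materials
project   <- osf_create_project(title = "Example Study: Open Workflow Demo")
materials <- osf_create_component(project, title = "Materials")

# Upload the study protocol and questionnaire to the Materials component
osf_upload(materials, path = c("protocol.docx", "questionnaire.pdf"))
```

Scripting these steps has the side benefit of making the structure of your projects itself repeatable across studies.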

2.2 Pre-Registration and Registered Reports

Learning about how to pre-register research involves much more than just learning how to use a particular tool (like OSF) to complete the registration process. Like other research methods, training and practice are needed to become skilled at this key open science technique (Tackett et al., 2020). Pre-registration refers to publicly reporting study designs, hypotheses, and/or analysis plans prior to the onset of a research project. Additionally, the pre-registered plan should be shared in an accessible repository, and it should be “read-only” (i.e., not editable after posting). As we’ll see, there are several reasons a researcher might choose to pre-register, along with a variety of benefits of doing so, but the most basic function of the practice is that pre-registration clearly delineates the parts of a research project that were specified before the onset of a project from those parts that were decided on along the way or based on observed data.

Depending on their goals, researchers might pre-register for different reasons (da Silva Frost & Ledgerwood, 2020; Ledgerwood, 2018; Navarro, 2019). First, researchers may want to constrain particular data analytic choices prior to encountering the data. Doing so makes it clear to the researchers, and to readers, that the presented analysis is not merely the one most favorable to the authors’ predictions, nor the one with the lowest p-value. Second, researchers might desire to specify theoretical predictions prior to encountering a result. In so doing, they set up conditions that enable a strong test of the theory, including the possibility for falsification of alternative hypotheses (Platt, 1964). Third, researchers may seek to increase the transparency of their research process, documenting particular plans and, crucially, when those plans were made. In addition to the scientific benefits of transparency, pre-registration can also facilitate more detailed planning than usual, potentially increasing research quality as potential pitfalls are caught early enough to be remedied.

Some of these reasons are more applicable to certain types of research than others, but nearly all research can benefit from some form of pre-registration. For instance, some research is descriptive and does not test hypotheses stemming from a theory. Other research might feature few or no statistical analyses. The theory testing or analytic constraint functions of pre-registration might not be applicable in these instances. However, the benefits of increased transparency and enhanced planning stand to benefit many kinds of research (but see Devezer et al., 2021, for a critical take on the value of pre-registration).

A related but distinct practice is Registered Reports (Chambers, 2013). In a registered report, authors submit a study proposal – usually as a manuscript consisting of a complete introduction, proposed method, and proposed analysis section – to a journal that offers the format. The manuscript (known at that point as “stage 1”) is then peer-reviewed, after which it can be rejected, accepted, or receive a revise and resubmit. Crucially, once the stage 1 manuscript is accepted (most likely after revision following peer review), the journal agrees to publish the final paper regardless of the statistical significance of results, provided the agreed upon plan has been followed – a phase of publication known as “in-principle acceptance.” Once results are in, the paper (at this point known as a “stage 2” manuscript) goes out again for peer review to verify that the study was executed as agreed.

When stage 1 proposals are published (either as stand-alone manuscripts or as supplements to the final stage 2 manuscripts), registered reports allow readers to confirm which parts of a study have been planned ahead of time, just like ordinary pre-registrations. Likewise, registered reports limit strategic analytic flexibility, allow strong tests of hypotheses, and increase the transparency of research. Crucially, however, registered reports also address publication bias, because papers are not accepted or rejected on the basis of the outcome of the research. Furthermore, the two-stage peer-review process has an even greater potential to improve study quality, because researchers receive the benefit of peer critique during the design phase of a study when there is still time to correct flaws. Finally, because the publication process is overseen by an editor, undisclosed deviations from the pre-registered plan may be less likely to occur than they are with unreviewed pre-registration. Pragmatically, registered reports might be especially worthwhile in contentious areas of study where it is useful to jointly agree on a critical test ahead of time with peer critics. Authors can also enjoy the promise of acceptance of the final product prior to investing resources in data collection.

Table 11.1 lists guidance and templates that have been developed across different subfields and research methods to enable nearly any study to be pre-registered. A final conceptual distinction is worth brief mention. Pre-registrations are documentation of researchers’ plans for their studies (in systematic reviews of health research, these documents are known as protocols). When catalogued and searchable, pre-registrations form a registry. In the United States, the most common study registry is clinicaltrials.gov, because the National Institutes of Health requires studies that it funds to be registered there. PROSPERO (Page et al., 2018) is the main registry for health-related systematic reviews. Entries in clinicaltrials.gov and PROSPERO must follow a particular format, and adhering to that format may or may not fulfill researchers’ pre-registration goals (for analytic constraint, for hypothesis testing, or for increasing transparency). For instance, when registering a study in clinicaltrials.gov, researchers must declare their primary outcomes (i.e., dependent variables) and distinguish them from secondary outcomes, but they are not required to submit a detailed analysis plan. A major benefit of study registries is to track the existence of studies independent of final publications. Registries also allow the detection of questionable research practices like outcome switching (e.g., Goldacre et al., 2019). However, entries in clinicaltrials.gov and PROSPERO fall short in many ways when it comes to achieving the various goals of pre-registration discussed above. It is important to distinguish brief registry entries from more detailed pre-registrations and protocols.

Table 11.1 Guides and templates for pre-registration

Method/subfield: Source
Clinical science: Benning et al. (2019)
Cognitive modeling application: Crüwell & Evans (2020)
Developmental cognitive neuroscience: Flournoy et al. (2020)
EEG/ERP: Paul et al. (2021)
Experience sampling: Kirtley et al. (2021)
Experimental social psychology: van ’t Veer & Giner-Sorolla (2016)
Exploratory research: Dirnagl (2020)
fMRI: Flannery (2020)
Infant research: Havron et al. (2020)
Intervention research: Moreau & Wiebels (2021)
Linguistics: Roettger (2021); Mertzen et al. (2021)
Psychopathology: Krypotos et al. (2019)
Qualitative research: Haven & Van Grootel (2019); Haven et al. (2020)
Quantitative research: Bosnjak et al. (2021)
Replication research: Brandt et al. (2014)
Secondary data analysis: Weston et al. (2019); Mertens & Krypotos (2019); Van den Akker et al. (2021)
Single-case design: Johnson & Cook (2019)
Systematic review (general): Van den Akker et al. (2020)
Systematic review and meta-analysis protocols (PRISMA-P): Moher et al. (2015); Shamseer et al. (2015)
Systematic review (non-interventional): Topor et al. (2021)

3. Doing the Research

Open science considerations are as relevant when you are actually conducting your research as they are when you are planning it. One of the things you have surely already learned in your graduate training is that research projects often take a long time to complete. It may be several months, or perhaps even longer, after you have planned a study and collected the data before you are actually finalizing a manuscript to submit for publication. Even once an initial draft is completed, you will again have a lengthy wait while the paper is reviewed, after which time you will invariably have to return to the project for revisions. To make matters worse, as your career unfolds, you will begin to juggle multiple such projects simultaneously. Put briefly: you need a robust system of documentation to keep track of these many projects.

In spite of the importance of this topic, most psychology graduate programs have little in the way of formal training in these practices. Here, I will provide an overview of a few key topics in this area, but you would be well served to dig more deeply into this area on your own. In particular, Briney (2015) provides a book-length treatment of data management practices. (Here “data” is used in the broad sense to mean information, which includes but extends beyond participant responses.) Henry (2021a, 2021b) provides an overview of many relevant issues as well. Another excellent place to look for help in this area is your university library. Librarians are experts in data management, and libraries often host workshops and give consultations to help researchers improve their practices.

Several practices are part of the array of options available to openly document your research process. Here, I’ll introduce open lab notebooks, open protocols/materials, and open data/analysis code. Klein et al. (2018) provide a detailed, pragmatic look at these topics, highlighting considerations around what to share, how to share, and when to share.

3.1 Open Lab Notebooks

One way to track your research as it unfolds is to keep a detailed lab notebook. Recently, some researchers have begun to keep open, digital lab notebooks (Campbell, 2018). Put briefly, open lab notebooks allow outsiders to access the research process in its entirety in real time (Bradley et al., 2011). Open lab notebooks might include entries for data collected, experiments run, analyses performed, and so on. They can also include accounts of decisions made along the way – for instance, to change an analysis strategy or to modify the participant recruitment protocol. Open lab notebooks are a natural complement to pre-registration insofar as a pre-registration spells out a plan for a project, and the lab notebook documents the execution (or alteration) of that plan. In fact, for some types of research, where the a priori plan is relatively sparse, an open lab notebook can be an especially effective way to transparently document exploration as it unfolds.

On a spectrum from completely open research to completely opaque research, the practice of keeping an open lab notebook marks the far (open) end of the scale. For some projects (or some researchers) the costs of keeping a detailed open lab notebook in terms of time and effort might greatly exceed the scientific benefits for transparency and record keeping. Other practices may achieve similar goals more efficiently, but for some projects, the practice could prove invaluable. To decide whether an open lab notebook is right for you, consider the examples given in Campbell (2018). You can also see an open notebook in action here: https://osf.io/3n964/ (Koessler et al., 2019).

3.2 Open Protocols and Open Materials

A paper’s Method section is designed to describe a study protocol – that is, its design, participants, procedure, and materials – in enough detail that an independent researcher could replicate the study. In actuality, many key details of study protocols are omitted from Method sections (Errington, 2019). To remedy this information gap, researchers should share full study protocols, along with the research materials themselves, as supplemental files. Protocols can include things like complete scripts for experimental research assistants, video demonstrations of techniques (e.g., a participant interaction or a neurochemical assay), and full copies of study questionnaires. The goal is for another person to be able to execute a study fully without any assistance from the original author.

Research materials that have been created specifically for a particular study – for instance, the actual questions asked of participants or program files for an experimental task – are especially important to share. If existing materials are used, the source where those materials can be accessed should be cited in full. If there are limitations on the availability of materials, which might be the case if materials are proprietary or have restricted access for ethical reasons, those limitations should be disclosed in the manuscript.

3.3 Reproducible Analyses, Open Code, and Open Data

One of the basic features of scientific research products is that they should be independently reproducible. A finding that can only be recreated by one person is a magic trick, not a scientific truism. Here, reproducible means that results can be recreated using the same data originally used to make a claim. By contrast, replicability implies the repetition of a study’s results using different data (e.g., a new sample). Note also that a finding can be reproducible, or even replicable, and still not be a valid or accurate representation of reality (Vazire et al., 2020). Reproducibility can be thought of as a minimally necessary precursor to later validity claims. In psychology, analyses of quantitative data very often form the backbone of our scientific claims. Yet the reproducibility of data analytic procedures may never be checked, or if it is checked, findings may not be reproducible (Obels et al., 2020; Stodden et al., 2018). Even relatively simple errors in reporting threaten the accuracy of the research literature (Nuijten et al., 2016).

Luckily, these problems are fixable, if we are willing to put in the effort. Specifically, researchers should share the code underlying their analyses and, when legally and ethically permissible, they should share their data. But beyond just sharing the “finished product,” it may be helpful to think about preparing your data and code to share while the project is actually under way (Klein et al., 2018).

Whenever possible, analyses should be conducted using analysis code – also known as scripting or syntax – rather than by using point-and-click menus in statistical software or doing hand calculations in spreadsheet programs. To further enhance the reproducibility of reported results, you can write your results section using a language called R Markdown. Succinctly, R Markdown combines descriptive text with results (e.g., statistics, counts) drawn directly from analyses. When results are prepared in this way, there is no need to worry about typos or other transcription errors making their way into your paper, because numbers from results are pulled directly from statistical output. Additionally, if there is a change to the data – say, if analyses need to be re-run on a subset of cases – the result text will automatically update with little effort.
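To make this concrete, here is a minimal sketch of how an R Markdown results paragraph can pull numbers directly from an analysis. The data frame and variable names (study_data, condition, score) are hypothetical; the point is that the reported statistics are computed rather than typed.

````
```{r primary-analysis, include = FALSE}
# Hypothetical data frame `study_data` with a two-level factor `condition`
# and a numeric outcome `score`
fit <- t.test(score ~ condition, data = study_data)
```

Participants in the two conditions differed on the outcome,
t(`r round(fit$parameter, 1)`) = `r round(fit$statistic, 2)`,
p = `r round(fit$p.value, 3)`.
````

When the document is knit, the inline `r ...` expressions are replaced with the computed values, so the prose always matches the statistical output.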

Peikert and Brandmaier (2019) describe a possible workflow to achieve reproducible results using R Markdown along with a handful of other tools. Rouder (2016) details a process for sharing data as it is generated – so-called “born open” data. This method also preserves the integrity of the original data. When combined with Peikert and Brandmaier’s technique, the potential for errors to affect results or reporting is greatly diminished.

Regardless of the particular scripting language that you use to analyze your data, the code, along with the data itself, should be well documented to enable use by others, including reviewers and other researchers. You will want to produce a codebook, also known as a data dictionary, to accompany your data and code. Buchanan et al. (2021) describe the ins and outs of data dictionaries. Arslan (2019) writes about an automated process for codebook generation using R statistical software.
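As one illustration, Arslan’s codebook package can render a formatted data dictionary from a labeled data frame inside an R Markdown document. The sketch below assumes a hypothetical data frame called study_data, and the variable labels are invented for the example.

```r
# Minimal sketch: automated codebook generation with the codebook package.
# `study_data` and its variable labels are hypothetical.
library(codebook)
library(labelled)

var_label(study_data$condition) <- "Experimental condition (0 = control, 1 = treatment)"
var_label(study_data$score)     <- "Well-being composite (mean of seven 1-7 Likert items)"

# Called from a chunk in an R Markdown document, this renders summaries,
# labels, value ranges, and missingness for every variable
codebook(study_data)
```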

3.4 Version Control

When it comes to tracking research products in progress, a crucial concept is known as version control. A version control system permits contributors to a paper or other product (such as analysis code) to automatically track who made changes to the text and when they made them. Rather than saving many copies of a file in different locations and under different names, there is only one copy of a version-controlled file, but because changes are tracked, it is possible to roll back a file to an earlier version (for instance, if an error is detected). On large collaborative projects, it is vital to be able to work together simultaneously and to be able to return to an earlier version of the work if needed.

Working with version-controlled files decreases the potential for mistakes in research to go undetected. Rouder et al. (2019) describe practices, including the use of version control, that help to minimize mistakes and improve research quality. Vuorre and Curley (2018) provide specific guidance for using Git, one of the most popular version control systems. An additional benefit of learning to use these systems is their broad applicability in non-academic research settings (e.g., at technology and health companies). Indeed, developing skills in domain general areas like statistics, research design, and programming will broaden the array of opportunities available to you when your training is complete.
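For readers who have not used Git before, the commands below sketch the core cycle of tracking changes and recovering an earlier version of a file. The file names are hypothetical; Vuorre and Curley (2018) walk through the full workflow, including remote repositories such as GitHub.

```
# Minimal Git sketch (run in the project folder; file names are hypothetical)
git init                                  # put the folder under version control
git add analysis.R codebook.csv           # stage the files you want to track
git commit -m "Add primary analysis script and codebook"
git log --oneline                         # review the history of commits
git checkout HEAD~1 -- analysis.R         # restore the previous version of one file
```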

3.5 Working Openly Facilitates Teamwork and Collaboration

Keeping an open lab notebook, sharing a complete research protocol, or producing a reproducible analysis script that runs on open data might seem laborious compared to closed research practices, but there are advantages of these practices beyond the scientific benefits of working transparently. Detailed, clear documentation is needed for any collaborative research, and the need might be especially great in large teams. Open science practices can even facilitate massive collaborations, like those managed by the Psychological Science Accelerator (PSA; Moshontz et al., 2018). The PSA is a global network of over 500 laboratories that coordinates large investigations of democratically selected study proposals. It enables even teams with limited resources to study important questions at a large enough scale to yield rich data and precise answers. Open science practices are baked into all parts of the research process, and indeed, such efforts would not be feasible or sustainable without these standard operating procedures.

Participating in a large collaborative project, such as one run by the PSA, is an excellent way to develop your open science skillset. It can be exciting and quite rewarding to work on such a large team, and in doing so you will also have the opportunity to learn from the many other collaborators on the project.

4. Writing It Up: Open Science and Your Manuscript

The most elegant study with the most interesting findings is scientifically useless until the findings are communicated to the broader research community. Indeed, scientific communication may be the most important part of the research process. Yet skillfully communicating results isn’t about mechanically relaying the outcomes of hypothesis tests. Rather, it’s about writing that leaves the reader with a clear conclusion about the contribution of a project. In addition to being narratively compelling, researchers employing open science practices will also want to transparently and honestly describe the research process. Adept readers may sense a conflict between these two goals – crafting a compelling narrative vs. being transparent and honest – but in reality, both can be achieved.

4.1 Writing Well and Transparently

Gernsbacher (2018) provides detailed guidance on preparing a high-quality manuscript (with a clear narrative) while adhering to open science practices. She writes that the best articles are transparent, reproducible, clear, and memorable. To achieve clarity and memorability, authors must attend to good writing practices like writing short sentences and paragraphs and seeking feedback. These techniques are not at odds with transparency and reproducibility, which can be achieved through honest, detailed, and clear documentation of the research process. Even higher levels of detail can be achieved by including supplemental files along with the main manuscript.

One issue, of course, is how to decide which information belongs in the main paper versus the supplemental materials. A guiding principle is to organize your paper to help the reader understand the paper’s contribution while transparently describing what you’ve done and learned. Gernsbacher (2018) advises having an organized single file as a supplement to ease the burden on reviewers and readers. A set of well-labeled and organized folders in your OSF project (e.g., Materials, Data, Analysis Code, Manuscript Files) can also work well, as sketched below. Consider including a “readme” file or other descriptive text to help readers understand your file structure.
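To make the folder idea concrete, one possible layout using the folder names mentioned above might look like this (the specific files are hypothetical):

```
ExampleProject/            <- OSF project or mirrored local folder
  README.txt               <- what each folder contains and how files relate
  Materials/               <- full protocol, questionnaires, stimuli
  Data/                    <- raw and processed data, plus the data dictionary
  Analysis Code/           <- scripts that reproduce every reported result
  Manuscript Files/        <- R Markdown source, rendered manuscript, supplement
```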

If a project is pre-registered, it is important that all of the plans (and hypotheses, if applicable) in the study are addressed in the main manuscript. Even results that are not statistically significant deserve discussion in the paper. If planned methods have changed, this is normal and absolutely fine. Simply disclose the change (along with accompanying rationale) in the paper, or better yet, file an addendum to your pre-registration when the change is made before proceeding. Likewise, when analysis plans change, disclose the change in the final paper. If the originally planned analysis strategy and the preferred strategy are both valid techniques, and others might disagree about which strategy is best, present results using both strategies. The details of the comparative analyses can be placed in a supplement, but discuss the analyses in the main text of the paper.

A couple of additional tools to assist with writing your open science manuscript are worth mention. First, Aczel et al. (2020) provide a consensus-based transparency checklist that authors can complete to confirm that they have made all relevant transparency-based disclosures in their papers. The checklist can also be shared (e.g., on OSF) alongside a final manuscript to help guide readers through the disclosures. Second, R Markdown can be used to draft the entire text of your paper, not just the results section. Doing so allows you to render the final paper using a particular typesetting style more easily. More importantly, the full paper will then be reproducible. Rather than work from scratch, you may want to use the papaja package (Aust & Barth, 2020), which provides an R Markdown template. Many researchers also like to use papaja in concert with Zotero (www.zotero.org/), an open-source reference manager.
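For orientation, the header below sketches what a papaja manuscript might start with. The field values are placeholders, and the exact skeleton may differ slightly across papaja versions, so it is best to start from the template the package installs.

````
---
title          : "A Hypothetical Open Science Manuscript"
shorttitle     : "Open Science Manuscript"
author:
  - name       : "A. Researcher"
    affiliation: "1"
affiliation:
  - id         : "1"
    institution: "Example University"
bibliography   : "references.bib"
output         : papaja::apa6_pdf
---

```{r setup, include = FALSE}
library(papaja)   # loads helpers such as apa_print() for formatting results
```
````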

4.2 Selecting a Journal for Your Research

Beyond questions of a journal’s topical reach and its reputation in the field, different journals have different policies when it comes to open science practices. When selecting a journal, you will want to review that journal’s submission guidelines to ensure that you understand and comply with its requirements. Another place to look for guidance on a journal’s stance on open science practices is editorial statements. These statements usually appear within the journal itself, but if the journal is owned by a society, they may also appear in society publications (e.g., American Psychological Association Monitor, Association for Psychological Science Observer).

Many journals are signatories of the TOP (Transparency and Openness Promotion) Guidelines, which specify three different levels of adoption for eight different transparency standards (Nosek et al., 2015; see also https://topfactor.org). Journals with policies at level 1 require authors to disclose details about their studies in their manuscripts – for instance, whether the data associated with studies are available. At level 2, sharing of study components (e.g., materials, data, or analysis code) is required for publication, with exceptions granted for valid legal and ethical restrictions on sharing. At level 3, the journal or its designee verifies the shared components – for instance, a journal might check whether a study’s results can be reproduced from shared analysis code. Importantly, journals can adopt different levels of transparency for the different standards. For instance, a journal might adopt level 1 (disclose) for pre-registration of analysis plans, but level 3 (verify) for study materials. Again, journal submission guidelines, along with editorial statements, provide guidance as to the levels adopted for each standard.

Some journals also offer badges for adopting transparent practices. At participating journals, authors declare whether they have pre-registered a study or shared materials and/or data, and the journal then marks the resulting paper with up to three badges (pre-registration, open data, open materials) indicating the availability of the shared content.

A final consideration is the preprint policies of a journal. Almost certainly, you will want the freedom to share your work on a preprint repository like PsyArXiv (https://psyarxiv.com). Preprint repositories allow authors to share their research ahead of publication, either before submitting the work for peer review at a journal or after the peer-review process is complete. Some repositories deem the latter class of manuscripts “post-prints” to distinguish them from papers that have not yet been published in a journal. Sharing early copies of your work will enable you to get valuable feedback prior to journal submission. Even if you are not ready to share a pre-publication copy of your work, sharing the final post-print increases access to the work – especially for those without access through a library, including researchers in many countries, scholars without university affiliations, and the general public. Manuscripts shared on PsyArXiv are indexed on Google Scholar, increasing their discoverability.

You can check the policies of your target journal at the Sherpa Romeo database (https://v2.sherpa.ac.uk/romeo/). The journals with the most permissive policies allow sharing of the author copy of a paper (i.e., what you send to the journal, not the typeset version) immediately on disciplinary repositories like PsyArXiv. Other journals impose an embargo on sharing of perhaps one or two years. A very small number of journals will not consider manuscripts that have been shared as preprints. It’s best to understand a journal’s policy before choosing to submit there.

Importantly, sharing pre- or post-print copies of your work is free to do, and it greatly increases the reach of your work. Another option (which may even be a requirement depending on your research funder) is to publish your work in a fully open-access journal (called “gold” open access) or in a traditional journal with the option to pay for your article to be made open access (called “hybrid” open access). Gold open-access journals use the fees from articles to cover the costs of publishing, but articles are free to read for everyone without a subscription. Hybrid journals, on the other hand, charge libraries large subscription fees (as they do with traditional journals), and they charge authors who opt to have their articles made open access, effectively doubling the journal’s revenue without incurring additional costs. The fees for hybrid open access are almost never worth it, given that authors can usually make their work accessible for free using preprint repositories.

Fees to publish your work in a gold open-access journal currently vary from around US$1000 on the low end to US$3000 or more on the high end. Typically, a research funder pays these fees, but if not, there may be funds available from your university library or research support office. Some journals offer fee waivers for authors who lack access to grant or university funding for these costs. Part of open science means making the results of research as accessible as possible. Gold open-access journals are one means of achieving this goal, but preprint repositories play a critical role as well.

5. Coda: The Importance of Community

Certainly, there are many tools and techniques to learn when it comes to open science practices. When you are just beginning, you will likely want to take it slow to avoid becoming overwhelmed. Additionally, not every practice described here will be relevant for every project. With time, you will learn to deploy the tools you need to serve a particular project’s goals. Yet, it is also important not to delay beginning to use these practices. Now is the time in your career where you are forming habits that you will carry with you for many years. You want to lay a solid foundation for yourself, and a little effort to learn a new skill or technology now will pay off down the road.

One of the best ways to get started with open science practices is to join a supportive community of other researchers who are also working towards the same goal. Your region or university might have a branch of ReproducibiliTea (https://reproducibilitea.org/), a journal club devoted to discussing and learning about open science practices. If it doesn’t, you could gather a few friends and start one, or you could join one of the region-free online clubs. Twitter is another excellent place to keep up to date on new practices, and it’s also great for developing a sense of community. Another option is to attend the annual meeting of the Society for the Improvement of Psychological Science (SIPS; http://improvingpsych.org). The SIPS meeting features workshops to learn new techniques, alongside active sessions (hackathons and unconferences) where researchers work together to develop new tools designed to improve psychological methods and practices. Interacting with other scholars provides an opportunity to learn from one another, but also provides important social support. Improving your research practices is a career-long endeavor; it is surely more fun not to work alone.

Acknowledgments

Thank you to Julia Bottesini and Sarah Schiavone for their thoughtful feedback. All errors and omissions are my own.

References

Recommended Reading

Briney, K. (2015). Data management for researchers: Organize, maintain and share your data for research success. Exeter: Pelagic Publishing Ltd.
Christensen, G., Freese, J., & Miguel, E. (2019). Transparent and reproducible social science research: How to do open science. Oakland, CA: University of California Press.
Christensen, G., Wang, Z., Paluck, E. L., Swanson, N., Birke, D. J., Miguel, E., & Littman, R. (2019, October 18). Open science practices are on the rise: The State of Social Science (3S) survey. https://doi.org/10.31222/osf.io/5rksu
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414.
Kathawalla, U. K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology, 7(1), 18684.

References

Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, S., Morey, C. C., Munafò, M., Newell, B. R., … & Wagenmakers, E. J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 46. https://doi.org/10.1038/s41562-019-0772-6CrossRefGoogle ScholarPubMed
Allen, C., & Mehler, D. M. (2019). Open science challenges, benefits and tips in early career and beyond. PLoS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246Google Scholar
Arslan, R. C. (2019). How to automatically document data with the codebook package to facilitate data reuse. Advances in Methods and Practices in Psychological Science, 2, 169187. https://doi.org/10.1177/2515245919838783CrossRefGoogle Scholar
Aust, F., & Barth, M. (2020, July 7). papaja: Create APA manuscripts with R Markdown. https://github.com/crsh/papajaGoogle Scholar
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66(6), 423437. https://doi.org/10.1037/h0020412Google Scholar
Benning, S. D., Bachrach, R. L., Smith, E. A., Freeman, A. J., & Wright, A. G. C. (2019). The registration continuum in clinical science: A guide toward transparent practices. Journal of Abnormal Psychology, 128(6), 528540. https://doi.org/10.1037/abn0000451CrossRefGoogle ScholarPubMed
Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 119. https://doi.org/10.1177/1747021819886519CrossRefGoogle ScholarPubMed
Bosnjak, M., Fiebach, C., Mellor, D. T., Mueller, S., O’Connor, D. B., Oswald, F. L., & Sokol-Chang, R. (2021, February 22). A template for preregistration of quantitative research in psychology: Report of the Joint Psychological Societies Preregistration Task Force. https://doi.org/10.31234/osf.io/d7m5rCrossRefGoogle Scholar
Bradley, J. C., Lang, A. S., Koch, S., & Neylon, C. (2011). Collaboration using open notebook science in academia. In Elkins, S., Lang, A. S., Koch, S., & Neylon, C. (Eds.), Collaborative computational technologies for biomedical research (pp. 423452). Hoboken, NJ: John Wiley & Sons, Inc. https://doi.org/10.1002/9781118026038.ch25CrossRefGoogle Scholar
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van ’t Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217224. https://doi.org/10.1016/j.jesp.2013.10.005Google Scholar
Briney, K. (2015). Data management for researchers: Organize, maintain and share your data for research success. Exeter: Pelagic Publishing Ltd.Google Scholar
Buchanan, E. M., Crain, S. E., Cunningham, A. L., Johnson, H. R., Stash, H., Papadatou-Pastou, M., Isager, P. M., Carlsson, R., & Aczel, B. (2021). Getting started creating data dictionaries: How to create a shareable data set. Advances in Methods and Practices in Psychological Science, 4(1), 110. https://doi.org/10.1177/2515245920928007Google Scholar
Button, K., Ioannidis, J., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365376. https://doi.org/10.1038/nrn3475CrossRefGoogle ScholarPubMed
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609610. https://doi.org/10.1016/j.cortex.2012.12.016CrossRefGoogle ScholarPubMed
Christensen, G., Freese, J., & Miguel, E. (2019). Transparent and reproducible social science research: How to do open science. Oakland, CA: University of California Press.Google Scholar
Cohen, J. (1994). The earth is round (p<. 05). American Psychologist, 49(12), 9971003. https://doi.org/10.1037/0003-066X.49.12.997CrossRefGoogle Scholar
Corker, K. S. (2016, January 15). PMG Lab – Project template. https://doi.org/10.17605/OSF.IO/SJTYRCrossRefGoogle Scholar
Crüwell, S., & Evans, N. J. (2020, September 19). Preregistration in complex contexts: A preregistration template for the application of cognitive models. https://doi.org/10.31234/osf.io/2hykxCrossRefGoogle Scholar
Da Silva Frost, A., & Ledgerwood, A. (2020). Calibrate your confidence in research findings: A tutorial on improving research methods and practices. Journal of Pacific Rim Psychology, 14, E14. https://doi.org/10.1017/prp.2020.7Google Scholar
Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8, 200805. https://doi.org/10.1098/rsos.200805
Dirnagl, U. (2019). Preregistration of exploratory research: Learning from the golden age of discovery. PLoS Biology, 18(3), e3000690. https://doi.org/10.1371/journal.pbio.3000690
Errington, T. M. (2019, September 5). Reproducibility Project: Cancer Biology – Barriers to replicability in the process of research. https://doi.org/10.17605/OSF.IO/KPR7U
Flannery, J. E. (2020, October 22). fMRI preregistration template. https://osf.io/6juft
Flournoy, J. C., Vijayakumar, N., Cheng, T. W., Cosme, D., Flannery, J. E., & Pfeifer, J. H. (2020). Improving practices and inferences in developmental cognitive neuroscience. Developmental Cognitive Neuroscience, 45, 100807. https://doi.org/10.1016/j.dcn.2020.100807
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485
Goldacre, B., Drysdale, H., Dale, A., Milosevic, I., Slade, E., Hartley, P., Marston, C., Powell-Smith, A., Heneghan, C., & Mahtani, K. R. (2019). COMPare: A prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials, 20(1), 1–16. https://doi.org/10.1186/s13063-019-3173-2
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., Brand, R., Brandt, M. J., Brewer, G., Bruyneel, S., Calvillo, D. P., Campbell, W. K., Cannon, P. R., Carlucci, M., Carruth, N. P., Cheung, T., Crowell, A., De Ridder, D. T. D., Dewitte, S., … Zwienenberg, M. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11(4), 546–573. https://doi.org/10.1177/1745691616652873
Harris, C. R., Coburn, N., Rohrer, D., & Pashler, H. (2013). Two failures to replicate high-performance-goal priming effects. PLoS One, 8(8), e72467. https://doi.org/10.1371/journal.pone.0072467
Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. https://doi.org/10.1080/08989621.2019.1580147
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., Piñeiro, R., Rosenblatt, F., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1–13. https://doi.org/10.1177/1609406920976417
Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research – A primer. Infancy, 25(5), 734–754. https://doi.org/10.1111/infa.12353
Henry, T. R. (2021a, February 26). Data management for researchers: Three tales. https://doi.org/10.31234/osf.io/ga9yf
Henry, T. R. (2021b, February 26). Data management for researchers: 8 principles of good data management. https://doi.org/10.31234/osf.io/5tmfe
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Johnson, A. H., & Cook, B. G. (2019). Preregistration in single-case design research. Exceptional Children, 86(1), 95–112. https://doi.org/10.1177/0014402919868529
Kathawalla, U. K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology, 7(1), 18684. https://doi.org/10.1525/collabra.18684
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
Kirtley, O. J., Lafit, G., Achterhof, R., Hiekkaranta, A. P., & Myin-Germeys, I. (2021). Making the black box transparent: A template and tutorial for registration of studies using experience-sampling methods. Advances in Methods and Practices in Psychological Science, 4(1), 1–16. https://doi.org/10.1177/2515245920924686
Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., IJzerman, H., Nilsonne, G., Vanpaemel, W., & Frank, M. C. (2018). A practical guide for transparency in psychological science. Collabra: Psychology, 4(1), 20. https://doi.org/10.1525/collabra.158
Koessler, R. B., Campbell, L., & Kohut, T. (2019, February 27). Open notebook. https://osf.io/3n964/
Krypotos, A.-M., Klugkist, I., Mertens, G., & Engelhard, I. M. (2019). A step-by-step guide on preregistration and effective data sharing for psychopathology research. Journal of Abnormal Psychology, 128(6), 517–527. https://doi.org/10.1037/abn0000424
Ledgerwood, A. (2018). The preregistration revolution needs to distinguish between predictions and analyses. Proceedings of the National Academy of Sciences, 115(45), E10516–E10517. https://doi.org/10.1073/pnas.1812592115
Markowetz, F. (2015). Five selfish reasons to work reproducibly. Genome Biology, 16, 274. https://doi.org/10.1186/s13059-015-0850-7
McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., McDougall, D., Nosek, B. A., Ram, K., Soderberg, C. K., Spies, J. R., Thaney, K., Updegrove, A., Woo, K. H., & Yarkoni, T. (2016). Point of view: How open science helps researchers succeed. eLife, 5, e16800. https://doi.org/10.7554/eLife.16800
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806–834. https://doi.org/10.1037/0022-006X.46.4.806
Mertens, G., & Krypotos, A.-M. (2019). Preregistration of analyses of preexisting data. Psychologica Belgica, 59(1), 338–352. https://doi.org/10.5334/pb.493
Mertzen, D., Lago, S., & Vasishth, S. (2021, March 4). The benefits of preregistration for hypothesis-driven bilingualism research. https://doi.org/10.31234/osf.io/nm3eg
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., Stewart, L. A., & PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1), 1–9. https://doi.org/10.1186/2046-4053-4-1
Moreau, D., & Wiebels, K. (2021). Assessing change in intervention research: The benefits of composite outcomes. Advances in Methods and Practices in Psychological Science, 4(1), 1–14. https://doi.org/10.1177/2515245920931930
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
Navarro, D. (2019, January 17). Prediction, pre-specification and transparency [blog post]. https://featuredcontent.psychonomic.org/prediction-pre-specification-and-transparency/
Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s renaissance. Annual Review of Psychology, 69, 511–534. https://doi.org/10.1146/annurev-psych-122216-011836
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nuijten, M. B., Hartgerink, C. H., Van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. https://doi.org/10.3758/s13428-015-0664-2
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in Registered Reports in psychology. Advances in Methods and Practices in Psychological Science, 229–237. https://doi.org/10.1177/2515245920918872
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Page, M. J., Shamseer, L., & Tricco, A. C. (2018). Registration of systematic reviews in PROSPERO: 30,000 records and counting. Systematic Reviews, 7(1), 32. https://doi.org/10.1186/s13643-018-0699-4
Paul, M., Govaart, G., & Schettino, A. (2021, March 1). Making ERP research more transparent: Guidelines for preregistration. https://doi.org/10.31234/osf.io/4tgve
Peikert, A., & Brandmaier, A. M. (2019, November 11). A reproducible data analysis workflow with R Markdown, Git, Make, and Docker. https://doi.org/10.31234/osf.io/8xzqy
Platt, J. R. (1964). Strong inference. Science, 146(3642), 347–353. https://www.jstor.org/stable/1714268
Roettger, T. B. (2021). Preregistration in experimental linguistics: Applications, challenges, and limitations. Linguistics, 59, 1227–1249. https://doi.org/10.31234/osf.io/vc9hu
Rouder, J. N. (2016). The what, why, and how of born-open data. Behavior Research Methods, 48(3), 1062–1069. https://doi.org/10.3758/s13428-015-0630-z
Rouder, J. N., Haaf, J. M., & Snyder, H. K. (2019). Minimizing mistakes in psychological science. Advances in Methods and Practices in Psychological Science, 2(1), 3–11. https://doi.org/10.1177/2515245918801915
Shamseer, L., Moher, D., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., Stewart, L. A., & the PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ, 350, g7647. https://doi.org/10.1136/bmj.g7647
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
Soderberg, C. K. (2018). Using OSF to share data: A step-by-step guide. Advances in Methods and Practices in Psychological Science, 1(1), 115–120. https://doi.org/10.1177/2515245918757689
Spellman, B. A. (2015). A short (personal) future history of revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. https://doi.org/10.1177/1745691615609918
Stodden, V., Seiler, J., & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115, 2584–2589. https://doi.org/10.1073/pnas.1708290115
Tackett, J. L., Brandes, C. M., Dworak, E. M., & Shields, A. N. (2020). Bringing the (pre)registration revolution to graduate training. Canadian Psychology/Psychologie canadienne, 61(4), 299–309. https://doi.org/10.1037/cap0000221
Tenney, E., Costa, E., Allard, A., & Vazire, S. (2021). Open science and reform practices in organizational behavior research over time (2011 to 2019). Organizational Behavior and Human Decision Processes, 162, 218–223. https://doi.org/10.1016/j.obhdp.2020.10.015
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F. C., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J. P. C., van den Akker, O., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A., … Westwood, S. J. (2021, March 5). An integrative framework for planning and conducting Non-Interventional, Reproducible, and Open Systematic Reviews (NIRO-SR). https://doi.org/10.31222/osf.io/8gu5z
van ’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology – A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. https://doi.org/10.1016/j.jesp.2016.03.004
Van den Akker, O., Peters, G.-J. Y., Bakker, C., Carlsson, R., Coles, N. A., Corker, K. S., Feldman, G., Mellor, D., Moreau, D., Nordström, T., Pfeiffer, N., Pickering, J., Riegelman, A., Topor, M., van Veggel, N., & Yeung, S. K. (2020, September 15). Inclusive systematic review registration form. https://doi.org/10.31222/osf.io/3nbea
Van den Akker, O., Weston, S. J., Campbell, L., Chopik, W. J., Damian, R. I., Davis-Kean, P., Hall, A. N., Kosie, J. E., Kruse, E., Olsen, J., Ritchie, S. J., Valentine, K. D., van ’t Veer, A., & Bakker, M. (2021, February 21). Preregistration of secondary data analysis: A template and tutorial. https://doi.org/10.31234/osf.io/hvfmr
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
Vazire, S., & Holcombe, A. O. (2020, August 13). Where are the self-correcting mechanisms in science? https://doi.org/10.31234/osf.io/kgqzt
Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2020, October 7). Credibility beyond replicability: Improving the four validities in psychological science. https://doi.org/10.31234/osf.io/bu4d3
Vuorre, M., & Curley, J. P. (2018). Curating research assets: A tutorial on the Git version control system. Advances in Methods and Practices in Psychological Science, 1(2), 219–236. https://doi.org/10.1177/2515245918754826
Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Albohn, D. N., Allard, E. S., Benning, S. D., Blouin-Hudon, E.-M., Bulnes, L. C., Caldwell, T. L., Calin-Jageman, R. J., Capaldi, C. A., Carfagno, N. S., Chasten, K. T., Cleeremans, A., Connell, L., DeCicco, J. M., … Zwaan, R. A. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928. https://doi.org/10.1177/1745691616674458
Weston, S. J., Ritchie, S. J., Rohrer, J. M., & Przybylski, A. K. (2019). Recommendations for increasing the transparency of analysis of preexisting data sets. Advances in Methods and Practices in Psychological Science, 2(3), 214–227. https://doi.org/10.1177/2515245919848684
Table 11.1 Guides and templates for pre-registration
