Journal information

All manuscripts should report the following in the main text:

  • If the experiment(s) are preregistered, provide a link to the anonymized preregistration document.
  • Make explicit how the experimental design relates to the hypotheses.
  • Explain how the sample size was determined and note statistical power.
  • Clearly state any exclusion criteria.
  • Report all experimental conditions and disclose all post-treatment measures.
  • Clearly label findings as confirmatory or exploratory.
  • If there is a preregistration, authors should report all preregistered results (even if briefly), explicitly label analyses that were not preregistered, and detail any departures from the preregistration.
  • Report means of the dependent variable(s) by experimental condition, along with standard deviations and Ns.
  • Report exact p-values in the text and show confidence intervals in figures.
  • Lack of statistical significance is not sufficient evidence for “no effect.” If authors wish to claim that a treatment had “no effect,” they should report Bayes factors with well-justified priors (see the sketch after this list).
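
Several of the items above (determining the sample size and power, and reporting Bayes factors for “no effect” claims) are decisions that are easy to document with a few lines of analysis code. The sketch below is purely illustrative: it assumes a two-condition between-subjects design and an assumed effect size of d = 0.3, and it uses statsmodels and pingouin as example tools; none of these choices is prescribed by JEPS.

```python
# Illustrative only: one way to document a power analysis and a Bayes factor.
# The packages (statsmodels, pingouin) and all numbers are placeholders,
# not JEPS requirements.
import math
from statsmodels.stats.power import TTestIndPower
import pingouin as pg

# Per-condition sample size needed to detect an assumed effect of d = 0.3
# at 80% power in a two-arm between-subjects comparison (alpha = .05, two-sided).
n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Planned n per condition: {math.ceil(n_per_arm)}")

# For a "no effect" claim, a Bayes factor quantifies evidence for the null.
# Here t is the observed t statistic and nx, ny are the per-condition Ns;
# the prior is pingouin's default Cauchy scale and should be justified in the text.
bf10 = float(pg.bayesfactor_ttest(t=0.42, nx=500, ny=500))
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.3f}")
```

Whatever tools are used, reporting the assumed effect size and prior alongside the resulting n or Bayes factor is what allows readers to evaluate the power and “no effect” claims.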

 

All manuscripts should report the following in a supplemental appendix:

  • A section that describes the experimental design in enough detail to allow an independent replication (e.g., disclosing materials, recruitment protocols, questionnaires, coding decisions, etc.).
  • A section that answers each item on the JEPS Reporting Guidelines Checklist (even if the answer is “not applicable”).
  • A section that affirms that the experimental design is consistent with APSA’s Principles and Guidance for Human Subjects Research. This section should:
    • If the experimental design used deception or carries potential harms, discuss these and justify how the design is consistent with APSA Guidance.
    • Describe the process for obtaining consent and debriefing participants, or justify the absence of these processes.
    • If human participants were compensated for participation, declare the amount and justify it.
    • Address issues of confidentiality, compliance with laws/regulations, and any other issue pertinent to the principles of respect for persons, beneficence, and justice as outlined by the Belmont Report.

Standards for publication

JEPS is a general field journal that publishes work in all of the empirical subfields of political science that uses experimental methods to identify causal effects. To be published, a manuscript must, in the editorial team’s judgment, offer 1) a strong theoretical and/or empirical contribution, 2) an appropriate, rigorous, well-powered experimental design, and 3) statistical analyses that are appropriate, clear, and transparent. We encourage authors to make explicit how their manuscript fulfills these criteria within the page limit of the article type they have selected to submit. Clever use of the Supplemental Appendix should help authors meet the page limit.

Consistently statistically significant results (i.e., “all of our hypotheses are correct”) are not a criterion for publishing in JEPS. We seek to publish research that “tells it like it is.” In this respect, authors should not feel pressure to present unexpected findings as if they were expected. If the results are not consistent with a hypothesis, just tell us that they are not. Although it is not required, we have observed that authors benefit from preregistering their hypotheses and designs: doing so lets them communicate transparently what the hypotheses were at the outset of the experiment. Along these lines, preregistered designs also allow authors to label clearly which analyses are confirmatory and which are exploratory. We welcome research that provides exploratory analyses of experimental data, as long as it is clearly labeled as such. The Registered Report track allows authors to build this level of transparency into the review process itself and ensures that reviewers and editors are not inadvertently allowing the empirical results to drive their judgments about the significance of the contribution.

While JEPS takes a broad view of what counts as an experiment, the burden rests with authors to articulate why their design identifies the causal effect they are studying. Authors who use research designs that exploit ostensibly exogenous variation in observational settings (e.g., “natural experiments”) or a non-random manipulation of the theoretical variable of interest (e.g., within-subject designs, behavioral games, etc.) should consider alternative explanations and the sensitivity of the results to confounders. Note that experimental designs that use randomization do not automatically provide sufficient evidence for causal identification; authors should articulate how the randomization adequately simulates the counterfactual conditions that their hypotheses imply. Moreover, designs that interact an observationally measured moderator with randomized manipulations do not offer unambiguous evidence that the moderator is causally responsible for heterogeneous treatment effects. In these cases, too, authors should consider alternative explanations and the sensitivity of the results to confounders. Relatedly, mediation analyses do not offer unambiguous evidence for causal mechanisms; we strongly encourage authors to provide sensitivity analyses and to shy away from causal language.

JEPS welcomes informative replications. By informative, we mean that the replication makes a sufficient empirical contribution to our knowledge about a causal effect of interest. Brandt et al. (2014) offer helpful guidelines for convincing replications, which we strongly encourage authors to read before undertaking a replication project. At JEPS we take seriously the distinction between direct replications, which attempt to replicate as closely as possible a previously published experiment, and conceptual replications, which attempt to extend the finding of one experiment into another domain. We believe that direct replications are more informative, because they probe whether the results of an experiment can be reproduced. Conceptual replications are useful once the original experiment has withstood direct replication. In both cases, only a high-powered design will provide sufficient evidence. We also encourage authors to approach replications in a value-neutral way. If an experiment does not replicate, that does not mean the results of the original experiment should now be considered invalid, and it certainly does not imply that the original research team did anything “wrong.”