Direct replications in the era of open sampling
Published online by Cambridge University Press: 27 July 2018
Abstract
Data collection in psychology increasingly relies on “open populations” of participants recruited online, which presents both opportunities and challenges for replication. Reduced costs and the ability to access the same populations as original studies allow for more informative replications. However, researchers should ensure the directness of their replications by addressing the threats of participant nonnaiveté and selection effects.
- Type: Open Peer Commentary
- Copyright: © Cambridge University Press 2018
Target article
Making replication mainstream
Related commentaries (36)
- A Bayesian decision-making framework for replication
- A pragmatist philosophy of psychological science and its implications for replication
- An argument for how (and why) to incentivise replication
- Bayesian belief updating after a replication experiment
- Conceptualizing and evaluating replication across domains of behavioral research
- Constraints on generality statements are needed to define direct replication
- Data replication matters to an underpowered study, but replicated hypothesis corroboration counts
- Direct replication and clinical psychological science
- Direct replications in the era of open sampling
- Don't characterize replications as successes or failures
- Enhancing research credibility when replication is not feasible
- Holding replication studies to mainstream standards of evidence
- How to make replications mainstream
- If we accept that poor replication rates are mainstream
- Introducing a replication-first rule for Ph.D. projects
- Making prepublication independent replication mainstream
- Making replication prestigious
- Putting replication in its place
- Replication is already mainstream: Lessons from small-N designs
- Replications can cause distorted belief in scientific progress
- Scientific progress is like doing a puzzle, not building a wall
- Selecting target papers for replication
- Strong scientific theorizing is needed to improve replicability in psychological science
- The costs and benefits of replication studies
- The importance of exact conceptual replications
- The meaning of a claim is its reproducibility
- The replicability revolution
- Three strong moves to improve research and replications alike
- Three ways to make replication mainstream
- To make innovations such as replication mainstream, publish them in mainstream journals
- Verifiability is a core principle of science
- Verify original results through reanalysis before replicating
- What have we learned? What can we learn?
- What the replication reformation wrought
- Why replication has more scientific value than original discovery
- You are not your data
Author response
Improving social and behavioral science by making replication mainstream: A response to commentaries