
Hypothesis Testing Reconsidered

Published online by Cambridge University Press: 29 May 2019

Gregory Francis
Affiliation: Purdue University, Indiana

Summary

Hypothesis testing is a common statistical analysis for empirical data generated by studies of perception, but its properties and limitations are widely misunderstood. This Element describes several properties of hypothesis testing, with special emphasis on analyses common to studies of perception. The author also describes the challenges of using hypothesis testing to interpret empirical data. Many common applications of hypothesis testing inflate the intended Type I error rate, and other aspects of hypothesis tests have important implications for experimental design. Solutions are available for some of these difficulties, but others remain hard to address.
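A small simulation makes the inflation concrete. The sketch below is an illustration added for this summary rather than code from the Element; the particular values (five tests per study, twenty observations per group) are assumptions chosen only for the example. It runs several independent t-tests on pure noise, so every null hypothesis is true, and counts how often at least one test reaches p < .05.

    # Minimal sketch of Type I error inflation under multiple testing,
    # assuming alpha = .05 and k = 5 independent two-sample t-tests per "study".
    # Every null hypothesis is true, so any significant result is a false alarm.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, k, n, reps = 0.05, 5, 20, 10_000

    studies_with_false_alarm = 0
    for _ in range(reps):
        # k independent t-tests, each comparing two groups drawn from the same distribution
        pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
                 for _ in range(k)]
        studies_with_false_alarm += any(p < alpha for p in pvals)

    # Roughly 0.23, close to 1 - (1 - alpha)**k, rather than the nominal 0.05.
    print(studies_with_false_alarm / reps)

The family-wise error rate for a set of independent tests grows with the number of tests, which is why running several analyses at a nominal .05 level no longer controls the error rate at .05.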
Information
Type: Element
Online ISBN: 9781108582995
Publisher: Cambridge University Press
Print publication: 23 May 2019


