
Bibliography

Published online by Cambridge University Press: 05 June 2012

Paul D. Ellis
Affiliation: Hong Kong Polytechnic University

Type: Chapter
Information: The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results, pp. 153–169
Publisher: Cambridge University Press
Print publication year: 2010


References

Abelson, R.P. (1985), “A variance explanation paradox: When a little is a lot,” Psychological Bulletin, 97(1): 129–133.
Abelson, R.P. (1997), “On the surprising longevity of flogged horses,” Psychological Science, 8(1): 12–15.
AERA (2006), “Standards for reporting on empirical social science research in AERA publications,” American Educational Research Association website www.aera.net/opportunities/?id=1850, accessed 11 September 2008.
Aguinis, H., Beaty, J.C., Boik, R.J., and Pierce, C.A. (2005), “Effect size and power in assessing moderating effects of categorical variables using multiple regression: A 30 year review,” Journal of Applied Psychology, 90(1): 94–107.
Aguinis, H., Werner, S., Abbott, J., Angert, C., Park, J.H., and Kohlhausen, D. (in press), “Customer-centric science: Reporting significant research results with rigor, relevance, and practical impact in mind,” Organizational Research Methods.
Algina, J. and Keselman, H.J. (2003), “Approximate confidence intervals for effect sizes,” Educational and Psychological Measurement, 63(4): 537–553.
Algina, J., Keselman, H.J., and Penfield, R.D. (2005), “An alternative to Cohen's standardized mean difference effect size: A robust parameter and confidence interval in the two independent groups case,” Psychological Methods, 10(3): 317–328.
Algina, J., Keselman, H.J., and Penfield, R.D. (2007), “Confidence intervals for an effect size measure in multiple linear regression,” Educational and Psychological Measurement, 67(2): 207–218.
Allison, D.B., Allison, R.L., Faith, M.S., Paultre, F., and Pi-Sunyer, F.X. (1997), “Power and money: Designing statistically powerful studies while minimizing financial costs,” Psychological Methods, 2(1): 20–33.
Allison, G.T. (1971), Essence of Decision: Explaining the Cuban Missile Crisis. Boston, MA: Little, Brown.
Altman, D.G., Machin, D., Bryant, T.N., and Gardner, M.J. (2000), Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Journal Books.
Altman, D.G., Schulz, K.F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., Gøtzsche, P.C., and Lang, T. (2001), “The revised CONSORT statement for reporting randomized trials: Explanation and elaboration,” Annals of Internal Medicine, 134(8): 663–694.
Andersen, M.B., McCullagh, P., and Wilson, G.J. (2007), “But what do the numbers really tell us? Arbitrary metrics and effect size reporting in sport psychology research,” Journal of Sport and Exercise Psychology, 29(5): 664–672.
Anesi, C. (1997), “The Titanic casualty figures,” website www.anesi.com/titanic.htm, accessed 3 September 2008.
APA (1994), Publication Manual of the American Psychological Association, 4th Edition. Washington, DC: American Psychological Association.
APA (2001), Publication Manual of the American Psychological Association, 5th Edition. Washington, DC: American Psychological Association.
APA (2010), Publication Manual of the American Psychological Association, 6th Edition. Washington, DC: American Psychological Association.
Armstrong, J.S. (2007), “Significance tests harm progress in forecasting,” International Journal of Forecasting, 23(2): 321–327.
Armstrong, J.S. and Overton, T.S. (1977), “Estimating nonresponse bias in mail surveys,” Journal of Marketing Research, 14(3): 396–402.
Armstrong, S.A. and Henson, R.K. (2004), “Statistical and practical significance in the IJPT: A research review from 1993–2003,” International Journal of Play Therapy, 13(2): 9–30.
Atkinson, D.R., Furlong, M.J., and Wampold, B.E. (1982), “Statistical significance, reviewer evaluations, and the scientific process: Is there a (statistically) significant relationship?” Journal of Counseling Psychology, 29(2): 189–194.
Atuahene-Gima, K. (1996), “Market orientation and innovation,” Journal of Business Research, 35(2): 93–103.
Austin, P.C., Mamdani, M.M., Juurlink, D.N., and Hux, J.E. (2006), “Testing multiple statistical hypotheses resulted in spurious associations: A study of astrological signs and health,” Journal of Clinical Epidemiology, 59(9): 964–969.
Bailar, J.C. (1995), “The practice of meta-analysis,” Journal of Clinical Epidemiology, 48(1): 149–157.
Bailar, J.C. and Mosteller, F.M. (1988), “Guidelines for statistical reporting in articles for medical journals: Amplifications and explanations,” Annals of Internal Medicine, 108(2): 266–273.
Bakan, D. (1966), “The test of significance in psychological research,” Psychological Bulletin, 66(6): 423–437.
Bakeman, R. (2001), “Results need nurturing: Guidelines for authors,” Infancy, 2(1): 1–5.
Bakeman, R. (2005), “Infancy asks that authors report and discuss effect sizes,” Infancy, 7(1): 5–6.
Bangert-Drowns, R.L. (1986), “Review of developments in meta-analytic method,” Psychological Bulletin, 99(3): 388–399.
Baroudi, J.J. and Orlikowski, W.J. (1989), “The problem of statistical power in MIS research,” MIS Quarterly, 13(1): 87–106.
Bausell, R.B. and Li, Y.F. (2002), Power Analysis for Experimental Research: A Practical Guide for the Biological, Medical and Social Sciences. Cambridge, UK: Cambridge University Press.
BBC (2007), “Test the nation 2007,” website www.bbc.co.uk/testthenation/, accessed 5 May 2008.
Becker, B.J. (1994), “Combining significance levels,” in Cooper, H. and Hedges, L.V. (editors), Handbook of Research Synthesis. New York: Russell Sage Foundation, 215–230.
Becker, B.J. (2005), “Failsafe N or file-drawer number,” in Rothstein, H.R., Sutton, A.J., and Borenstein, M. (editors), Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, UK: John Wiley and Sons, 111–125.
Becker, L.A. (2000), “Effect size calculators,” website http://web.uccs.edu/lbecker/Psy590/escalc3.htm, accessed 5 May 2008.
Begg, C.B. (1994), “Publication bias,” in Cooper, H. and Hedges, L.V. (editors), Handbook of Research Synthesis. New York: Russell Sage Foundation, 399–409.
Bezeau, S. and Graves, R. (2001), “Statistical power and effect sizes of clinical neuropsychology research,” Journal of Clinical and Experimental Neuropsychology, 23(3): 399–406.
Bird, K.D. (2002), “Confidence intervals for effect sizes in analysis of variance,” Educational and Psychological Measurement, 62(2): 197–226.
Blanton, H. and Jaccard, J. (2006), “Arbitrary metrics in psychology,” American Psychologist, 61(1): 27–41.
Borkowski, S.C., Welsh, M.J., and Zhang, Q. (2001), “An analysis of statistical power in behavioral accounting research,” Behavioral Research in Accounting, 13: 63–84.
Boruch, R.F. and Gomez, H. (1977), “Sensitivity, bias, and theory in impact evaluations,” Professional Psychology, 8(4): 411–434.
Brand, A., Bradley, M.T., Best, L.A., and Stoica, G. (2008), “Accuracy and effect size estimates from published psychological research,” Perceptual and Motor Skills, 106(2): 645–649.
Breaugh, J.A. (2003), “Effect size estimation: Factors to consider and mistakes to avoid,” Journal of Management, 29(1): 79–97.
Brewer, J.K. (1972), “On the power of statistical tests in the American Educational Research Journal,” American Educational Research Journal, 9(3): 391–401.
Brewer, J.K. and Owen, P.W. (1973), “A note on the power of statistical tests in the Journal of Educational Measurement,” Journal of Educational Measurement, 10(1): 71–74.
Brock, J. (2003), “The ‘power’ of international business research,” Journal of International Business Studies, 34(1): 90–99.
Bryant, T.N. (2000), “Computer software for calculating confidence intervals (CIA),” in Altman, D.G., Machin, D., Bryant, T.N., and Gardner, M.J. (editors), Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Journal Books, 208–213.
Callahan, J.L. and Reio, T.G. (2006), “Making subjective judgments in quantitative studies: The importance of using effect sizes and confidence intervals,” Human Resource Development Quarterly, 17(2): 159–173.
Campbell, D.T. (1994), “Retrospective and prospective on program impact assessment,” Evaluation Practice, 15(3): 291–298.
Campbell, D.T. and Stanley, J.C. (1963), Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton-Mifflin.
Campbell, J.P. (1982), “Editorial: Some remarks from the outgoing editor,” Journal of Applied Psychology, 67(6): 691–700.
Campion, M.A. (1993), “Article review checklist: A criterion checklist for reviewing research articles in applied psychology,” Personnel Psychology, 46(3): 705–718.
Cano, C.R., Carrillat, F.A., and Jaramillo, F. (2004), “A meta-analysis of the relationship between market orientation and business performance,” International Journal of Research in Marketing, 21(2): 179–200.
Cappelleri, J.C., Ioannidis, J.P., Schmid, C.H., Ferranti, S.D., Aubert, M., Chalmers, T.C., and Lau, J. (1996), “Large trials vs meta-analysis of smaller trials: How do their results compare?” Journal of the American Medical Association, 276(16): 1332–1338.
Carver, R.P. (1978), “The case against statistical significance testing,” Harvard Educational Review, 48(3): 378–399.
Cascio, W.F. and Zedeck, S. (1983), “Open a new window in rational research planning: Adjust alpha to maximize statistical power,” Personnel Psychology, 36(3): 517–526.
Cashen, L.H. and Geiger, S.W. (2004), “Statistical power and the testing of null hypotheses: A review of contemporary management research and recommendations for future studies,” Organizational Research Methods, 7(2): 151–167.
Chamberlin, T.C. (1897), “The method of multiple working hypotheses,” Journal of Geology, 5(8): 837–848.
Chan, H.N. and Ellis, P. (1998), “Market orientation and business performance: Some evidence from Hong Kong,” International Marketing Review, 15(2): 119–139.
Chase, L.J. and Baran, S.J. (1976), “An assessment of quantitative research in mass communication,” Journalism Quarterly, 53(2): 308–311.
Chase, L.J. and Chase, R.B. (1976), “A statistical power analysis of applied psychological research,” Journal of Applied Psychology, 61(2): 234–237.
Chase, L.J. and Tucker, R.K. (1975), “A power-analytic examination of contemporary communication research,” Speech Monographs, 42(1): 29–41.
Christensen, J.E. and Christensen, C.E. (1977), “Statistical power analysis of health, physical education, and recreation research,” Research Quarterly, 48(1): 204–208.
Churchill, G.A., Ford, N.M., Hartley, S.W., and Walker, O.C. (1985), “The determinants of salesperson performance: A meta-analysis,” Journal of Marketing Research, 22(2): 103–118.
Clark-Carter, D. (1997), “The account taken of statistical power in research published in the British Journal of Psychology,” British Journal of Psychology, 88(1): 71–83.
Clark-Carter, D. (2003), “Effect size: The missing piece in the jigsaw,” The Psychologist, 16(12): 636–638.
Coe, R. (2002), “It's the effect size, stupid: What effect size is and why it is important,” Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12–14 September, accessed from www.leeds.ac.uk/educol/documents/00002182.htm on 24 January 2008.
Cohen, J. (1962), “The statistical power of abnormal-social psychological research: A review,” Journal of Abnormal and Social Psychology, 65(3): 145–153.
Cohen, J. (1983), “The cost of dichotomization,” Applied Psychological Measurement, 7(3): 249–253.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale, NJ: Lawrence Erlbaum.
Cohen, J. (1990), “Things I have learned (so far),” American Psychologist, 45(12): 1304–1312.
Cohen, J. (1992), “A power primer,” Psychological Bulletin, 112(1): 155–159.
Cohen, J. (1994), “The earth is round (p < .05),” American Psychologist, 49(12): 997–1003.
Cohen, J., Cohen, P., West, S.G., and Aiken, L.S. (2003), Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd Edition. Mahwah, NJ: Lawrence Erlbaum.
Cohn, L.D. and Becker, B.J. (2003), “How meta-analysis increases statistical power,” Psychological Methods, 8(3): 243–253.
Colegrave, N. and Ruxton, G.D. (2003), “Confidence intervals are a more useful complement to nonsignificant tests than are power calculations,” Behavioral Ecology, 14(3): 446–450.
Cortina, J.M. (2002), “Big things have small beginnings: An assortment of ‘minor’ methodological understandings,” Journal of Management, 28(3): 339–362.
Cortina, J.M. and Dunlap, W.P. (1997), “Logic and purpose of significance testing,” Psychological Methods, 2(2): 161–172.
Coursol, A. and Wagner, E.E. (1986), “Effect of positive findings on submission and acceptance rates: A note on meta analysis bias,” Professional Psychology: Research and Practice, 17(2): 136–137.
Cowles, M. and Davis, C. (1982), “On the origins of the .05 level of significance,” American Psychologist, 37(5): 553–558.
Cumming, G., Fidler, F., Leonard, M., Kalinowski, P., Christiansen, A., Kleinig, A., Lo, J., McMenamin, N., and Wilson, S. (2007), “Statistical reform in psychology: Is anything changing?” Psychological Science, 18(3): 230–232.
Cumming, G. and Finch, S. (2001), “A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions,” Educational and Psychological Measurement, 61(4): 532–574.
Cumming, G. and Finch, S. (2005), “Inference by eye: Confidence intervals and how to read pictures of data,” American Psychologist, 60(2): 170–180.
Cummings, T.G. (2007), “2006 Presidential address: Quest for an engaged academy,” Academy of Management Review, 32(2): 355–360.
Daly, J.A. and Hexamer, A. (1983), “Statistical power research in English education,” Research in the Teaching of English, 17(2): 157–164.
Daly, L.E. (2000), “Confidence intervals and sample sizes,” in Altman, D.G., Machin, D., Bryant, T.N., and Gardner, M.J. (editors), Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Journal Books, 139–152.
Daniel, F., Lohrke, F.T., Fornaciari, C.J., and Turner, R.A. (2004), “Slack resources and firm performance: A meta-analysis,” Journal of Business Research, 57(6): 565–574.
Dennis, M.L., Lennox, R.D., and Foss, M.A. (1997), “Practical power analysis for substance abuse health services research,” in Bryant, K.J., Windle, M., and West, S.G. (editors), The Science of Prevention. Washington, DC: American Psychological Association, 367–404.
Derr, J. and Goldsmith, L.J. (2003), “How to report nonsignificant results: Planning to make the best use of statistical power calculations,” Journal of Orthopaedic and Sports Physical Therapy, 33(6): 303–306.
Di Paula, A. (2000), “Using the binomial effect size display to explain the practical importance of correlations,” Quirk's Marketing Research Review (Nov): website www.nrgresearchgroup.com/media/documents/BESD_000.pdf, accessed 1 April 2008.
Di Stefano, J. (2003), “How much power is enough? Against the development of an arbitrary convention for statistical power calculations,” Functional Ecology, 17(5): 707–709.
Dixon, P. (2003), “The p-value fallacy and how to avoid it,” Canadian Journal of Experimental Psychology, 57(3): 189–202.
Duarte, J., Siegel, S., and Young, L.A. (2009), “Trust and credit,” SSRN working paper: http://ssrn.com/abstract=1343275, accessed 15 March 2009.
Dunlap, W.P. (1994), “Generalizing the common language effect size indicator to bivariate normal correlations,” Psychological Bulletin, 116(3): 509–511.
Eden, D. (2002), “Replication, meta-analysis, scientific progress, and AMJ's publication policy,” Academy of Management Journal, 45(5): 841–846.
Efran, M.G. (1974), “The effect of physical appearance on the judgment of guilt, interpersonal attraction, and severity of recommendation in a simulated jury task,” Journal of Research in Personality, 8(1): 45–54.
Egger, M. and Smith, G.D. (1995), “Misleading meta-analysis: Lessons from an ‘effective, safe, simple’ intervention that wasn't,” British Medical Journal, 310(25 March): 751–752.
Egger, M., Smith, G.D., Schneider, M., and Minder, C. (1997), “Bias in meta-analysis detected by simple graphical test,” British Medical Journal, 315(7109): 629–634.
Eisenach, J.C. (2007), “Editor's note,” Anesthesiology, 106(3): 415.
Ellis, P.D. (2005), “Market orientation and marketing practice in a developing economy,” European Journal of Marketing, 39(5/6): 629–645.
Ellis, P.D. (2006), “Market orientation and performance: A meta-analysis and cross-national comparisons,” Journal of Management Studies, 43(5): 1089–1107.
Ellis, P.D. (2007), “Distance, dependence and diversity of markets: Effects on market orientation,” Journal of International Business Studies, 38(3): 374–386.
Ellis, P.D. (2009), “Effect size calculators,” website http://myweb.polyu.edu.hk/nmspaul/calculator/calculator.html, accessed 31 December 2009.
Embretson, S.E. (2006), “The continued search for nonarbitrary metrics in psychology,” American Psychologist, 61(1): 50–55.
Erceg-Hurn, D.M. and Mirosevich, V.M. (2008), “Modern robust statistical methods: An easy way to maximize the accuracy and power of your research,” American Psychologist, 63(7): 591–601.
Erturk, S.M. (2005), “Retrospective power analysis: When?” Radiology, 237(2): 743.
ESA (2006), “European Space Agency news,” website www.esa.int/esaCP/SEM09F8LURE_index_0.html, accessed 25 April 2008.
Eysenck, H.J. (1978), “An exercise in mega-silliness,” American Psychologist, 33(5): 517.
Falk, R. and Greenbaum, C.W. (1995), “Significance tests die hard: The amazing persistence of a probabilistic misconception,” Theory and Psychology, 5(1): 75–98.
Fan, X.T. (2001), “Statistical significance and effect size in education research: Two sides of a coin,” Journal of Educational Research, 94(5): 275–282.
Fan, X.T. and Thompson, B. (2001), “Confidence intervals about score reliability coefficients, please: An EPM guidelines editorial,” Educational and Psychological Measurement, 61(4): 517–531.
Faul, F., Erdfelder, E., Lang, A.G., and Buchner, A. (2007), “G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences,” Behavior Research Methods, 39(2): 175–191.
FDA (2008), “Estrogen and estrogen with progestin therapies for postmenopausal women,” website www.fda.gov/CDER/Drug/infopage/estrogens_progestins/default.htm, accessed 7 May 2008.
Feinberg, W.E. (1971), “Teaching the Type I and Type II errors: The judicial process,” The American Statistician, 25(3): 30–32.
Feinstein, A.R. (1995), “Meta-analysis: Statistical alchemy for the 21st century,” Journal of Clinical Epidemiology, 48(1): 71–79.
Fidler, F., Cumming, G., Thomason, N., Pannuzzo, D., Smith, J., Fyffe, P., Edmonds, H., Harrington, C., and Schmitt, R. (2005), “Toward improved statistical reporting in the Journal of Consulting and Clinical Psychology,” Journal of Consulting and Clinical Psychology, 73(1): 136–143.
Fidler, F., Thomason, N., Cumming, G., Finch, S., and Leeman, J. (2004), “Editors can lead researchers to confidence intervals, but can't make them think,” Psychological Science, 15(2): 119–126.
Field, A.P. (2003a), “Can meta-analysis be trusted?” The Psychologist, 16(12): 642–645.
Field, A.P. (2003b), “The problems in using fixed-effects models of meta-analysis on real-world data,” Understanding Statistics, 2(2): 105–124.
Field, A.P. (2005), “Is the meta-analysis of correlation coefficients accurate when population correlations vary?” Psychological Methods, 10(4): 444–467.
Field, A.P. and Wright, D.B. (2006), “A bluffer's guide to effect sizes,” PsyPAG Quarterly, 58(March): 9–23.
Finch, S., Cumming, G., and Thomason, N. (2001), “Reporting of statistical inference in the Journal of Applied Psychology: Little evidence of reform,” Educational and Psychological Measurement, 61(2): 181–210.
Fisher, R.A. (1925), Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.
Fleiss, J.L. (1994), “Measures of effect size for categorical data,” in Cooper, H. and Hedges, L.V. (editors), The Handbook of Research Synthesis. New York: Russell Sage Foundation, 245–260.
Fleiss, J.L., Levin, B., and Paik, M.C. (2003), Statistical Methods for Rates and Proportions, 3rd Edition. Hoboken, NJ: Wiley-Interscience.
Friedman, H. (1968), “Magnitude of experimental effect and a table for its rapid estimation,” Psychological Bulletin, 70(4): 245–251.
Friedman, H. (1972), “Trial by jury: Criteria for convictions by jury size and Type I and Type II errors,” The American Statistician, 26(2): 21–23.
Gardner, M.J. and Altman, D.G. (2000), “Estimating with confidence,” in Altman, D.G., Machin, D., Bryant, T.N., and Gardner, M.J. (editors), Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Journal Books, 3–5.
Gigerenzer, G. (1998), “We need statistical thinking, not statistical rituals,” Behavioral and Brain Sciences, 21(2): 199–200.
Gigerenzer, G. (2004), “Mindless statistics,” Journal of Socio-Economics, 33(5): 587–606.
Glass, G. (1976), “Primary, secondary, and meta-analysis of research,” Educational Researcher, 5(10): 3–8.
Glass, G.V. (2000), “Meta-analysis at 25,” website http://glass.ed.asu.edu/gene/papers/meta25.html, accessed 7 May 2008.
Glass, G.V., McGaw, B., and Smith, M.L. (1981), Meta-Analysis in Social Research. Beverly Hills, CA: Sage.
Glass, G.V. and Smith, M.L. (1978), “Reply to Eysenck,” American Psychologist, 33(5): 517–518.
Gleser, L.J. and Olkin, I. (1996), “Models for estimating the number of unpublished studies,” Statistics in Medicine, 15(23): 2493–2507.
Gliner, J.A., Morgan, G.A., and Harmon, R.J. (2002), “The chi-square test and accompanying effect sizes,” Journal of the American Academy of Child and Adolescent Psychiatry, 41(12): 1510–1512.
Goodman, S.N. and Berlin, J.A. (1994), “The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results,” Annals of Internal Medicine, 121(3): 200–206.
Gøtzsche, P.C., Hammarquist, C., and Burr, M. (1998), “House dust mite control measures in the management of asthma: Meta-analysis,” British Medical Journal, 317(7166): 1105–1110.
Green, S.B. (1991), “How many subjects does it take to do a regression analysis?” Multivariate Behavioral Research, 26(3): 499–510.
Greenland, S. (1994), “Can meta-analysis be salvaged?” American Journal of Epidemiology, 140(9): 783–787.
Greenley, G.E. (1995), “Market orientation and company performance: Empirical evidence from UK companies,” British Journal of Management, 6(1): 1–13.
Grégoire, G., Derderian, F., and LeLorier, J. (1995), “Selecting the language of the publications included in a meta-analysis: Is there a Tower of Babel bias?” Journal of Clinical Epidemiology, 48(1): 159–163.
Grissom, R.J. (1994), “Probability of the superior outcome of one treatment over another,” Journal of Applied Psychology, 79(2): 314–316.
Grissom, R.J. and Kim, J.J. (2005), Effect Sizes for Research: A Broad Practical Approach. Mahwah, NJ: Lawrence Erlbaum.
Haase, R., Waechter, D.M., and Solomon, G.S. (1982), “How significant is a significant difference? Average effect size of research in counseling psychology,” Journal of Counseling Psychology, 29(1): 58–65.
Hadzi-Pavlovic, D. (2007), “Effect sizes II: Differences between proportions,” Acta Neuropsychiatrica, 19(6): 384–385.
Hair, J.F., Anderson, R.E., Tatham, R.L., and Black, W.C. (1998), Multivariate Data Analysis, 5th Edition. Upper Saddle River, NJ: Prentice-Hall.
Hall, S.M. and Brannick, M.T. (2002), “Comparison of two random-effects methods of meta-analysis,” Journal of Applied Psychology, 87(2): 377–389.
Halpern, S.D., Karlawish, J.H.T., and Berlin, J.A. (2002), “The continuing unethical conduct of underpowered trials,” Journal of the American Medical Association, 288(3): 358–362.
Hambrick, D.C. (1994), “1993 presidential address: What if the Academy actually mattered?” Academy of Management Review, 19(1): 11–16.
Harlow, L.L., Mulaik, S.A., and Steiger, J.H. (editors) (1997), What if There Were No Significance Tests? Mahwah, NJ: Lawrence Erlbaum.
Harris, L.C. (2001), “Market orientation and performance: Objective and subjective empirical evidence from UK companies,” The Journal of Management Studies, 38(1): 17–43.
Harris, M.J. (1991), “Significance tests are not enough: The role of effect-size estimation in theory corroboration,” Theory and Psychology, 1(3): 375–382.
Harris, R.J. (1985), A Primer of Multivariate Statistics, 2nd Edition. Orlando, FL: Academic Press.
Hedges, L.V. (1981), “Distribution theory for Glass's estimator of effect size and related estimators,” Journal of Educational Statistics, 6(2): 106–128.
Hedges, L.V. (1988), “Comment on ‘Selection models and the file drawer problem’,” Statistical Science, 3(1): 118–120.
Hedges, L.V. (1992), “Meta-analysis,” Journal of Educational Statistics, 17(4): 279–296.
Hedges, L.V. (2007), “Meta-analysis,” in Rao, C.R. and Sinharay, S. (editors), Handbook of Statistics, Volume 26. Amsterdam: Elsevier, 919–953.
Hedges, L.V. and Olkin, I. (1980), “Vote-counting methods in research synthesis,” Psychological Bulletin, 88(2): 359–369.
Hedges, L.V. and Olkin, I. (1985), Statistical Methods for Meta-Analysis. London: Academic Press.
Hedges, L.V. and Pigott, T.D. (2001), “The power of statistical tests in meta-analysis,” Psychological Methods, 6(3): 203–217.
Hedges, L.V. and Vevea, J.L. (1998), “Fixed- and random-effects models in meta-analysis,” Psychological Methods, 3(4): 486–504.
Hoenig, J.M. and Heisey, D.M. (2001), “The abuse of power: The pervasive fallacy of power calculations for data analysis,” The American Statistician, 55(1): 19–24.
Hollenbeck, J.R., DeRue, D.S., and Mannor, M. (2006), “Statistical power and parameter stability when subjects are few and tests are many: Comment on Peterson, Smith, Martorana and Owens (2003),” Journal of Applied Psychology, 91(1): 1–5.
Hoppe, D.J. and Bhandari, M. (2008), “Evidence-based orthopaedics: A brief history,” Indian Journal of Orthopaedics, 42(2): 104–110.
Houle, T.T., Penzien, D.B., and Houle, C.K. (2005), “Statistical power and sample size estimation for headache research: An overview and power calculation tools,” Headache: The Journal of Head and Face Pain, 45(5): 414–418.
Hubbard, R. and Armstrong, J.S. (1992), “Are null results becoming an endangered species in marketing?” Marketing Letters, 3(2): 127–136.
Hubbard, R. and Armstrong, J.S. (2006), “Why we don't really know what ‘statistical significance’ means: A major educational failure,” Journal of Marketing Education, 28(2): 114–120.
Huberty, C.J. (2002), “A history of effect size indices,” Educational and Psychological Measurement, 62(2): 227–240.
Hunt, M. (1997), How Science Takes Stock: The Story of Meta-Analysis. New York: Russell Sage Foundation.
Hunter, J.E. (1997), “Needed: A ban on the significance test,” Psychological Science, 8(1): 3–7.
Hunter, J.E. and Schmidt, F.L. (1990), Methods of Meta-Analysis. Newbury Park, CA: Sage.
Hunter, J.E. and Schmidt, F.L. (2000), “Fixed effects vs. random effects meta-analysis models: Implications for cumulative research knowledge,” International Journal of Selection and Assessment, 8(4): 275–292.
Hunter, J.E. and Schmidt, F.L. (2004), Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 2nd Edition. Thousand Oaks, CA: Sage.
Hyde, J.S. (2001), “Reporting effect sizes: The role of editors, textbook authors, and publication manuals,” Educational and Psychological Measurement, 61(2): 225–228.
Iacobucci, D. (2005), “From the editor,” Journal of Consumer Research, 32(1): 1–6.
Ioannidis, J.P.A. (2005), “Why most published research findings are false,” PLoS Medicine, 2(8): e124, 696–701, website http://medicine.plosjournals.org/, accessed 1 April 2007.
Ioannidis, J.P.A. (2008), “Why most discovered true associations are inflated,” Epidemiology, 19(5): 640–648.
Iyengar, S. and Greenhouse, J.B. (1988), “Selection models and the file drawer problem,” Statistical Science, 3(1): 109–135.
Jaworski, B.J. and Kohli, A.K. (1993), “Market orientation: Antecedents and consequences,” Journal of Marketing, 57(3): 53–70.
JEP (2003), “Instructions to authors,” Journal of Educational Psychology, 95(1): 201.
Johnson, D.H. (1999), “The insignificance of statistical significance testing,” Journal of Wildlife Management, 63(3): 763–772.
Johnson, B.T., Mullen, B., and Salas, E. (1995), “Comparisons of three meta-analytic approaches,” Journal of Applied Psychology, 80(1): 94–106.
Jones, B.J. and Brewer, J.K. (1972), “An analysis of the power of statistical tests reported in the Research Quarterly,” Research Quarterly, 43(1): 23–30.
Katzer, J. and Sodt, J. (1973), “An analysis of the use of statistical testing in communication research,” Journal of Communication, 23(3): 251–265.
Kazdin, A. (1999), “The meanings and measurements of clinical significance,” Journal of Consulting and Clinical Psychology, 67(3): 332–339.
Kazdin, A.E. (2006), “Arbitrary metrics: Implications for identifying evidence-based treatments,” American Psychologist, 61(1): 42–49.
Keller, G. (2005), Statistics for Management and Economics. Belmont, CA: Thomson.
Kelley, K. and Maxwell, S.E. (2008), “Sample size planning with applications to multiple regression: Power and accuracy for omnibus and targeted effects,” in Alasuutari, P., Bickman, L., and Brannen, J. (editors), The Sage Handbook of Social Research Methods. London: Sage, 166–192.
Kendall, P.C. (1997), “Editorial,” Journal of Consulting and Clinical Psychology, 65(1): 3–5.
Keppel, G. (1982), Design and Analysis: A Researcher's Handbook, 2nd Edition. Englewood Cliffs, NJ: Prentice-Hall.
Kerr, N.L. (1998), “HARKing: Hypothesizing after the results are known,” Personality and Social Psychology Review, 2(3): 196–217.
Keselman, H.J., Algina, J., Lix, L.M., Wilcox, R.R., and Deering, K.N. (2008), “A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes,” Psychological Methods, 13(2): 110–129.
Kieffer, K.M., Reese, R.J., and Thompson, B. (2001), “Statistical techniques employed in AERJ and JCP articles from 1988 to 1997: A methodological review,” Journal of Experimental Education, 69(3): 280–309.
Kirca, A.H., Jayachandran, S., and Bearden, W.O. (2005), “Market orientation: A meta-analytic review and assessment of its antecedents and impact on performance,” Journal of Marketing, 69(2): 24–41.
Kirk, R.E. (1996), “Practical significance: A concept whose time has come,” Educational and Psychological Measurement, 56(5): 746–759.
Kirk, R.E. (2001), “Promoting good statistical practices: Some suggestions,” Educational and Psychological Measurement, 61(2): 213–218.
Kirk, R.E. (2003), “The importance of effect magnitude,” in Davis, S.F. (editor), Handbook of Research Methods in Experimental Psychology. Oxford, UK: Blackwell, 83–105.
Kline, R.B. (2004), Beyond Significance Testing: Reforming Data Analysis Methods in Behavioral Research. Washington, DC: American Psychological Association.
Kohli, A.K., Jaworski, B.J., and Kumar, A. (1993), “MARKOR: A measure of market orientation,” Journal of Marketing Research, 30(4): 467–477.
Kolata, G.B. (1981), “Drug found to help heart attack survivors,” Science, 214(13): 774–775.
Kolata, G.B. (2002), “Hormone replacement study a shock to the medical system,” New York Times on the Web, website www.nytimes.com/2002/07/10health/10/HORM.html, accessed 1 May 2008.
Kosciulek, J.F. and Szymanski, E.M. (1993), “Statistical power analysis of rehabilitation research,” Rehabilitation Counseling Bulletin, 36(4): 212–219.
Kraemer, H.C. and Thiemann, S. (1987), How Many Subjects? Statistical Power Analysis in Research. Newbury Park, CA: Sage.
Kraemer, H.C., Yesavage, J., and Brooks, J.O. (1998), “The advantages of excluding under-powered studies in meta-analysis: Inclusionist vs exclusionist viewpoints,” Psychological Methods, 3(1): 23–31.
Kroll, R.M. and Chase, L.J. (1975), “Communication disorders: A power analytic assessment of recent research,” Journal of Communication Disorders, 8(3): 237–247.
La Greca, A.M. (2005), “Editorial,” Journal of Consulting and Clinical Psychology, 73(1): 3–5.
Lane, D. (2008), “Fisher r-to-z calculator,” website http://onlinestatbook.com/calculators/fisher_z.html, accessed 27 November 2008.
Lang, J.M., Rothman, K.J., and Cann, C.I. (1998), “That confounded p-value,” Epidemiology, 9(1): 7–8.
LeCroy, C.W. and Krysik, J. (2007), “Understanding and interpreting effect size measures,” Journal of Social Work Research, 31(4): 243–248.
LeLorier, J., Grégoire, G., Benhaddad, A., Lapierre, J., and Derderian, F. (1997), “Discrepancies between meta-analyses and subsequent large scale randomized, controlled trials,” New England Journal of Medicine, 337(8): 536–542.
Lenth, R.V. (2001), “Some practical guidelines for effective sample size determination,” The American Statistician, 55(3): 187–193.
Levant, R.F. (1992), “Editorial,” Journal of Family Psychology, 6(1): 3–9.
Levine, M. and Ensom, M. (2001), “Post hoc analysis: An idea whose time has passed?” Pharmacotherapy, 21(4): 405–409.
Light, R.J. and Smith, P.V. (1971), “Accumulating evidence: Procedures for resolving contradictions among different research studies,” Harvard Educational Review, 41(4): 429–471.
Lilford, R. and Stevens, A.J. (2002), “Underpowered studies,” British Journal of Surgery, 89(2): 129–131.
Lindsay, R.M. (1993), “Incorporating statistical power into the test of significance procedure: A methodological and empirical inquiry,” Behavioral Research in Accounting, 5: 211–236.
Lipsey, M.W. (1990), Design Sensitivity: Statistical Power for Experimental Research. Newbury Park, CA: Sage.
Lipsey, M.W. (1998), “Design sensitivity: Statistical power for applied experimental research,” in Bickman, L. and Rog, D.J. (editors), Handbook of Applied Social Research Methods. Thousand Oaks, CA: Sage, 39–68.
Lipsey, M.W. and Wilson, D.B. (1993), “The efficacy of psychological, educational, and behavioral treatment: Confirmation from meta-analysis,” American Psychologist, 48(12): 1181–1209.
Lipsey, M.W. and Wilson, D.B. (2001), Practical Meta-Analysis. Thousand Oaks, CA: Sage.
Livingston, E.H. and Cassidy, L. (2005), “Statistical power and estimation of the number of required subjects for a study based on the t-test: A surgeon's primer,” Journal of Surgical Research, 128(2): 207–217.
Lowry, R. (2008a), “Fisher r-to-z calculator,” website http://faculty.vassar.edu/lowry/tabs.html#fisher, accessed 27 November 2008.
Lowry, R. (2008b), “z-to-P calculator,” website http://faculty.vassar.edu/lowry/tabs.html#z, accessed 27 November 2008.
Lustig, D. and Strauser, D. (2004), “Editor's comment: Effect size and rehabilitation research,” Journal of Rehabilitation, 70(4): 3–5.
Machin, D., Campbell, M., Fayers, P., and Pinol, A. (1997), Sample Size Tables for Clinical Studies, 2nd Edition. Oxford, UK: Blackwell.
Maddock, J.E. and Rossi, J.S. (2001), “Statistical power of articles published in three health psychology-related journals,” Health Psychology, 20(1): 76–78.
Malhotra, N.K. (1996), Marketing Research: An Applied Orientation, 2nd Edition. Upper Saddle River, NJ: Prentice-Hall.
Masson, M.E.J. and Loftus, G.R. (2003), “Using confidence intervals for graphically based data interpretation,” Canadian Journal of Experimental Psychology, 57(3): 203–220.
Maxwell, S.E. (2004), “The persistence of underpowered studies in psychological research: Causes, consequences, and remedies,” Psychological Methods, 9(2): 147–163.
Maxwell, S.E., Kelley, K., and Rausch, J.R. (2008), “Sample size planning for statistical power and accuracy in parameter estimation,” Annual Review of Psychology, 59: 537–563.
Mazen, A.M., Graf, L.A., Kellogg, C.E., and Hemmasi, M. (1987a), “Statistical power in contemporary management research,” Academy of Management Journal, 30(2): 369–380.
Mazen, A.M., Hemmasi, M., and Lewis, M.F. (1987b), “Assessment of statistical power in contemporary strategy research,” Strategic Management Journal, 8(4): 403–410.
McCartney, K. and Rosenthal, R. (2000), “Effect size, practical importance and social policy for children,” Child Development, 71(1): 173–180.
McClave, J.T. and Sincich, T. (2009), Statistics, 11th Edition. Upper Saddle River, NJ: Prentice-Hall.
McCloskey, D. (2002), The Secret Sins of Economics. Chicago, IL: Prickly Paradigm Press, website www.prickly-paradigm.com/paradigm4.pdf.
McCloskey, D.N. and Ziliak, S.T. (1996), “The standard error of regressions,” Journal of Economic Literature, 34(March): 97–114.
McGrath, R.E. and Meyer, G.J. (2006), “When effect sizes disagree: The case of r and d,” Psychological Methods, 11(4): 386–401.
McGraw, K.O. and Wong, S.P. (1992), “A common language effect size statistic,” Psychological Bulletin, 111(2): 361–365.
McSwain, D.N. (2004), “Assessment of statistical power in contemporary accounting information systems research,” Journal of Accounting and Finance Research, 12(7): 100–108.
Meehl, P.E. (1967), “Theory testing in psychology and physics: A methodological paradox,” Philosophy of Science, 34(June): 103–115.
Meehl, P.E. (1978), “Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology,” Journal of Consulting and Clinical Psychology, 46(4): 806–834.
Megicks, P. and Warnaby, G. (2008), “Market orientation and performance in small independent retailers in the UK,” International Review of Retail, Distribution and Consumer Research, 18(1): 105–119.
Melton, A. (1962), “Editorial,” Journal of Experimental Psychology, 64(6): 553–557.
Mendoza, J.L. and Stafford, K.L. (2001), “Confidence intervals, power calculation, and sample size estimation for the squared multiple correlation coefficient under the fixed and random regression models: A computer program and useful standard tables,” Educational and Psychological Measurement, 61(4): 650–667.
Miles, J.M. (2003), “A framework for power analysis using a structural equation modelling procedure,” BMC Medical Research Methodology, 3(27), website www.biomedcentral.com/1471-2288/3/27, accessed 1 April 2008.
Miles, J.M. and Shevlin, M. (2001), Applying Regression and Correlation. London: Sage.
Moher, D., Schulz, K.F., and Altman, D.G. (2001), “The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials,” Lancet, 357(9263): 1191–1194.
Mone, M.A., Mueller, G.C., and Mauland, W. (1996), “The perceptions and usage of statistical power in applied psychology and management research,” Personnel Psychology, 49(1): 103–120.
Muncer, S.J. (1999), “Power dressing is important in meta-analysis,” British Medical Journal, 318(27 March): 871.
Muncer, S.J., Craigie, M., and Holmes, J. (2003), “Meta-analysis and power: Some suggestions for the use of power in research synthesis,” Understanding Statistics, 2(1): 1–12.
Muncer, S.J., Taylor, S., and Craigie, M. (2002), “Power dressing and meta-analysis: Incorporating power analysis into meta-analysis,” Journal of Advanced Nursing, 38(3): 274–280.
Murphy, K.R. (1997), “Editorial,” Journal of Applied Psychology, 82(1): 3–5.
Murphy, K.R. (2002), “Using power analysis to evaluate and improve research,” in Rogelberg, S.G. (editor), Handbook of Research Methods in Industrial and Organizational Psychology. Oxford, UK: Blackwell, 119–137.
Murphy, K.R. and Myors, B. (2004), Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests, 2nd Edition. Mahwah, NJ: Lawrence Erlbaum.
Nakagawa, S. and Foster, T.M. (2004), “The case against retrospective statistical power analyses with an introduction to power analysis,” Acta Ethologica, 7(2): 103–108.
Narver, J.C. and Slater, S.F. (1990), “The effect of a market orientation on business profitability,” Journal of Marketing, 54(4): 20–35.
Neeley, J.H. (1995), “Editorial,” Journal of Experimental Psychology: Learning, Memory and Cognition, 21(1): 261.
NEO (2008), “NASA statement on student asteroid calculations,” Near-Earth Object Program, website http://neo.jpl.nasa.gov/news/news158.html, accessed 17 April 2008.
Newcombe, R.G. (2006), “A deficiency of the odds ratio as a measure of effect size,” Statistics in Medicine, 25(24): 4235–4240.
Nickerson, R.S. (2000), “Null hypothesis significance testing: A review of an old and continuing controversy,” Psychological Methods, 5(2): 241–301.
Norton, B.J. and Strube, M.J. (2001), “Understanding statistical power,” Journal of Orthopaedic and Sports Physical Therapy, 31(6): 307–315.
Nunnally, J.C. (1978), Psychometric Theory, 2nd Edition. New York: McGraw-Hill.
Nunnally, J.C. and Bernstein, I.H. (1994), Psychometric Theory, 3rd Edition. New York: McGraw-Hill.
Olejnik, S. and Algina, J. (2000), “Measures of effect size for comparative studies: Applications, interpretations, and limitations,” Contemporary Educational Psychology, 25(3): 241–286.
Olkin, I. (1995), “Statistical and theoretical considerations in meta-analysis,” Journal of Clinical Epidemiology, 48(1): 133–146.
Onwuegbuzie, A.J. and Leech, N.L. (2004), “Post hoc power: A concept whose time has come,” Understanding Statistics, 3(4): 201–230.
Orme, J.G. and Combs-Orme, T.D. (1986), “Statistical power and Type II errors in social work research,” Social Work Research and Abstracts, 22(3): 3–10.
Orwin, R.G. (1983), “A fail-safe N for effect size in meta-analysis,” Journal of Educational Statistics, 8(2): 157–159.
Orwin, R.G. (1994), “Evaluating coding decisions,” in Cooper, H. and Hedges, L.V. (editors), Handbook of Research Synthesis. New York: Russell Sage Foundation, 139–162.
Osborne, J.W. (2008a), “Bringing balance and technical accuracy to reporting odds ratios and the results of logistic regression analyses,” in Osborne, J.W. (editor), Best Practices in Quantitative Methods. Thousand Oaks, CA: Sage, 385–389.
Osborne, J.W. (2008b), “Sweating the small stuff in educational psychology: How effect size and power reporting failed to change from 1969 to 1999, and what that means for the future of changing practices,” Educational Psychology, 28(2): 151–160.
Overall, J.E. and Dalal, S.N. (1965), “Design of experiments to maximize power relative to cost,” Psychological Bulletin, 64(Nov): 339–350.
Pampel, F.C. (2000), Logistic Regression: A Primer. Thousand Oaks, CA: Sage.
Parker, R.I. and Hagan-Burke, S. (2007), “Useful effect size interpretations for single case research,” Behavior Therapy, 38(1): 95–105.
Parks, J.B., Shewokis, P.A., and Costa, C.A. (1999), “Using statistical power analysis in sport management research,” Journal of Sport Management, 13(2): 139–147.
Pearson, K. (1905), “Report on certain enteric fever inoculation statistics,” British Medical Journal, 2(2288): 1243–1246.
Pelham, A. (2000), “Market orientation and other potential influences on performance in small and medium-sized manufacturing firms,” Journal of Small Business Management, 38(1): 48–67.
Perrin, B. (2000), “Donald T. Campbell and the art of practical ‘in-the-trenches’ program evaluation,” in Bickman, L. (editor), Validity and Social Experimentation: Donald Campbell's Legacy, Volume 1. Thousand Oaks, CA: Sage, 267–282.
Peterson, R.S., Smith, D.B., Martorana, P.V., and Owens, P.D. (2003), “The impact of chief executive officer personality on top management team dynamics: One mechanism by which leadership affects organizational performance,” Journal of Applied Psychology, 88(5): 795–808.
Phillips, D.W. (2007), “The Titanic numbers game,” website www.titanicsociety.com/readables/main/articles_04-20-1998_titanic_numbers_game.asp, accessed 3 September 2008.
Platt, J.R. (1964), “Strong inference,” Science, 146(3642): 347–353.
Popper, K. (1959), The Logic of Scientific Discovery. New York: Harper and Row.
Prentice, D.A. and Miller, D.T. (1992), “When small effects are impressive,” Psychological Bulletin, 112(1): 160–164.
Randolph, J.J. and Edmondson, R.S. (2005), “Using the binomial effect size display (BESD) to present the magnitude of effect sizes to the evaluation audience,” Practical Assessment, Research and Evaluation, 10(14), electronic journal: http://pareonline.net/pdf/v10n14.pdf, accessed 17 April 2008.
Roberts, J.K. and Henson, R.K. (2002), “Correction for bias in estimating effect sizes,” Educational and Psychological Measurement, 62(2): 241–253.
Roberts, R.M. (1989), Serendipity: Accidental Discoveries in Science. New York: John Wiley and Sons.
Rodgers, J.L. and Nicewander, W.A. (1988), “Thirteen ways to look at the correlation coefficient,” The American Statistician, 42(1): 59–66.
Rosenthal, J.A. (1996), “Qualitative descriptors of strength of association and effect size,” Journal of Social Service Research, 21(4): 37–59.
Rosenthal, M.C. (1994), “The fugitive literature,” in Cooper, H. and Hedges, L.V. (editors), Handbook of Research Synthesis. New York: Russell Sage Foundation, 85–94.
Rosenthal, R. (1979), “The ‘file drawer problem’ and the tolerance for null results,” Psychological Bulletin, 86(3): 638–641.
Rosenthal, R. (1990), “How are we doing in soft psychology?” American Psychologist, 45(6): 775–777.
Rosenthal, R. (1991), Meta-Analytic Procedures for Social Research. Newbury Park, CA: Sage.
Rosenthal, R. and DiMatteo, M.R. (2001), “Meta-analysis: Recent developments in quantitative methods for literature reviews,” Annual Review of Psychology, 52(1): 59–82.
Rosenthal, R. and Rubin, D.R. (1982), “A simple, general purpose display of magnitude of experimental effect,” Journal of Educational Psychology, 74(2): 166–169.
Rosenthal, R., Rosnow, R.L., and Rubin, D.B. (2000), Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach. Cambridge, UK: Cambridge University Press.
Rosnow, R.L. and Rosenthal, R. (1989), “Statistical procedures and the justification of knowledge in psychological science,” American Psychologist, 44(10): 1276–1284.
Rosnow, R.L. and Rosenthal, R. (2003), “Effect sizes for experimenting psychologists,” Canadian Journal of Experimental Psychology, 57(3): 221–237.
Rossi, J.S. (1985), “Tables of effect size for z score tests of differences between proportions and between correlation coefficients,” Educational and Psychological Measurement, 45(4): 737–745.
Rossi, J.S. (1990), “Statistical power of psychological research: What have we gained in 20 years?” Journal of Consulting and Clinical Psychology, 58(5): 646–656.
Rothman, K.J. (1986), “Significance testing,” Annals of Internal Medicine, 105(3): 445–447.
Rothman, K.J. (1990), “No adjustments are needed for multiple comparisons,” Epidemiology, 1(1): 43–46.
Rothman, K.J. (1998), “Writing for Epidemiology,” Epidemiology, 9(3): 333–337.
Rouder, J.N. and Morey, R.D. (2005), “Relational and arelational confidence intervals,” Psychological Science, 16(1): 77–79.
Rynes, S.L. (2007), “Editor's afterword: Let's create a tipping point – what academics and practitioners can do, alone and together,” Academy of Management Journal, 50(5): 1046–1054.
Sauerland, S. and Seiler, C.M. (2005), “Role of systematic reviews and meta-analysis in evidence-based medicine,” World Journal of Surgery, 29(5): 582–587.
Sawyer, A.G. and Ball, A.D. (1981), “Statistical power and effect size in marketing research,” Journal of Marketing Research, 18(3): 275–290.
Sawyer, A.G. and Peter, J.P. (1983), “The significance of statistical significance tests in marketing research,” Journal of Marketing Research, 20(2): 122–133.
Scarr, S. (1997), “Rules of evidence: A larger context for the statistical debate,” Psychological Science, 8(1): 16–17.
Schmidt, F.L. (1992), “What do data really mean? Research findings, meta-analysis, and cumulative knowledge in psychology,” American Psychologist, 47(10): 1173–1181.
Schmidt, F.L. (1996), “Statistical significance testing and cumulative knowledge in psychology: Implications for the training of researchers,” Psychological Methods, 1(2): 115–129.
Schmidt, F.L. and Hunter, J.E. (1977), “Development of a general solution to the problem of validity generalization,” Journal of Applied Psychology, 62(5): 529–540.
Schmidt, F.L. and Hunter, J.E. (1996), “Measurement error in psychological research: Lessons from 26 research scenarios,” Psychological Methods, 1(2): 199–223.
Schmidt, F.L. and Hunter, J.E. (1997), “Eight common but false objections to the discontinuation of significance testing in the analysis of research data,” in Harlow, L.L., Mulaik, S.A., and Steiger, J.H. (editors), What if There Were No Significance Tests? Mahwah, NJ: Lawrence Erlbaum, 37–64.
Schmidt, F.L. and Hunter, J.E. (1999a), “Comparison of three meta-analysis methods revisited: An analysis of Johnson, Mullen and Salas (1995),” Journal of Applied Psychology, 84(1): 144–148.
Schmidt, F.L. and Hunter, J.E. (1999b), “Theory testing and measurement error,” Intelligence, 27(3): 183–198.
Schmidt, F.L., Oh, I.S., and Hayes, T.L. (2009), “Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results,” British Journal of Mathematical and Statistical Psychology, 62(1): 97–128.
Schulze, R. (2004), Meta-Analysis: A Comparison of Approaches. Cambridge, MA: Hogrefe and Huber.
Schwab, A. and Starbuck, W.H. (2009), “Null-hypothesis significance tests in behavioral and management research: We can do better,” in Bergh, D. and Ketchen, D. (editors), Research Methodology in Strategy and Management, Volume 5. Emerald, 29–54.
Sechrest, L., McKnight, P., and McKnight, K. (1996), “Calibration of measures for psychotherapy outcome studies,” American Psychologist, 51(10): 1065–1071.
Sedlmeier, P. and Gigerenzer, G. (1989), “Do studies of statistical power have an effect on the power of studies?” Psychological Bulletin, 105(2): 309–316.
Seth, A., Carlson, K.D., Hatfield, D.E., and Lan, H.W. (2009), “So what? Beyond statistical significance to substantive significance in strategy research,” in Bergh, D.D. and Ketchen, D.J. (editors), Research Methodology in Strategy and Management, Volume 5. Emerald, 3–27.
Shapiro, S. (1994), “Meta-analysis/shmeta-analysis,” American Journal of Epidemiology, 140(9): 771–778.
Shaughnessy, J.J., Zechmeister, E.B., and Zechmeister, J.S. (2009), Research Methods in Psychology, 8th Edition. New York: McGraw-Hill.
Shaver, J.M. (2006), “Interpreting empirical findings,” Journal of International Business Studies, 37(4): 451–452.
Shaver, J.M. (2007), “Interpreting empirical results in strategy and management research,” in Ketchen, D. and Bergh, D. (editors), Research Methodology in Strategy and Management, Volume 4. Elsevier, 273–293.
Shaver, J.M. (2008), “Organizational significance,” Strategic Organization, 6(2): 185–193.
Shaver, J.P. (1993), “What statistical significance testing is, and what it is not,” Journal of Experimental Education, 61(4): 293–316.
Shoham, A., Rose, G.M., and Kropp, F. (2005), “Market orientation and performance: A meta-analysis,” Marketing Intelligence & Planning, 23(5): 435–454.
Sigall, H. and Ostrove, N. (1975), “Beautiful but dangerous: Effects of offender attractiveness and nature of the crime on juridic judgment,” Journal of Personality and Social Psychology, 31(3): 410–414.
Silverman, I., Choi, J., Mackewn, A., Fisher, M., Moro, J., and Olshansky, E. (2000), “Evolved mechanisms underlying wayfinding: Further studies on the hunter-gatherer theory of spatial sex differences,” Evolution and Human Behavior, 21(3): 210–213.
Simon, S. (2001), “Odds ratio versus relative risk,” website www.childrensmercy.org/stats/journal/oddsratio.asp, accessed 17 April 2008.
Sink, C.A. and Stroh, H.R. (2006), “Practical significance: The use of effect sizes in school counseling research,” Professional School Counseling, 9(5): 401–411.
Slater, S.F. and Narver, J.C. (2000), “The positive effect of a market orientation on business profitability: A balanced replication,” Journal of Business Research, 48(1): 69–73.
Smith, M.L. and Glass, G.V. (1977), “Meta-analysis of psychotherapy outcome studies,” American Psychologist, 32(9): 752–760.
Smithson, M. (2001), “Correct confidence intervals for various regression effect sizes and parameters: The importance of noncentral distributions in computing intervals,” Educational and Psychological Measurement, 61(4): 605–632.
Smithson, M. (2003), Confidence Intervals. Thousand Oaks, CA: Sage.
Snyder, P. and Lawson, S. (1993), “Evaluating results using corrected and uncorrected effect size estimates,” Journal of Experimental Education, 61(4): 334–349.
Steering Committee of the Physicians' Health Study Research Group (1988), “Preliminary report: Findings from the aspirin component of the ongoing Physicians' Health Study,” New England Journal of Medicine, 318(4): 262–264.
Steiger, J.H. (2004), “Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis,” Psychological Methods, 9(2): 164–182.
Sterling, T.D. (1959), “Publication decisions and their possible effects on inferences drawn from tests of significance – or vice versa,” Journal of the American Statistical Association, 54(285): 30–34.
Sterne, J.A.C., Becker, B.J., and Egger, M. (2005), “The funnel plot,” in Rothstein, H.R., Sutton, A.J., and Borenstein, M. (editors), Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, UK: John Wiley and Sons, 75–98.
Sterne, J.A.C. and Egger, M. (2005), “Regression methods to detect publication and other bias in meta-analysis,” in Rothstein, H.R., Sutton, A.J., and Borenstein, M. (editors), Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, UK: John Wiley and Sons, 99–110.
Sterne, J.A.C., Egger, M., and Smith, G.D. (2001), “Investigating and dealing with publication and other biases,” in Egger, M., Smith, G.D., and Altman, D.G. (editors), Systematic Reviews in Health Care: Meta-Analysis in Context. London: BMJ, 189–208.
Stock, W.A. (1994), “Systematic coding for research synthesis,” in Cooper, H. and Hedges, L.V. (editors), Handbook of Research Synthesis. New York: Russell Sage Foundation, 125–138.
Strube, M.J. (1988), “Averaging correlation coefficients: Influence of heterogeneity and set size,” Journal of Applied Psychology, 73(3): 559–568.
Sudnow, D. (1967), “Dead on arrival,” Transaction, 5(Nov): 36–43.
Sullivan, M. (2007), Statistics: Informed Decisions Using Data. Upper Saddle River, NJ: Prentice-Hall.
Sutcliffe, J.P. (1980), “On the relationship of reliability to statistical power,” Psychological Bulletin, 88(2): 509–515.
Teo, K.T., Yusuf, S., Collins, R., Held, P.H., and Peto, R. (1991), “Effects of intravenous magnesium in suspected acute myocardial infarction: Overview of randomized trials,” British Medical Journal, 303(14 Dec): 1499–1503.
Thalheimer, W. and Cook, S. (2002), “How to calculate effect sizes from published research articles: A simplified methodology,” website http://work-learning.com/effect_sizes.htm, accessed 23 January 2008.
Thomas, L. (1997), “Retrospective power analysis,” Conservation Biology, 11(1): 276–280.
Thompson, B. (1999a), “If statistical significance tests are broken/misused, what practices should supplement or replace them?” Theory and Psychology, 9(2): 165–181.
Thompson, B. (1999b), “Journal editorial policies regarding statistical significance tests: Heat is to fire as p is to importance,” Educational Psychology Review, 11(2): 157–169.
Thompson, B. (1999c), “Why ‘encouraging’ effect size reporting is not working: The etiology of researcher resistance to changing practices,” Journal of Psychology, 133(2): 133–140.
Thompson, B. (2002a), “‘Statistical,’ ‘practical,’ and ‘clinical’: How many kinds of significance do counselors need to consider?” Journal of Counseling and Development, 80(1): 64–71.
Thompson, B. (2002b), “What future quantitative social science research could look like: Confidence intervals for effect sizes,” Educational Researcher, 31(3): 25–32.
Thompson, B. (2007a), “Effect sizes, confidence intervals, and confidence intervals for effect sizes,” Psychology in the Schools, 44(5): 423–432.
Thompson, B. (2007b), “Personal website,” www.coe.tamu.edu/~bthompson/, accessed 4 September 2008.
Thompson, B. (2008), “Computing and interpreting effect sizes, confidence intervals, and confidence intervals for effect sizes,” in Osborne, J.W. (editor), Best Practices in Quantitative Methods. Thousand Oaks, CA: Sage, 246–262.
Todorov, A., Mandisodza, A.N., Goren, A., and Hall, C.C. (2005), “Inferences of competence from faces predict election outcomes,” Science, 308(10 June): 1623–1626.
Tryon, W.W. (2001), “Evaluating statistical difference, equivalence and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests,” Psychological Methods, 6(4): 371–386.
Tversky, A. and Kahneman, D. (1971), “Belief in the law of small numbers,” Psychological Bulletin, 76(2): 105–110.
Uitenbroek, D. (2008), “T test calculator,” website www.quantitativeskills.com/sisa/statistics/t-test.htm, accessed 27 November 2008.
Urschel, J.D. (2005), “How to analyze an article,” World Journal of Surgery, 29(5): 557–560.
Vacha-Haase, T. (2001), “Statistical significance should not be considered one of life's guarantees: Effect sizes are needed,” Educational and Psychological Measurement, 61(2): 219–224.
Vacha-Haase, T., Nilsson, J.E., Reetz, D.R., Lance, T.S., and Thompson, B. (2000), “Reporting practices and APA editorial policies regarding statistical significance and effect size,” Theory and Psychology, 10(3): 413–425.
Vacha-Haase, T. and Thompson, B. (2004), “How to estimate and interpret various effect sizes,” Journal of Counseling Psychology, 51(4): 473–481.
Van Belle, G. (2002), Statistical Rules of Thumb. New York: John Wiley and Sons.
Vaughn, R.D. (2007), “The importance of meaning,” American Journal of Public Health, 97(4): 592–593.
Villar, J. and Carroli, G. (1995), “Predictive ability of meta-analyses of randomized controlled trials,” Lancet, 345(8952): 772–776.
Volker, M.A. (2006), “Reporting effect size estimates in school psychology research,” Psychology in the Schools, 43(6): 653–672.
Wang, X. and Yang, Z. (2008), “A meta-analysis of effect sizes in international marketing experiments,” International Marketing Review, 25(3): 276–291.
Webb, E.T., Campbell, D.T., Schwartz, R.D., Sechrest, L., and Grove, J.B. (1981), Nonreactive Measures in the Social Sciences, 2nd Edition. Boston, MA: Houghton Mifflin.
Whitener, E.M. (1990), “Confusion of confidence intervals and credibility intervals in meta-analysis,” Journal of Applied Psychology, 75(3): 315–321.
Wilcox, R.R. (2005), Introduction to Robust Estimation and Hypothesis Testing, 2nd Edition. Amsterdam: Elsevier.
Wilkinson, L. and the Task Force on Statistical Inference (1999), “Statistical methods in psychology journals: Guidelines and explanations,” American Psychologist, 54(8): 594–604.
Wright, M. and Armstrong, J.S. (2008), “Verification of citations: Fawlty towers of knowledge?” Interfaces, 38(2): 125–139.
Yeaton, W. and Sechrest, L. (1981), “Meaningful measures of effect,” Journal of Consulting and Clinical Psychology, 49(5): 766–767.
Yin, R.K. (1984), Case Study Research. Beverly Hills, CA: Sage.
Yin, R.K. (2000), “Rival explanations as an alternative to reforms as ‘experiments’,” in Bickman, L. (editor), Validity and Social Experimentation: Donald Campbell's Legacy, Volume 1. Thousand Oaks, CA: Sage, 239–266.
Young, N.S., Ioannidis, J.P., and Al-Ubaydli, O. (2008), “Why current publication practices may distort science,” PLoS Medicine, website http://medicine.plosjournals.org/, 5(10): e201: 1–5.
Yusuf, S. and Flather, M. (1995), “Magnesium in acute myocardial infarction: ISIS 4 provides no grounds for its routine use,” British Medical Journal, 310(25 March): 751–752.
Ziliak, S.T. and McCloskey, D.N. (2004), “Size matters: The standard error of regressions in the American Economic Review,” Journal of Socio-Economics, 33(5): 527–546.
Ziliak, S.T. and McCloskey, D.N. (2008), The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press.
Zodpey, S.P. (2004), “Sample size and power analysis in medical research,” Indian Journal of Dermatology, 70(2): 123–128.
Zumbo, B.D. and Hubley, A.M. (1998), “A note on misconceptions concerning prospective and retrospective power,” The Statistician, 47(Part 2): 385–388.
