
Comments on “Randomization and the Design of Experiments” by P. Urbach

Published online by Cambridge University Press:  01 April 2022

O. Mayo*
Affiliation:
Biometry Section, Waite Agricultural Research Institute, The University of Adelaide, South Australia

Abstract

Urbach (1985) has concluded that the use of randomization in the design of clinical and agricultural trials is both inappropriate and ineffective. It is argued here that it is appropriate, as it eliminates the dependence of inference on the unknown precise physical model that underlies a set of observations, and effective, in that it is relatively simple to apply in practice compared with any competing method. Furthermore, it has been proven in practice.
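As a purely illustrative aside (not part of Mayo's paper), the two practical points of the abstract — that treatments can be allocated at random with little effort, and that inference can then rest on the re-randomization distribution rather than on an assumed physical or parametric model — can be sketched in a few lines of code. All function names, parameters, and numbers below are hypothetical.

import random

def randomize_treatments(units, treatments, seed=None):
    """Assign treatments to units completely at random (equal replication assumed)."""
    assert len(units) % len(treatments) == 0, "units must divide evenly among treatments"
    rng = random.Random(seed)
    labels = treatments * (len(units) // len(treatments))
    rng.shuffle(labels)
    return dict(zip(units, labels))

def randomization_test(control, treated, n_rerandomizations=10000, seed=0):
    """Two-sided p-value for the observed mean difference under re-randomization."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(control) + list(treated)
    n_t = len(treated)
    extreme = 0
    for _ in range(n_rerandomizations):
        rng.shuffle(pooled)                      # one hypothetical re-allocation
        t, c = pooled[:n_t], pooled[n_t:]
        if abs(sum(t) / n_t - sum(c) / len(c)) >= abs(observed):
            extreme += 1
    return extreme / n_rerandomizations

# Example with made-up yields:
plan = randomize_treatments(units=list(range(8)), treatments=["A", "B"], seed=1)
p = randomization_test(control=[4.1, 3.8, 4.4, 4.0], treated=[4.9, 5.1, 4.6, 5.0])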

Type
Discussion
Copyright
Copyright © 1987 by the Philosophy of Science Association

Footnotes

I thank T. W. Hancock, M. M. Morris and G. N. Wilkinson for helpful discussion, and I. Hacking for improvements to the manuscript.

References

Arpaillange, P.; Dion, S.; and Mathe, G. (1985), “Proposal for Ethical Standards in Therapeutic Trials”, British Medical Journal 291: 887–889.
Basu, D. (1980), “Randomization Analysis of Experimental Data: The Fisher Randomization Test” (with discussion), Journal of the American Statistical Association 75: 575–595.
Box, J. F. (1978), R. A. Fisher: The Life of a Scientist. New York: John Wiley.
Eden, T., and Yates, F. (1938), “On the Validity of Fisher's Z Test When Applied to an Actual Example of Non-Normal Data”, Journal of Agricultural Science 23: 6–17.
Fisher, R. A. (1935), The Design of Experiments. Edinburgh: Oliver and Boyd.
Gillon, R. (1985), “‘Primum non nocere’ and the Principle of Non-Maleficence”, British Medical Journal 291: 130–131.
Greenberg, B. G. (1951), “Why Randomize?”, Biometrics 7: 309–322.
Harville, D. A. (1975), “Experimental Randomization: Who Needs It?”, American Statistician 29: 27–31.
Kempthorne, O. (1977), “Why Randomize?”, Journal of Statistical Planning and Inference 1: 1–25.
Kendall, M. G., and Stuart, A. (1963), The Advanced Theory of Statistics, vol. 1. London: Griffin.
Neyman, J. (1935), “Statistical Problems in Agricultural Experimentation”, Journal of the Royal Statistical Society, Series B 2: 154–180.
Savage, L. J. (1977), “The Shifting Foundations of Statistics”, in R. Colodny (ed.), Logic, Laws and Life. Pittsburgh: University of Pittsburgh Press, pp. 3–18.
Tedin, O. (1931), “The Influence of Systematic Plot Arrangement Upon the Estimate of Error in Field Experiments”, Journal of Agricultural Science 21: 191–208.
Urbach, P. (1985), “Randomization and the Design of Experiments”, Philosophy of Science 52: 256–273.
Wilkinson, G. N., and Mayo, O. (1982), “Control of Variability in Field Trials: An Essay on the Controversy Between ‘Student’ and Fisher, and a Resolution of It”, Utilitas Mathematica 21B: 169–188.
Wilkinson, G. N.; Eckert, S. R.; Hancock, T. W.; and Mayo, O. (1983), “Nearest Neighbour (NN) Analysis of Field Experiments” (with discussion), Journal of the Royal Statistical Society, Series B 45: 151–211.
Yates, F. (1935), “Complex Experiments”, Journal of the Royal Statistical Society, Series B 2: 181–247.