
Toward a More Objective Understanding of the Evidence of Carcinogenic Risk

Published online by Cambridge University Press:  28 February 2022

Deborah G. Mayo*
Affiliation:
Virginia Polytechnic Institute and State University

Extract

The field of quantified risk assessment is only about 20 years old, and it is already considered to be in crisis. As Funtowicz and Ravetz (1985) put it:

The concept of risk in terms of probability has proved to be so elusive, and statistical inference so problematic, that many experts in the field have recently either lost hope of finding a scientific solution or lost faith in Risk Analysis as a tool for decisionmaking. (p. 219)

Thus the ‘art’ of the assessment of risks… is at an impasse. The early hopes that it could be reduced to a science are frustrated. …[O]thers are tending to introduce the ‘human’ and ‘cultural’ factors. The question now becomes, to what extent should these predominate? Would it be to the reduction or exclusion of the ‘scientific’ aspects? For, …if the perceived phenomena of ‘risks’ are interpreted as lacking all objective content or being merely a small part of some total cultural configuration, then there is no basis for dialogue between opposed positions on such problems. (pp. 220-221)

Part XV. Risk Assessment

Copyright © 1989 by the Philosophy of Science Association


Footnotes

1

A portion of this research was carried out during tenure of a National Endowment for the Humanities Fellowship for College Teachers; I gratefully acknowledge that support. I would like to thank Marjorie Grene for numerous useful comments on earlier drafts.

References

Ashford, N.A., Ryan, C.W. and Caldart, C.C. (1983), “A Hard Look at Federal Regulation of Formaldehyde: A Departure from Reasoned Decisionmaking”, Harvard Environmental Law Review 7: 297-370.
Cranor, C. (1987), “Some Public Policy Problems with Epidemiology: How Good is the 95% Rule?”. Paper presented at the Pacific Division meeting of the American Philosophical Association, March 1987.
Douglas, M. and Wildavsky, A. (1982), Risk and Culture. Berkeley: University of California Press.
Fisher, R.A. (1955), “Statistical Methods and Scientific Induction”, Journal of the Royal Statistical Society (B) 17: 69-78.
Fleiss, J.L. (1986), “Significance Tests Have a Role in Epidemiologic Research: Reactions to A.M. Walker”, American Journal of Public Health 76 (No. 5, May 1986): 559-560.
Formaldehyde Federal Register Notice, May 1981.
Freiman, J.A., Chalmers, T.C., Smith, H. Jr., and Kuebler, R.R. (1978), “The Importance of Beta, the Type II Error and Sample Size in the Design and Interpretation of the Randomized Control Trial, Survey of 71 ‘Negative’ Trials”, The New England Journal of Medicine 299 (No. 13): 690-694.
Funtowicz, S.O. and Ravetz, J.R. (1985), “Three Types of Risk Assessment: A Methodological Analysis”, in Risk Analysis in the Private Sector, Whipple, C. and Covello, V.T. (eds.). New York: Plenum Press, pp. 217-231.
Kempthorne, O. and Folks, L. (1971), Probability, Statistics, and Data Analysis. Ames: Iowa State University Press.
Lash, J., Gillman, K. and Sheridan, D. (1984), A Season of Spoils: The Reagan Administration's Attack on the Environment. New York: Pantheon Books.
Mayo, D. (1985), “Behavioristic, Evidentialist, and Learning Models of Statistical Testing”, Philosophy of Science 52: 493-516.
Mayo, D., “Sociological vs. Metascientific Philosophies of Risk Assessment”, in Acceptable Evidence: Science and Values in Risk Management, Mayo, D. and Hollander, R. (eds.). Forthcoming, Oxford.
National Research Council (1983), Risk Assessment in the Federal Government: Managing the Process. Washington, D.C.: National Academy Press.
Neyman, J. and Pearson, E.S. (1933), “On the Problem of the Most Efficient Tests of Statistical Hypotheses”, Philosophical Transactions of the Royal Society A 231: 289-337. (Reprinted in Joint Statistical Papers, Berkeley: University of California Press, 1967, pp. 276-283.)
Pearson, E.S. (1955), “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society (B) 17: 204-207.
Poole, C. (1987), “Beyond the Confidence Interval”, American Journal of Public Health 77 (No. 2, Feb. 1987): 195-199.
Silbergeld, E.K., “Risk Assessment and Risk Management—An Uneasy Divorce”, in Acceptable Evidence: Science and Values in Risk Management, Mayo, D. and Hollander, R. (eds.). Forthcoming, Oxford.
U.S. House of Representatives (1982), Formaldehyde: Review of the Scientific Basis of EPA's Carcinogenic Risk Assessment. Hearing Before the Subcommittee on Investigations and Oversight of the Committee on Science and Technology, 97th Congress (second session), May 20, 1982.
Walker, A.M. (1986), “Reporting the Results of Epidemiologic Studies”, American Journal of Public Health 76 (No. 5, May 1986): 556-558.
Weinberg, A. (1972), “Science and Trans-Science”, Minerva 10: 209-222.