
23 - Subscales and summary scales: issues in health-related outcomes

Published online by Cambridge University Press:  18 December 2009

Mark Wilson, Ph.D.
Affiliation: Professor, University of California at Berkeley, Berkeley, CA

Joseph Lipscomb
Affiliation: National Cancer Institute, Bethesda, Maryland

Carolyn C. Gotay
Affiliation: Cancer Research Center, Hawaii

Claire Snyder
Affiliation: National Cancer Institute, Bethesda, Maryland

Summary

Introduction

Many regard health as a multidimensional construct. Correspondingly, a number of health-related quality-of-life (HRQOL) instruments are built around a framework of subscales, with each subscale intended to capture a particular dimension (e.g., physical, social, emotional) of the overall construct (HRQOL). The validity of the total instrument rests on the validity of the underlying subscales. Not infrequently, the instrument's scoring algorithm will also allow the derivation of summary scores. That is, the items used to construct the instrument's N subscales are further aggregated to yield M (< N) summary scales; when M = 1, the instrument yields an overall summary score.
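As a concrete illustration of this aggregation, here is a minimal Python sketch in which items feed N = 3 subscales that are then averaged into an M = 1 summary score. The item names, subscale groupings, and equal-weight mean scoring are invented for the example and do not correspond to any particular instrument:

```python
# Hypothetical illustration only: item names, subscale groupings, and
# scoring rules are invented, not taken from any real HRQOL instrument.

# Item responses for one respondent (e.g., 0-4 Likert scoring).
responses = {
    "walk_flight_of_stairs": 3, "carry_groceries": 4,   # physical items
    "visit_friends": 2, "attend_events": 1,             # social items
    "felt_calm": 3, "felt_downhearted": 2,              # emotional items
}

# Scoring key: which items feed which subscale (N = 3 here).
subscale_items = {
    "physical": ["walk_flight_of_stairs", "carry_groceries"],
    "social": ["visit_friends", "attend_events"],
    "emotional": ["felt_calm", "felt_downhearted"],
}

def subscale_scores(responses, subscale_items):
    """Mean item score per subscale."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in subscale_items.items()
    }

def summary_score(scores):
    """M = 1 summary scale: unweighted mean of the subscale scores."""
    return sum(scores.values()) / len(scores)

scores = subscale_scores(responses, subscale_items)
print(scores)                 # per-dimension profile
print(summary_score(scores))  # single overall score
```

Real instruments differ in the details (weighted sums, standardized T-scores, handling of missing items), but the two-stage structure — items to subscales, subscales to summary — is the same.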

A fundamental assumption in both Classical Test Theory (CTT) and Item Response Theory (IRT) is the unidimensionality of the latent trait. In the case at hand, this means the unidimensionality of each of the HRQOL instrument's subscales and, for that matter, the unidimensionality of any summary scales. If the analyst insists that health is a multidimensional construct, at least two important questions arise.
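One common way to probe the unidimensionality assumption for a given subscale is to examine the eigenvalue spectrum of the inter-item correlation matrix: a dominant first eigenvalue is consistent with (though not proof of) a single latent trait. The simulated data and the eigenvalue-ratio heuristic below are illustrative assumptions, not a method prescribed in this chapter:

```python
# Rough diagnostic sketch (assumed, not from the chapter): if a subscale's
# items are unidimensional, the first eigenvalue of their correlation
# matrix should dominate the rest. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 respondents on 5 items driven by ONE latent trait.
theta = rng.normal(size=(500, 1))                   # single latent dimension
loadings = np.array([[0.8, 0.7, 0.9, 0.6, 0.75]])   # item "discriminations"
items = theta @ loadings + rng.normal(scale=0.5, size=(500, 5))

corr = np.corrcoef(items, rowvar=False)             # 5 x 5 inter-item correlations
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending eigenvalues

# Heuristic: a first-to-second eigenvalue ratio well above ~3-4 is
# commonly read as consistent with unidimensionality.
ratio = eigvals[0] / eigvals[1]
print(eigvals.round(2), ratio.round(1))
```

With multidimensional data (e.g., two uncorrelated traits each driving half the items), the second eigenvalue rises and the ratio collapses, flagging the violation.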

First, must we therefore derive and apply each subscale using information collected only from the items on that subscale, so that each subscale in the multidimensional construct essentially “floats on its own bottom”? Or is there some way to strengthen each subscale by drawing on the ensemble of information available across all related subscales?
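The second alternative can be illustrated with a simple shrinkage sketch: when the latent traits behind two subscales are correlated, an estimator that uses both observed scores recovers each trait with lower error than one that treats each subscale in isolation. The covariance matrices here are assumed known, and the data are simulated purely for illustration — this is not the chapter's estimation method:

```python
# Illustrative sketch (assumptions, not the chapter's method): borrowing
# strength across correlated subscales via a multivariate posterior mean,
# compared with shrinking each subscale using only its own score.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

T = np.array([[1.0, 0.8],    # latent (true-score) covariance:
              [0.8, 1.0]])   # the two dimensions correlate at 0.8
S = np.diag([0.5, 0.5])      # measurement-error variance per subscale

true = rng.multivariate_normal([0, 0], T, size=n)          # latent traits
obs = true + rng.multivariate_normal([0, 0], S, size=n)    # observed scores

# "Floats on its own bottom": shrink each subscale using only itself.
uni = obs * (T.diagonal() / (T.diagonal() + S.diagonal()))

# Borrow strength: multivariate posterior mean T (T + S)^-1 x,
# written for row vectors as obs @ (T + S)^-1 T (all matrices symmetric).
multi = obs @ np.linalg.solve(T + S, T)

mse_uni = np.mean((uni - true) ** 2)
mse_multi = np.mean((multi - true) ** 2)
print(mse_uni, mse_multi)   # multivariate estimate has lower error
```

The gain comes entirely from the off-diagonal of T: as the latent correlation approaches zero, the two estimators coincide, which is the quantitative version of the chapter's question about whether subscales should be scored in isolation.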

Type: Chapter
In: Outcomes Assessment in Cancer: Measures, Methods and Applications, pp. 465-479
Publisher: Cambridge University Press
Print publication year: 2004


