
P0076 - Assessing the ability of rater training to achieve good-to-excellent inter-rater reliability on the HAM-A using kappa statistics

Published online by Cambridge University Press: 16 April 2020

A. Kott
Affiliation:
United BioSource Corporation, Prague, Czech Republic
D. Cicchetti
Affiliation:
Departments of Psychiatry and Epidemiology and Public Health in Biometry, Yale University School of Medicine, New Haven, CT, USA
O. Markovic
Affiliation:
United BioSource Corporation, Prague, Czech Republic
C. Spear
Affiliation:
United BioSource Corporation, Wayne, PA, USA

Abstract

Background:

Clinical trials in psychiatry rely on subjective outcome measures, and poor inter-rater reliability can negatively impact signal detection and study results. One approach to this challenge is to limit the number of raters, thereby decreasing the expected variance. However, sample size requirements, even those based on high reliability, often necessitate many sites. Comprehensive rater training, combined with validated assessment of inter-rater reliability at study initiation and throughout the study, is therefore critical to ensuring high inter-rater reliability. This study examined the effect of rater training and assessment on inter-rater variance in clinical studies.

Methods:

After rigorous training on the administration and scoring guidelines of the HAM-A, 286 raters independently reviewed and assessed a videotaped HAM-A interview of a GAD patient. Measures of inter-rater agreement across the pool of raters, as well as for each individual rater relative to all other raters, were calculated using kappa statistics modified for situations in which multiple raters assess a single subject [1].
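The abstract cites a kappa statistic modified for multiple raters assessing a single subject [1], but the exact formulation is not reproduced here. As a rough illustration of the general approach, the sketch below computes Fleiss' kappa, a standard chance-corrected agreement statistic for multiple raters. The function name, the toy data, and the framing of each HAM-A item as a rated unit are assumptions for illustration only, not details taken from the study.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for n raters assigning one of k categories to N units.

    counts : array of shape (N, k); counts[i, j] is the number of raters
             who gave category j to unit i. Every row must sum to the same
             number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    N, _ = counts.shape
    n = counts.sum(axis=1)[0]  # raters per unit (assumed constant)

    # Per-unit observed agreement.
    p_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (N * n)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)


# Hypothetical example: 5 raters scoring three HAM-A items on a 0-4 scale.
# Rows are items; columns count how many raters chose score 0, 1, 2, 3, 4.
example = [
    [0, 1, 4, 0, 0],   # most raters scored this item a 2
    [0, 0, 1, 4, 0],
    [3, 2, 0, 0, 0],
]
print(round(fleiss_kappa(example), 3))
```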

Results:

The overall level of inter-rater agreement was excellent (kappa = .889), with levels of inter-rater agreement for each individual rater relative to all other raters ranging from .514 to .930. Of the 286 participating raters, 97.2% (278) achieved inter-rater agreement greater than 0.8.
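The "good-to-excellent" wording in the title reflects widely used benchmarks for chance-corrected agreement (e.g., Cicchetti's guidelines: below .40 poor, .40-.59 fair, .60-.74 good, .75-1.00 excellent). The helper below is a hypothetical sketch of how individual raters' kappa values could be bucketed against those benchmarks; it is not part of the study's reported analysis.

```python
def agreement_level(kappa):
    """Map a kappa value to the commonly cited Cicchetti benchmarks."""
    if kappa < 0.40:
        return "poor"
    if kappa < 0.60:
        return "fair"
    if kappa < 0.75:
        return "good"
    return "excellent"

print(agreement_level(0.889))  # -> "excellent" (overall kappa in the abstract)
print(agreement_level(0.514))  # -> "fair" (lowest individual rater kappa)
```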

Conclusion:

This study demonstrates that robust rater training can produce high levels of agreement among large numbers of site raters, on both an overall and an individual-rater basis, and highlights the potential benefit of excluding raters with inter-rater agreement below 0.8 from study participation.

Type
Poster Session II: Anxiety Disorders
Copyright
Copyright © European Psychiatric Association 2008