
Team-based infection preventionist review improves inter-rater reliability in identification of healthcare-associated infections

Published online by Cambridge University Press: 02 October 2024

Alyssa Castillo*
Affiliation:
Division of Infectious Diseases, University of Colorado School of Medicine, Aurora, CO, USA; Infection Prevention and Control, University of Colorado Hospital, Aurora, CO, USA
Meghan Hudziec
Affiliation:
Infection Prevention and Control, University of Colorado Hospital, Aurora, CO, USA
Sarah Elizabeth Totten
Affiliation:
Infection Prevention and Control, University of Colorado Hospital, Aurora, CO, USA
Larissa Pisney
Affiliation:
Division of Infectious Diseases, University of Colorado School of Medicine, Aurora, CO, USA; Infection Prevention and Control, University of Colorado Hospital, Aurora, CO, USA
*Corresponding author: Alyssa Castillo; Email: Alyssa.Castillo@ucdenver.edu

Abstract

Accurate reporting of healthcare-associated infections (HAIs) to the National Healthcare Safety Network (NHSN) is a critical function of infection prevention and control (IPC) teams. Validation was performed to increase inter-rater reliability in HAI adjudication among infection preventionists. Benefits included improved data integrity, enhanced team performance, and individual growth.

Type
Concise Communication
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Society for Healthcare Epidemiology of America

Background

Infection Prevention and Control (IPC) programs are responsible for surveillance and mitigation strategies to prevent healthcare-associated infections (HAIs), including but not limited to central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), and surgical site infections (SSIs). Surveillance definitions for HAIs are set by the National Healthcare Safety Network (NHSN). Accurate HAI adjudication is critical to ensuring data integrity given the wide-reaching impact of HAI metrics on public reputation and value-based purchasing (including hospital reimbursement and penalties); this benchmarking system requires reliable application of surveillance definitions across institutions to ensure meaningful comparison with peer hospitals.

Though NHSN surveillance definitions are clearly delineated, several studies have demonstrated that their application varies. For example, 19 state health departments conducted CAUTI validation and found a classification error rate of 2.4% from 2015 to 2020, with 66% of misclassified cases underreported and 34% overreported [1]. Similarly, a 2018 publication of the validation efforts of 23 state health departments showed a pooled CLABSI error rate of 4.4% [2]. Marked underreporting of colon SSIs has also been documented: in one blinded, retrospective review of 30 Connecticut acute care hospitals in 2012, 34% of cases were not reported to NHSN [3]. There are several potential causes of HAI misclassification, including variation in case-finding methods, inaccurate interpretation of surveillance definitions, overreliance on clinical judgment, and denominator inaccuracy (due to coding errors or data integrity issues) [1-3].

NHSN clearly states that failure to meet clinical definitions for infection should not result in non-reporting of cases that meet surveillance definitions, as "the application of these standardized criteria, and only these criteria, in a consistent manner allows confidence in aggregation and analysis of data" [4]. It has been recommended that IPC programs "participate in external validation programs if available within their states and intermittently perform internal validation for agreement among infection preventionists (IPs) within the same program" [5]. Surveillance notably comprises 25.4% of IP effort and is thus a key skill for IPs to master [6]. However, little has been published about the optimal structure and outcomes of internal validation programs.

We hypothesized that creating a systematic internal validation review would increase consistency in the application of NHSN surveillance definitions across the UCHealth metropolitan region.

Methods

The UCHealth metropolitan region comprises four hospitals and three ambulatory surgical centers. Ten full-time IPs review all cases of potential CLABSI, CAUTI, and SSI to adjudicate whether NHSN surveillance definitions for HAI are met. The IPs had a median of 4.4 years of experience working as a hospital-based IP (IQR 2.5 years), and 90% had earned CIC certification.

In October 2021, the UCHealth metropolitan region IP team structure transitioned from a subject matter expert (SME) model (in which each IP reviews all potential cases for a specific HAI) to a unit-based model (in which each IP reviews all HAIs on their assigned units in addition to performing SSI surveillance for one to three surgery types, such as hysterectomy or total knee arthroplasty). The goal of this organizational change was to create redundancy in IP skills. However, given that IPs began conducting surveillance on less-familiar HAIs, the team concurrently instituted weekly “Inter-rater Reliability (IRR)” meetings to review all potential cases of HAI as a group.

From August 17, 2022, through December 22, 2023, the IPC team convened one-hour IRR meetings via teleconference. Attendees included all UCHealth metropolitan region IPs, the IPC manager, and the IPC medical directors. During these meetings, each IP presented key details for all cases evaluated for CLABSI and CAUTI on their respective units and for their assigned SSI types. To maximize efficiency, an online form was created; once all required patient details were entered, the case was automatically uploaded to a line list, and a slide was populated for presentation (see Figure 1). During the meeting, IPs received feedback from the team, and a final case determination was reached by team consensus through open dialogue with medical director input. If interpretation remained discordant, a formal inquiry was sent to NHSN, and responses were collated for future reference.
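For teams aiming to reproduce this workflow, the form-to-line-list-to-slide pipeline can be scripted with common tools. The sketch below is a minimal illustration, not the authors' implementation: the field names and the choice of Python with the python-pptx library are assumptions, as the article does not specify the underlying tooling.

```python
# Minimal sketch of a form-to-line-list-to-slide pipeline (assumed design;
# the authors' actual tooling and field list are not specified).
# Requires: pip install python-pptx
import csv
from pathlib import Path

from pptx import Presentation

LINE_LIST = Path("hai_line_list.csv")
FIELDS = ["mrn", "event_type", "unit", "event_date", "summary"]  # assumed fields


def record_case(case: dict, deck) -> None:
    """Append one submitted case to the line list and add a review slide."""
    is_new = not LINE_LIST.exists()
    with LINE_LIST.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(case)

    # "Title and Content" layout: one slide per case for the IRR meeting.
    slide = deck.slides.add_slide(deck.slide_layouts[1])
    slide.shapes.title.text = f"{case['event_type']} ({case['unit']})"
    body = slide.placeholders[1].text_frame
    body.text = f"MRN: {case['mrn']}  Event date: {case['event_date']}"
    body.add_paragraph().text = case["summary"]


deck = Presentation()
record_case(
    {"mrn": "0000000", "event_type": "SSI", "unit": "COLO",
     "event_date": "2023-01-15", "summary": "Fictitious demonstration case."},
    deck,
)
deck.save("irr_meeting.pptx")
```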

Figure 1. IRR slide template for surgical site infection (SSI) cases.

This figure depicts an example of the information collected on each SSI case adjudicated by the infection preventionists in the online form, which auto-populated into the line list and automatically generated the slide shown. Data included in this figure are fictitious and for demonstration purposes only. Abbreviations: DOB, date of birth; MRN, medical record number; SSI, surgical site infection; Dx, diagnosis; BMI, body mass index.

The number of cases reviewed, case determinations changed, and formal inquiries to NHSN were tracked.
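These tracked outcomes can be tallied directly from the line list. The following is a hypothetical sketch: the column names (initial_determination, final_determination, nhsn_inquiry) are invented for illustration, since the article does not enumerate the form fields.

```python
# Hypothetical sketch: summarize tracked outcomes from the line list CSV.
# Column names are assumptions made for illustration.
import csv
from collections import Counter


def summarize(line_list_path):
    """Count cases reviewed, determinations changed, and NHSN inquiries."""
    tally = Counter()
    with open(line_list_path, newline="") as f:
        for row in csv.DictReader(f):
            tally["reviewed"] += 1
            if row["final_determination"] != row["initial_determination"]:
                tally["determination_changed"] += 1
            if row.get("nhsn_inquiry") == "yes":
                tally["nhsn_inquiries"] += 1
    return tally


print(summarize("hai_line_list.csv"))
```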

Results

During the study period, the IP team convened 71 weekly IRR meetings and reviewed 609 potential HAI cases; 56 were evaluated for CAUTI, 165 for CLABSI, and 388 for SSI (see Table 1). Of these, 486 (79.8%) were confirmed as HAIs. Based on collaborative team review, 41 cases (41/609, 6.7%) were changed from reportable to non-reportable—including 19 cases evaluated for CLABSI and 22 evaluated for SSI. Six cases (6/609, 1.0%) were changed from non-reportable to reportable—1 CAUTI case, 1 CLABSI case, and 4 SSI cases. Nineteen reportable cases (19/609, 3.1%) remained reportable but required a change in definition: the depth of infection was changed in 18 SSI cases, and one case of secondary BSI attribution was changed. A total of 29 formal inquiries were sent to NHSN to clarify surveillance definitions.

Table 1. Summary of HAI adjudication changes

Abbreviations: HAI, healthcare-associated infection; IRR, inter-rater reliability; CAUTI, catheter-associated urinary tract infection; CLABSI, central line-associated bloodstream infection; MBI, mucosal barrier injury; LCBI, laboratory-confirmed bloodstream infection; BSI, bloodstream infection; SSI, surgical site infection; PATOS, present at time of surgery; NHSN, National Healthcare Safety Network.

* Confirmed Cases = Cases confirmed to represent HAI; includes cases identified as PATOS or meeting alternate NHSN exclusion criteria.

** Near Miss Cases = Cases reviewed that are suspected to be true infections but do not meet NHSN surveillance criteria.

*** Definition Change = Change in SSI depth of infection.

Discussion

This study demonstrates a novel team-based structure to internally validate HAI reporting across a multi-hospital healthcare system. Our experience demonstrates that this model is a practical tool with beneficial impacts in three domains:

Assuring data integrity and timeliness

Team-based review of HAI cases improves consistency in application of NHSN case definitions. This was highlighted in the setting of CLABSI, as adjudication was changed in 21 cases (21/165, 12.7%) after IRR. In addition, the dynamic team discussion highlights areas of uncertainty in interpretation of NHSN surveillance definitions; as a result, the team developed a systematic way to query NHSN and collate replies to ensure the knowledge gained is maintained for future use. Finally, the weekly IRR meeting structure creates accountability for IPs in providing real-time reporting of newly identified HAI cases, allowing trends to be quickly identified.

Enhancing team performance and awareness

Within a unit-based IP model, individual team members often work independently, functioning as a representative of the IPC team on their respective units. This allows IPs to build strong, effective relationships with unit leadership and front-line staff; it also permits IPs to note rate trends across HAIs, allowing for more expeditious evaluation of root causes. Though each IP functions independently, the IRR meeting structure breaks down these silos, fostering opportunities for collaboration. For example, team members gain in-depth awareness of each metric’s global performance, share best practices, and collectively identify effective interventions.

Accelerating individual professional growth

This model fosters the development of individual IPs. During IRR meetings, IPs learn from one another's questions and clarifications, lessons that would be missed if cases were reviewed with IPC medical directors only in ad hoc conversations. In addition, it offers senior IPs the opportunity to provide peer mentoring to early-career IPs. Finally, the IRR structure strengthens IPs' knowledge of all HAI definitions, making them more well-rounded and better equipped for future leadership positions.

There are important challenges to the external application of our team-based internal validation model and meeting structure. First, this model functions best within a unit-based IPC team structure, as it requires engagement of all team members for dynamic discussion. Second, the IRR meeting requires an additional hour-long meeting per week plus associated preparation time. Because this internal review process represents an investment of time, it requires hospital support and an institutional commitment to data integrity. Finally, it is critical to note that imperfect inter-rater reliability is very likely present to an equal or greater extent across institutions. As such, hospital leadership buy-in is particularly important given that HAI standardized infection ratios (SIRs) may temporarily increase as data integrity improves, which may be perceived as worsening performance relative to peer hospitals with less accurate reporting. Ultimately, valid comparison and accurate benchmarking will require structures that ensure high IRR across institutions as long as case definitions require clinical judgment (in contrast to LabID metrics, which require no case review).
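To make the benchmarking concern concrete: an SIR is the number of observed HAIs divided by the number predicted by NHSN's risk-adjustment model, so capturing previously missed cases raises the numerator while the predicted denominator is unchanged. The arithmetic below is a sketch using fictitious counts.

```python
# Illustrative arithmetic only; all counts are fictitious.
def sir(observed, predicted):
    """Standardized infection ratio: observed HAIs / NHSN-predicted HAIs."""
    return observed / predicted


predicted = 20.0                 # from the NHSN risk-adjustment model
before = sir(18, predicted)      # under-ascertainment: some true HAIs missed
after = sir(22, predicted)       # improved case-finding captures missed HAIs
print(f"SIR before validation: {before:.2f}; after: {after:.2f}")
# 0.90 -> 1.10: the SIR rises even though the true infection rate is unchanged.
```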

In summary, this model offers a unique, team-based approach to internal validation of HAIs, improving consistency in the application of NHSN case definitions across a multi-hospital system. In addition, we found that this structure offered opportunities to enhance team performance and accelerate individual IP professional growth.

Acknowledgments

We greatly appreciate the time and effort of the Infection Prevention and Control staff at the UCHealth Metropolitan Region hospitals who participated in this intervention and contributed to optimization of this workflow. We also acknowledge the UCHealth Quality & Safety Department for its emphasis on the value of data integrity.

Financial support

No financial support was provided relevant to this article.

Competing interests

All authors report no conflicts of interest relevant to this article.

References

1. Bagchi S, Watkins J, Norrick B, Scalise E, Pollock DA, Allen-Bridson K. Accuracy of catheter-associated urinary tract infections reported to the National Healthcare Safety Network, January 2010 through July 2018. Am J Infect Control 2020;48:207-211.
2. Bagchi S, Watkins J, Pollock DA, Edwards JR, Allen-Bridson K. State health department validations of central line-associated bloodstream infection events reported via the National Healthcare Safety Network. Am J Infect Control 2018;46:1290-1295.
3. Backman LA, Carusillo E, D'aquila LN, Melchreit R, Fekieta R. Validation of surgical site infection surveillance data in colon procedures reported to the Connecticut Department of Public Health. Am J Infect Control 2017;45:690-691.
4. National Healthcare Safety Network (NHSN) Frequently Asked Questions: Q5 "Surveillance vs. Clinical". Centers for Disease Control and Prevention website. https://www.cdc.gov/nhsn/faqs/faqs-miscellaneous.html#q5. Accessed July 10, 2023.
5. Dhar S, Sandhu AL, Valyko A, Kaye KS, Washer L. Strategies for effective infection prevention programs: structures, processes, and funding. Infect Dis Clin N Am 2021;35:531-551.
6. Landers T, Davis J, Crist K, Malik C. APIC MegaSurvey: methodology and overview. Am J Infect Control 2017;45:584-588.