Public Reporting of Central-line Infections Called into Question
Abstract & Commentary
By Joseph F. John, MD, FACP, FIDSA, FSHEA, Associate Chief of Staff for Education, Ralph H. Johnson Veterans Administration Medical Center; Professor of Medicine, Medical University of South Carolina, Charleston. Dr. John is Associate Editor for Infectious Disease Alert.
Dr. John reports no financial relationships relevant to this field of study.
Source: Lin MY, et al. Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. JAMA. 2010;304:2035-2041.
Central line-associated infections, particularly bloodstream infections (BSI), remain a huge issue in our technological age. Four academic medical centers were used to accumulate 165,963 central-line days associated with 241,518 patient days. Using the electronic medical record, an algorithm determined whether a BSI had occurred. These results were compared with the determinations of infection preventionists, who used routine infection-control activity to decide whether a central line-associated BSI had occurred. The median rates determined by the two methods differed significantly (p < 0.001), with the preventionists finding 3.3 infections per 1,000 central-line days and the algorithm finding 9.0. The so-called goodness of fit represented how closely the observations clustered around the regression line; the fit varied widely. For example, for an algorithm rate of 9 per 1,000 central-line days, the corresponding infection-preventionist observations ranged, depending on the hospital, from 1.1 to 4.9. Ironically, the hospital designated hospital C had the lowest infection-preventionist rate (2.4) and the highest corresponding algorithm rate (12.6). The authors were surprised by the degree of variability between the two methods of determining central line-associated BSIs.
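The rates above use the standard denominator of 1,000 central-line days. A minimal sketch of that arithmetic follows; the function name and the illustrative hospital figures are hypothetical, not taken from the study:

```python
def bsi_rate_per_1000_line_days(infections: int, central_line_days: int) -> float:
    """Central line-associated BSI rate per 1,000 central-line days."""
    return infections / central_line_days * 1000.0

# A hypothetical hospital with 33 infections over 10,000 central-line days
# would report a rate of 3.3 per 1,000 central-line days.
rate = bsi_rate_per_1000_line_days(33, 10_000)
```

The same denominator is what allows the preventionist and algorithm rates to be compared directly across hospitals of different sizes.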
These differences between the infection-preventionist rates for central line-associated BSI and the computer-algorithm rates are unnerving, to say the least, particularly because the infection-preventionist rates are the ones sent out for public review. The authors, who were, in general, associated with the surveillance programs in their respective hospitals, postulate several reasons for the discrepancies: the quality of the infection-preventionist reviews, the type of chart review performed, variation in local culturing practices, and the rigor of medical record documentation.
Whatever the reason, these rates can vary greatly, and there is no true gold standard. The computer algorithm is simply another way to look at the rate, but if it proves more valid than the infection preventionists' observations, we need to find more consistent ways to determine whether a central-line BSI has occurred (see Woeltje KF, et al. Infect Control Hosp Epidemiol. 2008;29:842-846).
Coagulase-negative staphylococci can cause true confusion in studies like these. The definition for organisms like S. epidermidis as the cause of infection required two positive cultures of the same species within 2 hospital days, or a single positive culture with vancomycin administered within the 2 subsequent days. We are not told whether the staphylococci were indeed speciated or whether all isolates were considered S. epidermidis. We need better technology in the future to resolve this type of issue regarding skin commensals.
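The criteria just described are a classification rule, and part of the algorithm's appeal is that such a rule can be applied mechanically to electronic records. A minimal sketch of how it might be encoded, assuming simple date-stamped culture and pharmacy data; the function and field names are hypothetical and this is not the study's actual implementation:

```python
from datetime import date, timedelta

def cons_is_true_infection(cultures, vancomycin_dates):
    """Classify coagulase-negative staphylococcal cultures as a probable
    true infection rather than a skin contaminant, per the criteria above:
      1) two positive cultures of the same species within 2 hospital days, or
      2) a single positive culture with vancomycin administered within the
         2 subsequent days.
    cultures: list of (date, species) tuples for positive cultures.
    vancomycin_dates: dates on which vancomycin was administered.
    """
    window = timedelta(days=2)
    for i, (d1, species1) in enumerate(cultures):
        # Criterion 1: a second culture of the same species within 2 days.
        for d2, species2 in cultures[i + 1:]:
            if species1 == species2 and abs(d2 - d1) <= window:
                return True
        # Criterion 2: vancomycin given within the 2 days after the culture.
        if any(timedelta(0) <= v - d1 <= window for v in vancomycin_dates):
            return True
    return False
```

Note that criterion 1 depends on speciation: if all isolates were simply labeled S. epidermidis, the same-species test would match more often, which is exactly the ambiguity the commentary raises.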
The authors are to be commended for tackling this issue and for such a massive study. They have uncovered a potential trend in variable reporting of central line-associated BSI, important because of the public reporting of such data. If, as the authors imply at the outset, public reporting is to be promoted as improving patient safety, the public deserves the very best data that our systems can deliver.