Special Feature

Computerized Medical Databases in the ICU

By Gordon D. Rubenfeld, MD, MSc

Measuring the quality of a complex service like critical care that combines the highest technology with the most intimate caring is a challenge. Recently, consumers, clinicians, and payers have requested more formal assessments and comparisons of the quality and costs of medical care.1 National reports have focused clinicians on the public health effects of medical errors and poor quality care.2,3 Computerized medical databases offer the opportunity to measure the quality of medical care and, at least theoretically, provide the opportunity to improve that care.

Computerized Medical Databases

There are 2 general types of computerized databases used to assess the quality of medical care: the electronic medical record and an administrative database. Examples of electronic medical records are Eclipsys (Sunrise Critical Care), CareVue Clinical Information System (Philips Medical Systems), MetaVision (iMDsoft), and CareSuite (Picis). There are several types of administrative databases, including state and federal reporting (such as State Inpatient Databases4), claims and billing data (such as Medicare Provider and Analysis Review [MedPAR] files), and quality improvement databases (such as Cleveland Health Quality Choice,5 APACHE Medical Systems,6 Project IMPACT,34 and New York CABG Registry).27

One type of computerized database is the administrative database, collected for purposes other than the direct delivery of medical care. These databases are usually collected for billing, but some are collected specifically to assess quality of care and report outcomes. For example, the State Inpatient Databases were developed as part of the Healthcare Cost and Utilization Project (HCUP), a federal-state-industry partnership sponsored by the Agency for Healthcare Research and Quality to inform decision making at the national, state, and community levels.4 The State Inpatient Databases cover inpatient care in hospitals in 33 states, representing about 85% of all US hospital discharges. Elements in this database include diagnoses, procedures, admission and discharge status, patient demographics, expected payment source, total charges, and length of stay. While administrative databases are ideal for evaluating some aspects of quality across groups of hospitals, they have important limitations for studying the quality of critical care: intensive care-specific diagnoses and procedures are not coded; it can be difficult to distinguish admission diagnoses from comorbidities and complications; and physiologic severity-of-illness data are limited. Some administrative databases collected specifically for evaluating the quality of intensive care may address these limitations.5,6

More recently, the electronic medical record has become a source of data for assessing and improving the quality of medical care.7 The electronic medical record is designed to replace all or part of the paper medical chart, and it addresses some of the limitations of administrative databases. It is clinically rich, containing all of the detail of the original medical record, including the physiologic and biochemical variables needed for critical care risk adjustment and diagnosis. The data are entered by the clinicians actually caring for the patient rather than coded by lay personnel. While data from an electronic medical record are ideally suited to local quality improvement initiatives, it is difficult to use these data across multiple institutions unless they all use the same electronic medical record: variations in format make data exchange and comparability across systems a challenge.

Garbage In = Garbage Out

This age-old programmer’s dictum applies strongly to electronic medical record data. The sheer volume of data collected automatically by many systems guarantees neither its quality nor its freedom from bias. Many functions designed to enhance usability (for example, those that allow users to copy data from one day to the next) can perpetuate errors rather than reduce them. Some data elements are particularly difficult to analyze: free-text fields, for example, are more prone to error than pick-lists or check boxes. Diagnostic and clinical data elements are problematic, and their usefulness will depend on who enters them and how. For example, the diagnostic and comorbidity elements in the APACHE and Mortality Probability Model (MPM) scoring systems are designed to be coded by specific rules. If users calculating these scores do not follow the same coding rules as the developers, the scores will not be accurate.8,9
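The copy-forward problem can itself be screened for mechanically. The sketch below (a minimal illustration, with an invented function name and an arbitrary run-length threshold) flags runs of identical consecutive daily values, which are often charting artifacts rather than true repeated measurements.

```python
def flag_copy_forward(daily_values, min_run=3):
    """Return (start, end) index pairs where the same charted value
    repeats on at least min_run consecutive days.

    A long run of identical values (e.g., the same daily weight for a
    week) is often a copy-forward artifact rather than a measurement.
    """
    runs = []
    start = 0
    for i in range(1, len(daily_values) + 1):
        # A run ends at the end of the series or when the value changes.
        if i == len(daily_values) or daily_values[i] != daily_values[start]:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = i
    return runs

# Four identical weights in a row are flagged; a 2-day repeat is not.
suspect = flag_copy_forward([80, 80, 80, 80, 76, 76])  # -> [(0, 3)]
```

A real screen would, of course, tune the threshold per variable; a stable ventilator setting can legitimately repeat for days, whereas an unchanging arterial blood gas cannot.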

The electronic medical record is essentially a database and, as with all databases, the most important design step is for users to ask themselves, "What questions will I use this database to answer?" To answer this, users should generate mock tables and reports that will guide the selection of data elements and how they are defined. An important issue to consider is how patients with specific critical care syndromes will be identified. To measure quality of care in patients with acute lung injury (ALI), sepsis, or ventilator-associated pneumonia, one must be able to reliably identify patients with these syndromes. One option is to rely on physician identification: physicians simply check a box for patients they diagnose with these syndromes. However, this will identify only the patients whom physicians recognize as having the syndrome. Limiting diagnoses to those recognized by physicians is likely to misrepresent quality of care because it eliminates diagnostic inaccuracy as a factor in poor quality of care. Unfortunately, identifying patients with critical care syndromes independently of physician recognition using electronic medical record data requires fairly sophisticated computer algorithms and some preplanning. For example, a study intended to see whether physicians use appropriate low tidal volume ventilation in patients with acute lung injury should evaluate process of care both in patients recognized by physicians as having ALI and in those who meet the diagnostic criteria but were not diagnosed with the syndrome. To do this using data from an electronic medical record would require that the 3 diagnostic criteria (hypoxemia, chest radiographic opacities, and exclusion of clinical evidence of left atrial hypertension as the primary explanation of respiratory failure) be available independently in the database. Similarly, evaluating process of care in patients with sepsis would require that data are entered in a way that allows them to be screened independently for the diagnostic criteria (systemic inflammatory response, infection, and organ dysfunction).
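A screen of this kind can be sketched in a few lines. The field names below (pao2, fio2, bilateral_opacities, cardiogenic_edema, ali_diagnosed) are hypothetical and would have to be mapped to the local electronic record's schema; the logic follows the 3 criteria listed above, using the published PaO2/FiO2 ratio cutoff of 300 for ALI.

```python
def meets_ali_criteria(record):
    """Apply the 3 ALI criteria to one patient record (a dict with
    hypothetical field names; a real screen maps these to the local
    electronic medical record schema)."""
    ratio = record["pao2"] / record["fio2"]   # hypoxemia criterion
    return (
        ratio <= 300
        and record["bilateral_opacities"]     # chest radiograph criterion
        and not record["cardiogenic_edema"]   # exclude left atrial hypertension
    )

def screen_cohort(records):
    """Split patients meeting ALI criteria into physician-recognized
    and unrecognized groups, so process of care can be compared."""
    recognized, unrecognized = [], []
    for r in records:
        if meets_ali_criteria(r):
            (recognized if r.get("ali_diagnosed") else unrecognized).append(r)
    return recognized, unrecognized
```

The point of the second function is exactly the study design described above: tidal volume practice can then be audited in both groups, so that diagnostic inaccuracy is measured rather than hidden.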

Measuring ICU Process and Outcome with Computerized Medical Databases

There is an extensive body of literature documenting the inappropriate delivery, both under- and over-provision, of medical care. Despite a mature evidence base, and even when conservative criteria for case selection are used, a significant proportion of patients do not receive appropriate aspirin, heparin, thrombolytic therapy, or beta antagonists in the setting of acute coronary syndromes.3 Similar evidence on the implementation of effective practices in critical care is lacking. For example, we have relatively little evidence from large, community-based cohorts on the implementation of noninvasive ventilation for COPD exacerbation, low tidal volume ventilation for acute lung injury, activated protein C for severe sepsis, renal replacement therapy, or the use of daily trials of spontaneous breathing to wean patients from mechanical ventilation.

The use of the word "outcome" in research can be confusing.1 It can refer to any dependent variable in an analysis. For example, in a clinical trial of recombinant human erythropoietin in the ICU, the outcome variable was use of blood transfusions.10 However, in the context of the Donabedian model of measuring quality, and in the field of outcomes research, outcomes refer to variables that measure what matters to patients: symptoms, quality of life, duration of life, quality of dying, the effect of their health care on their loved ones, and the cost of medical care. Because of their importance to patients, these are referred to as "patient-centered" outcomes, and they are distinct from the many chemical, physiologic, and radiographic variables that may be measured in clinical research. Ideally, clinicians will offer, insurers will pay for, and patients will have the opportunity to use treatments that have been shown to improve patient-centered outcomes. High-quality medical care is more likely to result in improved patient outcomes.

Computerized medical databases can be used to study the process and outcome of critical care as measures of quality. There are several well-described multi-center computerized medical databases of critical care, including the APACHE Medical Systems-linked group of hospitals, Project IMPACT (sponsored by the Society of Critical Care Medicine), and the Intensive Care National Audit and Research Centre in the United Kingdom, all of which collect ICU-specific data on process and outcome. Whether these databases can be used to actually measure or improve quality of care remains a topic of some debate.

Limits of Using Computerized Medical Databases to Measure and Improve Quality of Care

Perhaps the greatest challenge to using computerized medical databases to measure and improve the quality of intensive care is defining what we mean by "quality care." There are 3 major impediments to using process measures from computerized medical databases to audit the quality of intensive care. The first is that even when there is strong evidence of effectiveness in critical care, considerable disagreement remains about the exact application of treatments in specific cases.11,12 Even in fairly narrow clinical situations with strict review criteria, experts can disagree on the appropriateness of care in individual cases.13 Identifying appropriate process measures entails identifying conditions where there is consensus that a specific treatment should have been provided, and the evidence base for making these recommendations in critical care has only recently begun to evolve. The second impediment is that identifying patients with acute lung injury, to whom specific therapy should be offered, is considerably more difficult than identifying patients with acute myocardial infarction. Finally, ICU process measures are harder to capture. For example, surgical and percutaneous interventions for coronary artery disease are captured in administrative databases because they are reimbursed; however, ICU-based interventions like ventilator settings, patient positioning, and medications would require a specially designed administrative database or an electronic medical record. Other fields in medicine (eg, cardiovascular disease and renal disease) have developed networks to collect disease-specific data on process of care and outcome.14,15

The use of risk adjusted outcome, usually mortality, remains a key component in attempts to define, report, and improve quality of care.16,17 Risk adjustment is designed to address the problem of confounding, that is, centers with sicker patients will have worse outcomes and appear to deliver worse care. By adjusting for severity of illness in a multivariate model, ideally the baseline risk of death is mathematically equalized between institutions. The remaining differences in outcome are attributed to differences in structure and process of care. Unfortunately, there is now an extensive body of literature that demonstrates that risk-adjusted outcome is not a valid technique for identifying high- or poor-quality hospitals because of residual confounding, bias due to referral and upcoding of severity, and chance.18-22 These limitations are poorly recognized, as evidenced by a recent widely publicized attempt to identify the "100 Top Hospitals" and their ICUs using risk-adjusted outcomes based on administrative (non-physiologic) data from Medicare Provider Analysis and Review database.23
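The basic arithmetic of risk adjustment can be illustrated with a standardized mortality ratio (SMR): observed deaths divided by the deaths expected under a severity model's predicted probabilities. This is a minimal sketch that assumes a calibrated model such as APACHE supplies the predictions; it deliberately omits the confidence intervals and calibration checks a real inter-hospital comparison would need, and the limitations cited above apply to it in full.

```python
def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """Observed deaths divided by expected deaths, where expected
    deaths are the sum of each patient's model-predicted probability
    of death. SMR < 1.0 suggests better-than-expected outcomes, but
    only if the model is unbiased for this case mix."""
    expected = sum(predicted_probs)
    return observed_deaths / expected

# Example: 12 observed deaths in 150 patients whose predicted risks
# sum to 15 expected deaths yields an SMR of 0.8.
smr = standardized_mortality_ratio(12, [0.1] * 150)
```

The fragility the text describes lives entirely inside `predicted_probs`: referral bias, upcoding of severity, and chance all distort those inputs, which is why a low SMR cannot by itself identify a high-quality hospital.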

On Sept. 22, 1993, President Clinton summarized the 2 goals of auditing risk-adjusted outcomes when he presented his ill-fated health care reform bill to a joint session of the United States Congress: "Our proposal will create report cards on health plans, so that consumers can choose the highest-quality health care providers and reward them with their business. At the same time, our plan will track quality indicators, so that doctors can make better and smarter choices of the kind of care they provide." Computerized medical databases have proven useful in assessing medical technology to inform clinical decisions, including the use of blood transfusions and the pulmonary artery catheter.24,25 However, the promise of using report cards based on risk-adjusted outcomes as a tool to improve health care quality by informing marketplace decisions has not been fully realized. Between 1991 and 1997, all 30 nonfederal hospitals in greater metropolitan Cleveland participated in the Cleveland Health Quality Choice (CHQC) program. Every 6 months, models were used to analyze whether participating hospitals’ observed in-hospital mortality rates were greater or less than expected, and the results were distributed in a public report. Although hospitals with poor risk-adjusted outcomes tended to lose market share over the 7-year period, this effect was neither statistically nor clinically significant.26 Feeding outcome data back to hospitals does appear to improve outcomes somewhat, but this effect does not seem to be mediated by changes in market share.27

Using the Electronic Medical Record to Improve Outcomes in Your ICU

The electronic medical record is much more than a technique to eliminate reams of paper medical records. When used correctly, the electronic medical record has been shown to be one of the few consistently effective tools to change clinician behavior and improve outcomes. There are several ways information from the electronic medical record can improve outcomes in the ICU, including: 1) computer-aided physician order entry; 2) implementing guidelines; and 3) feedback of process and outcome data as part of a quality-improvement initiative. Computer-aided physician order entry reduces medical error by preventing handwriting interpretation errors and by catching drug dose and drug interaction errors. Guideline implementation can be facilitated by using computer-generated prompts to remind physicians to implement guidelines in appropriate cases. Finally, data from the electronic medical record can be used to track the outcomes of quality-improvement initiatives and to target feedback to individual clinicians where appropriate.
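As one concrete illustration, a dose-range check of the kind an order-entry system runs at the moment an order is written might look like the sketch below. The drug table, limits, and function names are invented for illustration and are not a clinical reference.

```python
# Hypothetical per-dose adult limits in milligrams (illustrative only,
# not a clinical reference).
DOSE_LIMITS_MG = {
    "gentamicin": (60, 500),
    "heparin_bolus": (1000, 10000),
}

def check_order(drug, dose_mg):
    """Return None if the ordered dose is within the configured range,
    otherwise a human-readable warning for the ordering physician."""
    if drug not in DOSE_LIMITS_MG:
        return f"no dose range on file for {drug}"
    low, high = DOSE_LIMITS_MG[drug]
    if dose_mg < low:
        return f"{drug}: {dose_mg} mg is below the usual minimum of {low} mg"
    if dose_mg > high:
        return f"{drug}: {dose_mg} mg exceeds the usual maximum of {high} mg"
    return None
```

Because the order is structured data rather than handwriting, the same entry that triggers the warning can also feed the interaction checks, guideline prompts, and quality-improvement tracking described above.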

Ethics of Using Data From Computerized Medical Databases

Many uses of computerized medical record data fall under the category of routine clinical care and therefore are not covered by research requirements for institutional review. This might include using the electronic medical record to profile physicians and feed back data regarding antibiotic use; to identify resistance patterns in bacteria in the ICU; or to see whether the purchase of new antibiotic-coated catheters has reduced the incidence of catheter-related bacteremia. Analysis of the electronic medical record for these local quality improvement activities does not require Institutional Review Board approval and may not require informed consent from patients because it is part of ongoing clinical practice review (see Figure). However, the line between research and quality improvement is not well established.28 The investigator’s intent to publish is not a useful criterion for distinguishing research from clinical care. Nor should the need for informed consent, or the difficulty of obtaining it, be used to make this distinction; however, these are factors that an Institutional Review Board may take into account in a decision to waive informed consent. In general, any use of nonpublic medical records for research purposes should be reviewed by a research review board. Most analyses of computerized medical databases fall into the category of non-research or minimal-risk research. Institutional Review Boards should have processes in place for the expedited review of minimal-risk research covering observation of clinical practice that ensure consistent review.29

In 1996, the United States Congress passed the Health Insurance Portability and Accountability Act (HIPAA). HIPAA was created to streamline industry inefficiencies, improve access to health insurance (including for workers who change jobs), better detect fraud and abuse, and ensure the privacy and confidentiality of health care information. For the most part, HIPAA was designed to enhance patient privacy with respect to payers, employers, and pharmaceutical companies. However, this legislation also supplements the Common Rule (the United States Code of Federal Regulations that guides clinical research) regarding the privacy of clinical data. Health information can be accessed for research in 4 ways: 1) with individual informed consent; 2) with consent waived by an appropriate Institutional Review Board or Privacy Board; 3) under the constraints of a limited data set use agreement, which provides investigators with partially anonymous data and imposes less stringent requirements than waived consent; and 4) as completely anonymous data. How individual Institutional Review and Privacy Boards will interpret these categories, and what requirements they will place on investigators for maintaining records of access to an individual patient’s computerized data, remains to be seen. An excellent resource for information in this rapidly changing area is http://privacyruleandresearch.nih.gov. Accessed November 10, 2003.


Perhaps the greatest benefits of computerized medical databases are yet to be realized. As computing power shrinks, standards for wireless computing and data sharing become more robust, and clinical users become accustomed to computer interfaces early in their careers, the seamless integration of computers into clinical practice will be inevitable. Ample evidence exists that timely computer-generated prompts and decision support can influence clinical practice.30-32 In considering the electronic medical record as a research and quality improvement tool, investigators should realize that while it will certainly yield an increase in legibility, the content of the information may or may not be improved by conversion to bits. The use of networks of ICUs to test quality-improvement strategies and evaluate trends in outcomes of critical illness syndromes has begun and will be greatly facilitated by computerized medical databases.33

Dr. Rubenfeld is Assistant Professor of Medicine, Division of Pulmonary and Critical Care Medicine, University of Washington, Seattle.


1. Rubenfeld GD, et al. Outcomes research in critical care: Results of the American Thoracic Society Critical Care Assembly Workshop on Outcomes Research. The Members of the Outcomes Research Workshop. Am J Respir Crit Care Med. 1999;160(1):358-367.

2. Kohn L, Corrigan J, Donaldson M, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.

3. Institute of Medicine (U.S.). Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press; 2001.

4. State Inpatient Databases: Powerful Databases for Analyzing Hospital Care. Available at: http://www.ahcpr.gov/data/hcup/hcupsid.htm. Accessed November 10, 2003.

5. Sirio CA, et al. Cleveland Health Quality Choice (CHQC)—An ongoing collaborative, community-based outcomes assessment program. New Horiz. 1994; 2(3):321-325.

6. Knaus W, et al. APACHE III study design: Analytic plan for evaluation of severity and outcome in intensive care unit patients. Introduction. Crit Care Med. 1989;17(12 Pt 2):S176-S180.

7. Schriger DL, et al. Implementation of clinical guidelines using a computer charting system. Effect on the initial care of health care workers exposed to body fluids. JAMA. 1997;278(19):1585-1590.

8. Polderman KH, et al. Accuracy and reliability of APACHE II scoring in two intensive care units. Problems and pitfalls in the use of APACHE II and suggestions for improvement. Anaesthesia. 2001; 56(1):47-50.

9. Polderman KH, et al. Inter-observer variability in APACHE II scoring: Effect of strict guidelines and training. Intensive Care Med. 2001;27(8):1365-1369.

10. Corwin HL, et al. Efficacy of recombinant human erythropoietin in critically ill patients: A randomized controlled trial. JAMA. 2002;288(22):2827-2835.

11. Eichacker PQ, Natanson C. Recombinant human activated protein C in sepsis: Inconsistent trial results, an unclear mechanism of action, and safety concerns resulted in labeling restrictions and the need for phase IV trials. Crit Care Med. 2003;31(1 Suppl):S94-S96.

12. Eichacker PQ, et al. Meta-analysis of acute lung injury and acute respiratory distress syndrome trials testing low tidal volumes. Am J Respir Crit Care Med. 2002; 166(11):1510-1514.

13. Shekelle PG, et al. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med. 1998;338(26):1888-1895.

14. Jencks SF. HCFA’s Health Care Quality Improvement Program and the Cooperative Cardiovascular Project. Ann Thorac Surg. 1994;58(6):1858-1862.

15. Goldman RS. Continuous quality improvement in ESRD: The role of networks, the United States Renal Data System, and facility-specific reports. Am J Kidney Dis. 1998;32(6 Suppl 4):S182-S189.

16. Hadorn D, et al, eds. Assessing the Performance of Mortality Prediction Models. Santa Monica, Calif: Rand; 1993.

17. Iezzoni LI. Risk Adjustment for Measuring Health Care Outcomes. Ann Arbor, Mich.: Health Administration Press; 1994.

18. Chassin MR, et al. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med. 1996;334(6):394-398.

19. Escarce JJ, Kelley MA. Admission source to the medical intensive care unit predicts hospital death independent of APACHE II score. JAMA. 1990; 264(18):2389-2394.

20. Park RE, et al. Explaining variations in hospital death rates. Randomness, severity of illness, quality of care. JAMA. 1990;264(4):484-490.

21. Hofer TP, Hayward RA. Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996;34(8): 737-753.

22. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4): 371-404.

23. 100 Top Hospitals: ICU Benchmarks for Success—2000. Available at: http://www.100tophospitals.com/studies/icu00/methodology.asp. Accessed November 10, 2003.

24. Hebert PC, et al. Clinical outcomes following institution of the Canadian universal leukoreduction program for red blood cell transfusions. JAMA. 2003; 289(15):1941-1949.

25. Connors AF, Jr., et al. The effectiveness of right heart catheterization in the initial care of critically ill patients. SUPPORT Investigators. JAMA. 1996; 276(11):889-897.

26. Baker DW, et al. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care. 2003;41(6):729-740.

27. Hannan EL, et al. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766.

28. Casarett D, et al. Determining when quality improvement initiatives should be considered research: Proposed criteria and potential implications. JAMA. 2000;283(17):2275-2280.

29. Lynn J, et al. The ethical conduct of health services research: A case study of 55 institutions’ applications to the SUPPORT project. Clin Res. 1994;42(1):3-10.

30. Gardner RM, et al. Computerized continuous quality improvement methods used to optimize blood transfusions. Proc Annu Symp Comput Appl Med Care. 1993;[volume not listed]:166-170.

31. Balas EA, et al. Improving preventive care by prompting physicians. Arch Intern Med. 2000;160(3):301-308.

32. Bero LA, et al. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998;317(7156):465-468.

33. Keenan SP, et al. The Critical Care Research Network: A partnership in community-based research and research transfer. J Eval Clin Pract. 2000;6(1):15-22.

34. Cook SF, et al. Project IMPACT: Results from a pilot validity study of a new observational database. Crit Care Med. 2002;30(12):2765-2770.