CAHPS gains ground, but questions remain
Research papers on CAHPS are due out this fall
Even as health plans begin using the Consumer Assessment of Health Plans Survey (CAHPS), those involved in outcomes management are still deciding whether the new standard will become their own. CAHPS, a tool sponsored by the Agency for Health Care Policy and Research in Rockville, MD, forms the core of the new member survey from the National Committee for Quality Assurance (NCQA) in Washington, DC, which will be used next year with Accreditation '99.
One tool for different populations
CAHPS provides core questions for adult and pediatric versions, as well as supplemental items for Medicare and Medicaid patients, the chronically ill, and other groups. For the first time, managed care organizations can use one tool to measure and compare different populations, says Joe Carmichael, managing director of National Research Corp. in Lincoln, NE, the nation's largest health care performance measurement firm.
Yet researchers outside the CAHPS team have lacked significant information on the survey's technical aspects, such as its measurement scales and its distributional properties (whether scores cluster at the top or the bottom). Articles on the psychometric properties of the new survey, including its validity, reliability, and sensitivity, are scheduled to appear in a supplement of Medical Care later this year.
When Christina Bethell, PhD, MBA, MPH, director of accountability measurement for the Foundation for Accountability (FACCT) in Portland, OR, was preparing to field-test some measurement sets, she debated whether to use the existing satisfaction surveys or the new CAHPS tool.
"We're up against a deadline to revisit the measurement set if we want to change them for this next round of implementation," Bethell says. "A huge question is whether we just grab the CAHPS scales and include them across all measurement sets."
She ultimately decided to use them, based on the available information and her desire to conform to an emerging standard.
In contrast, medical literature includes numerous articles on the psychometric properties of evaluative, or ratings-based, questionnaires.
"We know from two decades of research in this area that, except for technical aspects, patients' assessments of their care are very accurate and highly reflective of what's going on with their care," says Dana Gelb Safran, ScD, senior policy analyst at The Health Institute of the New England Medical Center in Boston.
CAHPS made an impact as soon as early versions became available. It was developed by a team of researchers from Research Triangle Institute in Research Triangle Park, NC; RAND in Santa Monica, CA; and Harvard Medical School in Boston.
To date, it has been used by more than 10 states. The Health Care Financing Administration recently administered it to about 130,000 Medicare patients, garnering a response rate of 74%.
The desire for one clear standard of patient assessment bolstered CAHPS' influence. Two competing surveys of health plans would confuse consumers, says Joe Thompson, MD, former assistant vice president for research and measures development at the NCQA in Washington, DC. He is now a member of the pediatric faculty at the University of Arkansas School of Medicine in Little Rock.
So NCQA considered and then adopted a version of CAHPS that incorporates some elements from the NCQA health plan member survey, he says. "There is a fair degree of dissonance and substantively different information out there for consumers to use to compare their health plans," Thompson says. "It was confusing individuals rather than helping individuals."
Will there still be multiple surveys?
Yet users of the new survey face a different dilemma. While health plan report cards may use CAHPS survey results, the plans, medical groups, and hospitals are likely to hang on to at least some of the prior survey to maintain consistency, predicts Carmichael.
Plans need historical data from identical survey questions, Carmichael says, particularly if they tie physician or employee compensation to specific satisfaction ratings contained in the prior survey. "An important part of the measurement is the tracking of changes over time."
Bethell lauds a tool whose results can be compared across patient populations, and one that represents a national standard. But at the same time, she needed to decide whether it would adequately serve FACCT's purpose, which is to assess condition-specific care such as depression and coronary artery disease.
"It's most powerful to have a standard survey that you use, comparing apples to apples," she says. "But apples aren't right for everyone. That's the issue. Are the scales the right ones for all [patient] subgroups?"