Think through issues in satisfaction survey change

Apples to oranges comparison problematic

After an affiliation with Columbia/HCA Healthcare Corp., Riverside (CA) Community Hospital switched from using Press, Ganey Associates in South Bend, IN, to The Gallup Organization in Lincoln, NE. Which methodology - telephone interviews or mail-in questionnaires - results in the more accurate patient satisfaction survey?

"Marketing researchers, administrators, and physicians all have opinions on which methodology is best, but both basic methods have their pros and cons," says Lynne Cunningham, MPA, FACHE, a health care marketing research and strategic planning consultant based in Sacramento. Cunningham recently compared patient satisfaction results in the emergency department at Riverside.

"We wanted to examine this issue carefully because emergency room physicians and staff often question the validity of customer satisfaction data because they feel like they specialize in dealing with dissatisfied patients," she says.

Richard Guth, MD, medical director of the emergency department at Riverside Community Hospital, explains why ED personnel may resist measuring customer satisfaction.

· A high percentage of ED patients are acutely ill or in pain.

· ED patients feel abandoned by their regular providers.

"Their personal physician may be out of town, not on call, too busy in the office, or simply not in the mood to deal with Mrs. Jones and her migraine headaches or Mr. Smith and his anxiety attacks," he says. "In these situations, patients are often directed to go to the ED."

· Cost pressures often result in chronic understaffing of EDs.

"Staffing is typically based on average daily census rather than on peak volume adjusted for patient acuity," he explains. "So when several patients arrive simultaneously, the department is ill-equipped to care for them, stresses mount, and customer relations suffer."

· The ED serves as a "safety net" provider for the indigent.

"Many patients come for a variety of complex reasons related to the breakdown of their support system," he says. "For example, the homeless may come looking for a meal or a warm bed for the night. The abused, the lonely, the addicted, the insane, the depressed, and sometimes even the bored or sleepless end up at the ED. So it's not surprising the staff tend to resist the concept that every patient is a valued customer."

To find out which methodology yielded the most accurate results, Guth and Cunningham began by comparing the Press, Ganey data from mail-in surveys from April to July 1997 to that collected via telephone interviews by Gallup from October to December of that same year.

"It was very difficult to make accurate comparisons because there is no consistency in the wording of questions between the surveys," Cunningham points out.

While direct comparison of results is inappropriate, common areas of both reports can be identified, cautions Robert Nielsen, Gallup's senior vice president and senior health care consultant.

For example, both surveys asked about the following:

· registration process;

· overall nursing care;

· quality of emergency physicians;

· overall satisfaction;

· X-ray and lab services;

· keeping family and friends informed.

In the first four areas, survey results improved 8% to 13% in the telephone survey. Satisfaction with X-ray and lab services decreased by 1%, and satisfaction with the ED's job of informing family and friends increased by 2%. (See table of comparison, above.)

"Yet during the period of the second survey, the ED was impacted by several variables which would be expected to lower patient satisfaction," Guth notes.

For example, a severe flu epidemic filled all area hospitals and emergency departments, causing extensive waiting times. "The waiting time was the longest I have seen in my 18 years here," Guth says. Exacerbating the problem was a new information system as well as a new hospitalist program. (While a hospitalist - a physician who admits patients for other physicians or a managed care provider - generally expedites the process, in this case, "the change in itself may have slowed things down," notes Cunningham.) "I am certain that by any objective criteria our performance suffered during this period, and I feared a significant decline in measured patient satisfaction," he says. "So I was pleasantly surprised to see improvement on the Gallup survey."

Part of the inconsistency can be attributed to the difference in the response rates between the mail-in survey and telephone interview, says Nielsen.

Gallup research on patient response rates indicates that the lower the response rate on a survey, the lower the overall satisfaction. Gallup's typical response rate on ED patient telephone surveys for Columbia hospitals is 70% to 75%, he says. Most mail survey response rates are much lower; a rate of less than 30% is typical.

Getting more responses helps

"Surveys with low response rates are biased toward those respondents who have had a negative experience. These patients are the most motivated to send in a response," Nielsen says. "Since the higher response rate surveys are more representative of the total patient population, the satisfaction levels reported will be higher."
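Nielsen's point can be illustrated with a quick arithmetic sketch. The numbers below are invented for illustration and are not from the Riverside, Gallup, or Press, Ganey data: if dissatisfied patients respond at a higher rate than satisfied ones, the satisfaction measured among respondents falls below the true rate in the patient population.

```python
# Hypothetical illustration of nonresponse bias in a satisfaction survey.
# All figures are invented, not taken from the article's data.
def measured_satisfaction(true_satisfied_share, satisfied_response_rate,
                          dissatisfied_response_rate):
    """Share of *respondents* who are satisfied, given unequal response rates."""
    satisfied_responses = true_satisfied_share * satisfied_response_rate
    dissatisfied_responses = (1 - true_satisfied_share) * dissatisfied_response_rate
    return satisfied_responses / (satisfied_responses + dissatisfied_responses)

# Suppose 80% of all ED patients were actually satisfied.
# Mail survey: satisfied patients mail back at 20%, dissatisfied at 50%.
mail = measured_satisfaction(0.80, 0.20, 0.50)
# Phone survey: both groups respond at ~72%, so the sample stays representative.
phone = measured_satisfaction(0.80, 0.72, 0.72)
print(f"mail: {mail:.0%}, phone: {phone:.0%}")  # mail: 62%, phone: 80%
```

Under these assumed rates, the mail survey understates true satisfaction by almost 20 points, while the high-response telephone survey recovers the true figure, which is the mechanism Nielsen describes.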

Research results clearly depend on the method, agrees Dennis Kaldenberg, PhD, manager of research and development for Press, Ganey Associates. But he also points out that surveys based on telephone interviews have their own inherent bias. "There is considerable evidence that telephone surveys are prone to an acquiescence bias because the respondent may be unwilling to deliver bad news to someone asking a question," Kaldenberg notes.

Although mailing may appear to be a cheaper way of conducting a survey, in actuality the cost may rise with the number of mailings required to get an appropriate response rate, points out Cunningham.

The way a question is asked will also influence the response, Kaldenberg adds. For example, a question asking for a level of satisfaction with care will elicit a different response than one asking for an evaluative rating of care. "Both questions provide useful information, but they are not directly comparable," he explains.

The only way to directly compare one evaluation period to the next, he says, is to choose a method, understand its biases, and use that method consistently. "If you change methods, you should remember that you may lose direct comparability during the transition."

Cunningham cautions that, no matter which methodology you select, you should be sure you have a large enough and sufficiently random sample. "However, the bottom line is that you take the results and demonstrate continuous improvement," she says.
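Cunningham's "large enough" caution can be made concrete with the standard margin-of-error formula for a proportion. This is a sketch under the textbook simple-random-sampling assumption, with illustrative numbers not drawn from the article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a satisfaction proportion p measured from
    n responses, assuming a simple random sample (z = 1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# With 100 completed surveys and 85% measured satisfaction, the margin is
# about +/-7 percentage points, so an apparent quarter-to-quarter swing of
# a few points could be sampling noise rather than a real change.
print(f"{margin_of_error(0.85, 100):.1%}")  # 7.0%
```

Quadrupling the number of completed surveys halves the margin of error, which is why a high-response telephone survey or a mail survey with repeated mailings supports firmer period-to-period comparisons.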