THE QUALITY-COST CONNECTION

Preplanning improves survey data

Part 1 of 2

By Patrice Spath, RHIT
Brown-Spath & Associates
Forest Grove, OR

Health care organizations use many techniques to gather information about performance. One of the most commonly used instruments is the opinion survey. Such instruments are used to collect information about employee morale, departmental efficiency, patient satisfaction, and a host of other variables that describe or relate to performance. Some surveys focus on a single variable, such as timeliness of services, while others are comprehensive instruments for use in broad organizational assessments.

To ensure that data gathered through survey instruments will meet the user's needs, it is important that considerable planning go into developing the tool. Without adequate planning, the survey results may not yield useful information and the work that went into the survey process will be wasted. Part one of this two-part series describes how to create an effective survey. In next month's column, pilot testing and reporting results will be covered.

Define objectives and population

Start the process of survey development by clearly defining the purpose or intent of the survey. What precisely are you trying to find out? Why are these data needed? The objectives should be defined in writing as precisely as possible and should be limited to those issues that are really important. If the survey addresses more than four or five basic topics, the instrument will probably be too long and the people asked to fill out the questionnaire will not respond. Keep the survey focused on just the high-priority questions that must be answered to meet your objectives.

The population is everyone from whom you will need a response in order to completely and correctly answer the basic questions. It is important to be precise in defining this population. Although this does not mean that a list should be made of everyone in it, this is the time to consider how the survey instrument will physically reach the respondents. For example, will the members of the group be assembled in one place, or will the questionnaire be mailed to them at their home or business address?

Ideally, one would like everyone in the population of interest to receive and respond to the survey. If the population is small, e.g., all employees in the radiology department, this might be possible. However, this may not be realistic if the survey population is all patients discharged from the hospital.

You may need to settle for a sample of the population, preferably a sample that will provide the same results as if everyone actually had responded. The two most common sampling techniques are random sampling and stratification.

Random sampling should ensure that every person in the population has an equal chance of being picked to receive a survey.

However, this can be difficult to accomplish because any number of factors can interfere with the randomness of the selection process. Suppose, for example, that you decide to survey a 20% sample of the 783 nurses working in the hospital. You could list all of the nurses by their employee number and then select 156 of them. If you were to pick the first 156 and the list were in numerical order (low to high), the sample most likely would be biased. Because employee numbers are not assigned randomly, nurses from certain units or with certain hire dates would be excluded from the sample.

Choosing every fifth nurse on the list would reduce this bias, but the best method is to list the employee numbers in random order and then select every fifth one. Even this procedure might not yield a representative sample, because there are fewer nurse managers than staff nurses. Thus, managers would be less likely to be represented in the sample than staff nurses.
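The randomized-list procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original column; the employee numbers below are hypothetical.

```python
import random

def random_sample(employee_numbers, fraction=0.20, seed=None):
    """Draw an approximately fraction-sized random sample by
    listing the employee numbers in random order and then
    selecting every fifth one (for a 20% sample)."""
    rng = random.Random(seed)
    shuffled = employee_numbers[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)            # put the list in random order
    step = round(1 / fraction)       # every fifth entry for a 20% sample
    return shuffled[::step]

# Hypothetical roster of 783 employee numbers
nurses = list(range(1001, 1001 + 783))
sample = random_sample(nurses, seed=42)
print(len(sample))   # roughly 20% of 783 -> 157
```

Because the list is shuffled before every fifth entry is taken, every nurse has the same chance of selection regardless of how employee numbers were originally assigned.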

To correct this problem, you must stratify the sample. In this example, the nurses could be stratified by position: group the employee numbers into two lists (management and non-management) and pick 20% of the nurses from each list. You can stratify the survey population in any way that logically will reduce bias or make the sample more representative.
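A stratified version of the same draw can be sketched as follows. Again this is only an illustration: the split of the 783 nurses into 60 managers and 723 staff nurses is an assumed example, not a figure from the column.

```python
import random

def stratified_sample(roster, fraction=0.20, seed=None):
    """Sample the same fraction of each stratum separately.
    roster maps a stratum name to its list of employee numbers."""
    rng = random.Random(seed)
    sample = {}
    for stratum, members in roster.items():
        k = round(len(members) * fraction)      # 20% of this stratum
        sample[stratum] = rng.sample(members, k)
    return sample

# Hypothetical split of the 783 nurses into two strata
roster = {
    "management": list(range(1, 61)),           # 60 nurse managers
    "non-management": list(range(61, 784)),     # 723 staff nurses
}
s = stratified_sample(roster, seed=7)
print(len(s["management"]), len(s["non-management"]))   # 12 145
```

Sampling each stratum separately guarantees that the smaller group (here, managers) appears in the sample in proportion to its share of the population, which a single unstratified draw cannot promise.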

Construct instrument

Your goal is to construct a concise survey instrument that is easy to understand and one that will be consistently interpreted by everyone completing the survey. A typical survey has three basic parts: the cover letter, the items, and the scales. Careful planning is necessary to ensure each of these parts contributes to your well-designed survey.

The cover letter should be written clearly and simply, without the use of jargon and technical words. It should convey to the respondent at least three topics:

(a) why the survey is being conducted;
(b) what the benefit of the survey might be, especially with respect to the respondent;
(c) the guaranteed anonymity and security of responses.

Respondents also should be thanked for their participation. The cover letter should not be long; two to three paragraphs usually are adequate. The more the letter looks like a "real" letter, the better. The use of letterhead stationery is highly desirable, and, whenever possible, each letter should be signed individually. The more personalized attention the respondent perceives, the higher the response rate will be.

The items are the heart of the survey instrument. Writing the questionnaire items is the most important step of the survey process. Start by revisiting the objectives you defined in step 1. You'll want to translate these objectives into specific questions.

For example, if the objective of the survey is to determine how employees perceive the patient safety attitudes of senior leaders and management, questions such as the following might be included in the survey:

  • Do senior managers at this hospital communicate to you that patient safety is a high priority?
  • Does your immediate supervisor act on reported information related to unsafe situations to improve patient safety?
  • Can staff in your department report adverse events and unsafe acts without fear of disciplinary action?
  • Does this hospital effectively balance the need for safety and the need for productivity?

Be sure to avoid threatening the respondent with the survey questions. For example, if you were trying to measure the feelings of housekeeping staff about job security, an agree/disagree item such as "less productive workers should be laid off first" probably would threaten many people. A person who feels threatened will frequently fail to complete the survey. When an implicit threat is unavoidable, reassurance of anonymity often helps.

Good item construction depends on common-sense writing skills. Avoid leading questions and try to phrase items objectively. Always use common rather than obscure terms, and strive for brevity and clarity.

Scaling refers to the range of answers offered the respondent. The most commonly used scale is the Likert scale, with five or seven multiple-choice alternatives such as "to a very great extent...to a moderate extent."

Other dimensions that are commonly used include "agree-disagree," "how much," "how often" (frequently-infrequently, never-always, once a day-once a year), "to what degree," and "how important." When actual frequency of behavior is being measured, the "how often" or "never-always" sets are most relevant. When personal values or the rewards one wants from work are being examined, the "how important" scale might work best.

It frequently is helpful to word the survey questions so that the answers can be graded on a continuum rather than discretely.

For example, a scale that measures degrees of managerial control (high control, some control, little control, no control) results in a continuum; whereas on an instrument that identifies sources of managerial control (fear, threats, punishment, etc.), the items must be graded discretely or individually.

A continuum generally is indicated by the use of adjectives or adverbs (high-low, often-not often, moderate-very much), and discrete items generally are nouns or verbs (reward, punishment, does, does not).

The number of points on a scale usually is between five and nine, as this is the comfortable range of discrimination for most people. It is important to avoid restricting the range of responses to only two or three categories. A limitation such as this could result in meaningless survey results.

Once the survey instrument is complete, the next step is to pilot test it. Techniques for pilot testing, as well as reporting the results, are described in next month's column.