To boldly go where no one’s gone before, use a good pilot
Improve chances of success by testing program before implementation
Before launching a six-week course on menopause, Brenda Rizzo, MSN, CNF, program manager for Women’s Health at The Ohio State University Medical Center in Columbus, invited a group of 10 women to participate in a pilot course.
The trial run proved fruitful. The course had been patterned after childbirth classes, where a different topic was covered at each class over several weeks. The feedback from the women in the menopause pilot course revealed that a one-time, two-hour class would work much better.
With childbirth, women are building up to a due date, and their motivation for attending a course is different from women who are concerned about menopause. With menopause, the women just want information, says Rizzo.
"I would have wasted a lot of money in marketing materials and communication if I hadn’t piloted the course first to work out the kinks," she says.
Piloting a program allows you to collect health outcomes data that prove the program’s efficacy, says Laurie Doyle, MPH, diabetes education project manager for Regional Health Education at Kaiser Permanente in Northern California in Oakland. Piloting also provides an opportunity to collect process data on patient and physician satisfaction as well as information on what works and what doesn’t. For example, class count can be tracked by monitoring the sign-up sheet and noting the number of people who don’t show up for the class or who drop out.
A program that looks good on paper may not work well in the classroom, notes Susan Kayman, DrPH, health education project manager for cardiovascular programs for Regional Health Education at Kaiser Permanente in Northern California. For example, the overhead illustrations may not work with the curriculum. "It is a process of trial and error," she explains.
Putting a plan into action
One of the first steps in piloting a patient education program is to determine what group to use during the pilot, says Rizzo. This group should consist of people who are able to contribute something to the overall program. When Rizzo piloted the Half Day of Health Program, where women would have their preventive health care needs taken care of at one four-hour appointment, she selected decision makers from key organizations that would be affected by the program. These women, because of their positions, also could be champions for the program when it was marketed.
The first group to pilot Half Day of Health consisted of women from health insurance companies who would decide whether or not the program would be covered by insurance. The second pilot was aimed at heads of human resource departments from companies that had wellness programs supervised by medical center staff. These companies would determine whether they could support employees leaving the office for one four-hour appointment vs. four separate two-hour appointments. If they supported the program, they would help market it within their companies.
"The population I piloted the program to needed to know that they could feel comfortable providing both positive and negative feedback because I needed that. I also needed them to be decision makers — people who could say, 'yes, we can support the program,' or 'no, we can't,'" explains Rizzo.
Half Day of Health includes screening for heart disease, a bone density test, Pap smear, and mammogram with immediate results that are discussed with the patient before she leaves the health care facility. Most insurers cover the costs of the course, leaving patients to pay a $10 copay.
The third group participating in the pilot was the providers: the nurses, physicians, and nurse practitioners. "We walked through the program, deciding how much time we were taking in each area, and from that we could decide how many patients we could take in one day, what equipment we would need, and how quickly each aspect of the screening needed to be completed so we could get immediate results," says Rizzo.
A pilot does not always need to involve patients, agrees Kayman. When additions, updates, or improvements are made to class curriculums at Kaiser Permanente, instructors are asked to integrate the new piece into their class and provide feedback on it. Changes to a program require buy-in; this gradual approach of creating changes allows instructors to have input in the final product.
The best way to pilot a new patient education program would be a randomized clinical trial, says Doyle. In such a trial, a group of patients would be assigned to normal care, while a second group participated in the new patient education program. The control group would be given access to the program following the pilot.
"If you aren’t able to do that, you can do a baseline observation of both process and health measures along with follow-up," says Doyle. Most patient education coordinators don’t have the time or funding for a clinical trial, she adds.
Pick two or more health outcomes, she advises. For example, a health outcome for a diabetes class might be improved blood glucose levels. Another might be improved self-management, such as more frequent blood glucose monitoring.
Process issues to observe might be the number of physician referrals you had compared to how many people actually participated in the class. "If you are having to get five times the number of referrals to get 10 people to show up for a class, that will tell you a lot about how labor-intensive the program is," says Doyle.
Take time for enough input
Make sure enough people go through the program to provide adequate feedback, says Doyle. She suggests about 100 participants. Let the program run a certain number of series, as well. A pilot for a class that is only held once a month might last six months to a year. "If you only evaluate the first program, you won’t measure the full efficacy. It usually takes the instructors a couple of programs before they hit their stride," she explains. At Kaiser, a dry run — a class that is not being evaluated as part of the pilot study — is conducted first before a formal pilot begins.
Your time line should depend on how much is invested in the program, says Rizzo. The menopause class at Ohio State was free and involved one teacher, while the Half Day of Health program involved a physician, nurse practitioner, RN, and the mammography unit. Completing three pilots for Half Day of Health was worth the time and effort, says Rizzo.
Whatever amount of time is spent, all professionals interviewed agree that a good process for gathering and evaluating information must be determined. "I always develop a formal evaluation tool so people know exactly what I am looking for, and then I interview too," says Rizzo.
She creates a questionnaire based on her goals for the program. For example, the purpose of the menopause course is to empower people to take steps to manage their symptoms of menopause. To do that, they must have an understanding about what menopause is and why the symptoms occur, and learn management ideas they can incorporate into their lifestyle. "The questions you include in the evaluation get at the end goals you have in mind for the program," says Rizzo.
Goals for the Half Day of Health were to make the program convenient, educational, and comfortable for people to go through. Therefore, the questionnaire included questions such as:
• Did you have down-time or feel like you were wasting your time?
• Did you come away with the information you needed to take action?
Rizzo interviews people to expand on the information they provide in the questionnaire.
Other educators, however, take a slightly different approach. A combination of evaluation methods works well at Kaiser, says Kayman. These include classroom observation, a questionnaire, telephone interview, and discussion group.
For example, with a cholesterol management class, instructors received a questionnaire to determine what information should be added to or deleted from the curriculum and what elements of the course they liked. A telephone interview followed the questionnaire, and instructors then were invited to participate in a 2½-hour discussion group.
The curriculum development staff also observed instructors in the classroom to see what kinds of questions came up and if instructors were prepared to answer them appropriately. "We took all the information and walked through the curriculum and made revisions in it before we trained another group of instructors," says Kayman.
No matter how well a patient education program is fine-tuned after the pilot is completed, monitoring efforts should not stop there. The program should be included in continuous quality improvement (CQI) efforts.
"Piloting would be for something you have never tried before. Once you have tried it, continuous quality improvement means you are always paying attention to how things are going and making changes along the way," says Kayman. CQI is important because information in the field of health care is constantly changing, she says.