Five evaluation levels measure effectiveness
From reactions to return on investment
The training evaluation process created by Donald L. Kirkpatrick has begun to catch on in the clinical trial and pharmaceutical industries. Here’s how the evaluation model works:
• Reaction / Level one evaluation: Most educational and training programs have at least a level one evaluation, which includes courses that ask for participant feedback. "It’s called a ‘smile sheet,’" says Melissa Oakley Ockert, BS, CCRA, CCRP, program director of clinical trials research at Durham (NC) Technical Community College.
Such evaluation asks these questions:
— How did the instructor do?
— How was the content?
— Was it a good investment of time?
— Did students respond favorably to the delivery and materials presented?
"If you’ve been to a conference and had an evaluation at the end, that’s the type of evaluation we’re talking about," Ockert says. "In the academic setting we do an evaluation at the end of each class."
While this level of evaluation is relevant, it simply tells how the participants responded to questions and what they think of the training, Ockert says.
Level one evaluations do not tell an organization if the knowledge that was taught is taken in and applied or whether it advanced the participants’ skills, she says.
• Learning / Level two: The typical level two evaluation is a pre-test followed by a post-test after the training has taken place, Ockert says. This could involve formal tests, open-book informal tests, or a team or self-assessment, she explains.
"There are lots of different ways we could actually test," Ockert says. "We’re trying to assess whether the student or trainee learned the material we were presenting."
Level two evaluations have the goal of proving that a learner left a training program with more knowledge and comprehension than the person had prior to the training, says Kim Andrews, MEd, training manager in the global learning and performance department of PPD Inc. of Morrisville, NC.
"You have a pre-test and post-test and demonstrate a statistical difference between them to be sure the training program had an impact," Andrews explains. "Ideally, you would write program objectives that meet corporate needs and write questions that match those objectives."
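The statistical comparison Andrews describes is, at its simplest, a paired test on each trainee’s pre- and post-test scores. A minimal sketch of that arithmetic, using hypothetical scores on a 0–100 scale (the data and function name are illustrative, not from any cited program):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(pre, post):
    """Paired t-statistic for post-training vs. pre-training scores.

    A positive t means post-test scores are higher on average; compare it
    against a t-distribution with len(pre) - 1 degrees of freedom to judge
    whether the gain is statistically significant.
    """
    gains = [after - before for before, after in zip(pre, post)]
    standard_error = stdev(gains) / math.sqrt(len(gains))
    return mean(gains) / standard_error

# Hypothetical scores for five trainees before and after a course
pre_scores = [60, 55, 70, 65, 58]
post_scores = [75, 72, 80, 78, 70]

print(round(paired_t_statistic(pre_scores, post_scores), 2))  # → 11.09
```

A t-statistic this large would indicate a clear post-training gain; in practice an evaluator would also check the p-value and whether the test items actually map to the program objectives.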
With level two evaluations, it’s important to emphasize evaluation planning, which means administrators would identify why the instruction is important, says Claudia Lappin, an associate training consultant with Eli Lilly and Co., which is headquartered in Indianapolis.
"Would they need to know the material for regulatory or safety needs, or would they need some sort of qualification requirement?" Lappin says.
The challenge of level two evaluations is that it’s important to capture more than students’ recall abilities, Ockert notes.
"It’s very easy for students to recall the information, but we want to take it further and see if they have learned the concept," Ockert says.
"When we’re talking about level two evaluation, we’d like to test in three different levels," Ockert says. "One would be recall with testing specific facts, terminology, and theories; the middle level would be application, and that’s where we use information to solve a problem or analyze a scenario."
An instructor using the application level would give students an example and ask them to solve a problem, Ockert adds.
The third area would be higher level testing, which is when students take the knowledge and derive their own hypotheses from the information, using their own judgment to take it further than the application level, Ockert says.
"We as instructors and trainers, I think, are challenged by trying to test in those areas," Ockert says. "Similarly, the test questions we use should reflect those three areas, and that’s really challenging."
While it’s easy to write recall questions, writing evaluation-type questions takes more time and effort, Ockert notes.
And it’s at this level that most training and educational programs stop.
• Behavior / Level three: At this level the question is whether the student will transfer the knowledge to his or her own job, Ockert says.
At level three evaluations, instructors take a survey or conduct an interview or focus group after the educational session to see if the person being trained has applied the knowledge to his or her job, Ockert explains.
"In academia we don’t have an extensive way to do that except through an internship program where students can take the knowledge they learned in the classroom and apply it to the field work setting," Ockert says. "In training people we hope they are able to transfer their knowledge to the field."
At level three evaluation, if an associate’s strategic planning calls for a behavioral change, then the training would be evaluated at the application or behavioral change level, Lappin explains.
For example, a goal might be to evaluate whether changes to a medical initiative are being applied in the organization, Lappin says. The critical question would be whether participants in the training program have applied the knowledge, she says. This can be determined through a follow-up survey or structured level three techniques such as action planning, focus groups, or semi-structured interviews in which you collect data and do the planning, Lappin says.
Web-based electronic evaluation tools are an efficient way to collect data, Lappin notes.
"Action planning has become popular recently," Lappin says. "This is where you have training as part of an activity where participants plan how they will implement training within their organization." For example, if a clinical trial site had a clinical research course that focused on the concept connected with good clinical practices (GCP), then part of the course activity would be the concept of identifying investigator responsibilities, Lappin says.
"You could devise a higher-level activity that would require the participant to actually plan how to inform investigators of their roles and their responsibilities and to include a timetable and key stakeholders," Lappin says.
If an organization decides to use a survey or interview process as part of the level three evaluation, then it might include a group of questions that ask how the person is performing in the field, Ockert says.
"For our purpose, maybe a group gets together and talks about how they get together and apply the information in the field and how their training helped them in day-to-day jobs and how they remember it," Ockert explains. "The most likely method is for someone to go out with the individual to witness their hands-on skills and to see whether they’ve applied that knowledge."
• Results / Level four: At level four, the organization looks at the training’s impact on the business, Lappin says.
"If we had a medical program that was expensive, and we needed to determine a financial return on investment, then we would collect the data based on the other three levels and move it up to level four and five by collecting some financial metrics to get to the return on investment," Lappin explains.
Level four’s importance lies in the "So what?" question. "They’ve gotten the information and transferred it to the job, but so what?" Ockert says. "That is, how does it affect our business and what is the organizational impact?"
In the clinical trial industry, this could mean looking at these measures:
— Documented error rates;
— How much time is spent redoing work;
— Quality improvement measurements;
— How frequently the organization hits its deliverables;
— How often the organization reaches milestones on time;
— Documented complaints;
— Turnover rate in the organization;
— Employee satisfaction; and
— Employee referrals.
"It’s not only important to think about the actual business, but also how is the long-term employee projection for our folks," Ockert says. "And that comes from how we treat them, and part of that is how we train them and whether they feel ready for their jobs."
• Return on investment / Level five: This level was added to Kirkpatrick’s work over time, and it pertains to the benefits the organization gets out of the educational program, Ockert says.
"If we spent X amount of dollars on training, how much does that save or earn us in the long run?" Ockert says. "That’s a little tougher if you look globally at companies."
Typically, training is one of the first things to go or be minimized when a company cuts costs, Ockert notes. "But by looking at the return on investment, if an organization has a good training program it will ultimately provide a return on investment," Ockert says.
Clinical trial sites could assess whether they were able to stay within a study’s budget and make a profit as part of this evaluation, Ockert says.
"Then look at these results cumulatively over a number of projects," Ockert says. "At the end of the day or quarter, did you come out on top?"
Level five evaluation also will look at the cost of the educational and evaluation programs, including development training cost, designer time, expert time, materials, travel costs, and others, Andrews says.
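The level five arithmetic reduces to a simple formula: net program benefits divided by fully loaded program costs, expressed as a percentage. A minimal sketch with hypothetical dollar figures (the cost categories mirror the list above; the numbers are illustrative only):

```python
def training_roi(benefits, costs):
    """Return ROI as a percentage: (net benefits / total costs) * 100.

    `benefits` is the monetized business impact attributed to the training
    (e.g., rework hours saved, fewer documented errors); `costs` is a
    mapping of cost category to dollars.
    """
    total_cost = sum(costs.values())
    return (benefits - total_cost) / total_cost * 100

# Hypothetical fully loaded costs for one course offering
program_costs = {
    "development": 40_000,
    "designer_time": 15_000,
    "expert_time": 20_000,
    "materials": 10_000,
    "travel": 15_000,
}

print(training_roi(150_000, program_costs))  # → 50.0
```

As Ockert suggests, a single offering rarely tells the whole story; the same calculation run cumulatively across several studies or quarters gives a fairer picture of whether training pays for itself.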