Be sure those lunch and learns are working — evaluate training efforts
Four-level evaluation process growing in popularity
As clinical trial sites work to improve quality and efficiency, some clinical trial administrators are finding it important to have education and training programs with demonstrated effectiveness. Simple pre- and post-tests may not be enough to show whether a site’s training is achieving its goals, experts say.
"We’re so bogged down with delivering the information and having a quality training program that we forget that we need to make sure that staff actually get it and can transfer the information to their jobs," says Melissa Oakley Ockert, BS, CCRA, CCRP, program director of clinical trials research at Durham (NC) Technical Community College.
There are many ways to look at how learning is transferred, and one is to follow the evaluation model developed by Donald L. Kirkpatrick, Ockert says.1 The model assesses training at four levels: reaction, learning, behavior, and results. "Kirkpatrick’s copyrighted system is easy to work with," Ockert explains.
The concept of evaluating training programs is relatively new to the industry, notes Kim Andrews, MEd, training manager in the global learning and performance department of PPD Inc., of Morrisville, NC. Headquartered in Wilmington, NC, PPD is a global contract research organization that provides discovery and development services, including market development expertise and compound partnering programs.
Kirkpatrick’s model was intended to assess the impact of training in a corporate environment, but it was designed to be general, so it transfers easily to the clinical trial industry, Andrews says. "Traditionally, in the clinical research environment an employee is very good at what he or she does and gradually evolves into the role of trainer," Andrews says. "So it’s new to have adult trainers doing an evaluation program."
Nonetheless, this new concept is going over very well in the clinical research industry. When Andrews spoke about this at a clinical trials industry conference in 2005, the session was attended by several hundred people.
Eli Lilly and Co., which is headquartered in Indianapolis, began to look into training evaluation models in 2003 as part of the company’s efforts to streamline training, says Claudia Lappin, an associate training consultant with Eli Lilly.
To make training more cost-effective, the company needed a thorough evaluation process that would produce useful data about training outcomes, Lappin notes.
"We have a broad-based approach to evaluation, and we decided to take an enterprise type of approach and develop a strategy in 2004," Lappin says. "Part of that strategy does involve the use of evaluation tools and techniques, but emphasizes evaluation planning and uses criteria that Kirkpatrick pioneered."
Eli Lilly’s move to the evaluation process puts it at the cutting edge of what is becoming an industry-wide trend, Lappin says. "I think it will increase as the industry as a whole looks to drive improvements and efficiencies in the way it operates," Lappin says. "We have to do a better job of seeing what works and what doesn’t, and evaluation data can help us do that."
Evaluating the effectiveness of a training program begins before the training program is designed, and this is where experts in instructional design are useful, Andrews says.
"The key piece that is important for me to communicate is that it’s important to plan for your evaluation while you’re creating your program," Andrews says. "And if you do that appropriately, you’ll get the information you need."
Evaluation is so critical that it should be done concurrently with developing educational or training content, Ockert notes. For instance, when an education program is being designed, the designer should decide how much emphasis will be placed on recall, application, and higher-order evaluation questions, Ockert suggests.
"One of the lessons I’ve learned is we spend too much time testing on recall types of questions," Ockert says. "So I’ve been trying to add more critical thinking types of questions." The question always to keep in mind is, "Am I measuring what I want to measure?"
Ockert suggests that these points be kept in mind when designing an educational, training, and evaluation program:
- Begin with the end in mind;
- Have an evaluation plan in place;
- Establish a goal of reliability and validity;
- Measure beyond knowledge;
- Determine what an organization hopes to learn from the evaluation and the desired result; and
- Determine what information must be shared.
Clinical trial trainers and educators need this information because, traditionally, they haven’t been taught much about evaluation, Ockert notes. Once an evaluation strategy is designed for a particular organization, it allows training associates to do some critical thinking about the results they’d like to show, Lappin says.
"If we have process training for a particular medical process, and it’s at the awareness level, then you’d probably want to evaluate it based on participant satisfaction or reaction, which is normally done with a survey tool," Lappin explains. "So the kind of question you would ask is how are people reacting to the training."
Also, for some organizations it might be useful to hire professionals in the field of training and instruction evaluation. "Instructional designers are professionals in designing the performance improvement and training programs," Andrews says. "They provide training delivery using behavioral and psychological research for making the best and most appropriate decisions for improving corporate training programs."
The concept of evaluating training programs is a data-driven method for making certain an organization is putting its education money in the right place.
"Often corporations make decisions about what training was good or effective based on anecdotal information or people’s responses when they meet in the hallway, rather than making decisions like you do in clinical research based on data," Andrews says.
"While human behavior is not perfect or easily quantifiable, what the system does is try to provide data so you can make effective decisions about what needs altering and which do and don’t improve performance," Andrews adds.
Reference:
- Kirkpatrick DL. Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler; 1994:229.