Use past performance to market your trial site
Experts offer guidance on improving site selection
The way clinical trial sites are selected is an antiquated process and yields predictably poor results, a research expert says.
"The process is designed to accommodate inexperienced people in site selection," says Louis C. Kirby, MD, medical director of Pivotal Research in Peoria, AZ. Kirby has been a principal investigator on more than 400 clinical research trials and has spoken about clinical trial site selection at national research conferences.
"Thirty percent of all sites do not enroll even a single patient into a trial, and approximately 80% of patients are enrolled at 30% to 40% of sites. The cost of this is a waste of resources to sponsors and clinical research organizations," he notes.
"No matter what side of the industry you’re coming from, we’re all hurt by identifying sites poorly," says Adam R. Chasse, director of patient and site management services at Quintiles in Morrisville, NC.
Chasse also speaks at national conferences about clinical trial site selection.
Selecting sites that don’t perform costs sponsors time and money because this prolongs the enrollment period and could result in delayed research, Chasse and Kirby explain.
"If sponsors don’t do a better job of choosing sites, it takes that much longer to get studies done and drugs to market," Chasse says.
"Over 85% of clinical trials do not hit their projected original timelines for enrollment," Kirby adds. "And the ways sites are selected have not changed in response to that problem." What is needed is a robust database that takes the subjectivity out of the site selection process, he says.
Another problem that is exacerbated by poor site selection is the shrinking pool of physician investigators to conduct clinical trials, Chasse points out. "So not only does the current pool of investigators underperform on studies, the entire pool is shrinking," he says. "We as an industry need to know how to develop investigators, keep them doing research, and keep giving them more business."
Chasse and Kirby offer these suggestions for improving site selection and clinical trial quality:
• Build on success. "For sites that have been around for a while and have a reputation for doing work in a specific area, word gets around that they’re fairly competent, and they get chosen for a lot of studies," Kirby says.
However, even these more successful sites have a finite patient base, and investigators may overestimate their capacity to enroll subjects, he explains. Also, popular sites may have several competing studies at the same time, and so they may not have many patients for the second or third time around with a sponsor, Kirby adds.
"So it goes back to the first argument that sites have some responsibility in this because it’s an economic decision for all parties," he says.
Therefore, successful sites should not spread themselves too thin and end up hurting their reputation and their ability to succeed, Kirby notes.
Instead, they should take a realistic look at their capacity to enroll and communicate potential conflicts to sponsors, he says.
"Somebody should say, ‘I don’t know if this other study will start, but if we take on a second study, then we’re stuck with it,’" Kirby says. "So the best dialogue to have with the second study’s sponsor is to say, ‘This other study is on hold, but if they cancel, I’m yours; if they do start the study, I’ll honor my initial obligation to the sponsor.’"
• Collect data and analyze performance. "We’re devoting an enormous amount of resources to finding sites we haven’t worked with before and sending someone out there to work with them, find out what makes them tick, and to seriously analyze performance data on all sites we use," Chasse says.
"We want to know who the high performers are," he explains. "A sponsor might consider Dr. So and So a good investigator; but when you dig below the surface, you find out he didn’t enroll as well as he should have, and the monitor was simply impressed with the nice facility."
Quintiles has spent a lot of time investigating sites and has developed a process for looking at performance data more closely, Chasse adds. "We can’t afford to reinvent the wheel with every single study by not remembering that Dr. Smith did a great job and Dr. Jones did not, and then lo and behold, we use Dr. Jones on the next study."
The most critical performance metric is enrollment, he notes. "Even if a site takes twice as long to get the regulatory contract and paperwork done, if they enroll twice as many participants, all is forgiven," Chasse says.
"We look at it on a time basis, so we look at enrollment per month from a number of different plotting points," he explains. "We make sure a site is not penalized for being added on late."
For example, if a study involves 40 sites and six months into a study the enrollment is poor across the board, then some sites are dropped and replaced with others, Chasse adds. "Those new sites may only enroll four patients in four months, and another site in 10 months has enrolled eight patients. I’d argue the backup site is better at enrollment per month."
Also, Quintiles compares a site’s performance to the mean and median of a study to avoid bias from statistical outliers, he says.
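The enrollment-rate comparison Chasse describes can be sketched in a few lines. The site names and figures below are hypothetical, and using patients enrolled per month of active participation (rather than per total study month) is an assumption based on his point that late-added sites should not be penalized:

```python
from statistics import mean, median

# Hypothetical sites: (patients enrolled, months active on the study).
# Dividing by months *active* keeps a late-added backup site from being
# penalized for joining the study six months in, as Chasse describes.
sites = {
    "original_site": (8, 10),  # 8 patients over 10 months
    "backup_site": (4, 4),     # added late: 4 patients in 4 months
}

rates = {name: enrolled / months for name, (enrolled, months) in sites.items()}

# Compare each site to the study mean and median rate so a single
# statistical outlier does not skew how the other sites are judged.
study_mean = mean(rates.values())
study_median = median(rates.values())

for name, rate in rates.items():
    print(f"{name}: {rate:.2f} patients/month "
          f"(study mean {study_mean:.2f}, median {study_median:.2f})")
```

On this basis the backup site enrolls 1.0 patients per month against the original site’s 0.8, matching Chasse’s argument that the backup site is the better enroller.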
"People here are trained to look at a variety of factors so they can come up with the best possible potential site list to hand back to teams," Chasse explains. "We don’t place as high a priority on the other factors as we do on enrollment."
The other factors include:
— time to do enrollment;
— screen failure rates when appropriate;
— patient retention rates;
— how effectively a site uses its advertising budget and how that translates into subject enrollment numbers.
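One way to combine these factors while giving enrollment the dominant weight Chasse describes is a simple weighted score. The weights, metric names, and site figures below are hypothetical illustrations, not Quintiles’ actual method:

```python
# Hypothetical weights: enrollment rate dominates, reflecting Chasse's
# comment that the other factors carry lower priority.
WEIGHTS = {
    "enrollment_rate": 0.50,   # patients/month, normalized to 0-1
    "time_to_enroll": 0.15,    # speed of reaching target, 0-1
    "screen_pass_rate": 0.15,  # 1 minus the screen failure rate
    "retention_rate": 0.10,    # patients retained through the study
    "ad_efficiency": 0.10,     # enrollments per advertising dollar, 0-1
}

def site_score(metrics: dict) -> float:
    """Weighted sum of normalized (0-1) site performance metrics."""
    return sum(WEIGHTS[key] * metrics[key] for key in WEIGHTS)

# A hypothetical site with strong enrollment and retention.
example_site = {
    "enrollment_rate": 0.9,
    "time_to_enroll": 0.6,
    "screen_pass_rate": 0.8,
    "retention_rate": 0.95,
    "ad_efficiency": 0.5,
}
print(f"score: {site_score(example_site):.2f}")
```

Because the weights sum to one, the score stays on the same 0-to-1 scale as the inputs, which makes sites on different studies comparable.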
"Past performance should not be the only way sites are assessed," Chasse notes. "We have to understand the process for problems and how proactively we put procedures in place so there won’t be any problems."
For example, is there a project manager who can resolve problems before having to call the site and persuade staff to fix them? he asks.
• Introduce customer relationship management techniques. Investigators should be viewed as a tactical advantage from one sponsor to another, Kirby says.
"If you identify highly qualified investigator sites, who reliably produce clean data and adequate enrollment, it will shorten the timeline and ultimately cost less money," he says.
"So the goal is to identify consistently producing sites, and this requires knowledge of the site," Kirby explains. "It requires planning, senior-level management decisions, infrastructure, and costs, all of which are over and above what senior managers already do."
If the goal is to accelerate the timeline in clinical development, then the research industry is going to have to do some planning and structure changes in order to change the system, he says.
Sponsors and CROs should view investigators as customers and ask themselves whether they know what makes a particular doctor’s site work, Kirby says. "They should spend time with the senior level person and engage in conversation to identify key parameters that the clinical trial site will follow," he adds.
• Take prospective measures of site performance. While it’s important to know a site’s past performance before selecting sites for a study, it’s also necessary to prospectively measure performance, based on what are identified as leading indicators of performance, Kirby notes.
Sponsors or CROs could develop a set of indicators, measure for these, and then measure their ability to predict success across multiple sites and studies to determine how accurate they might be, he suggests.
"Look back at your analysis, and say, ‘Which of these are positive indicators for future performance, and which of the others didn’t help much?’ It’s a matter of collecting data and looking at them," Kirby points out.
Some examples of potential indicators include:
— During a pre-site visit, how has the investigator led the protocol?
— How does a site manage internally?
— How does a site produce an ad?
— How does a site measure response?
Prospective measures might be a more accurate way of assessing a clinical trial site’s performance than simply going by enrollment, Kirby says.
"Sometimes, a site doesn’t have great enrollment for one trial, but is pretty good overall, and there may be extenuating circumstances," he notes. "So you should know from a customer relationship what kinds of things keep a site from enrolling."
For example, Pivotal Research has had cases where a sponsor ran out of money and provided no funding for advertising, Kirby explains. "So it took 12 months to get six patients because the company didn’t provide any money."
The other advantage to prospective measures is it requires some telephone conversations with the site, and this is a good way to develop trust, he adds. "This helps get you on a first-name basis with each investigator," Kirby says.
And it will give a sponsor or CRO some additional data about a site’s capability to adapt and take on new study projects.
"A site may have a good sense of a study and is flexible and able to innovate, and it may turn out to be one of the best sites even if it has never done this trial before," he says. "Once you figure out what a site is good at, you want to keep using it."