Collecting site metrics: Do you know what they say?

Ask sponsors, CROs, FDA for feedback

Sponsors, contract research organizations (CROs), and others increasingly are collecting site-level metrics, along with other data pertaining to clinical research performance. Sites that do not find out what their metrics show are missing a good opportunity to improve their operations.

"It's not about the numbers or that red-green-yellow dashboard," says Liz Wool, CCRA, CMT, president and chief executive officer of QD – Quality and Training Solutions in San Bruno, CA. Wool is a member of the board of trustees of the Association of Clinical Research Professionals (ACRP).

"It's about establishing metrics and having a series of questions and assessments," Wool says.

The goal is to find a quality site scoring tool, enter your site's performance data, obtain a score, and learn where your risks and improvement needs lie.
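The workflow described here — feed site performance data into a scoring tool, get an overall score, and flag weak spots — can be pictured with a short sketch. The metric names, targets, weights, and cutoff below are illustrative assumptions, not taken from any real vendor's scoring tool:

```python
# Illustrative site quality scoring sketch. Metric names, targets,
# weights, and the 0.8 cutoff are hypothetical assumptions.

def score_site(metrics, targets, weights):
    """Return an overall 0-100 score plus the metrics that missed target.

    All metrics here are assumed to be 'higher is better'.
    """
    total, flagged = 0.0, []
    for name, value in metrics.items():
        # Ratio of actual to target, capped at 1.0 so overshooting one
        # metric cannot mask a shortfall in another.
        attainment = min(value / targets[name], 1.0)
        total += weights[name] * attainment
        if attainment < 0.8:  # arbitrary "needs improvement" threshold
            flagged.append(name)
    return round(100 * total / sum(weights.values()), 1), flagged

# Example site: enrolling 3 subjects/month against a target of 4, etc.
score, risks = score_site(
    {"enrollment_rate": 3.0, "screen_pass_rate": 0.90, "retention_rate": 0.92},
    {"enrollment_rate": 4.0, "screen_pass_rate": 0.85, "retention_rate": 0.95},
    {"enrollment_rate": 2, "screen_pass_rate": 1, "retention_rate": 1},
)
print(score, risks)
```

In this toy example, the site scores well overall but enrollment falls below 80% of target, so it is flagged as the area needing attention.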

For example, the Metrics Champion Consortium (MCC) of Carmel, IN, provides members with access to clinical trial metrics tools and activities, including process improvement meetings.

"MCC has a lot of presentations that could be a good resource," Wool says. "I have encouraged sites to join the MCC and be at the table as metrics are being developed for clinical trial performance."

Clinical trial sites also could obtain information about their performance metrics from their study sponsors and CROs, she suggests.

"There are companies that keep scorecards on your performance in a clinical trial, and that is the information I've heard that sites would like to know," she says. "I believe sites need to ask sponsors if they utilize a scoring tool, dashboard, or scorecard to evaluate performance, and after the study they can ask for a copy."

Research sites need to know what those indicators are so they can be responsive to improving performance and acting on identified problems, she adds.

Another place where sites could obtain metrics and quality data is the Food and Drug Administration (FDA).

"The FDA has been presenting at conferences for the last year about site inspections for marketing approval," Wool says. "It's called the PAI, or pre-approval inspection, in which the FDA gathers whatever information it has about a site."

The FDA's list of attributes includes complaints made about investigators, the number of investigational new drug (IND) applications, and other information collected from sponsors, she explains.

In past decades, the FDA would trust sponsors' analyses, but now the agency has the funding and time to go over a study's metrics, Wool says.

"The FDA runs the database through its own statistical analysis to confirm its conclusion that the drug is safe and effective," she explains. "And they're now analyzing site level data and assigning risk to performance using that information."

Analysis from the Clinical Trial Site Selection Tool results in each site having a risk score that the FDA's Center for Drug Evaluation and Research (CDER) uses to determine which sites will be inspected, she adds.

According to CDER, the risk attributes fall into three levels: the application level, the study level, and the clinical site level.

For instance, at the clinical site level, the metrics would involve enrollment, protocol deviations, adverse events, subject deaths, site-specific efficacy, financial disclosures, complaints, inspection history, the enroll/screen ratio, and data about subject discontinuations.
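One way to picture how such site-level indicators could roll up into a single risk score is the weighted sketch below. This is purely a hypothetical illustration — the weights and per-subject normalization are made-up assumptions, not the FDA/CDER methodology:

```python
# Hypothetical roll-up of site-level indicators into one risk score.
# Indicator names follow the article's list; the weights and the
# per-subject normalization are illustrative assumptions only.

SITE_INDICATOR_WEIGHTS = {
    "protocol_deviations": 3,
    "adverse_events": 3,
    "subject_deaths": 5,
    "complaints": 4,
    "prior_inspection_findings": 4,
    "subject_discontinuations": 2,
}

def site_risk_score(counts, enrolled):
    """Weighted sum of indicator counts per enrolled subject; higher = riskier."""
    if enrolled <= 0:
        raise ValueError("site must have enrolled subjects")
    return sum(SITE_INDICATOR_WEIGHTS[k] * counts.get(k, 0)
               for k in SITE_INDICATOR_WEIGHTS) / enrolled

counts = {"protocol_deviations": 6, "adverse_events": 10, "complaints": 1}
print(site_risk_score(counts, enrolled=40))  # (3*6 + 3*10 + 4*1) / 40
```

Normalizing by enrollment keeps a large, busy site from looking riskier than a small one simply because it sees more subjects.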

The FDA information is free, so sites can obtain it and use it to assess their own performance, Wool says.

Everyone involved in the research enterprise has a stake in high-quality metrics, Wool notes. That is why a number of sponsors and other stakeholders have joined the MCC to help create metrics that identify best practices and improvement needs throughout the industry.

"At MCC there are no politics or posturing," she says. "Everybody is there about the quality of the metrics we want to deliver, and there are respectful collaborative teams."

The MCC also is developing metrics for evaluating protocols themselves.

"Sponsors routinely do not send protocols to sites for feedback on whether they're viable or not," Wool says. "That's an issue we're developing in an MCC working group; that's why we have a protocol scoring tool."