CR site problems may mean inadequate monitoring
Expert offers quality systems approach tips
Compliance problems at clinical trial (CT) sites can result in FDA warning letters to sponsors when the sponsors fail to adequately follow up on the problems they find, an expert says.
The FDA is particularly concerned about CT sites that have no clear documentation and have failed to correct problems, says Paul Below, CCRA, clinical research consultant and trainer at P. Below Consulting Inc. of Burnsville, MN.
With years of experience in project management and clinical research associate supervision, Below says it's frustrating to see the types of mistakes found by the FDA.
"Compliance problems seen again and again at the site level could have been prevented by monitoring," Below says.
"My colleagues and I at the Association of Clinical Research Professionals (ACRP) review the compliance problems that are identified at the site level by the FDA, as evidenced in their warning letters," Below says. "In 2006, there were 33 warning letters to investigators."
The key is for sponsors and CT sites to use a quality systems approach to research, Below says.
"The quality systems approach has been used in the pharmaceutical industry for a long time," Below explains. "It includes good lab practices and good manufacturing processes."
Some CR experts are proposing that the industry move toward a quality systems approach, including consistency and standardization in dealing with problems, because regulatory audit findings suggest that CR sites are not learning from the mistakes of their peers, Below says.
"The number of issues seen in warning letters is not going down," Below says. "Each year, there are multiple examples from FDA audits of failures in monitoring; they're not preventing deficiencies, and they're not being corrected."
Some of the problems are widespread. For example, some warning letters indicate there are informed consent problems with not just a few subjects, but with the majority of subjects, Below says.
"That's a problem that should have been picked up by the monitor initially, but obviously was not," he adds. "And if you look at FDA warning letters to sponsors, there were a number of citations of sponsors who didn't adequately follow-up on those deficiencies."
The sponsors didn't do anything to correct problems found at sites; they simply passed the findings on to their management, with no plan for correction and follow-up, Below says.
"So the problems happened again and again," he says.
Whenever there's a failure in the manufacturing system, the quality systems approach requires the organization to analyze it, find the root cause, and come up with corrective action, as well as a plan to prevent it from happening again, Below says.
Called CAPA for Corrective And Preventive Action, the approach was designed to fix manufacturing problems, Below says.
"But I've had a number of clients who've started to adopt it in the good clinical practice arena, he notes.
"When a quality assurance auditor investigates a site and finds deficiencies, he or she puts in the report that a site should write a CAPA plan, so clinical sites are becoming accustomed to designing CAPA responses and audits, Below says.
"But we don't look at it as something to do on an ongoing basis for a clinical trial," he adds.
The quality systems approach would look for deficiencies or failures in the CT process, including protocol violations, safety problems for trial participants, data integrity issues, and other problems that could result in an FDA warning letter or in deficiencies raised by CT monitors, Below says.
Here's how the quality systems approach could be pursued by CR organizations, including CR monitoring companies:
1. Identify deficiencies.
"In the good manufacturing practice (GMP) world, they call them failures," Below says. "In the GCP world, it's not the same."
When a clinical trial site first identifies a deficiency, it could be a violation of the protocol, a violation of good clinical practice, or an informed consent violation, he explains.
"The first step is to identify the root cause of the problem; find out why it happened," Below says. "This requires a clinical research associate (CRA) to ask a lot of questions and interview the staff to try to figure out the real root causes, because there can be a couple of root causes to a deficiency."
For example, suppose a CRA were to note in the monitoring report that a serious adverse event (SAE) occurred to a subject at the site and the SAE was not reported to the sponsor or the institutional review board (IRB), Below says.
"This is not a rare finding, and probably all CRAs have come across it at one time or another," Below says. "The issue for the CRA would be why it happened, what was the cause."
"It could have been a simple inadvertent oversight by the staff, and if that's the case, then you have to dig a little deeper to find out why there was that oversight," Below explains. "Was something else going on with the staff to prevent them from doing a thorough review? Were they not documenting study subjects like they should?"
In the audit world, questions relating to what caused a failure are referred to as the Five Whys, as in, "Why did this happen, and why did I miss it?" Below notes.
Those questions should reveal the root cause.
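As a sketch only, the Five Whys drill-down can be pictured as a short loop; the ask callback, which stands in for interviewing the site staff, is purely illustrative:

```python
def five_whys(problem, ask):
    """Illustrative Five Whys drill-down: `ask` supplies each answer,
    standing in for interviewing the site staff."""
    causes = []
    question = f"Why did this happen: {problem}?"
    for _ in range(5):          # ask "why" up to five times
        answer = ask(question)
        if not answer:          # stop early if no deeper cause is offered
            break
        causes.append(answer)
        question = f"Why: {answer}?"
    return causes               # the last answer approximates the root cause
```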
"Another explanation might be that the study coordinator didn't report the event and didn't think it met the criteria for a serious adverse event," Below says. "So the root cause in that case is going to be a lack of training issue."
The bottom line is that the CRA needs to get down to what caused the problem, and that needs to be the first step, he adds.
"Most good CRAs will find the cause intuitively, but they don't think of it as a first step," Below explains. "Most are trained to correct a problem, and their initial activities revolve around what do we do to fix this now — instead of finding out what the root cause is."
2. Show what needs to be done to correct the problem.
In the above example of an overlooked SAE, there were two potential root causes described. If the CRA determines the root cause was staff oversight, then the correction would be to revise the study coordinators' medical record review process, Below says.
"In the second scenario of the study coordinator not thinking the SAE met criteria for SAEs, then the site will need to retrain staff on the definition of serious adverse events," Below says.
If the root cause is a workflow issue in which research professionals are not reviewing the medical charts the way they need to in order to identify the SAEs, then the corrective action is to counsel them to make sure they are doing so, Below says.
"Investigate the way they are reviewing charts, and make sure the study coordinator and principal investigator are doing independent reviews of each other, so that if one is missing something then the other one can catch it," Below adds.
Also, there are immediate actions that need to be taken, Below says.
"You have to make sure it's reported right away and that it's reported to the sponsor," he adds.
3. Review the corrective and preventive action plan for problems.
"An obvious failure of the plan is if you go back to the site and things still are missed," Below says.
"The major responsibility or implementation [of a CAPA] is with the principal investigator at the site," Below says. "But the project manager's responsibility also is to do ongoing monitoring to make sure preventive plans are working."
Project managers should be able to assess problems from all sites in a study and note any trends, he adds.
"They need to look at how sites are implementing corrective plans and determine whether the corrective and preventive action plan is sufficient, Below says.
"If it's not sufficient, they need to step in with a more robust plan, and good project managers do this intuitively," he says.
CR monitors also need to keep better documentation about CAPA plans.
"Monitors are good at describing deficiencies, but they're not so good at describing corrective and preventive actions they've taken," Below says.
4. Find workable strategies for ongoing monitoring.
The mechanics of monitoring need some adjustment, Below suggests.
"A number of sponsors have gone to an approach now where they're not monitoring all of the source documents, not 100 percent," he says. "They're only choosing a number of randomly selected patients and doing monitoring based on those patients."
This is a cost-saving strategy, but it poses risks, Below adds.
"When you take an approach like that and decide that monitors are only going to look at a selected set of data, then the project manager needs to define, in advance, what the error rate will be before you automatically have the monitor look at more documents," Below says. "This requires you to have a plan in advance."
The error rate might be the number of data queries generated by the site, which would include monitoring deficiencies noted on every visit, Below says.
"When those exceed a certain level, then you'd have the monitor stay on the site, look at more data, and monitor more frequently," Below says.
"A lot of companies that do the random sample monitoring approach do have policies where, if needed, the monitor can look at more records," Below notes. "But they don't define when they will implement that."
Another strategy for improving monitoring is to have it occur early during a trial's enrollment and to make this a best practice, Below suggests.
"Many sponsors, but not all, have adopted this approach, but a lot I've worked with have not," he says.
"After the first one or two people have enrolled, that's when the CRA has to get out there and monitor the study," Below says. "They need to look at the documentation, and if there are going to be problems, then you'll typically see them early on."
Lastly, CRAs need to do a better job of documenting and communicating the issues they've found so that sponsors really understand what's going on, Below says.