When does clinical care trump trial protocol?
Good intentions can lead to bad results
Occasionally a physician investigator will be torn between the need to adhere to the clinical trial protocol and the desire to help a patient/subject obtain a better medical outcome. The question is: Is it ever right to follow that desire and vary from the protocol?
The answer, a research ethics expert says, is probably never, unless the principal investigator and/or the sponsor agree to the change.
"There are many times in trials where the clinician decides that something else needs to be done, and, in consultation with the sponsor or PI, they agree to remove the subject from the trial," says Charles W. Lidz, PhD, a research professor in the department of psychiatry at the University of Massachusetts Medical School in Worcester, MA.
The problem is when it is done covertly, Lidz says.
"This is an issue that people who are implementing clinical trials have, and we need to address it and not insist that it doesn't happen," he adds.
"I became interested in this issue when I was doing research on therapeutic misconception," Lidz says. "I kept running into clinicians who were saying to me, 'There's no therapeutic misconception because we always take the patient's point of view and meet the patient's needs.'"
Then clinician investigators gave Lidz a number of examples of how they'd override the protocol in the interest of the patient's medical care.
This is an alarming practice, he notes.
"If we violate the rules of the protocol in order to provide good clinical care for the subjects, then I believe that's a threat to the validity of the trial," Lidz says. "And how do you fix that?"
On the other hand, some clinical trials are designed so rigidly that it's difficult for physician researchers to manage the care properly, he notes.
"I've certainly talked to lots of people who tell me, 'In general of course, I always stick to the protocol,'" Lidz says. "But then they tell me about a specific case, and they didn't stick to the protocol that time because the patient needed such and such."
Lidz also has met clinical researchers who will not recruit people who meet the full study criteria if the subjects already are doing well on conventional treatment.
"That's fine, but the problem is we're no longer testing the intervention in everybody who meets the criteria," Lidz says. "We're testing it in people who have failed on conventional treatment, and that's a biased sample."
It would be fine if a protocol were designed to study only people who have failed on conventional treatment, but it's deceptive when investigators make that decision on their own, without the sponsor knowing.
"Often those decisions are not visible, and the people who write up the trial are not aware of the bias in the sample," Lidz says.
For example, suppose a sponsor is studying the efficacy of a new drug for treating diabetes. The study is supposed to enroll anyone with the disease. But instead of having a general diabetic population, physicians receive referrals and enroll only people who have glucose levels that are uncontrolled by current interventions, Lidz says.
When the diabetes study's results are analyzed, the intervention group may look considerably better than the control group receiving conventional treatment, since most of the people in the trial had already failed on that treatment, he explains.
This means the study's results are a poor indication of how well the new drug performs compared with conventional treatment in general. The only comparison made was between the new drug and conventional treatment in people who had already done poorly on conventional treatment, he adds.
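The sampling problem Lidz describes can be illustrated with a small simulation. This is a sketch using made-up numbers, not data from any real trial: assume conventional treatment and the new drug each control glucose in 60% of diabetics overall. If clinicians refer only patients whose glucose is uncontrolled on conventional treatment, the control arm is stacked with known non-responders, and the new drug's apparent advantage is inflated.

```python
import random

random.seed(0)

# Hypothetical rates for illustration only: conventional treatment controls
# glucose in 60% of all diabetics, and the new drug controls glucose in 60%
# of patients regardless of their prior response.
CONVENTIONAL_RATE = 0.60
NEW_DRUG_RATE = 0.60

def apparent_advantage(enroll_only_failures: bool, n: int = 20_000) -> float:
    """Difference in glucose-control rates (new drug minus conventional)
    under a given enrollment rule."""
    successes = {"new": 0, "conv": 0}
    counts = {"new": 0, "conv": 0}
    for _ in range(n):
        controlled_on_conventional = random.random() < CONVENTIONAL_RATE
        if enroll_only_failures and controlled_on_conventional:
            continue  # the referring clinician never enrolls patients doing well
        # randomize enrolled patients 50/50 between arms
        arm = "new" if random.random() < 0.5 else "conv"
        counts[arm] += 1
        if arm == "new":
            successes[arm] += random.random() < NEW_DRUG_RATE
        else:
            # control-arm patients keep their response to conventional care
            successes[arm] += controlled_on_conventional
    return successes["new"] / counts["new"] - successes["conv"] / counts["conv"]

print(round(apparent_advantage(enroll_only_failures=False), 2))  # near 0.0
print(round(apparent_advantage(enroll_only_failures=True), 2))   # near 0.6
```

With the full population enrolled, the two arms perform about equally. When only conventional-treatment failures are referred, the control arm's success rate collapses to zero and the new drug appears roughly 60 percentage points better, even though it is no more effective than conventional treatment in the general population.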
"People implementing these trials typically are people who have a commitment to their patients, and we can't ignore that potential bias when we design clinical trials," Lidz says. "We need to appreciate that they may have a default way of doing things to match the patient's needs."
This type of bias can be managed through staff education about protocol adherence, increased vigilance over study documentation, and by considering the potential for bias in study design, he says.
"If we try to ignore the existence of these problems, then we just get into invalidating general data," Lidz says. "To me that's a big problem."
Another example of this type of problem involves an open trial in which a patient asked his physician to be put on the investigational medication because he had already failed on the control drug.
"The clinician told the patient, 'Look, if you don't get the experimental medication you can always drop out of the trial,'" Lidz recalls.
While one can understand a physician's desire to obtain the latest and possibly best treatment for his or her patients, these potential treatments need fair and honest clinical trials before they can be considered the best treatment, Lidz says.
"The key issue is that when one designs trials one has to think about how to design them in such a way that clinicians are most comfortable with the trial," he says. "Also, one has to work at educating the people implementing them to make sure they understand that this trial is not for the purpose of making a sponsor or investigator feel good or to improve their resumes."
The purpose is to obtain the best answer to the study question, and there is no sense in going through the time-consuming and expensive clinical trial process if someone involved in the trial is going to mess up data, he adds.
"I'm all for patient advocacy, but we're putting time, money, energy, and to a certain extent a patient's risk to gathering these data, and if data are incorrect, then it's an awful waste," Lidz says.