Make it a top goal to improve data collection
Capture data more efficiently
Whether an IRB relies on paper files, is entirely electronic, or uses a combination of both, IRB staff can improve the office’s efficiency. The simplest solution is to collect and use data wisely, an expert says.
"Every IRB collects data, although they don’t necessarily recognize that they’re doing this," says Melissa Epstein, PhD, CIP, IRB administrator at Albert Einstein College of Medicine of Yeshiva University in Bronx, NY.
Even IRBs that keep paper files can collect basic data about the number and types of studies they receive annually, she adds.
Some basic information IRBs can collect appears on the first page of the study submission form, including the principal investigator’s name and department, the type of submission, the topics covered, and whether the study involves biobanking, Epstein says.
The real issue is: What will the IRB do with the collected data?
"And how could we collect it better?" Epstein asks. "Now that we recognize that we have information and are collecting it well, how can we measure what we do and feed that back into improving IRB performance?"
The more an IRB integrates its information systems with its workflow, the better it can address the needs of its research community, she adds.
"Data can be used to help you place your resources," Epstein says.
For example, if an IRB finds that only a small number of principal investigators submit more than one or two studies per year requiring full board approval, then this information can be used to prioritize education and training, she says.
"You might want to designate a staff member to design a training manual because you won’t have the resources to train each investigator individually, or you may want to reach out to research coordinators and provide group training," she adds.
Epstein offers some strategies for improving data collection and analysis, including the following:
1. Collect categorical data rather than continuous data.
Categorical responses come from a form that gives the user a list of possible answers; this data type can be readily analyzed. Continuous responses are fill-in-the-blank answers, which are far more difficult to compile and analyze, Epstein explains.
"If you have a spreadsheet and want to know how many studies you receive per department, then you don’t want people to type out the department’s name because they could write ‘cardiology’ or ‘heart disease’ or 100 different ways of describing the department," she says. "What you need is a pull-down list with department names to select."
"If you have free responses, these may be really informative or interesting from a review perspective, but they’re useless from a data perspective," Epstein says. "If you want to hold onto information and analyze it, then you need to collect it as a categorical question."
Most electronic IRB submission systems are able to query any field, but someone on the IRB needs to know how to make the system do this, she says.
"Someone needs to know about data analysis, and some systems make this easier than others," Epstein adds.
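The value of a pull-down list over free text can be seen in miniature with a short script (a hypothetical sketch; the department names and entries are illustrative, not drawn from any actual IRB system):

```python
from collections import Counter

# Free-text entries: the same department described several ways,
# so a simple tally fragments the counts into "different" departments.
free_text = ["Cardiology", "cardiology", "Heart Disease", "Cardio Dept."]
print(Counter(free_text))  # four distinct labels for one department

# Categorical entries: the form offers a fixed pull-down list,
# so every submission uses the same label and counting just works.
DEPARTMENTS = ["Cardiology", "Oncology", "Pediatrics"]
categorical = ["Cardiology", "Cardiology", "Oncology", "Cardiology"]
assert all(d in DEPARTMENTS for d in categorical)
print(Counter(categorical))
```

The same contrast applies whether the data live in a spreadsheet column or a database field: validated categorical values aggregate cleanly; free responses do not.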
2. When collecting data, ask one question at a time.
For instance, an IRB submission form may include a categorical question of whether the study involves biobanking. It is not a good idea to place a continuous question, such as "If yes, describe your mechanisms for data confidentiality here," in the same box, Epstein advises.
"If you put all of that in one box, people often don’t answer the entire question," she says. "People try to load as many questions as possible in one space, but this can be disastrous."
The form’s users might respond to only three of the five questions asked when the questions are compounded in one space, she adds.
"If you have one question at a time, some will lend themselves to analysis and others won’t," Epstein says. "If you write these questions as 1a through 1e, then it’s much more likely that people will answer all five of them."
3. Know your institution’s data collection interface capability.
"Hospitals and universities tend to have all sorts of in-house systems with information about patients, but getting the systems to communicate is not always easy," Epstein says.
When an institution improves its electronic system interfacing, it will make the IRB’s and researchers’ work easier and more efficient, she adds.
"If you have an IRB database and are generating information from the meeting minutes, can you generate it from information the principal investigator has already put into the system?" Epstein says. "Can you have the principal investigator do some of the work for you?"
For example, Epstein asks investigators to write the information about their protocols exactly the way they’d like to see it appear in official IRB documents. Once they type it into the field the way they want, then every letter the IRB sends out to the PI will have the information listed in that exact same wording, she says.
"Then it’s not my responsibility to type it in the way they want it anymore," Epstein says.
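Epstein’s reuse-what-the-PI-typed approach amounts to simple template filling: the investigator enters the wording once, and every letter pulls from that same field. A minimal sketch, with hypothetical field names:

```python
# Hypothetical record: the PI enters the protocol title and their own
# name exactly as they want them to appear in official IRB documents.
protocol = {"pi_name": "Dr. A. Rivera", "title": "A Hypothetical Phase II Trial"}

# Every outgoing letter reuses those fields verbatim, so IRB staff
# never retype (or mistype) the investigator's preferred wording.
LETTER_TEMPLATE = 'Dear {pi_name},\nYour protocol "{title}" has been reviewed.'
letter = LETTER_TEMPLATE.format(**protocol)
print(letter)
```

The design choice here is the point of Epstein’s example: whoever cares most about the wording enters it once, and the system propagates it everywhere.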
4. Use data to better dedicate staff time.
Collecting data on how long it takes to complete a study review is one option, but there are even easier metrics to collect that can be just as useful, Epstein says.
"You should know how many new protocols you get in a year," she adds. "That’s very, very informative."
With these data, for instance, an IRB administrator might be able to make the case to institutional leadership that the IRB office needs more staff and resources, she says.
"A lot of IRBs don’t know how many amendments they get because maybe they do amendments in a haphazard way, and nobody understands why the IRB worker is behind," she says.
IRBs also should know what their investigators’ research portfolios look like so they can prioritize education and training, Epstein suggests.
"If you get a Veterans Administration application once every 10 years, then you shouldn’t worry about those until one comes in," she adds. "But if you see a department of education study four times a year, then you may want to learn those education department regulations."
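The baseline counts described above need nothing more than a dated log of submissions by type. A minimal sketch, using made-up entries:

```python
from collections import Counter
from datetime import date

# Hypothetical submission log: (date received, submission type).
submissions = [
    (date(2023, 1, 12), "new protocol"),
    (date(2023, 3, 4),  "amendment"),
    (date(2023, 7, 19), "new protocol"),
    (date(2024, 2, 2),  "amendment"),
]

# Tally submissions by year and type -- the basic workload metric
# even a paper-based office can keep.
per_year = Counter((d.year, kind) for d, kind in submissions)
print(per_year[(2023, "new protocol")])
```

The same tally, extended with department or funding-agency fields, shows which regulations (VA, Department of Education, and so on) are worth staff training time.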
5. Design data collection to deconstruct certain information.
It’s important to track the time between submission of a study protocol and the full board review or full board approval, Epstein says.
"You can do that on an Excel spreadsheet, and it’s not elaborate or nuanced," she says. "If you have an electronic database, you should at the very least be achieving that."
With a sophisticated database, an IRB could measure specific lengths of time in the study review process.
For instance, the IRB could measure the precise amount of time that the study was in the principal investigator’s office while the IRB waited for a response to questions, Epstein says.
"You can measure how much time it sits with the IRB and how much with the investigator and how much with the IRB staff," she says.
"A lot of these metrics are not available because they require a fine-grained level of tracking information, but if you can find these data it’s great," she adds.
With information that is broken down into specific chunks, an IRB might learn that a specific IRB reviewer is taking a very long time reviewing studies. Or the IRB might discover that protocols submitted to the IRB on Thursday will take a week longer than others because the intake clerk only checks the pile on Wednesdays, Epstein says.
"The more fine-grained you can get the information, the more likely you can identify problems," she adds.
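Measuring how long a protocol sits with each party only requires recording a timestamp each time the file changes hands. A minimal sketch with an illustrative timeline (the dates and stage names are hypothetical):

```python
from datetime import date

# Hypothetical review timeline for one protocol: each entry marks an
# event, its date, and who holds the file until the next event.
timeline = [
    ("submitted",            date(2024, 1, 2),  "IRB staff"),
    ("questions sent to PI", date(2024, 1, 9),  "investigator"),
    ("PI responded",         date(2024, 1, 23), "IRB staff"),
    ("full board approval",  date(2024, 1, 30), None),
]

# Sum the days the protocol spent with each party between events.
days_with = {}
for (_, start, holder), (_, end, _) in zip(timeline, timeline[1:]):
    days_with[holder] = days_with.get(holder, 0) + (end - start).days

print(days_with)
```

Aggregated across protocols, the same breakdown surfaces patterns like a slow reviewer or a weekly intake bottleneck of the kind Epstein describes.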