By Melinda Young
IRBs can help investigators create a plan to prevent survey security breaches that can lead to false data and study slowdowns and shutdowns.
“Most of the breaches we’ve seen have been in relation to how they’re recruiting in paid marketing mechanisms, like paid Facebook or Google ads, and the link to the survey is being distributed inappropriately,” says Danielle A. Griffin, EdD, CIP, associate director of research integrity and oversight at the University of Houston. “The wrong people are accessing the survey.”
Ensure Researchers Report Breaches
The first question could be whether investigators are identifying and reporting breaches to the IRB. “A lot of investigators come to the IRB for help because they don’t know how to deal with it,” she says. “But some investigators may handle it and not report it to the IRB.”
IRBs should ensure researchers know to report any breach that changes or corrupts data, allows someone outside the research team to access data, causes potential harm to participants, or requires a change in procedures or informed consent.
Create a Survey Security Plan
Researchers can prevent survey hacking issues by creating a thorough plan for preventing and identifying security breaches. IRBs can ask researchers:
- How are you targeting study participants?
- Where are you targeting messages to find participants?
- Are there sufficient inclusion criteria and screening questions?
- Do you perform an extra check during the survey and after completion?
- Once someone completes the survey, do you send another link to verify the person is real?
- Will you check participants’ email addresses to identify patterns of multiple, similar, or same email addresses?
- If you are targeting one gender in the survey, do participants’ names suggest someone of another gender has completed the survey?
Some investigators will offer a drawing for survey participants. This type of incentive may be less likely to attract fraudulent responses. But it also could reduce the number of participants enrolling in surveys that require more time answering questions, says Lorraine R. Reitzel, PhD, FAAHB, FSRNT, IRB chair and professor at the University of Houston.
“Eliminating the incentive would eliminate potential motivation to be fake,” Reitzel explains. “However, we often get grants where there are incentives built in. This is the way we know how to do it as researchers to get the most people interested in completing your study.”
The COVID-19 pandemic has led to more online research work, including virtual meetings and participant interviews. This shift has opened these activities to breaches, says Shahnjayla K. Connors, PhD, MPH, CPH, assistant professor of health and behavioral science at the University of Houston-Downtown.
“The challenge, as we get digital and focus on Zoom and online work, is there are many more ways to [end up with] fraudulent information,” Connors says. For example, virtual meetings between investigators and participants can be hacked, as in Zoom-bombing, she adds.
As more people use these electronic meeting places, the companies are adding more security protections, such as unique links, Connors notes.
Connors, Reitzel, and Griffin offer these additional suggestions for keeping survey studies safe from hacks and data breaches:
• Prevent bots. Bots are easy to spot because they often use the same email addresses, locations, and other identifiers.
“Even if you can’t verify the first entry, all subsequent entries you can assume are not a human being because of replicate data,” Griffin says.
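The replicate-data pattern Griffin describes can be screened for automatically. The sketch below is a minimal, hypothetical example of flagging repeated or near-identical email addresses (the function name, the digit-stripping heuristic, and the flagging threshold are illustrative assumptions, not part of any named survey platform):

```python
import re
from collections import Counter

def flag_suspicious_emails(emails):
    """Flag entries whose email addresses repeat or follow near-identical
    patterns (e.g., jsmith1@, jsmith2@), a common bot signature.
    Hypothetical helper -- the normalization rule is illustrative."""
    stems = []
    for email in emails:
        # Lowercase and strip trailing digits from the local part so
        # "user1@x.com" and "user2@x.com" collapse to the same stem.
        local, _, domain = email.lower().partition("@")
        stems.append(re.sub(r"\d+$", "", local) + "@" + domain)
    counts = Counter(stems)
    # Any entry sharing a stem with another entry is set aside for review.
    return [email for email, stem in zip(emails, stems) if counts[stem] > 1]
```

Flagged entries would then be reviewed by a person rather than discarded automatically, since legitimate respondents can occasionally share similar addresses.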
One simple tactic is to add reCAPTCHAs, fraud detection methods that range from asking someone to check a box that says “I’m not a robot,” to typing in the letters they see, to clicking on the traffic lights or crosswalks in a nine- or 12-square photo grid.
“A reCAPTCHA is something we know reduces the chances that someone is putting in fraudulent information,” Connors says.
IRBs can request researchers include reCAPTCHAs, Reitzel says.
• Limit survey participation by location and link. If researchers plan to enroll only people in a particular state, the survey can be set up to receive responses only from individuals in that state. “For national studies, we could exclude people from other countries, but that will not remedy all of our issues,” Reitzel says.
Researchers also should send each verified participant a unique link to the survey. This can prevent sharing of the survey link with people who are not eligible for the study and only want to receive the incentive payment.
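Many survey platforms offer unique-link features for exactly this purpose, and those built-in tools should be preferred. As a rough illustration of the idea, the hypothetical sketch below issues one unguessable link per verified participant (the function name and `base_url` are placeholders):

```python
import secrets

def issue_survey_links(verified_emails, base_url="https://example.org/survey"):
    """Generate a single-use survey link per verified participant.
    Illustrative only -- real platforms provide their own unique-link
    features, which track completion and block reuse."""
    links = {}
    for email in verified_emails:
        token = secrets.token_urlsafe(16)  # unguessable per-person token
        links[email] = f"{base_url}?t={token}"
    return links
```

Because each token is tied to one verified person, a shared or reposted link can be invalidated without affecting other participants.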
• Ask questions with only one answer or open-ended answers. “We’ve tried adding an honesty pledge to our survey,” Reitzel says.
A survey might ask participants to verify their location, employment, or some other information that pertains to eligibility. If the survey is answered by a bot, it may have both “yes” and “no” answers when everyone who meets eligibility requirements could only answer “yes.”
“If it’s a bot providing random answers, and if you don’t put in ‘yes,’ then you’re taken to the end of the survey,” Reitzel explains.
If people are conducting multiple hacks to obtain the incentive payment, they usually do not take time to go through the survey and answer carefully. “At some point, they’ll answer something that doesn’t make sense,” Griffin says. “If it’s an open-ended question, the chance of someone thoughtfully answering open-ended questions is less. They’ll hack something easier to get through, so having more open-ended questions can be a hard stop.”
A similar tactic is to include a question for which the instructions say to mark a particular letter. If either bots or people are randomly entering letter answers, they may miss this instruction. That could mean their survey is discarded.
“Set up the survey so there are validity questions that need to be answered,” Griffin suggests. “These are going to be the checks that help determine which survey is real and which is not.”
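The validity checks described above, eligibility items that every real respondent can only answer one way, plus an instructed-response item, can be applied as a simple screening pass. In this hypothetical sketch, the field names (`confirms_location`, `confirms_employment`, `attention_check`) and expected values are assumptions standing in for a study's actual instrument:

```python
def passes_validity_checks(response):
    """Screen one survey response (a dict) against embedded validity checks.
    Field names and expected values are illustrative -- adapt to the
    actual instrument. Returns False for responses to set aside."""
    # Eligibility items: every eligible human respondent can only answer
    # "yes"; a bot answering at random eventually contradicts itself here.
    eligibility_items = ["confirms_location", "confirms_employment"]
    if any(response.get(item) != "yes" for item in eligibility_items):
        return False
    # Instructed-response item, e.g., "For this question, mark C."
    # Random clicking misses this roughly three times out of four.
    if response.get("attention_check") != "C":
        return False
    return True
```

Responses that fail would typically be reviewed before discarding, since an inattentive but genuine participant can also miss a check.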
• Perform random checks. “Having built-in random checks throughout the study is one of the easiest ways to [prevent breaches],” Griffin says.
“For every survey coming in, I looked at them and made sure they were OK,” Connors adds. “Most of the time, you get a couple at a time.”
If researchers expect many survey submissions, random checks would help identify breaches and problems. Investigators can inform colleagues of questionable surveys and ask if they also believe a breach occurred.
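For high-volume studies where reviewing every submission is impractical, the random checks described above can be drawn programmatically. The sketch below is a minimal illustration; the 10% review rate is an assumed default, not a recommendation, and small studies may simply review everything:

```python
import random

def sample_for_review(responses, rate=0.1, seed=None):
    """Draw a random subset of submitted surveys for manual review.
    Illustrative helper -- the rate is an assumption; a fixed seed
    makes the draw reproducible for audit purposes."""
    rng = random.Random(seed)
    k = max(1, round(len(responses) * rate))  # always review at least one
    return rng.sample(responses, k)
```

Passing a fixed `seed` lets a study team document which submissions were selected, so a later auditor can reproduce the same review sample.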