Direct-to-consumer wellness products, location-tracking apps, and access to personal data on social networks present both exciting opportunities and significant ethical concerns for researchers.
“The digital revolution is rapidly influencing how health research is conducted. We can now passively observe and record people ‘in the wild’ and 24/7,” says Camille Nebeker, EdD, MS, founder and director of UC San Diego’s Research Center for Optimal Data Ethics.
The use of artificial intelligence and active assisted living robots in the health sector also is increasing. “While there is amazing potential, the digital health ecosystem is not consistently regulated. We are in the Wild West of digital health research,” says Nebeker.
The authors of a recent paper propose steps the scientific community can take to ensure social media data are used ethically.1 The paper was prompted in part by the recent Cambridge Analytica scandal, involving allegations that the firm used data improperly obtained from Facebook to build voter profiles.
“Many of my colleagues are conducting research using social media platforms,” says Nebeker, one of the paper’s authors.
Institutional review boards (IRBs) and researchers are struggling to navigate this new territory, sometimes unsuccessfully.
“When something goes wrong, as it did with Cambridge Analytica, it compromises public trust and jeopardizes research that is in progress,” says Nebeker.
The following are two central ethical concerns:
• Researchers may need to cover additional information during the informed consent process.
Commercial products — such as fitness tracking devices — are used as measurement tools. This means privacy policies and terms of service should be considered.
“These terms might influence the study risk assessment,” explains Nebeker. Potential research participants also need to factor in this information to make informed decisions.
“In many cases, the terms of service directly conflict with the federal regulations for human subjects protections in that a participant, if harmed by the product, must agree to arbitration,” notes Nebeker.
• Not all tech companies comply with federal requirements for research.
Federal regulations for human subjects protections must be followed if research is funded by the U.S. Department of Health and Human Services. However, many tech companies that are involved in biomedical research are not regulated. “We need to develop common standards that govern digital health research,” says Nebeker.
Researchers using social media data are operating in an unregulated environment. Thus, there is growing concern about potential harms. "This is another case of how technology has evolved faster than regulations," says Sherry Pagoto, PhD, director of the University of Connecticut Center for mHealth and Social Media.
Privacy breaches are possible — intentional or not. “This poses risks to everyone involved: researchers, social media companies, and, most importantly, the general public,” says Pagoto.
For example, few Twitter users are aware that public social media posts can be used by researchers.2 Notably, the majority believe that researchers should not be able to use their tweets without consent. Also, users of commercial products do not always understand privacy implications.
“We cannot fault them, though. These policies are very lengthy and written in ways that are difficult to understand,” says Pagoto.
The following changes are needed, according to the study authors:
• Public education on the research performed with social media data, why it is important, and how researchers protect user privacy.
"Consultation with an expert in health tech ethics is critical for researchers who want to be proactive and diligent about human research protections," says Nebeker.
Stakeholders — including researchers, IRBs, potential participants, and policymakers — may not be fully aware of how data are collected, used, or shared by social media platforms. "This lack of knowledge will influence risk assessment and information included in the informed consent process," notes Nebeker.
• Federal regulations on the use of social media data in research.
"We can anticipate that the technology and research landscape will only continue to evolve, and rapidly," says Pagoto. IRBs rely on federal regulations for guidance on the ethical conduct of research. These regulations are outdated as they pertain to the use of data generated by new technologies like social media. Thus, says Pagoto, "universities, funders, and researchers need to be more vigilant about potential harms and begin to craft guidelines for the purpose of self-policing. We need a code of conduct."
• “Tech ethicists” working alongside researchers as they attempt to use social media data.
Someone with tech ethics expertise could comment on the ethical implications specific to technology used in studies and conduct training for clinicians. “It would also be useful for these folks to advise on grant applications, even serving as consultants or co-investigators,” says Pagoto.
Someone on the IRB could take on this role. “But if limited expertise is available on campus, external expertise should be commissioned,” says Pagoto.
IRBs also should have the expertise to properly review social media research. “Adequately attending to research ethics will require an investment,” says Pagoto. “We want to nudge institutions to make this investment.”
1. Pagoto S, Nebeker C. How scientists can take the lead in establishing ethical practices for social media research. J Am Med Inform Assoc 2019; 26(4):311-313.
2. Fiesler C, Proferes N. “Participant” perceptions of Twitter research ethics. Social Media + Society, March 10, 2018. Available at: http://bit.ly/2valr7G.