Websites allow studies to cast a wide net for subjects

Amazon's Mechanical Turk used by researchers

Researchers go to all sorts of lengths to attract participants for surveys and other types of non-clinical research — recruiting Psych 101 students, posting fliers, handing out gift cards, etc. But a new method of recruitment takes advantage of an existing Internet trend toward outsourcing tasks to thousands of computer users around the world.

"Crowdsourcing" allows a requester to put out an open call for help with a particular task that requires human intelligence and therefore cannot be done by computer programs alone. These tasks are often mundane — tagging images with identifying information, filtering out obscene photos and comments from websites, transcribing audio to text, posting reviews.

Increasingly, however, investigators are exploring the use of crowdsourcing to cast their nets for subjects more widely than they can within the confines of their institution, or even their institutional websites. They do so by posting surveys and other studies on crowdsourcing sites alongside other, non-research-related tasks. The most popular of these sites is Amazon's Mechanical Turk, which bills itself as an open marketplace for work requiring human intelligence.

According to the account posted on the Amazon website, the name comes from a chess ruse concocted in 1769 by Hungarian nobleman Wolfgang von Kempelen, who built a mechanical chess-playing "Turk" automaton that defeated nearly every opponent it faced. Kempelen convinced people he had built a machine that made decisions using artificial intelligence, but in reality a concealed human chess master made the moves. "Humans still significantly outperform the most powerful computers at completing such simple tasks as identifying objects in photographs — something children can do even before they learn to speak," Amazon notes.

Mechanical Turk brings together workers, known as "Turkers," and requesters. Requesters post Human Intelligence Tasks (or HITs), and Turkers fulfill them. Once the requester is satisfied with the work, he or she releases money from an account to the worker's account — usually only a few cents per task.

Amazon holds workers' personal information in order to pay them, but does not pass it along to requesters.
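The workflow described above — a requester posts a HIT, an anonymous worker submits the task, and payment is released on approval — can be sketched as a simple model. This is an illustrative simulation only; the class and method names are hypothetical and are not Amazon's actual Mechanical Turk API.

```python
from dataclasses import dataclass, field

@dataclass
class HIT:
    description: str
    reward_cents: int            # typically only a few cents per task
    submissions: list = field(default_factory=list)

@dataclass
class Marketplace:
    """Mediates between requesters and workers, keeping worker
    identities hidden from requesters (as Amazon does)."""
    balances: dict = field(default_factory=dict)
    hits: list = field(default_factory=list)

    def post_hit(self, requester, description, reward_cents):
        hit = HIT(description, reward_cents)
        self.hits.append(hit)
        return hit

    def submit_work(self, worker_id, hit, answer):
        # The requester sees only the submitted answer, not the
        # worker's personal information.
        hit.submissions.append((worker_id, answer))

    def approve(self, requester, hit, submission_index):
        # On approval, payment moves from the requester's account
        # to the worker's account.
        worker_id, _ = hit.submissions[submission_index]
        self.balances[requester] = self.balances.get(requester, 0) - hit.reward_cents
        self.balances[worker_id] = self.balances.get(worker_id, 0) + hit.reward_cents

# A hypothetical translation HIT of the kind described later in this story:
mturk = Marketplace()
hit = mturk.post_hit("lab", "Translate 'bonjour' into English", reward_cents=5)
mturk.submit_work("worker-42", hit, "hello")
mturk.approve("lab", hit, 0)
```

After approval, the worker's balance is credited five cents and the requester's is debited the same amount, while the requester never learns who the worker is.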

This ability to attract potentially thousands of anonymous responses for minimal cost has attracted the interest of researchers such as Chris Callison-Burch, PhD, an associate research professor in the computer science department at Johns Hopkins University in Baltimore.

Callison-Burch works in the field of computational linguistics, on the problem of automatically translating from one language to another. He has posted a number of HITs on Mechanical Turk asking workers to translate words and phrases between languages. Despite the minute cost he pays per task, he says he now spends thousands of dollars a month conducting research on the site.

He calls Mechanical Turk "a really amazing tool for performing experiments."

"When you compare it to the experiments I encountered in my undergraduate days — such as taking psychology classes and being roped into being a subject for grad students — the scale at which you can do these is much, much larger."

New to IRBs

Terrell Russell, MS, a PhD student and research assistant at the School of Information and Library Science at the University of North Carolina at Chapel Hill, is likewise a convert to the greater efficiency of using Mechanical Turk for research.

"It's cheaper — and it's way faster," he says. "I don't have to have them come into the lab. I don't have to talk to them on the computer. The coordination cost is basically zero. The transaction costs are basically zero.

"For the data on the one survey that I ran, we spent $35 and we had over 2,000 responses from 278 Turkers."

Before using Mechanical Turk for their studies, both Russell and Callison-Burch had to introduce the concept to their respective IRBs, neither of which had dealt with it before.

Callison-Burch says he tapped colleagues at other institutions who had used Mechanical Turk in order to draw up his research proposal. He asked for and received exempt approval after some e-mail exchanges with the IRB.

"They were just asking for more detail," he says. "I believe even one of the board members called me up to ask for a bit of clarification about what Mechanical Turk does."

Russell, who was studying the willingness of subjects to disclose personal information online, also applied for exempt status from his IRB.

"It came back, 'Nice try,'" he said with a laugh. "The response was, 'Well, this sounds like people to us and we're not going to let that slide without further investigation.' For the most part, (the proposed study) went through once we conveyed clearly to their satisfaction that we were not holding personal information and we were not doing any manipulation (of subjects)."

Panos Ipeirotis, PhD, an associate professor at the Leonard N. Stern School of Business at New York University in New York City, has studied crowdsourcing and user-generated content on the Internet. In his blog, "A Computer Scientist in a Business School," he's looked at the phenomenon of Mechanical Turk and its various uses, including research.

Ipeirotis says the anonymity of the subjects is one great advantage of the system, from the perspective of IRB review.

But he says IRBs have asked questions about how the data are kept secure by Amazon, and about other issues related to the confidentiality and payment of subjects (see accompanying story).

When to seek IRB review

Callison-Burch suspects that IRBs will begin receiving more proposals from researchers who want to use Mechanical Turk but have never previously dealt with human subjects.

"There are a lot of computer scientists in particular who definitely aren't used to dealing with human subjects," he says. "A large class of computer science research at the moment is data-driven machine learning and for many of our applications we just deal with existing or found data. But I think (Mechanical Turk) really transforms how I as a computer scientist who works with machine learning applications can do things."

One tricky area to navigate will be at what point a task ceases to be human subjects research and becomes something more like engaging outsourced staff.

Callison-Burch acknowledges that the line can be blurry. For his own linguistic work, some professors he consulted questioned whether he needed to apply to the IRB at all. "I did it just to be on the safe side," he says. "You can gather any type of information using Mechanical Turk. It may be that certain studies are much more squarely within the purview of the IRB than others."

Ipeirotis says the rules for research on Mechanical Turk aren't any more complicated than they are for research outside it.

"If you're asking things about codes or the state of the world, asking them to tell you whether something is blue or red, you're not asking them about personal opinions so in that case an IRB is not required," he says. "An IRB is required whenever you are asking people to do an experiment where the outcome depends upon their own personal state."

Russell says that in addition to the survey he conducted on Mechanical Turk, he also gathered responses using more traditional research methods.

One thing that makes Mechanical Turk so enticing to researchers — its protection of anonymity — also makes it difficult to know who the Turkers really are. Ipeirotis has done some demographic research on Turkers, posting the results on his blog. They tend to be younger, between the ages of 21 and 35. Most are from the United States or India.

Russell says that as we know more about Turkers and whether their responses are generalizable to the greater population, interest in this research method will continue to grow.

"If their reliability comes out to score well (compared to traditional research populations), then there's going to be a lot more people who will consider it legitimate research and a lot more pressure to use it, because it's cheaper and faster."