Wygant 2005 finds that an incentive (in this case, entry into a drawing for multiple $100 awards) consistently improves response rates in the context of a college campus. Across three modes (paper, phone, and web), response rates were consistently higher when the incentive was mentioned.

Goritz 2006a conducted an experiment (or really, a series of six experiments) focused exclusively on the use of cash lotteries as incentives in online panels. These experiments involved a control group to which no incentive was offered, as well as two versions of the cash lottery incentive: one in which the total payout of the lottery was mentioned as a single prize, and another in which the lottery was presented as divided into multiple smaller prizes. Goritz points out that one advantage of offering incentives in the form of a lottery is that expenses are capped, as the total prize money to be awarded is determined in advance and does not depend on the response rate for the panel (unlike per capita awards). She cites previous meta-analyses (Goritz 2006b) indicating that lottery incentives significantly increase response (OR=1.19) and retention (OR=1.26). However, she also notes that the studies behind this finding used a number of types of lotteries, of which the cash lottery was only one. She cites a number of other studies, with lottery incentives ranging from $10 to $50 (in groups of tiered prizes), which show an extremely wide range of effects on retention (OR between .98 and 2.60). She sums up the evidence as showing that cash incentives tend to increase response rates in Web-based studies that do not involve panels. The present study examines panels specifically, addressing two hypotheses:

  • 1: Panelists included in a cash lottery as part of a study are more likely to respond to this study than are panelists who are not included in a cash lottery. That is, the response rate is higher with a cash lottery than without any incentive.
  • 2: Panelists included in a cash lottery as part of a study are more likely to stay with the study until the end than are panelists who are not included in a cash lottery. In other words, the retention rate is higher with a cash lottery than without any incentive.

She also addresses the question of the way in which the cash award should be distributed with two additional hypotheses:

  • 3: Raffling one big cash prize rather than splitting the lottery into several smaller prizes influences invitees’ likelihood of responding to a study.
  • 4: Raffling one cash prize rather than splitting the lottery into multiple prizes influences respondents’ likelihood of staying until the end of a study.
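To make the meta-analytic odds ratios cited earlier (OR = 1.19 for response, OR = 1.26 for retention, from Goritz 2006b) concrete, they can be converted into rate terms for an assumed baseline. A quick sketch; the 30% baseline is an illustrative assumption, not a figure from the studies:

```python
def apply_odds_ratio(baseline_rate, odds_ratio):
    """Convert a baseline rate plus an odds ratio into the implied new rate."""
    odds = baseline_rate / (1 - baseline_rate)  # rate -> odds
    new_odds = odds * odds_ratio                # apply the odds ratio
    return new_odds / (1 + new_odds)            # odds -> rate

# Assumed 30% baseline; ORs from the cited meta-analysis
print(round(apply_odds_ratio(0.30, 1.19), 3))  # -> 0.338 (response)
print(round(apply_odds_ratio(0.30, 1.26), 3))  # -> 0.351 (retention)
```

At that baseline, these odds ratios correspond to lifts of only about 4 to 5 percentage points, which helps explain why individual experiments can easily be underpowered to detect them.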

The method was as follows:

Six experiments were conducted in a university-based, opt-in online panel (cf. Couper, 2000). The panel had been in operation since 1999 and contained people from all walks of life. Most panelists had found the panel through banners, search engines, links on other Web sites, newsgroups, or word of mouth. New members could continuously sign up with this panel. Approximately 10% of the panelists had been recruited on the basis of probability samples using e-mail, fax, and letter (cf. Göritz, 2004b). In each of the six experiments, experimental groups were offered a cash lottery as incentive for participation, and a control group was not offered any incentive at all. In each experiment, there were two different versions of the cash lottery. In Version 1 the payout of the cash lottery was mentioned as one lump sum prize, and in Version 2 the lottery was announced as being split into several smaller prizes. The two versions of the lottery did not differ in expected value. The respective incentive information was mentioned in the e-mail invitation. There were two dichotomous dependent measures: invitees’ response status (responded or refused) and respondents’ retention status (retained or dropped out).

The study found essentially no statistically significant results in any of the six experiments:

In each of the six experiments, the control group’s response and retention rate was compared to the averaged response and retention rate of the two lottery conditions. Next, the two versions of the lottery, which differed in the number of prizes but not in total payout, were contrasted with regard to the response and the retention rates. With one exception, no statistically significant effects were found in the six experiments. In Experiment 4, the response rate in the 4 × €25 lottery (39.0%) was significantly smaller than in the €100 lottery (45.7%), φ = .07, n = 927, p = .04. However, because as many as 24 statistical tests were performed, this one effect might have been significant because of chance. To find out whether the six experiments were underpowered to detect any small effects, the individual studies were meta-analytically summarized.
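Göritz’s caution about the 24 tests can be quantified with a standard multiple-comparisons calculation (a Bonferroni-style sketch, not taken from the paper itself):

```python
alpha = 0.05   # nominal significance level per test
n_tests = 24   # number of statistical tests performed across the six experiments

# Chance of at least one spurious "significant" result if every null is true
familywise_error = 1 - (1 - alpha) ** n_tests
print(round(familywise_error, 2))  # -> 0.71

# Bonferroni-corrected per-test threshold
threshold = alpha / n_tests
print(round(threshold, 5))  # -> 0.00208
print(0.04 < threshold)     # -> False: the lone p = .04 does not survive correction
```

With 24 tests at α = .05, there is roughly a 71% chance of at least one nominally significant result by chance alone, which supports treating the single Experiment 4 finding as noise.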

Thus, none of the four hypotheses enumerated above are confirmed by this study. Goritz concludes with the following:

To conclude, in nonprofit online panels with occasional studies, cash lotteries relative to no incentives do not reliably increase response and retention in a study. Moreover, the attempt to significantly influence response and retention by splitting a cash lottery into multiple prizes needs to be regarded as failed.

Finally, Gatny 2009 points out that, if one decides to use incentives, reloadable debit cards may be a more cost-effective way to distribute them than checks.


Wygant 2005 - Comparative Analyses of Parallel Paper, Phone, and Web Surveys: Some Effects of Reminder, Incentive, and Mode
Goritz 2006a - Cash Lotteries as Incentives in Online Panels
Goritz 2006b - Incentives in Web Studies: Methodological Issues and a Review
Gatny 2009 - Using Debit Cards for Incentive Payments: Experiences of a Weekly Survey Study
