
INCENTIVES TO INCREASE PARTICIPATION IN AN INTERNET SURVEY OF ALCOHOL USE: A CONTROLLED EXPERIMENT

Kypros Kypri, Stephen J. Gallagher
DOI: http://dx.doi.org/10.1093/alcalc/agg107. Pages 437–441. First published online: 12 August 2003

Abstract

Aims: To examine the use of the Internet in a survey of drinking among students, and the effectiveness of incentives to encourage participation. Methods: In a survey of drinking in university students, a random sample of 160 students was randomly assigned to one of four token incentive conditions. All received posted invitations, and reminders by e-mail and telephone. Results: Overall response was 85% and did not differ significantly by incentive condition. Conclusion: Internet surveys are effective in obtaining alcohol use information from students. Minimal incentives may suffice if coupled with intensive follow-up.

(Received 24 April 2003; first review notified 7 May 2003; in revised form 12 May 2003; accepted 29 May 2003)

INTRODUCTION

Survey non-response is a growing concern in epidemiological research. There has been a call for more research on methods of increasing survey participation and the potential impacts of non-response bias (Caetano, 2001). Survey methodologists have suggested that electronic mail and the World Wide Web (i.e. the Internet) may provide a mechanism for improving survey participation, particularly among groups with high levels of Internet access (e.g. university students) (Schmidt, 1997; Couper, 2000; Dillman, 2000). In recent years, Internet surveys have proliferated in a range of subject areas (Dillman, 2000), including alcohol and drug use (McCabe et al., 2002).

The benefits of Internet-based survey methods include reduced implementation costs, greater appeal to certain target groups, improved questionnaire formatting, improved data quality, elimination of data entry, reduced processing costs and faster data collection (Witmer et al., 1999). In surveys of university students that we conducted using pen-and-paper questionnaires (Kypri et al., 2002a,b), up to 6% of data were missing on individual items, and data collection and processing procedures were time and labour intensive.

One method of increasing survey participation is the provision of token incentives for potential participants (Dillman, 1978; Edwards et al., 2002). In the context of an Internet survey this method typically requires the mailing of incentives to sampled individuals. Provision of incentives at the outset, that is along with the invitations to participate, has been found to be more effective than the promise of a reward upon completion of the survey (Dillman, 2000; Edwards et al., 2002), but has significant cost implications, particularly where the required sample size is large and the final response rate is low.

The aims of this study were: (1) to examine the feasibility of undertaking a survey of university student alcohol consumption using the Internet, and (2) to determine the utility of various forms of token incentive in attracting survey participation.

SUBJECTS AND METHODS

Overview

Tertiary students received written and e-mailed invitations to complete a web questionnaire on their alcohol use. Each phase of the survey process was critically evaluated to identify potential barriers to a full implementation of the survey. This included scrutiny of interactions with university staff responsible for the enrolment database (the sampling frame); contacts with sampled individuals during recruitment and follow-up; the measurement of latencies to respond to the invitation to participate; and attention to technological problems experienced by respondents and respondents’ written comments. An experimental design was used to investigate the utility of incentives in increasing survey participation.

Sample

The sample consisted of 160 tertiary students (88 female, 72 male), under 30 years old, randomly selected from the University of Otago’s enrolment database. Ethical approval to contact students was granted by the University Ethics Committee.

Survey implementation

Advance notice and personalisation of written contacts are effective means of enhancing response rates in postal (Edwards et al., 2002) and Internet-based surveys (Dillman, 2000), but are often difficult measures to implement in large samples. Focus group research conducted prior to this study suggested that although all students were issued with e-mail access, only a minority regularly accessed their student e-mail accounts and some never used them, preferring a non-University Internet service, such as Hotmail (www.hotmail.com). Additionally, in the preparation for this study, it became apparent that some residential address details provided by the university were incomplete or incorrect. Given these potential barriers to participation, and the experimental evidence concerning multiple and varied contacts (see Dillman, 2000 for a review), a combination of a posted letter, e-mail, and telephone calls was employed in a three-phase procedure, to maximise coverage and potential participation.

Students were informed at each phase of recruitment that if they indicated a preference not to participate, their decision would be respected without question. The researchers’ e-mail address, phone number, and postal address were provided in the information sheet describing the study, in the invitation letter, and in all e-mail communications. If a student indicated during a reminder phone call that they did not want to participate, they were thanked and not contacted again.

Phase 1: invitation

A personally addressed and signed letter on university letterhead was mailed to sampled students, inviting them to participate in a confidential Alcohol Use Survey via the World Wide Web. The letter notified the recipients that in 2 days’ time an e-mail message would be sent to their student e-mail address, and that a hypertext link contained in the message, when clicked, would open their computer’s web browser at the site hosting the survey. Attached to the brief letter was an information sheet, sanctioned by the University Ethics Committee, that provided details of the study. The e-mail message was generated using mail merge software so that each message was personally addressed and sent individually rather than as part of a bulk transmission. This message was sent 2 days after the letter was expected to arrive at the students’ in-term residential addresses, so that they had had the opportunity to read the letter and to see the token incentive. Attached to the e-mail message was an electronic copy of the mailed letter and the information sheet describing the research. The researcher’s e-mail address (as a hypertext link), postal address and phone number were prominently displayed in the message to allow recipients to make inquiries about the study.
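As an illustrative sketch only, the following Python code shows how personalised invitation e-mails of this kind might be generated and sent one at a time. The survey address, user codes and mail-server details below are hypothetical and are not those used in the study; the original messages were produced with mail merge software rather than a script.

```python
# Sketch: individually addressed invitation e-mails, each carrying a
# personalised hypertext link to the web questionnaire. All addresses,
# codes and server names are placeholders (assumptions), not study values.
import smtplib
from email.message import EmailMessage

SURVEY_URL = "https://example.ac.nz/alcohol-survey"   # hypothetical survey address
SMTP_HOST = "smtp.example.ac.nz"                      # hypothetical mail server

students = [
    {"name": "A. Student", "email": "stua123@example.ac.nz", "user_code": "7f3k9"},
    # ... one entry per sampled student
]

with smtplib.SMTP(SMTP_HOST) as smtp:
    for s in students:
        msg = EmailMessage()
        msg["Subject"] = "Invitation: Alcohol Use Survey"
        msg["From"] = "researcher@example.ac.nz"
        msg["To"] = s["email"]
        # Personal salutation plus a link carrying the student's user code,
        # so that responses can be matched to the sample for follow-up.
        msg.set_content(
            f"Dear {s['name']},\n\n"
            f"As described in the letter posted to you, please click the link "
            f"below to open the confidential web questionnaire:\n"
            f"{SURVEY_URL}?code={s['user_code']}\n"
        )
        smtp.send_message(msg)   # one message per student, not a bulk transmission
```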

Phase 2: reminders

One week after the first e-mail message was sent, the survey database was checked to determine whether the student had responded. A polite personalised reminder e-mail was sent to students who had not yet responded. This also contained a hypertext link to the web questionnaire in case the previous e-mail had been deleted.

Phase 3: intensive follow-up

One week after the reminder e-mail, the survey database was checked again. Students who had not yet responded (and had not indicated refusal to participate) were telephoned to check that they had received the e-mail and were asked if they were willing to participate. Up to five follow-up telephone calls were attempted until contact was made. Those who wished to participate but preferred not to use a computer were offered a pen-and-paper alternative.
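The rules in Phases 1–3 amount to a simple contact schedule: check the response database, remind non-responders by e-mail, then telephone those who have neither responded nor refused. The sketch below illustrates that logic under assumed field names and data structures; follow-up in the present study was tracked manually rather than with such a script.

```python
# Minimal sketch of the three-phase contact schedule described above.
# Record fields and return strings are illustrative assumptions.
MAX_PHONE_CALLS = 5   # up to five call attempts were made in the study

def next_action(student, responded_codes, refused_codes, phase):
    """Return the follow-up action due for a student in the given phase."""
    if student["user_code"] in responded_codes:
        return "none (questionnaire received)"
    if student["user_code"] in refused_codes:
        return "none (declined; do not contact again)"
    if phase == 2:
        return "send reminder e-mail with hypertext link"
    if phase == 3 and student["phone_calls_made"] < MAX_PHONE_CALLS:
        return "telephone to confirm e-mail received and willingness to take part"
    return "no further contact"

# Example use with a hypothetical record:
student = {"user_code": "7f3k9", "phone_calls_made": 2}
print(next_action(student, responded_codes=set(), refused_codes=set(), phase=3))
```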

Web questionnaire

Participants were asked to make point and click responses to the following measures: the Alcohol Use Disorders Identification Test (AUDIT; Saunders et al., 1993); a 1-week drinking diary including the quantity and duration of sessions, and the number of times intoxicated; peak consumption in the previous 4 weeks; height and weight; positive aspects of drinking; alcohol-related injury; alcohol-related consequences in last 3 months (22 items); effect of drinking on grades; perceived drinking norms (national, university and closest friends); second-hand effects of drinking (Wechsler et al., 1994); and drink driving/riding in the previous 4 weeks. On the final page of the questionnaire, students were thanked for their participation and invited to type a comment on any aspect of the survey instrument or procedures. A demonstration version of the web questionnaire can be viewed at http://ipru.otago.ac.nz/ausdemo.

Although personally identifying information was not requested in the web questionnaire, the necessity to sample from the enrolment list and write to students meant that the survey was not anonymous. This may induce a social desirability effect, namely the tendency for respondents to censor reports of their alcohol consumption to fit their perceived audience. The key to minimising social desirability effects is to create an environment in which individuals fear no penalty (e.g. social judgement) for an honest response (Dillman, 2000). To this end, assurances of confidentiality were emphasised in the invitation to participate, on the web-questionnaire, and in the reminder contacts.

Experimental procedures

The students randomly sampled from the enrolment database (n = 160) were randomly assigned to one of four conditions (n = 40 per group): (1) a ball-point pen (value = $0.50); (2) a pen plus a cookie voucher with $1 face value; (3) a pen plus a lunch voucher with $5 face value; and (4) a pen plus the promise of a lunch voucher with $5 face value given on receipt of survey data.

Focus group research conducted prior to this study suggested that attempting to attract survey participants without at least a token incentive would probably yield a poor response, which is consistent with the results of experimental research (Edwards et al., 2002). Therefore, a no-incentive condition was not included. The four conditions tested reflect what would be practicable in the implementation of a large survey.

In conditions 1, 2 and 3 the token incentive was sent with the invitation letter (i.e. unconditionally). In condition 4, students were informed that the incentive would be sent to them on receipt of survey data (i.e. conditionally). Vouchers for food were provided because of their near-universal appeal and in order to avoid the cost and complication of handling cash or processing numerous small-denomination cheques. In pre-testing of the procedures, food vouchers appeared to be effective in attracting participants.

Allocation concealment was achieved by not informing study participants that they had been assigned to an experimental condition, and by keeping the research staff responsible for follow-up blind to each participant’s experimental condition. Sampling and randomization were conducted by a biostatistician who was not involved in the follow-up of study participants.
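For illustration, a random allocation of 160 sampled students to four groups of 40 could be produced as follows; the actual randomization was performed by a biostatistician and the procedure used is not reported, so this sketch is an assumption.

```python
# Sketch: random allocation of 160 sampled students to 4 incentive conditions.
# Identifiers and the seed are placeholders; not the study's actual procedure.
import numpy as np

rng = np.random.default_rng(seed=2003)   # fixed seed gives a reproducible allocation
student_ids = np.arange(160)             # placeholder identifiers for the sample
shuffled = rng.permutation(student_ids)
conditions = {f"condition_{i + 1}": shuffled[i * 40:(i + 1) * 40] for i in range(4)}
```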

A sample of 160 was deemed adequate to render sufficient variation in respondent characteristics to test the adequacy of the survey instrument and procedures, without exceeding budgetary constraints. However, a sample of this size is insufficient to detect small differences in response rates between contiguous experimental conditions (1 vs. 2, 2 vs. 3, etc.). A large overall difference could be detected with reasonable power, and this would be sufficient to determine a broad recruitment strategy for full implementation of the survey. From a review of the literature, it was estimated a priori that responses in the four conditions would be 50, 60, 85 and 75%, respectively. With this design, the power (1 − β) to detect an overall difference in response rates was 0.83 with α = 0.05. Response rates are presented with 95% confidence intervals, calculated using the procedure described by Armitage and Berry (1987). Pearson’s chi-squared test was used to determine whether there was a statistically significant difference in the response rates as a function of incentive condition.
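A power figure of this kind can be checked by simulation. The sketch below uses the anticipated response rates (50, 60, 85 and 75%), group size (n = 40) and α = 0.05 stated above; because the method by which the reported value of 0.83 was derived is not stated, the simulated estimate may not match it exactly.

```python
# Simulation-based check of the stated power calculation. Response rates,
# group size and alpha are taken from the text; the simulation approach
# itself is an assumption, not necessarily the authors' method.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
p_expected = [0.50, 0.60, 0.85, 0.75]   # anticipated response rate per condition
n_per_group = 40
n_sims = 20_000

rejections = 0
for _ in range(n_sims):
    responded = rng.binomial(n_per_group, p_expected)               # simulated responders
    table = np.column_stack([responded, n_per_group - responded])   # responded / did not
    _, p_value, _, _ = chi2_contingency(table, correction=False)
    rejections += p_value < 0.05

print(f"Estimated power to detect an overall difference: {rejections / n_sims:.2f}")
```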

Free text comments entered by students at the end of the questionnaire were grouped into broad categories according to their subject matter: comments about the respondent’s drinking experiences, comments about others’ drinking, requests for the study results, and comments about the survey content and procedures. The last of these were analysed for the present study.

RESULTS

Feasibility of an Internet survey

Some operational problems were discovered in the research team’s interface with the university administrative department responsible for providing student details for sampling and later contact. Specifically, the exclusion criteria for the sample were at first applied incorrectly, resulting in a number of ineligible students (e.g. those outside the requisite age group) being included in the sample. This was discovered prior to the beginning of the survey and required the sampling process to be repeated. We were informed that some exclusion criteria could not be applied using the university’s database, requiring a manual review of the final sample and replacement of ineligible individuals.

During follow-up it became apparent that 10 of the sampled students had left university prior to the commencement of the survey. This occurred because these students were incorrectly listed as still enrolled at university. These individuals were ineligible to participate and were excluded. Of the 150 students deemed eligible to participate, 128 completed the questionnaire, 123 via the web and five using a paper form, giving an overall response of 85% (95% CI: 79–91).
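For illustration, an exact binomial confidence interval of the kind described by Armitage and Berry (1987) can be computed from these counts as follows; whether this (Clopper-Pearson) procedure is precisely the one the authors used is an assumption.

```python
# Sketch: exact (Clopper-Pearson) 95% CI for the overall response rate (128/150).
from scipy.stats import beta

responded, eligible = 128, 150
alpha = 0.05
lower = beta.ppf(alpha / 2, responded, eligible - responded + 1)
upper = beta.ppf(1 - alpha / 2, responded + 1, eligible - responded)
print(f"{responded / eligible:.0%} (95% CI: {100 * lower:.0f}-{100 * upper:.0f}%)")
# prints roughly 85% (95% CI: 79-91%), consistent with the figure reported above
```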

Of the 123 web questionnaires, 74 (60%) were completed during the first phase of recruitment, prior to the e-mail reminder; 21 (17%) were completed in the second phase, after the e-mail reminder and prior to the commencement of telephone reminders; and 28 (23%) were completed in the final phase of follow-up, after the commencement of telephone reminders. The phase in which participants responded did not vary significantly as a function of incentive condition [χ2(6) = 8.04; P = 0.24].

In two cases, participants contacted the research team to report a technical problem with the web questionnaire. In the first of these, the participant reported that clicking on the hypertext link would not open the web questionnaire. Despite lengthy investigation, the reason for this remained unknown; however, the participant was sent an e-mail containing a hypertext link with a new user code and he subsequently completed the survey without apparent difficulty. In the second case, the participant reported getting part way through the questionnaire and finding an error message relating to a field requesting his weight in kilograms. Investigation revealed no apparent reason for this, but it was noted that the participant, a computer-science student, was using non-standard software (a Linux operating system and Konqueror 2.2 web-browser). After several failed attempts to find a solution to the problem, the participant agreed to complete a pen-and-paper survey which was posted to him.

Utility of different token incentive conditions

The responses for incentive conditions 1–4 are presented in Table 1. A chi-squared analysis showed that the response rate did not differ significantly by experimental condition. There were no missing data in the 123 web questionnaires, and only a few items with missing answers in the five paper-based responses.

Table 1. Response rates by incentive condition in a study of New Zealand university students to determine whether incentives increase participation in an Internet survey of alcohol use

No. responded/No. allocated* (%; 95% CI):
  Pen only (unconditional): 34/39 (87; 73–96)
  Pen + $1 cookie voucher (unconditional): 35/39 (90; 76–97)
  Pen + $5 sandwich voucher (unconditional): 29/34 (85; 69–95)
  Pen + promise of $5 sandwich voucher (conditional): 30/38 (79; 63–90)

*Forty individuals were allocated to each group but 10 were subsequently discovered to be ineligible (they were no longer enrolled at the university). Those individuals are not included in the denominators.
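The chi-squared analysis referred to above can be reproduced from the counts in Table 1, as in the following sketch; the software used by the authors is not stated, so scipy is an assumption of convenience.

```python
# Sketch: Pearson chi-squared test on the observed counts in Table 1
# (responders vs non-responders across the four incentive conditions).
import numpy as np
from scipy.stats import chi2_contingency

responded = np.array([34, 35, 29, 30])
allocated = np.array([39, 39, 34, 38])
table = np.column_stack([responded, allocated - responded])
chi2, p_value, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, P = {p_value:.2f}")   # non-significant, as reported
```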

Comments were offered by 22 respondents, eight of whom referred to some feature of the survey instrument or procedures. These fell into two categories: suggestions of other behaviours that should be measured, and support for the methods of recruitment. For example: ‘I just wanted to say thank you for the letter warning me that a survey was on the way. It makes all the difference in terms of willingness to participate’ (23-year-old female), and ‘I like the free pen’ (21-year-old female).

DISCUSSION

This study showed that, among university students, a high response rate (85%) can be attained using an Internet survey. There were no missing data for those who completed the web survey and no major technological problems. The only significant operational concern was the interface with the university administrators responsible for supplying student contact details. Free-text comments suggested that students found providing information in this way a positive experience. The size and type of token incentive appear relatively unimportant, although it is acknowledged that the study did not include a no-incentive condition. The temporal pattern of the survey response, with 40% of questionnaires completed after the first phase of recruitment, suggests that intensive follow-up may be more important than token incentives in encouraging response. Up to eight contacts were attempted with each member of the sample, via mail, e-mail, and telephone.

Various aspects of the survey implementation appeared to facilitate the good response. Comments revealed that advance notice by letter, and the respect it demonstrated, was appreciated by the participants. In addition to building rapport, the letter was a vehicle for the delivery of the pen, a token incentive. This too was the subject of appreciative comments. By including the web survey address, the letter provided a means of accessing the questionnaire without the need for checking e-mail and may also have primed students to check their e-mail or at least to retain the message until they were ready to complete the questionnaire.

True anonymity is rare in survey research. Surveys conducted by telephone or face-to-face interview, and nearly all postal surveys, cannot be said to allow anonymous response. Internet surveys that use a list sampling frame, as ours did, are also not truly anonymous. Although in the present study participants were not asked to provide identifying information, the fact that they received personally addressed invitation letters, e-mail messages and, where necessary, follow-up phone calls, could have created apprehension that their answers would be connected to them personally. Despite our assurances of confidentiality, the possibility remains that respondents under-reported their alcohol consumption or related problems.

Many previous studies of tertiary student drinking, for a variety of reasons, have relied on convenience samples (Webb et al., 1996; Steptoe and Wardle, 2001; Kypri et al., 2002a), which limits the generalizability of findings. Random sampling, coupled with the high response achieved in the present study, is more likely to produce a study sample representative of the population to whom we seek to generalize, namely all students at the University. This is especially important where survey results are to be used to monitor trends in alcohol consumption over time. The studies cited above are all significantly limited in this regard. A specific generalization to be drawn from this study is that in a future larger survey we can expect, with 95% confidence, a response in the range 79–91%, with little missing data, and high levels of acceptance among students.

Experimental manipulation of the token incentives offered the potential for strong causal inference, although it is acknowledged that there was insufficient power to detect small differences between conditions. Such small differences may be important in larger surveys in which intensive follow-up is difficult or costly to implement.

Token incentives may offer greater marginal value when the required sample is large and the population’s propensity to respond is low. Inclusion of the pen in the envelope did not add to the basic postage rate of $0.40 (NZD). This would be considered a minimum measure to encourage response. An additional advantage of the pen is that it creates an irregularly shaped package which arouses interest, and may make it less likely for the package to be discarded unopened (Dillman, 2000).

Manual tracking of various types of contact (letter, e-mail, and phone) to encourage survey participation was adequate for this survey, with a sample of 160 individuals. For a substantially larger sample, however, an automated system of managing the follow-up process would be required.

The response rate obtained in this survey compares favourably with those for other surveys of tertiary student alcohol use. In the two largest (pen-and-paper) surveys of college student drinking in recent years — conducted in the USA and Canada — the response rates were 52% (Wechsler et al., 2002) and 51% (Gliksman et al., 2000). In a study in which US college students were randomly assigned to postal versus web-based survey conditions, responses were 40 and 63%, respectively (McCabe et al., 2002). Notably, in the case of the ongoing College Alcohol Survey by Wechsler and colleagues, survey response rates have declined precipitously from the 70% achieved in the first survey, conducted in 1993 (Wechsler et al., 1994). While it is possible that non-respondents do not differ in their alcohol consumption from those who respond, the potential for non-response error increases as the response rate falls (Caetano, 2001).

In all of the studies discussed above, conditional token incentives were offered: entry to a cash prize draw (Gliksman et al., 2000; Wechsler et al., 2002) or a movie pass upon completion of the survey (McCabe et al., 2002). It is possible that New Zealand tertiary students are more willing to participate in surveys than their North American counterparts, although response rates to national household surveys of alcohol use in New Zealand (Casswell et al., 2002) are within the 60–80% range of those attained in other developed countries, including the US and Canada (World Health Organization, 2000).

It is important to continue the search for methods with which to increase survey participation. This study shows that a minimal token incentive sent to potential survey participants, coupled with intensive follow-up, may be sufficient to produce a high level of response to an Internet-based survey in populations with high levels of Internet access. It is unlikely that the result would generalize to populations with substantially lower levels of Internet access but, given the increasing use of this technology and its many potential benefits for epidemiological research, further investigation is warranted.

Acknowledgments

This research was funded by the Health Research Council of New Zealand and the Alcohol Advisory Council of New Zealand. The Injury Prevention Research Unit is funded by the Health Research Council of New Zealand and the Accident Compensation Corporation. We thank Associate Professor D. Chalmers for valuable comments on a draft of the paper.

Footnotes

* Author to whom correspondence should be addressed at: Injury Prevention Research Unit, Department of Preventive and Social Medicine, University of Otago, Dunedin, New Zealand. E-mail: kkypri@ipru.otago.ac.nz
