Mode Effects

Discussion

Telephone versus Internet

In a 2005 study comparing three randomly sampled groups (an online survey of Knowledge Networks (KN) panel members, a phone survey of KN panel members, and a phone survey of individuals who refused to participate in a KN panel), Dennis et al. found that:

The differences found between the mode of data collection in this telephone versus Internet study were strikingly similar to the telephone versus mail mode effects found in civic attitude studies by Tarnai and Dillman (1992) and in telephone versus face-to-face mode effects by Krysan (1994). Both studies found a tendency for telephone respondents to answer at the extreme positive end of the scale. In addition, this study found that Internet respondents were more likely than the telephone sample to use the full range of response option scales; therefore, non-differentiation was more prevalent in the telephone sample groups.
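
Non-differentiation (or "straight-lining") is conventionally detected from the spread of a respondent's answers across an item battery: a respondent who gives nearly the same rating everywhere is not differentiating. A minimal sketch of such a check, with an arbitrary cutoff and invented response patterns (none of this is from the study):

```python
import statistics

def straight_lined(ratings, cutoff=0.5):
    """Flag non-differentiation: near-zero spread across a battery of items."""
    return statistics.pstdev(ratings) < cutoff  # cutoff is an arbitrary choice

telephone_pattern = [5, 5, 5, 4, 5, 5]  # invented extreme-positive answers
internet_pattern = [2, 5, 1, 4, 3, 5]   # invented full-range answers

print(straight_lined(telephone_pattern))  # True: little differentiation
print(straight_lined(internet_pattern))   # False: uses the full scale
```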

A more detailed account of the results of this study follows:

Multivariate analyses showed that the mode of survey data collection has a significant effect on survey response data. Some of the findings from these analyses are below; a sketch following the list shows how the reported coefficients map onto the quoted percentages and odds:
- Seeking information: Respondents were asked seven questions designed to measure their motivation to seek information about anthrax after the 9/11 terrorist attacks. Survey mode is shown to affect information seeking positively in that participants in the telephone sample are 74.1% more likely to seek this information from local or state health departments (-1.35, p < .05), and 75.7% more likely to seek information about anthrax from toll-free government phone numbers (-1.415, p < .05). Telephone sample members also have a 58.6% greater likelihood of seeking information from cable 24-hour news channels and network news channels (-0.88, p < .05). Mode also has a positive effect on seeking information from websites, with telephone sample respondents 63.6% more likely than Internet respondents to seek information through this means (-1.01, p < .05).
- Neighborhood statements: Mode of data collection is shown to have significant effects on responses to the neighborhood statements as well. First, participants were asked to provide a number for how many days a week they have dinner and/or participate in social events with neighbors, creating an eight-point response scale from ‘0 (Never)’ to ‘7 (Every day)’. Members of the telephone sample show a 29% increase over Internet respondents in frequency of dinner and/or social events with neighbors (-0.32, p < .05). Respondents in the telephone samples also show a 10% increase in how frequently they informally chat with their neighbors (-0.38, p < .05). Using an 11-point scale from –5 to +5, in which –5 represented ‘Completely disagree,’ +5 represented ‘Completely agree’ and 0 represented ‘Neither,’ telephone respondents are twice as likely as Internet respondents to see themselves as part of a neighborhood (0.70, p < .0001), and to rate higher their sense of belonging to a neighborhood (0.67, p < .0001).
- Self-perception statements: The same 11-point scale, ranging from –5 to +5, in which –5 represented ‘Completely disagree’, +5 represented ‘Completely agree’ and 0 represented ‘Neither’, was used for the self-perception statements. Mode is a significant predictor of responses for these measures as well, in that the telephone sample, as compared to the Internet sample, is more likely to give higher ratings on the self-perception statements. The telephone sample shows an odds increase of 3.2 for trusting others (1.15, p < .0001), 2.2 for feeling that they easily fit into groups (0.78, p < .0001), 2.68 for liking to mix socially with others (0.99, p < .0001), and 2.2 for rating higher their tendency to be happy (0.80, p < .0001); telephone respondents are also twice as likely to report enjoying helping others (0.69, p < .0001).
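
Small rounding aside, the percentages quoted in the list above appear to follow directly from the reported logit coefficients: for a negative coefficient β (apparently coded with Internet as the comparison mode), 1 − exp(β) yields the quoted "% more likely" figure for the telephone sample, while for a positive coefficient, exp(β) yields the quoted odds increase. A minimal sketch of that arithmetic (coefficients from the bullets above; the labels are ours):

```python
import math

# Negative coefficients (information-seeking items): quoted as "% more likely".
for label, beta in [("health departments", -1.35),
                    ("toll-free government numbers", -1.415),
                    ("24-hour/network news", -0.88),
                    ("websites", -1.01)]:
    print(f"{label}: 1 - exp({beta}) = {(1 - math.exp(beta)) * 100:.1f}%")

# Positive coefficients (self-perception items): quoted as odds increases.
for label, beta in [("trust others", 1.15), ("fit into groups", 0.78),
                    ("mix socially", 0.99), ("tend to be happy", 0.80),
                    ("enjoy helping others", 0.69)]:
    print(f"{label}: exp({beta}) = {math.exp(beta):.2f}x odds")
```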

Wygant 2005 also argues that mode can have an effect. In a study using parallel paper, phone, and web surveys to query BYU students, Wygant found that paper and web responses closely tracked one another, but that phone responses differed from the other two. He hypothesizes that this difference could be due to order effects, settled response, or interviewer effects.

Dennis 2007 takes up the question of the effect of mode on the likelihood that a respondent will answer "Don't Know" to a given question. The background for this 2007 study is as follows:

In 2000, NORC and Knowledge Networks (KN) conducted an experiment on the GSS national priority items. In that experiment, the “Don’t Know” option was shown to respondents on the screen. The results showed that respondents from the KN experiment were significantly more likely to indicate “Don’t Know” than the respondents from the in-person GSS (Smith, 2003). In 2002, NORC and KN conducted another experiment as an extension to the year 2000 experiment to investigate the effects of data collection on survey responses. In that study, results from KnowledgePanelSM collected over the Internet via WebTV were compared to the results of the GSS survey collected by in-person interviews. The results showed that when the “Don’t Know” option was not presented on-screen and respondents were instructed at the start of the survey to skip a question to indicate “Don’t Know,” the percentage of “Don’t Know” respondents in KN’s experiment was similar to that in the in-person GSS survey (Smith and Dennis, 2005). The study also showed that, with the exception of a few items that are sensitive to social desirability, the differences on the substantive findings between KN and GSS are fairly small. The general pattern was that respondents of the in-person GSS survey were consistently more likely to indicate “Too little” than were KN’s respondents.

In this study, Dennis et al. used three modes of data collection: web, telephone, and in-person. The web sample included 1,689 invitees, of whom 1,428 responded. These respondents were asked questions for which the "Don't Know" option was not shown on screen; instead, they were instructed at the outset to skip a question to indicate that they did not know the answer. Because of KN's agreement with its panelists, the researchers first had to ask the panelists for permission to contact them by phone. Of the 1,383 panelists asked for this permission, 1,208 responded, and 600 were eventually contacted by telephone. Each of these telephone interviewees received a $10 incentive. The paper does not describe the in-person component.
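
The completion figures above imply the following rates (the counts are from the paper; the rate arithmetic below, and the variable names, are ours):

```python
# Counts reported for the Dennis 2007 design; the rate calculations are illustrative.
web_invited, web_completed = 1689, 1428
permission_asked, permission_responses = 1383, 1208
phone_interviewed = 600

print(f"web completion rate: {web_completed / web_invited:.1%}")                        # 84.5%
print(f"phone-permission response rate: {permission_responses / permission_asked:.1%}") # 87.3%
print(f"phone interviews per panelist asked: {phone_interviewed / permission_asked:.1%}")  # 43.4%
```

The study resulted in the following conclusions: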

This paper continues and expands the research on modes of data collection in 2000 and 2002 using the GSS national priority item battery. In 2002, we compared KN’s online experimental treatments of “Don’t Know” responses with the actual in-person GSS results. In the current research, we examined the differences between the online, phone, and in-person modes on 17 national spending priority items from the General Social Survey. The results suggest the following findings:
• The results from the year 2002 study were replicated:
– By not showing the “Don’t Know” option on screen but instructing respondents to skip the question to indicate “Don’t Know”, KN’s online data collection can produce similar “Don’t Know” rates to those produced by the in-person and phone mode.
– Similar to the finding in the 2002 study, respondents from KN’s online survey are consistently less likely to select “Too little” and more likely to select “Too much” than are respondents of the in-person survey. Again, these systematic differences are small in magnitude for most spending items.
– In the 2002 study, spending items dealing with urban underclass (Blacks, big cities, crimes, drugs, and welfare) and foreign aid showed large differences between the online and in-person modes. These large differences continued in the 2006 study for the same spending priority items.
• The fact that the phone and online modes have the same sampling source (i.e., KnowledgePanelSM) did not predetermine the similarities in the results between these two modes. To the contrary, the dissimilarities between the phone and online modes and the similarities between the phone and in-person modes are strong evidence for the effects of modes of data collection:
– The average difference between the in-person and phone modes is smaller than the average difference between the in-person and online modes.
– The systematic differences between the in-person and online modes do not exist between the in-person and phone modes.
– The large differences between the in-person and online modes on the spending items dealing with urban social underclass and foreign aid decreased or disappeared completely between the in-person and phone modes.
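
To make the comparisons above concrete, the "average difference" between two modes is simply the mean absolute gap, across the 17 spending items, in the percentage choosing a given response. A sketch with invented percentages for three items (the study's actual figures are not reproduced here):

```python
# Invented "Too little" percentages per mode for three GSS-style spending items.
pct_too_little = {
    "education":   {"in_person": 74.0, "phone": 72.5, "online": 66.0},
    "health":      {"in_person": 78.0, "phone": 77.0, "online": 71.5},
    "foreign_aid": {"in_person": 10.0, "phone": 11.5, "online": 19.0},
}

def mean_abs_diff(mode_a, mode_b):
    """Mean absolute percentage-point gap between two modes across items."""
    gaps = [abs(item[mode_a] - item[mode_b]) for item in pct_too_little.values()]
    return sum(gaps) / len(gaps)

print(f"in-person vs phone:  {mean_abs_diff('in_person', 'phone'):.1f} points")
print(f"in-person vs online: {mean_abs_diff('in_person', 'online'):.1f} points")
```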

Paper versus Internet

Denscombe 2006 discusses findings that support the claim that there is little or no mode effect differentiating paper and web surveys. In a study of 338 15-year-old students in England who were each given identical surveys, except that 69 of them took the surveys via computer rather than on paper, Denscombe finds similar completion rates and similar substantive content. The surveys concerned health and self-image, and of the 23 items on the survey, only the question asking students how many cigarettes they smoked in a week showed a statistically significant difference between modes. In this case, students filling out the paper survey were more likely to report a higher level of smoking than those who used the web survey.
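
Denscombe reports only that the smoking item differed significantly between modes, not which test was used; one conventional choice for a single numeric item is a two-sample t-test. A sketch with invented counts (assumes scipy is installed):

```python
from scipy import stats

# Invented weekly cigarette counts; the study's raw data are not reproduced here.
paper = [0, 0, 5, 12, 20, 0, 7, 15, 0, 30]  # paper-questionnaire group
web = [0, 0, 2, 6, 0, 0, 3, 8, 0, 10]       # web-questionnaire group

t, p = stats.ttest_ind(paper, web, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p would mirror the reported mode effect
```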

Continuous Measurement

As Couper 2005 points out, one important advantage of online and mobile data collection is the capacity for continuous measurement, though this advantage does not come without potential costs:

> A relatively large initial investment may be required to enroll sample persons in a panel and provide them with the required equipment. Thereafter, using self-administered methods, with automated prompts, survey instruments, and reminders, small amounts of data can be collected at much more frequent intervals. This approach may reduce respondent burden, by spreading the load over a large number of shorter interactions. It may also improve data quality, by reducing the length of recall periods and collecting information much closer to the time of occurrence. These methods hold much promise for the study of relatively frequent and recurring behaviors (e.g., alcohol consumption, diet and exercise, mood, interpersonal interaction, etc.). However, these benefits may come at a cost in terms of other sources of error, including coverage, nonresponse, and panel effects.
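
As a purely hypothetical sketch of the "many short interactions" design Couper describes, one might replace a single long monthly recall interview with a brief automated prompt every few days:

```python
from datetime import date, timedelta

def prompt_dates(start, days=30, interval=3):
    """Yield the dates on which a short self-administered prompt goes out."""
    current = start
    while (current - start).days < days:
        yield current
        current += timedelta(days=interval)

# Ten two-item diary prompts over a month instead of one long recall interview.
for when in prompt_dates(date(2024, 1, 1)):
    print(when, "-> send 2-item prompt (e.g., alcohol use since last prompt)")
```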

Sources

Couper 2005 - Technology Trends in Survey Data Collection
Dennis 2005 - Data Collection Mode Effects Controlling for Sample Origins in a Panel Survey: Telephone versus Internet
Schoen 2005 - When Methodology Interferes With Substance: The Difference of Attitudes Toward E-Campaigning and E-Voting in Online and Offline Surveys
Wygant 2005 - Comparative Analyses of Parallel Paper, Phone, and Web Surveys: Some Effects of Reminder, Incentive, and Mode (PowerPoint)
Denscombe 2006 - Web-Based Questionnaires and the Mode Effect: An Evaluation Based on Completion Rates and Data Contents of Near-Identical Questionnaires Delivered in Different Modes
Dennis 2007 - Results of a Within-Panel Survey Experiment of Data Collection Mode Effects Using the General Social Survey's National Priority Battery
Hill 2007 - The Opt-in Internet Panel: Survey Mode, Sampling Methodology and the Implications for Political Research
Malhotra 2007 - The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples
Schonlau 2007 - Are “Webographic” or attitudinal questions useful for adjusting estimates from Web surveys using propensity scoring?
Chang and Krosnick 2008 - National Surveys Via RDD Telephone Interviewing vs. the Internet: Comparing Sample Representativeness and Response Quality
Denscombe 2008 - The Length of Responses to Open-Ended Questions: A Comparison of Online and Paper Questionnaires in Terms of a Mode Effect
Dever 2008 - Internet Surveys: Can Statistical Adjustments Eliminate Coverage Bias?
Smyth 2008b - Does "Yes or No" on the Telephone Mean the Same as "Check-All-That-Apply" on the Web?
Bates 2009 - Cell Phone-Only Households: A Good Target for Internet Surveys?
Israel 2009 - Obtaining Responses by Mail or Web: Response Rates and Data Consequences
Yeager and Krosnick 2009a - Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples
Yeager and Krosnick 2009b - Online Supplement to Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples
