A peer-reviewed electronic journal. ISSN 1531-7714 

Copyright is retained by the first or sole author, who grants right of first publication to Practical Assessment, Research & Evaluation. Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms.


Cui, Wei Wei (2003). Reducing error in mail surveys. Practical Assessment, Research & Evaluation, 8(18). Available online: http://PAREonline.net/getvn.asp?v=8&n=18

Reducing Error in Mail Surveys
Wei Wei Cui, University of Maryland

Surveys allow information to be collected from a sample and generalized to the population at large. Because they are inexpensive and easy to implement, mail surveys are used more frequently in social research than either telephone or face-to-face interviews. Those conducting surveys should recognize four potential sources of error -- sampling error, non-coverage error, non-response error, and measurement error -- and take steps to minimize their impact. Any one of these sources of error may render the survey results unacceptable (Groves, 1989; Salant & Dillman, 1994; Dillman, 1991, 1999). This article describes the four types of error and summarizes ways they can be reduced.

Sampling Error

Mail surveys, like all surveys, collect information only from the people who are included in the sample; members of the population who are not selected contribute no responses, so conclusions about the population at large must be drawn from the sample results. Because members of the population vary on the survey measures, any particular sample may fail to represent the population exactly; the resulting discrepancy is called sampling error.

Sampling error is examined through inferential statistics applied to sample survey results. In general, increasing the sample size decreases sampling error when simple random sampling is used. For example, when a simple random sample is increased from 400 respondents to 1,000 respondents, the margin of error decreases from about 5% to about 3% at the 95% confidence level. Survey organizations tend to consider this an acceptable trade-off between precision of estimation and cost; most national polls, for example, report a 3% margin of error. For simple random sampling, the margin of error for proportions is

$\text{ME} = z\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$

where $\hat{p}$ denotes the sample proportion, $n$ is the sample size, and $z$ represents the critical value from the standard normal distribution for the desired confidence level. For the 95% confidence level and a reasonable sample size, $z = 1.96$. The margin of error is widest when $\hat{p} = .5$.
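
To make the arithmetic concrete, here is a minimal Python sketch of this calculation, evaluated at the worst case $\hat{p} = .5$ for the two sample sizes mentioned above:

    import math

    def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
        """Margin of error for a sample proportion under simple random sampling."""
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # Worst case (p_hat = .5) at the 95% confidence level:
    print(round(margin_of_error(0.5, 400), 3))   # 0.049, about 5%
    print(round(margin_of_error(0.5, 1000), 3))  # 0.031, about 3%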

When simple random sampling is difficult to conduct, other methods, such as cluster sampling and stratified sampling, may be used. Estimating the precision of these designs is more complex.

Non-Coverage Error

If some members of the population are not covered by the sampling frame, they have no chance of being selected into the sample. Non-coverage is one of the major reasons that mail surveys have not been as useful as desired for surveying the general public. If complete, up-to-date lists of populations were available, non-coverage error would not exist. However, no up-to-date list provides complete coverage of all the households in the United States. Telephone directories are often out of date and also omit the small number of households without a phone. Likewise, driver’s license lists don’t cover all of the population.

Non-Response Error

No matter how carefully a sample is selected, some members of the sample simply do not respond to the survey questions. When those who respond to a mail survey differ on the survey measures from those who do not, non-response error becomes a problem. A low response rate does not necessarily lead to non-response error; however, whether differences exist between the responding and non-responding segments of the sample is not known when the survey is conducted. Therefore, low response has long been considered the major problem of mail surveys, and the vast majority of research on improving mail survey methods has focused on response rates. Research studies have successfully identified methods for improving response rates and individual factors associated with improved return rates. Heberlein & Baumgartner (1978), for example, used meta-analysis to test the predictive power of 71 study characteristics on response rate. They determined that a ten-variable model explained 66% of the variation in the final response rate. Seven of the ten variables were found to have a positive effect on response rate:

  1. The number of contacts: More contacts will increase the response rate. Advance letters, postcards, follow-ups that include additional copies of questionnaires, and even telephone calls are all examples of such contacts.
  2. Salience of the topic: Questionnaires are more likely to be returned if respondents consider them relevant. A very common reason given for non-response is that the survey doesn’t mean anything to the person who received it.
  3. Government sponsorship: Government-sponsored survey research had higher response rates than research sponsored by private organizations.
  4. Employee population: Samples drawn from some special subgroups, such as employees in certain occupations, are more likely to return questionnaires than the general population.
  5. School or army population: Students and military personnel are more likely to return questionnaires than the general population.
  6. Special third contact: Following up the advance letter and initial follow-up with the use of special mailing procedures, such as certified mail or special delivery, or with personal or telephone contact increases the response rate.
  7. Incentive on the first contact: Incentives included with the first mailing will increase response rate.

Three factors were found to have a negative effect on response rate:

  1.  Marketing research sponsorship: Marketing research surveys in which the information will benefit the firm have lower response rates.
  2. General population: Samples drawn from the general population have lower response rates.
  3. Questionnaire length: Questionnaires with more items or more pages have a lower return rate.

Goyder (1982) replicated this study with similar results, except that the negative effect of market research sponsorship disappeared. Church (1993), using meta-analysis, tested the effects of four types of incentives: monetary (cash and check) and non-monetary (lottery entry, donation to charity, coffee, books, pens, key rings, tie clips, golf balls, stamps, etc.) incentives mailed with the survey, and monetary and non-monetary incentives given upon return of the questionnaire. His findings demonstrated meaningful increases in response rates only for the two initial-mailing incentive conditions, not for those where the incentive was contingent on returning the questionnaire. Further, no statistically significant difference was found between monetary and non-monetary incentives. Eichner & Habermehl (1981), using studies from Austria and West Germany, suggested potential cross-cultural differences. In contrast to the American findings, the European data suggested that government sponsorship has a negative effect on final response rate, while general population and questionnaire length have positive effects.

Fox, Crask and Kim (1988), using a different meta-analysis method, identified the following factors as improving response rate, with little or no interaction among them:

  • University sponsorship (vs. business sponsorship)
  • Pre-notification by letter
  • Stamped return postage (vs. business reply)
  • Postcard follow-up
  • First-class (vs. second-class or bulk) outgoing postage
  • Green questionnaire (vs. white questionnaire)
  • A small monetary incentive

Armstrong & Lusk’s research (1987) also shows a positive effect for applying stamps to the return envelope (vs. including business-reply envelopes).

The Total Design Method for improving return rates

An attempt has also been made to assemble individual procedures and techniques into a comprehensive system for obtaining high response rates. The Total Design Method (TDM), developed by Don Dillman (1978, 1991), is such a system for mail surveys. Guided by social exchange theory, TDM emphasizes how the elements fit together more than the effectiveness of any individual technique, though most of the important factors identified by previous studies are included in TDM. Social exchange theory posits that questionnaire recipients are most likely to respond if they expect the perceived benefits of responding to outweigh the perceived costs. Within this theoretical frame, questionnaire development and survey implementation are guided by three considerations:

  1. Reducing the perceived cost, such as making the questionnaire short and easy to complete;
  2. Increasing perceived rewards, such as making the questionnaire itself interesting to fill out; and
  3. Increasing trust, such as using official stationery and sponsorship.

Specific TDM recommendations include the following:

  • Let the interesting questions come first.
  • Use graphics and various question-writing techniques to ease the task of reading and answering the questions.
  • Print the questionnaire in a booklet format with an interesting cover.
  • Use capital or dark letters.
  • Reduce the size of the booklet through photo reduction to make the survey look smaller and easier to complete.
  • Conduct four carefully spaced mailings: the questionnaire and a cover letter for the original mailing; a postcard follow-up one week after the original mailing; a replacement questionnaire and cover letter, indicating that the respondent’s questionnaire has not yet been received, four weeks after the original mailing; and a second replacement questionnaire and cover letter sent to non-respondents by certified mail seven weeks after the original mailing (see the scheduling sketch after this list).
  • Include an individually printed, addressed, and signed letter.
  • Print the address on the envelopes rather than use address labels.
  • Use smaller stationery.
  • Let the cover letter focus on the importance of the study and the respondent’s reply.
  • Explain that an ID number is used and the respondent’s confidentiality is protected.
  • Fold the materials in a way that differs from an advertisement.

Although some research studies (for example, Jansen, 1985) question the effect of some parts of the TDM procedure, such as photo reduction, there is evidence that when TDM is used, the response rate typically reaches 50 to 70 percent for surveys of the general public, and 60 to 80 percent for more homogeneous groups where low education is not a characteristic of the population (Dillman, 1978, 1983). It should be noted, however, that TDM is a one-size-fits-all method, and different survey situations may require quite different procedures. For example, some surveys may require personal delivery, some may entail completion of diaries on certain days of certain weeks, and others may require surveying the same individuals year after year.

Survey researchers have realized that mixed-mode surveys, in which some respondents are surveyed by mail questionnaire, some by electronic mail, some by telephone, and others by face-to-face interview, can help increase the response rate over that of a typical mail survey. For example, in the large-scale pilot of the National Survey of College Graduates conducted by the Census Bureau for the National Science Foundation, researchers first attempted to collect data by mailing questionnaires; they then located telephone numbers and called those who either had not responded or whose mailing addresses were no longer current; and finally, they established personal contact and tried to interview the remaining non-respondents.
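
The sequential logic of such a design can be sketched as a simple escalation loop. The mode order follows the Census Bureau example above; the response probabilities and function names below are hypothetical, for illustration only.

    import random

    # Contact modes in order of increasing cost: mail first, then telephone,
    # then face-to-face interview (per the Census Bureau example).
    MODES = ["mail", "telephone", "face-to-face"]

    # Hypothetical per-mode response probabilities, for illustration only.
    RESPONSE_RATES = {"mail": 0.55, "telephone": 0.35, "face-to-face": 0.80}

    def mixed_mode_collection(sample):
        """Escalate each remaining non-respondent to the next, costlier mode."""
        completed = {}                      # person -> mode that finally succeeded
        remaining = list(sample)
        for mode in MODES:
            still_out = []
            for person in remaining:
                if random.random() < RESPONSE_RATES[mode]:
                    completed[person] = mode
                else:
                    still_out.append(person)
            remaining = still_out
        return completed

    print(mixed_mode_collection([f"respondent_{i}" for i in range(10)]))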

To adapt the original TDM to different survey situations, including those involving the Internet, Dillman developed a new method, called the Tailored Design Method (1999), in which base elements are shaped further for particular populations, sponsorship, and content.

Measurement Error

Unlike sampling error, non-coverage error, and non-response error, which arise from non-observation or non-participation, measurement error results from the responses themselves. It occurs when respondents fill out surveys but do not respond to specific questions, provide inadequate answers to open-ended questions, or fail to follow instructions telling them to skip certain sections depending on their answers to previous questions. Measurement error also arises from lack of control over the sequence in which questions are answered, and from various respondent characteristics. These problems tend to be balanced by two advantages of mail surveys: the absence of an interviewer lessens the likelihood both of respondents feeling driven to provide socially desirable responses and of interviewers accidentally or purposefully subverting the purpose of the survey (Dillman, 1978).
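
Some of these respondent errors can at least be detected once questionnaires are returned. The following minimal Python sketch flags answers given in violation of a skip instruction; the skip rule and question identifiers are invented for illustration.

    # Hypothetical skip rule: if Q5 is answered "no", Q6-Q8 should be left blank.
    SKIP_RULES = {"Q5": ("no", ["Q6", "Q7", "Q8"])}

    def skip_violations(response):
        """Return the questions answered despite an applicable skip instruction."""
        violations = []
        for trigger, (value, skipped) in SKIP_RULES.items():
            if response.get(trigger) == value:
                violations.extend(q for q in skipped if response.get(q))
        return violations

    print(skip_violations({"Q5": "no", "Q6": "often", "Q7": ""}))  # ['Q6']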

Mixed-mode surveys introduce new considerations related to measurement error. The issue becomes not only how accurate the data obtained are, but also whether the answers are the same as those obtained by telephone, Internet, and face-to-face interview surveys. There is evidence of differences between responses to certain questions asked by mail and those asked by telephone or face-to-face interview:

  • Order effects were less likely to occur in mail surveys than in telephone surveys (Bishop et al., 1988). In other words, which questions are asked first appears to influence respondents more in telephone surveys than in mail surveys.
  • Telephone and face-to-face respondents tend to select more extreme answers than mail respondents when vaguely quantified scale categories are used. Mail respondents tend to distribute themselves across the full scale (Hochstim, 1967; Mangione et al., 1982; Walker & Restuccia 1984).
  • Mail surveys are more reliable than telephone and face-to-face interview surveys (DeLeeuw, 1992).

Potential explanations for these differences have been suggested, but each one can explain only part of the differences across survey methods. Most of the comparative studies have been empirically focused and have made only very limited attempts to provide theoretical explanations for the differences. More studies are needed to develop a theory of response effect.

Summary

In recent decades, tremendous progress has been made in improving survey response rates, and measurement error issues have been identified. The increasing interest in mixed-mode surveys will likely bring more focused attention to measurement error. Reducing measurement error will be an important advance for this method of social science research.

References

Armstrong, J.S., & Lusk, E.J. (1987). Return postage in mail surveys: A meta-analysis. Public Opinion Quarterly, 51, 233-248.

Bishop, G.F., Hippler, H.-J., Schwarz, N., & Strack, F. (1988). A comparison of response effects in self-administered and telephone surveys. In Groves, R.M., et al. (Eds.), Telephone Survey Methodology. New York: Wiley & Sons.

Church, A.H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57(1), 62-79.

DeLeeuw, E. (1992). Data Quality in Mail, Telephone and Face to Face Surveys. Amsterdam: TT-Publikaties.

Dillman, D.A. (1978). Mail and Telephone Surveys: The Total Design Method. New York: Wiley & Sons.

Dillman, D.A. (1983). Mail and other self-administered questionnaires. In Rossi, P.H., Wright, J.D., & Anderson, A.B. (Eds.), Handbook of Survey Research (pp. 359-377). New York: Academic Press.

Dillman, D.A. (1991). The design and administration of a mail survey. Annual Review of Sociology, 17, 225-249.

Dillman, D.A. (1999). Mail and Internet Surveys: The Tailored Design Method (2nd ed.). New York: Wiley & Sons.

Eichner, K., & Habermehl, W. (1981). Predicting response rates to mailed questionnaires. American Sociological Review, 46, 361-363.

Fox, R.J., Crask, M.R., & Kim, J. (1988). Mail survey response rate: a meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52, 467-491.

Goyder, J. (1982). Further evidence on factors affecting response rates to mailed questionnaires. American Sociological Review, 47, 550-553.

Groves, R. (1989). Survey Errors and Survey Costs. New York: John Wiley & Sons.

Heberlein, T.A., & Baumgartner, R. (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43, 447-462.

Hochstim, J.R. (1967). A critical comparison of three strategies of collecting data from households. Journal of the American Statistical Association, 62, 976-987.

Jansen, J. H. (1985). Effect of questionnaire layout and size and issue-involvement on response rates in mail surveys. Perceptual and Motor Skills, 61, 139-142.

Mangione, T.W., Hingson, R.W., & Barrett, J. (1982). Collecting sensitive data: A comparison of three survey strategies. Sociological Methods and Research, 10(3), 337-346.

Salant, P., & Dillman, D.A. (1994). How To Conduct Your Own Survey. New York: John Wiley & Sons.

Walker, A.H., & Restuccia, J.D. (1984). Obtaining information on patient satisfaction with hospital care: mail versus telephone. Health Services Research, 19, 291-306.

 

Descriptors: Mail Surveys; Research Design; Research Methodology; Response Rates; Questionnaires; Responses; Survey Research