Thursday 18th July 2013, 16:00 - 17:30, Room: No. 1

Is it worth mixing modes? New evidence on costs and survey error in mixed-mode surveys 2

Convenor Dr Ana Villar (City University London)
Coordinator 1 Professor Peter Lynn (University of Essex)

Session Details

Survey designers face a continuous tension between minimizing survey error and keeping costs as low as possible (Groves, 1989). One of the strategies pursued to reduce costs is the use of mixed modes of data collection: cheaper modes are used early in the process, while more effective but more expensive modes are reserved for later stages to increase response rates and coverage.

A considerable amount of research has tried to assess the impact of using mixed modes of data collection on data quality in terms of response error and measurement error. This body of research typically compares response distributions and response rates across modes but fails to report the effect of the mode design on actual overall costs and on timeliness.

However, mixed-mode survey implementations may not be as efficient as first thought. Even with current technological tools, the costs associated with producing equivalent questionnaires across modes, equivalent contact forms, equivalent data protocols, and other fieldwork documents might be an underestimated burden. Furthermore, findings about the effects on response rates and measurement are far from conclusive, and the field is in need of new evidence linking total survey error and survey costs.

In this session we invite studies that address challenges and lessons learned from the implementation of mixed-mode designs, with an emphasis on the link between survey error and costs. Papers submitted for this session will ideally include evidence of the effects of using mixed modes on:
- costs, time, and other resources;
- coverage error;
- response rates and/or response bias;
- measurement error.


Paper Details

1. An Experimental Evaluation of How Mode Sequence for Offering Internet, Mail and Telephone Options Affects Responses to a National Survey of College Graduates

Mr John Finamore (National Science Foundation)
Dr Don Dillman (Washington State University)

One of the reasons sometimes given for mixing survey modes is to improve response rates. However, achieving that goal must be weighed against measurement and cost consequences. Our purpose in this paper is to report results from a large-scale experiment in which three modes of data collection - Internet, mail and telephone - were offered to a national sample of U.S. college graduates. The four treatments were: 1) web first (n=5,000), 2) mail first (n=5,000), 3) telephone first (n=3,500), and 4) choice of mode (n=47,375). The mode alternatives not offered initially to a particular treatment group were offered later in the 42-week data collection process. The experimental design thus makes it possible to analyze the effect of each mode on initial response, as well as how all three modes individually contribute to the final response rates. The large sample sizes also allow us to compare the demographic composition of respondents across treatment groups. In addition, we are able to evaluate the costs associated with different sequences of mode use.

Results of this experiment show that the choice of initial mode had minimal effect on final response rates: all four treatment groups achieved similar overall response rates of about 75%. However, the web-first design encouraged a much greater proportion of respondents to respond by web, at considerably lower cost than the other designs. The demographic composition of respondents was also quite similar across treatment groups. Finally, quality metrics will be presented.



2. Does Internet Use Improve Surveys? Studies of Costs, Response Rates and Coverage

Dr Virginia Lesser (Oregon State University)
Ms Lydia Newton (Oregon State University)
Dr Daniel Yang (Bureau of Labor Statistics)

Survey response rates have been decreasing for both mail and telephone contact modes over the past 20 years. As response rates have declined, methodologists have incorporated new approaches such as the Internet to collect survey data. Some research exists on the strengths and weaknesses of incorporating the Internet into surveys, but further work is needed to investigate the cost-effectiveness of this survey mode. A number of studies have been conducted in Oregon over the past 6 years to explore the costs and benefits of telephone, mail, and mixed-mode surveys. The mixed modes in these studies combined mail and the Internet to collect survey data. One mixed-mode approach gave participants the option to complete the survey by either mail or the Internet. Another asked participants to complete the survey on the Internet first, and followed up by mailing printed questionnaires to the nonrespondents. We will examine response rates and coverage error across the mixed-mode and single mail-mode groups, and will compare the demographics of the completed sample in the mixed-mode and single-mode groups to the population demographics. The Internet is attractive because there are no postage or printing charges and data entry time is minimized; however, other costs are introduced, such as the labor associated with Web programming. We will therefore also compare the cost per completed survey across the survey modes.
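
As a rough illustration of the cost comparison described above, the sketch below computes cost per completed survey from one-off and per-unit cost components. It is a minimal sketch in Python; all figures, mode labels, and field names are invented for illustration and are not results from the Oregon studies.

# Hypothetical cost-per-completed-survey comparison. All numbers are
# invented placeholders, not findings from the studies described above.
modes = {
    "mail only":                 {"fixed": 2000, "per_unit": 6.50, "sampled": 1500, "completes": 600},
    "choice of mail or web":     {"fixed": 5000, "per_unit": 5.00, "sampled": 1500, "completes": 630},
    "web first, mail follow-up": {"fixed": 5000, "per_unit": 3.75, "sampled": 1500, "completes": 615},
}

for name, m in modes.items():
    # fixed: one-off costs such as Web programming and questionnaire layout;
    # per_unit: postage, printing, and data entry per sampled address
    total_cost = m["fixed"] + m["per_unit"] * m["sampled"]
    print(f"{name}: response rate {m['completes'] / m['sampled']:.0%}, "
          f"cost per complete ${total_cost / m['completes']:.2f}")

The point of the comparison is that a web component trades lower per-unit costs against higher fixed costs, so its advantage in cost per complete depends on the sample size and on how many respondents actually use the cheaper mode.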


3. Mixed-mode and the European Social Survey (ESS): evidence from the UK

Ms Alison Park (NatCen Social Research)
Mr Alun Humphrey (NatCen Social Research)
Ms Maya Agur (NatCen Social Research)

This paper will report on the results of a sequential mixed-mode experiment that took place in the UK alongside the main round of the ESS in 2012. The study aims to test the feasibility of collecting data for a long interview using a web-first, face-to-face follow-up design in a country where the only credible general population sampling frame is an address-based list.

The experiment involved sampling 2,000 addresses from the Postcode Address File and contacting them by letter. The letter invited potential respondents to complete the survey online. After a reminder process, a proportion of non-respondents were followed up by visits from an interviewer who attempted a face-to-face interview. Different incentive values and approaches to respondent selection were also tested.

The paper will explore the following questions:

1. How does the total response rate (online and face-to-face) compare to the response rate achieved in equivalent areas on the main ESS survey?
2. What impact did different incentive values have upon online response? What impact did the initial web phase have on response to the subsequent face-to-face interview?
3. What impact did the sequential mixed-mode design have on survey cost?
4. How did people complete a long interview online? How long did the interview take, did they tend to answer questions in one 'sitting', and what proportion dropped out?
5. What is the best approach to carrying out random respondent selection when no interviewer is present?
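
The last question, on respondent selection without an interviewer, lends itself to a small illustration. The Python sketch below, with invented household data, shows two selection rules commonly discussed for self-administered surveys (next-birthday and seeded random selection); these are assumptions for illustration, not necessarily the approaches tested in this experiment.

import random

def next_birthday_selection(members):
    # members: list of (name, days_until_next_birthday) for adults in the
    # household; select the person whose birthday comes soonest.
    return min(members, key=lambda m: m[1])[0]

def seeded_random_selection(members, serial):
    # Uniform random selection seeded from the sample serial number, so the
    # outcome is reproducible and cannot be re-rolled by the household.
    rng = random.Random(serial)
    return rng.choice(members)[0]

household = [("Alice", 40), ("Bob", 120), ("Carol", 5)]
print(next_birthday_selection(household))              # Carol
print(seeded_random_selection(household, "SER-0042"))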



4. Survey errors and costs: a comparison of mixed-mode and face-to-face ESS surveys in Estonia

Dr Mare Ainsaar (Tartu University)
Kaur Lumiste
Laur Lilleoja
Ave Roots

In 2012, the European Social Survey carried out a mixed-mode experiment with the aim of investigating whether a mixed-mode data collection process can achieve better coverage of the population and lower costs than the traditional face-to-face-only methodology. Estonia is one of the European countries with the highest Internet access and usage: 70% of Estonians have Internet access at home, and the proportion of Internet users is highest among those aged 20-40. Young respondents are the main source of non-contacts and refusals in Estonian surveys, and it was expected that they would be more likely to participate in a web survey, raising the overall response rate. A sequential mixed-mode design was implemented with 927 sample units. All participants were first invited by mail to complete the survey online, and nonrespondents were later contacted for a face-to-face interview. The timing, set-up, wording of the questionnaires, fieldwork agency, interviewers, and all other aspects of the mixed-mode survey were kept very similar to the main face-to-face ESS survey. In total, 355 respondents completed the survey online and 223 questionnaires were completed face-to-face.
This paper analyzes mixed-mode survey response rates, non-response structures, and the quality of repeated ESS questions in comparison with the main ESS data. Respondents interviewed face-to-face in the mixed-mode survey were asked an additional question about their reasons for not responding to the online invitation, and these answers will also be analyzed. The quality of the mixed-mode survey outcome will be compared to the main face-to-face ESS survey in terms of costs, timing, and other challenges.


5. Assessing data quality with response time measurements in the Estonian mixed-mode survey experiment

Mr Kaur Lumiste (University of Tartu)

In autumn 2012, the European Social Survey conducted a mixed-mode survey experiment under the supervision of the Centre for Comparative Social Surveys, UK. Estonia was one of the participating countries, alongside the UK, Sweden, and Slovenia. In this experiment respondents were asked to answer the ESS questionnaire online; if there was no reaction to the postal invitation and the two reminders, the respondent was referred to the CAPI (face-to-face) mode.
We aim to measure quality with respect to time variables and to answer a number of questions. Did response times differ between the two modes, and did this affect data quality? Was there any random clicking in the web mode? Did response times differ between the beginning and the end of the questionnaire? Did break-offs in the web mode have an effect on data quality?
Response times were recorded for every question in the CAWI mode and for question blocks in the CAPI mode. If response times in the web mode are very short, the respondent may simply have chosen an answer at random. If such random clicking is found, was it more prevalent towards the end of the questionnaire? The web mode allows break-offs, and we assume that this helps to prevent random clicking, but it might have other unwanted effects.
We measure quality by the number of missing values, by differences in auxiliary variables available in the sampling frame, and by significant differences between the response distributions of the two modes that are not explainable by mode effects.
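
To make the response-time screening concrete, the Python sketch below flags web-mode respondents with implausibly short per-question times and compares speeds in the first and second half of the questionnaire. The threshold, data layout, and respondent IDs are assumptions for illustration, not the procedure used in the Estonian experiment.

import statistics

# times[respondent_id] = per-question response times in seconds,
# in questionnaire order (invented example data)
times = {
    "r001": [12.4, 9.8, 1.1, 0.9, 1.0, 1.2],
    "r002": [8.5, 11.2, 10.4, 9.1, 12.0, 7.8],
}

MIN_PLAUSIBLE_SECONDS = 2.0  # assumed floor for reading and answering an item

for rid, t in times.items():
    share_fast = sum(1 for x in t if x < MIN_PLAUSIBLE_SECONDS) / len(t)
    half = len(t) // 2
    # A rising share of sub-threshold times in the second half would suggest
    # speeding or random clicking towards the end of the questionnaire.
    print(f"{rid}: {share_fast:.0%} of items under threshold; "
          f"median {statistics.median(t[:half]):.1f}s (first half) vs "
          f"{statistics.median(t[half:]):.1f}s (second half)")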