Wednesday 17th July 2013, 09:00 - 10:30, Room: No. 1

Mode Effects in Mixed-Mode Surveys: Prevention, Diagnostics, and Adjustment 4

Convenor Professor Edith De Leeuw (Utrecht University)
Coordinator 1 Professor Don Dillman (Washington State University)
Coordinator 2 Dr Barry Schouten (Statistics Netherlands)

Session Details

Mixed-mode surveys have become a necessity in many fields. Growing nonresponse in all survey modes is forcing researchers to use a combination of methods to reach an acceptable response. Coverage issues both in Internet and telephone surveys make it necessary to adopt a mixed-mode approach. Furthermore, in international and cross-cultural surveys, differential coverage patterns and survey traditions across countries make a mixed-mode design inevitable.

From a total survey error perspective, a mixed-mode design is attractive, as it offers reduced coverage error and nonresponse error at affordable cost. However, measurement error may increase when more than one mode is used. This can be caused by mode-inherent effects (e.g., the absence or presence of interviewers) or by question-format effects, as different questionnaires are often used for different modes.

In the literature, two kinds of approaches can be distinguished, aimed at either reducing mode effects in the design of the study or adjusting for mode effects in the analysis phase. Both approaches are important and should complement each other. The aim is to bring researchers from both approaches together to exchange ideas and results.

This session invites presentations that investigate how different sources of survey error interact and combine in mixed-mode surveys. We particularly invite presentations that discuss how different survey errors can be reduced (prevented) or adjusted for (corrected). We encourage empirical studies based on mixed-mode experiments or pilots. We especially encourage papers that attempt to generalize results into overall recommendations and methods for mixed-mode surveys.



Note: Depending on the number of high-quality paper proposals, we could organize one or more sessions.
Note 2: We have four organizers, which does not fit the form. The fourth is Professor Joop Hox (Utrecht University), j.hox@uu.nl.


Paper Details

1. Evaluating Mode Effects in Mixed-Mode Data through the Back-Door and Front-Door

Mr Jorre Vannieuwenhuyze (KU Leuven)

An inconvenient feature of mixed-mode survey data is the confounding of selection and measurement effects between the modes; this confounding precludes evaluation of data quality as well as unbiased estimation of target statistics. Solutions to this confounding problem have already been reported in several mixed-mode studies. Most of these studies start from the back-door method, which includes covariates explaining the selection effects. Unfortunately, these covariates must meet strong assumptions which are generally ignored. In this presentation, I will discuss these assumptions in greater detail and also provide an alternative method for solving the confounding problem: the front-door method, which includes covariates explaining the measurement effects instead of the selection effects. The application of both the back-door and front-door methods is illustrated with real data from a survey on opinions about surveys. This example yields mode effects in line with expectations when the front-door method is used, and mode effects against expectations when the back-door method is used. However, the validity of these results depends entirely on the (ad hoc) choice of covariates. Research into better back-door and front-door covariates might thus be a topic for future studies.
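To make the back-door idea concrete (this is an illustrative sketch, not the author's implementation): if a covariate X is assumed to fully explain mode selection, the mode means can be compared after standardizing over X, i.e., weighting each within-stratum mode mean by the overall stratum distribution. The function and variable names below are hypothetical, and the sketch assumes every mode is observed in every stratum (common support).

```python
from collections import defaultdict

def backdoor_adjusted_means(records):
    """Back-door adjustment sketch: compare mode means after
    standardizing over a covariate X assumed to explain mode selection.
    records: iterable of (mode, x, y) tuples.
    Assumes each mode is observed in every stratum of X."""
    cell = defaultdict(lambda: [0.0, 0])   # (mode, x) -> [sum(y), count]
    strata = defaultdict(int)              # x -> stratum size over all modes
    n = 0
    for mode, x, y in records:
        cell[(mode, x)][0] += y
        cell[(mode, x)][1] += 1
        strata[x] += 1
        n += 1
    adjusted = {}
    for mode in {m for m, _, _ in records}:
        total = 0.0
        for x, n_x in strata.items():
            s, c = cell[(mode, x)]
            total += (n_x / n) * (s / c)   # stratum-weighted mode mean
        adjusted[mode] = total
    return adjusted
```

In this toy setup, two modes with identical within-stratum answers but different selection into strata yield different raw means; the adjusted means coincide, so any remaining difference would be attributed to measurement.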


2. Two New Methods to Disentangle Measurement and Selection Effects in Mixed-Mode Surveys

Mr Thomas Klausch (Utrecht University)
Professor Joop Hox (Utrecht University)
Dr Barry Schouten (Utrecht University / Statistics Netherlands)

Measurement effects (ME) are a major problem in mixed-mode surveys: the same respondent may provide different answers to questions posed under different modes. Estimating ME suffers from two complications, however. First, the analyst normally observes only one answer, provided under a given mode, whereas the potential answer under a different mode is unknown. Second, assignment to mode is often nonrandom, for example due to self-selection, which is called a selection effect (SE). Whereas available adjustment techniques are effective in theory, survey practice shows that the accessible auxiliary information is normally insufficient to plausibly ignore the assignment mechanism.

We present two methods to estimate ME and SE based on weaker assumptions. Our methods require a repeated-measures data collection design that enables substitution of two normally unobserved quantities. In the design, subjects are randomized to modes (time point one) and are re-approached by at least one of the modes in a follow-up survey (time point two). The first estimation technique then exploits the answers provided by respondents under two different modes as substitutes for the potential answers that are normally unknown. The second technique exploits only the information about the selection mechanism of the follow-up mode for persons allocated to a different mode at time point one, but not their potential answers. This is useful if the follow-up survey cannot repeat all questions. Both techniques are demonstrated using real-world data, with an emphasis on exemplification and testing of the underlying assumptions.
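The first technique can be caricatured as follows (a minimal sketch under strong simplifying assumptions, not the authors' estimator): if the same respondents answer the same item under both modes, the repeated answer substitutes for the unobserved potential answer, and the ME is the mean within-person difference. The sketch ignores time-in-panel and ordering effects, which a real analysis of such a design must address.

```python
def measurement_effect(pairs):
    """Estimate the measurement effect between two modes from a
    repeated-measures design. Each element of `pairs` is
    (y_mode_a, y_mode_b): one respondent's answers to the same item
    under mode A (time point one) and mode B (follow-up).
    Returns the mean within-person difference (mode A minus mode B).
    Simplifying assumption: no time or panel-conditioning effects."""
    diffs = [a - b for a, b in pairs]
    return sum(diffs) / len(diffs)
```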



3. Investigating Mode Effects and Possible Adjustments in Mixed-Mode Surveys

Dr Annamaria Bianchi (University of Bergamo)
Professor Silvia Biffignandi (University of Bergamo)

Nowadays, mixed-mode surveys are widely used to address various problems, such as nonresponse, survey costs, coverage problems, and measurement error. However, the impact of a mixed-mode design on final estimates has not yet been studied extensively and still requires insights and experiments.

In this paper, we discuss this issue with reference to two panels built within the PAADEL project (sponsored by the Lombardy Region in Italy and managed by the University of Bergamo). The issue is tackled with reference both to the recruitment and to the first wave of data collection. At the recruitment level, a concurrent mail-telephone mode was used. At the survey level, respondents could choose between mail, telephone, and web. A unimode design was used for the questionnaire.

First, we analyze participation across modes and study whether and how the mode of data collection affects the final estimates. Second, we discuss if and how it is possible to construct nonresponse weights that take the mode of data collection into account. We critically evaluate the application of alternative methods, such as different types of calibration and the inclusion of different weighting variables, possibly related to the mode of participation. We investigate whether the resulting weights are able to correct for the presence of mode effects.
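The simplest member of the family of weighting methods discussed here is a weighting-class nonresponse adjustment, sketched below purely for illustration (the paper evaluates richer calibration variants). Each respondent in class c receives weight N_c / n_c, where N_c is the known population count and n_c the respondent count; classes could be defined with or without the mode of participation. All names are hypothetical.

```python
from collections import Counter

def weighting_class_weights(sample_classes, population_counts):
    """Weighting-class nonresponse adjustment sketch.
    sample_classes: list with one class label per respondent.
    population_counts: dict mapping class -> known population count.
    Each respondent in class c gets weight N_c / n_c, so the weights
    sum to the population total over the observed classes."""
    n = Counter(sample_classes)
    return [population_counts[c] / n[c] for c in sample_classes]
```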


4. Calibrating measurement errors in mixed-mode sample surveys

Dr Bart Buelens (Statistics Netherlands)
Professor Jan Van Den Brakel (Statistics Netherlands and Maastricht University)

In the Dutch crime and victimization survey, a mix of data collection modes is employed in a sequential fashion: when no response is obtained in one mode, a different mode is used for the re-approach. Year-to-year instabilities in the general regression (GREG) estimates of this survey were observed. Analysis suggested that these were at least partly due to changes in the response-mode composition between subsequent editions of the survey, indicating the presence of mode effects. Selection effects cause the subpopulations reached by the various modes to differ. These effects are confounded with mode-dependent measurement errors. The latter cause the instabilities, as weighting to reduce the selectivity of the response can be assumed to correct for selection effects - an assumption which was verified in a large-scale experiment. A practical approach to reduce the problem of temporal instabilities is presented. The method consists of extending the model underlying the GREG estimator with a component that specifies the distribution of the population over the data collection modes as an independent variable. An arbitrary but fixed composition of the response is used as the calibration benchmark. The resulting estimator is shown to render measurement errors constant over the various editions of the survey, allowing for unbiased estimation of change over time in the survey variables. The method can also be applied to cross-sectional surveys, in situations where the mode composition differs between domains. Results, benefits, and drawbacks of this solution are discussed.
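The core of the approach can be sketched in a one-variable form (an illustrative simplification, not the estimator in the paper): rescale the weights within each response mode so that the weighted mode shares match a fixed benchmark composition. Holding the mode composition fixed across editions keeps the mode-dependent measurement-error mixture constant, so estimates of change are not driven by shifts in that mixture. Names and the benchmark values are hypothetical.

```python
def calibrate_to_mode_benchmark(weights, modes, benchmark):
    """Rescale survey weights so that the weighted mode shares match
    a fixed benchmark composition (a one-variable caricature of
    extending the GREG model with the mode distribution).
    weights: base weights; modes: response mode per unit;
    benchmark: dict mode -> target share (shares sum to 1)."""
    total = sum(weights)
    share = {}
    for w, m in zip(weights, modes):
        share[m] = share.get(m, 0.0) + w
    # one multiplicative factor per mode moves its share to the target
    factor = {m: benchmark[m] * total / share[m] for m in share}
    return [w * factor[m] for w, m in zip(weights, modes)]
```

After calibration, the weighted share of each mode equals the benchmark regardless of how the realized response composition drifted between editions, which is what stabilizes the measurement-error mix over time.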