Friday 17th July, 13:00 - 14:30 Room: HT-105
Practical solutions for mixed mode survey users and producers
Coordinator 1: Mrs Caroline Roberts (Institut des sciences sociales, University of Lausanne)
Coordinator 2: Mrs Michèle Ernst Stähli (FORS - Swiss Centre of Expertise in the Social Sciences)
Mixed mode surveys have been gaining popularity over the course of the past decade. Many academically led and government-funded studies have been exploring such survey designs, while survey organisations in some countries now routinely offer clients mixed mode survey designs as a way to improve population coverage and reduce survey costs. In response to these developments, the methodological literature exploring the advantages and disadvantages of mixed mode surveys has burgeoned, with a growing number of studies tackling the thorny issue of how to disentangle so-called ‘mode effects’ (differential measurement errors between modes) from selection effects. This research has highlighted the considerable analytic burden mixed mode data place on methodologists interested in measuring and potentially correcting for confounded survey errors, as well as on substantive researchers who analyse mixed mode data. Yet there is still a relative lack of guidance available for designers and users of mixed mode data about whether mode effects matter enough to preclude the use of such data collection designs, or to warrant the use of potentially cumbersome analytic methods to control for the potential impact of measurement differences on substantive research conclusions.
How should data providers and data users handle mixed mode data? What procedures should analysts undertake when they start to use the data? What thresholds should we set to decide whether measurement differences between modes are important enough to warrant special measures at the analysis stage? And what preventive measures should be taken to avoid misuse of mixed mode data?
For this session, we are particularly interested in contributions that consider, in a pragmatic way, the challenges of using mixed mode data, and offer practical solutions, either for survey designers deciding whether to mix modes, or for data users approaching their analyses.
Paper Details

1. Current challenges and open questions in the field of mixed mode survey methodology
Professor Caroline Roberts
(University of Lausanne)
Dr Michèle Ernst Stähli (FORS - Swiss Centre of Expertise in the Social Sciences)
Mixed mode survey methodology has emerged as a distinct field of research in response to changes in data collection technology that have increased the range of methods available for gathering survey data. The field has been marked by changing research interests over time, which we review in this introduction to the session. The aim is to highlight the key challenges that still need to be tackled, and to introduce the contributed papers to the session.
2. The impact of using the web in a mixed mode follow-up of a longitudinal birth cohort study: Evidence from the National Child Development Study
Mr Matt Brown
(Centre for Longitudinal Studies - UCL Institute of Education)
Mr Joel Williams (TNS-BMRB)
Professor Alissa Goodman (Centre for Longitudinal Studies - UCL Institute of Education)
The NCDS Age 55 survey used a mixed-mode design involving web and telephone, a first for the UK birth cohort studies. Reducing costs was the main motivation, but there were three key concerns: 1) Could a high web take-up rate be obtained? 2) Could the high response rates achieved in prior waves be maintained? 3) Would mode effects result in measurement differences? A control group, comprising a random sub-sample, was allocated to a telephone-only design. This paper contrasts the mixed mode and telephone-only designs to evaluate the impact of the mixed-mode approach.
3. Using the measurement model to correct for mode effects: the equivalence testing approach
Mr Alexandru Cernat
(Institute for Social and Economic Research, University of Essex)
To evaluate the utility of mixed mode designs we must separate the selection and measurement effects of the modes. The typical approach in the literature is the back-door method to control for selection. This method, however, rests on some strong assumptions, such as no selection on unobserved variables, that are not always plausible. Here I propose applying the front-door method using equivalence testing in latent measurement models. Simulations will show whether this approach works and how violations of the exhaustiveness and isolation assumptions can bias estimates.
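The confounding problem this abstract addresses can be illustrated with a toy simulation (a minimal sketch with invented numbers, not the author's models or data): when an unobserved factor drives both mode choice and the trait being measured, a naive between-mode comparison overstates the measurement effect, and no back-door adjustment on observed covariates can repair it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Unobserved factor that drives both the trait and the choice of mode.
u = rng.normal(0, 1, n)
trait = 0.8 * u + rng.normal(0, 0.6, n)          # latent true score

# Selection: people with higher u are more likely to respond by web.
web = rng.random(n) < 1 / (1 + np.exp(-u))

# Measurement: responding by web shifts reports by a +0.3 mode effect.
y = trait + np.where(web, 0.3, 0.0) + rng.normal(0, 0.5, n)

# A naive web-vs-other comparison mixes selection with measurement;
# back-door control on observed covariates cannot fix it here, because
# selection runs through u, which the analyst never observes.
naive_gap = y[web].mean() - y[~web].mean()

# With the true score in hand (never available in practice), the pure
# measurement effect is recovered.
oracle_gap = (y - trait)[web].mean() - (y - trait)[~web].mean()
```

In this sketch `naive_gap` lands well above the built-in 0.3 mode effect, while `oracle_gap` recovers it, which is why approaches that model measurement directly, such as the equivalence testing proposed here, are attractive.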
4. Estimating survey errors of mixed-mode designs using survey-based benchmarks
Dr Thomas Klausch
(Utrecht University / Statistics Netherlands)
Dr Barry Schouten (Utrecht University / Statistics Netherlands)
Professor Joop Hox (Utrecht University)
We evaluated total, measurement, and selection bias on thirty variables and three sequential mixed-mode designs of the Crime Victimization Survey: telephone, mail, and web, where nonrespondents were followed up face-to-face. In the absence of true scores, all biases were estimated against two different types of survey-based benchmarks. For the ‘single-mode benchmark’, biases were evaluated against a face-to-face reference survey assuming both measurements and selection mechanism of this mode are optimal. Additionally, a ‘hybrid-mode benchmark’ was used, where biases were evaluated against a mix of the measurements of a web survey.
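As a rough illustration of this kind of benchmark-based decomposition (a hedged sketch with invented numbers, not the authors' Crime Victimization Survey estimates): if benchmark-mode measurements were available for the mixed-mode respondents, total bias would split exactly into a measurement component and a selection component.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical population true scores.
true = rng.normal(50, 10, n)

# Benchmark survey (e.g. face-to-face): nonresponse unrelated to the
# trait, measurements assumed unbiased on average.
bench_resp = rng.random(n) < 0.6
y_bench = true + rng.normal(0, 2, n)

# Mixed-mode survey: selection related to the trait, plus a +1.5
# measurement shift representing a mode effect.
mix_resp = rng.random(n) < 1 / (1 + np.exp(-(true - 50) / 10))
y_mix = true + 1.5 + rng.normal(0, 2, n)

# Total bias of the mixed-mode estimate relative to the benchmark.
total_bias = y_mix[mix_resp].mean() - y_bench[bench_resp].mean()

# With benchmark-mode measurements for the mixed-mode respondents,
# the decomposition is exact: total = measurement + selection.
meas_bias = (y_mix[mix_resp] - y_bench[mix_resp]).mean()
sel_bias = y_bench[mix_resp].mean() - y_bench[bench_resp].mean()
```

The identity `total_bias == meas_bias + sel_bias` holds by construction; the practical difficulty the abstract tackles is that such benchmark-mode measurements must be approximated, here via a reference survey or a hybrid of modes.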